Abstract

We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing-based depth-mapping camera. Instead of raster scanning to obtain depth-maps, compressive sensing is used to significantly reduce the number of measurements. Ideally, our approach requires two difference detectors. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array based schemes while scanning a scene faster than is possible through raster-scanning. Moreover, by efficiently storing only 2m data points from m < n measurements of an n-pixel scene, we can easily extract depths by solving only two linear problems with efficient convex-optimization methods.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Depth-mapping, also known as range-imaging or 3D imaging, is defined as a non-contact system used to produce a 3D representation of an object or scene [1]. Depth-maps are typically constructed through either triangulation and projection methods, which rely on the angle between an illumination source and a camera offset to infer distance, or on time-of-flight (TOF) LiDAR (light detection and ranging) techniques. LiDAR systems are combined with imaging systems to produce 2D maps or images where the pixel values correspond to depths.

Triangulation and projection methods, such as with Microsoft’s Kinect [2], are inexpensive and can easily operate at video frame rates but are often confined to ranges of a few meters with sub-millimeter uncertainty. Unfortunately, the uncertainty increases to centimeters after roughly ten meters [1, 3]. Phase-based triangulation systems also suffer from phase ambiguities [4]. Alternatively, LiDAR-based methods can obtain better depth-precision with less uncertainty, to within micrometers [1, 5], or operate up to kilometer ranges. Thus, LiDAR-based systems are chosen for active remote sensing applications when ranging accuracy or long-range sensing is desired. Additionally, technological advances in detectors, lasers, and micro-electro-mechanical devices are making it easier to incorporate depth-mapping into multiple technologies such as automated guided vehicles [6], planetary exploration and docking [7–9], and unmanned aerial vehicle surveillance and mapping [10]. Unfortunately, the need for higher-resolution, accurate depth-maps can make such systems prohibitively expensive when designed with detector arrays or prohibitively slow when relying on raster scanning.

Here we show how a relatively inexpensive, yet robust, architecture can convert one of the most accurate LiDAR systems available [11] into an efficient high-resolution depth-mapping system. Specifically, we present an architecture that transforms a frequency-modulated continuous-wave LiDAR into a depth-mapping device that uses compressive sensing (CS) to obtain high-resolution images. CS is a measurement technique that compressively acquires information significantly faster than raster-based scanning systems, trading time limitations for computational resources [12–14]. With the speed of modern-day computers, the compressive measurement trade-off results in a system that is easily scalable to higher resolutions, requires fewer measurements than raster-based techniques, obtains higher signal-to-noise ratios than systems using detector arrays, and is potentially less expensive than other systems having the same depth accuracy, precision, and image resolution.

CS is already used in both pulsed and amplitude-modulated continuous-wave LiDAR systems (discussed in the next section). CS was combined with a pulsed time-of-flight (TOF) LiDAR using one single-photon-counting avalanche diode (SPAD), also known as a Geiger-mode APD, to yield 64 × 64 pixel depth-images of objects concealed behind a burlap camouflage [15,16]. CS was again combined with an SPAD-based TOF-LiDAR system to obtain 256 × 256 pixel depth-maps, while also demonstrating background subtraction and 32 × 32 pixel resolution compressive video [17]. Replacing the SPAD with a photodiode within a single-pixel pulsed LiDAR CS camera, as in [18], allowed for faster signal acquisition to obtain 128 × 128 pixel resolution depth-maps and real-time video at 64 × 64 pixel resolution. A similar method was executed with a coded aperture [19]. In addition to pulsed-based 3D compressive LiDAR imaging, continuous-wave amplitude-modulated LiDAR CS imaging [20–22], ghost-imaging LiDAR [23], and 3D reflectivity imaging [24] have been studied with similar results. The architecture we present here is, to our knowledge, the first time CS has been applied to a frequency-modulated continuous-wave LiDAR system.

Furthermore, we show how to efficiently store data from our compressive measurements and use a single total-variation minimization followed by two least-squares minimizations to efficiently reconstruct high-resolution depth-maps. We test our claims through computer simulations using debilitating levels of injected noise while still recovering depth maps. The accuracies of our reconstructions are then evaluated at varying levels of noise. Results show that the proposed method is robust against noise and laser-linewidth uncertainty.

2. Current LiDAR and depth-mapping technologies

There are many types of LiDAR systems used in depth-mapping, yet there are two general categories for obtaining TOF information: pulsed LiDAR and continuous wave (CW) LiDAR [25,26] (also known as discrete and full-waveform LiDAR, respectively [27,28]).

Pulsed LiDAR systems use fast electronics and waveform generators to emit and detect pulses of light reflected from targets. Timing electronics measure the pulse’s TOF directly to obtain distance. Depth resolution is dependent on the pulse width and timing resolution – with more expensive systems obtaining sub-millimeter precision. Additionally, the pulse irradiance power is significantly higher than the background which allows the system to operate outside and over long range. Unfortunately, the high-pulse power can be dangerous and necessitates operating at an eye-insensitive frequency, such as the far-infrared, which adds additional expense. Generally, APDs or SPADs detect returning low-flux radiation and require raster scanning to form a depth-map. More expensive SPAD-arrays can quickly acquire depth-maps at the expense of SNR, but are currently limited to resolutions of 512 × 128 [29].

Alternatively, CW-LiDAR systems continuously emit a signal to a target while keeping a reference signal, also known as a local oscillator. Because targets are continuously illuminated, they can operate with less power compared to the high peak-power of pulsed systems. CW-LiDAR systems either modulate the amplitude while keeping the frequency constant, as in amplitude-modulated continuous-wave (AMCW) LiDAR, or they modulate the frequency while keeping the amplitude constant, as in frequency-modulated continuous wave (FMCW) LiDAR.

AMCW-LiDAR systems typically require high-speed radio frequency (RF) electronics to modulate the laser’s intensity. However, low-speed electronics can be used to measure the return-signal after it has been demodulated. Examples of AMCW-LiDAR systems include phase-shift LiDAR, where the laser intensity is sinusoidally [11] or randomly modulated [30]. The TOF is obtained by convolving the demodulated local oscillator with the demodulated time-delayed return signal. Another popular AMCW system uses a linear-chirp of the laser’s intensity and electronic heterodyne detection to generate a beat note proportional to the TOF [31,32].

FMCW-LiDAR is the most precise LiDAR system available. The laser frequency is linearly chirped with a small fraction of the beam serving as a local oscillator. The modulation bandwidth is often larger than in linearly-chirped AMCW-LiDAR – yielding superior depth resolution. Yet, high-speed electronics are not required for signal detection. Instead, detection requires an optical heterodyne measurement, using a slow square-law detector, to demodulate the return signal and generate a beat-note frequency that can be recorded by slower electronics. Unfortunately, FMCW-LiDAR is limited by the coherence length of the laser [11].

Both AMCW and FMCW linearly-chirped LiDAR systems can obtain similar or better depth precision compared to pulsed systems while operating with slower electronics. This is because the heterodyne beat note can be engineered to reside within the bandwidth of slow electronics and can be measured more precisely. For this reason, there is much interest in using AMCW- and FMCW-LiDAR for depth-mapping. Our approach focuses on FMCW-LiDAR because the detection electronics are relatively easy to implement. Thus, a linearly-chirped FMCW-LiDAR that uses a high-bandwidth detector array for superior accuracy and a larger dynamic range is desirable. Yet, a high-bandwidth detector array may be prohibitively expensive, and raster scanning a high-dimensional scene with a readily available high-bandwidth detector is oftentimes too slow. Our solution to this problem is to combine a compressive camera [13] with a linearly-chirped FMCW-LiDAR system.

3. FMCW-LiDAR compressive depth mapping

3.1. Single-object FMCW-LiDAR

Within an FMCW-LiDAR system, a laser is linearly swept from frequency ν0 to νf over a period T. The electric field E takes the form

E(t) = A \exp\!\left(2\pi i\left[\nu_0 + \frac{\nu_f - \nu_0}{2T}t\right]t\right), \qquad (1)
where A is the field amplitude. It can be easily verified, by differentiating the phase in Eq. (1), that the instantaneous laser frequency is ν0 + [(νf − ν0)/T]t.

A small fraction of the laser’s power is split off to form a local oscillator (ELO) while the remainder is projected towards a target at an unknown distance d. The returning signal reflected off the target, with electric field ESig(tτ) and time delay τ, is combined with ELO and superimposed on a square-law detector. The resulting signal, as seen by an oscilloscope (PScope) and neglecting detector efficiency, is

P_{Scope}(t) = \frac{\epsilon_0 c}{2}\left[A_{Sig}^2 + A_{LO}^2 + 2A_{LO}A_{Sig}\sin\!\left(2\pi\frac{\Delta\nu\,\tau}{T}t + \phi\right)\right], \qquad (2)
where ϕ = 2π(ν0τ − Δντ²/(2T)) is a constant phase, ε0 is the permittivity of free space, c is the speed of light in vacuum, and Δν = νf − ν0. The alternating-current (AC) component within Eq. (2) oscillates at the beat-note frequency ν = Δντ/T, from which a distance-to-target can be calculated from
d = \frac{\nu T c}{2\Delta\nu}. \qquad (3)
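The mapping from beat note to depth in Eqs. (2) and (3) can be sketched in a few lines of Python. The sweep and sampling values below are illustrative (a 100 GHz sweep over 1 ms, sampled at 33.3 MHz), and the 10 m target depth is an assumption made only for this example.

```python
import numpy as np

# Simulate the AC beat note of Eq. (2) for a single target and recover its
# depth from the peak of the Fourier spectrum via Eq. (3).
delta_nu, T, c = 100e9, 1e-3, 3e8    # sweep bandwidth (Hz), sweep period (s), speed of light
fs, N_samp = 33.3e6, 33300           # sampling rate (Hz) and samples per sweep
t = np.arange(N_samp) / fs

d_true = 10.0                        # assumed target distance (m)
tau = 2 * d_true / c                 # round-trip delay (s)
beat = delta_nu * tau / T            # beat-note frequency
signal = np.sin(2 * np.pi * beat * t)  # AC term of Eq. (2) with unit amplitudes

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(N_samp, 1 / fs)
nu_est = freqs[np.argmax(spectrum)]
d_est = nu_est * T * c / (2 * delta_nu)   # Eq. (3)
print(f"beat note {nu_est / 1e6:.3f} MHz -> depth {d_est:.3f} m")
```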

3.2. Multiple-object FMCW-LiDAR

The detection scheme should be altered when using a broad illumination profile to detect multiple targets simultaneously. Just as beat notes exist between the local oscillator and signal, there now exist beat notes between reflected signals at varying depths that will only inject noise into the final readout. Balanced heterodyne detection is used to overcome this additional noise source. A 50/50 beam-splitter is used to first mix ELO and ESig. Both beamsplitter outputs, containing equal powers, are then individually detected and differenced with a difference-detector.

Mathematically, ELO is the same as before, but the return signal from j objects at different depths takes the form

\sum_j E_{Sig}(t-\tau_j) = \sum_j A_j \exp\!\left(2\pi i\left[\nu_0 + \frac{\Delta\nu}{2T}(t-\tau_j)\right](t-\tau_j)\right), \qquad (4)
where we have labeled each object with an identifier index j that introduces a time delay τj and amplitude Aj, depending on the reflectivity of the object. After mixing at the beamsplitter, the optical power from the difference detector, as seen by an oscilloscope (PScope), again neglecting detector efficiency, is
P_{Scope}(t) = \frac{\epsilon_0 c}{2}\left(\left|\frac{E_{LO}(t) + i\sum_j E_{Sig}(t-\tau_j)}{\sqrt{2}}\right|^2 - \left|\frac{iE_{LO}(t) + \sum_j E_{Sig}(t-\tau_j)}{\sqrt{2}}\right|^2\right) \qquad (5)
= \epsilon_0 c \sum_j A_{LO}A_j \sin\!\left(2\pi\frac{\Delta\nu\,\tau_j}{T}t + \phi_j\right), \qquad (6)
where ϕj = 2π(ν0τj − Δντj²/(2T)) is a constant phase. The sum-frequency signal components are filtered out by the slow-bandwidth detectors. The resulting balanced heterodyne signal in Eq. (6) is still amplified by the local oscillator and contains AC signals at only the frequencies νj = Δντj/T. Again, these frequencies can be converted to distances using Eq. (3).
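The cancellation behind Eqs. (5) and (6) is easy to verify numerically. The following Python sketch mixes a chirped local oscillator with the return from two hypothetical targets at a 50/50 beamsplitter and differences the two output powers; the DC terms and the signal-signal beat notes cancel, leaving only the local-oscillator beat notes. The sweep values and target depths are assumptions for this example.

```python
import numpy as np

delta_nu, T, c = 100e9, 1e-3, 3e8
fs, N_samp = 33.3e6, 33300
t = np.arange(N_samp) / fs

A_LO = 1.0
targets = [(4.5, 0.8), (15.0, 0.5)]      # (depth in m, relative amplitude)

def chirp_field(amplitude, delay):
    """Complex chirped field of Eq. (1) evaluated at t - delay.  The optical
    carrier nu_0 is omitted: it only shifts the constant phase phi_j."""
    ts = t - delay
    return amplitude * np.exp(2j * np.pi * (delta_nu / (2 * T)) * ts**2)

E_LO = chirp_field(A_LO, 0.0)
E_sig = sum(chirp_field(A, 2 * d / c) for d, A in targets)

# 50/50 beamsplitter outputs and the differenced signal of Eqs. (5)-(6):
# the DC terms and the signal-signal beats appear in both P1 and P2 and cancel.
P1 = np.abs((E_LO + 1j * E_sig) / np.sqrt(2)) ** 2
P2 = np.abs((1j * E_LO + E_sig) / np.sqrt(2)) ** 2
balanced = P1 - P2

freqs = np.fft.rfftfreq(N_samp, 1 / fs)
spectrum = np.abs(np.fft.rfft(balanced))
top_two = np.sort(freqs[np.argsort(spectrum)[-2:]])
print("beat notes (MHz):", top_two / 1e6)               # ~3.0 and ~10.0
print("depths (m):", top_two * T * c / (2 * delta_nu))  # ~4.5 and ~15.0, Eq. (3)
```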

3.3. Compressive imaging background

Compressive imaging is a technique that trades a measurement problem for a computational reconstruction within limited-resource systems. Perhaps the most well-known example of a limited-resource system is the Rice single-pixel camera [13]. Instead of raster scanning a single pixel to form an n-pixel resolution image x, i.e. x ∈ ℝn, an n-pixel digital micro-mirror device (DMD) takes m ≪ n random projections of the image. The set of all DMD patterns can be arranged into a sensing matrix A ∈ ℝm×n, and the measurement is modeled as a linear operation to form a measurement vector y ∈ ℝm such that y = Ax.

Once y has been obtained, we must reconstruct x within an undersampled system. As there are an infinite number of viable solutions for x within the problem y = Ax, CS requires additional information about the signal according to a previously-known function g(x). The function g(x) first transforms x into a sparse or approximately sparse representation, i.e. a representation with few, or approximately few, non-zero components. The function g(x) then uses the L1-norm to return a scalar.

Within image-reconstruction problems, g(x) is often the total-variation (TV) operator because it provides a sparse representation by finding an image’s edges. Anisotropic TV is defined as TV(x) = \|[\nabla_x^T, \nabla_y^T]^T x\|_1, where ∇ is a finite-difference operator that acts on an image’s Cartesian elements (x and y) and *^T is the transpose of *. Note that the L_p-norm of x is defined as \|x\|_p = (\sum_i |x_i|^p)^{1/p}. Thus, we solve the following TV-minimization problem:

\underset{\hat{x}\in\mathbb{R}^n}{\arg\min}\;\|A\hat{x} - y\|_2^2 + \alpha\,TV(\hat{x}), \qquad (7)
where the first term is a least-squares term that keeps the solution consistent with the measurements y, the second term promotes sparsity, and α is a weighting constant.
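As a concrete illustration of Eq. (7), the sketch below recovers a small synthetic image from m < n random ±1 projections. It replaces the exact anisotropic TV with a smoothed version so that SciPy's generic L-BFGS-B solver can be used (the augmented-Lagrangian solver actually used in this work is described in section 4.3); the image size, sample count, and weights are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

h = w = 16
n, m = h * w, 96                           # n pixels, m < n measurements
truth = np.zeros((h, w))
truth[4:9, 5:12] = 1.0                     # one bright rectangle (sparse gradients)
A = rng.choice([-1.0, 1.0], size=(m, n))   # random +/-1 sensing matrix
y = A @ truth.ravel()                      # noiseless measurements y = Ax

alpha, eps = 0.5, 1e-6                     # TV weight and smoothing constant

def objective(x_flat):
    """||Ax - y||_2^2 + alpha * smoothed anisotropic TV, with its gradient."""
    x = x_flat.reshape(h, w)
    r = A @ x_flat - y
    # periodic-boundary finite differences along each Cartesian direction
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    sx, sy = np.sqrt(dx**2 + eps), np.sqrt(dy**2 + eps)
    f = r @ r + alpha * (sx.sum() + sy.sum())
    gx, gy = dx / sx, dy / sy
    g_tv = (np.roll(gx, 1, axis=1) - gx) + (np.roll(gy, 1, axis=0) - gy)
    grad = 2 * (A.T @ r) + alpha * g_tv.ravel()
    return f, grad

res = minimize(objective, np.zeros(n), jac=True, method="L-BFGS-B")
error = np.linalg.norm(res.x - truth.ravel()) / np.linalg.norm(truth)
print(f"relative recovery error: {error:.3f}")
```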

3.4. Compressive FMCW-LiDAR depth-mapping theory

The proposed experimental diagram is shown in Fig. 1. A balanced-heterodyne-detection based FMCW-LiDAR system for multiple-object ranging is combined with a DMD-based compressive-imaging system to obtain transverse spatial information more efficiently than raster scanning. A linear-frequency chirped laser broadly illuminates a scene and the reflected radiation is imaged onto a DMD. The DMD can take projections using ±1 pixel values, meaning the sensing matrix A is limited to ±1 values. Because the 1 and −1 DMD pixel values reflect light in different directions, we split the acquisition process into separate (1, 0) and (0, 1) balanced heterodyne detections. A readout instrument, such as an oscilloscope, records the two signals. The time-dependent readout signals are then Fourier transformed, keeping only the positive-frequency components, and the amplitudes are differenced to arrive at a frequency-domain representation of a (1, −1) balanced heterodyne projection. The measurement-vector construction and depth-map reconstruction processes are detailed in this section.

 figure: Fig. 1

Fig. 1 (Proposed Experiment) HWP: half-wave plate, PBS: polarizing beam-splitter, DMD: digital micro-mirror device, BS: beam-splitter. A linearly chirped laser is split into two beams, designated as a local-oscillator and a signal, via a HWP and a PBS. The signal illuminates a target scene and the reflected radiation is used to image the scene onto a DMD. The DMD takes pseudo-random spatial projections, consisting of ±1 pixel values, and directs the projections to balanced-heterodyne detectors using the local-oscillator.


Let a three-dimensional scene x of depth L be discretized into N depths and n transverse spatial pixels containing objects at j depths. Let a 2D image of the scene x at depth j be represented by a real one-dimensional vector x_{I_j} ∈ ℝ^n. The lth pixel in x_{I_j} is represented as x_{I_j}^l. The scene at depth j (i.e. x_{I_j}) yields a time-dependent heterodyne signal of \sum_{l=1}^{n} x_{I_j}^{l}\,\epsilon_0 c A_{LO}A_j\sin(2\pi\nu_j t + \phi_j), where νj = Δντj/T. Let us also assume that we are time-sampling at the Nyquist rate needed to see an object at a maximum depth L having a beat-note frequency of 2ΔνL/(Tc), meaning we acquire 2N data points when sampling at a frequency of 4ΔνL/(Tc) over a period T. This is the minimum sampling requirement. Within an experiment, sampling faster than the Nyquist rate will enable the accurate recovery of both the frequency and amplitude at depth L. Let a single pattern to be projected onto the imaged scene be chosen from the sensing matrix A ∈ ℝ^{m×n} and be represented by a real one-dimensional vector A_k ∈ ℝ^n for k = 1, 2, ...m, where m < n. For efficient compression, we require m ≪ n. When including the 2N time samples, the heterodyne signal is now represented as a matrix of m projections by 2N time points, P_{Scope}(t) ∈ ℝ^{m×2N}, such that

P_{Scope}(t) = \epsilon_0 c \sum_j A_{LO}\,A\,x_{I_j}\sin\!\left(2\pi\frac{\Delta\nu\,\tau_j}{T}t + \phi_j\right). \qquad (8)

To obtain TOF information, it is helpful to work in the Fourier domain. The absolute value of the first N elements in the Fourier transform of P_{Scope}(t) returns the compressive measurement matrix of positive-frequency amplitudes, y_I(ν_+) = |\mathcal{F}[P_{Scope}(t)]_+| ∈ ℝ^{m×N}, where [*]_+ returns the positive-frequency components of *. Thus, y_I(ν_+) consists of N different measurement vectors of length m. Each of the N measurement vectors contains compressed transverse information for a particular depth.

While individual images at a particular depth can be reconstructed from y_I(ν_+), doing so is inefficient given the size of the measurement matrix and the number of needed reconstructions. A more practical method with a significantly smaller memory footprint and far fewer reconstructions requires that we trace out the frequency dependence and generate only two measurement vectors of length m, designated as y_I and y_{Iν} ∈ ℝ^m, such that

y_I = \sum_{\nu_+=1}^{N}\left|\mathcal{F}[P_{Scope}(t)]_+\right| \qquad (9)
y_{I\nu} = \sum_{\nu_+=1}^{N}\left|\mathcal{F}[P_{Scope}(t)]_+\right|\boldsymbol{\nu}_+, \qquad (10)
where \boldsymbol{\nu}_+ ∈ ℝ^{N×N} is a diagonal matrix of frequency indices that weights each frequency amplitude of y_I(ν_+) by its own frequency (ν) before summation.

To summarize, the measurement vector y_I is formed by summing the positive-frequency amplitudes, per projection, and is equivalent to a typical compressive-imaging measurement. The measurement vector y_{Iν} requires that each positive-frequency amplitude first be weighted by its frequency before summation. A depth map d can be extracted by reconstructing the transverse image profiles (x̂_I, x̂_{Iν} ∈ ℝ^n) and then performing a scaled element-wise division as

d = \frac{\hat{x}_{I\nu}}{\hat{x}_I}\,\frac{Tc}{2\Delta\nu}, \qquad (11)
where x̂_{Iν}/x̂_I is an element-wise division applied only to elements x̂_I > 0. A similar technique designed for photon-counting with time-tagging was implemented in [15].
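To make the construction of Eqs. (9)–(11) concrete, the Python sketch below builds y_I and y_{Iν} from a stack of simulated, already-differenced oscilloscope traces (one per DMD projection) and shows that, for a pixel containing a single object, the frequency-weighted ratio of Eq. (11) returns that object's depth. The trace parameters, depths, and projection weights are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
delta_nu, T, c = 100e9, 1e-3, 3e8
fs, N_samp = 33.3e6, 33300
t = np.arange(N_samp) / fs
m = 8                                    # a tiny number of projections, for illustration

depths = np.array([4.5, 15.0])           # two object depths (m)
beats = delta_nu * (2 * depths / c) / T  # their beat-note frequencies
weights = rng.uniform(-1, 1, size=(m, depths.size))        # stand-in projection weights
P_scope = weights @ np.sin(2 * np.pi * beats[:, None] * t[None, :])

spectra = np.abs(np.fft.rfft(P_scope, axis=1))   # positive-frequency amplitudes (m x N)
freqs = np.fft.rfftfreq(N_samp, 1 / fs)

y_I = spectra.sum(axis=1)     # Eq. (9): summed amplitudes, one entry per projection
y_Inu = spectra @ freqs       # Eq. (10): frequency-weighted sums

# Eq. (11) acts on the reconstructed images, but for a pixel containing a single
# object the frequency-weighted ratio simply returns that object's depth:
pixel = np.abs(np.fft.rfft(np.sin(2 * np.pi * beats[0] * t)))
d_pixel = (pixel @ freqs) / pixel.sum() * T * c / (2 * delta_nu)
print(f"single-object pixel depth: {d_pixel:.2f} m")    # ~4.5 m
```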

3.5. Compressive gains

Now that the projective aspect of the CS measurement is understood, it is beneficial to emphasize the advantages compressive depth-mapping offers over raster and detector-array based systems. In short, projective measurements result in eye-safety and SNR enhancements.

Regarding eye-safety, it is easier to render the illuminating radiation of CS methods eye-safe when compared to raster-scanning methods. As demonstrated in [18], a broad illumination-profile can have a lower intensity across the eye when compared against raster-scanning with a tightly focused laser.

The SNR is also enhanced in compressive methods, particularly in comparison to detector arrays, and has been extensively studied in [33–36]. In the case of depth-mapping with broad-illumination profiles, lower return flux will be an issue. If the flux rate R of the returning radiation is divided among α pixels within a β-pixel detector array (assuming α ≤ β reflective pixels within the scene are imaged onto the β-pixel detector array), the number of photons received by a single detector pixel after an integration time t will be approximately Rt/α. In the single-detector compressive case, roughly half of the photons will make it to the detector due to the random projection. If we want the compressive method – requiring m different projective measurements – to have the same total integration time t, then the integration time for a single projection must be t/m. Assuming a DMD can operate at this rate, the number of photons received by the detector during a single compressive projection will be Rt/(2m). Because compression allows for 2m < α ≤ β, the increased flux per measurement reduces the shot-noise uncertainty.
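A short numerical comparison makes the Rt/α versus Rt/(2m) trade-off explicit. The flux rate, integration time, and pixel counts below are hypothetical values chosen only for illustration.

```python
# Photons per measurement: detector array versus single-detector compressive scheme.
R = 1e6               # assumed return flux (photons/s)
t = 1.0               # total integration time (s)
alpha = 16384         # illuminated scene pixels imaged onto the array (alpha <= beta)
n = 16384             # DMD pixels
m = int(0.2 * n)      # compressive measurements at a 20% sample ratio

photons_per_array_pixel = R * t / alpha        # detector-array case
photons_per_projection = R * t / (2 * m)       # compressive single-detector case
print(photons_per_array_pixel, photons_per_projection)   # ~61 vs ~153 photons
```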

In addition to a higher SNR from using fewer pixels, the system presented here benefits from both balanced heterodyne detection and background subtraction. Heterodyne detection will amplify the return radiation with the local oscillator. Background subtraction from the (1, −1) projections will cancel correlated noise [33,35–37]. Here, we define correlated noise as identical noise sources present in the (+1, 0) and (0, −1) projections from the DMD. Compressive sensing with (1, −1) valued patterns has been shown to successfully recover signals even when the background is larger than the signal itself. This can be easily seen by adding complex-valued noise terms to the signal and then propagating them through the system up to the generation of the measurement vectors. Perfectly correlated noise sources, such as uniform background radiation, will cancel. Partially correlated noise sources, such as spatially varying background radiation, will only partially cancel. The computer simulations presented below inject uncorrelated noise, which must be removed through denoising algorithms.

4. Computer simulation

4.1. Simulation parameters

A noisy simulation of compressive depth-mapping is presented within this section. The simulation presents detailed steps needed to acquire depth-maps from signals disrupted by both laser-linewidth uncertainty and Gaussian white noise when compressively measuring the scene presented in Fig. 2. We assume the scene contains Lambertian-scattering objects and is illuminated with 1 W of 780 nm light using an illumination profile shown in Fig. 3. With a large enough angular spread of optical power, the system can be rendered eye-safe at an appropriate distance from the source. We discretize the scene with a 128 × 128 pixel resolution DMD (n = 16384) operating at 1 kHz and use a 2 inch collection optic according to the diagram in Fig. 1. Figure 3 presents an example DMD projection of the returned radiation intensities before the balanced heterodyne detection as seen by only one of the detectors (corresponding to a (1,0) projection). A local oscillator of 100 μW is mixed with the returned image for the balanced heterodyne detection.

 figure: Fig. 2

Fig. 2 Image (a) presents a 3-dimensional scene composed of Lambertian-scattering targets. Image (b) presents the depth map we wish to compressively recover.


 figure: Fig. 3

Fig. 3 Image (a) presents an illumination profile that consists of 1 Watt in total power. The scene is discretized into a 128 × 128 pixel resolution scene with a maximum power per pixel of 119 μW. When using a 2 inch collection optic and modeling each object as a Lambertian scatterer, image (b) presents a single projection taken by the DMD of the reflected radiation as seen by one detector. The maximum power per pixel is of order nanowatts.


To model the frequency sweep, we use experimental parameters taken from [1] and [38] which state that a 100 GHz linear sweep over 1 ms is realizable for various laser sources. To bias the simulation towards worse performance, we model the laser linewidth with a 1 MHz Lorentzian full-width half-maximum (FWHM), meaning the beat note on our detectors will have a 2 MHz FWHM Lorentzian linewidth [39]. A narrower linewidth can be obtained with many lasers on the market today, but we bias the simulation towards worse operating parameters to show the architecture’s robustness. The simulated laser has a coherence length of c/(πΔνFWHM) ≈ 95 m, where ΔνFWHM = 1 MHz.

FMCW-LiDAR can use relatively inexpensive measurement electronics while still providing decent resolution at short to medium range. Using a 100 GHz frequency sweep over 1 ms leads to a beat note of 16.67 MHz from an object 25 m away. Keeping with the idea of slower electronics, we model the detection scheme using balanced detectors and an oscilloscope with a bandwidth of 16.65 MHz. The simulated oscilloscope sampled at 33.3 MHz over 1 ms, corresponding to a record-length of 33.3 × 10³ samples per frequency sweep. The frequency resolution is 1 kHz, corresponding to a depth resolution of 1.5 mm. The furthest object in our test scene is located at approximately 22 m.
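These quoted values all follow from Eq. (3) and the stated sweep parameters, as the short calculation below verifies.

```python
# Sanity check of the quoted system parameters using Eq. (3).
delta_nu, T, c = 100e9, 1e-3, 3e8   # 100 GHz sweep over 1 ms
fs = 33.3e6                         # oscilloscope sampling rate

samples_per_sweep = fs * T                       # 33.3e3 samples
beat_25m = delta_nu * (2 * 25 / c) / T           # ~16.67 MHz for an object at 25 m
f_res = 1 / T                                    # 1 kHz frequency resolution
depth_res = f_res * T * c / (2 * delta_nu)       # 1.5 mm depth resolution
max_depth = 16.65e6 * T * c / (2 * delta_nu)     # ~25 m for the 16.65 MHz bandwidth
print(samples_per_sweep, beat_25m, f_res, depth_res, max_depth)
```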

We also assume a 1 ms integration time per DMD projection, corresponding to one period of the frequency sweep per projection. We model the linear frequency chirp using a sawtooth function for simplicity, as opposed to the triangle function more commonly used in experiments.

The sensing matrix used for the compressive sampling is a randomized Sylvester-Hadamard matrix whose entries take values of ±1. We use two detectors to acquire the information with one projection – resulting in only m projections for an m-row sampling matrix. Alternatively, a cheaper single-detector version can also be implemented, but it must display two patterns for every row of the sensing matrix – resulting in 2m projections. While the alternate detection scheme requires twice as many measurements, we still operate in the regime where 2m ≪ n.
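The sketch below shows one plausible way to build such a randomized ±1 Sylvester-Hadamard sensing matrix in Python. The exact randomization used here is an assumption (it is not spelled out above), the resolution is reduced so that the matrix fits comfortably in memory, and in practice the full matrix is never materialized because the reconstruction relies on fast Hadamard transforms (section 4.3).

```python
import numpy as np
from scipy.linalg import hadamard

n = 64 * 64                     # reduced pixel count (must be a power of two)
m = int(0.2 * n)                # 20% sample ratio
rng = np.random.default_rng(7)

H = hadamard(n, dtype=np.int8)               # Sylvester-Hadamard matrix, entries +/-1
perm = rng.permutation(n)                    # random column permutation
signs = rng.choice([-1, 1], size=n)          # random sign flip per column
rows = rng.choice(n, size=m, replace=False)  # keep m randomly selected rows

A = H[rows][:, perm] * signs    # m x n sensing matrix with +/-1 entries
A_pos = (A + 1) // 2            # the (1, 0) pattern displayed on the DMD
A_neg = (1 - A) // 2            # the complementary (0, 1) pattern
```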

4.2. Compressive measurements

To simulate data acquisition in a noisy environment, we add Gaussian white noise in addition to laser-linewidth uncertainty. This section explains how the Fourier-transformed oscilloscope amplitudes are processed before applying the operations found in Eqs. (9) and (10).

Figure 4 explains the simulated noise addition and removal process. Image (a) depicts the normalized 2 MHz Lorentzian beat-note linewidth in terms of its frequency components. Image (b) presents a single ideal DMD projection, using a 1 ms integration time, that has not been distorted by any form of noise. Image (b) clearly depicts 5 main peaks, one per object. However, beat-note linewidth uncertainty will blur these features together such that image (c) contains frequency uncertainty. In addition to laser-linewidth, varying levels of Gaussian white noise are injected into image (c). Image (d) presents a typical noisy signal one would expect to see for a peak signal-to-noise ratio (PSNR) of 5, meaning the brightest Lorentzian-broadened Fourier component has an SNR equal to 5. SNR is defined as the signal’s broadened amplitude divided by the standard deviation of the Gaussian white noise. Note that the farthest object has an SNR at or below unity.

 figure: Fig. 4

Fig. 4 Image (a) shows the 2 MHz Lorentzian linewidth on the beat notes expected from a laser’s 1 MHz Lorentzian linewidth as seen by an oscilloscope sampling at 33.3 MHz. Image (b) shows the noiseless, positive-frequency components from a single noiseless projection from Fig. 3 when using a zero-linewidth laser. When accounting for a 2 MHz beat-note linewidth, image (c) shows the broadened expected frequency components. When including a peak SNR = 5 (based on the SNR of the brightest object or pixel), image (d) presents a realistic noisy signal one would measure in an experiment. Image (e) presents the result of denoising the signal in (d) with a Bayes-shrink denoising filter using a symlet-20 wavelet decomposition. Image (f) presents the result of deconvolving the 2 MHz Lorentzian linewidth from the denoised signal in (e) using a Wiener filter. Image (f) is the cleaned signal from the (1,0) projection. A similarly cleaned result from the (0,1) signal will then be subtracted from (f) and then used to form y_I and y_{Iν}.


To clean a typical signal seen by only one detector, as in image (d), we use a BayesShrink filter [40] with a symlet-20 wavelet decomposition (see the PyWavelets library for more details). BayesShrink is a filter designed to suppress Gaussian noise using a level-dependent soft-threshold at each level of a wavelet decomposition. Because a symlet-20 wavelet is reasonably close in shape to the original signal components found in image (b), it has excellent denoising properties for these particular data sets. Alternate wavelets or denoising algorithms can be implemented and may demonstrate better performance. Image (e) is the result of applying a symlet-20 based BayesShrink filter. A Wiener deconvolution filter [41] is then used to deconvolve the known Lorentzian linewidth of image (a) from the denoised signal in (e) to produce the signal in (f). Image (f) constitutes only one of the needed projections, i.e. (1,0). The process must also be applied to the signal from the second detector consisting of the (0,1) projection. The two cleaned signals are then subtracted to produce a (1,−1) projection. The differenced signal is then used to form a single element of each measurement vector y_I and y_{Iν} using Eqs. (9) and (10).
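The per-projection cleanup just described can be sketched with off-the-shelf tools: scikit-image's denoise_wavelet provides BayesShrink soft-thresholding, and the Wiener deconvolution can be written directly in the Fourier domain. The synthetic spectrum, linewidth, noise level, and Wiener regularization below are assumptions chosen only to mimic the processing chain, not the simulation's actual data.

```python
import numpy as np
from skimage.restoration import denoise_wavelet

rng = np.random.default_rng(3)
freqs = np.arange(0, 16.651e6, 1e3)            # positive-frequency axis, 1 kHz bins
clean = np.zeros_like(freqs)
clean[[3000, 10000, 14500]] = [1.0, 0.6, 0.3]  # three ideal beat-note amplitudes

# Lorentzian broadening (2 MHz FWHM) applied by circular convolution
gamma = 1e6                                    # half-width at half-maximum (Hz)
kernel = (gamma / np.pi) / ((freqs - freqs.mean())**2 + gamma**2)
kernel /= kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))
broadened = np.real(np.fft.ifft(np.fft.fft(clean) * K))

noisy = broadened + rng.normal(0, broadened.max() / 5, size=freqs.size)  # peak SNR ~ 5

# BayesShrink wavelet denoising with a symlet-20 decomposition
denoised = denoise_wavelet(noisy, wavelet='sym20', method='BayesShrink', mode='soft')

# Wiener deconvolution of the known Lorentzian lineshape
snr_assumed = 100.0
wiener = np.conj(K) / (np.abs(K)**2 + 1 / snr_assumed)
recovered = np.real(np.fft.ifft(np.fft.fft(denoised) * wiener))
```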

4.3. Depth-map reconstruction

Once the process outlined in section 4.2 has been repeated m times, depth maps are reconstructed from y_I and y_{Iν} using the procedure outlined in section 3.4. Details of the recovery of x̂_I and x̂_{Iν} are presented in this section.

Ideally, one would only solve Eq. (7) twice to obtain x̂_I and x̂_{Iν} and then apply Eq. (11) to extract high-resolution depth-maps. Unfortunately, TV-minimization will not always return accurate pixel values, particularly at low sample percentages, and will result in depth-maps with well-defined objects at incorrect depths. While TV-minimization excels at finding objects when making α > 1, the least-squares term that ensures accurate pixel values suffers. To obtain accurate depth maps, we must both locate objects accurately and obtain accurate pixel values.

Instead, we apply only one TV-minimization to locate the objects within x̂_I. After hard-thresholding the image returned from TV-minimization, if needed to eliminate background noise, we are left with an image of objects with incorrect pixel values. An example of this step can be found in images (a) and (d) of Fig. 5, where image (a) was obtained from m = .25n measurements and image (d) was obtained from m = .05n measurements. From x̂_I, we generate a binary mask M, as demonstrated in images (b) and (e), that identifies the locations of interest.

 figure: Fig. 5

Fig. 5 When using a PSNR = 5, image (a) shows a typical total-variation minimization reconstruction from a 25% sample-ratio intensity-measurement vector y_I. Shapes are identified, but the pixel values are incorrect. Image (b) is a binary mask generated by hard-thresholding image (a). After representing image (b) in a sparse basis, such as a Haar-wavelet decomposition, least-squares can be performed on the m/3 largest signal components to construct x̂_I and x̂_{Iν}. After applying Eq. (11), a depth map is presented in image (c). Images (d), (e), and (f) demonstrate the same process for a 5% sample ratio, again with PSNR = 5. Image (g) is the true depth-map presented for easy comparison. Images (h) and (i) are smoothed depth-maps after applying a 4 × 4 pixel averaging kernel to depth-maps (f) and (c), respectively.


Once we have a binary mask M, a unitary transform Ψ converts M into a sparse representation s such that s = ΨM. In our simulations, Ψ is a wavelet transformation using a Haar wavelet. We tested other wavelets with similar results. We then keep only the m/3 largest signal components of s to form a vector s_{m/3} = Ps, where the operator P ∈ ℝ^{(m/3)×n} is a sub-sampled identity matrix that extracts the m/3 largest signal components of s. We chose m/3 rather than a larger percentage of m based on empirical results. In essence, s_{m/3} is the support of s, i.e. the significant nonzero components of s containing the transverse spatial information of our depth-map. Because we have a vector s_{m/3} ∈ ℝ^{m/3} and two measurement vectors y_I ∈ ℝ^m and y_{Iν} ∈ ℝ^m, we can perform least-squares on the now-overdetermined system. Specifically, we apply the operations

s_{m/3} = P\,\Psi\,\hat{M}\!\left(\underset{x_I\in\mathbb{R}^n}{\arg\min}\;\|Ax_I - y_I\|_2^2 + \alpha\,TV(x_I)\right) \qquad (12)
\hat{x}_I = M\,\Psi^{-1}P^{T}\!\left(\underset{s_{m/3}}{\arg\min}\;\|A\Psi^{-1}P^{T}s_{m/3} - y_I\|_2^2\right) \qquad (13)
\hat{x}_{I\nu} = M\,\Psi^{-1}P^{T}\!\left(\underset{s_{m/3}}{\arg\min}\;\|A\Psi^{-1}P^{T}s_{m/3} - y_{I\nu}\|_2^2\right), \qquad (14)
where the operation M̂ returns a mask M. Note that P^T is the transpose of P and Ψ^{-1} is the inverse wavelet transform. Also notice that s_{m/3} and P contain the transverse spatial information of interest within our depth-map and are used in both Eqs. (13) and (14). They allow us to obtain an accurate heterodyne amplitude image x̂_I (from y_I) and an accurate frequency-weighted heterodyne amplitude image x̂_{Iν} (from y_{Iν}).

The total-variation minimization was performed with a custom augmented Lagrangian algorithm written in Python. It is based on the work in [42] but is modified to use fast-Hadamard transforms. Additionally, the gradient operators within the TV operator use periodic boundary conditions that enable the entire algorithm to function with only fast-Hadamard transforms, fast-Fourier transforms, and soft-thresholding. The least-squares minimizations in Eqs. (13) and (14) are performed by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [43,44] – a fast quasi-Newton method. Depth maps are obtained by applying Eq. (11). Example depth maps are found in images (c) and (f) of Fig. 5. If desired, light smoothing can be applied to obtain images (h) and (i).
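A much-reduced sketch of the steps in Eqs. (13) and (14) is given below. The TV step of Eq. (12) is replaced by an already-known mask, the scene and sizes are toy assumptions, and the least-squares problems are solved with SciPy's L-BFGS-B over the retained Haar-wavelet coefficients (PyWavelets supplies the transform). It is meant only to show how the mask, the wavelet support, and the two measurement vectors fit together.

```python
import numpy as np
import pywt
from scipy.optimize import minimize

rng = np.random.default_rng(5)
h = w = 16
n, m = h * w, 120

# Toy scene: one object; the amplitude image and the frequency-weighted image
# share the same support, and every object pixel beats at 3 MHz.
mask = np.zeros((h, w)); mask[4:9, 5:12] = 1.0
x_I_true = 0.8 * mask
x_Inu_true = 3.0e6 * x_I_true

A = rng.choice([-1.0, 1.0], size=(m, n))
y_I = A @ x_I_true.ravel()
y_Inu = A @ x_Inu_true.ravel()

# Haar-wavelet support of the mask: keep the m/3 largest coefficients
coeffs, slices = pywt.coeffs_to_array(pywt.wavedec2(mask, 'haar'))
support = np.argsort(np.abs(coeffs.ravel()))[::-1][: m // 3]

def reconstruct(y):
    """Least squares over the retained wavelet coefficients, as in Eqs. (13)-(14)."""
    def synth(s):
        full = np.zeros(coeffs.size)
        full[support] = s
        arr = pywt.array_to_coeffs(full.reshape(coeffs.shape), slices,
                                   output_format='wavedec2')
        return pywt.waverec2(arr, 'haar').ravel()
    obj = lambda s: np.sum((A @ synth(s) - y) ** 2)
    s_opt = minimize(obj, np.zeros(m // 3), method='L-BFGS-B').x
    return mask.ravel() * synth(s_opt)

x_I, x_Inu = reconstruct(y_I), reconstruct(y_Inu)
ratio = np.where(x_I > 0, x_Inu / np.maximum(x_I, 1e-12), 0)   # Eq. (11) up to Tc/(2*delta_nu)
print(ratio[mask.ravel() > 0].mean())   # ~3e6, the object's beat frequency
```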

The reconstruction algorithms recovered the low-spatial-frequency components of the depth-maps in Fig. 5, and arguably recovered a low-resolution depth-map. However, missing details can be retrieved when using a higher resolution DMD, sampling with higher sampling ratios, or applying adaptive techniques [45].

4.4. Results

To test the robustness and accuracy of the reconstructions with respect to noise and sample ratios, varying levels of Gaussian white noise were introduced into the simulation. Six different noise levels are presented in Fig. 6 (a). For each PSNR level, 10 different critically sampled (m = n) measurement vectors were generated using different sensing matrices but with the same parameters listed in section 4.1. Moving in increments of 2%, m was adjusted from .02n to n for each measurement vector. Reconstructions were performed at each sample percentage and the non-smoothed reconstructed depth-maps were compared against the true depth-map to generate a mean-squared error. Thus, every data point in Fig. 6 is the average mean-squared error of 10 reconstructions while the solid colored regions above and below each marker designate the standard error in the mean. As seen in Fig. 6(a), the mean-squared error increases rapidly for PSNR < 5, probably because the SNR of distant objects is well below unity. Additionally, we see that a sample ratio of roughly m/n = .2 is sufficient for depth-map generation at almost all noise levels.

 figure: Fig. 6

Fig. 6 (a) Reconstruction Mean-Squared Error: Each data point corresponds to the average mean squared error from 10 different reconstructions of different data sets for varying sample ratios (m/n) and peak-SNR cases. The standard error for each mean is given by the shaded regions. (b) Depth Uncertainty to Toroid: Each data point represents the average uncertainty in depth to the toroid within Fig. 2. Points were calculated by first considering the standard deviation of each pixel over 10 different reconstructions. Standard deviations associated with only the toroid pixels were then averaged. Different peak-SNR scenarios, FWHM Lorentzian linewidths for the beat-note frequency uncertainties, and sample ratios were considered and compared to the results obtained by simulating raster scans. Raster scans were not affected by linewidth uncertainty.


Within Fig. 6, image (a) presents a reasonable way of analyzing how well all depths are recovered. However, it does not present the depth uncertainty of a single object as a function of the sample percentage and SNR. For that comparison, image (b) presents the average depth uncertainty to the toroid. Each data point was obtained by performing 10 reconstructions of 10 different data sets, finding the standard deviation in the error of each pixel (by first subtracting the true depth-map from each reconstruction), and then computing the average of the standard deviations associated with pixels comprising the toroid. These simulations were repeated with different SNR levels (using the PSNR method). Additionally, the laser-linewidth was reduced to 500 kHz and then 50 kHz (corresponding to a beat-note linewidth of 1 MHz and then 100 kHz, respectively). A simulation with an infinite PSNR and 100 kHz beat-note linewidth was included in an attempt to arrive at an upper bound on the performance when using a typical laser suited for FMCW-LiDAR. For completeness, the uncertainty in the toroid’s depth from a raster scan is also included.

Image (b) within Fig. 6 shows that the compressive depth-mapping scheme is not as accurate as raster-scanning systems, regardless of the SNR and laser linewidth – most likely due to reconstruction errors. The compressive LiDAR camera located the toroid with an uncertainty between approximately 10 – 100 cm, depending on the SNR, linewidth, and measurement ratio, while the raster-scan consistently identified the depth within 5 cm. Because raster scanning with an FMCW-LiDAR requires finding only the central frequency of the distribution associated with a single depth, laser linewidth and noise have little impact on identifying this value – particularly after denoising.

In contrast, simultaneous object detection requires isolating the peaks of different frequency components that may only be resolvable after denoising and then deconvolving the laser’s linewidth. Because these operations are unlikely to be optimal (particularly in these simulations), noise and uncertainty will inevitably find their way into the measurement vectors. Additionally, the constrained optimization algorithms require a sparse basis transform Ψ and user input parameters (e.g. α) – each of which will impact the reconstruction. Because pixels associated with a single object are likely to reside near one another, using a total-variation minimization with a small α parameter in lieu of the BFGS algorithm or applying a light smoothing operation within Eqs. (13) and (14) will reduce the depth variation in neighboring pixels. Finally, Eq. (14) is minimized using the measurement vector y_{Iν} – whose elements were calculated by first applying the linear frequency weighting in Eq. (10). As such, objects farther away contribute more to the minimization and may bias the reconstructions, with nearby objects having less accuracy. Simulations suggest that moving to a sub-linear frequency weighting scheme (possibly using a square-root or logarithm) may yield better performance in identifying depths both near and far.

To summarize this section, the simulations suggest compressive depth-mapping will not be as accurate as a raster-scanned LiDAR system due to a variety of tunable computational operations and basis choices. Fortunately, there is room for improvement through multiple avenues: using better denoising and deconvolution algorithms, more carefully selecting a sparse basis representation, optimizing the number of signal coefficients to operate on rather than using a fixed fraction (m/3), or even adding a small total-variation penalty parameter to the least-squares algorithm. Finally, trading depth accuracy for a boost in acquisition speed may not be detrimental in all applications. The speed enhancement will become even more noticeable at higher resolutions due to the increase in image compression efficiency.

4.5. Experimental considerations

With the proposed experimental diagram in Fig. 1 and parameters presented in section 4.1, an experimental realization should be straightforward. Several points to consider when building this system are presented here.

The most difficult task is the generation of the linear frequency sweep due to the nonlinear relationship between the laser’s frequency and the modulation current. Using optoelectronic feedback, this problem has been effectively solved in [38]. However, there exists a fundamental difference between a simulated and an experimental frequency sweep. The simulation used a sawtooth function for the linear sweep – which injected aliasing artifacts into the generated frequency signals. Aliasing artifacts arose when the latter half of the delayed sawtooth interfered with the start of a new local oscillator waveform to generate high-frequency components that could not be resolved. These artifacts were merely attributed to additional noise in the simulation. However, drastic changes to a laser’s modulation current can damage the laser, so it is better to use a triangle waveform. This will, in turn, inject low-frequency noise into the signal. Additionally, 1/f noise may be present within the system along with Gaussian white noise. Thus, a high-pass filter will be needed to clean these signals. Furthermore, the triangle function in this system may allow for compressive Doppler-mapping by using both the rise and fall of the frequency sweep.

Regarding the imaging system, the optics and detector area must be chosen such that the necessary spatial-frequency components leaving the DMD reach the difference-detector’s diode. Clipping from small apertures or from insufficient detector size will lead to blurring of the depth-map.

This architecture’s main limitation is in capturing quickly moving objects. The problem can be alleviated by moving to faster frame rates arising from fewer measurements. Our simulations assumed a 1 ms integration time per projection. When using a 20% sampling ratio for a 128 × 128 pixel resolution depth-map, only 3.3 sec of light-gathering time are needed. Perhaps the easiest way to reduce the acquisition time is to reduce the resolution. Reducing the resolution from 128 × 128 pixels to 64 × 64 pixels while maintaining the same active area on the DMD offers a factor of at least 4 timing improvement because the SNR will increase with larger super-pixel sizes. Given the short range of this LiDAR system and that some off-the-shelf DMDs can operate at 20 kHz, the 1 ms integration time can also be readily reduced by a factor of 10.
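The acquisition-time arithmetic behind these statements is simple; the third case below assumes the shorter projection time mentioned above.

```python
# Light-gathering time for a few of the scenarios discussed above.
def acquisition_time(pixels, sample_ratio, t_projection):
    return pixels * sample_ratio * t_projection

print(acquisition_time(128 * 128, 0.20, 1e-3))   # ~3.3 s, the quoted figure
print(acquisition_time(64 * 64, 0.20, 1e-3))     # ~0.8 s at quarter resolution
print(acquisition_time(128 * 128, 0.20, 1e-4))   # ~0.33 s with a 10x shorter projection
```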

Spatially coherent returning radiation may introduce diffraction from the DMD – which is essentially a diffraction grating. The simulations relied on incoherent Lambertian scattering and were not affected by the DMD. Fortunately, once light has been imaged onto the DMD, the spatial profile of the light after that point is of little consequence. All that matters is that the light leaving the DMD makes it to the detector. Even if some of the radiation is lost to diffraction, a depth map can still be generated, but with a loss of SNR.

The purity of the local oscillator also plays an important role. Within section 3.5, we mentioned several SNR enhancement abilities resulting from increased flux over fewer pixels, heterodyne detection, and background subtraction. There, we assumed the local oscillator contained no noise. If noise is injected into the local oscillator, many of the noise-cancellation properties disappear. Thus, it is essential that the local oscillator be phase stable with a clean frequency ramp. Fortunately, optoelectronic feedback systems used to generate the frequency sweep will also improve the phase stability of the laser [38].

Finally, the current Python-based reconstruction algorithms take approximately 10 sec to reconstruct a 128 × 128 depth map using a single thread on a 4.1 GHz Intel i7-4790K CPU – preventing real-time video. Because the least-squares minimizations of x̂_I and x̂_{Iν} are independent, the reconstructions can be done in parallel. Moving the reconstruction algorithms from our serial implementation to a faster programming language, executed on a parallel architecture that takes advantage of multi-core CPUs or graphics cards, will significantly alleviate timing overhead from the reconstructions.

5. Conclusion

FMCW-LiDAR is a valuable technique for ranging because of its accuracy and relatively low cost. It is a natural extension to apply FMCW-LiDAR systems to depth-mapping, whether through raster scanning or other means. The compressive architecture presented in this article is an efficient means of depth-mapping that is inexpensive and can be easily implemented with off-the-shelf components. It can be more easily rendered eye-safe than pulsed or raster-based ranging systems. Finally, we demonstrate how to minimize the memory overhead from the sampling process. Instead of storing m high-dimensional spectra (consisting of m × N numbers in total), we demonstrate how only 2m recorded numbers can be effectively used to recover high-dimensional depth-maps.

This article presents an architecture to build a compressive FMCW-LiDAR depth-mapping system. Simulations support our conclusion that this system will be robust to noise when operating in the field. While there is room for improvement within the experimental design and reconstruction algorithms, we present a viable architecture that simplifies the data-taking process and significantly reduces the data storage and computation overhead. These advances serve to reduce the overall expense of such a system while simultaneously increasing the resolution and SNR for real-time depth-mapping.

Funding

Air-Force Office of Scientific Research (AFOSR) (FA9550-16-1-0359).

References and links

1. G. S. Cheok, M. Juberts, and M. Franaszek, “3D Imaging Systems for Manufacturing, Construction, and Mobility (NIST TN 1682),” Tech. Note (NIST TN)-1682 (2010).

2. J. Smisek, M. Jancosek, and T. Pajdla, “3D with kinect,” in Consumer Depth Cameras for Computer Vision, (Springer, 2013), pp. 3–25. [CrossRef]  

3. K. Khoshelham and S. O. Elberink, “Accuracy and resolution of kinect depth data for indoor mapping applications,” Sensors 12, 1437–1454 (2012). [CrossRef]   [PubMed]  

4. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3, 128–160 (2011). [CrossRef]  

5. B. Behroozpour, P. A. Sandborn, N. Quack, T. J. Seok, Y. Matsui, M. C. Wu, and B. E. Boser, “Electronic-Photonic Integrated Circuit for 3D Microimaging,” IEEE J. Solid-State Circuits 52, 161–172 (2017). [CrossRef]  

6. A. A. Frank and M. Nakamura, “Laser radar for a vehicle lateral guidance system,” (1993). US Patent 5,202,742.

7. F. Amzajerdian, D. Pierrottet, L. Petway, G. Hines, and V. Roback, “Lidar systems for precision navigation and safe landing on planetary bodies,” in International Symposium on Photoelectronic Detection and Imaging 2011, (International Society for Optics and Photonics, 2011), p. 819202. [CrossRef]  

8. R. Stettner, “Compact 3D flash lidar video cameras and applications,” in Laser Radar Technology and Applications XV, Vol. 7684 (International Society for Optics and Photonics, 2010). [CrossRef]  

9. A. E. Johnson and J. F. Montgomery, “Overview of terrain relative navigation approaches for precise lunar landing,” in IEEE Aerospace Conference (2008), pp. 1–10.

10. F. Remondino, L. Barazzetti, F. Nex, M. Scaioni, and D. Sarazzi, “UAV photogrammetry for mapping and 3d modeling–current status and future perspectives,” Int. Arch. Photogramm. Remote. Sens. Spatial Inf. Sci. 38, C22 (2011).

11. M.-C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40, 10–19 (2001). [CrossRef]  

12. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006). [CrossRef]  

13. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]  

14. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012). [CrossRef]  

15. G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive sensing laser radar for 3d imaging,” Appl. Opt. 50, 5917–5920 (2011). [CrossRef]   [PubMed]  

16. A. Colaço, A. Kirmani, G. A. Howland, J. C. Howell, and V. K. Goyal, “Compressive depth map acquisition using a single photon-counting detector: Parametric signal processing meets sparsity,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 96–102.

17. G. A. Howland, D. J. Lum, M. R. Ware, and J. C. Howell, “Photon counting compressive depth mapping,” Opt. Express 21, 23822–23837 (2013). [CrossRef]   [PubMed]  

18. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016). [CrossRef]  

19. A. Kadambi and P. T. Boufounos, “Coded aperture compressive 3-d lidar,” in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2015), pp. 1166–1170.

20. A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express 19, 21485–21507 (2011). [CrossRef]   [PubMed]  

21. M. H. Conde, “Compressive sensing for the photonic mixer device,” in Compressive Sensing for the Photonic Mixer Device, (Springer, 2017), pp. 207–352. [CrossRef]  

22. S. Antholzer, C. Wolf, M. Sandbichler, M. Dielacher, and M. Haltmeier, “A framework for compressive time-of-flight 3d sensing,” https://arxiv.org/abs/1710.10444.

23. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101, 141123 (2012). [CrossRef]  

24. W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Three-dimensional single-pixel compressive reflectivity imaging based on complementary modulation,” Appl. Opt. 54, 363–367 (2015). [CrossRef]  

25. R. Horaud, M. Hansard, G. Evangelidis, and C. Ménier, “An overview of depth cameras and range scanners based on time-of-flight technologies,” Mach. Vis. Appl. 27, 1005–1020 (2016). [CrossRef]  

26. F. Remondino and D. Stoppa, TOF Range-Imaging Cameras, Vol. 68121 (Springer, 2013). [CrossRef]  

27. K. Lim, P. Treitz, M. Wulder, B. St-Onge, and M. Flood, “Lidar remote sensing of forest structure,” Prog. Phys. Geogr. 27, 88–106 (2003). [CrossRef]  

28. C. Mallet and F. Bretar, “Full-waveform topographic lidar: State-of-the-art,” ISPRS J. Photogramm. Remote. Sens. 64, 1–16 (2009). [CrossRef]  

29. S. Burri, Y. Maruyama, X. Michalet, F. Regazzoni, C. Bruschini, and E. Charbon, “Architecture and applications of a high resolution gated spad image sensor,” Opt. Express 22, 17573–17589 (2014). [CrossRef]   [PubMed]  

30. N. Takeuchi, H. Baba, K. Sakurai, and T. Ueno, “Diode-laser random-modulation cw lidar,” Appl. Opt. 25, 63–67 (1986). [CrossRef]   [PubMed]  

31. B. L. Stann, W. C. Ruff, and Z. G. Sztankay, “Intensity-modulated diode laser radar using frequency-modulation/continuous-wave ranging techniques,” Opt. Eng. 35, 3270–3279 (1996). [CrossRef]  

32. O. Batet, F. Dios, A. Comeron, and R. Agishev, “Intensity-modulated linear-frequency-modulated continuous-wave lidar for distributed media: fundamentals of technique,” Appl. Opt. 49, 3369–3379 (2010). [CrossRef]   [PubMed]  

33. W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun 371, 105–111 (2016). [CrossRef]  

34. F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016). [CrossRef]   [PubMed]  

35. V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa, “Compressive sensing for background subtraction,” in European Conference on Computer Vision (Springer, 2008), pp. 155–168.

36. W.-K. Yu, X.-F. Liu, X.-R. Yao, C. Wang, Y. Zhai, and G.-J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4, 05834 (2014).

37. T. Gerrits, D. Lum, J. Howell, V. Verma, R. Mirin, and S. W. Nam, “A short-wave infrared single photon camera,” in Imaging and Applied Optics 2017, of 2017 Technical Digest Series (Optical Society of America, 2017), paper CTu4B.5.

38. N. Satyan, A. Vasilyev, G. Rakuljic, V. Leyva, and A. Yariv, “Precise control of broadband frequency chirps using optoelectronic feedback,” Opt. Express 17, 15991–15999 (2009). [CrossRef]   [PubMed]  

39. M. Nazarathy, W. V. Sorin, D. M. Baney, and S. A. Newton, “Spectral analysis of optical mixing measurements,” J. Light. Technol. 7, 1083–1096 (1989). [CrossRef]  

40. M. Hashemi and S. Beheshti, “Adaptive noise variance estimation in bayesshrink,” IEEE Signal Process. Lett. 17, 12–15 (2010). [CrossRef]  

41. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (MIT, 1964).

42. W. Yin, S. Morgan, J. Yang, and Y. Zhang, “Practical compressive sensing with toeplitz and circulant matrices,” Proc. SPIE 7744, 77440K (2010). [CrossRef]  

43. R. Fletcher, Practical Methods of Optimization (John Wiley & Sons, 2013).

44. R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, “A limited memory algorithm for bound constrained optimization,” SIAM J. on Sci. Comput. 16, 1190–1208 (1995). [CrossRef]  

45. M. L. Malloy and R. D. Nowak, “Near-optimal adaptive compressed sensing,” IEEE Trans. Inf. Theory 60, 4001–4012 (2014). [CrossRef]  

References

  • View by:
  • |
  • |
  • |

  1. G. S. Cheok, M. Juberts, and M. Franaszek, “3D Imaging Systems for Manufacturing, Construction, and Mobility (NIST TN 1682),” Tech. Note (NIST TN)-1682 (2010).
  2. J. Smisek, M. Jancosek, and T. Pajdla, “3D with Kinect,” in Consumer Depth Cameras for Computer Vision (Springer, 2013), pp. 3–25.
  3. K. Khoshelham and S. O. Elberink, “Accuracy and resolution of Kinect depth data for indoor mapping applications,” Sensors 12, 1437–1454 (2012).
  4. J. Geng, “Structured-light 3D surface imaging: a tutorial,” Adv. Opt. Photonics 3, 128–160 (2011).
  5. B. Behroozpour, P. A. Sandborn, N. Quack, T. J. Seok, Y. Matsui, M. C. Wu, and B. E. Boser, “Electronic-photonic integrated circuit for 3D microimaging,” IEEE J. Solid-State Circuits 52, 161–172 (2017).
  6. A. A. Frank and M. Nakamura, “Laser radar for a vehicle lateral guidance system,” US Patent 5,202,742 (1993).
  7. F. Amzajerdian, D. Pierrottet, L. Petway, G. Hines, and V. Roback, “Lidar systems for precision navigation and safe landing on planetary bodies,” in International Symposium on Photoelectronic Detection and Imaging 2011 (International Society for Optics and Photonics, 2011), p. 819202.
  8. R. Stettner, “Compact 3D flash lidar video cameras and applications,” in Laser Radar Technology and Applications XV, Vol. 7684 (International Society for Optics and Photonics, 2010).
  9. A. E. Johnson and J. F. Montgomery, “Overview of terrain relative navigation approaches for precise lunar landing,” in IEEE Aerospace Conference (2008), pp. 1–10.
  10. F. Remondino, L. Barazzetti, F. Nex, M. Scaioni, and D. Sarazzi, “UAV photogrammetry for mapping and 3D modeling–current status and future perspectives,” Int. Arch. Photogramm. Remote. Sens. Spatial Inf. Sci. 38, C22 (2011).
  11. M.-C. Amann, T. Bosch, M. Lescure, R. Myllyla, and M. Rioux, “Laser ranging: a critical review of usual techniques for distance measurement,” Opt. Eng. 40, 10–19 (2001).
  12. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).
  13. M. F. Duarte, M. A. Davenport, D. Takbar, J. N. Laska, T. Sun, K. F. Kelly, and R. G. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008).
  14. Y. C. Eldar and G. Kutyniok, Compressed Sensing: Theory and Applications (Cambridge University, 2012).
  15. G. A. Howland, P. B. Dixon, and J. C. Howell, “Photon-counting compressive sensing laser radar for 3D imaging,” Appl. Opt. 50, 5917–5920 (2011).
  16. A. Colaço, A. Kirmani, G. A. Howland, J. C. Howell, and V. K. Goyal, “Compressive depth map acquisition using a single photon-counting detector: parametric signal processing meets sparsity,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2012), pp. 96–102.
  17. G. A. Howland, D. J. Lum, M. R. Ware, and J. C. Howell, “Photon counting compressive depth mapping,” Opt. Express 21, 23822–23837 (2013).
  18. M.-J. Sun, M. P. Edgar, G. M. Gibson, B. Sun, N. Radwell, R. Lamb, and M. J. Padgett, “Single-pixel three-dimensional imaging with time-based depth resolution,” Nat. Commun. 7, 12010 (2016).
  19. A. Kadambi and P. T. Boufounos, “Coded aperture compressive 3-D lidar,” in IEEE International Conference on Acoustics, Speech and Signal Processing (IEEE, 2015), pp. 1166–1170.
  20. A. Kirmani, A. Colaço, F. N. C. Wong, and V. K. Goyal, “Exploiting sparsity in time-of-flight range acquisition using a single time-resolved sensor,” Opt. Express 19, 21485–21507 (2011).
  21. M. H. Conde, “Compressive sensing for the photonic mixer device,” in Compressive Sensing for the Photonic Mixer Device (Springer, 2017), pp. 207–352.
  22. S. Antholzer, C. Wolf, M. Sandbichler, M. Dielacher, and M. Haltmeier, “A framework for compressive time-of-flight 3D sensing,” https://arxiv.org/abs/1710.10444.
  23. C. Zhao, W. Gong, M. Chen, E. Li, H. Wang, W. Xu, and S. Han, “Ghost imaging lidar via sparsity constraints,” Appl. Phys. Lett. 101, 141123 (2012).
  24. W.-K. Yu, X.-R. Yao, X.-F. Liu, L.-Z. Li, and G.-J. Zhai, “Three-dimensional single-pixel compressive reflectivity imaging based on complementary modulation,” Appl. Opt. 54, 363–367 (2015).
  25. R. Horaud, M. Hansard, G. Evangelidis, and C. Ménier, “An overview of depth cameras and range scanners based on time-of-flight technologies,” Mach. Vis. Appl. 27, 1005–1020 (2016).
  26. F. Remondino and D. Stoppa, TOF Range-Imaging Cameras, Vol. 68121 (Springer, 2013).
  27. K. Lim, P. Treitz, M. Wulder, B. St-Onge, and M. Flood, “Lidar remote sensing of forest structure,” Prog. Phys. Geogr. 27, 88–106 (2003).
  28. C. Mallet and F. Bretar, “Full-waveform topographic lidar: state-of-the-art,” ISPRS J. Photogramm. Remote. Sens. 64, 1–16 (2009).
  29. S. Burri, Y. Maruyama, X. Michalet, F. Regazzoni, C. Bruschini, and E. Charbon, “Architecture and applications of a high resolution gated SPAD image sensor,” Opt. Express 22, 17573–17589 (2014).
  30. N. Takeuchi, H. Baba, K. Sakurai, and T. Ueno, “Diode-laser random-modulation CW lidar,” Appl. Opt. 25, 63–67 (1986).
  31. B. L. Stann, W. C. Ruff, and Z. G. Sztankay, “Intensity-modulated diode laser radar using frequency-modulation/continuous-wave ranging techniques,” Opt. Eng. 35, 3270–3279 (1996).
  32. O. Batet, F. Dios, A. Comeron, and R. Agishev, “Intensity-modulated linear-frequency-modulated continuous-wave lidar for distributed media: fundamentals of technique,” Appl. Opt. 49, 3369–3379 (2010).
  33. W.-K. Yu, X.-R. Yao, X.-F. Liu, R.-M. Lan, L.-A. Wu, G.-J. Zhai, and Q. Zhao, “Compressive microscopic imaging with “positive–negative” light modulation,” Opt. Commun. 371, 105–111 (2016).
  34. F. Soldevila, P. Clemente, E. Tajahuerce, N. Uribe-Patarroyo, P. Andrés, and J. Lancis, “Computational imaging with a balanced detector,” Sci. Rep. 6, 29181 (2016).
  35. V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa, “Compressive sensing for background subtraction,” in European Conference on Computer Vision (Springer, 2008), pp. 155–168.
  36. W.-K. Yu, X.-F. Liu, X.-R. Yao, C. Wang, Y. Zhai, and G.-J. Zhai, “Complementary compressive imaging for the telescopic system,” Sci. Rep. 4, 05834 (2014).
  37. T. Gerrits, D. Lum, J. Howell, V. Verma, R. Mirin, and S. W. Nam, “A short-wave infrared single photon camera,” in Imaging and Applied Optics 2017, 2017 Technical Digest Series (Optical Society of America, 2017), paper CTu4B.5.
  38. N. Satyan, A. Vasilyev, G. Rakuljic, V. Leyva, and A. Yariv, “Precise control of broadband frequency chirps using optoelectronic feedback,” Opt. Express 17, 15991–15999 (2009).
  39. M. Nazarathy, W. V. Sorin, D. M. Baney, and S. A. Newton, “Spectral analysis of optical mixing measurements,” J. Light. Technol. 7, 1083–1096 (1989).
  40. M. Hashemi and S. Beheshti, “Adaptive noise variance estimation in BayesShrink,” IEEE Signal Process. Lett. 17, 12–15 (2010).
  41. N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series (MIT, 1964).
  42. W. Yin, S. Morgan, J. Yang, and Y. Zhang, “Practical compressive sensing with Toeplitz and circulant matrices,” Proc. SPIE 7744, 77440K (2010).
  43. R. Fletcher, Practical Methods of Optimization (John Wiley & Sons, 2013).
  44. R. H. Byrd, P. Lu, J. Nocedal, and C. Zhu, “A limited memory algorithm for bound constrained optimization,” SIAM J. Sci. Comput. 16, 1190–1208 (1995).
  45. M. L. Malloy and R. D. Nowak, “Near-optimal adaptive compressed sensing,” IEEE Trans. Inf. Theory 60, 4001–4012 (2014).

Figures (6)

Fig. 1. (Proposed Experiment) HWP: half-wave plate, PBS: polarizing beam-splitter, DMD: digital micro-mirror device, BS: beam-splitter. A linearly chirped laser is split into two beams, designated as a local oscillator and a signal, via a HWP and a PBS. The signal illuminates a target scene, and the reflected radiation is used to image the scene onto a DMD. The DMD takes pseudo-random spatial projections, consisting of ±1 pixel values, and directs the projections to balanced-heterodyne detectors using the local oscillator.

Fig. 2. Image (a) presents a 3-dimensional scene composed of Lambertian-scattering targets. Image (b) presents the depth-map we wish to compressively recover.

Fig. 3. Image (a) presents an illumination profile with 1 W of total power. The scene is discretized to a 128 × 128 pixel resolution with a maximum power per pixel of 119 μW. When using a 2-inch collection optic and modeling each object as a Lambertian scatterer, image (b) presents a single projection taken by the DMD of the reflected radiation as seen by one detector. The maximum power per pixel is of order nanowatts.

Fig. 4. Image (a) shows the 2 MHz Lorentzian linewidth of the beat notes expected from a laser’s 1 MHz Lorentzian linewidth, as seen by an oscilloscope sampling at 33.3 MHz. Image (b) shows the noiseless positive-frequency components from a single projection from Fig. 3 when using a zero-linewidth laser. When accounting for a 2 MHz beat-note linewidth, image (c) shows the broadened frequency components. When including a peak SNR = 5 (based on the SNR of the brightest object or pixel), image (d) presents a realistic noisy signal one would measure in an experiment. Image (e) presents the result of denoising the signal in (d) with a BayesShrink denoising filter using a symlet-20 wavelet decomposition. Image (f) presents the result of deconvolving the 2 MHz Lorentzian linewidth from the denoised signal in (e) with a Wiener filter; it is the cleaned signal from the (1,0) projection. A similarly cleaned result from the (0,1) signal is then subtracted from (f) and used to form y_I and y_Iν. (A minimal sketch of this denoising and deconvolution chain follows the figure list.)

Fig. 5. When using a PSNR = 5, image (a) shows a typical total-variation-minimization reconstruction from a 25% sample-ratio intensity-measurement vector y_I. Shapes are identified, but the pixel values are incorrect. Image (b) is a binary mask generated by hard-thresholding image (a). After representing image (b) in a sparse basis, such as a Haar-wavelet decomposition, least squares can be performed on the m/3 largest signal components to construct x̂_I and x̂_Iν. After applying Eq. (11), a depth-map is presented in image (c). Images (d), (e), and (f) demonstrate the same process for a 5% sample ratio, again with PSNR = 5. Image (g) is the true depth-map, presented for easy comparison. Images (h) and (i) are smoothed depth-maps obtained by applying a 4 × 4 pixel averaging kernel to depth-maps (f) and (c), respectively.

Fig. 6. (a) Reconstruction mean-squared error: each data point corresponds to the average mean-squared error from 10 different reconstructions of different data sets for varying sample ratios (m/n) and peak-SNR cases. The standard error of each mean is given by the shaded regions. (b) Depth uncertainty to the toroid: each data point represents the average uncertainty in depth to the toroid within Fig. 2. Points were calculated by first computing the standard deviation of each pixel over 10 different reconstructions; the standard deviations associated with only the toroid pixels were then averaged. Different peak-SNR scenarios, FWHM Lorentzian linewidths for the beat-note frequency uncertainties, and sample ratios were considered and compared with the results obtained by simulating raster scans. Raster scans were not affected by linewidth uncertainty.
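As a concrete illustration of the cleaning chain described in the Fig. 4 caption, the sketch below applies BayesShrink wavelet soft-thresholding with a symlet-20 basis and then a frequency-domain Wiener deconvolution of an assumed 2 MHz Lorentzian line shape to a simulated beat-note spectrum. This is a minimal Python sketch under assumed parameters (the sample rate, record length, tone frequencies, wavelet level, and noise-to-signal constant are placeholders), not the code used in the paper.

```python
# Minimal sketch (assumed parameters, not the authors' code) of the Fig. 4
# cleaning chain: BayesShrink wavelet denoising with a symlet-20 basis,
# followed by frequency-domain Wiener deconvolution of a 2 MHz Lorentzian.
import numpy as np
import pywt


def bayes_shrink_denoise(signal, wavelet="sym20", level=4):
    """Soft-threshold each wavelet detail band with the BayesShrink rule."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise std estimated from the finest detail band (robust median rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    out = [coeffs[0]]
    for detail in coeffs[1:]:
        sigma_x = np.sqrt(max(np.var(detail) - sigma**2, 1e-12))
        out.append(pywt.threshold(detail, sigma**2 / sigma_x, mode="soft"))
    return pywt.waverec(out, wavelet)[: len(signal)]


def lorentzian_kernel(m, df, fwhm):
    """Zero-centred (circularly wrapped) Lorentzian line shape, unit area."""
    gamma = fwhm / 2.0
    k = np.arange(m)
    f = np.minimum(k, m - k) * df       # distance from bin 0, with wrap-around
    lor = (gamma / np.pi) / (f**2 + gamma**2)
    return lor / lor.sum()


def wiener_deconvolve(signal, kernel, nsr=0.1):
    """Wiener deconvolution with a constant noise-to-signal ratio nsr."""
    h = np.fft.fft(kernel, len(signal))
    s = np.fft.fft(signal, len(signal))
    g = np.conj(h) / (np.abs(h) ** 2 + nsr)
    return np.real(np.fft.ifft(g * s))


# Toy example: two beat notes sampled at 33.3 MHz, buried in noise.
fs, n = 33.3e6, 4096
t = np.arange(n) / fs
trace = np.sin(2 * np.pi * 3e6 * t) + 0.5 * np.sin(2 * np.pi * 7e6 * t)
spectrum = np.abs(np.fft.rfft(trace + 0.5 * np.random.randn(n)))

denoised = bayes_shrink_denoise(spectrum)                  # Fig. 4(e) step
kernel = lorentzian_kernel(len(denoised), fs / n, fwhm=2e6)
sharpened = wiener_deconvolve(denoised, kernel)            # Fig. 4(f) step
```

The constant noise-to-signal ratio plays the role of the Wiener filter's regularization; in practice it would be estimated from the measured peak SNR rather than fixed at an assumed value.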

Equations (14)


(1)   E(t) = A \exp\!\left(2\pi i \left[\nu_0 + \frac{\nu_f - \nu_0}{2T}\, t\right] t\right),

(2)   P_{\mathrm{Scope}}(t) = \frac{\epsilon_0 c}{2}\left[A_{\mathrm{Sig}}^2 + A_{\mathrm{LO}}^2 + 2 A_{\mathrm{LO}} A_{\mathrm{Sig}} \sin\!\left(\frac{2\pi\,\Delta\nu\,\tau}{T}\, t + \phi\right)\right],

(3)   d = \frac{\nu\, T c}{2\,\Delta\nu}.

(4)   \sum_j E_{\mathrm{Sig}}(t-\tau_j) = \sum_j A_j \exp\!\left(2\pi i \left[\nu_0 + \frac{\Delta\nu}{2T}(t-\tau_j)\right](t-\tau_j)\right),

(5)   P_{\mathrm{Scope}}(t) = \frac{\epsilon_0 c}{2}\left(\left|\frac{E_{\mathrm{LO}}(t) + i\sum_j E_{\mathrm{Sig}}(t-\tau_j)}{\sqrt{2}}\right|^2 - \left|\frac{i E_{\mathrm{LO}}(t) + \sum_j E_{\mathrm{Sig}}(t-\tau_j)}{\sqrt{2}}\right|^2\right)

(6)   \qquad\quad\; = \epsilon_0 c \sum_j A_{\mathrm{LO}} A_j \sin\!\left(\frac{2\pi\,\Delta\nu\,\tau_j}{T}\, t + \phi_j\right),

(7)   \underset{\hat{x}\,\in\,\mathbb{R}^n}{\arg\min}\; \|A\hat{x} - y\|_2^2 + \alpha\, \mathrm{TV}(\hat{x}),

(8)   P_{\mathrm{Scope}}(t) = \epsilon_0 c \sum_j A_{\mathrm{LO}}\, A\, x_{I_j} \sin\!\left(\frac{2\pi\,\Delta\nu\,\tau_j}{T}\, t + \phi_j\right).

(9)   y_I = \sum_{\nu_+ = 1}^{N} \left|\mathcal{F}\!\left[P_{\mathrm{Scope}}(t)\right]_{\nu_+}\right|

(10)  y_{I\nu} = \sum_{\nu_+ = 1}^{N} \left|\mathcal{F}\!\left[P_{\mathrm{Scope}}(t)\right]_{\nu_+}\right| \nu_+,

(11)  d = \frac{\hat{x}_{I\nu}}{\hat{x}_I}\, \frac{T c}{2\,\Delta\nu},

(12)  s_{m/3} = P\, \Psi\, \hat{M}\!\left(\underset{x_I\,\in\,\mathbb{R}^n}{\arg\min}\; \|A x_I - y_I\|_2^2 + \alpha\, \mathrm{TV}(x_I)\right)

(13)  \hat{x}_I = M\, \Psi^{-1} P^{T}\!\left(\underset{s_{m/3}}{\arg\min}\; \|A\, \Psi^{-1} P^{T} s_{m/3} - y_I\|_2^2\right)

(14)  \hat{x}_{I\nu} = M\, \Psi^{-1} P^{T}\!\left(\underset{s_{m/3}}{\arg\min}\; \|A\, \Psi^{-1} P^{T} s_{m/3} - y_{I\nu}\|_2^2\right),
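To make Eqs. (9)–(11) concrete, the following sketch (assumed chirp parameters; not the authors' code) builds an idealized single-target balanced-heterodyne trace, forms the summed positive-frequency magnitude y_I and the frequency-weighted sum y_Iν, and recovers a depth from their ratio. For a single pixel the ratio y_Iν/y_I is just the magnitude-weighted mean beat frequency, so Eq. (11) reduces to the single-target range equation.

```python
# Sketch of Eqs. (9)-(11) for a single pixel (assumed parameters throughout).
import numpy as np

c = 3.0e8            # speed of light (m/s)
T = 1.0e-3           # chirp duration (s)
delta_nu = 100.0e9   # chirp bandwidth (Hz)
fs = 33.3e6          # oscilloscope sample rate (Hz)
n = 2**15            # samples in one chirp record (roughly T long)

true_depth = 12.0                        # assumed test range (m)
tau = 2.0 * true_depth / c               # round-trip delay (s)
f_beat = delta_nu * tau / T              # beat frequency set by the chirp

t = np.arange(n) / fs
p_scope = np.sin(2 * np.pi * f_beat * t)          # idealized balanced output

# Positive-frequency magnitudes; a Hann window suppresses spectral leakage.
window = np.hanning(n)
spec = np.abs(np.fft.rfft(window * p_scope))[1:]  # drop the DC bin
freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]

y_I = spec.sum()                          # Eq. (9): total signal strength
y_Inu = (spec * freqs).sum()              # Eq. (10): frequency-weighted sum

depth = (y_Inu / y_I) * T * c / (2.0 * delta_nu)  # Eq. (11), single pixel
print(f"recovered depth: {depth:.2f} m (true value {true_depth:.2f} m)")
```

In the compressive scheme the same two scalars are recorded for each ±1 DMD pattern, and the per-pixel ratio is taken only after the images x̂_I and x̂_Iν have been reconstructed.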
