Abstract

We propose a time-of-flight measurement algorithm for depth and intensity that is robust to fog. The key idea of the algorithm is to compensate for the scattering effects of fog by using multiple time-gating and assigning one time-gated exposure for scattering property estimation. Once the property is estimated, the depth and intensity can be reconstructed from the rest of the exposures via a physics-based model. Several experiments with artificial fog show that our method can measure depth and intensity irrespective of the traits of the fog. We also confirm the effectiveness of our method in real fog through an outdoor experiment.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Time-of-flight (ToF) imaging is a growing research topic in computer vision and plays important roles in various applications, including factory automation and autonomous driving. A ToF camera captures depth and intensity images of a scene. Depth images are obtained from the round-trip time between light emission and observation of the reflected light, and intensity images are obtained from the total amount of reflected light within the whole exposure time. A ToF camera, however, cannot accurately measure depth and intensity in scattering media such as fog. Figure 1 shows measurements from a ToF camera in clear and foggy scenes. The intensity of the foggy image shown in (e) suffers from low contrast because the light from the target is attenuated by fog whereas the scattered light is strongly observed. Similarly, the depth in the foggy image shown in (f) is underestimated because the scattered light is observed much earlier than the light from the target.


Fig. 1. Difference of ToF measurement between clear (upper) and foggy (lower) scenes. From the left to right columns, (a) and (d) are RGB images from a normal camera, (b) and (e) are intensity images from a ToF camera, and (c) and (f) are depth images from a ToF camera. Both depth and intensity measurements are affected by fog.


In this paper, we develop a measurement algorithm for depth and intensity that is robust to fog using a short-pulse ToF (SP-ToF) camera. An SP-ToF camera principally requires two time-gated exposures under short-pulse modulated illumination to measure depth and intensity; however, an off-the-shelf SP-ToF camera uses more than two time-gated exposures to improve the accuracy of the measurements [1]. A key concept of the proposed method is to compensate for the scattering effects of fog by using an additional time-gated exposure to estimate the scattering property of fog. Once the scattering property is estimated, the depth and intensity can be robustly reconstructed by the remaining two exposures based on a physics-based fog model.

The advantages of the proposed method are summarized as follows:

  • The proposed method can remove the light scattering effects by assigning one additional exposure and can improve the depth and intensity measurements of the SP-ToF camera in fog.
  • The proposed method is robust to scattering media traits such as the thickness of fog, the single scattering albedo, and the phase function.
  • The proposed method is effective for real-time tasks such as autonomous driving. A demo movie in the supplemental material demonstrates real-time measurements.

The remainder of this paper is organized as follows. Section 2 reviews the related work. Section 3 models the SP-ToF measurement in fog on the basis of a physics-based fog model. Section 4 describes a measurement method that is robust to light scattering based on the measurement model in fog. Section 5 shows the results of the proposed method and its robustness to various types of foggy scenes. Finally, we conclude the paper in Section 6.

2. Related work

2.1 Multipath interference corrections in ToF measurement

The depth and intensity measured by a ToF camera are greatly distorted by interreflection and light scattering because these phenomena cause multipath interference (MPI). Many methods have been proposed to compensate for the MPI problem of continuous-wave ToF (CW-ToF) imaging [2–14] and SP-ToF imaging [15–23].

In this study, we consider volume scattering, i.e., foggy and underwater conditions, as the cause of the MPI problem. To compensate for volume scattering MPI, Mure-Dubois et al. [2] optimize an inverse filter for depth images to obtain scattering-compensated results. Although this method works in real time, the intensity is not recovered. Phasor-based methods [9,13,14] use the phasor representation to model the CW-ToF measurement in fog. These methods, however, need either many measurements or heavy computation and also do not restore the intensity. Methods based on temporal response recovery [4,7,17–20,23,24] reconstruct the target depth by extracting only the light reflected from the target out of the temporal response. Although response-based methods can remove the effects of light scattering well in terms of both depth and intensity, they require either a large number of measurements with delayed time-gated exposures or extensive hardware modifications. Our goal is to remove the effect of fog from both the depth and intensity at interactive rates using an off-the-shelf ToF camera with no hardware extension. This work is the first attempt to realize this goal.

2.2 Fog removal from images

In the computer vision area, recovering a clear view from images captured in hazy or foggy scenes has been an active research target. Recovering a clear view from a single RGB image is an ill-posed problem and difficult to solve. Earlier works introduced priors such as the dark channel prior [25], the color line prior [26], and the non-local prior [27] to solve the ill-posed problem. However, these methods often fail because hand-crafted priors or assumptions are often violated. More recently, data-driven methods using deep learning [28–33] have been proposed and have provided rapid improvements. These data-driven methods learn prior knowledge from data, and in most cases, they have outperformed earlier methods.

One of our goals is to defog the intensity image of a ToF camera. While color-based dehazing methods sometimes struggle with artificial objects that are white or gray, our method does not depend on color information, but rather it depends on temporal information. Moreover, color-based dehazing methods need to introduce some image prior or learning process to tackle an ill-posed problem; however, our method does not need such a technique because our assignment of multiple time-gates makes the problem tractable.

3. Physics-based model for SP-ToF measurement

In this section, we describe a physics-based scattering model for SP-ToF measurement in fog. Before explaining the model, we outline the basic principle of SP-ToF measurement and the problems in fog.

3.1 Principle of SP-ToF measurement

A ToF camera can measure the depth from the round trip time of light from the camera to an object. The depth of a scene $d$ is obtained as

$$d = \frac{c\tau}{2},$$
where $c$ is the speed of light and $\tau$ is the round trip time of light.

As shown in Figs. 2(a) and (b), an SP-ToF camera emits short-pulse modulated light with a duration $T$ of several tens of nanoseconds, and then the camera observes the reflected light with two time-gated exposures $Q_1$ and $Q_2$ whose gate length equals the pulse duration $T$. The observed values $Q_1$ and $Q_2$ are represented as

$$ Q_1 = \int_{0}^{T} L_{\textit{ret}}(t)dt, $$
$$ Q_2 = \int_{T}^{2T} L_{\textit{ret}}(t)dt, $$
where $t=0$ represents the start time of the light pulse and $L_{\textit {ret}}(t)$ represents the waveform of the reflected light. Based on an assumption that a scene has a single depth, i.e., the reflected light $L_{\textit {ret}}(t)$ keeps its waveform rectangular, the round trip time $\tau$ and the intensity $I$ are indirectly calculated as
$$ \tau = \frac{Q_2}{Q_1 + Q_2} T, $$
$$ I = Q_1 + Q_2. $$
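As a concrete illustration, the indirect calculation above can be sketched in a few lines; the gate values and pulse duration in the example are illustrative and not tied to any particular hardware.

```python
C = 3.0e8  # speed of light [m/s]

def sp_tof_depth_intensity(q1, q2, T):
    """Recover depth and intensity from two time-gated exposures
    Q1, Q2 of gate length T [s], assuming a single-depth scene
    (i.e., a rectangular return pulse)."""
    tau = q2 / (q1 + q2) * T  # round-trip time of the light
    depth = C * tau / 2.0     # half the round trip
    intensity = q1 + q2       # total reflected light
    return depth, intensity

# Example: the return pulse splits 3:1 between the gates, T = 30 ns,
# giving tau = 7.5 ns and hence a depth of 1.125 m.
depth, intensity = sp_tof_depth_intensity(300.0, 100.0, 30e-9)
```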


Fig. 2. Differences between the ordinary SP-ToF measurement and the proposed method. (a) In clear air, the waveform of the reflected light is rectangular. (d) In fog, however, the waveform of the reflected light is greatly distorted because the light is scattered by fog particles in front of the objects. This changes the observed value of each exposure, and the indirect calculation of depth and intensity in the ordinary SP-ToF measurement fails. The proposed method assigns an additional exposure time for the observation of scattered light and uses it to remove scattering from the measurement.


The above method using two exposures is the basic principle of SP-ToF measurement. Using additional time-gated exposures is known to be helpful for improving the abilities of SP-ToF, e.g., high depth resolution [34] or background light removal [1]. In particular, [1] modeled the background light, such as ambient light and sensor noise, which is independent of the light source, and removed it from the measurements by using an additional time-gated exposure $Q_0'$ as follows:

$$ Q_0' = \int_{{-}T}^{0} \epsilon dt = \epsilon T, $$
$$ Q_1' = \int_{0}^{T} (L_{\textit{ret}}(t) + \epsilon)dt = \int_{0}^{T} L_{\textit{ret}}(t) dt + \epsilon T, $$
$$ Q_2' = \int_{T}^{2T} (L_{\textit{ret}}(t) + \epsilon)dt = \int_{T}^{2T} L_{\textit{ret}}(t) dt + \epsilon T, $$
where $\epsilon$ represents the background light and is eliminated as
$$Q_1 = Q_1' - Q_0', $$
$$Q_2 = Q_2' - Q_0'. $$
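In code, this background elimination is a per-pixel subtraction of the pre-illumination gate from the two signal gates; the numeric values in the sketch are illustrative.

```python
def remove_background(q0p, q1p, q2p):
    """Eliminate background light following [1]: the pre-illumination
    gate Q0' observes only the background contribution epsilon*T, so
    subtracting it from Q1' and Q2' leaves the signal that depends on
    the illumination."""
    return q1p - q0p, q2p - q0p

# Example with a background level of 20 counts per gate.
q1, q2 = remove_background(20.0, 320.0, 120.0)  # q1 = 300.0, q2 = 100.0
```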

The ordinary SP-ToF method and [1], however, cannot accurately measure the depth and intensity in fog because they assume a single-depth target. In fog, the light is scattered and reflected back from various depths, as shown in Figs. 2(d) and (e), and this causes the calculation to fail because the waveform of the reflected light changes considerably. To use an SP-ToF camera in fog, a new measurement algorithm that accounts for light scattering is necessary.

3.2 Single scattering model for SP-ToF measurement

To consider a new algorithm, we model the light scattering effects on the SP-ToF measurement. Note that we do not consider the background light in the modeling. Moreover, similar to many previous methods [14,3537], we assume that single scattering is dominant in the scene. This assumption holds when the scattering media are relatively thin such as fog. Additionally, the density of fog is assumed to be homogeneous along its depth, and the light source is approximately infinitesimal and located at the optical center of the camera.

As shown in Fig. 3, the reflected light $L_{\textit {ret}}(t)$ is the convolution of illumination light $L_{\textit {emit}}(t)$ and the temporal response $i(t)$ of the scene as

$$L_{\textit{ret}}(t) = L_\textit{emit}(t) \ast i(t),$$
where $\ast$ is the convolution operator. The SP-ToF camera emits a short-pulsed light $L_{\textit {emit}}(t)$ to a scene as
$$L_{\textit{emit}}(t) = \begin{cases} I_0 & \quad 0 \leq t \leq T, \\ 0 & otherwise, \end{cases}$$
where $I_0$ represents the illumination intensity. The temporal response $i(t)$ is the temporal profile of the reflected light of the impulse illumination, i.e., the intensity from various depths. The temporal response in fog can be modeled based on a single scattering model [35]. In this model, the temporal response $i(t)$ is composed of a direct component $i_r(t)$ and a scattering component $i_s(t)$ as
$$i(t) = i_r(t) + i_s(t).$$
The direct component $i_r(t)$ is the light returned from the target object, including the attenuation effect of fog. The scattering component $i_s(t)$ is the returned light from fog particles before reaching the object, which should be observed earlier than the direct component. Following [35], these components are expressed as
$$ i_r(d, r, \sigma_t, t) = \frac{1}{d^{2}} r e^{{-}2\sigma_t d} \delta \left(t-\frac{2d}{c}\right) \mathrm{d}t, $$
$$ i_s(\sigma_t, t) = \begin{cases} \frac{1}{z^{2}}\omega \sigma_t p(g, \theta) e^{{-}2\sigma_t z} \mathrm{d}z & 0 < t \leq 2d/c, \\ 0 & otherwise, \end{cases} $$
where $\delta$ is Dirac’s delta function. $z (= ct/2)$ is the depth where light is scattered, and $\mathrm {d}z = c/2 \mathrm {d}t$. $d$ and $r$ represent the depth and reflectance of the objects, respectively. $\sigma _t$ is the extinction coefficient, which is the sum of the scattering coefficient $\sigma _s$ and the absorption coefficient $\sigma _a$, that is, $\sigma _t = \sigma _s + \sigma _a$. $\omega$ is called the single scattering albedo ($\omega = \sigma _s / \sigma _t$). $p$ is the phase function, which expresses the distribution with respect to the scattering angle $\theta$. Note that we use the Henyey-Greenstein (HG) phase function $p(g, \theta )$ [38]:
$$p(g, \theta) = \frac{1}{4\pi} \frac{1-g^{2}}{(1+g^{2}-2g\cos\theta)^{\frac{3}{2}}},$$
where $g \in [-1, 1]$ is the asymmetry parameter, equal to the average of $\cos\theta$, which represents the anisotropy of the scattering. We assume that the single scattering albedo $\omega$ and the asymmetry parameter $g$ of fog are constant and known as $\omega =0.98$ [36] and $g=0.90$ [39], respectively. Moreover, we use the scattering angle $\theta =\pi$ according to the assumption on the light source. These assumptions mean that the extinction coefficient $\sigma _t$ alone dominates the scattering property of fog.
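Under these assumptions, the temporal response can be evaluated numerically. The following is a minimal sketch using the constants above ($\omega=0.98$, $g=0.90$, $\theta=\pi$) and an illustrative per-bin discretization; the function and variable names are ours, not the paper's.

```python
import math

C = 3.0e8  # speed of light [m/s]
OMEGA, G, THETA = 0.98, 0.90, math.pi  # albedo, HG parameter, backscatter angle

def hg_phase(g, theta):
    """Henyey-Greenstein phase function p(g, theta)."""
    return (1.0 - g * g) / (
        4.0 * math.pi * (1.0 + g * g - 2.0 * g * math.cos(theta)) ** 1.5)

def temporal_response(d, r, sigma_t, t, dt):
    """Intensity of the impulse response i(t) integrated over the time
    bin [t, t+dt], for object depth d, reflectance r, and extinction
    coefficient sigma_t."""
    resp = 0.0
    z = C * t / 2.0                      # depth where light is scattered
    if 0.0 < z <= d:                     # scattered component i_s
        resp += (OMEGA * sigma_t * hg_phase(G, THETA)
                 * math.exp(-2.0 * sigma_t * z) / z ** 2 * (C * dt / 2.0))
    if t <= 2.0 * d / C < t + dt:        # direct component i_r (delta at 2d/c)
        resp += r * math.exp(-2.0 * sigma_t * d) / d ** 2
    return resp
```

In clear air ($\sigma_t = 0$) the scattered term vanishes and only the bin containing $t = 2d/c$ is nonzero, which reproduces the rectangular return of Fig. 3(i).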


Fig. 3. Modeled temporal response and reflection in (i) clear air and (ii) fog. The temporal response represents the intensity from various depths. (i) In clear air, the temporal response is composed of only direct light from the target object. (ii) In fog, the temporal response is composed of direct light with attenuation by fog and scattered light from fog.


Examples of the modeled temporal response and reflected light in clear air and fog are illustrated in Figs. 3(b) and (c). In clear air (i), the temporal response (i-b) is composed of only direct light from a target object because the extinction coefficient $\sigma _t$ is zero. In this case, the reflected light (i-c) keeps its waveform rectangular. In fog (ii), however, the temporal response (ii-b) is composed of both direct light with attenuation by fog and scattered light from fog, and the waveform of the reflected light (ii-c) is distorted. Note that before the direct light arrives, the reflected light is composed only of the scattered light, as shown in Fig. 3(ii-c).

4. Fog removal using an additional time-gated exposure

As explained in the previous section, the existing SP-ToF measurement is not designed for use in fog. In this section, we describe a new measurement method of the depth and intensity using an additional time-gated exposure for removing the effect of light scattering. Note that we do not consider the illumination-independent background light in this study because it can be easily eliminated by adding one more exposure or measurement following [1].

4.1 Timing of illumination and exposure

As described in Section 3.2, only the scattered light is observed before the target light reaches the camera for the first time. To remove the effect of light scattering, our key concept is to assign an additional time-gate to expose a short time period $\Delta t$ immediately after light illumination to observe only the scattered light and estimate the scattering property of fog.

For this purpose, we assume the object depth $d > c\Delta t/2$, i.e., the light returned from the target object is not observed for $\Delta t$ after the start of the illumination. Then, as shown in Figs. 2(c) and (f), we set the integration intervals of the three time-gated exposures $\tilde{Q}_0, \tilde{Q}_1$, and $\tilde{Q}_2$ to $[0,\Delta t]$, $[\Delta t, T+\Delta t / 2]$, and $[T+\Delta t / 2, 2T]$, respectively. Recall that the illumination turns on at $t=0$ and the duration of the illumination is $T$. In this exposure setting, because the additional exposure $\tilde{Q}_0$ observes only the scattered light, the modeled functions of the time-gated exposures $\tilde{Q}_0^{\textit {model}}, \tilde{Q}_1^{\textit {model}}$, and $\tilde{Q}_2^{\textit {model}}$ are described as follows:

$$\begin{aligned} \tilde{Q}_0^{\textit{model}}(\sigma_t) &= \int_{0}^{\Delta t} L_{\textit{emit}}(t) \ast i(d, r, \sigma_t, t) dt\\ &= \int_{0}^{\Delta t} L_{\textit{emit}}(t) \ast i_s(\sigma_t, t) dt,\end{aligned}$$
$$ \tilde{Q}_1^{\textit{model}}(d, r, \sigma_t) = \int_{\Delta t}^{T+\Delta t / 2} L_{\textit{emit}}(t) \ast i(d, r, \sigma_t, t) dt, $$
$$ \tilde{Q}_2^{\textit{model}}(d, r, \sigma_t) = \int_{T+\Delta t / 2}^{2T} L_{\textit{emit}}(t) \ast i(d, r, \sigma_t, t) dt. $$
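The gated exposures can be computed numerically by discretizing the convolution of the rectangular pulse with the single-scattering response. The sketch below uses an illustrative grid resolution and normalization, not the paper's calibration; the model constants match the assumptions of Section 3.2.

```python
import math

C = 3.0e8
OMEGA, G = 0.98, 0.90
P_BACK = (1 - G * G) / (4 * math.pi * (1 + G * G + 2 * G) ** 1.5)  # p(g, pi)

def response_bin(d, r, sigma_t, t, step):
    """Single-scattering impulse response integrated over [t, t+step]."""
    val = 0.0
    z = C * t / 2.0                        # scattering depth for this bin
    if 0.0 < z <= d:                       # scattered component i_s
        val += (OMEGA * sigma_t * P_BACK
                * math.exp(-2 * sigma_t * z) / z ** 2 * (C * step / 2))
    if t <= 2 * d / C < t + step:          # direct component i_r (Dirac delta)
        val += r * math.exp(-2 * sigma_t * d) / d ** 2
    return val

def modeled_exposures(d, r, sigma_t, T, dt0, i0=1.0, n=600):
    """Return (Q0, Q1, Q2) for gates [0, dt0], [dt0, T+dt0/2], [T+dt0/2, 2T]."""
    step = 2 * T / n
    resp = [response_bin(d, r, sigma_t, k * step, step) for k in range(n)]
    w = int(T / step)                      # pulse width in bins
    # L_ret(t) = I0 * integral of i over the sliding window [t-T, t]
    l_ret = [i0 * sum(resp[max(0, k - w):k + 1]) for k in range(n)]
    gates = [(0.0, dt0), (dt0, T + dt0 / 2), (T + dt0 / 2, 2 * T)]
    return tuple(step * sum(l_ret[k] for k in range(n) if lo <= k * step < hi)
                 for lo, hi in gates)
```

As expected from the model, $\tilde{Q}_0$ vanishes in clear air (no light precedes the direct return) and grows with the extinction coefficient in fog.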

4.2 Optimization for parameter estimation

As described in Section 4.1, the observation value of each exposure can be modeled by three parameters: scene depth $d$, scene reflectance $r$, and extinction coefficient $\sigma _t$. Now, we describe the estimation method of the parameters and the calculation method of intensity from the observed values $(\tilde{Q}_0^{\textit obs}, \tilde{Q}_1^{\textit obs}, \tilde{Q}_2^{\textit obs})$. Note that our purpose is to estimate depth and intensity; however, we also estimate the reflectance and extinction coefficient as byproducts. An algorithm flowchart of the proposed method is illustrated in Fig. 4.


Fig. 4. Algorithm flowchart of the proposed method. The input images are the observations of all time-gated exposures: $\tilde{Q}_0$, $\tilde{Q}_1$, and $\tilde{Q}_2$. The output images are the defogged depth and intensity images and, as byproducts, the estimated reflectance and extinction coefficient images.


4.2.1 Estimation of extinction coefficient

First, $\tilde{Q}_0^{obs}$ observes only the scattered light and is therefore independent of the object parameters $d$ and $r$. We can thus estimate the scattering parameter $\sigma _t$ by fitting the function $\tilde{Q}_0^{\textit {model}}(\sigma _t)$ to $\tilde{Q}_0^{obs}$ in the least squares manner as follows:

$$\hat{\sigma_t} = \mathop{\textrm{argmin}}\limits_{\sigma_t} \left| \tilde{Q}_0^{obs} - \tilde{Q}_0^{\textit{model}}(\sigma_t) \right|^{2}.$$
The estimate $\hat {\sigma _t}$ is easily obtained because the function $\tilde{Q}_0^{\textit {model}}(\sigma _t)$ increases monotonically with respect to $\sigma _t$.
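Because of this monotonicity, the one-dimensional least-squares fit to a single scalar observation reduces to a root search. The bisection sketch below uses a simple monotone stand-in for $\tilde{Q}_0^{\textit{model}}$; the exponential form is illustrative, not Eq. (17).

```python
import math

def estimate_sigma_t(q0_obs, q0_model, lo=0.0, hi=1.0, iters=60):
    """Fit a monotonically increasing model function to a scalar
    observation by bisection; this minimizes |q0_obs - q0_model(s)|^2
    over s in [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if q0_model(mid) < q0_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative monotone stand-in for Q0_model(sigma_t).
toy_q0 = lambda s: 100.0 * (1.0 - math.exp(-5.0 * s))
sigma_hat = estimate_sigma_t(toy_q0(0.2), toy_q0)  # recovers ~0.2
```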

4.2.2 Estimation of depth and reflectance

Next, we estimate the depth $d$ and reflectance $r$ from $\tilde{Q}_1^{obs}$ and $\tilde{Q}_2^{obs}$ using $\hat {\sigma _t}$ by fitting the function $\tilde{Q}_1^{\textit {model}}(d, r, \hat {\sigma _t})$ and $\tilde{Q}_2^{\textit {model}}(d, r, \hat {\sigma _t})$ to them in the least squares manner as follows:

$$\begin{aligned} (\hat{d}, \hat{r}) = \mathop{\textrm{argmin}}\limits_{d, r}& \left| \tilde{Q}_1^{obs} - \tilde{Q}_1^{\textit{model}}(d, r, \hat{\sigma_t}) \right|^{2}\\ & + \left| \tilde{Q}_2^{obs} - \tilde{Q}_2^{\textit{model}}(d, r, \hat{\sigma_t}) \right|^{2}. \end{aligned}$$

4.2.3 Calculation of intensity

Finally, we calculate the intensity $I$ from the estimated depth $\hat {d}$ and reflectance $\hat {r}$. Because the extinction coefficient $\sigma _t$ is zero in clear scenes and $\tilde{Q}_0^{\textit {model}}(0)=0$, the defogged intensity is calculated as

$$\begin{aligned} \hat{I} &= \tilde{Q}_0^{\textit{model}}(0) + \tilde{Q}_1^{\textit{model}}(\hat{d}, \hat{r}, 0) + \tilde{Q}_2^{\textit{model}}(\hat{d}, \hat{r}, 0)\\ &= \tilde{Q}_1^{\textit{model}}(\hat{d}, \hat{r}, 0) + \tilde{Q}_2^{\textit{model}}(\hat{d}, \hat{r}, 0). \end{aligned}$$
Note that the reflectance $r$ is different from the intensity $I$ in terms of the dependency on the light sources.

4.3 Lookup table

We use a forward lookup table to solve the optimization problems in Eqs. (20) and (21). The table is built by evaluating Eqs. (17), (18), and (19) for all combinations of the depth $d$, reflectance $r$, and extinction coefficient $\sigma _t$ listed in Table 1.

In the optimization steps, given a query $(\tilde{Q}_0^{\textit obs}, \tilde{Q}_1^{\textit obs}, \tilde{Q}_2^{\textit obs})$, the best combination $(\hat {d}, \hat {r}, \hat {\sigma _t})$ is searched in the table via grid search so that the modeled observation is the most similar to the query according to Eqs. (20) and (21). After the optimization steps, the proposed method calculates the intensity $\hat {I}$ according to Eq. (22).
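The forward-table construction and grid search can be sketched as follows. The toy model and grid ranges are illustrative stand-ins for Eqs. (17)–(19) and Table 1; only the overall structure (precompute all combinations, then nearest-neighbor search in observation space) reflects the method.

```python
import itertools
import math

def toy_model(d, r, sigma_t):
    """Illustrative stand-in for the modeled (Q0, Q1, Q2) of one triple."""
    att = r * math.exp(-2.0 * sigma_t * d) / d ** 2   # attenuated direct light
    frac = min(d / 10.0, 1.0)                         # split between gates 1 and 2
    q0 = 50.0 * (1.0 - math.exp(-5.0 * sigma_t))      # scattering-only gate
    return (q0, att * (1.0 - frac) + 0.2 * q0, att * frac + 0.1 * q0)

D_GRID = [1.0 + 0.25 * k for k in range(37)]          # depth 1.0 .. 10.0 m
R_GRID = [0.1 * k for k in range(1, 11)]              # reflectance 0.1 .. 1.0
S_GRID = [0.02 * k for k in range(26)]                # sigma_t 0.0 .. 0.5

TABLE = {(d, r, s): toy_model(d, r, s)
         for d, r, s in itertools.product(D_GRID, R_GRID, S_GRID)}

def lookup(q_obs):
    """Grid search: return the parameter triple whose modeled observation
    is closest to the query in the least-squares sense."""
    return min(TABLE, key=lambda p: sum((m - o) ** 2
                                        for m, o in zip(TABLE[p], q_obs)))
```

A query generated by the model recovers its own parameters exactly (zero residual); real observations fall between grid points and return the nearest table entry.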


Table 1. The lookup table used for optimization.

5. Experiments

To evaluate the effectiveness of the proposed method, we conducted two experiments: one in a controlled environment and one in an outdoor environment. In both experiments, we compared the proposed method with methods based on Han et al. [1] as baselines of SP-ToF measurement with multiple time-gated exposures. We also demonstrate a real-time implementation in a supplementary video (see Visualization 1).

5.1 Experimental settings of the ToF camera

In both experiments, we used an off-the-shelf SP-ToF camera (Brookman Technology BEC80T). This ToF camera has three time-gated exposures and implements the algorithm of [1] by default. Its timing parameters are controllable in 2.65 ns steps; hence, our method is also executable on this camera with no hardware extension.

In the settings of both [1] and our method, we used the same light source intensity $I_0=480$ and illumination duration $T=29.15\,\textrm{ns}$ to evaluate the methods under the same illumination conditions. In our setting, we set the short exposure period to $\Delta t={5.3}\,\textrm{ns}$. Note that the background light is eliminated from the measurement of our method using two frames captured with and without light emission, subtracting the latter from the former based on [1].

Note that a direct comparison would favor the proposed method over [1] because our method does not directly use the short exposure, which observes the relatively strong scattered light, for the depth and intensity measurements. For a fair comparison, we therefore also implemented a method based on [1] (called [1]+) that does not observe the light from the first $\Delta t$ seconds, by using $\tilde{Q}_1^{\textit obs}$ and $\tilde{Q}_2^{\textit obs}$ of our setting.

5.2 Experiments in a controlled environment

To evaluate the effectiveness of the proposed method, we conducted an experiment on a scene in a fog chamber, as shown in Fig. 5(a). Inside the chamber, we set up a road-like scene, where a paper-crafted white car, a stop sign, and a building are placed, as shown in Fig. 5(b). In the fog chamber, we generate a fog-like medium with a fog generator, whose density is controllable. Moreover, the density of the fog is numerically evaluated as the visibility, defined as the distance at which an object can be clearly discerned. We used a visibility meter (VAISALA PWD 10), an instrument used for the meteorological observation of fog.

In Fig. 6, we show the results of the depth and intensity measurements by [1], [1]+, and our method in different densities of fog. The visibilities of the scenes are ${1141}\,\textrm{m},\,{40}\,\textrm{m},\,{15}\,\textrm{m}$, and ${10}\,\textrm{m}$ from left to right, as indicated below the bottom row. Note that the international definition of fog is a visibility of less than 1000 m. Moreover, we also show the absolute depth error maps to highlight the difference between the measured depths and the ground truth. Note that both the depth and intensity measurements of [1] in clear air can be regarded as the ground truth.


Fig. 5. Experimental environment. (a) shows the appearance of the fog chamber, where we conduct several experiments with a fog generator. (b) shows the scene setting of the experiments. The road-like scene is in front of the ToF camera (BEC80T).



Fig. 6. Measurement comparisons among [1], [1]+, and our method under different thicknesses of fog: clear air, thin fog, medium fog, and thick fog, from left to right. From the top to bottom rows: (a) target scenes, the intensity of (b) [1], (c) [1]+, (d) our method, the depth of (e) [1], (f) [1]+, (g) our method, and the absolute depth error of (h) [1], (i) [1]+, (j) our method, respectively. Note that both the depth and intensity measurements of [1] in clear air can be regarded as the ground truth. The visibilities of the scenes are ${1141}\,\textrm{m},\,{40}\,\textrm{m},\,{15}\,\textrm{m}$, and ${10}\,\textrm{m}$ from left to right, as indicated below the bottom row.


Depth. First, we can confirm that our method measured the depth accurately in clear air without changing the algorithm, as shown in the first column of Figs. 6(e), (g), (h), and (j). This is a notable result because most of the related defogging methods [13,14,24] do not consider measurements in clear air.

In addition, our method could measure the depth well even in thick fog, whereas the depth estimated by [1] was greatly distorted in foggy scenes even when the fog was thin. As shown in Figs. 6(g) and (j), our method was less affected by the object reflectance across the whole image, whereas [1] was affected.

Moreover, we show the regional averages of both the depth and absolute errors of the objects, namely, the car, the sign, and the building, in Table 2. Note that the underlined values are the ground truth depths of the objects. [1] miscalculated the depth values of all objects in fog, and the depth errors grow as the fog thickens; in particular, the depth error of the building reaches 1.15 m in thick fog. In contrast, our method has a maximum error of 0.14 m; hence, it can estimate the depth of objects much more accurately than [1]. [1]+ also improved the accuracy in fog, but much less than our method.


Table 2. Numerical evaluation of the mean depths of the three target objects. The underlined depth values of [1] of a clear air scene are regarded as the ground truth. Each cell shows the mean depth (m) / mean absolute error (m).

Intensity. As shown in Figs. 6(b–d), [1] and [1]+ measured low-contrast, unclear intensity in fog regardless of whether they observe the strong scattered light close to the camera. In contrast, our method can reconstruct the target light and remove the scattered light even if the fog is thick. We also show quantitative results for the defogged intensity images using the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) in Table 3. In both indices, our method is superior to [1] and [1]+ in fog.


Table 3. Quantitative results of intensity using PSNR (dB) and SSIM. Each value is compared with the value of the intensity of [1] in clear air because it is regarded as the ground truth. Higher values represent better results.

Sensitivity to the other scattering parameters. We also show the robustness of the proposed method to foggy conditions. Our method assumes that the HG phase function parameter $g$ and the single scattering albedo $\omega$ are constant. Recall that the parameter $g$ is approximately 0.9 in fog [39], and the single scattering albedo $\omega$ of a white medium such as fog is near 1.0 [36]. However, these values vary with the medium condition [36,39]. To investigate the sensitivity to these hyperparameters, we evaluated the error rates of the estimates, denoted $\hat {d}(g,\omega )$ and $\hat {I}(g,\omega )$, obtained by changing either $g \in [0.85, 0.95]$ or $\omega \in [0.8, 1.0]$. The error rates are calculated by $\frac {|\hat {d}(g,\omega )-\hat {d}|}{\hat {d}}$ and $\frac {|\hat {I}(g,\omega )-\hat {I}|}{\hat {I}}$, where $\hat {d}$ and $\hat {I}$ are the estimates with $g = 0.90$ and $\omega = 0.98$.

Figures 7 and 8 show the sensitivity of the measurements to $g$ and $\omega$, respectively. The error rates of the depth are lower than $0.5 \%$ for both parameters. For the intensity, the error rates are lower than $4.0 \%$ and $1.0\%$ for $g$ and $\omega$, respectively. Since the sensitivity to $g$ and $\omega$ is quite limited, this confirms our assumption that $g$ and $\omega$ can be treated as constants under various foggy conditions.


Fig. 7. Evaluation of the sensitivity of HG phase function parameter $g$ to the estimates of the (a) depth and (b) intensity. Each plot represents the average error rate $[\%]$ of the depth and intensity compared with the values of $g = 0.90$.



Fig. 8. Evaluation of the sensitivity of the single scattering albedo $\omega$ to the estimates of the (a) depth and (b) intensity. Each plot represents the average error rate $[\%]$ of the depth and intensity using the value of $\omega = 0.98$.


5.3 Experiments in an outdoor scene

To confirm that the proposed method is also effective in real fog, we conducted an experiment in an outdoor scene. We captured a gray car on a riverbank, as shown in Fig. 9. We measured the depth and intensity of the car in both clear air and fog at the same position. In this experiment, we regard the measurement of [1] in clear air as the ground truth. We show the results of the depth and intensity measured by [1] and our method in Fig. 10. Moreover, we show the absolute error of the depth in Fig. 11.


Fig. 9. Outdoor scene under (a) clear air and (b) fog.



Fig. 10. Measurements comparison among [1], [1]+, and our method under clear air and a real foggy scene. From the left to right columns: the measurements of (a, e) [1] in the clear scene, which can be regarded as the ground truth; the measurements in the foggy scene obtained by (b, f) [1], (c, g) [1]+, and (d, h) our method.



Fig. 11. Absolute depth error maps of (a) [1], (b) [1]+, and (c) our method. The errors of [1] and [1]+ are large where the reflectance is low; however, the error of our method is low regardless of the reflectance.


As shown in Figs. 10 and 11, the depths measured by [1] and [1]+ are strongly distorted according to the reflectance of the target objects, even though the actual depths are almost the same. Specifically, the depth error is small on the white license plate, whereas the depth error on the gray car body is large. In contrast, our method could estimate low-error depth in both the low-reflectance and high-reflectance regions. Unlike the depth measurements, the intensities of [1] and [1]+ are only slightly distorted in this scene; nevertheless, we can confirm that our method works under real fog.

5.4 Real-time implementation

Visualization 1 demonstrates a real-time application of the proposed method, which measures the depth and intensity in fog. For a real-time implementation, we need a fast solver for the optimization problems in Eqs. (20) and (21). Section 4.3 describes how to solve the optimization with a forward lookup table; however, this is time-consuming because the table must be searched for every query. For this reason, we construct a reverse lookup table, i.e., we calculate in advance the relationship between all possible observation values of the ToF camera ($\tilde{Q}_0$, $\tilde{Q}_1$, $\tilde{Q}_2$) and the outputs of the proposed method ($\hat {d}$, $\hat {r}$, $\hat {\sigma _t}$, $\hat {I}$).

The major concern is the time-memory trade-off. The forward-table method of Section 4.3 is time-consuming but needs little memory, whereas storing all the combinations in a reverse table is fast but needs considerable memory. To balance the two, we coarsely quantize the inputs $\tilde{Q}_1$ and $\tilde{Q}_2$ by dropping their 5 least significant bits (LSBs). Note that the input $\tilde{Q}_0$ is not coarsely quantized because $\tilde{Q}_0$ is important for estimating the property of the fog. This quantization sacrifices some estimation accuracy for a real-time application; however, Visualization 1 shows that the degradation is qualitatively small. In the application, the memory occupancy of the reverse lookup table is ${1.04}\,\textrm{GB}$, and the entire system (i.e., from capture to output) runs at about ${6}\,\textrm{fps}$ on an Intel Core i5-6200U CPU @ ${2.30}\,\textrm{GHz}$ with ${8}\,\textrm{GB}$ of RAM.
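The reverse-table construction with this coarse quantization can be sketched as follows. The 5-LSB drop follows the text, while the toy forward table, the key scheme, and the first-entry-wins collision handling are our illustrative choices.

```python
Q_BITS = 5  # low bits dropped from Q1 and Q2; Q0 kept at full resolution

def quantize(q):
    """Coarse quantization: zero out the 5 least significant bits."""
    return int(q) >> Q_BITS << Q_BITS

def build_reverse_table(forward_table):
    """Precompute quantized (Q0, Q1, Q2) -> parameters, so each runtime
    query becomes a single dictionary access instead of a grid search."""
    reverse = {}
    for params, (q0, q1, q2) in forward_table.items():
        key = (int(q0), quantize(q1), quantize(q2))
        reverse.setdefault(key, params)  # on collision, first entry wins here
    return reverse

# Tiny illustrative forward table: (d, r, sigma_t) -> modeled exposures.
forward = {(1.0, 0.5, 0.0): (0.0, 200.0, 100.0),
           (2.0, 0.5, 0.1): (40.0, 150.0, 90.0)}
reverse = build_reverse_table(forward)
# Runtime: quantize the observation, then an O(1) dictionary access.
params = reverse.get((40, quantize(151.0), quantize(92.0)))  # (2.0, 0.5, 0.1)
```

Observations that differ only in their dropped low bits (here 151 vs. 150 and 92 vs. 90) map to the same key, which is exactly the accuracy-for-speed trade described above.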

6. Conclusion

We proposed an algorithm for measuring depth and intensity in fog using an SP-ToF camera. We assigned one exposure of the SP-ToF camera to estimating the extinction coefficient of the fog. We showed that the proposed method measures low-error depth and clear intensity under various fog conditions, including artificial and real fog. Moreover, we implemented a real-time application of the proposed method without deep-learning techniques or a heavy computing environment such as GPUs.

The proposed method opens an exposure a few nanoseconds after light emission to estimate the extinction coefficient of the fog, which means that scene objects cannot lie nearer than $c\Delta t/2$; otherwise, direct light from the objects is observed in $Q_0$ and the extinction coefficient is difficult to estimate accurately. Specifically, the minimum measurable depth of our method based on the BEC80T is approximately 0.4 m when we set $\Delta t = {2.65}\,\textrm{ns}$. This is not a serious limitation, however, because off-the-shelf ToF cameras such as the BEC80T and Microsoft Kinect generally cannot measure objects closer than approximately 0.5 m due to saturation.
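The near-range limit above follows directly from the gate delay; a minimal check of $d_{\min} = c\Delta t/2$ with the stated $\Delta t$:

```python
# Near-range limit of the first gate: objects closer than d_min = c*dt/2
# would contribute direct light to Q0 before the gate opens.
c = 299_792_458.0   # speed of light [m/s]
dt = 2.65e-9        # gate delay (Delta t) from the paper [s]

d_min = c * dt / 2
print(f"d_min = {d_min:.3f} m")  # ~0.397 m, i.e., approximately 0.4 m
```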

In addition, our method assumes that the fog density is homogeneous, strictly speaking, along the depth direction. Because the proposed method estimates the scattering property of the fog per pixel, the fog may still be inhomogeneous in the vertical and lateral directions. In actual foggy scenes, fog density tends to vary with height above the ground, so the inhomogeneity is dominated by the vertical direction; thus, the assumption does not severely limit the effectiveness of our method in such scenes.

The proposed method is expected to work for any optically thin scattering medium. Although we verified it using only fog as a scattering medium in this study, underwater descattering is another possible application.

We showed a practical solution for depth and intensity measurement in fog. We consider that this method can be practically used for autonomous driving because it works in real time and measures low-error depth and high-contrast intensity images under fog of various thicknesses and traits.

Funding

Japan Society for the Promotion of Science (JP18H03265, JP18K19822, JP19H04138); Core Research for Evolutional Science and Technology (JPMJCR1764).

Acknowledgment

The authors thank Koito Manufacturing for valuable discussions about experiments.

Disclosures

The authors declare no conflicts of interest.

References

1. S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015). [CrossRef]  

2. J. Mure-Dubois and H. Hügli, “Optimized scattering compensation for time-of-flight camera,” Proc. SPIE 6762, 67620H (2007). [CrossRef]  

3. S. Fuchs, “Multipath interference compensation in time-of-flight camera images,” in Proceedings of IEEE International Conference on Pattern Recognition, (IEEE, 2010), pp. 3583–3586.

4. A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013). [CrossRef]  

5. D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

6. D. Jiménez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014). [CrossRef]  

7. F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, “Imaging in scattering media using correlation image sensors and sparse convolutional coding,” Opt. Express 22(21), 26338–26350 (2014). [CrossRef]  

8. N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. Bing Kang, “A light transport model for mitigating multipath interference in time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 73–81.

9. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015). [CrossRef]  

10. J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017). [CrossRef]  

11. G. Agresti and P. Zanuttigh, “Combination of spatially-modulated tof and structured light for mpi-free depth estimation,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 355–371.

12. S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 6383–6392.

13. T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27(13), 18858–18868 (2019). [CrossRef]  

14. Y. Fujimura, M. Sonogashira, and M. Iiyama, “Simultaneous estimation of object region and depth in participating media using a tof camera,” IEICE Trans. Inf. Syst. E103.D(3), 660–673 (2020). [CrossRef]  

15. B. Das, K. Yoo, and R. Alfano, “Ultrafast time-gated imaging in thick tissues: a step toward optical mammography,” Opt. Lett. 18(13), 1092–1094 (1993). [CrossRef]  

16. K. J. Snell, A. Parent, M. Levesque, and P. Galarneau, “Active range-gated near-ir tv system for all-weather surveillance,” Proc. SPIE 2935, 171–181 (1997). [CrossRef]  

17. I. M. Baker, S. S. Duncan, and J. W. Copley, “A low-noise laser-gated imaging system for long-range target identification,” Proc. SPIE 5406, 133–144 (2004). [CrossRef]  

18. O. David, N. S. Kopeika, and B. Weizer, “Range gated active night vision system for automobiles,” Appl. Opt. 45(28), 7248–7254 (2006). [CrossRef]  

19. Y. Grauer and E. Sonn, “Active gated imaging for automotive safety applications,” Proc. SPIE 9407, 94070F (2015). [CrossRef]  

20. S. Chua, X. Wang, N. Guo, C. Tan, T. Chai, and G. L. Seet, “Improving three-dimensional (3d) range gated reconstruction through time-of-flight (tof) imaging analysis,” J. Eur. Opt. Soc. Rapid Publ. 11, 16015 (2016). [CrossRef]  

21. K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017). [CrossRef]  

22. P. Risholm, J. Thorstensen, J. T. Thielemann, K. Kaspersen, J. Tschudi, C. Yates, C. Softley, I. Abrosimov, J. Alexander, and K. H. Haugholt, “Real-time super-resolved 3d in turbid water using a fast range-gated cmos camera,” Appl. Opt. 57(14), 3927–3937 (2018). [CrossRef]  

23. X. Yin, H. Cheng, K. Yang, and M. Xia, “Bayesian reconstruction method for underwater 3d range-gated imaging enhancement,” Appl. Opt. 59(2), 370–379 (2020). [CrossRef]  

24. G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2018), pp. 1–10.

25. K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011). [CrossRef]  

26. R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph. 34(1), 1–14 (2014). [CrossRef]  

27. D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 1674–1682.

28. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016). [CrossRef]  

29. W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

30. B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

31. D. Yang and J. Sun, “Proximal dehaze-net: A prior learning-based deep network for single image dehazing,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 702–717.

32. H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

33. I. Tal, Y. Bekerman, A. Mor, L. Knafo, J. Alon, and S. Avidan, “Nldnet++: A physics based single image dehazing network,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2020), pp. 1–10.

34. Y. Shirakawa, K. Yasutomi, K. Kagawa, S. Aoyama, and S. Kawahito, “An 8-tap cmos lock-in pixel image sensor for short-pulse time-of-flight measurements,” Sensors 20(4), 1040 (2020). [CrossRef]  

35. S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

36. S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering properties of participating media by dilution,” ACM Trans. Graph. 25(3), 1003–1012 (2006). [CrossRef]  

37. T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009). [CrossRef]  

38. L. G. Henyey and J. L. Greenstein, “Diffuse radiation in the galaxy,” Astrophys. J. 93, 70–83 (1941). [CrossRef]  

39. S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2003), p. 665.

[Crossref]

S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015).
[Crossref]

Kaspersen, K.

Kawahito, S.

Y. Shirakawa, K. Yasutomi, K. Kagawa, S. Aoyama, and S. Kawahito, “An 8-tap cmos lock-in pixel image sensor for short-pulse time-of-flight measurements,” Sensors 20(4), 1040 (2020).
[Crossref]

S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015).
[Crossref]

Kim, M. H.

J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017).
[Crossref]

Kitano, K.

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

Knafo, L.

I. Tal, Y. Bekerman, A. Mor, L. Knafo, J. Alon, and S. Avidan, “Nldnet++: A physics based single image dehazing network,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2020), pp. 1–10.

Kolb, A.

Kopeika, N. S.

Koppal, S. J.

S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

Krupka, E.

D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

Kubo, H.

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

Leichter, I.

D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

Levesque, M.

K. J. Snell, A. Parent, M. Levesque, and P. Galarneau, “Active range-gated near-ir tv system for all-weather surveillance,” Proc. SPIE 2935, 171–181 (1997).
[Crossref]

Li, B.

B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

Liu, S.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

Marco, J.

J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017).
[Crossref]

Martin, J.

M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015).
[Crossref]

Mazo, M.

D. Jiménez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014).
[Crossref]

Mor, A.

I. Tal, Y. Bekerman, A. Mor, L. Knafo, J. Alon, and S. Avidan, “Nldnet++: A physics based single image dehazing network,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2020), pp. 1–10.

Mukaigawa, Y.

T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27(13), 18858–18868 (2019).
[Crossref]

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

Munoz, A.

J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017).
[Crossref]

Muraji, T.

Mure-Dubois, J.

J. Mure-Dubois and H. Hügli, “Optimized scattering compensation for time-of-flight camera,” Proc. SPIE 6762, 67620H (2007).
[Crossref]

Naik, N.

N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. Bing Kang, “A light transport model for mitigating multipath interference in time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 73–81.

Narasimhan, S. G.

S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering propoerties of participating media by dilution,” ACM Trans. Graph. 25(3), 1003–1012 (2006).
[Crossref]

S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2003), p. 665.

Nayar, S. K.

M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015).
[Crossref]

S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering propoerties of participating media by dilution,” ACM Trans. Graph. 25(3), 1003–1012 (2006).
[Crossref]

S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2003), p. 665.

Okamoto, T.

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

Palazuelos, S.

D. Jiménez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014).
[Crossref]

Pan, J.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

Parent, A.

K. J. Snell, A. Parent, M. Levesque, and P. Galarneau, “Active range-gated near-ir tv system for all-weather surveillance,” Proc. SPIE 2935, 171–181 (1997).
[Crossref]

Patel, V. M.

H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

Peng, X.

B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

Pizarro, D.

D. Jiménez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014).
[Crossref]

Qing, C.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016).
[Crossref]

Ramamoorthi, R.

S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering propoerties of participating media by dilution,” ACM Trans. Graph. 25(3), 1003–1012 (2006).
[Crossref]

Raskar, R.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. Bing Kang, “A light transport model for mitigating multipath interference in time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 73–81.

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2018), pp. 1–10.

Ren, W.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

Rhemann, C.

N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. Bing Kang, “A light transport model for mitigating multipath interference in time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 73–81.

Risholm, P.

Satat, G.

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2018), pp. 1–10.

Schechner, Y. Y.

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
[Crossref]

Schmidt, M.

D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

Seet, G. L.

S. Chua, X. Wang, N. Guo, C. Tan, T. Chai, and G. L. Seet, “Improving three-dimensional (3d) range gated reconstruction through time-of-flight (tof) imaging analysis,” J. Eur. Opt. Soc. Rapid Publ. 11, 16015 (2016).
[Crossref]

Shirakawa, Y.

Y. Shirakawa, K. Yasutomi, K. Kagawa, S. Aoyama, and S. Kawahito, “An 8-tap cmos lock-in pixel image sensor for short-pulse time-of-flight measurements,” Sensors 20(4), 1040 (2020).
[Crossref]

Smolin, Y.

D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

Snell, K. J.

K. J. Snell, A. Parent, M. Levesque, and P. Galarneau, “Active range-gated near-ir tv system for all-weather surveillance,” Proc. SPIE 2935, 171–181 (1997).
[Crossref]

Softley, C.

Sonn, E.

Y. Grauer and E. Sonn, “Active gated imaging for automotive safety applications,” Proc. SPIE 9407, 94070F (2015).
[Crossref]

Sonogashira, M.

Y. Fujimura, M. Sonogashira, and M. Iiyama, “Simultaneous estimation of object region and depth in participating media using a tof camera,” IEICE Trans. Inf. Syst. E103.D(3), 660–673 (2020).
[Crossref]

Streeter, L.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

Su, S.

S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 6383–6392.

Sun, B.

S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

Sun, J.

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).
[Crossref]

D. Yang and J. Sun, “Proximal dehaze-net: A prior learning-based deep network for single image dehazing,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 702–717.

Takasawa, T.

S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015).
[Crossref]

Tal, I.

I. Tal, Y. Bekerman, A. Mor, L. Knafo, J. Alon, and S. Avidan, “Nldnet++: A physics based single image dehazing network,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2020), pp. 1–10.

Tan, C.

S. Chua, X. Wang, N. Guo, C. Tan, T. Chai, and G. L. Seet, “Improving three-dimensional (3d) range gated reconstruction through time-of-flight (tof) imaging analysis,” J. Eur. Opt. Soc. Rapid Publ. 11, 16015 (2016).
[Crossref]

Tanaka, K.

T. Muraji, K. Tanaka, T. Funatomi, and Y. Mukaigawa, “Depth from phasor distortions in fog,” Opt. Express 27(13), 18858–18868 (2019).
[Crossref]

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

Tancik, M.

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2018), pp. 1–10.

Tang, X.

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).
[Crossref]

Tao, D.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016).
[Crossref]

Thielemann, J. T.

Thorstensen, J.

Tong, X.

J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017).
[Crossref]

Treibitz, T.

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
[Crossref]

D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 1674–1682.

Tschudi, J.

Wang, X.

S. Chua, X. Wang, N. Guo, C. Tan, T. Chai, and G. L. Seet, “Improving three-dimensional (3d) range gated reconstruction through time-of-flight (tof) imaging analysis,” J. Eur. Opt. Soc. Rapid Publ. 11, 16015 (2016).
[Crossref]

Wang, Z.

B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

Weizer, B.

Wetzstein, G.

S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 6383–6392.

Whyte, R.

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

Xia, M.

Xiao, L.

Xu, J.

B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

Xu, X.

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016).
[Crossref]

Yang, D.

D. Yang and J. Sun, “Proximal dehaze-net: A prior learning-based deep network for single image dehazing,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 702–717.

Yang, K.

Yang, M.-H.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

Yasutomi, K.

Y. Shirakawa, K. Yasutomi, K. Kagawa, S. Aoyama, and S. Kawahito, “An 8-tap cmos lock-in pixel image sensor for short-pulse time-of-flight measurements,” Sensors 20(4), 1040 (2020).
[Crossref]

S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015).
[Crossref]

Yates, C.

Yin, X.

Yoo, K.

Zanuttigh, P.

G. Agresti and P. Zanuttigh, “Combination of spatially-modulated tof and structured light for mpi-free depth estimation,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 355–371.

Zhang, H.

H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

ACM Trans. Graph. (5)

A. Kadambi, R. Whyte, A. Bhandari, L. Streeter, C. Barsi, A. Dorrington, and R. Raskar, “Coded time of flight cameras: sparse deconvolution to address multipath interference and recover time profiles,” ACM Trans. Graph. 32(6), 1–10 (2013).
[Crossref]

M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, “Phasor imaging: A generalization of correlation-based time-of-flight imaging,” ACM Trans. Graph. 34(5), 1–18 (2015).
[Crossref]

J. Marco, Q. Hernandez, A. Munoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, “Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging,” ACM Trans. Graph. 36(6), 1–12 (2017).
[Crossref]

R. Fattal, “Dehazing using color-lines,” ACM Trans. Graph. 34(1), 1–14 (2014).
[Crossref]

S. G. Narasimhan, M. Gupta, C. Donner, R. Ramamoorthi, S. K. Nayar, and H. W. Jensen, “Acquiring scattering propoerties of participating media by dilution,” ACM Trans. Graph. 25(3), 1003–1012 (2006).
[Crossref]

Appl. Opt. (3)

Astrophys. J. (1)

L. G. Henyey and J. L. Greenstein, “Diffuse radiation in the galaxy,” Astrophys. J. 93, 70–83 (1941).
[Crossref]

IEEE J. Electron Devices Soc. (1)

S. M. Han, T. Takasawa, K. Yasutomi, S. Aoyama, K. Kagawa, and S. Kawahito, “A time-of-flight range image sensor with background canceling lock-in pixels based on lateral electric field charge modulation,” IEEE J. Electron Devices Soc. 3(3), 267–275 (2015).
[Crossref]

IEEE Trans. Image Process. (1)

B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, “Dehazenet: An end-to-end system for single image haze removal,” IEEE Trans. Image Process. 25(11), 5187–5198 (2016).
[Crossref]

IEEE Trans. Pattern Anal. Mach. Intell. (2)

T. Treibitz and Y. Y. Schechner, “Active polarization descattering,” IEEE Trans. Pattern Anal. Mach. Intell. 31(3), 385–399 (2009).
[Crossref]

K. He, J. Sun, and X. Tang, “Single image haze removal using dark channel prior,” IEEE Trans. Pattern Anal. Mach. Intell. 33(12), 2341–2353 (2011).
[Crossref]

IEICE Trans. Inf. Syst. (1)

Y. Fujimura, M. Sonogashira, and M. Iiyama, “Simultaneous estimation of object region and depth in participating media using a tof camera,” IEICE Trans. Inf. Syst. E103.D(3), 660–673 (2020).
[Crossref]

Image Vis. Comput. (1)

D. Jiménez, D. Pizarro, M. Mazo, and S. Palazuelos, “Modeling and correction of multipath interference in time of flight cameras,” Image Vis. Comput. 32(1), 1–13 (2014).
[Crossref]

IPSJ Trans. Comput. Vis. Appl. (1)

K. Kitano, T. Okamoto, K. Tanaka, T. Aoto, H. Kubo, T. Funatomi, and Y. Mukaigawa, “Recovering temporal psf using tof camera with delayed light emission,” IPSJ Trans. Comput. Vis. Appl. 9(1), 15–16 (2017).
[Crossref]

J. Eur. Opt. Soc. Rapid Publ. (1)

S. Chua, X. Wang, N. Guo, C. Tan, T. Chai, and G. L. Seet, “Improving three-dimensional (3d) range gated reconstruction through time-of-flight (tof) imaging analysis,” J. Eur. Opt. Soc. Rapid Publ. 11, 16015 (2016).
[Crossref]

Opt. Express (2)

Opt. Lett. (1)

Proc. SPIE (4)

K. J. Snell, A. Parent, M. Levesque, and P. Galarneau, “Active range-gated near-ir tv system for all-weather surveillance,” Proc. SPIE 2935, 171–181 (1997).
[Crossref]

I. M. Baker, S. S. Duncan, and J. W. Copley, “A low-noise laser-gated imaging system for long-range target identification,” Proc. SPIE 5406, 133–144 (2004).
[Crossref]

Y. Grauer and E. Sonn, “Active gated imaging for automotive safety applications,” Proc. SPIE 9407, 94070F (2015).
[Crossref]

J. Mure-Dubois and H. Hügli, “Optimized scattering compensation for time-of-flight camera,” Proc. SPIE 6762, 67620H (2007).
[Crossref]

Sensors (1)

Y. Shirakawa, K. Yasutomi, K. Kagawa, S. Aoyama, and S. Kawahito, “An 8-tap cmos lock-in pixel image sensor for short-pulse time-of-flight measurements,” Sensors 20(4), 1040 (2020).
[Crossref]

Other (14)

S. G. Narasimhan, S. K. Nayar, B. Sun, and S. J. Koppal, “Structured light in scattering media,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2005), pp. 420–427.

W. Ren, S. Liu, H. Zhang, J. Pan, X. Cao, and M.-H. Yang, “Single image dehazing via multi-scale convolutional neural networks,” in Proceedings of European Conference on Computer Vision, (Springer, 2016), pp. 154–169.

B. Li, X. Peng, Z. Wang, J. Xu, and D. Feng, “Aod-net: All-in-one dehazing network,” in Proceedings of IEEE International Conference on Computer Vision, (IEEE, 2017), pp. 4770–4778.

D. Yang and J. Sun, “Proximal dehaze-net: A prior learning-based deep network for single image dehazing,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 702–717.

H. Zhang and V. M. Patel, “Densely connected pyramid dehazing network,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 3194–3203.

I. Tal, Y. Bekerman, A. Mor, L. Knafo, J. Alon, and S. Avidan, “Nldnet++: A physics based single image dehazing network,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2020), pp. 1–10.

G. Satat, M. Tancik, and R. Raskar, “Towards photography through realistic fog,” in Proceedings of IEEE International Conference on Computational Photography, (IEEE, 2018), pp. 1–10.

D. Berman, T. Treibitz, and S. Avidan, “Non-local image dehazing,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2016), pp. 1674–1682.

S. Fuchs, “Multipath interference compensation in time-of-flight camera images,” in Proceedings of IEEE International Conference on Pattern Recognition, (IEEE, 2010), pp. 3583–3586.

D. Freedman, Y. Smolin, E. Krupka, I. Leichter, and M. Schmidt, “Sra: Fast removal of general multipath for tof sensors,” in Proceedings of European Conference on Computer Vision, (Springer, 2014), pp. 234–249.

G. Agresti and P. Zanuttigh, “Combination of spatially-modulated tof and structured light for mpi-free depth estimation,” in Proceedings of European Conference on Computer Vision, (Springer, 2018), pp. 355–371.

S. Su, F. Heide, G. Wetzstein, and W. Heidrich, “Deep end-to-end time-of-flight imaging,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2018), pp. 6383–6392.

N. Naik, A. Kadambi, C. Rhemann, S. Izadi, R. Raskar, and S. Bing Kang, “A light transport model for mitigating multipath interference in time-of-flight sensors,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2015), pp. 73–81.

S. G. Narasimhan and S. K. Nayar, “Shedding light on the weather,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2003), p. 665.

Supplementary Material (1)

Visualization 1: The video that demonstrates the real-time implementation of the proposed method.


Figures (11)

Fig. 1. Difference of ToF measurement between clear (upper) and foggy (lower) scenes. From the left to right columns, (a) and (d) are RGB images from a normal camera, (b) and (e) are intensity images from a ToF camera, and (c) and (f) are depth images from a ToF camera. Both depth and intensity measurements are affected by fog.
Fig. 2. Differences between the ordinary SP-ToF measurement and the proposed method. (a) In clear air, the waveform of the reflected light is rectangular. (d) In fog, however, the waveform of the reflected light is greatly distorted because the light is scattered by fog particles in front of the objects. This changes the observed value of each exposure, so the indirect calculation of depth and intensity in the ordinary SP-ToF measurement fails. The proposed method assigns an additional exposure time to the observation of scattered light and uses it to remove scattering from the measurement.
Fig. 3. Modeled temporal response and reflection in (i) clear air and (ii) fog. The temporal response represents the intensity from various depths. (i) In clear air, the temporal response is composed of only the direct light from the target object. (ii) In fog, the temporal response is composed of direct light attenuated by the fog and scattered light from the fog.
Fig. 4. Algorithm flowchart of the proposed method. The input images are the observations of all time-gated exposures: $\tilde{Q}_1$, $\tilde{Q}_2$, and $\tilde{Q}_3$. The output images are the defogged depth and intensity images and, as byproducts, the estimated reflectance and extinction-coefficient images.
Fig. 5. Experimental environment. (a) The fog chamber, where we conduct several experiments with a fog generator. (b) The scene setting of the experiments: a road-like scene in front of the ToF camera (BEC80T).
Fig. 6. Measurement comparison among [1], [1]+, and our method under different thicknesses of fog (clear air, thin fog, medium fog, and thick fog, from left to right). From the top to bottom rows: (a) target scenes; the intensity of (b) [1], (c) [1]+, and (d) our method; the depth of (e) [1], (f) [1]+, and (g) our method; and the absolute depth error of (h) [1], (i) [1]+, and (j) our method. Note that both the depth and intensity measurements of [1] in clear air can be regarded as the ground truth. The visibility of each scene is ${1141}\,\textrm{m}$, ${40}\,\textrm{m}$, ${15}\,\textrm{m}$, and ${10}\,\textrm{m}$ from left to right, as indicated below the bottom row.
Fig. 7. Sensitivity of the estimated (a) depth and (b) intensity to the HG phase function parameter $g$. Each plot represents the average error rate $[\%]$ of the depth and intensity relative to the values at $g = 0.90$.
Fig. 8. Sensitivity of the estimated (a) depth and (b) intensity to the single-scattering albedo $\omega$. Each plot represents the average error rate $[\%]$ of the depth and intensity relative to the values at $\omega = 0.98$.
Fig. 9. Outdoor scene under (a) clear air and (b) fog.
Fig. 10. Measurement comparison among [1], [1]+, and our method in clear air and a real foggy scene. From the left to right columns: the measurements of (a, e) [1] in the clear scene, which can be regarded as the ground truth, and the measurements in the foggy scene obtained by (b, f) [1], (c, g) [1]+, and (d, h) our method.
Fig. 11. Absolute depth error maps of (a) [1], (b) [1]+, and (c) our method. The errors of [1] and [1]+ are large where the reflectance is low; in contrast, the error of our method is low regardless of the reflectance.

Tables (3)

Table 1. The lookup table used for optimization.

Table 2. Numerical evaluation of the mean depths of the three target objects. The underlined depth values of [1] in the clear-air scene are regarded as the ground truth. Each cell shows the mean depth (m) / mean absolute error (m).

Table 3. Quantitative results of intensity using PSNR (dB) and SSIM. Each value is compared with the intensity of [1] in clear air, which is regarded as the ground truth. Higher values represent better results.
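Table 1 indicates that the minimizations in the estimation step are carried out by lookup-table search rather than by continuous optimization. Below is a minimal, hypothetical sketch of such a one-dimensional search for the extinction coefficient; the linear "forward model" used to fill the toy table is an illustrative stand-in, not the paper's precomputed model.

```python
# Hypothetical sketch of a lookup-table search: pick the extinction
# coefficient whose precomputed model exposure best matches the
# observed scattering-only exposure.
def estimate_sigma_t(q0_obs, table):
    """table: (sigma_t, modeled Q0) pairs; return the best-matching sigma_t."""
    return min(table, key=lambda entry: (q0_obs - entry[1]) ** 2)[0]

# Toy table over 51 candidate sigma_t values in [0, 0.5]; the modeled
# exposure here is simply 2*sigma_t, standing in for the real model.
table = [(s / 100, 2.0 * (s / 100)) for s in range(51)]
sigma_hat = estimate_sigma_t(0.24, table)  # -> 0.12
```

The same exhaustive search extends to the joint depth/reflectance estimate by tabulating the model over a 2-D grid of candidates.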

Equations (22)

$$d = \frac{c\tau}{2}, \tag{1}$$

$$Q_1 = \int_{0}^{T} L_{\textrm{ret}}(t)\,dt, \tag{2}$$

$$Q_2 = \int_{T}^{2T} L_{\textrm{ret}}(t)\,dt, \tag{3}$$

$$\tau = \frac{Q_2}{Q_1 + Q_2}\,T, \tag{4}$$

$$I = Q_1 + Q_2. \tag{5}$$

$$Q_0 = \int_{-T}^{0} \epsilon\,dt = \epsilon T, \tag{6}$$

$$Q_1 = \int_{0}^{T} \left(L_{\textrm{ret}}(t) + \epsilon\right)dt = \int_{0}^{T} L_{\textrm{ret}}(t)\,dt + \epsilon T, \tag{7}$$

$$Q_2 = \int_{T}^{2T} \left(L_{\textrm{ret}}(t) + \epsilon\right)dt = \int_{T}^{2T} L_{\textrm{ret}}(t)\,dt + \epsilon T, \tag{8}$$

$$\tilde{Q}_1 = Q_1 - Q_0, \tag{9}$$

$$\tilde{Q}_2 = Q_2 - Q_0. \tag{10}$$

$$L_{\textrm{ret}}(t) = L_{\textrm{emit}}(t) \ast i(t), \tag{11}$$

$$L_{\textrm{emit}}(t) = \begin{cases} I_0 & 0 \le t \le T, \\ 0 & \textrm{otherwise}, \end{cases} \tag{12}$$

$$i(t) = i_r(t) + i_s(t). \tag{13}$$

$$i_r(d, r, \sigma_t, t) = \frac{1}{d^2}\, r\, e^{-2\sigma_t d}\, \delta\!\left(t - \frac{2d}{c}\right)dt, \tag{14}$$

$$i_s(\sigma_t, t) = \begin{cases} \dfrac{1}{z^2}\, \omega\, \sigma_t\, p(g, \theta)\, e^{-2\sigma_t z}\,dz & 0 < t \le 2d/c, \\ 0 & \textrm{otherwise}, \end{cases} \tag{15}$$

$$p(g, \theta) = \frac{1}{4\pi} \frac{1 - g^2}{\left(1 + g^2 - 2g\cos\theta\right)^{3/2}}, \tag{16}$$

$$\tilde{Q}_0^{\textrm{model}}(\sigma_t) = \int_{0}^{\Delta t} L_{\textrm{emit}}(t) \ast i(d, r, \sigma_t, t)\,dt = \int_{0}^{\Delta t} L_{\textrm{emit}}(t) \ast i_s(\sigma_t, t)\,dt, \tag{17}$$

$$\tilde{Q}_1^{\textrm{model}}(d, r, \sigma_t) = \int_{\Delta t}^{T + \Delta t/2} L_{\textrm{emit}}(t) \ast i(d, r, \sigma_t, t)\,dt, \tag{18}$$

$$\tilde{Q}_2^{\textrm{model}}(d, r, \sigma_t) = \int_{T + \Delta t/2}^{2T} L_{\textrm{emit}}(t) \ast i(d, r, \sigma_t, t)\,dt. \tag{19}$$

$$\hat{\sigma_t} = \operatorname*{argmin}_{\sigma_t} \left|\tilde{Q}_0^{\textrm{obs}} - \tilde{Q}_0^{\textrm{model}}(\sigma_t)\right|^2, \tag{20}$$

$$(\hat{d}, \hat{r}) = \operatorname*{argmin}_{d, r} \left|\tilde{Q}_1^{\textrm{obs}} - \tilde{Q}_1^{\textrm{model}}(d, r, \hat{\sigma_t})\right|^2 + \left|\tilde{Q}_2^{\textrm{obs}} - \tilde{Q}_2^{\textrm{model}}(d, r, \hat{\sigma_t})\right|^2, \tag{21}$$

$$\hat{I} = \tilde{Q}_0^{\textrm{model}}(0) + \tilde{Q}_1^{\textrm{model}}(\hat{d}, \hat{r}, 0) + \tilde{Q}_2^{\textrm{model}}(\hat{d}, \hat{r}, 0) = \tilde{Q}_1^{\textrm{model}}(\hat{d}, \hat{r}, 0) + \tilde{Q}_2^{\textrm{model}}(\hat{d}, \hat{r}, 0). \tag{22}$$
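As a concrete illustration of the conventional two-gate reconstruction and the Henyey-Greenstein phase function that appear in the equations above, here is a minimal sketch in Python. All numeric values are hypothetical; this is not the authors' implementation, only the textbook formulas.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def tof_depth_intensity(Q1, Q2, Q0, T):
    """Background-subtracted depth and intensity from two gated exposures.

    Q0 is the background-only exposure (eps*T); subtracting it from Q1
    and Q2 implements the ambient-light correction. The delay follows
    tau = Q2/(Q1+Q2) * T and the depth d = c*tau/2.
    """
    q1, q2 = Q1 - Q0, Q2 - Q0
    tau = q2 / (q1 + q2) * T
    return C * tau / 2.0, q1 + q2  # (depth, intensity)

def hg_phase(g, theta):
    """Henyey-Greenstein phase function p(g, theta)."""
    return (1.0 / (4.0 * math.pi)) * (1.0 - g * g) / \
        (1.0 + g * g - 2.0 * g * math.cos(theta)) ** 1.5

# Toy numbers (hypothetical): pulse width T = 20 ns, background charge
# 0.1 per gate, and the return splitting 3:1 between the two gates,
# i.e. a round-trip delay of a quarter pulse width (5 ns).
T, bg = 20e-9, 0.1
d, I = tof_depth_intensity(0.75 + bg, 0.25 + bg, bg, T)
# tau = 0.25*T = 5 ns, so d = c*tau/2 ≈ 0.75 m and I = 1.0
```

The proposed method replaces this closed-form step with a model fit: the same gated integrals are predicted from the scattering model (with `hg_phase` inside the scattered-light term) and matched to the observations.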
