Abstract

Recent advances in image processing for atmospheric propagation have provided a foundation for tackling the similar but perhaps more complex problem of underwater imaging, which is impaired by scattering and optical turbulence. As a result of these impairments, underwater imagery suffers from excessive noise, blur, and distortion. The impact of underwater turbulence on light propagation becomes critical at longer distances as well as near thermocline and mixing layers. In this work, we demonstrate a method for the restoration of underwater images that are severely degraded by underwater turbulence. The key element of the approach is the derivation of a structure tensor oriented image quality metric, which is subsequently incorporated into a lucky patch image processing framework. The utility of the proposed image quality measure, guided by local edge strength and orientation, is emphasized by comparing the restoration results to an unsuccessful restoration obtained with equivalent processing utilizing a standard isotropic metric. Advantages of the proposed approach versus three other state-of-the-art image restoration techniques are demonstrated using data obtained in a laboratory water tank and in a natural-environment underwater experiment. Quantitative comparison of the restoration results is performed via the structural similarity index measure and the normalized mutual information metric.

© 2015 Optical Society of America

1. Introduction

Imaging underwater is becoming an important problem for scientific and military applications such as natural resources exploration, optical communications, and situational awareness. Unfortunately, light propagation underwater is severely impaired by scattering on particulates and by optical turbulence. Recently, advances in active imaging technology have allowed some degree of mitigation of scattering losses, which has led to a significant increase in underwater light propagation distances [1–3]. However, the developed techniques have been demonstrated only in relatively still water and therefore remain vulnerable to scattering on turbulence, which becomes a limiting factor over longer ranges and under conditions of stronger turbulence [4,5].

Turbulent underwater inhomogeneities of the flow are associated with fluctuations in temperature and salinity. These fluctuations alter water density, inducing variations in the refractive index and resulting in near-forward scattering of light. The leading image degradations due to light propagation through underwater turbulence are attenuation of higher spatial frequencies, blur, noise, and shape distortions, which are similar to those resulting from atmospheric turbulence [4,5]. However, underwater turbulence physics is more complex and significantly less studied than that of the atmosphere, with some basic concepts still requiring experimental verification [4,6]; hence, direct transfer of a turbulence model can be difficult. Therefore a data-driven approach to image restoration is proposed in this paper, while model-driven techniques are considered for comparison. The problem considered in this work is also somewhat simplified, as the salinity component of the turbulence is not present, i.e., the presented imagery is collected through turbulence caused only by thermal fluctuations in the water. The results obtained with the presented algorithm, however, can be extended to the more general case because of their independence from a turbulence model.

In the past, the severely limited underwater imaging range prompted relatively low interest in the effect of underwater turbulence on imaging, resulting in a lack of approaches focusing specifically on the restoration of images degraded by underwater turbulence. Some efforts mimicked very basic atmospheric turbulence correction algorithms and consisted of, e.g., the Wiener filter with parameters derived from an atmospheric turbulence model [7]. Our previous work used a modified lucky patch approach, which included fusion of only a small number of short-exposure images with a carefully selected time interval between them [8]. However, in the closely related field of imaging underwater through a turbulent air-water interface, several original multi-frame algorithms have been proposed. The most notable examples are bispectrum-based restoration [9] and two-stage image registration and de-blurring [10]. Unfortunately, these approaches can either handle only very weak turbulence or offer a rather small improvement in image quality.

In this work, we propose a structure tensor oriented image quality (STOIQ) metric, which provides a higher-resolution and lower-noise estimate of image sharpness compared to a standard isotropic kernel-based metric. Restoration of the degraded underwater imagery is performed by incorporating the STOIQ metric into the lucky patch technique. For comparison, the bispectrum algorithm developed for atmospheric propagation, as well as two other state-of-the-art restoration techniques, are applied to the same data. The experiments show that the noise and resolution advantages offered by STOIQ translate into superior performance of the augmented lucky patch algorithm compared to the bispectrum method and the other tested image restoration techniques.

The remainder of this paper is organized as follows. In Section 2, we introduce the STOIQ metric and briefly explain the lucky patch approach to restoration of imagery degraded by turbulence. In Section 3, the underwater image sequences collected in the laboratory and in a natural environment are described. In Section 4, three restoration techniques derived from imaging through atmospheric turbulence and through water waves are outlined. In Section 5, we demonstrate and compare restoration results. Finally, Section 6 contains concluding remarks.

2. Image restoration using structure tensor oriented image quality metric

The multi-frame lucky patch technique was developed for atmospheric imaging in recognition of the fact that, due to the presence of dynamic phase distortions, different short-exposure frames can contain random local areas with better image quality (IQ), i.e., lucky patches [11–14]. Detection of such high-resolution areas and their fusion into a single higher-quality image represents the essence of this image restoration approach. Thus, the lucky patch algorithm utilizes a sequence of short-exposure images to produce a single reconstruction. Processing consists of three main steps: motion estimation and compensation between frames; local image quality estimation of each motion-compensated frame; and image-quality-based fusion of lucky patches from all the frames.

The goal of the first step is to align all the images in the sequence with respect to a reference image, which typically is the mean image. This can be achieved by computation of the optical flow between the frames with subsequent motion compensation based on straightforward interpolation. The optical flow algorithm used in this work is TV-L1, which is chosen for its relatively high accuracy, the flatness of its motion field patches, and its high computational speed relative to the top optical flow algorithms [15].
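As an illustrative sketch (not the authors' implementation), the motion-compensation step can be expressed as warping each frame toward the reference with a dense flow field via bilinear interpolation; the flow itself is assumed to come from a TV-L1 estimator or similar:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compensate(frame, flow):
    """Warp `frame` toward the reference using a dense flow field.

    flow[..., 0] and flow[..., 1] hold the per-pixel x- and y-displacements
    (reference -> frame), e.g. as produced by a TV-L1 optical flow estimator.
    """
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Sample the frame at the displaced coordinates (bilinear interpolation).
    coords = np.stack([yy + flow[..., 1], xx + flow[..., 0]])
    return map_coordinates(frame, coords, order=1, mode='nearest')
```

With a correct flow field, the warped frame aligns with the reference up to interpolation error, which is what makes the subsequent per-pixel quality comparison meaningful.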

The second algorithm step, which is the most important in the context of this work, is estimation of local image quality. Note that in the framework of lucky patch processing the term image quality implies image sharpness, while for the purpose of algorithm comparison image quality is defined as the degree of similarity between the compared image and an ideal, degradation-free image. The general form of the sharpness measure J can be derived from the first or second derivative of the image I as,

J = \int \left| \nabla^{n} I(\mathbf{r}') \right|^{m} G(\mathbf{r} - \mathbf{r}')\, d\mathbf{r}', \qquad m = 1, 2, \quad n = 1, 2,   (1)
where G \sim \exp\{ -|\mathbf{r}(x, y)|^2 / 2\sigma^2 \} is a normalized Gaussian kernel. Gaussian filtering smooths the derivatives of the image, which is necessary due to the noisy nature of discretized derivatives. It becomes essential in the case of images collected underwater because of their inherently high noise levels. The degree of spatial averaging is controlled by the standard deviation parameter σ. Based on previous experience, the sharpness measure used in this work utilizes parameter values of n = 2 and m = 2 [12].
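A minimal sketch of this sharpness measure with n = 2 and m = 2, i.e., a Gaussian-smoothed squared second derivative (an illustration of Eq. (1), not the authors' code):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def sharpness_map(img, sigma=1.5):
    """Local sharpness J with n = 2, m = 2: the squared Laplacian of the
    image, smoothed by a Gaussian kernel of standard deviation sigma."""
    d2 = laplace(img)                 # second-derivative (Laplacian) response
    return gaussian_filter(d2 ** 2, sigma)
```

A sharp edge yields a larger response than the same edge after blurring, which is exactly the property the lucky patch selection relies on.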

When an edge is encountered, the support of the isotropic Gaussian kernel extends well outside the gradient region, and the IQ metric for a given pixel is computed over the full circularly symmetric neighborhood of that pixel. However, a pixel affected by noise (which can create a local gradient) must be distinguished from an actual edge; thus, the IQ measure should estimate local edge sharpness while accounting for the anisotropy of the pixel neighborhood, i.e., a preferred direction in the computation [16]. Such a direction can be determined by computing the local image structure tensor S,

S = \nabla I \, \nabla I^{T} = \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix},   (2)
where I_x and I_y denote the x- and y-derivatives of the image, and calculating its eigenvectors ν_{1,2}, which point in the directions of the lowest and highest local gradient, respectively. Typically, a spatially averaged image is used to compute the derivatives, and their values are additionally averaged using Gaussian filtering.

Incidentally, the eigenvalues of the structure tensor, λ_{1,2}, provide the strength of the estimated edge in the two corresponding orthogonal directions. Three cases with respect to the eigenvalues λ_1 ≤ λ_2 are distinguished:

λ1 ≈λ2 ≈0 - homogeneous area, minimal structure present;

λ1 ≈0, λ2 » 0 - strong single local orientation, an edge;

λ1 » 0, λ2 » 0 - ambiguous orientation, corner or distributed texture.
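The structure tensor and its eigenvalue analysis can be sketched as follows (an illustrative implementation; the two smoothing parameters mirror the values quoted in Section 5):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def structure_tensor(img, sigma_image=0.33, sigma_deriv=1.0):
    """Per-pixel structure tensor S = grad(I) grad(I)^T, with Gaussian
    pre-smoothing of the image and post-smoothing of the tensor entries.
    Returns the eigenvalue fields (lam1 <= lam2)."""
    sm = gaussian_filter(img, sigma_image)
    Iy, Ix = np.gradient(sm)
    Sxx = gaussian_filter(Ix * Ix, sigma_deriv)
    Sxy = gaussian_filter(Ix * Iy, sigma_deriv)
    Syy = gaussian_filter(Iy * Iy, sigma_deriv)
    # Closed-form eigenvalues of the symmetric 2x2 tensor at every pixel.
    tr = Sxx + Syy
    disc = np.sqrt((Sxx - Syy) ** 2 + 4 * Sxy ** 2)
    lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2
    return lam1, lam2
```

On a clean vertical step edge this produces the second case above (λ1 ≈ 0, λ2 » 0) at the edge and the first case (both ≈ 0) in flat regions.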

To take this information into account, one can construct an anisotropic Gaussian kernel that is weighted and rotated to the appropriate direction,

G_{\varphi,\rho} = \exp\left\{ -\left( x\cos\varphi + y\sin\varphi \right)^2 / 2\rho_2^2 - \left( x\sin\varphi - y\cos\varphi \right)^2 / 2\rho_1^2 \right\},   (3)
where the angle φ = tan^{-1}(ν_1 / ν_2) and the weights ρ_{1,2} ~ λ_{1,2} are determined by the local image structure tensor S. Substituting such a kernel into Eq. (1) leads to a more accurate estimation of the local image sharpness for the wide range of spatial frequencies constituting image content.
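A possible construction of the rotated anisotropic kernel is sketched below (illustrative; the kernel size and unit-sum normalization are assumptions, the size matching the 11x11 support quoted in Section 5):

```python
import numpy as np

def anisotropic_kernel(phi, rho1, rho2, size=11):
    """Rotated anisotropic Gaussian kernel: width rho2 scales the
    coordinate along the direction phi, and rho1 scales the orthogonal
    coordinate, following the form of Eq. (3). Normalized to unit sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    u = x * np.cos(phi) + y * np.sin(phi)   # coordinate along phi
    v = x * np.sin(phi) - y * np.cos(phi)   # orthogonal coordinate
    g = np.exp(-u ** 2 / (2 * rho2 ** 2) - v ** 2 / (2 * rho1 ** 2))
    return g / g.sum()
```

Rotating phi by 90 degrees swaps the elongation axis, so the kernel can follow the locally estimated edge orientation.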

The final step of the lucky patch algorithm is sequential fusion of each motion-compensated frame I_n with a constantly updated reference frame I_n^r such that,

I_{n+1}^{r} = I_{n}^{r} - \gamma_n \left( I_{n}^{r} - I_n \right), \qquad n = 1 \ldots N,   (4)

where the gain γ_n(r) has non-zero values only at pixels for which the IQ metric value of the frame being fused, J_n, is greater than the IQ metric value of the updated reference frame, J_n^r:

\gamma_n(\mathbf{r}) = \begin{cases} K \left( J_n - J_n^{r} \right), & \text{if } J_n > J_n^{r} \\ 0, & \text{otherwise,} \end{cases}   (5)
where scaling factor K is a free parameter that depends on dynamic range, noise level, and content of images.
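The fusion step can be sketched as below; tracking the reference quality map by a per-pixel maximum and clipping the gain to [0, 1] are simplifications not specified in the text:

```python
import numpy as np

def fuse(frames, quality, K=1.0):
    """Sequential lucky-patch fusion: pull the running reference toward
    each new frame only at pixels where that frame's quality map exceeds
    the quality of the current reference."""
    ref = frames[0].astype(float).copy()
    ref_q = quality[0].copy()
    for frame, q in zip(frames[1:], quality[1:]):
        # Gain is non-zero only where the incoming frame is sharper;
        # clipping keeps the per-pixel update a convex combination.
        gain = np.clip(np.where(q > ref_q, K * (q - ref_q), 0.0), 0.0, 1.0)
        ref = ref - gain * (ref - frame)
        ref_q = np.maximum(ref_q, q)  # simplification: best quality seen so far
    return ref
```

With this sketch, a frame whose quality map is everywhere lower than the reference leaves the reference unchanged, as intended.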

3. Experimental data – imagery collected through underwater turbulence

The experimental data used to investigate the impact of turbulence on imaging underwater consist of two sets: imagery collected in a laboratory tank under controlled conditions and imagery collected in a field experiment during the Skaneateles Optical Turbulence Exercise (SOTEX, July 2010) [17]. The laboratory setup consists of a Rayleigh-Bénard convective tank, which has dimensions of 5m × 0.5m × 0.5m and contains highly conductive stainless-steel heating plates on the top and bottom of the tank. Temperature-controlled water is used to regulate the temperature of the plates, and thus the intensity of the turbulent structure within the convection tank. The primary focus of the setup is to maintain a uniform distribution of both the turbulent kinetic energy and the temperature variance dissipation rates, which are key elements affecting the strength and structure of the optical turbulence. The tank is filled with purified water and does not contain any meaningful particulates. In both data collections a monochromatic EO camera with an f/4.6 lens is employed.

The target used in the laboratory tank experiment is a simple pattern consisting of horizontal lines with 312.5 cyc/rad, 625 cyc/rad, 1250 cyc/rad, and 2500 cyc/rad spatial frequencies (Fig. 1). The lower spatial frequency lines are partially obscured by the shape of a velocimeter immersed in the water tank. Full image dimensions are 1221x916 pixels and the sequence length is 501 frames. The exposure time of the collected images is 3ms and the time interval between frames is also 3ms.

 

Fig. 1 Imaging through underwater turbulence in the laboratory tank (Visualization 1).


In the SOTEX experiment, images were collected over a 5m path length at 8m depth in the water column, concurrent with profiles of the turbulence strength, optical properties, temperature, and conductivity [17]. Lake Skaneateles was chosen for its well-known optically clean waters, thus allowing for imaging through turbulence with relatively little scattering contribution from particulates.

The SOTEX experiment target is the USAF 1951 resolution chart. Numerical experiments were performed using a 311x240 pixel fragment that contains the highest spatial frequency content and therefore experienced the most intense turbulence effects. The largest group of lines in the upper right of the chosen fragment corresponds to a 1900 cyc/rad spatial frequency. The sequence contains 140 short-exposure (15ms) images, and the time interval between frames is 30ms (Fig. 2).

 

Fig. 2 Imaging through underwater turbulence in natural environment (Visualization 2).


4. Algorithms for restoration of images degraded by turbulence

Due to the aforementioned lack of algorithms designed specifically for underwater imaging, the method proposed in this work is compared to state-of-the-art techniques developed for images degraded by either atmospheric turbulence or random waves on the water surface. These processing techniques solve the same problems of deconvolution, denoising, and dewarping despite the differences in the underlying physical models of the environment. Both model-driven and data-driven approaches are considered. The example of the former is the bispectrum algorithm [18], which was adapted by the authors of this paper to underwater imaging conditions, and the examples of the latter are robust multichannel (MC) blind deconvolution [19] and two-stage reconstruction [10] algorithms, both of which were downloaded from publicly available websites. The bispectrum algorithm is traditionally applied to sequences hundreds or even thousands of frames long, and megapixel-size images are commonly processed with this method [20]. On the other hand, the MC blind deconvolution and two-stage reconstruction techniques have never been applied to data of that size. It was found that processing of the line-pattern sequence by MC blind deconvolution would require several terabytes of RAM, and processing of the same sequence by two-stage reconstruction would take several months of computational time on a Xeon 3.47GHz, 6-core system with 12GB of RAM. Therefore restoration results of the bispectrum and lucky patch methods are demonstrated using the full sequence collected in the laboratory, while restoration results of all described algorithms are presented and quantitatively compared using the sequence consisting of 207x137 pixel fragments of the line-pattern images (Fig. 3) as well as the full SOTEX experiment sequence.

 

Fig. 3 Fragment of image sequence collected through underwater turbulence in the laboratory tank (Visualization 3).


The bispectrum approach is based on computations in the Fourier domain, where the image is represented by amplitude and phase. The amplitudes are recovered by speckle interferometry [21] and the phases by bispectral analysis [18]. Both approaches rely on the well-known fact that their respective transfer functions are real and non-zero up to the diffraction limit of the sensor, thus facilitating high-spatial-frequency information recovery. For example, the equivalent of the image formation equation in speckle interferometry is:

\left\langle |I(\mathbf{u})|^2 \right\rangle_N = |O(\mathbf{u})|^2 \left\langle |H(\mathbf{u})|^2 \right\rangle_N,   (6)
where I, O, and H stand for the Fourier transforms of an image i, an object o, and the system point spread function (PSF) h, respectively; u is a 2D vector in the spatial-frequency Fourier plane, and ⟨·⟩_N denotes the average over N short-exposure images. In the case of atmospheric turbulence it has been shown that the speckle transfer function (STF), ⟨|H(u)|²⟩_N, falls rather slowly with increasing u and contains non-negligible power almost up to the diffraction cut-off. This is in contrast to the average optical transfer function ⟨H(u)⟩_N, which becomes negligibly small very quickly for Kolmogorov turbulence. The caveat to applying the same technique to underwater data is that a similar theoretical or experimental test of the speckle transfer function has, to our knowledge, never been performed, because the theoretical understanding of underwater turbulence is not as developed as that of atmospheric turbulence. On the other hand, the bispectrum method has been applied successfully to restoration of imagery degraded by random waves on the air-water interface [9].

Given the above-mentioned reasoning, and with the proviso that the underwater STF has non-zero high-frequency content, dividing the data power spectrum by the transfer power spectrum should reveal the object power spectrum, whose square root gives the Fourier amplitude of the object. In astronomy, observations of a point source, a single star, are often performed concurrently with the target observations in order to obtain an empirical STF. This is not possible in underwater imaging. Therefore we adopted the following strategy. First, turbulence strength was estimated directly from the sequence using the approach of Ref [22]. Second, the theoretical STF, based on the model of partially developed speckle and Kolmogorov turbulence, was computed for that turbulence strength [22].

With similar stipulations as before, one can use the bispectrum to compute the Fourier phases of an object. The bispectrum is defined as a 4D functional of two independent 2D frequency vectors u and v,

B_I(\mathbf{u}, \mathbf{v}) = I(\mathbf{u})\, I(\mathbf{v})\, I^{*}(\mathbf{u} + \mathbf{v}),   (7)
where the values of I, the Fourier transform of an image i, are computed at three different locations in the Fourier plane, and the asterisk denotes complex conjugation. Equation (7) can now be rewritten for the bispectra of an image i, an object o, and the PSF h:
\left\langle B_I(\mathbf{u}, \mathbf{v}) \right\rangle_N = O(\mathbf{u})\, O(\mathbf{v})\, O^{*}(\mathbf{u} + \mathbf{v}) \left\langle B_H(\mathbf{u}, \mathbf{v}) \right\rangle_N,   (8)
where the bispectrum of the transfer function H, ⟨B_H(u,v)⟩_N, has non-zero high-frequency content, as in the case of the STF. Solving for the object's bispectrum one obtains,
O^{*}(\mathbf{u} + \mathbf{v}) = \frac{\left\langle B_I(\mathbf{u}, \mathbf{v}) \right\rangle_N}{O(\mathbf{u})\, O(\mathbf{v}) \left\langle B_H(\mathbf{u}, \mathbf{v}) \right\rangle_N}.   (9)
Then the recursive solution for the phase of O(u + v) is,
\arg\left[ O(\mathbf{u} + \mathbf{v}) \right] = \arg\left[ O(\mathbf{u}) \right] + \arg\left[ O(\mathbf{v}) \right] - \arg\left[ \left\langle B_I(\mathbf{u}, \mathbf{v}) \right\rangle_N \right].   (10)
In the last expression we used the fact that the phase of ⟨B_H(u,v)⟩_N is zero. One has to apply boundary conditions in order to start the recursion. A good starting point is the origin of the frequency domain, where the object's Fourier phase is known to be zero. Additionally, the phases immediately above, below, left, and right of the origin can be set to zero. The Fourier amplitude and phase estimates are then combined and inverse transformed to give an estimate of the object.
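A one-dimensional, single-frame sketch of this phase recursion is given below (the actual algorithm averages bispectra over many frames and operates in 2-D). For a shifted delta function the recovered phases vanish, illustrating the well-known shift invariance of the bispectrum:

```python
import numpy as np

def recover_phase_1d(x):
    """Recover Fourier phases of a 1-D signal from its bispectrum
    B(u, v) = X(u) X(v) X*(u + v) using the recursive relation above,
    with the boundary condition arg X(0) = arg X(1) = 0."""
    X = np.fft.fft(x)
    n = len(x)
    phase = np.zeros(n)
    for k in range(2, n // 2 + 1):
        # Split frequency k as u + v with u = 1, v = k - 1.
        B = X[1] * X[k - 1] * np.conj(X[k])
        phase[k] = phase[1] + phase[k - 1] - np.angle(B)
    return phase
```

Because the boundary condition pins the phase at the first frequency to zero, the recursion recovers the true phases up to a linear ramp, i.e., up to an overall image shift.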

The particular choice of MC blind deconvolution via fast alternating minimization is due to its previously demonstrated excellent performance, its independence from a physical turbulence model (hence blind deconvolution), and its ability to handle large blurs [19]. The main idea of the approach is to seek a solution by alternately optimizing with respect to the image and with respect to the blurs. While the algorithm is relatively fast computationally [19], it requires vast amounts of operating memory when handling long image sequences. Because of this limitation, the maximum kernel size is set to 19x19 pixels for the USAF chart images and to 15x15 pixels for the line-pattern images. Additionally, the latter sequence is divided into three equal parts, which are processed separately, and the resulting restored images are averaged. This is done to fit the computation into the maximum available RAM size of 64GB.

The third algorithm for comparison, the two-stage reconstruction approach, consists of iterative registration and rank-minimization noise reduction [10]. The algorithm is chosen for its particular emphasis on motion compensation and noise removal. Default settings of the algorithm are used, including the five-stage iteration and the more robust mutual information option.

5. Restoration of imagery degraded by underwater turbulence

The standard lucky patch algorithm comprises the solution of Eqs. (4) and (5) with the IQ metric computed according to Eq. (1) using the standard Gaussian. An extensive search of the lucky patch algorithm parameter space was performed, and the values that provided minimum image distortions with maximum restoration quality were chosen: σ = 1.5, and K = 5×10^3 and K = 3×10^3 for the USAF chart and line-pattern sequences, respectively. These parameters correspond to an image intensity normalization range of [0 1]. The Gaussian kernel size is 11x11 pixels.

The lucky patch algorithm utilizing STOIQ kept the same parameter values as the standard implementation, but instead of a single smoothing parameter σ it included computation of the local image structure tensor according to Eqs. (2) and (3), in which two parameters are used: σ_image = 0.33 and σ_derivative = 1.0, responsible for smoothing the image and its derivatives, respectively. The local neighborhood size of the anisotropic Gaussian kernel was kept equal to the isotropic Gaussian kernel size of the standard algorithm implementation.

In principle, a variety of ways of expressing the relationship between the anisotropic Gaussian kernel weights and the image structure tensor eigenvalues, ρ_{1,2} ~ λ_{1,2}, can be considered. In this work we restrict ourselves to kernels that essentially have a single-pixel width along a local edge. In practice, this is realized by setting the anisotropic weight in the direction of the edge directly proportional to the ratio of the local structure tensor eigenvalues, ρ_1 = 0.8·λ_1/λ_2, and keeping the orthogonal anisotropic weight constant, ρ_2 = 1.25. In the case of a poorly conditioned local structure tensor (λ_1 = λ_2 = 0), both weights are kept equal: ρ_{1,2} = 0.8.

The computation time for processing the USAF 1951 resolution chart sequence with the STOIQ lucky patch algorithm implemented in MATLAB is 46 seconds per frame on a Xeon 3.47GHz, 6-core system with 12GB of RAM. This constitutes an approximately 20% increase in execution time compared to the standard lucky patch implementation. For comparison, the bispectrum algorithm execution time for the full resolution chart sequence is 25 sec on a Core i7 3.4GHz, 4-core system with 8GB of RAM.

The significance of kernel anisotropy for IQ estimation can be demonstrated by considering sharpness map estimation of the mean image obtained from the line-pattern fragment sequence [Fig. 4(a)]. In the isotropic kernel case, reduction of Gaussian size represents a way to obtain a sharper, more focused and hence more desirable IQ map at the price of less averaging [compare Fig. 4(c) and 4(e)]. However, Fig. 4(c) demonstrates that even in the limit of single-pixel kernel it cannot offer an IQ map that would contain fully resolved edge structure of the processed image. Instead the result shows inconsistent bright lines, from which computational instabilities due to lucky patch fusion tend to develop [Fig. 5(c)]. Increasing isotropic averaging by using larger kernel size helps to resolve higher spatial frequencies (on the right part of the image) but fails to map correctly the lower frequency content [Fig. 4(d)]. Further increase of the kernel size leads to blurring of the high frequencies but yet does not produce correct edges of the lower-frequency line group [Fig. 4(e)]. In contrast, anisotropic kernel adapting to edge direction captures edges and resolves all spatial frequencies that are present in the mean underwater image [Fig. 4(b)].

 

Fig. 4 IQ estimation using the sequence mean of the line-pattern image fragment (a) and employing (b) the STOIQ algorithm with adaptive kernel (two example kernels and their locations are shown on the left of the image); (c) isotropic Gaussian with σ = 0.4; (d) isotropic Gaussian with σ = 0.8; (e) isotropic Gaussian with σ = 1.5 (shapes of the kernels are shown above the images).


 

Fig. 5 Restoration results of the line-pattern sequence (a) the mean of the sequence; restored image using (b) bispectrum technique; (c) standard lucky patch algorithm; (d) lucky patch algorithm employing STOIQ.


The ability of the STOIQ metric to resolve a wide range of spatial frequencies is emphasized by comparison of the line-pattern sequence restoration results to the ground-truth image collected without turbulence in Fig. 5. All four groups of lines are fully restored by the lucky patch algorithm utilizing the STOIQ metric, while the mean image is missing high spatial frequencies. Images obtained with the bispectrum and the standard lucky patch techniques exhibit only partial recovery of the high-frequency content. It is evident that the bispectrum technique misjudges the amplitudes of the recovered frequencies, resulting in a skewed image histogram [Fig. 5(b)]. The lucky patch algorithm employing the standard IQ metric suffers from numerical instabilities arising from lucky region fusion based on sharpness estimation with an isotropic kernel [Fig. 5(c)]. All processed images contain some distortions in the second highest spatial frequency column (1250 cyc/rad), which is due to lack of averaging, i.e., an insufficient number of frames to attain distortionless average content in this particular frequency range.

The nearly periodic nature of the imaged content offers an opportunity to evaluate restoration results in the Fourier domain by computing vertical power spectra of the restored images (Fig. 6). The advantage of such analysis is its lower reliance on a distortion-free reference image when the expected spectral content is well defined. Figure 6 presents power spectral densities computed as squared absolute values of the 2D Fourier transforms of the restored images and averaged over the three lowest spatial frequencies in the horizontal direction. Fourier transforms are performed on images with subtracted means, normalized to their standard deviations. The lucky patch algorithm using the standard IQ metric is excluded from this analysis for clarity of the figure and due to its strong image distortions. The two lowest spatial frequencies, 312.5 cyc/rad and 625 cyc/rad, are well resolved in all images in Figs. 5(a)–5(d), including the temporal mean, and this is confirmed by the two corresponding peaks in Fig. 6. The degree of high spatial frequency restoration achieved by each algorithm can be computed by integrating each power spectral density around the known spatial frequency values of the line groups. Around the 1250 cyc/rad spatial frequency, the bispectrum method achieves 7 times amplification and the lucky patch algorithm using the STOIQ metric achieves 3 times amplification compared to the mean image signal. The lower amplification factor of the lucky patch algorithm is due to distortions in this particular spectral range (mentioned above) that spread the boost to other frequencies in two dimensions. The broadening of the spectral peak observed in the bispectrum result can be attributed to the absence of motion compensation in the applied algorithm. Analysis of the highest frequency (2500 cyc/rad) restoration shows that the bispectrum method achieves 4 times amplification and the lucky patch algorithm using the STOIQ metric achieves 6 times amplification with respect to the frequency content present in the mean image. This is fully consistent with the conclusions based on visual observation of the images produced by the two algorithms (Fig. 5).

 

Fig. 6 Vertical power spectra of restored images, the mean image and the ground-truth image.


Direct analysis of restored image quality is performed using two image comparison metrics: the structural similarity index measure (SSIM) [23] and the normalized mutual information (NMI) [24]. The former estimates perceived differences in local structural information of the compared images, i.e., it evaluates the local relationship between pixel intensities, and the latter estimates the accuracy of image alignment. Thus these measures assess different aspects of image comparison. SSIM is defined between two patches P_i and P_j of the compared images as,

\mathrm{SSIM}_{P_{i,j}} = \frac{\left( 2\mu_i \mu_j + a \right)\left( 2\sigma_{i,j} + b \right)}{\left( \mu_i^2 + \mu_j^2 + a \right)\left( \sigma_i^2 + \sigma_j^2 + b \right)},   (11)
where μ_i and μ_j are the means of the pixel intensities in the patches P_i and P_j, respectively, σ_i² and σ_j² are the variances of the pixel intensities in the same patches, and σ_{i,j} is their covariance. The parameters stabilizing the division are taken as a = 10^{-4} L² and b = 9×10^{-4} L², where L is the dynamic range of the pixel values. The final SSIM score is the mean of the scores computed at all locations of the image.
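A direct transcription of the patch-level SSIM formula above (illustrative sketch; the full metric averages this over all patch locations):

```python
import numpy as np

def ssim_patch(p1, p2, L=1.0):
    """SSIM between two image patches with the stabilizing constants
    a = 1e-4 * L^2 and b = 9e-4 * L^2 given in the text."""
    a, b = 1e-4 * L ** 2, 9e-4 * L ** 2
    mu1, mu2 = p1.mean(), p2.mean()
    v1, v2 = p1.var(), p2.var()
    cov = ((p1 - mu1) * (p2 - mu2)).mean()
    return ((2 * mu1 * mu2 + a) * (2 * cov + b)) / \
           ((mu1 ** 2 + mu2 ** 2 + a) * (v1 + v2 + b))
```

Identical patches score exactly 1, and the score decreases as the local structure of the two patches diverges.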

NMI is determined as the ratio of the sum of the two image entropies H_{1,2}(I_{1,2}) = −Σ_i p_i log₂ p_i to the joint entropy H(I_1, I_2) = −Σ_{i,k} p_{i,k} log₂ p_{i,k},

\mathrm{NMI}_{1,2} = \frac{H_1 + H_2}{H},   (12)
where p_i is the probability of occurrence of intensity i at a pixel in image I, computed from the image histogram, and p_{i,k} is the joint probability of intensity i in image I_1 and intensity k in image I_2.
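A histogram-based sketch of the NMI computation (the bin count is an assumption, not specified in the text):

```python
import numpy as np

def nmi(img1, img2, bins=32):
    """Normalized mutual information (H1 + H2) / H12, with entropies
    estimated from marginal and joint intensity histograms."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pij = joint / joint.sum()
    pi, pj = pij.sum(axis=1), pij.sum(axis=0)  # marginal distributions

    def entropy(p):
        p = p[p > 0]  # drop empty bins to avoid log(0)
        return -np.sum(p * np.log2(p))

    return (entropy(pi) + entropy(pj)) / entropy(pij.ravel())
```

For identical images the joint histogram is diagonal, so H12 = H1 = H2 and NMI reaches its maximum value of 2; independent images give a lower score.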

The fragment of the line-pattern sequence used for algorithm comparison represents a slightly reduced version of the image shown in Fig. 4(a) (see Fig. 3). It contains only two sets of lines, with spatial frequencies of 1250 cyc/rad and 2500 cyc/rad. The dynamic range of the image restored by the bispectrum was observed to be an order of magnitude higher than those of the images produced by all other techniques; therefore, for comparison purposes, it was cropped by 25% at its top and bottom limits. Also, an additional final deblurring step using Deblurring BM3D (DEBBM3D) [25] is added to the lucky patch processing for quantitative comparison of its results to those obtained with the other algorithms, all of which include a deconvolution operation. The DEBBM3D options are a Gaussian deblurring kernel with σ = 1 and a noise variance of 0.06.

Restored images obtained from the line-pattern sequence are presented in Fig. 7. The image obtained with the lucky patch approach utilizing the standard IQ metric exhibits definite signs of numerical instability, resulting in restoration failure [Fig. 7(c)]. The two-stage technique offers reasonable restoration of low spatial frequencies but completely erases high spatial frequency information [Fig. 7(d)]. Both the MC blind deconvolution and bispectrum algorithms produce random patches of restored content. Quantitative comparison of the restoration results in Table 1 shows that these algorithms provide minimal improvement of the SSIM score and no improvement of the NMI score versus the scores of the temporal average image. In contrast, the lucky patch algorithm employing the STOIQ metric demonstrates restoration of both high- and low-frequency content, corroborated by an increase in both SSIM and NMI scores [see Fig. 7(g) and Table 1].

 

Fig. 7 Restoration results of line-pattern fragment sequence: (a) underwater image collected without turbulence; (b) the mean of the sequence; and restored image using (c) standard lucky patch algorithm; (d) two-stage reconstruction algorithm for seeing through water; (e) MC blind deconvolution via fast alternating minimization; (f) bispectrum technique; (g) lucky patch algorithm employing STOIQ.



Table 1. Line-pattern Results

The restoration results for the USAF sequence are presented in Fig. 8 and compared quantitatively in Table 2 using SSIM and NMI scores. Similarly to the previous experiment, the standard lucky patch technique fails to improve image quality [Fig. 8(c)]. The two-stage restoration algorithm [Fig. 8(d)] modestly improves image resolution compared to simple temporal averaging [Fig. 8(b)]. The improvement involves better visibility of at least one vertical line group on the right of the image and a marginal increase of the SSIM score. The MC blind deconvolution technique produces a sharper image than the two-stage approach, albeit at the price of added noise due to deconvolution inaccuracies [see Figs. 8(d) and 8(e)]. As a result, the SSIM and NMI scores for these methods are equal. Despite the lower scores of the bispectrum result, which are due to severe high-frequency intensity modulation, the method demonstrates very sharp image features and resolves at least one more vertical and two more horizontal line groups. Real improvement of the SSIM and NMI scores compared to those of temporal averaging is obtained only by the lucky patch algorithm augmented with the STOIQ metric (see Table 2). The algorithm produces a low-noise image with three more vertical and two more horizontal resolved line groups compared to the mean image. However, some of the edges are not as sharp as those obtained by the bispectrum algorithm. This can be expected, because algorithmically the lucky patch technique does not attempt precise estimation of the blurring kernel for deconvolution and does not aim to attain the diffraction limit.

 

Fig. 8 Restoration results of the USAF sequence: (a) underwater image collected with minimal underwater turbulence; (b) the mean of the sequence; and restored image using (c) standard lucky patch algorithm; (d) two-stage reconstruction algorithm for seeing through water; (e) MC blind deconvolution via fast alternating minimization; (f) bispectrum technique; (g) lucky patch algorithm employing STOIQ.



Table 2. USAF Resolution Chart Results

The overall improvement in image quality attained by the proposed algorithm over temporal averaging is higher when measured by SSIM than by NMI (Tables 1 and 2). This is because the lucky patch algorithm successfully restores high-frequency content, improving the local image structure that SSIM estimates. In contrast, the improvement in registration accuracy, which NMI measures, is less significant, both because the restored high-frequency content has little impact on registration and because the temporal mean is used as the starting point for the lucky patch reconstruction.
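For reference, the two scores can be computed along the following lines. This is an illustrative NumPy sketch, not the authors' code: it uses a global single-window SSIM (the published SSIM of Wang et al. averages this quantity over local windows) and a joint-histogram NMI; the constants a and b and the bin count are assumed values.

```python
import numpy as np

def ssim_global(i, j, a=1e-4, b=9e-4):
    """Single-window SSIM between two images using global statistics.

    a and b are small stabilizing constants (assumed values); the full
    SSIM index averages this quantity over local sliding windows.
    """
    i, j = np.asarray(i, float), np.asarray(j, float)
    mu_i, mu_j = i.mean(), j.mean()
    var_i, var_j = i.var(), j.var()
    cov = ((i - mu_i) * (j - mu_j)).mean()
    return ((2 * mu_i * mu_j + a) * (2 * cov + b)) / (
        (mu_i ** 2 + mu_j ** 2 + a) * (var_i + var_j + b))

def nmi(i, j, bins=32):
    """Normalized mutual information (H1 + H2) / H12 from a joint histogram."""
    hist, _, _ = np.histogram2d(np.ravel(i), np.ravel(j), bins=bins)
    p = hist / hist.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)  # marginal distributions

    def ent(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    return (ent(px) + ent(py)) / ent(np.ravel(p))
```

For identical images SSIM evaluates to 1 and NMI to its maximum of 2; for unrelated images NMI approaches its lower bound of 1, which is why a restored frame is compared against the turbulence-free reference in Tables 1 and 2.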

6. Summary

In summary, an algorithm for the restoration of imagery degraded by underwater turbulence is proposed. The key enabling step of the presented technique is a novel approach to image quality estimation based on an adaptive anisotropic averaging kernel. Structure tensor-guided filtering of the image derivatives yields a correct map of image sharpness, which enables successful fusion of lucky regions from different images degraded by underwater turbulence.
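The structure-tensor step can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: a simple 3×3 box average stands in for the smoothing of the tensor components, and the closed-form eigen-analysis of the 2×2 tensor yields the local edge orientation and a coherence (anisotropy) measure that would steer the adaptive kernel.

```python
import numpy as np

def structure_tensor_orientation(img):
    """Per-pixel structure tensor S = grad(I) grad(I)^T (smoothed) and its
    dominant edge orientation and anisotropy. Illustrative sketch only."""
    img = np.asarray(img, float)
    iy, ix = np.gradient(img)  # image derivatives along rows, columns

    def smooth(a, k=3):
        # k x k box average with edge padding (stand-in for Gaussian smoothing)
        pad = np.pad(a, k // 2, mode="edge")
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += pad[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    jxx, jyy, jxy = smooth(ix * ix), smooth(iy * iy), smooth(ix * iy)
    # closed-form eigenvalues of the symmetric 2x2 tensor
    trace = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    lam1, lam2 = (trace + disc) / 2, (trace - disc) / 2
    phi = 0.5 * np.arctan2(2 * jxy, jxx - jyy)  # dominant gradient orientation
    coherence = np.where(trace > 0, (lam1 - lam2) / (trace + 1e-12), 0.0)
    return phi, coherence
```

On a pure horizontal intensity ramp (vertical edges) the orientation is 0 and the coherence is 1 everywhere, so an orientation-aware kernel would elongate along the edges rather than blur across them.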

Numerical experiments demonstrate that the proposed approach resolves a wider range of spatial frequencies than other state-of-the-art techniques. An important feature of the presented algorithm is its ability to handle large amounts of data (long sequences of relatively large images), in contrast to the tested MC blind deconvolution and two-stage restoration techniques. The speckle imaging approach shares this computational economy and exhibits the potential to recover high frequencies; however, it relies on a turbulence model that is still under development for the underwater environment.
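The phase-recovery recursion at the heart of the speckle/bispectrum comparison method, arg O(u+v) = arg O(u) + arg O(v) − arg B(u,v), can be illustrated in one dimension. The sketch below is a single-realization toy, not the full technique: it omits the ensemble averaging over frames and the transfer-function model that the real method requires, and simply checks that the recursion reproduces the Fourier phases of a signal up to the usual linear (shift) ambiguity.

```python
import numpy as np

def bispectrum_phase_1d(signal_fft):
    """Recover Fourier phases recursively from the 1-D bispectrum
    B(u, v) = I(u) I(v) conj(I(u+v)). Single-realization sketch only."""
    n = len(signal_fft)

    def bispec(u, v):
        return signal_fft[u] * signal_fft[v] * np.conj(signal_fft[(u + v) % n])

    phase = np.zeros(n)
    phase[0] = 0.0  # DC phase: zero for a non-negative signal
    phase[1] = 0.0  # first phase fixed by choice of origin (shift ambiguity)
    for k in range(2, n):
        # u = k-1, v = 1: arg O(k) = arg O(k-1) + arg O(1) - arg B(k-1, 1)
        phase[k] = phase[k - 1] + phase[1] - np.angle(bispec(k - 1, 1))
    return phase
```

Because phase[1] is pinned to zero, the recovered phases equal the true ones minus a linear ramp k·arg O(1), i.e. the object is reconstructed up to an irrelevant spatial shift.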

Acknowledgments

This research was supported by the Naval Research Laboratory and is part of the project ATLIMIS (Atmospheric Limitations of Military Systems, No. E/UR1M/9A265/AF170), commissioned and sponsored by the WTD91 (Technical Centre of Weapons and Ammunition) of the German Armed Forces.

References and links

1. B. Cochenour, L. Mullen, and J. Muth, “Modulated pulse laser with pseudorandom coding capabilities for underwater ranging, detection, and imaging,” Appl. Opt. 50(33), 6168–6178 (2011). [CrossRef]   [PubMed]  

2. L. Mullen, A. Laux, and B. Cochenour, “Propagation of modulated light in water: implications for imaging and communications systems,” Appl. Opt. 48(14), 2607–2612 (2009). [CrossRef]   [PubMed]  

3. B. Ouyang, F. R. Dalgleish, F. M. Caimi, T. E. Giddings, W. Britton, A. K. Vuorenkoski, and G. Nootz, “Compressive line sensing underwater imaging system,” Opt. Eng. 53(5), 051409 (2014). [CrossRef]  

4. W. Hou, “A simple underwater imaging model,” Opt. Lett. 34(17), 2688–2690 (2009). [CrossRef]   [PubMed]  

5. S. R. Restaino, W. Hou, A. V. Kanaev, S. Matt, and C. Font, “Adaptive optics correction of a laser beam propagating under-water,” Proc. SPIE 9083, 90830 (2014).

6. L. Lu, X. Ji, and Y. Baykal, “Wave structure function and spatial coherence radius of plane and spherical waves propagating through oceanic turbulence,” Opt. Express 22(22), 27112–27122 (2014). [CrossRef]   [PubMed]  

7. A. V. Kanaev, W. Hou, S. Woods, and L. N. Smith, “Restoration of turbulence degraded underwater images,” Opt. Eng. 51(5), 057007 (2012). [CrossRef]  

8. Y. Miao, “Underwater image adaptive restoration and analysis by turbulence model,” in Proceedings of IEEE World Congress on Information and Communication Technologies (IEEE, 2012), pp. 1182–1187. [CrossRef]  

9. Z. Wen, A. Lambert, D. Fraser, and H. Li, “Bispectral analysis and recovery of images distorted by a moving water surface,” Appl. Opt. 49(33), 6376–6384 (2010). [CrossRef]   [PubMed]  

10. O. Oreifej, G. Shu, T. Pace, and M. Shah, “A two-stage reconstruction approach for seeing through water,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (IEEE, 2011), pp. 1153–1160. [CrossRef]  

11. M. A. Vorontsov, “Parallel image processing based on an evolution equation with anisotropic gain: integrated optoelectronic architectures,” J. Opt. Soc. Am. A 16(7), 1623–1637 (1999).

12. M. A. Vorontsov and G. W. Carhart, “Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images,” J. Opt. Soc. Am. A 18(6), 1312–1324 (2001). [CrossRef]   [PubMed]  

13. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, “Automated video enhancement from a stream of atmospherically distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630 (2009).

14. D. D. Baumgartner and B. J. Schachter, “Improving FLIR ATR performance in a turbulent atmosphere with a moving platform,” Proc. SPIE 8391, 839103 (2012).

15. D. Mitzel, T. Pock, T. Schoenemann, and D. Cremers, “Video super resolution using duality based TV-L1 optical flow,” Lect. Notes Comput. Sci. 5748, 432–441 (2009).

16. E. P. Simoncelli and H. Farid, “Steerable wedge filters for local orientation analysis,” IEEE Trans. Image Process. 5(9), 1377–1382 (1996). [CrossRef]   [PubMed]  

17. W. Hou, S. Woods, E. Jarosz, W. Goode, and A. Weidemann, “Optical turbulence on underwater image degradation in natural environments,” Appl. Opt. 51(14), 2678–2686 (2012). [CrossRef]   [PubMed]  

18. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22(24), 4028–4037 (1983). [CrossRef]   [PubMed]  

19. F. Sroubek and P. Milanfar, “Robust multichannel blind deconvolution via fast alternating minimization,” IEEE Trans. Image Process. 21(4), 1687–1700 (2012). [CrossRef]   [PubMed]  

20. S. Gładysz, “Estimation of turbulence strength directly from target images,” in Propagation Through and Characterization of Distributed Volume Turbulence, OSA Technical Digest (Optical Society of America, 2013), paper JW1A.4.

21. A. Labeyrie, “Attainment of diffraction-limited resolution in large telescopes by Fourier-analyzing speckle patterns in star images,” Astron. Astrophys. 6, 85–87 (1970).

22. S. Gładysz, R. B. Gallé, R. Johnson, and L. Kann, “Image reconstruction of extended objects: demonstration with the Starfire optical range 3.5m telescope,” Proc. SPIE 8535, 85350 (2012).

23. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13(4), 600–612 (2004). [CrossRef]   [PubMed]  

24. D. Rueckert, L. I. Sonoda, C. Hayes, D. L. G. Hill, M. O. Leach, and D. J. Hawkes, “Nonrigid registration using free-form deformations: application to breast MR images,” IEEE Trans. Med. Imaging 18(8), 712–721 (1999). [CrossRef]   [PubMed]  

25. K. Dabov, A. Foi, and K. Egiazarian, “Image restoration by sparse 3D transform-domain collaborative filtering,” Proc. SPIE 6812, 681207 (2008).


Supplementary Material (3)

» Visualization 1: MPG (12808 KB)
» Visualization 2: AVI (934 KB)
» Visualization 3: AVI (2190 KB)



Figures (8)

Fig. 1 Imaging through underwater turbulence in the laboratory tank (Visualization 1).
Fig. 2 Imaging through underwater turbulence in a natural environment (Visualization 2).
Fig. 3 Fragment of an image sequence collected through underwater turbulence in the laboratory tank (Visualization 3).
Fig. 4 IQ estimation using the sequence mean of a line-pattern image fragment (a) and employing (b) the STOIQ algorithm with adaptive kernel (two example kernels and their locations are shown on the left of the image); (c) isotropic Gaussian with σ = 0.4; (d) isotropic Gaussian with σ = 0.8; (e) isotropic Gaussian with σ = 1.5 (shapes of the kernels are shown above the images).
Fig. 5 Restoration results of the line-pattern sequence: (a) the mean of the sequence; restored image using (b) bispectrum technique; (c) standard lucky patch algorithm; (d) lucky patch algorithm employing STOIQ.
Fig. 6 Vertical power spectra of the restored images, the mean image, and the ground-truth image.
Fig. 7 Restoration results of the line-pattern fragment sequence: (a) underwater image collected without turbulence; (b) the mean of the sequence; and restored image using (c) standard lucky patch algorithm; (d) two-stage reconstruction algorithm for seeing through water; (e) MC blind deconvolution via fast alternating minimization; (f) bispectrum technique; (g) lucky patch algorithm employing STOIQ.
Fig. 8 Restoration results of the USAF sequence: (a) underwater image collected with minimal underwater turbulence; (b) the mean of the sequence; and restored image using (c) standard lucky patch algorithm; (d) two-stage reconstruction algorithm for seeing through water; (e) MC blind deconvolution via fast alternating minimization; (f) bispectrum technique; (g) lucky patch algorithm employing STOIQ.


Equations (12)

$$J = \int \left| \nabla^{n} I(\mathbf{r}') \right|^{m} G(\mathbf{r} - \mathbf{r}')\, d\mathbf{r}', \qquad m = 1, 2;\ n = 1, 2 \tag{1}$$

$$S = \nabla I\, \nabla I^{T} = \begin{pmatrix} I_x^2 & I_x I_y \\ I_x I_y & I_y^2 \end{pmatrix} \tag{2}$$

$$G_{\varphi,\rho} = \exp\left\{ -\frac{\left( x\cos\varphi + y\sin\varphi \right)^2}{2\rho_2^2} - \frac{\left( x\sin\varphi - y\cos\varphi \right)^2}{2\rho_1^2} \right\} \tag{3}$$

$$I_{n+1}^{\,r} = I_{n}^{\,r} - \gamma_n \left( I_{n}^{\,r} - I_{n} \right), \qquad n = 1 \ldots N \tag{4}$$

$$\gamma_n(\mathbf{r}) = K \begin{cases} J_n - J_n^{\,r}, & \text{if } J_n > J_n^{\,r} \\ 0, & \text{if } J_n < J_n^{\,r} \end{cases} \tag{5}$$

$$\left\langle \left| I(\mathbf{u}) \right|^2 \right\rangle_N = \left| O(\mathbf{u}) \right|^2 \left\langle \left| H(\mathbf{u}) \right|^2 \right\rangle_N \tag{6}$$

$$B_I(\mathbf{u}, \mathbf{v}) = I(\mathbf{u})\, I(\mathbf{v})\, I^{*}(\mathbf{u} + \mathbf{v}) \tag{7}$$

$$\left\langle B_I(\mathbf{u}, \mathbf{v}) \right\rangle_N = O(\mathbf{u})\, O(\mathbf{v})\, O^{*}(\mathbf{u} + \mathbf{v}) \left\langle B_H(\mathbf{u}, \mathbf{v}) \right\rangle_N \tag{8}$$

$$O^{*}(\mathbf{u} + \mathbf{v}) = \frac{\left\langle B_I(\mathbf{u}, \mathbf{v}) \right\rangle_N}{O(\mathbf{u})\, O(\mathbf{v}) \left\langle B_H(\mathbf{u}, \mathbf{v}) \right\rangle_N} \tag{9}$$

$$\arg\left[ O(\mathbf{u} + \mathbf{v}) \right] = \arg\left[ O(\mathbf{u}) \right] + \arg\left[ O(\mathbf{v}) \right] - \arg\left[ B_I(\mathbf{u}, \mathbf{v}) \right] \tag{10}$$

$$\mathrm{SSIM}_P^{\,i,j} = \frac{\left( 2\mu_i \mu_j + a \right)\left( 2\sigma_{i,j} + b \right)}{\left( \mu_i^2 + \mu_j^2 + a \right)\left( \sigma_i^2 + \sigma_j^2 + b \right)} \tag{11}$$

$$\mathrm{NMI}_{1,2} = \frac{H_1 + H_2}{H} \tag{12}$$
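The lucky-patch update rule above, in which the restored image I^r is pulled toward frame I_n wherever that frame's quality map J_n exceeds the restored image's own quality J^r, can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation: the quality bookkeeping for the evolving restored image is simplified to a running maximum (an assumption; in the paper the metric is recomputed from the restored image itself).

```python
import numpy as np

def lucky_fusion(frames, quality, K=0.5, n_iter=1):
    """Minimal sketch of lucky-patch fusion.

    frames  : list of 2-D arrays (short-exposure images)
    quality : list of matching per-pixel quality maps J_n
    K       : fusion gain constant
    """
    i_r = np.mean(frames, axis=0)   # start from the temporal mean
    j_r = np.mean(quality, axis=0)  # stand-in for the quality of i_r
    for _ in range(n_iter):
        for i_n, j_n in zip(frames, quality):
            # gain is zero wherever the frame is not "luckier" than i_r
            gamma = K * np.maximum(j_n - j_r, 0.0)
            i_r = i_r - gamma * (i_r - i_n)   # pull toward the lucky frame
            j_r = np.maximum(j_r, j_n)        # simplified quality update
    return i_r
```

With two constant frames of quality 1 and 0, the result moves from the temporal mean toward the high-quality frame and ignores the low-quality one, which is the qualitative behavior the update rule is designed to produce.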
