Optica Publishing Group

Imaging through strong turbulence with a light field approach

Open Access

Abstract

Under strong turbulence conditions, images of an object can be severely distorted and become unrecognizable throughout the observing time. Conventional image restoration algorithms do not perform effectively in these circumstances because good references on the object are lost. We propose using a plenoptic sensor as a light field camera to map a conventional camera image onto a cell image array in the image’s sub-angular spaces. Each cell image on the plenoptic sensor is then equivalent to the image acquired by a sub-aperture of the imaging lens, and the wavefront distortion over the lens aperture can be analyzed by comparing cell images on the plenoptic sensor. By using a modified “Laplacian” metric, we can identify a good cell image in a plenoptic image sequence. The good cell image corresponds to the time and the sub-aperture area on the imaging lens where the wavefront distortion becomes relatively and momentarily “flat”. As a result, it reveals the fundamental truths of the object that would be severely distorted on normal cameras. In this paper, we introduce the underlying physics principles and mechanisms of our approach and experimentally demonstrate its effectiveness under strong turbulence conditions. In application, our approach can provide a good reference for conventional image restoration approaches under strong turbulence conditions, and it can also serve as an independent device for performing object recognition tasks through severe turbulence distortions.

© 2016 Optical Society of America

1. Introduction

Atmospheric and water turbulence distorts the point spread functions of conventional imaging systems [1]. As a result, the acquired images exhibit time-dependent blurring, twisting and shimmering effects. Under strong distortion conditions, objects can even become visually unrecognizable [2].

Software approaches have been developed and successfully demonstrated over the past decades in solving image distortion problems under weak or moderate turbulence conditions [3]. In general, the software approaches will analyze an image sequence taken over time for a fixed target. “Bad” frames with relatively large geometric distortions or blurring effects will be identified and discarded as they are not shared by the majority of the frames. The remaining good frames with small geometric distortions and blurring effects will be co-added (fused) to suppress noise and facilitate blind deconvolution [4,5] so that a sharp image can be produced. Alternatively, software can also be used to implement the “lucky imaging” method [6,7] which selects the good frames that match with known features on the target (such as sharp edges/corners). However, under strong turbulence conditions, the number of common features between good frames will be severely reduced, making it difficult to distinguish good frames from bad ones [8]. There can also be severe visual distortions on the target, making it impossible to identify certain features and to apply the “lucky imaging” method [9].

Light field cameras have been developed over the past few years [10] to record the individual rays that form a conventional 2D image and to facilitate ray-based image editing. For example, the Lytro camera [11] can generate images with different focal depths from a single shot. Light field imaging principles, in a broad sense, suggest new ways of solving the problem of imaging through strongly turbulent channels. Intuitively, a few groups of deviated rays will jeopardize the entire image on a conventional camera, whereas a light field camera has the potential to identify the bad rays and remove them from the rendering of the final image.

Based on the light field principles, we propose the use of a plenoptic sensor [12] to solve severe image distortion problems under strong turbulence conditions. Under these strong turbulence conditions, we assume that the viewer does not have any information about the target and that the severe image distortion causes the target to become unrecognizable for most of the time. The goal of our approach is to recover at least one image that reveals the fundamental truths of the target under the severe turbulence condition.

In our approach, we replace the image sensor in a conventional camera with a plenoptic sensor. In this design, the rays that form a conventional image are sub-sampled in the angular space and mapped into a plenoptic image in the form of a cell image array. Equivalently, each cell image on the plenoptic sensor is formed by the incident rays arriving at a particular sub-aperture of the imaging lens. Therefore, the distorted wavefront over the full aperture of the imaging lens can be evaluated by sub-aperture areas. If a good sub-aperture can be found due to a momentarily “flat” wavefront, the resulting good cell image can be used to reveal the fundamental truths of the target object. Our approach independently restores a true view of a severely distorted object under strong turbulence conditions with a simple and effective image metric, at the cost of overall image resolution.

In this manuscript, we first describe, in part 2, the fundamental principles for improving imaging results under turbulence conditions based on Fried’s work. The mechanism of the plenoptic sensor, as well as the procedure for finding the good cell image, is discussed in part 3. In part 4, we demonstrate the experimental results that verify our approach and compare its effectiveness with image processing results from a normal camera. Part 5 presents the conclusions.

2. Fundamental principles

It is well known [13] that atmospheric turbulence places an upper limit on image resolution under long exposure conditions. In imaging through a turbulent channel, the corresponding coherence length (Fried parameter r0) describes the diameter of a diffraction limited lens that would acquire equivalent resolution in the absence of atmospheric turbulence. Short exposure times provide better resolution under the same turbulence condition, but the improvement is limited [13]. Under non-Kolmogorov assumptions [14], this conclusion does not change significantly, and the Fried parameter can be safely used for estimation. In general, the coherence length r0 follows a −3/5 power law in the path-averaged turbulence strength <Cn2> and the observing distance L. Under strong or deep turbulence conditions, the coherence length of the incoming light wave drops significantly below the aperture diameter of the imaging lens. In order to improve imaging results within the Fried parameter “budget”, two principles can be applied:

  • (1) Use short exposure times, where each frame should be acquired before major changes occur to the turbulent channel.
  • (2) Use an adaptive imaging aperture instead of a rigid aperture to dynamically match the flat (spatially coherent) regions of the wavefront, so that the PSF stays sharp.
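To make the −3/5 power law concrete, the sketch below estimates r0 with the standard plane-wave expression r0 = (0.423 k² Cn² L)^(−3/5); the numerical constant and plane-wave geometry are textbook assumptions, not parameters taken from this experiment.

```python
import math

def fried_parameter(cn2, path_length, wavelength=550e-9):
    """Plane-wave Fried parameter r0 = (0.423 * k^2 * Cn2 * L)^(-3/5),
    with k = 2*pi/lambda. Units: Cn2 in m^(-2/3), lengths in metres."""
    k = 2.0 * math.pi / wavelength
    return (0.423 * k ** 2 * cn2 * path_length) ** (-3.0 / 5.0)
```

Doubling either the path length or the turbulence strength shrinks r0 by the factor 2^(−3/5), which is exactly the power law cited above.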

The first principle is based on the fact that short exposures provide better resolution than long exposures. Intuitively, “lucky imaging” selects the moment when the PSF becomes temporarily sharp to improve the imaging result. The second principle aims at adaptively choosing the best imaging aperture to improve the PSF at a given moment. In other words, if the “lucky imaging” method finds the “luckiest” moment with regard to the first principle, the second principle seeks the “luckiest” imaging aperture. Our plenoptic sensor approach utilizes both improvement principles.

3. Mechanisms

The plenoptic sensor uses a shared objective lens and a microlens array (MLA) to form a mini-Keplerian telescope array. The plenoptic image is acquired at the back focal plane of the MLA. With this structure, the obtained image can be used to analyze the light field that forms the image at the front focal plane of the objective lens [15]. A simplified structure diagram of the plenoptic sensor is shown in Fig. 1.

Fig. 1 Structure diagram of using a plenoptic sensor to analyze image formation.

In Fig. 1, f1 is the focal length of the objective lens and f2 is the focal length of the MLA lenslets. The complex field of the light that forms a conventional image is represented by t1(x,y), where x and y are geometric coordinates in the transverse plane. The objective lens performs a spatial Fourier transform [16] on the field t1(x,y), with fx = u/λf1 and fy = v/λf1 (λ is the central wavelength), to render the light field t2(u,v) at its back focal plane. The transformed field t2(u,v) is also regarded as the angular spectrum of t1(x,y) in Fourier optics [16]. The MLA then sub-samples the light field t2(u,v) and performs a second layer of local Fourier transforms in each lenslet cell to render the light field t3(s,t) at the back focal plane of the MLA. A plenoptic image is obtained by the image sensor, which samples the intensity distribution of t3(s,t). As a result, the plenoptic sensor maps the original image into an image array representing the image’s quantized angular spectra. Intuitively, each MLA lenslet geometrically matches a sub-aperture area of the objective lens. The fan of rays leaving each point on t1(x,y) is divided into an array of smaller fans over the objective lens’s aperture, and each fractional fan of rays is collected by the matching MLA lenslet to form a local point spread function in its cell image.
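The frequency-domain mapping fx = u/λf1 above can be made concrete with a one-line helper; the function name is ours, and the 300 μm step used in the example simply matches the lenslet pitch described in part 4.

```python
def spatial_frequency(u, wavelength=550e-9, f1=0.150):
    """Angular-spectrum coordinate sampled at transverse position u (metres)
    on the MLA plane: fx = u / (lambda * f1), in cycles per metre."""
    return u / (wavelength * f1)

# Moving one 300 um lenslet pitch across the MLA plane steps the sampled
# spatial frequency by ~3.6e3 cycles/m for f1 = 150 mm and lambda = 550 nm.
step = spatial_frequency(300e-6)
```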

In general, without turbulence distortion, the plenoptic sensor duplicates the original image into an image array in which each cell image is identical to every other. When turbulence is involved, the cell images begin to differ from one another: depending on the distorted wavefront structure over the various sub-aperture regions of the imaging lens, each cell image exhibits different geometric distortions and blurring effects. We define the “deviation” metric between two cell images by the root of the summed squared differences of their pixel values (an RMS-type measure), expressed as:

D(m1,n1; m2,n2) = √( Σi,j [Im1,n1(i,j) − Im2,n2(i,j)]² )   (1)

In Eq. (1), m1 and n1 (or m2 and n2) are the integer indices of a cell image. The summation “Σ” runs over all of the pixels in the cell images, with integer indices i and j representing pixel locations and I representing pixel values (0-255). The cell image deviation between neighboring cells reflects the local uniformity of the wavefront distortion. For example, for a wavefront distortion that resembles a defocus, the cell image deviation is minimized at the center cell and maximized at the edge cells; correspondingly, imaging results at the center cells are better than those at the outer cells. The generation of cell image differences is illustrated in Fig. 2.
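A minimal sketch of Eq. (1), assuming the cell images are 8-bit grayscale arrays handled with NumPy (the function name is ours, not the paper’s):

```python
import numpy as np

def deviation(cell_a, cell_b):
    """Eq. (1): square root of the summed squared pixel differences
    between two cell images of equal shape."""
    diff = cell_a.astype(np.float64) - cell_b.astype(np.float64)
    return float(np.sqrt(np.sum(diff ** 2)))
```

The metric is zero only when the two cells are pixel-for-pixel identical, which is the no-turbulence baseline described above.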

Fig. 2 Illustration diagram of cell image difference generated by turbulence distortion in various regions.

In Fig. 2, the target object is represented by the blue solid arrow pointing upward, and its image is represented by the blue dashed arrow pointing downward on the right-hand side of the imaging lens. In the absence of turbulence distortion, the gray dashed lines profile the fan of light rays diverging from the tip of the target object and converging to its image point after the lens. The light field that forms the image point is further analyzed by the plenoptic sensor in its sub-angular space, as illustrated at the right side of Fig. 2. When turbulence is involved, as shown by the red line segments representing a non-flat wavefront, the conventional image (the blue dashed arrow) collects the distortion over the entire aperture of the imaging lens. By comparison, the sub-aperture images on the plenoptic sensor are only affected by the distortion in their respective areas. The “deviation” metric, in a broad sense, measures how differently the wavefront is tilted across the sub-aperture areas of the imaging lens.

Under weak turbulence conditions, where the major part of the wavefront is relatively flat, sub-aperture areas with a non-flat wavefront can be indicated by relatively large “deviation” metric values with respect to normal cell images. However, under strong turbulence conditions, where the major part of the wavefront is distorted, the “deviation” metric cannot be used directly to differentiate good cell images from “bad” ones. Since the focus of our approach is to find a good cell image under strong turbulence distortions that corresponds to a momentarily “flat” wavefront in a sub-aperture area, we find second-order cell image differences to be effective. Accordingly, we define the “acceleration” metric as a second-order cell image difference:

a_vertical(m,n,t) = Σi,j [Im+1,n,t(i,j) + Im−1,n,t(i,j) − 2 Im,n,t(i,j)]²   (2)
a_horizontal(m,n,t) = Σi,j [Im,n+1,t(i,j) + Im,n−1,t(i,j) − 2 Im,n,t(i,j)]²   (3)
a_time(m,n,t) = Σi,j [Im,n,t+1(i,j) + Im,n,t−1(i,j) − 2 Im,n,t(i,j)]²   (4)

In Eqs. (2)-(4), we separate the cell image acceleration vector into scalar components along the vertical (a_vertical), horizontal (a_horizontal) and time (a_time) directions, respectively, using the six-neighbor principle (up, down, left, right, before and after) [17]. The integer indices m and n represent the cell index in a plenoptic image, and the integer t represents the frame index in the image sequence along the time dimension. The summations “Σ” in Eqs. (2)-(4) run over all of the pixels in the cell images, with integer indices i and j representing pixel locations and I representing pixel values (0-255).
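Eqs. (2)-(4) share one computational pattern, a summed squared second difference along a single dimension; a hedged NumPy sketch (helper name ours), where the caller supplies the two neighboring cells for whichever dimension is being evaluated:

```python
import numpy as np

def acceleration(prev_cell, next_cell, center_cell):
    """Eqs. (2)-(4): sum of squared second differences along one
    dimension (vertical, horizontal, or time), given the two
    neighboring cell images and the center cell image."""
    second_diff = (prev_cell.astype(np.float64)
                   + next_cell.astype(np.float64)
                   - 2.0 * center_cell.astype(np.float64))
    return float(np.sum(second_diff ** 2))
```

When the three cells are identical (a locally uniform wavefront tilt), the metric vanishes; it grows when the center cell differs sharply from both neighbors.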

In order to find a stationary sub-aperture area on the imaging lens where the wavefront is relatively “flat”, we search for specific wavefront distortion patterns in the neighboring sub-aperture areas that are tilted oppositely along the vertical and horizontal directions. Equivalently, this corresponds to the regional wavefront structure featuring a local maximum/minimum/saddle point. A relatively “flat” wavefront can be harvested near these points. Correspondingly, the image “acceleration” metric that matches with these wavefront features along the vertical and horizontal directions should also be large.

We now discuss the “acceleration” metric along the time direction when the above sub-aperture wavefront features are satisfied. Intuitively, the lifetime of these wavefront features should be comparatively short due to the large deviation from homogeneous fluid density. It is well known that in a constant-pressure fluid system, the restoring force toward uniform density [18] counteracts large structural distortions. Without loss of generality, we illustrate in Fig. 3 how a maximally curved wavefront distortion is affected by the restoring force.

Fig. 3 Illustration of how a large curved wavefront distortion is affected by the restoring force.

In Fig. 3, we illustrate the density deviation in each sub-aperture area by the dashed ovals: long dashed ovals represent a much higher density deviation, while short dashed ovals represent small deviations. The symbol “+” denotes that the local fluid density is higher than the channel average, and “−” denotes that it is lower. Each sub-aperture region is marked by its cell image indices, with m representing the geometric location along the vertical direction and t representing the time of the frame. For simplicity, we do not plot the horizontal dimension, which can be analyzed in the same way as the vertical dimension. The diffusion flows generated by the density gradients are represented by the yellow arrows, with their widths indicating the flow volume. The wavefront distortion caused by the regional density difference is shown by the blue solid curve. Because of the diffusion flows, the maximally curved wavefront distortion is not stable; in fact, it changes rapidly over time. The red arrows reflect the moving directions and magnitudes of the local wavefronts. Therefore, the “flat” wavefront condition near the stationary point at time “t” becomes tilted at time “t + 1” due to the large and asymmetric local diffusion flows in Fig. 3. Similarly, a tilted local wavefront near the peak of the wavefront curvature may appear “flat” in the next frame, featuring the formation of another stationary point. This is shown by the local wavefront in cell “m + 1”, which is tilted at time “t” but becomes “flat” at time “t + 1”. Correspondingly, rapid changes in a cell image along the time dimension can be observed, and large “acceleration” metric values along the time dimension can be used to assist the identification of the desired wavefront structures, where a relatively “flat” wavefront area can be captured.

Besides the diffusion flows caused by density gradients, a global convective flow (bulk flow) such as wind also contributes to large “acceleration” metric values along the time dimension for a cell image. Intuitively, for the wavefront distortion at time “t” in Fig. 3, the peak of the wavefront distortion arrives at image cell “m”. If the global flow happens to be shifting upward along the vertical direction, cell “m” samples the wavefront tilt above the peak at time “t − 1” and the wavefront tilt below the peak at time “t + 1”, which leads to a large “acceleration” metric along the time dimension. Therefore, large “acceleration” metric values along the vertical, horizontal and time directions should all be combined to identify the desirable wavefront shape in a sub-aperture area. A good cell image can then be retrieved at the moment and sub-aperture area on the imaging lens where the wavefront is relatively “flat”.

The combined “acceleration” metric for an MLA cell is defined as:

M(m,n,t) = a_vertical(m,n,t) · a_horizontal(m,n,t) · a_time(m,n,t)   (5)

In Eq. (5), we define the combined “acceleration” metric by multiplying the “acceleration” values of each cell along the 3 dimensions. Intuitively, the multiplication treats each dimension more equally than adding them together would. A “lucky” cell image can be automatically identified wherever the combined “acceleration” metric value is maximized. In other words, without knowing what the target might be under severe turbulence distortions, a sub-aperture image can be identified by the proposed metric to reveal the fundamental truths of the object. As discussed above, the “lucky/good” cell image is acquired when the regional wavefront distortion features one of three shapes: the peak of a maximal concave region, the peak of a maximal convex region, or a saddle-point area where the maximum curvatures along the horizontal and vertical dimensions are in opposite directions.
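A quick numerical illustration of why the product in Eq. (5) is preferred over a sum; the specific metric values below are invented purely for illustration:

```python
def combined_metric(a_vert, a_horiz, a_time):
    """Eq. (5): product of the three directional 'acceleration' values.
    The product rewards cells that score well in ALL three dimensions."""
    return a_vert * a_horiz * a_time

# A balanced candidate (moderate acceleration in every direction) beats a
# single-direction outlier under the product, while a plain sum would rank
# them the other way round.
balanced = combined_metric(100, 100, 100)   # 1_000_000
spiky = combined_metric(10_000, 1, 1)       # 10_000
```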

4. Experimental results

The experimental setup used to verify our proposed light field approach for imaging through strong turbulence conditions is shown in Fig. 4.

Fig. 4 Experimental arrangement for imaging through turbulence.

In the experimental arrangement shown in Fig. 4, we use a 50 mm wide and 1.5 m long water tube with heating wires at the bottom to generate severe image distortions. The water tube distortion is combined with the distortion from a hot plate placed at the end of the propagation channel to give high frequency scintillations. The overall length of the channel is 3 m. The imaging lens is a binocular with symmetric branches: a plenoptic sensor is concatenated with one branch and a normal camera with the other. The objective lens for the plenoptic sensor has a 2 inch diameter and a 150 mm focal length. The MLA lenslets have a 300 μm width and a 5.1 mm focal length. The size of the MLA is 10 mm × 10 mm, and the image sensor is 1024 × 1024 pixels with a 5.5 μm pixel size. Due to the limited size of the image sensor, the plenoptic sensor can generate at most 18 × 18 cell images per frame [12]. In the normal camera branch, the imaging lens after the binocular is set near f/4, and the image sensor has the same specifications as in the plenoptic sensor branch. Under the same turbulence conditions, we use a 30 fps recording speed to take 150 frames with the plenoptic sensor branch and the normal camera branch respectively; the high frame rate satisfies the short exposure condition. During the recording process for both branches, most of the frames are affected by large geometric distortions and blurring effects. The strong turbulence created by the water tube and the hot plate results in severe visual distortion and difficulty in object recognition.
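The 18 × 18 cell count quoted above follows directly from the stated sensor and MLA geometry; a quick arithmetic check:

```python
# Geometry from the experimental setup described above.
sensor_pixels = 1024        # pixels per side of the image sensor
pixel_size_um = 5.5         # pixel pitch in micrometres
lenslet_pitch_um = 300.0    # MLA lenslet width in micrometres

sensor_width_um = sensor_pixels * pixel_size_um          # 5632.0 um per side
cells_per_side = int(sensor_width_um // lenslet_pitch_um)  # 18 full cells fit
pixels_per_cell = lenslet_pitch_um / pixel_size_um       # ~54.5 px per cell
```

So each cell image spans roughly 54 × 54 pixels, which is the resolution budget left for a single sub-aperture view.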

In analyzing the plenoptic image sequence, we calculate the combined “acceleration” metric for each cell image (except for boundary cell images, such as those in the first/last frame or at the edge of the MLA) and select the cell image with the largest metric value as the best cell image. The best cell image reflects the sub-aperture area and the moment in time where the wavefront happens to be “flat”, based on the discussion in part 3. The fundamental truths of the target (ignoring small geometric distortions and blurring) can be revealed by the best cell image defined by our metric. The result is shown in Fig. 5.
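The selection procedure can be sketched end to end as follows, assuming the recorded cell images are stacked in a (frames, rows, cols, height, width) NumPy array; boundary cells in space and time are skipped as described above (the function name and array layout are our choices, not the paper’s):

```python
import numpy as np

def best_cell(cells):
    """Search the interior cells of a (T, M, N, H, W) stack for the largest
    combined 'acceleration' metric (Eqs. (2)-(5)); returns ((t, m, n), value).
    Boundary cells in space and time are excluded, as in the paper."""
    cells = cells.astype(np.float64)
    T, M, N = cells.shape[:3]
    best_idx, best_val = None, -1.0
    for t in range(1, T - 1):
        for m in range(1, M - 1):
            for n in range(1, N - 1):
                c = cells[t, m, n]
                a_v = np.sum((cells[t, m + 1, n] + cells[t, m - 1, n] - 2 * c) ** 2)
                a_h = np.sum((cells[t, m, n + 1] + cells[t, m, n - 1] - 2 * c) ** 2)
                a_t = np.sum((cells[t + 1, m, n] + cells[t - 1, m, n] - 2 * c) ** 2)
                val = a_v * a_h * a_t  # Eq. (5)
                if val > best_val:
                    best_idx, best_val = (t, m, n), val
    return best_idx, best_val
```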

Fig. 5 Experimental result of using the proposed image metric to auto select the best image cell.

In Fig. 5, the upper-left plot shows the uniform cell image without turbulence (with the water tube present but the heating wires turned off), which serves as a reference image to test the effectiveness of our proposed algorithm. The best cell image, with the maximal combined “acceleration” metric value, is shown in the upper-right plot. The lower-left plot shows a random cell image selected without considering the metric value. Evidently, the metric helps to identify a good moment and a good sub-aperture area on the imaging lens to select a good cell image of the target. For further comparison, we show the actual best cell image in the lower-right plot, which is obtained through a brute force search for the cell image with the highest correlation with the reference image. The correlation is measured by the correlation coefficient between the reference cell image (no turbulence) and each cell image (affected by strong turbulence) in the plenoptic image sequence. Naturally, the cell image with the highest correlation coefficient resembles the reference cell image best and should be the actual best image. It is notable that the good cell image with the maximal metric value is very close to this optimized imaging result, yet requires no knowledge about the target. On the other hand, the optimized result acquired through the exhaustive brute force search is good but not realistic, as the observer often has limited knowledge of the target; the reference image used by the brute force search is already the ideal answer (no turbulence involved), which makes the search inapplicable to many applications in reality. Similar to the brute force search, other image processing approaches that optimize feature matching to identify the best image can be effective [19,20] against strong turbulence conditions, but they all depend on good guiding references. When the reference knowledge is missing or vague, those approaches often lead to poor results. By comparison, our plenoptic sensor approach directly uses the imaging result to evaluate the wavefront structure and adaptively finds a good sub-aperture area on the imaging lens, as well as a good moment, to take the right image of the target under strong turbulence conditions.

Quantitatively, the correlation coefficient between the metric-selected cell image and the reference cell image is 98.09%, while that between the reference image and the best cell image found by the brute force search is 98.72%. The difference between the metric-selected best cell image and the actual best cell image (by the brute force search) is therefore fairly small (0.63%). By comparison, the averaged correlation coefficient between the reference image and all the recorded cell images is 85.07%, and the correlation coefficient between the reference image and the random cell image in the lower-left plot is 81.02%. This confirms that most of the cell images are bad images with significant distortion. In fact, when the correlation coefficient drops below an empirically determined value of 90%, the image is typically unrecognizable.
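The coefficients quoted here are, we assume, standard Pearson correlation coefficients between pixel arrays, reported as percentages; a minimal sketch:

```python
import numpy as np

def correlation_coefficient(img_a, img_b):
    """Pearson correlation coefficient between two images of equal shape
    (multiply by 100 to express it as a percentage)."""
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))
```

Because the coefficient is invariant to brightness offset and contrast scaling, it compares image structure rather than raw exposure levels.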

Without loss of generality, we measure the image quality of each cell image by its correlation coefficient with the reference cell image. The relation between a cell image’s quality (correlation coefficient with the reference image) and its combined “acceleration” metric value is shown statistically in Fig. 6.

Fig. 6 Scatter plot between the cell image quality and cell image metric value.

In Fig. 6, we use a 2D scatter plot to show how the metric value of a cell image is correlated with its correlation coefficient with the reference cell image. It is evident that cell images with high metric values also have high correlation coefficients with the reference cell image. Therefore, the proposed combined “acceleration” metric is justified as a sufficient condition for indicating good cell images. However, we should also point out that a high metric value is not a necessary condition for a good cell image. In Fig. 6, one can find many good cell images (such as the actual best cell image) that have high correlation coefficients but do not have high metric values. This is because the combined “acceleration” metric only identifies specific wavefront structures that contain a regionally “flat” wavefront; some exceptions, such as a flat wavefront across the entire aperture, are not identified by this metric but still result in good cell images.

Neighboring cell images must be correlated to accurately evaluate the structure of a wavefront distortion. If the wavefront distortions in neighboring sub-aperture areas are uncorrelated, the imaging result in neighboring cells will be independent. In this case, our metric approach will fail as the cell image differences will have no physical meaning. A similar requirement should also be considered in the time dimension because neighboring frames must be taken before turbulence changes significantly. Three rows of cell images sampled along vertical, horizontal and time directions are shown in Fig. 7.

Fig. 7 Neighboring cell images of turbulence distorted target symbol “M” in the recorded plenoptic image sequence: (a) neighboring cell images sampled along vertical direction; (b) neighboring cell images sampled along horizontal direction; (c) neighboring cell images sampled along time direction.

In Fig. 7, we arbitrarily select the cell (m = 4, n = 4, t = 75) as a common image cell and present its neighboring cells along the vertical, horizontal and time dimensions. Two observations can be made from Fig. 7. First, neighboring cell images are correlated in all 3 directions (vertical, horizontal and time), which supports the use of our proposed metric to identify the time and sub-aperture area on the imaging lens for acquiring a good cell image. Second, the image differences between neighboring cells along the three directions are not equal (the differences along the time dimension are more obvious), which supports the use of multiplication to combine the “acceleration” metric values along the 3 directions in Eq. (5).

For further comparison, we use the normal camera branch to image the same target under the same turbulence condition. To avoid complexity (as there are countless ways to realize lucky imaging under different constraints), we use the same brute force search to find the frame with the highest correlation coefficient with the reference image (turbulence off) and use it as the “lucky” frame. Although it is questionable whether such ideal reference information could be retrieved from the heavily distorted image sequence, we assume it can be theoretically achieved and proceed with the brute force search. Naturally, this “lucky” frame should reflect the best moment of an optimized wavefront condition on the normal camera with its rigid lens aperture, and the truths of the target can be best revealed by this frame under severe turbulence distortions. The result is shown in Fig. 8.

Fig. 8 Lucky imaging result on the normal camera branch for the same target under the same turbulence condition.

In Fig. 8, the image with the highest correlation coefficient with the reference image is shown in the left plot; the associated correlation coefficient is 91.22%. The averaged correlation coefficient of the image sequence (150 frames) with the reference image is 61.61%. A randomly selected image and its next frame are shown in the middle and right plots of Fig. 8, respectively. Four observations can be made from the result shown in Fig. 8.

  • (1) Compared with a cell image from the plenoptic sensor, an image from the normal camera is typically much more blurred under the same strong turbulence condition.
  • (2) Neighboring frames on the normal camera are much more visually different than neighboring cell images in the plenoptic sensor.
  • (3) A lucky frame (best frame) that reveals most of the truth of the target object can be found, but some features (like the bottom of the symbol “M”) remain blurred.
  • (4) The “lucky” frame acquired by the normal camera branch reveals sharp features better than a good cell image from the plenoptic sensor branch.

Observations (1) and (2) can be explained in terms of geometric optics. When a ray is deviated by the distorted wavefront, it adds blurring to the image on the normal camera, whereas a deviated ray only adds blurring to one cell image on the plenoptic sensor. Therefore, the accumulated blurring in a normal camera image will be much worse than the distributed blurring across a plenoptic sensor’s cell images. Similarly, since the images in the normal camera are much more blurred, the changes in blurring between neighboring frames are much more distinctive than the changes between neighboring cell images on the plenoptic sensor, which explains observation (2). In observation (3), a good image can still be obtained on the normal camera when the major area of the imaging lens’s aperture receives a relatively “flat” wavefront. However, compared with the plenoptic sensor approach, which fetches a sub-aperture area, the probability of a relatively “flat” wavefront over the entire lens aperture is extremely low [21]; thus, regional blurring effects in the best frame are inevitable. Observation (4) is associated with the diffraction limit. In principle, the maximum resolution of an imaging system is determined by the smallest numerical aperture in the system [20]. In the case of the plenoptic sensor, the maximum resolution limit is set by the MLA lenslet, which has a width of 300 μm and an effective focal length of 5.1 mm. Using the central wavelength of 550 nm in the visible spectrum (400 nm to 700 nm), the diffraction limited PSF [22] will have a width (Airy radius) of 11.4 μm, which is twice the width of a camera pixel (5.5 μm) in our imaging system. By comparison, the normal camera branch has a PSF width of 2.7 μm (at f/4), which is less than the width of a camera pixel (5.5 μm). Therefore, the normal camera branch reveals sharp features better than the plenoptic sensor branch when the wavefront distortion happens to be “flat” over the corresponding aperture. We should point out that in the weak/moderate turbulence regime, where severe visual distortion rarely happens, it may not be wise to use the plenoptic sensor approach for imaging through a turbulent channel, due to the loss of resolution on sharp features. In addition, the much reduced pixel resolution of a good cell image on the plenoptic sensor cannot reveal a target object with overly complex patterns through strong turbulent channels, as the image may not be sufficiently Nyquist sampled [23]. In this case, our approach can only reveal the outline truths (low spatial frequency content) of the target object.
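The diffraction-limit figures quoted above (11.4 μm for the lenslet branch, 2.7 μm at f/4) can be reproduced from the 1.22·λ·f/D first-zero (Airy) radius; a quick check with the stated parameters:

```python
def airy_radius_um(wavelength_um, focal_length, aperture_width):
    """First-zero (Airy) radius 1.22 * lambda * f / D; result is in the
    same units as the wavelength (micrometres here), since f and D only
    enter through their dimensionless ratio."""
    return 1.22 * wavelength_um * focal_length / aperture_width

mla_psf = airy_radius_um(0.55, 5.1e-3, 300e-6)  # lenslet branch: ~11.4 um
camera_psf = airy_radius_um(0.55, 4.0, 1.0)     # normal branch at f/4: ~2.7 um
```

The lenslet PSF spans about two 5.5 μm pixels, while the f/4 camera PSF fits within one, matching the resolution comparison above.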

Based on the above results and discussion, we conclude that under strong turbulence conditions, where severe visual distortion frequently occurs and the basic shape of the object is unrecognizable for most of the observation time, the plenoptic sensor approach provides an effective and self-contained (requiring no reference information) way to retrieve the fundamental truths of the target.

5. Conclusions

In this article, we have introduced the principle, mechanism and algorithm for using a plenoptic sensor to reveal fundamental truths of a target object under strong turbulence conditions that cause severe visual distortion in conventional imaging systems. Our approach evaluates the wavefront condition over sub-aperture areas of the imaging lens and adaptively identifies a good moment, as well as a good sub-aperture area, to obtain a cell image formed through a relatively flat wavefront. The fundamental truths of the target object can therefore be revealed on the plenoptic sensor without relying on any external information. To verify the effectiveness of our approach, we conducted a lab-scale experiment featuring strong turbulence conditions and demonstrated how to use the plenoptic sensor to retrieve a good cell image of the target object. In application, our plenoptic sensor approach can be used as an independent device to enhance vision and recognize objects under the severe viewing conditions caused by strong turbulence. It can also provide correct and reliable references that help conventional image processing approaches restore accurate, high resolution images under strong turbulence conditions.

References and links

1. K. T. Knox and B. J. Thompson, “Recovery of images from atmospherically degraded short-exposure photographs,” Astrophys. J. 193, L45–L48 (1974). [CrossRef]  

2. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 157–170 (2013). [CrossRef]   [PubMed]  

3. M. C. Roggemann, B. M. Welsh, and B. R. Hunt, Imaging through turbulence (CRC press, 1996).

4. G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its applications," Opt. Lett. 13(7), 547–549 (1988). [CrossRef]   [PubMed]  

5. D. Li, R. M. Mersereau, and S. Simske, “Atmospheric turbulence-degraded image restoration using principal components analysis,” IEEE Geosci. Remote Sens. Lett. 4(3), 340–344 (2007). [CrossRef]  

6. N. M. Law, C. D. Mackay, and J. E. Baldwin, “Lucky imaging: high angular resolution imaging in the visible from the ground,” Astron. Astrophys. 446(2), 739–745 (2006). [CrossRef]  

7. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009). [CrossRef]  

8. A. V. Kanaev, W. Hou, S. R. Restaino, S. Matt, and S. Gładysz, “Restoration of images degraded by underwater turbulence using structure tensor oriented image quality (STOIQ) metric,” Opt. Express 23(13), 17077–17090 (2015). [CrossRef]   [PubMed]  

9. E. Chen, O. Haik, and Y. Yitzhaky, “Detecting and tracking moving objects in long-distance imaging through turbulent medium,” Appl. Opt. 53(6), 1181–1190 (2014). [CrossRef]   [PubMed]  

10. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, and P. Hanrahan, "Light field photography with a hand-held plenoptic camera," Computer Science Technical Report 2(11), 1–11 (2005).

11. T. Georgiev, Z. Yu, A. Lumsdaine, and S. Goma, “Lytro camera technology: theory, algorithms, performance analysis,” Proc. SPIE 8667, 86671J (2013). [CrossRef]  

12. C. Wu, J. Ko, and C. C. Davis, “Determining the phase and amplitude distortion of a wavefront using a plenoptic sensor,” J. Opt. Soc. Am. A 32(5), 964–978 (2015). [CrossRef]   [PubMed]  

13. D. L. Fried, “Optical resolution through a randomly inhomogeneous medium for very long and very short exposures,” J. Opt. Soc. Am. 56(10), 1372–1379 (1966). [CrossRef]  

14. A. Zilberman, E. Golbraikh, and N. S. Kopeika, “Propagation of electromagnetic waves in Kolmogorov and non-Kolmogorov atmospheric turbulence: three-layer altitude model,” Appl. Opt. 47(34), 6385–6391 (2008). [CrossRef]   [PubMed]  

15. C. Wu, J. Ko, and C. Davis, “Object recognition through turbulence with a modified plenoptic camera,” Proc. SPIE 9354, 93540V (2015).

16. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company Publishers, 2005).

17. A. K. Jain, Fundamentals of Digital Image Processing (Prentice-Hall Inc, 1989).

18. A. S. Monin and A. M. Yaglom, Statistical Fluid Mechanics Volume II: Mechanics of Turbulence (Courier Corporation, 2013).

19. M. A. Vorontsov and G. W. Carhart, “Anisoplanatic imaging through turbulent media: image recovery by local information fusion from a set of short-exposure images,” J. Opt. Soc. Am. A 18(6), 1312–1324 (2001). [CrossRef]   [PubMed]  

20. C. Wu, J. Ko, and C. C. Davis, “Imaging through turbulence using a plenoptic sensor,” Proc. SPIE 9614, 961405 (2015).

21. D. L. Fried, “Probability of getting a lucky short-exposure image through turbulence,” J. Opt. Soc. Am. 68(12), 1651–1657 (1978). [CrossRef]  

22. K. Fliegel, “Modeling and measurement of image sensor characteristics,” Radio Eng. 13(4), 27–34 (2004).

23. R. G. Baraniuk, “Compressive sensing,” IEEE Signal Process. Mag. 24(4), 118–121 (2007). [CrossRef]  



Figures (8)

Fig. 1. Structure diagram of using a plenoptic sensor to analyze image formation.
Fig. 2. Illustration diagram of cell image differences generated by turbulence distortion in various regions.
Fig. 3. Illustration of how a large curved wavefront distortion is affected by the restoring force.
Fig. 4. Experimental arrangement for imaging through turbulence.
Fig. 5. Experimental result of using the proposed image metric to auto-select the best image cell.
Fig. 6. Scatter plot of cell image quality versus cell image metric value.
Fig. 7. Neighboring cell images of the turbulence-distorted target symbol "M" in the recorded plenoptic image sequence: (a) cell images sampled along the vertical direction; (b) cell images sampled along the horizontal direction; (c) cell images sampled along the time direction.
Fig. 8. Lucky imaging result on the normal camera branch for the same target under the same turbulence conditions.

Equations (5)

$$D(m_1,n_1;m_2,n_2)=\sum_{i,j}\left[I_{m_1,n_1}(i,j)-I_{m_2,n_2}(i,j)\right]^2$$

$$a_{\mathrm{vertical}}(m,n,t)=\sum_{i,j}\left[I_{m+1,n,t}(i,j)+I_{m-1,n,t}(i,j)-2I_{m,n,t}(i,j)\right]^2$$

$$a_{\mathrm{horizontal}}(m,n,t)=\sum_{i,j}\left[I_{m,n+1,t}(i,j)+I_{m,n-1,t}(i,j)-2I_{m,n,t}(i,j)\right]^2$$

$$a_{\mathrm{time}}(m,n,t)=\sum_{i,j}\left[I_{m,n,t+1}(i,j)+I_{m,n,t-1}(i,j)-2I_{m,n,t}(i,j)\right]^2$$

$$M(m,n,t)=a_{\mathrm{vertical}}(m,n,t)\,a_{\mathrm{horizontal}}(m,n,t)\,a_{\mathrm{time}}(m,n,t)$$
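As one way to read the equations above, the sketch below implements the modified "Laplacian" metric on a stack of cell images indexed by (m, n, t): each a-term is the sum of squared second differences of a cell image against its neighbors along one index, and M is their product. The array layout and function names are our own assumptions, not code from the paper.

```python
import numpy as np

def modified_laplacian_metric(cells):
    """Compute M(m, n, t) for a cell-image stack of shape
    (M_rows, N_cols, T_frames, H, W); boundary cells are skipped
    because the second differences need both neighbors."""
    M_rows, N_cols, T, H, W = cells.shape
    M = np.zeros((M_rows, N_cols, T))
    for m in range(1, M_rows - 1):
        for n in range(1, N_cols - 1):
            for t in range(1, T - 1):
                c = cells[m, n, t]
                # Sums of squared second differences along m, n and t
                a_v = np.sum((cells[m + 1, n, t] + cells[m - 1, n, t] - 2 * c) ** 2)
                a_h = np.sum((cells[m, n + 1, t] + cells[m, n - 1, t] - 2 * c) ** 2)
                a_t = np.sum((cells[m, n, t + 1] + cells[m, n, t - 1] - 2 * c) ** 2)
                M[m, n, t] = a_v * a_h * a_t
    return M
```

The best cell image is then selected from the resulting M map over the recorded plenoptic image sequence, as in the auto-selection result of Fig. 5.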