
Spatial stabilization of deep-turbulence-induced anisoplanatic blur

Open Access

Abstract

We explore the feasibility of post-detection restoration when imaging through deep turbulence characterized by extreme anisoplanatism. A wave-optics code was used to simulate relevant short-exposure point spread functions (PSFs) and their decorrelation as a function of point-source separation was computed. In addition, short-exposure images of minimally extended objects were simulated and shown to retain a central lobe that is clearly narrower than the long-exposure counterpart. This suggests that short-exposure image data are more informative than long-exposure data, even in the presence of extreme anisoplanatism. The implications of these findings for image restoration from a sequence of short-exposure images are discussed.

© 2016 Optical Society of America

1. Introduction

In many incoherent-imaging applications, the quality of the imagery is degraded by atmospheric turbulence. A variety of clever pre-detection and post-detection correction approaches have been developed to reduce or remove the effects of turbulence-induced blur. These approaches include adaptive optics (AO) [1], shift-and-add [2], speckle imaging [3], multi-frame blind deconvolution [4,5], deconvolution from wave-front sensing [6], and phase-diverse speckle [7]. Most of these techniques have been developed and exercised under a space-invariant imaging model. The problem of correcting for turbulence-induced blur becomes significantly more challenging in geometries for which a space-variant imaging model is indicated. The source of space variance when imaging through volume turbulence is readily understood by referring to Fig. 1. Light emanating from spatially separated object points will encounter different volumes of turbulence that will induce differing aberrations and yield Point Spread Functions (PSFs) with differing shapes in the image plane.


Fig. 1 Light emanating from spatially separated object points encounters different volumes of turbulence yielding field-dependent blur in the image plane.


The change in the blur as a function of displacement in field-of-view (FOV) depends upon the imaging geometry and the associated intervening turbulence. This change is often quantified by the isoplanatic angle, θo [8], defined by Fried in the context of AO correction of wavefront error induced by Kolmogorov turbulence. In the limiting case of large apertures and ideal AO correction, the mean-square wavefront phase error as a function of angle from the beacon, θ, is found to be [9]:

$$\sigma^2=\left(\frac{\theta}{\theta_o}\right)^{5/3}.\tag{1}$$

The Strehl ratio degrades gracefully with angular displacement from the beacon and, under the extended Maréchal approximation [10], is reduced to 1/e (≈ 0.37) when the angle reaches θo. Thus, the isoplanatic angle provides an indication of the field of view over which an AO system is able to make a reasonable correction.
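To make the falloff concrete, the following minimal sketch (with an assumed isoplanatic angle; all names are ours) evaluates Eq. (1) and the corresponding Strehl ratio:

```python
import numpy as np

# Illustrative evaluation of Eq. (1) for an assumed isoplanatic angle,
# with the Strehl ratio from the extended Marechal approximation,
# S ~ exp(-sigma^2).
theta_o = 1.0e-6                           # assumed isoplanatic angle [rad]
theta = np.linspace(0.0, 3.0e-6, 7)        # angular offsets from beacon [rad]

sigma2 = (theta / theta_o) ** (5.0 / 3.0)  # Eq. (1)
strehl = np.exp(-sigma2)                   # extended Marechal approximation

for t, s in zip(theta, strehl):
    print(f"theta = {t * 1e6:4.2f} urad -> Strehl = {s:5.3f}")
# At theta = theta_o, sigma^2 = 1 and the Strehl ratio is 1/e (0.37),
# consistent with the statement above.
```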

Pre-detection correction (with AO) is preferable to post-detection correction because it avoids the noise amplification encountered in image restoration. However, there are many applications for which AO is not available. In this paper we are interested in the problem of image restoration in the presence of turbulence-induced space-variant blur. This is a more challenging problem than the space-invariant counterpart. Nevertheless, a number of methods have been developed to operate in the presence of space variance. When the isoplanatic angle is sufficiently large, space-invariant algorithms, such as speckle-imaging approaches, can be applied to overlapping patches, after which mosaicking methods can be exercised to create a wide-field restoration [11,12].

Multiple researchers have contributed to another class of successful algorithms characterized by the operations dewarp-add-and-deblur (the space-variant counterpart of shift-and-add) [13,14]. Another approach is “lucky region fusion” in which an image-quality map, computed on each short-exposure frame, is used to identify lucky regions that can be merged into a fused video stream [15]. Whereas the aforementioned algorithms are directed at mitigating space-variant blur, they work best when the isoplanatic angle is larger than the resolution angle, dictated by diffraction in the absence of turbulence.

It is also interesting to consider a generalization of the method of phase diversity [16,17], in which a sequence of in-focus and defocused short-exposure image pairs is used to estimate a diffraction-limited image. This method, called Phase-Diverse Speckle (PDS), has been shown in simulation to work in the presence of pronounced turbulence-induced space variance (where the isoplanatic angle is one third the size of the resolution angle) by appealing to a distributed-phase-screen model [18].

In this paper we explore the feasibility of post-detection restoration in the presence of deep-turbulence-induced anisoplanatic blur. The degree of space variance encountered with deep turbulence is far greater than that for which image restoration has been previously attempted. For example, for an imaging scenario of interest that we investigate, we compute that there are 355 isoplanatic angles across a single resolution angle! Such extreme anisoplanatism raises several questions: What is the physical interpretation of image formation? Is the simulation of image formation from an extended object computationally tractable? Is there any hope of bounding the number of parameters needed to model the image-formation process so that image restoration is plausible? In the following sections we explore these questions. In Section 2, we review scenarios for imaging through volume turbulence and characterize the particular scenario of interest, the reference scenario, which involves deep turbulence. In Section 3 we explore the simulation of deep-turbulence-induced blur in imagery, for both point objects and objects of modest extent. Section 4 contains a discussion of the feasibility of image restoration for extended objects in the presence of deep turbulence. Conclusions are drawn in Section 5.

2. Imaging-through-turbulence scenarios

The impact of turbulence on imaging performance is governed by the imaging geometry and the associated distribution of turbulence between the object and the telescope pupil. Telescope-independent parameters used to characterize imaging performance in the presence of turbulence include the atmospheric coherence diameter (or Fried's parameter), ro [8], the isoplanatic angle, θo [8], and the Rytov number (log-amplitude variance, computed from first-order Rytov theory), σχ² [19]. For completeness we provide finite-range expressions for these parameters.

$$r_o \simeq \left[0.422\left(\frac{2\pi}{\lambda}\right)^{2}\int_0^L dz\, C_n^2(z)\left(\frac{L-z}{L}\right)^{5/3}\right]^{-3/5}\tag{2}$$

$$\theta_o \simeq \left[2.91\left(\frac{2\pi}{\lambda}\right)^{2}\int_0^L dz\, C_n^2(z)\, z^{5/3}\right]^{-3/5}\tag{3}$$

$$\sigma_\chi^2 \simeq 0.56\left(\frac{2\pi}{\lambda}\right)^{7/6}\int_0^L dz\, C_n^2(z)\left(\frac{L-z}{L}\right)^{5/6} z^{5/6},\tag{4}$$
where λ is the wavelength, Cn²(z) is the structure constant for turbulence-induced index-of-refraction variations (distributed in the range dimension z), and all path integrals are from the pupil, located at z = 0, to the target, located at range L. Whereas Eqs. (2) and (4) were derived for spherical-wave propagation suitable for imaging at finite ranges, the definition for isoplanatic angle is for plane-wave propagation and it is the turbulence that extends for a finite distance, L. Therefore, this definition of isoplanatic angle is not a perfect fit for a finite-range scenario, although it is the expression commonly used throughout the community for such scenarios. We acknowledge that this expression overemphasizes the impact of the turbulence near the target, so it tends to give a smaller isoplanatic angle than we expect experimentally.
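Equations (2) through (4) are simple path integrals and are straightforward to evaluate numerically for an arbitrary Cn² profile. The following minimal sketch (function and variable names are ours) does so by quadrature:

```python
import numpy as np

def turbulence_params(cn2, L, lam, n=4096):
    """Evaluate Eqs. (2)-(4) by quadrature along the path.  `cn2` is a
    callable giving Cn^2(z) in m^(-2/3), with z measured from the pupil
    (z = 0) toward the target (z = L); `lam` is the wavelength in meters."""
    z = np.linspace(0.0, L, n)
    w = cn2(z)
    k = 2.0 * np.pi / lam
    r0 = (0.422 * k**2 * np.trapz(w * ((L - z) / L)**(5/3), z))**(-3/5)
    theta0 = (2.91 * k**2 * np.trapz(w * z**(5/3), z))**(-3/5)
    rytov = 0.56 * k**(7/6) * np.trapz(w * ((L - z) / L)**(5/6) * z**(5/6), z)
    return r0, theta0, rytov
```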

Imaging performance is also influenced by the diffraction associated with the telescope aperture diameter, D. The diffraction-limited angular resolution, which would be achieved in the absence of turbulence, is given by

$$\rho=\lambda/D.\tag{5}$$
For imaging scenarios of interest here, resolution is limited by turbulence as opposed to diffraction. The degradation in long-exposure-imaging resolution owing to turbulence is quantified by the ratio D/ro.

A final telescope-dependent parameter that is helpful in interpreting imaging performance is the Isoplanatic angle to the Resolution angle Ratio (IRR),

$$\mathrm{IRR}\equiv\theta_o/\rho.\tag{6}$$
This ratio indicates the number of diffraction-limited resolution elements that span (in length) an isoplanatic region. Of course the reciprocal of the IRR is the number of isoplanatic angles that span a resolution angle, which may be more meaningful in deep-turbulence scenarios.

To illustrate the variability in imaging performance among differing imaging geometries, we consider three distinct imaging scenarios: (1) ground-based astronomy in the zenith direction (up-looking), (2) airborne imaging in the nadir direction (down-looking), and (3) airborne long-range slant-path imaging. Scenario 1 is representative of a solar-astronomy observation, where the telescope is directed at the zenith and the target has a range L = 1.5×10⁸ km (essentially infinity). Scenario 2 is representative of high-altitude airborne surveillance, involving an airborne platform at an altitude of 20 km looking in the nadir direction. Scenario 3 is also a high-altitude surveillance scenario but with an extremely long slant-path range. The airborne platform has an altitude of 20 km, a slant range L = 300 km, and a depression angle from the platform of 5.16 degrees, under a spherical-earth model. Note that this corresponds to an elevation angle (of the sensor as seen from the target at ground level) of only 2.47 degrees. Whereas scenarios 1 and 2 provide context, it is scenario 3, the reference scenario, that illustrates imaging through deep turbulence. The reference scenario presents several practical issues, including interpretation of low-depression-angle images, line-of-sight access to targets, and reduced signal owing to path-transmission losses. In this paper we concentrate on the effects of turbulence.

To isolate the impact of the differing geometries, we consider a common telescope, wavelength, and Cn² vertical profile for all scenarios. In our analysis, the telescope diameter is D = 0.416 m (with a 25% central obscuration, by area) and the wavelength is λ = 650 nm. For this telescope and wavelength, the diffraction-limited angular resolution is ρ = 1.563 μrad. The Cn² vertical profile is modeled as a Hufnagel-Valley (HV) distribution [20]:

$$C_n^2(h)=8.148\times10^{-26}\,v^2\left(\frac{h}{1000}\right)^{10}\exp\!\left(-\frac{h}{1000}\right)+2.7\times10^{-16}\exp\!\left(-\frac{h}{1500}\right)+C_{n0}^2\exp\!\left(-\frac{h}{100}\right),\tag{7}$$
where h is given in meters above the ground and Cn² has units of meters to the −2/3 power. The two free parameters, which model the wind speed and the turbulence strength at the ground, have been respectively set to:
$$v=21,\tag{8}$$

$$C_{n0}^2=1\times10^{-13},\tag{9}$$
where v is given in meters per second and Cn0² is given in meters to the −2/3 power.
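The HV profile of Eq. (7), with the parameter values of Eqs. (8) and (9), can be coded directly and combined with the turbulence_params() sketch above to compute parameters like those in Table 1. In the fragment below, the straight-line, flat-earth mapping from path position to height is our simplification for illustration; the paper uses a spherical-earth model:

```python
import numpy as np

def hv_cn2(h, v=21.0, cn2_ground=1e-13):
    """Hufnagel-Valley profile of Eq. (7); h is height above ground [m]."""
    return (8.148e-26 * v**2 * (h / 1000.0)**10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)
            + cn2_ground * np.exp(-h / 100.0))

# Reference scenario: pupil at 20 km altitude, target on the ground at a
# slant range of 300 km.  Map path position z to height h before
# integrating (flat-earth mapping, illustrative only).
L = 300e3                                        # slant range [m]
cn2_path = lambda z: hv_cn2(20e3 * (1.0 - z / L))
r0, theta0, rytov = turbulence_params(cn2_path, L, lam=650e-9)
rho = 650e-9 / 0.416                             # diffraction angle, Eq. (5)
print(r0, theta0, rytov, theta0 / rho)           # last value is the IRR, Eq. (6)
```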

Table 1 shows telescope-independent and telescope-dependent imaging parameters for each of these three scenarios. For scenario 1 we see that θo/ρ > 1. In words, there are multiple resolution elements across an isoplanatic patch, so deconvolution methods could be applied to individual isoplanatic patches. Thus, mosaicking methods could be utilized to mitigate the space-variant blur over an extended object. In scenario 2, the isoplanatic angle is smaller than the diffraction-limited angular resolution, precluding restoration by mosaicking. However, ro > D, suggesting that the blur is very mild in this scenario, so that deblurring is not needed. In scenario 3, θo/ρ = 0.0028 ≪ 1. Thus the isoplanatic angle is a very small fraction of the resolution angle. The reciprocal of this ratio indicates that there are 355 isoplanatic angles across a resolution angle. In addition, the Rytov number is found to be σχ² = 14.78. Deep turbulence is commonly defined as θo < ρ and σχ² > 1 [21]. Clearly scenario 3, our scenario of interest, is an example of imaging in deep turbulence, an operational regime with problems not encountered in scenarios 1 and 2.


Table 1. Parameters Characterizing Imaging Performance for Three Scenarios

3. Simulation of blur

In this section, we explore the simulation of deep-turbulence-induced blur in imagery, for both point objects and objects of modest extent. Simulations were performed using the Atmospheric Compensation Simulation (ACS) tool, a time-domain, wave-optics and hardware-emulation code [22]. ACS is a dynamic-simulation tool that utilizes scalar diffraction theory to propagate wavefronts through a sequence of phase screens generated under Kolmogorov statistics. The discrete screens are scaled to match the desired turbulence profile and wavelength. The screen strengths and locations are selected to minimize the difference between a weighted sum of key turbulence parameters (ro, θo, and σχ², in order of weighting) computed for the continuous volume, using Eqs. (2) through (4), and the same weighted sum computed for the distribution of discrete phase screens. We note parenthetically that other strategies have been proposed for optimal placement of phase screens, for both plane-wave [23] and spherical-wave geometries [24]. For the wave-optics simulations described herein, a combination of internal and external checks was implemented to ensure that no aliasing occurred during the propagations. In addition, there is a margin on the periphery of each phase screen to allow for the possibility of light being deviated back into the geometric-optics cone defined by the pupil for each point source. Wave-optics simulations, when carefully executed, are known to successfully model wave propagation in deep-turbulence scenarios. ACS has been used to model propagation along both vertical and horizontal paths for astronomical imaging systems, high-energy laser systems, and free-space laser communications.
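ACS itself is not publicly available, but the split-step mechanics it shares with most wave-optics codes can be sketched compactly. Everything below (names, the Fresnel transfer-function formulation, the screen inputs) is our schematic illustration, not the ACS implementation, and omits the screen-generation, scaling, and anti-aliasing safeguards described above:

```python
import numpy as np

def split_step_propagate(field, screens, dz_list, dx, lam):
    """Propagate a sampled complex field through a sequence of thin phase
    screens, with angular-spectrum free-space propagation between them."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    for phase, dz in zip(screens, dz_list):
        field = field * np.exp(1j * phase)      # thin-screen phase delay
        H = np.exp(-1j * np.pi * lam * dz * (FX**2 + FY**2))  # Fresnel TF
        field = np.fft.ifft2(np.fft.fft2(field) * H)          # free space
    return field
```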

3.1 Simulations with point objects

ACS was used to simulate the propagation of radiation from a point object through differing volume-turbulence realizations (modelled with a distribution of phase screens) associated with the reference scenario (300 km slant-path range) to produce the complex field in the telescope pupil. The associated short-exposure PSF is found by multiplying the complex field by the binary pupil function, applying a DFT, and taking the magnitude squared. Figure 2 shows a surface plot of the short-exposure PSF for a single turbulence realization, along with the diffraction-limited PSF, an intermediate-exposure PSF, and a theoretical long-exposure PSF for comparison. Note that the short-exposure PSF retains a central peak and fine structure that is comparable in width to the central peak of the diffraction-limited PSF and much narrower than the central peak of the long-exposure PSF. The intermediate-exposure PSF was constructed by incoherently averaging 100 independent realizations of short-exposure PSFs. Its shape approaches that of the theoretical long-exposure PSF, although it is not circularly symmetric and has not filled out fully, owing to the moderate number of realizations used.
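The PSF computation just described is compact in code; the sketch below (names and the padding factor are ours) assumes the aberrated pupil field from a propagation like the one above:

```python
import numpy as np

def short_exposure_psf(pupil_field, pupil_mask, pad=4):
    """Form a short-exposure PSF: apply the binary (obscured) pupil,
    zero-pad to control image-plane sampling, DFT, magnitude squared."""
    n = pupil_field.shape[0]
    g = np.zeros((pad * n, pad * n), dtype=complex)
    g[:n, :n] = pupil_field * pupil_mask
    psf = np.abs(np.fft.fftshift(np.fft.fft2(g)))**2
    return psf / psf.sum()                      # normalize to unit volume
```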


Fig. 2 Reference-scenario simulated PSFs. The short-exposure PSF shown is for a single turbulence realization. The PSFs have been scaled to have the same peak value.


It is well known that such fine structure can be found in virtually all short-exposure PSFs. This is illustrated in Fig. 3, which shows short-exposure PSFs for four different turbulence realizations. It is the retention of this fine structure that allows near-diffraction-limited imagery to be restored from a series of short-exposure images in scenarios for which the blur is space-invariant (e.g., via speckle imaging or multi-frame blind deconvolution).


Fig. 3 Short-exposure PSFs for four turbulence realizations in the reference imaging scenario (R1-R4, left) compared with the theoretical long-exposure PSF and the vacuum-diffraction-limit PSF. The image chips measure 25.4 × 25.4 μrad.


Of course in the reference scenario, the blur is at the other end of the space-variance spectrum, exhibiting extreme space variance. In order to examine the change in the blur as a function of displacement in the field-of-view, we simulated the PSF for each point in a closely spaced linear array of point sources (61 points with 0.5 nrad spacing). A single realization of volume turbulence was modeled with 10 phase screens, orthogonal to and optimally distributed along the slant path. The offsets in the linear array were implemented by shifting the phase screens as opposed to shifting the point-source position. Thus the angles are not limited by the array size and the grid spacing, removing the risk of aliasing at the outer point sources on the array. ACS was used to propagate the spherical wave emanating from each point source through the phase screens to the pupil using scalar diffraction theory. A PSF for each point in the linear array was then computed using the wavefront (amplitude and phase) aberration in the pupil. These PSFs can then be compared to see how they change with separation. We use the spatial correlation coefficient to quantify differences in PSFs associated with sources i and j:

$$r_{i,j}\equiv\frac{\sum_{x,y}\left[s_i(x,y)-\bar{s}_i\right]\left[s_j(x-x_{ij},\,y-y_{ij})-\bar{s}_j\right]}{\left\{\sum_{x,y}\left[s_i(x,y)-\bar{s}_i\right]^2\right\}^{1/2}\left\{\sum_{x,y}\left[s_j(x,y)-\bar{s}_j\right]^2\right\}^{1/2}},\tag{10}$$
where si(x,y) is the discretely sampled 2D PSF associated with the ith source, s̄i is the spatial average of si(x,y), and (xij, yij) is an image-plane vector originating from and terminating at the ideal image locations for the ith and jth points, respectively. The vector (xij, yij) is used to register the two PSFs. In general, these registration-vector coordinates are real valued, so the discretely sampled PSFs sj(x,y) were appropriately interpolated using a linear phase factor in the pupil. Because the turbulence is random, we are interested in the average behavior of the spatial correlation coefficient, which depends only on the separation of the point sources, Δ, and not on their absolute positions. A linear array of points can be used to average the spatial correlation coefficient over multiple pairs of points, all with the same separation (spatial averaging). In addition, we can perform ensemble averaging by changing the turbulence realization. Accordingly, we computed the spatially and ensemble-averaged (100 independent volume-turbulence realizations) spatial correlation coefficient, r̄Δ, as a function of separation. Figure 4 plots r̄Δ for two different turbulence profiles (profile 1 is the HV profile; profile 2 is 0.075 × HV). The mean and the 99% confidence limits (under an approximate Gaussian-distribution assumption) are shown for both profiles. As expected, r̄Δ decays monotonically as the separation between points increases. In addition, the decay is more gradual for Cn² profile 2, for which the turbulence strength has been reduced. Note that the error bars are larger for larger separations, owing to the fact that there are fewer source pairs with long separation than with short separation, given our finite linear array of source points.
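A sketch of Eq. (10) in code follows (names ours). Here the subpixel registration is applied as a Fourier-domain linear phase acting on the sampled PSF, which effects the same shift as the pupil-plane linear phase factor described above:

```python
import numpy as np

def correlation_coefficient(si, sj, shift):
    """Eq. (10): zero-mean correlation of two PSFs after registering sj
    by the ideal-image offset `shift` = (dx, dy), in samples (real valued)."""
    n = sj.shape[0]
    fx = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(fx, fx)
    ramp = np.exp(-2j * np.pi * (FX * shift[0] + FY * shift[1]))
    sj_reg = np.real(np.fft.ifft2(np.fft.fft2(sj) * ramp))  # subpixel shift
    a = si - si.mean()
    b = sj_reg - sj_reg.mean()
    return (a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum())
```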


Fig. 4 Average spatial correlation coefficient for PSFs as a function of angular separation, for the reference scenario. The mean and the 99% confidence limits are shown for two turbulence profiles (profile 1 is HV, profile 2 is 0.075 × HV). For both profiles, the Fried isoplanatic angle approximately corresponds to r̄Δ = 0.86.


It is interesting that for the two turbulence profiles used in Fig. 4, the average spatial correlation coefficient for ACS-simulated PSFs is about 0.89 at the Fried isoplanatic angle. However, there is no guarantee that the Fried isoplanatic angle will always correspond to a fixed average spatial correlation coefficient value. Recall that the Fried isoplanatic angle is defined in the context of adaptive optics. By contrast, we have defined the function r¯Δ to characterize image-domain behavior, in the context of non-adaptive imaging. We could contemplate defining a maximum tolerable value for PSF decorrelation when performing an image-restoration task. This could be quantified through analysis of image-restoration degradation as a function of PSF model mismatch. A maximum tolerable decorrelation would imply a corresponding maximum acceptable separation, Δo, which could then serve as an alternative, image-based definition for isoplanatic angle.
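As a simple illustration of such an image-based definition, one could threshold the measured r̄Δ curve; the tolerable value used below (0.8) is an arbitrary placeholder, not a recommendation:

```python
import numpy as np

def image_based_theta_o(separations, r_bar, r_min=0.8):
    """Hypothetical image-based isoplanatic angle: the separation at which
    the averaged PSF correlation first falls below a tolerable value."""
    r_bar = np.asarray(r_bar)
    below = np.nonzero(r_bar < r_min)[0]
    if below.size == 0 or below[0] == 0:
        return None                    # curve never (or immediately) crosses
    i = below[0]
    x0, x1 = separations[i - 1], separations[i]
    y0, y1 = r_bar[i - 1], r_bar[i]
    return x0 + (r_min - y0) * (x1 - x0) / (y1 - y0)  # linear interpolation
```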

To investigate the need to interpret the isoplanatic angle differently in non-adaptive and adaptive-optics applications, we also computed r̄Δ as a function of point-source separation for the case of complete adaptive correction, where the correction corresponds to one of the two points (the beacon). This was straightforward to compute, since our simulations provide the wavefront aberration for each point. Accordingly, we simulated ideal phase correction for the first point, which serves as a beacon, and used the same correction for the second point, separated by Δ. The corresponding plot is given in Fig. 5, where we see that the PSF correlation drops off more slowly than in the non-adaptive case. In addition, we see that the value of r̄Δ that corresponds to the Fried isoplanatic angle is significantly different from that of the non-adaptive case, and there is less consistency between the two turbulence profiles. The Fried isoplanatic angle corresponds to r̄Δ = 0.94 and 0.93 for profiles 1 and 2, respectively. We also evaluated r̄Δ for profile 1 at longer separations than are shown in Figs. 4 and 5 and found that the curves for both the non-adaptive and adaptive-optics cases become asymptotically flat. However, the asymptotes for the two cases are significantly different. The asymptotic value for the non-adaptive case is r̄Δ ≈ 0.57, whereas it is r̄Δ ≈ 0.47 for the AO case. The lower asymptotic value for the AO case can be appreciated by realizing that once the separation from the beacon is sufficiently large, the wavefront aberration is completely uncorrelated with that of the beacon and the adaptive correction actually degrades the second PSF.


Fig. 5 Average spatial correlation coefficient for PSFs as a function of angular separation in the adaptive-optics case (both points are corrected using the phase aberration of the first point). The mean and the 99% confidence limits are shown for two turbulence profiles (profile 1 is HV, profile 2 is 0.075 × HV). The Fried isoplanatic angle approximately corresponds to r̄Δ = 0.94 for profile 1 and r̄Δ = 0.93 for profile 2.


3.2 Simulations with objects of modest extent

Under any definition of isoplanatic angle, it is clear that for the reference scenario there are an extreme number of isoplanatic angles across a single diffraction-limited resolution element. This observation provokes the core question we are trying to answer: What happens during imaging of an extended object under conditions when the isoplanatic patch is much smaller than the diffraction-limited resolution of the imaging system? We would like to determine if the short-exposure image of an extended object retains information on the scale of the diffraction-limited detail, as is the case for field-invariant blur. Or does spatial averaging of independent-turbulence PSFs wash out such detail? As a thought experiment, consider the limiting case when the isoplanatic angle becomes infinitesimally small. In this limit, spatial averaging of independent PSFs will approach the temporal averaging that occurs with a long-exposure PSF. Therefore, in this limit there would be no value in collecting short-exposure data for purposes of image restoration, since all of the fine detail would be lost.

To address our core question, we computed reference-scenario images of a minimally extended object: a uniform square, 0.8 μrad on a side, sized to approximately equal an object-domain Nyquist sample for our reference optical system (ρ ≈ 1.6 μrad). This object was chosen because it represents a barely resolved patch (or diffraction-limited Nyquist cell) over which space-variant PSFs could be spatially integrated. Because the isoplanatic patch is so small, one might reasonably ask if image formation should be modeled in the regime of partial spatial coherence, owing to the sun's limited extent. An analysis using the van Cittert-Zernike theorem indicates that there are nearly 200 solar-illumination coherence areas [25] within an isoplanatic patch, for illumination in the same direction as the observation. We conclude that we are safely in the spatially incoherent imaging regime, although this may need to be revisited for other illumination geometries. Accordingly, we simulate the imaging process by first computing the PSF for each point in a densely sampled array (61 × 61) of points on the object. The net image of the square object was then formed by incoherently summing PSF intensities over all contributing sources. The 61 × 61 array was found to be an adequate representation of the minimally extended object by simulating the image of successively denser arrays until the image no longer changed.
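Schematically, this image formation is an incoherent sum over the source grid. In the sketch below, psf_for_source() is a hypothetical stand-in for the per-source wave-optics propagation and must return short-exposure PSFs registered to a common image grid:

```python
import numpy as np

def extended_blur(psf_for_source, n_src=61):
    """Image of the minimally extended object: incoherent sum of the
    field-dependent PSFs over an n_src x n_src grid of source points."""
    img = None
    for i in range(n_src):
        for j in range(n_src):
            p = psf_for_source(i, j)    # registered short-exposure PSF
            img = p if img is None else img + p
    return img / img.sum()
```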

Figure 6 provides examples of the resulting short-exposure image of a minimally extended object for each of four independent volume-turbulence realizations. Surface plots for the images of both point and minimally extended objects are shown. Note that the image of a minimally extended object can be viewed as an extended blur function and that a truly extended image can be constructed from a mosaic of many such extended blur functions, so long as the scene is sufficiently smooth. The key result of this article is that, despite the very large number of PSFs contributing to an image of the minimally extended object, main-lobe structure is retained with feature sizes intermediate between those of the diffraction-limited and long-exposure limits. That is, information content supporting significant restoration from short-exposure imagery is retained despite the severe anisoplanatism of this imaging scenario.


Fig. 6 Surface plots comparing short-exposure images of a point object (top row) with short-exposure images of a minimally extended object (middle row) for four independent volume-turbulence realizations (R1-R4). The long-exposure and diffraction-limited images of both a point and a minimally extended object are also shown (bottom row) for comparison. The image chips measure 25.4 × 25.4 μrad in extent.


In order to quantify the detail retained in the images of the minimally extended object (or extended blur), we have computed a radius metric illustrated in Fig. 7. The figure shows a single short-exposure image of the square object, where the sampling pitch is λ/(8D) (4 times the Nyquist sampling rate) to facilitate viewing. The peak intensity is identified along with a surrounding pixel contour (red) defined by the boundary of all pixels whose intensity is less than or equal to half the peak value. The distance between the peak pixel and each contour pixel is computed, forming a list of radii characterizing the extent of the main lobe of each image. The occurrence of radii shorter than the similarly measured radius of the long-exposure image indicates the presence of spatial information lying between that of the long-exposure and diffraction-limited images.
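One reading of this radius metric in code (names ours; we take the contour pixels to be those at or below half the peak that touch the above-half region, and we include the area-weighted mean radius used in Fig. 8):

```python
import numpy as np

def main_lobe_radii(img, dx):
    """Radii from the peak pixel to each pixel on the half-intensity
    contour (Fig. 7), plus the area-weighted mean radius (Fig. 8).
    `dx` is the angular sample pitch."""
    py, px = np.unravel_index(np.argmax(img), img.shape)
    above = img > 0.5 * img.max()
    pad = np.pad(above, 1, constant_values=False)
    touches = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
               pad[1:-1, :-2] | pad[1:-1, 2:])
    contour = touches & ~above                  # half-intensity boundary
    ys, xs = np.nonzero(contour)
    radii = dx * np.hypot(ys - py, xs - px)
    # Radius of the circle whose area matches the area within the contour.
    mean_radius = dx * np.sqrt(above.sum() / np.pi)
    return radii, mean_radius
```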


Fig. 7 Example image of minimally extended object showing the half-intensity contour (red pixels) surrounding the peak intensity. The arrow illustrates measurement of a single radius from the peak intensity pixel to a pixel located on the contour. This chip is sampled at 4 times the Nyquist rate.


Figure 8 plots histograms of all radii measurements for 100 images derived from independent volume-turbulence realizations (blue). In addition, we computed an area-weighted mean radius for each image, defined by the radius of a circle having the same area as that within the contour. The histogram of the mean radius over the 100 images is also plotted (red). Radii associated with diffraction-limited and long-exposure images of the square object are indicated for comparison. It is clear that significant opportunities for image restoration are retained by collecting short-exposure data. Over half (53%) of all radii measure 1.5 μrad or less and 30% are 1.1 μrad or less, compared to the long-exposure limit of 3.6 μrad. In a multi-frame restoration approach, only a fraction of the radii in any given image needs to be shorter than the long-exposure radius to provide fine-resolution information. Moreover, only a reasonable percentage of short-exposure images need to retain the fine detail. It is interesting that some radii appear to be smaller than the diffraction limit. This may be due to turbulence-induced superresolution [26,27]. Although we cannot conclusively state that we are observing evidence of superresolution in these particular simulations, we have clearly observed this phenomenon in more recent simulations and with real data collected in the field.


Fig. 8 Histograms showing the relative occurrence of main-lobe extent in 100 images of the minimally extended object, each derived using an independent volume-turbulence realization. Separate histograms are displayed for the area-weighted mean radius of each main lobe (red) and for all radii (blue) involving many radii per image over 100 images. The dashed lines indicate the associated cumulative probabilities. Radii associated with diffraction-limited and long-exposure images of the square object are indicated for comparison.


Our wave-optics simulations included the effects of scintillation. Therefore our simulated images of objects with modest extent, which involve summing many constituent PSFs (each having a different amount of amplitude aberration), implicitly include the effects of scintillation. Accordingly, the histograms in Fig. 8 implicitly include all the effects of the intervening turbulence (including scintillation). In addition, these simulations were performed under a monochromatic assumption. We believe that the conclusions regarding the value of short-exposure data generalize to narrow-band data as well. Constraints on the optical bandwidth for conventional speckle imaging ensure that the fractional elongation of speckles due to varying wavelength is kept in check [3]. Similar criteria can be applied in the case of deep turbulence, so that each constituent short-exposure PSF used in creating the narrow-band image of an object of modest extent will retain its basic speckle-realization shape. The incoherently integrated narrow-band image will therefore retain a shape similar to the monochromatic result.

A second important result of this study is that the dimensionality of an extended blur function is greatly reduced compared to the dimensionality of the set of 61² point-object images. The reduced dimensionality provides insight into the space-variant estimation problem that must be solved to realize the image-restoration potential. In order to quantify the dimensionality of an extended blur function, we performed a fitting procedure on each of the 100 images using two-dimensional Gaussian basis functions (with elliptical contours):

$$\hat{g}(x,y)=\sum_{k=1}^{K}A_k\exp\!\left\{-\left[a_k(x-x_{0k})^2-2b_k(x-x_{0k})(y-y_{0k})+c_k(y-y_{0k})^2\right]\right\}.\tag{11}$$
Each basis function, denoted with the subscript k, requires 6 parameters: amplitude Ak, centroid parameters x0k and y0k, and the parameters ak, bk, and ck. These latter 3 parameters are readily interpreted in terms of the major-axis orientation parameter θ and the major- and minor-axis width parameters σxk² and σyk²:
$$a_k=\frac{\cos^2\theta}{2\sigma_{xk}^2}+\frac{\sin^2\theta}{2\sigma_{yk}^2},\qquad b_k=-\frac{\sin(2\theta)}{4\sigma_{xk}^2}+\frac{\sin(2\theta)}{4\sigma_{yk}^2},\qquad c_k=\frac{\sin^2\theta}{2\sigma_{xk}^2}+\frac{\cos^2\theta}{2\sigma_{yk}^2}.\tag{12}$$
Figures 9 and 10 show the quality of fit as a function of the number of basis functions, K, for minimally and highly structured images, respectively, selected from the 100 simulations. The images are normalized so that the sum of squared pixel values is unity. The sum of squared error (SSE) between each image and the model fit is indicated for each case shown.
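A sketch of the fitting procedure follows, implementing Eqs. (11) and (12) with a generic nonlinear least-squares solver. The initialization and solver settings are our own and are not claimed to match the optimization actually used:

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_sum(params, X, Y, K):
    """Eq. (11): sum of K elliptical Gaussians.  `params` holds six values
    (A, x0, y0, sig_x, sig_y, theta) per basis function, per Eq. (12)."""
    g = np.zeros_like(X)
    for A, x0, y0, sx, sy, th in params.reshape(K, 6):
        a = np.cos(th)**2 / (2 * sx**2) + np.sin(th)**2 / (2 * sy**2)
        b = -np.sin(2 * th) / (4 * sx**2) + np.sin(2 * th) / (4 * sy**2)
        c = np.sin(th)**2 / (2 * sx**2) + np.cos(th)**2 / (2 * sy**2)
        g += A * np.exp(-(a * (X - x0)**2 - 2 * b * (X - x0) * (Y - y0)
                          + c * (Y - y0)**2))
    return g

def fit_blur(img, K):
    """Fit a normalized image with K Gaussian basis functions; return the
    parameters and the SSE, as reported in Table 2."""
    n = img.shape[0]
    X, Y = np.meshgrid(np.arange(n, dtype=float), np.arange(n, dtype=float))
    rng = np.random.default_rng(0)
    p0 = np.tile([img.max(), n / 2, n / 2, 2.0, 2.0, 0.0], K)
    p0 = p0 + 0.1 * rng.standard_normal(p0.size)  # break symmetry for K > 1
    res = least_squares(lambda p: (gaussian_sum(p, X, Y, K) - img).ravel(), p0)
    return res.x, np.sum(res.fun**2)
```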


Fig. 9 Example image of a minimally extended object (extended blur function) which exhibits minimal spatial structure along with fits to this image using Gaussian basis functions. The images are normalized so that the sum of squares of the pixel values is unity. The sum of the squared error (SSE) in the fits drops as the number of basis functions, K, increases.



Fig. 10 Example image of a minimally extended object (extended blur function) which exhibits pronounced spatial structure along with fits to this image using Gaussian basis functions. The images are normalized so that the sum of the squares of the pixel values is unity. The sum of the squared error (SSE) in the fits drops as the number of basis functions, K, increases.


Table 2 summarizes the fitting error, in terms of SSE, as a function of the number of fitting parameters across all 100 images of a minimally extended object. Typically the SSE is less than 0.01 with K = 5 or fewer basis functions (< 30 parameters) required. Modelling these images with Gaussian basis functions is far from optimal, and we do not claim that the optimization used to find the model parameters necessarily provides a global solution. However, the quality of the fit with 30 parameters suggests that the number of parameters needed to simulate a short-exposure image of a minimally extended object is dramatically less than the accumulated number of parameters inherent in the 61² PSFs used to build such an image. By extension, simulation of a fully extended scene now seems at least plausible with this parameter reduction.


Table 2. Fitting Error (Sum of Squared Errors) for Images of a Minimally Extended Object as a Function of Number of Parameters

4. Implications for image restoration

We have shown that even in the reference scenario, where the space variance is extreme, there is value in collecting and processing short-exposure imagery. Apparently, the constituent short-exposure PSFs that are incoherently added to create the image of a minimally extended object are “spatially stabilized” to retain a central lobe that is clearly narrower than the corresponding long-exposure image (the limiting case of infinitesimal isoplanatic angle).

Consider collecting a sequence of short-exposure images of a fully extended scene in the reference scenario. We have argued that this data set is more informative, with the potential to restore finer-resolution imagery, than would be a single long-exposure image. How might such short-exposure data be processed to extract the fine-resolution result that we desire? Recall that Multi-Frame Blind Deconvolution (MFBD) is often used with short-exposure data for imagery with space-invariant blur. The algorithm seeks a joint estimate of the static scene, common to all frames, and the parameters that specify the space-invariant blur function for each frame. Conceptually, this algorithm is readily generalized to accommodate space-variant blur, in which case the abbreviation could then stand for Multi-Frame Blind Deblurring. Under a generalized MFBD algorithm, the joint estimate includes estimating parameters that specify the space-variant blur for each frame.

Although a generalized MFBD algorithm is conceptually straightforward, it is practically challenging. A generalized MFBD algorithm will need two features: (1) An economical parametric model for the space-variant blur and (2) a computationally fast implementation of the forward model (modelling the data from an estimated scene and estimated space-variant blur for each frame). The first feature is essential so that the number of parameters needed to specify the field-dependent blur over the entire image does not overwhelm the estimation problem. The second feature is also vital since MFBD is an iterative algorithm that requires the forward-model computations to be repeatedly exercised.

We have shown that there is a tremendous reduction in the number of parameters when modelling the image of a minimally extended object (or extended blur function), relative to the number of parameters implicit in the 61² constituent PSFs. However, this reduction alone is not sufficient for a generalized MFBD algorithm, since each Nyquist sample in the entire data set would require on the order of 30 parameters. An additional reduction can be achieved with a more efficient representation of a minimally extended object, such as a wavelet or a dictionary representation. This may help but will still not be sufficient. From an analysis of the number of equations and unknowns, it is clear that a generalized MFBD algorithm needs an economy of parameters that accumulates less than one parameter per image pixel (Nyquist sample) that is recorded. This is because MFBD jointly estimates the object pixels common to all turbulence realizations and turbulence parameters characterizing each realization. Clearly, we need a parametric model that captures the relationship between neighboring extended blur functions rather than treating each generalized blur as an independent parametric entity.

A promising approach that may well succeed in reaching the desired parametric reduction involves modelling the field-dependent behavior with distributed phase screens, as illustrated in Fig. 11. This approach is physically motivated, with discrete phase screens approximating volume turbulence and free-space propagation occurring between phase screens. A key feature of a distributed-phase-screen model is that the spatial behavior of the field-dependent blur is entirely coded in the phase-screen parameters. Nearly all wave-optics codes that simulate the propagation of light through volume turbulence use a distributed-phase-screen approach. Recall that we used 10 such phase screens to model the volume turbulence when simulating the reference scenario. Obviously, more phase screens and more parameters per phase screen will provide a more accurate model. However, it may be possible to use fewer phase screens and/or fewer parameters per phase screen, thus reducing the total number of net parameters per turbulence realization, and still retain a model with sufficient fidelity to be of use in image restoration.
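The parameter economy of such a model can be sketched as follows. The low-order mode basis, the screen count, and the shifted-screen handling of field angle (the same trick used in Section 3.1) are our illustrative assumptions; `propagate` stands in for a wave-optics engine such as the split-step sketch in Section 3:

```python
import numpy as np

def forward_psf(coeffs, modes, z_screens, ang, dx, propagate):
    """Economical distributed-phase-screen forward model: each screen is a
    small coefficient vector expanded on a precomputed mode basis `modes`
    (shape [n_modes, N, N]); the field angle `ang` [rad, x-direction] is
    realized by shifting each screen laterally by ang * z."""
    screens = []
    for c, z in zip(coeffs, z_screens):
        phase = np.tensordot(c, modes, axes=1)  # few parameters per screen
        shift = int(round(ang * z / dx))        # lateral offset in samples
        screens.append(np.roll(phase, shift, axis=1))
    return propagate(screens)                   # propagation + PSF formation
```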


Fig. 11 Distributed-phase-screen model that can be used to efficiently parameterize field-dependent blur.


A useful volume-turbulence parameterization that involves less than 1 turbulence parameter per imaging Nyquist sample was demonstrated in simulation for Phase-Diverse Speckle (PDS) imaging in the presence of pronounced space variance [18]. In this study, the simulated image data were created with 19 distributed phase screens using a wave-optics code, whereas the mismatched (and economical) model used for image restoration involved only 3 phase screens, each with 100 aberration parameters, for a total of 300 parameters per estimated volume-turbulence realization. The imagery was 32 × 32 pixels (for a total of 1024 samples per image). Thus the number of turbulence parameters per recorded pixel in a single image was 300/1024 ≈ 0.3, which meets the sub-unity goal with margin. Near-diffraction-limited imagery was reconstructed from multiple short-exposure images even with this model mismatch. Although PDS data are not the same as MFBD data, this example provides hope that a distributed-phase-screen model may provide the desired economy of parameters.

Another promising approach for parameter economy is to use volume basis functions. For example, a Karhunen-Loève decomposition of the volume turbulence, tuned to the particular Cn² profile, could be truncated in an optimal fashion to manage parameter economy. Of course, wave-propagation codes do not operate on continuous volume-turbulence descriptions. One could convert the truncated volume-turbulence representation into a sequence of any number of phase screens for use in the wave-propagation software needed for the forward model. In this approach, the number of phase screens impacts the speed of the forward model but not the number of parameters needed to characterize the turbulence.

5. Conclusions

We have explored imaging through deep turbulence in the absence of adaptive optics. To do this we simulated a reference scenario that predicts extreme space variance: 355 Fried isoplanatic angles across a single resolution element! Using a wave-propagation code, we simulated the decorrelation of short-exposure PSFs as a function of point-source separation and found that the behavior for non-adaptive and single-conjugate adaptive-optics data is quite different. This motivates an image-based definition of isoplanatic angle for use in the context of image restoration from non-adaptive data. Such a definition would be distinct from the conventional Fried definition, which was introduced to quantify AO performance.

Short-exposure images of minimally extended objects were carefully simulated. These extended blur functions were shown to be sufficiently spatially stabilized so as to retain a central lobe that is clearly narrower than the corresponding long-exposure image. We conclude that even with extreme space variance, collecting multiple short-exposure images is more informative than collecting a single long-exposure image. This is a promising result for image restoration.

We have provided a loose bound on the number of parameters needed to model an extended blur function through the use of suboptimal Gaussian basis functions. As expected, the number of parameters is dramatically reduced relative to the number of parameters used in the constituent PSFs. This result makes plausible the simulation of extended imagery in the presence of extreme turbulence.

The implications of these findings for image restoration from a sequence of short-exposure images were discussed. Whereas it is conceptually straightforward to generalize an MFBD algorithm to accommodate extreme space variance, it is practically challenging. We believe that a distributed-phase-screen model or a truncated series of volume-turbulence basis functions will yield sufficient parametric parsimony to achieve the goal of less than 1 turbulence parameter per image sample. Computational speed of the forward model clearly needs to be addressed.

Funding

Air Force Research Laboratory/Sensors Directorate (AFRL/RY) (FA8650-12-D-1344).

References and links

1. R. Tyson, Principles of Adaptive Optics (CRC, 2010).

2. B. R. Hunt, W. R. Fright, and R. H. T. Bates, “Analysis of the shift-and-add method for imaging through turbulent media,” J. Opt. Soc. Am. 73(4), 456–465 (1983).

3. J. C. Dainty, Laser Speckle and Related Phenomena (Springer-Verlag, 1975).

4. T. J. Schulz, “Multi-frame blind deconvolution of astronomical images,” J. Opt. Soc. Am. A 10(5), 1064–1073 (1993).

5. W. van Kampen and R. G. Paxman, “Multiframe blind deconvolution of infinite-extent objects,” Proc. SPIE 3433, 296–307 (1998).

6. J. Primot, G. Rousset, and J. C. Fontanella, “Deconvolution from wave-front sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A 7(9), 1598–1608 (1990).

7. R. G. Paxman, J. H. Seldin, M. G. Löfdahl, G. B. Scharmer, and C. U. Keller, “Evaluation of phase-diversity techniques for solar-image restoration,” Astrophys. J. 466, 1087–1099 (1996).

8. D. L. Fried, “Anisoplanatism in adaptive optics,” J. Opt. Soc. Am. 72(1), 52–61 (1982).

9. D. G. Sandler, S. Stahl, J. R. P. Angel, M. Lloyd-Hart, and D. McCarthy, “Adaptive optics for diffraction-limited infrared imaging with 8-m telescopes,” J. Opt. Soc. Am. A 11(2), 925–945 (1994).

10. V. N. Mahajan, “Strehl ratio for primary aberrations in terms of their aberration variance,” J. Opt. Soc. Am. 73(6), 860–861 (1983).

11. C. J. Carrano, “Progress in horizontal and slant-path imaging using speckle imagery,” Proc. SPIE 5001, 56–64 (2003).

12. C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003).

13. D. Fraser, G. Thorpe, and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16(7), 1751–1758 (1999).

14. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 157–170 (2013).

15. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, “Automated video enhancement from a stream of atmospherically distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009).

16. R. A. Gonsalves and R. Chidlaw, “Wavefront sensing by phase retrieval,” Proc. SPIE 207, 32–39 (1979).

17. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9(7), 1072–1085 (1992).

18. B. J. Thelen, R. G. Paxman, D. A. Carrara, and J. H. Seldin, “Overcoming turbulence-induced space-variant blur by using phase-diverse speckle,” J. Opt. Soc. Am. A 26(1), 206–218 (2009).

19. R. R. Beland, “Propagation through atmospheric optical turbulence,” in Atmospheric Propagation of Radiation (Volume 2 of The Infrared and Electro-Optical Systems Handbook), F. G. Smith, ed. (SPIE Optical Engineering, 1993).

20. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence (CRC, 1996).

21. G. A. Tyler, “Adaptive optics compensation for propagation through deep turbulence: a study of some interesting approaches,” Opt. Eng. 52(2), 021011 (2012).

22. D. J. Link, “Comparison of the effects of near-field and distributed atmospheric turbulence on the performance of an adaptive optics system,” Proc. SPIE 2120, 87–94 (1994).

23. E. P. Wallner, “Optimizing the locations of multiconjugate wavefront correctors,” Proc. SPIE 2201, 110–116 (1994).

24. R. G. Paxman, B. J. Thelen, and J. J. Miller, “Optimal simulation of volume turbulence with phase screens,” Proc. SPIE 3763, 2–10 (1999).

25. J. W. Goodman, Statistical Optics (Wiley, 2015).

26. A. J. Lambert, D. Fraser, M. R. Sayyah Jahromi, and B. R. Hunt, “Superresolution in image restoration of wide area images viewed through atmospheric turbulence,” Proc. SPIE 4792, 35–43 (2002).

27. M. Charnotskii, “Superresolution in dewarped anisoplanatic images,” Appl. Opt. 47(28), 5110–5116 (2008).
