We explore the feasibility of post-detection restoration when imaging through deep turbulence characterized by extreme anisoplanatism. A wave-optics code was used to simulate relevant short-exposure point spread functions (PSFs) and their decorrelation as a function of point-source separation was computed. In addition, short-exposure images of minimally extended objects were simulated and shown to retain a central lobe that is clearly narrower than the long-exposure counterpart. This suggests that short-exposure image data are more informative than long-exposure data, even in the presence of extreme anisoplanatism. The implications of these findings for image restoration from a sequence of short-exposure images are discussed.
© 2016 Optical Society of America
1. Introduction
In many incoherent-imaging applications, the quality of the imagery is degraded by atmospheric turbulence. A variety of clever pre-detection and post-detection correction approaches have been developed to reduce or remove the effects of turbulence-induced blur. These approaches include adaptive optics (AO), shift-and-add, speckle imaging, multi-frame blind deconvolution [4,5], deconvolution from wave-front sensing, and phase-diverse speckle. Most of these techniques have been developed and exercised under a space-invariant imaging model. The problem of correcting for turbulence-induced blur becomes significantly more challenging in geometries for which a space-variant imaging model is indicated. The source of space variance when imaging through volume turbulence is readily understood by referring to Fig. 1. Light emanating from spatially separated object points will encounter different volumes of turbulence that will induce differing aberrations and yield point spread functions (PSFs) with differing shapes in the image plane.
The change in the blur as a function of displacement in field-of-view (FOV) depends upon the imaging geometry and the associated intervening turbulence. This change is often quantified by the isoplanatic angle, θ₀, defined by Fried in the context of AO correction of wavefront error induced by Kolmogorov turbulence. In the limiting case of large apertures and ideal AO correction, the mean-square residual wavefront phase error as a function of angle θ from the beacon is found to be

σ²(θ) = (θ/θ₀)^(5/3),   (1)

so that the Strehl ratio is reduced to e⁻¹ (≈ 0.37) when the angle reaches θ₀. Thus, the isoplanatic angle provides an indication of the field of view over which an AO system is able to make a reasonable correction.
Pre-detection correction (with AO) is preferable to post-detection correction because it avoids the noise amplification encountered in image restoration. However, there are many applications for which AO is not available. In this paper we are interested in the problem of image restoration in the presence of turbulence-induced space-variant blur. This is a more challenging problem than the space-invariant counterpart. Nevertheless, a number of methods have been developed to operate in the presence of space variance. When the isoplanatic angle is sufficiently large, space-invariant algorithms, such as speckle-imaging approaches, can be applied to overlapping patches, after which mosaicking methods can be exercised to create a wide-field restoration [11,12].
Multiple researchers have contributed to another class of successful algorithms characterized by the operations dewarp-add-and-deblur (the space-variant counterpart of shift-and-add) [13,14]. Another approach is “lucky region fusion,” in which an image-quality map, computed on each short-exposure frame, is used to identify lucky regions that can be merged into a fused video stream. Although these algorithms are directed at mitigating space-variant blur, they work best when the isoplanatic angle is larger than the resolution angle dictated by diffraction in the absence of turbulence.
It is also interesting to consider a generalization of the method of phase diversity [16,17], in which a sequence of in-focus and defocused short-exposure image pairs is used to estimate a diffraction-limited image. This method, called Phase-Diverse Speckle (PDS), has been shown in simulation to work in the presence of pronounced turbulence-induced space variance (where the isoplanatic angle is one third the size of the resolution angle) by appealing to a distributed-phase-screen model .
In this paper we explore the feasibility of post-detection restoration in the presence of deep-turbulence-induced anisoplanatic blur. The degree of space variance encountered with deep turbulence is far greater than that for which image restoration has been previously attempted. For example, for an imaging scenario of interest that we investigate, we compute that there are 355 isoplanatic angles across a single resolution angle! Such extreme anisoplanatism raises several questions: What is the physical interpretation of image formation? Is the simulation of image formation from an extended object computationally tractable? Is there any hope of bounding the number of parameters needed to model the image-formation process so that image restoration is plausible? In the following sections we explore these questions. In Section 2, we review scenarios for imaging through volume turbulence and characterize the particular scenario of interest, the reference scenario, which involves deep turbulence. In Section 3 we explore the simulation of deep-turbulence-induced blur in imagery, for both point objects and objects of modest extent. Section 4 contains a discussion of the feasibility of image restoration for extended objects in the presence of deep turbulence. Conclusions are drawn in Section 5.
2. Imaging-through-turbulence scenarios
The impact of turbulence on imaging performance is governed by the imaging geometry and the associated distribution of turbulence between the object and the telescope pupil. Telescope-independent parameters used to characterize imaging performance in the presence of turbulence include the atmospheric coherence diameter (or Fried parameter), r₀, the isoplanatic angle, θ₀, and the Rytov number (the log-amplitude variance, computed from first-order Rytov theory), σ_χ². For completeness we provide finite-range expressions for these parameters. With z measured from the telescope pupil along a path of length L, wavenumber k = 2π/λ, and refractive-index structure profile C_n²(z),

r₀ = [0.423 k² ∫₀ᴸ C_n²(z) (1 − z/L)^(5/3) dz]^(−3/5),   (2)

θ₀ = [2.914 k² ∫₀ᴸ C_n²(z) z^(5/3) dz]^(−3/5),   (3)

σ_χ² = 0.563 k^(7/6) ∫₀ᴸ C_n²(z) (z/L)^(5/6) (L − z)^(5/6) dz.   (4)

Whereas Eqs. (2) and (4) were derived for spherical-wave propagation suitable for imaging at finite range, the definition of the isoplanatic angle in Eq. (3) is for plane-wave propagation through turbulence that extends for a finite distance L. Therefore, this definition of isoplanatic angle is not a perfect fit for a finite-range scenario, although it is the expression commonly used throughout the community for such scenarios. We acknowledge that this expression overemphasizes the impact of the turbulence near the target, so that it tends to give a smaller isoplanatic angle than we expect experimentally.
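The finite-range path integrals in Eqs. (2) through (4) are straightforward to evaluate numerically. The sketch below (with a hypothetical uniform C_n² profile, not a profile from the paper) assumes the standard spherical-wave weightings with z measured from the telescope; conventions differ among authors, so the weighting functions should be checked against the definitions actually in use.

```python
import numpy as np

def turbulence_params(cn2, L, wavelength, n=20001):
    """Numerically evaluate finite-range r0, theta0, and Rytov number for a
    Cn^2(z) profile, with z measured from the telescope toward the target.
    The spherical-wave (r0, Rytov) and finite-range plane-wave (theta0)
    path weightings are the standard forms assumed here."""
    k = 2.0 * np.pi / wavelength
    z = np.linspace(0.0, L, n)
    c = cn2(z)

    def integrate(y):  # trapezoidal rule (version-safe, no np.trapz)
        return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(z)))

    # Coherence diameter: turbulence near the telescope is weighted fully,
    # turbulence near the target not at all.
    r0 = (0.423 * k**2 * integrate(c * (1.0 - z / L)**(5.0 / 3.0)))**(-3.0 / 5.0)
    # Isoplanatic angle: the z^(5/3) weighting emphasizes turbulence near
    # the target, as noted in the text.
    theta0 = (2.914 * k**2 * integrate(c * z**(5.0 / 3.0)))**(-3.0 / 5.0)
    # Spherical-wave log-amplitude (Rytov) variance.
    rytov = 0.563 * k**(7.0 / 6.0) * integrate(
        c * (z / L)**(5.0 / 6.0) * (L - z)**(5.0 / 6.0))
    return r0, theta0, rytov

# Illustrative uniform-turbulence path (hypothetical values, not the paper's).
r0, theta0, rytov = turbulence_params(
    lambda z: 1e-17 * np.ones_like(z), L=300e3, wavelength=1e-6)
```

For constant C_n² this Rytov integral reduces to the familiar closed form 0.124 C_n² k^(7/6) L^(11/6), which provides a convenient check on the numerics.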
Imaging performance is also influenced by the diffraction associated with the telescope aperture diameter, D. The diffraction-limited angular resolution, which would be achieved in the absence of turbulence, is given by

θ_R = λ/D.   (5)
A final telescope-dependent parameter that is helpful in interpreting imaging performance is the Isoplanatic angle to the Resolution angle Ratio (IRR),

IRR = θ₀/θ_R.   (6)
To illustrate the variability in imaging performance among differing imaging geometries, we consider three distinct imaging scenarios: (1) ground-based astronomy in the zenith direction (up-looking), (2) airborne imaging in the nadir direction (down-looking), and (3) airborne long-range slant-path imaging. Scenario 1 is representative of a solar-astronomy observation, where the telescope is directed at zenith and the target range is essentially infinite. Scenario 2 is representative of high-altitude airborne surveillance, involving an airborne platform at an altitude of 20 km looking in the nadir direction. Scenario 3 is also a high-altitude surveillance scenario but with an extremely long slant-path range. The airborne platform has an altitude of 20 km, a slant range of 300 km, and a depression angle from the platform of 5.16 degrees, under a spherical-earth model. Note that this corresponds to an elevation angle (of the sensor as seen from the target at ground level) of only 2.47 degrees. Whereas scenarios 1 and 2 provide context, it is scenario 3, the reference scenario, that illustrates imaging through deep turbulence. The reference scenario presents several practical issues, including interpretation of low-depression-angle images, line-of-sight access to targets, and reduced signal owing to path-transmission losses. In this paper we concentrate on the effects of turbulence.
To isolate the impact of the differing geometries, we consider a common telescope diameter D, wavelength λ, and vertical C_n² profile among all scenarios; the telescope has a 25% central obscuration (by area), and the diffraction-limited angular resolution follows from Eq. (5). The vertical profile is modeled as a Hufnagel-Valley (HV) distribution:

C_n²(h) = 0.00594 (w/27)² (10⁻⁵ h)¹⁰ exp(−h/1000) + 2.7×10⁻¹⁶ exp(−h/1500) + A exp(−h/100),   (7)

where h is the altitude in meters, w is the rms high-altitude wind speed in m/s, and A is the ground-level value of C_n².
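As an illustration, the HV profile can be evaluated directly. The sketch below assumes the common HV-5/7 parameter defaults (rms wind of 21 m/s and a ground-level C_n² of 1.7 × 10⁻¹⁴ m^(−2/3)); the paper's exact parameter values are not reproduced here.

```python
import numpy as np

def hufnagel_valley(h, w=21.0, A=1.7e-14):
    """Hufnagel-Valley Cn^2 profile [m^(-2/3)] at altitude h [m].
    w is the rms upper-atmosphere wind speed [m/s] and A is the
    ground-level Cn^2; the defaults give the common HV-5/7 profile."""
    h = np.asarray(h, dtype=float)
    return (0.00594 * (w / 27.0)**2 * (1e-5 * h)**10 * np.exp(-h / 1000.0)
            + 2.7e-16 * np.exp(-h / 1500.0)      # free-atmosphere term
            + A * np.exp(-h / 100.0))            # boundary-layer term

cn2_ground = hufnagel_valley(0.0)
cn2_20km = hufnagel_valley(20e3)
```

Note the wind-driven first term produces the characteristic secondary bump of turbulence strength near the tropopause (around 10 km).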
Table 1 shows telescope-independent and telescope-dependent imaging parameters for each of these three scenarios. For scenario 1 we see that IRR > 1. In words, there are multiple resolution elements across an isoplanatic patch, so that deconvolution methods could be applied to individual isoplanatic patches. Thus, mosaicking methods could be utilized to mitigate the space-variant blur over an extended object. In scenario 2, the isoplanatic angle is smaller than the diffraction-limited angular resolution, precluding restoration by mosaicking. However, the tabulated parameters suggest that the blur is very mild in this scenario, so that deblurring is not needed. In scenario 3, IRR ≈ 1/355; the isoplanatic angle is a very small fraction of the resolution angle. The reciprocal of this ratio indicates that there are 355 isoplanatic angles across a resolution angle. In addition, the Rytov number for this scenario is substantial. Deep turbulence is commonly defined by joint conditions on these parameters: a very small isoplanatic angle together with a significant Rytov number. Clearly scenario 3, our scenario of interest, is an example of imaging in deep turbulence, an operational regime with problems not encountered in scenarios 1 and 2.
3. Simulation of blur
In this section, we explore the simulation of deep-turbulence-induced blur in imagery, for both point objects and objects of modest extent. Simulations were performed using the Atmospheric Compensation Simulation (ACS) tool, a time-domain, wave-optics and hardware-emulation code. ACS is a dynamic-simulation tool that utilizes scalar diffraction theory to propagate wavefronts through a sequence of phase screens generated under Kolmogorov statistics. The discrete screens are scaled to match the desired turbulence profile and wavelength. The screen strengths and locations are optimally selected to minimize the difference between a weighted sum of key turbulence parameters (r₀, θ₀, and σ_χ², in order of weighting) computed for the continuous volume, using Eqs. (2) through (4), and the same weighted sum computed for the distribution of discrete phase screens. We note parenthetically that other strategies have been proposed for optimal placement of phase screens, for both plane-wave and spherical-wave geometries. For the wave-optics simulations described herein, a combination of internal and external checks was implemented to ensure that no aliasing occurred during the propagations. In addition, there is a margin on the periphery of each phase screen to enable the possibility of light being deviated back into the geometric-optics cone defined by the pupil for each point source. Wave-optics simulations, when carefully executed, are known to successfully model wave propagation in deep-turbulence scenarios. ACS has been used to model propagation along both vertical and horizontal paths for astronomical imaging systems, high-energy laser systems, and free-space laser communications.
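The split-step machinery underlying such wave-optics codes can be sketched compactly. The following is a minimal illustration (not the ACS implementation) of the two alternating operations: multiplicative application of a phase screen, followed by paraxial angular-spectrum propagation to the next screen.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, dz):
    """Free-space propagation of a sampled scalar field over distance dz
    via the angular-spectrum (Fourier) method, paraxial approximation."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies [1/m]
    fx2 = fx[:, None]**2 + fx[None, :]**2
    H = np.exp(-1j * np.pi * wavelength * dz * fx2)   # Fresnel transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def propagate_through_screens(field, screens, wavelength, dx, dz):
    """Split-step propagation: apply each phase screen [rad], then
    diffract over distance dz to the next screen."""
    for phase in screens:
        field = field * np.exp(1j * phase)
        field = angular_spectrum(field, wavelength, dx, dz)
    return field

# Toy check: zero-strength screens and zero separation leave the field unchanged.
n = 64
f0 = np.ones((n, n), dtype=complex)
f1 = propagate_through_screens(f0, [np.zeros((n, n))] * 3, 1e-6, 0.01, 0.0)
```

Because both steps are unitary, the propagation conserves total field energy, which is one of the internal sanity checks such codes rely on.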
3.1 Simulations with point objects
ACS was used to simulate the propagation of radiation from a point object through differing volume-turbulence realizations (modelled with a distribution of phase screens) associated with the reference scenario (300 km slant-path range) to produce the complex field in the telescope pupil. The associated short-exposure PSF is found by multiplying the complex field by the binary pupil function, applying a DFT, and taking the squared magnitude. Figure 2 shows a surface plot of the short-exposure PSF for a single turbulence realization, along with those of the diffraction-limited PSF, an intermediate-exposure PSF, and a theoretical long-exposure PSF for comparison. Note that the short-exposure PSF retains a central peak and fine structure that is comparable in width to the central peak of the diffraction-limited PSF and much narrower than the central peak of the long-exposure PSF. The intermediate-exposure PSF was constructed by incoherently averaging 100 independent realizations of short-exposure PSFs. Its shape approaches the theoretical long-exposure PSF, although it is not circularly symmetric and has not filled out fully, owing to the moderate number of realizations used.
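The PSF computation described above can be sketched as follows; the zero-pad factor (to oversample the focal plane) and the unit-volume normalization are our choices for illustration.

```python
import numpy as np

def short_exposure_psf(pupil_field, pupil_mask, pad=4):
    """Incoherent PSF from a complex pupil field: apply the binary pupil,
    zero-pad (to oversample the focal plane), DFT, squared magnitude."""
    n = pupil_field.shape[0]
    g = np.zeros((pad * n, pad * n), dtype=complex)
    g[:n, :n] = pupil_field * pupil_mask
    psf = np.abs(np.fft.fftshift(np.fft.fft2(g)))**2
    return psf / psf.sum()          # normalize to unit volume

# Toy example: an unaberrated circular pupil yields an Airy-like PSF.
n = 64
y, x = np.indices((n, n)) - n / 2
mask = (x**2 + y**2) <= (n / 4)**2
psf0 = short_exposure_psf(np.ones((n, n), dtype=complex), mask)
```

A turbulence-aberrated PSF is obtained the same way, with `pupil_field` replaced by the complex field delivered to the pupil by the wave-optics propagation.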
It is well known that such fine structure can be found in virtually all short-exposure PSFs. This is illustrated in Fig. 3, which shows short-exposure PSFs for four different turbulence realizations. It is the retention of this fine structure that allows near-diffraction-limited imagery to be restored from a series of short-exposure images in scenarios for which the blur is space-invariant (e.g., via speckle imaging or multi-frame blind deconvolution).
Of course in the reference scenario, the blur is at the other end of the space-variance spectrum, exhibiting extreme space variance. In order to examine the change in the blur as a function of displacement in the field of view, we simulated the PSF for each point in a closely spaced linear array of point sources (61 points with 0.5 nrad spacing). A single realization of volume turbulence was modeled with 10 phase screens, orthogonal to and optimally distributed along the slant path. The offsets in the linear array were implemented by shifting the phase screens as opposed to shifting the point-source position. Thus the angles are not limited by the array size and the grid spacing, removing the risk of aliasing at the outer point sources of the array. ACS was used to propagate the spherical wave emanating from each point source through the phase screens to the pupil using scalar diffraction theory. A PSF for each point in the linear array was then computed using the wavefront (amplitude and phase) aberration in the pupil. These PSFs can then be compared to see how they change with separation. We use the spatial correlation coefficient, ρ_{i,j}, to quantify differences in the PSFs associated with sources i and j. Figure 4 plots ρ as a function of point-source separation, Δ, for two different turbulence profiles (profile 1 is the HV profile; profile 2 is 0.075 × HV). The mean and the 99% confidence limits (under an imperfect Gaussian-distributed assumption) are shown for the two turbulence profiles. As expected, ρ decays monotonically as the separation between points increases. In addition, the decay is more gradual for profile 2, for which the turbulence strength has been reduced. Note that the error bars are larger for larger separations, owing to the fact that there are fewer source pairs with long separation than with short separation, given our finite linear array of source points.
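As one concrete reading of this metric, the sketch below computes a Pearson correlation over PSF pixels; the exact normalization used in the paper's definition of ρ_{i,j} may differ.

```python
import numpy as np

def psf_correlation(psf_i, psf_j):
    """Spatial correlation coefficient between two sampled PSFs,
    computed as a Pearson correlation over pixels (assumed form)."""
    a = psf_i.ravel() - psf_i.mean()
    b = psf_j.ravel() - psf_j.mean()
    return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

# Sanity checks on synthetic stand-in PSFs.
rng = np.random.default_rng(0)
p = rng.random((32, 32))
rho_self = psf_correlation(p, p)          # identical PSFs -> 1
rho_scaled = psf_correlation(p, 3.0 * p)  # insensitive to overall gain -> 1
```

With this normalization ρ is invariant to overall intensity scaling, so it isolates changes in PSF shape rather than brightness.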
It is interesting that for the two turbulence profiles used in Fig. 4, the average spatial correlation coefficient for ACS-simulated PSFs is about 0.89 at the Fried isoplanatic angle. However, there is no guarantee that the Fried isoplanatic angle will always correspond to a fixed average spatial correlation coefficient. Recall that the Fried isoplanatic angle is defined in the context of adaptive optics. By contrast, we have defined the function ρ(Δ) to characterize image-domain behavior, in the context of non-adaptive imaging. We could contemplate defining a maximum tolerable value for PSF decorrelation when performing an image-restoration task. This could be quantified through analysis of image-restoration degradation as a function of PSF model mismatch. A maximum tolerable decorrelation would imply a corresponding maximum acceptable separation, Δ₀, which could then serve as an alternative, image-based definition for the isoplanatic angle.
To investigate the need to interpret the isoplanatic angle differently in non-adaptive and adaptive-optics applications, we also computed ρ as a function of point-source separation for the case of complete adaptive correction, where the correction corresponds to one of the two points (the beacon). This was straightforward to compute, since our simulations provide the wavefront aberration for each point. Accordingly, we simulated ideal phase correction for the first point, which serves as a beacon, and applied the same correction to the second point, separated by Δ. The corresponding plot is given in Fig. 5, where we see that the PSF decorrelation drops off more slowly than in the non-adaptive case. In addition, we see that the value of ρ that corresponds to the Fried isoplanatic angle is significantly different from that of the non-adaptive case, and there is less consistency between the two turbulence profiles: the Fried isoplanatic angle corresponds to ρ = 0.94 and 0.93 for profiles 1 and 2, respectively. We also evaluated ρ for profile 1 at longer separations than are shown in Figs. 4 and 5 and found that the ρ(Δ) curves for both the non-adaptive and adaptive-optics cases become asymptotically flat. However, the asymptotes for the two cases are significantly different: ≈ 0.57 for the non-adaptive case versus ≈ 0.47 for the AO case. The lower asymptotic value for the AO case can be appreciated by realizing that once the separation from the beacon is sufficiently large, the wavefront aberration is completely uncorrelated with that of the beacon, and the adaptive correction actually degrades the second PSF.
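The ideal-beacon experiment can be illustrated with synthetic pupil phases (random stand-ins below, not Kolmogorov realizations): the beacon's phase is subtracted from both points, perfectly correcting the beacon and leaving a residual anisoplanatic error on the offset point.

```python
import numpy as np

def psf(pupil_phase):
    """Unit-volume PSF for a square pupil with the given phase map [rad]."""
    p = np.abs(np.fft.fft2(np.exp(1j * pupil_phase), s=(256, 256)))**2
    return p / p.sum()

# Hypothetical correlated pupil phases for a beacon and an offset point.
rng = np.random.default_rng(1)
n = 64
phi_beacon = rng.normal(scale=2.0, size=(n, n))
phi_offset = 0.9 * phi_beacon + rng.normal(scale=0.5, size=(n, n))

# Ideal single-conjugate AO: subtract the beacon phase from both points.
psf_beacon = psf(phi_beacon - phi_beacon)   # perfectly corrected (zero phase)
psf_offset = psf(phi_offset - phi_beacon)   # residual anisoplanatic error
```

The ratio of the offset-point peak to the beacon peak behaves like a Strehl ratio for the residual error, and degrades as the phase correlation between the two points weakens.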
3.2 Simulations with objects of modest extent
Under any definition of isoplanatic angle, it is clear that for the reference scenario there are an extreme number of isoplanatic angles across a single diffraction-limited resolution element. This observation provokes the core question we are trying to answer: What happens during imaging of an extended object under conditions when the isoplanatic patch is much smaller than the diffraction-limited resolution of the imaging system? We would like to determine if the short-exposure image of an extended object retains information on the scale of the diffraction-limited detail, as is the case for field-invariant blur. Or does spatial averaging of independent-turbulence PSFs wash out such detail? As a thought experiment, consider the limiting case when the isoplanatic angle becomes infinitesimally small. In this limit, spatial averaging of independent PSFs will approach the temporal averaging that occurs with a long-exposure PSF. Therefore, in this limit there would be no value in collecting short-exposure data for purposes of image restoration, since all of the fine detail would be lost.
To address our core question, we computed reference-scenario images of a minimally extended object: a uniform square, 0.8 μrad on a side, sized to approximately equal an object-domain Nyquist sample for our reference optical system. This object was chosen since it represents a barely resolved patch (or diffraction-limited Nyquist cell) over which space-variant PSFs could be spatially integrated. Because the isoplanatic patch is so small, one might reasonably ask if image formation should be modeled in the regime of partial spatial coherence, owing to the sun’s limited extent. An analysis using the van Cittert-Zernike theorem indicates that there are nearly 200 solar-illumination coherence areas within an isoplanatic patch, for illumination in the same direction as the observation. We conclude that we are safely in the spatially incoherent imaging regime, although this may need to be revisited for other illumination geometries. Accordingly, we simulate the imaging process by first computing the PSF for each point in a densely sampled array (61 × 61) of points on the object. The net image of the square object was then formed by incoherently summing PSF intensities over all contributing sources. The 61 × 61 array was found to be an adequate representation of the minimally extended object by simulating the image for successively denser arrays until the image no longer changed.
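The incoherent image-formation step can be sketched as a weighted sum of space-variant PSFs, each shifted to its source's geometric image position (toy stand-in PSFs below; a real simulation would use the 61 × 61 wave-optics PSFs).

```python
import numpy as np

def extended_image(psfs, positions, weights, shape):
    """Incoherent image of a small extended object: each source point
    contributes its own (space-variant) PSF, shifted to the source's
    geometric image position and weighted by the source intensity."""
    img = np.zeros(shape)
    for h, (dy, dx), w in zip(psfs, positions, weights):
        img += w * np.roll(np.roll(h, dy, axis=0), dx, axis=1)
    return img

# Toy stand-in: one PSF reused on a 3x3 source grid (uniform square object).
rng = np.random.default_rng(2)
h0 = rng.random((32, 32))
positions = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
img = extended_image([h0] * 9, positions, np.ones(9) / 9.0, (32, 32))
```

Because the summation is incoherent (intensities, not fields), flux is conserved and the image is linear in the source weights.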
Figure 6 provides examples of the resulting short-exposure image of a minimally extended object for each of four independent volume-turbulence realizations. Surface plots for the images of both point and minimally extended objects are shown. Note that the image of a minimally extended object can be viewed as an extended blur function and that a truly extended image can be constructed from a mosaic of many such extended blur functions, so long as the scene is sufficiently smooth. The key result of this article is that, despite the very large number of PSFs contributing to an image of the minimally extended object, main-lobe structure is retained with feature sizes intermediate to that of the diffraction-limited and long-exposure limits. That is, information content supporting significant restoration from short-exposure imagery is retained despite the severe anisoplanatism of this imaging scenario.
In order to quantify the detail retained in the images of the minimally extended object (or extended blur), we have computed a radius metric, illustrated in Fig. 7. The figure shows a single short-exposure image of the square object, where the sampling is 4 times the Nyquist rate to facilitate viewing. The peak intensity is identified, along with a surrounding pixel contour (red) defined by the boundary of all pixels whose intensity is less than or equal to half the peak value. The distance between the peak pixel and each contour pixel is computed, forming a list of radii characterizing the extent of the main lobe of each image. The occurrence of radii shorter than the similarly measured radius of the long-exposure image indicates the presence of spatial information lying between that of the long-exposure and diffraction-limited images.
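Our reading of this radius metric can be sketched as follows; the 4-neighborhood boundary test is an implementation choice, not taken from the paper.

```python
import numpy as np

def lobe_radii(img):
    """Radius metric (our reading of Fig. 7): distances from the peak
    pixel to every pixel on the boundary of the below-half-maximum set."""
    peak = np.unravel_index(np.argmax(img), img.shape)
    low = img <= 0.5 * img.max()
    # Boundary: low pixels 4-adjacent to at least one above-half pixel.
    pad = np.pad(~low, 1, constant_values=False)
    near_high = (pad[:-2, 1:-1] | pad[2:, 1:-1] |
                 pad[1:-1, :-2] | pad[1:-1, 2:])
    contour = low & near_high
    ys, xs = np.nonzero(contour)
    return np.hypot(ys - peak[0], xs - peak[1])

# Toy check: an isotropic Gaussian blob yields a nearly constant radius list,
# clustered at the half-max radius sigma*sqrt(2 ln 2) ~ 7.07 for sigma = 6.
y, x = np.indices((65, 65)) - 32
radii = lobe_radii(np.exp(-(x**2 + y**2) / (2 * 6.0**2)))
```

For an elongated or multi-lobed image the radius list spreads out, which is exactly the behavior the histograms of Fig. 8 summarize.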
Figure 8 plots histograms of all radii measurements for 100 images derived from independent volume-turbulence realizations (blue). In addition, we computed an area-weighted mean radius for each image, defined by the radius of a circle having the same area as that within the contour. The histogram of the mean radius over the 100 images is also plotted (red). Radii associated with diffraction-limited and long-exposure images of the square object are indicated for comparison. It is clear that significant opportunities for image restoration are retained by collecting short-exposure data. Over half (53%) of all radii measure 1.5 μrad or less and 30% are 1.1 μrad or less, compared to the long-exposure limit of 3.6 μrad. In a multi-frame restoration approach, only a fraction of the radii in any given image needs to be shorter than the long-exposure radius to provide fine-resolution information. Moreover, only a reasonable percentage of short-exposure images need to retain the fine detail. It is interesting that some radii appear to be smaller than the diffraction limit. This may be due to turbulence-induced superresolution [26,27]. Although we cannot conclusively state that we are observing evidence of superresolution in these particular simulations, we have clearly observed this phenomenon in more recent simulations and with real data collected in the field.
Our wave-optics simulations included the effects of scintillation. Therefore our simulated images of objects with modest extent, which involve summing many constituent PSFs (each having a different amount of amplitude aberration), implicitly include the effects of scintillation. Accordingly, the histograms in Fig. 8 implicitly include all the effects of the intervening turbulence (including scintillation). In addition, these simulations were performed under a monochromatic assumption. We believe that the conclusions regarding the value of short-exposure data generalize to narrow-band data as well. Constraints on the optical bandwidth for conventional speckle imaging ensure that the fractional elongation of speckles due to varying wavelength is kept in check. Similar criteria can be applied in the case of deep turbulence, so that each constituent short-exposure PSF used in creating the narrow-band image of an object of modest extent will retain its basic speckle-realization shape. The incoherently integrated narrow-band image will therefore retain a shape similar to the monochromatic result.
A second important result of this study is that the dimensionality of an extended blur function is greatly reduced compared to the dimensionality of the set of 61² point-object images. The reduced dimensionality provides insight into the space-variant estimation problem that must be solved to realize the image-restoration potential. In order to quantify the dimensionality of an extended blur function, we performed a fitting procedure on each of the 100 images using a sum of K two-dimensional Gaussian basis functions (with elliptical contours), each specified by six parameters: an amplitude, two center coordinates, and three parameters defining the elliptical contours. Figures 9 and 10 show the quality of fit as a function of the number of basis functions, K, for minimally and highly structured images, respectively, selected from the 100 simulations. The images are normalized so that the sum of squared pixel values is unity. The sum of squared error (SSE) between each image and the model fit is indicated for each case shown.
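The fitting procedure can be sketched with a standard nonlinear least-squares solver; the parameterization below (amplitude, center, two widths, rotation per Gaussian) is one of several equivalent six-parameter choices, and the toy target is not one of the paper's images.

```python
import numpy as np
from scipy.optimize import least_squares

def gauss2d(params, y, x):
    """One elliptical 2-D Gaussian with six parameters:
    (amplitude, center y, center x, sigma_u, sigma_v, rotation)."""
    A, y0, x0, su, sv, th = params
    u = (y - y0) * np.cos(th) + (x - x0) * np.sin(th)
    v = -(y - y0) * np.sin(th) + (x - x0) * np.cos(th)
    return A * np.exp(-0.5 * ((u / su)**2 + (v / sv)**2))

def fit_gaussian_sum(img, K, p0):
    """Least-squares fit of a K-term Gaussian-sum model to an image;
    p0 is a (K, 6) array of starting parameters."""
    y, x = np.indices(img.shape)

    def resid(p):
        model = sum(gauss2d(q, y, x) for q in p.reshape(K, 6))
        return (model - img).ravel()

    sol = least_squares(resid, p0.ravel())
    return sol.x.reshape(K, 6), float(np.sum(sol.fun**2))

# Toy check: a single-Gaussian image is recovered with near-zero SSE.
y, x = np.indices((32, 32))
img = gauss2d([1.0, 16.0, 14.0, 3.0, 5.0, 0.3], y, x)
params, sse = fit_gaussian_sum(
    img, 1, np.array([[0.8, 15.0, 15.0, 4.0, 4.0, 0.0]]))
```

With K = 5 this model has 30 free parameters, matching the parameter counts discussed below.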
Table 2 summarizes the fitting error, in terms of SSE, as a function of the number of fitting parameters across all 100 images of a minimally extended object. Typically the SSE is less than 0.01 with K = 5 or fewer basis functions (30 or fewer parameters). Modelling these images with Gaussian basis functions is far from optimal, and we do not claim that the optimization used to find the model parameters necessarily provides a global solution. However, the quality of the fit with 30 parameters suggests that the number of parameters needed to simulate a short-exposure image of a minimally extended object is dramatically smaller than the accumulated number of parameters inherent in the 61² PSFs used to build such an image. By extension, simulation of a fully extended scene now seems at least plausible with this parameter reduction.
4. Implications for image restoration
We have shown that even in the reference scenario, where the space variance is extreme, there is value in collecting and processing short-exposure imagery. Apparently, the constituent short-exposure PSFs that are incoherently added to create the image of a minimally extended object are “spatially stabilized,” so that the image retains a central lobe that is clearly narrower than that of the corresponding long-exposure image (the limiting case of an infinitesimal isoplanatic angle).
Consider collecting a sequence of short-exposure images of a fully extended scene in the reference scenario. We have argued that this data set is more informative, with the potential to restore finer-resolution imagery, than would be a single long-exposure image. How might such short-exposure data be processed to extract the fine-resolution result that we desire? Recall that Multi-Frame Blind Deconvolution (MFBD) is often used with short-exposure data for imagery with space-invariant blur. The algorithm seeks a joint estimate of the static scene, common to all frames, and the parameters that specify the space-invariant blur function for each frame. Conceptually, this algorithm is readily generalized to accommodate space-variant blur, in which case the abbreviation could then stand for Multi-Frame Blind Deblurring. Under a generalized MFBD algorithm, the joint estimate includes estimating parameters that specify the space-variant blur for each frame.
Although a generalized MFBD algorithm is conceptually straightforward, it is practically challenging. A generalized MFBD algorithm will need two features: (1) An economical parametric model for the space-variant blur and (2) a computationally fast implementation of the forward model (modelling the data from an estimated scene and estimated space-variant blur for each frame). The first feature is essential so that the number of parameters needed to specify the field-dependent blur over the entire image does not overwhelm the estimation problem. The second feature is also vital since MFBD is an iterative algorithm that requires the forward-model computations to be repeatedly exercised.
We have shown that there is a tremendous reduction in the number of parameters when modelling the image of a minimally extended object (or extended blur function), relative to the number of parameters implicit in the 61² constituent PSFs. However, this reduction alone is not sufficient for a generalized MFBD algorithm, since each Nyquist sample in the entire data set would still require on the order of 30 parameters. An additional reduction can be achieved with a more efficient representation of a minimally extended object, such as a wavelet or a dictionary representation. This may help but will still not be sufficient. From an analysis of the number of equations and unknowns, it is clear that a generalized MFBD algorithm needs an economy of parameters that accumulates fewer than one parameter per image pixel (Nyquist sample) that is recorded. This is because MFBD jointly estimates the object pixels common to all turbulence realizations and the turbulence parameters characterizing each realization. Clearly, we need a parametric model that captures the relationship between neighboring extended blur functions rather than treating each generalized blur as an independent parametric entity.
A promising approach that may well succeed in reaching the desired parametric reduction involves modelling the field-dependent behavior with distributed phase screens, as illustrated in Fig. 11. This approach is physically motivated, with discrete phase screens approximating volume turbulence and free-space propagation occurring between phase screens. A key feature of a distributed-phase-screen model is that the spatial behavior of the field-dependent blur is entirely coded in the phase-screen parameters. Nearly all wave-optics codes that simulate the propagation of light through volume turbulence use a distributed-phase-screen approach. Recall that we used 10 such phase screens to model the volume turbulence when simulating the reference scenario. Obviously, more phase screens and more parameters per phase screen will provide a more accurate model. However, it may be possible to use fewer phase screens and/or fewer parameters per phase screen, thus reducing the total number of net parameters per turbulence realization, and still retain a model with sufficient fidelity to be of use in image restoration.
A useful volume-turbulence parameterization that involves fewer than one turbulence parameter per imaging Nyquist sample was demonstrated in simulation for Phase-Diverse Speckle (PDS) imaging in the presence of pronounced space variance. In that study, the simulated image data were created with 19 distributed phase screens using a wave-optics code, whereas the mismatched (and economical) model used for image restoration involved only 3 phase screens, each with 100 aberration parameters, for a total of 300 parameters per estimated volume-turbulence realization. The imagery was 32 × 32 pixels (for a total of 1024 samples per image). Thus the number of turbulence parameters per recorded pixel in a single image was 300/1024 ≈ 0.3, which meets the sub-unity goal with margin. Near-diffraction-limited imagery was reconstructed from multiple short-exposure images even with this model mismatch. Although PDS data are not the same as MFBD data, this example provides hope that a distributed-phase-screen model may provide the desired economy of parameters.
Another promising approach to parameter economy is to use volume basis functions. For example, a Karhunen-Loève decomposition of the volume turbulence, tuned to the particular turbulence profile, could be truncated in an optimal fashion to manage parameter economy. Of course, wave-propagation codes do not operate on continuous volume-turbulence descriptions. However, the truncated volume-turbulence representation can be converted into a sequence of any number of phase screens for use in the wave-propagation software needed for the forward model. In this approach, the number of phase screens affects the speed of the forward model but not the number of parameters needed to characterize the turbulence.
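Numerically, a Karhunen-Loève truncation amounts to keeping the leading eigenmodes of the turbulence covariance. A generic sketch follows; the exponential covariance here is a synthetic stand-in for a profile-tuned volume-turbulence covariance, and the grid size and mode count are arbitrary illustrative choices.

```python
import numpy as np

def kl_truncate(cov, k):
    """Return the k leading Karhunen-Loeve modes and variances of a covariance."""
    vals, vecs = np.linalg.eigh(cov)        # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]      # keep the k largest
    return vecs[:, order], vals[order]

# Synthetic stationary covariance as a stand-in for a turbulence covariance
x = np.arange(64)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)

modes, variances = kl_truncate(cov, 5)

# Fraction of total variance captured by the 5 retained modes
captured = variances.sum() / np.trace(cov)
```

Truncating at k modes replaces the full covariance description with k basis functions and k weights per realization, which is the source of the parameter economy.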
We have explored imaging through deep turbulence in the absence of adaptive optics. To do this we simulated a reference scenario that predicts extreme space variance: 355 Fried isoplanatic angles across a single resolution element! Using a wave-propagation code, we computed the decorrelation of short-exposure PSFs as a function of point-source separation and found that the behavior for non-adaptive and single-conjugate adaptive-optics data is quite different. This motivates an image-based definition of isoplanatic angle for use in the context of image restoration from non-adaptive data. Such a definition would be distinct from the conventional Fried definition, which was formulated to quantify AO performance.
Short-exposure images of minimally extended objects were carefully simulated. These extended blur functions were shown to be sufficiently spatially stabilized to retain a central lobe that is clearly narrower than that of the corresponding long-exposure image. We conclude that even with extreme space variance, collecting multiple short-exposure images is more informative than collecting a single long-exposure image. This is a promising result for image restoration.
We have provided a loose bound on the number of parameters needed to model an extended blur function through the use of suboptimal Gaussian basis functions. As expected, the number of parameters is dramatically reduced relative to the number of parameters used in the constituent PSFs. This result makes plausible the simulation of extended imagery in the presence of extreme turbulence.
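A Gaussian-basis parameterization of this kind can be illustrated with a linear least-squares fit of a blur profile onto a small set of fixed Gaussians. The toy profile, basis centers, and common width below are arbitrary illustrative choices, not the bound or basis used in this paper.

```python
import numpy as np

# Toy one-dimensional extended blur profile (illustrative only)
x = np.linspace(-5, 5, 201)
blur = np.exp(-x**2 / 2.0) * (1 + 0.3 * np.cos(2 * x))

# Fixed Gaussian basis: a few centers and a common width
centers = np.linspace(-4, 4, 9)
width = 1.0
basis = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * width**2))

# Linear least-squares for the basis weights: 9 parameters versus 201 samples
weights, *_ = np.linalg.lstsq(basis, blur, rcond=None)
approx = basis @ weights
rel_err = np.linalg.norm(approx - blur) / np.linalg.norm(blur)
```

The point of the sketch is the parameter count: a smooth extended blur is summarized by a handful of basis weights rather than one value per sample.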
The implications of these findings for image restoration from a sequence of short-exposure images were discussed. Whereas it is conceptually straightforward to generalize an MFBD algorithm to accommodate extreme space variance, doing so is practically challenging. We believe that a distributed-phase-screen model or a truncated series of volume-turbulence basis functions will yield sufficient parametric parsimony to achieve the goal of fewer than one turbulence parameter per image sample. The computational speed of the forward model also clearly needs to be addressed.
Air Force Research Laboratory/Sensors Directorate (AFRL/RY) (FA8650-12-D-1344).
References and links
1. R. Tyson, Principles of Adaptive Optics (CRC, 2010).
2. B. R. Hunt, W. R. Fright, and R. H. T. Bates, “Analysis of the shift-and-add method for imaging through turbulent media,” J. Opt. Soc. Am. 73(4), 456–465 (1983). [CrossRef]
3. J. C. Dainty, Laser Speckle and Related Phenomena (Springer-Verlag, 1975).
4. T. J. Schulz, “Multi-frame blind deconvolution of astronomical images,” J. Opt. Soc. Am. A 10(5), 1064–1073 (1993). [CrossRef]
5. W. van Kampen and R. G. Paxman, “Multiframe blind deconvolution of infinite-extent objects,” Proc. SPIE 3433, 296–307 (1998). [CrossRef]
6. J. Primot, G. Rousset, and J. C. Fontanella, “Deconvolution from wave-front sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A 7(9), 1598–1608 (1990). [CrossRef]
7. R. G. Paxman, J. H. Seldin, M. G. Löfdahl, G. B. Scharmer, and C. U. Keller, “Evaluation of phase-diversity techniques for solar-image restoration,” Astrophys. J. 466, 1087–1099 (1996). [CrossRef]
8. D. L. Fried, “Anisoplanatism in adaptive optics,” J. Opt. Soc. Am. 72(1), 52–61 (1982). [CrossRef]
9. D. G. Sandler, S. Stahl, J. R. P. Angel, M. Lloyd-Hart, and D. McCarthy, “Adaptive optics for diffraction-limited infrared imaging with 8-m telescopes,” J. Opt. Soc. Am. A 11(2), 925–945 (1994). [CrossRef]
10. V. N. Mahajan, “Strehl ratio for primary aberrations in terms of their aberration variance,” J. Opt. Soc. Am. 73(6), 860–861 (1983). [CrossRef]
11. C. J. Carrano, “Progress in horizontal and slant-path imaging using speckle imagery,” Proc. SPIE 5001, 56–64 (2003). [CrossRef]
12. C. J. Carrano, “Anisoplanatic performance of horizontal-path speckle imaging,” Proc. SPIE 5162, 14–27 (2003). [CrossRef]
13. D. Fraser, G. Thorpe, and A. Lambert, “Atmospheric turbulence visualization with wide-area motion-blur restoration,” J. Opt. Soc. Am. A 16(7), 1751–1758 (1999). [CrossRef]
15. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M. T. Valley, “Automated video enhancement from a stream of atmospherically distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009). [CrossRef]
16. R. A. Gonsalves and R. Chidlaw, “Wavefront sensing by phase retrieval,” Proc. SPIE 207, 32–39 (1979). [CrossRef]
17. R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9(7), 1072–1085 (1992). [CrossRef]
18. B. J. Thelen, R. G. Paxman, D. A. Carrara, and J. H. Seldin, “Overcoming turbulence-induced space-variant blur by using phase-diverse speckle,” J. Opt. Soc. Am. A 26(1), 206–218 (2009). [CrossRef] [PubMed]
19. R. R. Beland, “Propagation through atmospheric optical turbulence,” in Atmospheric Propagation of Radiation, Vol. 2 of The Infrared and Electro-Optical Systems Handbook, F. G. Smith, ed. (SPIE Optical Engineering Press, 1993).
20. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence (CRC, 1996).
21. G. A. Tyler, “Adaptive optics compensation for propagation through deep turbulence: a study of some interesting approaches,” Opt. Eng. 52(2), 021011 (2012). [CrossRef]
22. D. J. Link, “Comparison of the effects of near-field and distributed atmospheric turbulence on the performance of an adaptive optics system,” Proc. SPIE 2120, 87–94 (1994). [CrossRef]
23. E. P. Wallner, “Optimizing the locations of multiconjugate wavefront correctors,” Proc. SPIE 2201, 110–116 (1994). [CrossRef]
24. R. G. Paxman, B. J. Thelen, and J. J. Miller, “Optimal simulation of volume turbulence with phase screens,” Proc. SPIE 3763, 2–10 (1999). [CrossRef]
25. J. W. Goodman, Statistical Optics (Wiley, 2015).
26. A. J. Lambert, D. Fraser, M. R. Sayyah Jahromi, and B. R. Hunt, “Superresolution in image restoration of wide area images viewed through atmospheric turbulence,” Proc. SPIE 4792, 35–43 (2002). [CrossRef]