Optica Publishing Group

Deconvolution from wave front sensing using the frozen flow hypothesis

Open Access

Abstract

Deconvolution from wave front sensing (DWFS) is an image-reconstruction technique for compensating the image degradation due to atmospheric turbulence. DWFS requires the simultaneous recording of high cadence short-exposure images and wave-front sensor (WFS) data. A deconvolution algorithm is then used to estimate both the target object and the wave front phases from the images, subject to constraints imposed by the WFS data and a model of the optical system. Here we show that by capturing the inherent temporal correlations present in the consecutive wave fronts, using the frozen flow hypothesis (FFH) during the modeling, high-quality object estimates may be recovered in much worse conditions than when the correlations are ignored.

©2011 Optical Society of America

1. Introduction

Images made by upward-looking ground-based telescopes are blurred by optical aberrations introduced by turbulence in the atmosphere through which the light from distant objects must pass. The frozen flow hypothesis (FFH) assumes that the turbulence can be modeled by a series of independent static layers, each moving across the telescope aperture with the prevailing wind at the altitude of the layer. Because of its simplicity, the FFH is frequently used as the basis for numerical studies of telescope imaging performance, particularly in the modeling of adaptive optics (AO) systems. While the FFH is observed not to hold in the real world over long time scales, a number of studies [1–3] have shown that it is a reasonable approximation for short but still interesting periods. For example, from observations made at the 1.5 m telescope of the Starfire Optical Range in New Mexico, at 0.74 μm wavelength, Schöck & Spillar [3] found that the FFH is accurate for time scales τFFH ~20 ms or less. The accuracy degraded with increasing time such that by ~100 ms only 50% of the temporal evolution of the wave front could be described by the FFH. We note that their measurements were obtained under strong turbulence conditions, characterized by D/r0 ~40, where D is the telescope diameter and r0 is the Fried parameter, or atmospheric coherence length.

The relevant time scale for comparison is τ0, the expectation value of the time required for the phase of a wave front in a circular telescope aperture to change by 1 radian rms [4]. Values of τ0 for observations at visible wavelengths are typically 1-3 ms depending on the quality of the site. For a single layer with Kolmogorov turbulence statistics and moving with local wind speed v, τ0 = 0.31 r0/v [5]. Extending the idea from a single layer to the full depth of the atmosphere, the overall atmospheric coherence time is 0.31 r0/v̄, where v̄ is the mean wind speed integrated vertically through the atmosphere and weighted by the Cn² profile [6]. Of importance for any method that seeks to estimate an object by deconvolving the atmospheric point-spread function (PSF) from an image, τ0 is also the period over which the PSF is approximately constant and so sets an effective upper limit to the imaging camera’s exposure time.
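As a concrete illustration of these time scales, the single-layer expression can be evaluated directly. This is a sketch; the r0 and wind values below are illustrative and are not taken from the observations cited above.

```python
# Atmospheric coherence time for a single Kolmogorov layer, tau_0 = 0.31 r0 / v.
def coherence_time(r0_m, wind_speed_ms):
    """Return tau_0 in seconds for Fried parameter r0 (m) and wind speed v (m/s)."""
    return 0.31 * r0_m / wind_speed_ms

# Example: r0 = 10 cm and a 10 m/s wind give tau_0 = 3.1 ms, in the 1-3 ms
# range quoted for visible wavelengths at a good site.
tau0 = coherence_time(0.10, 10.0)
print(f"tau_0 = {tau0 * 1e3:.2f} ms")
```

A smaller r0 (worse seeing) or a faster wind shortens τ0 proportionally, which is why the camera frame time budget is tightest in strong turbulence.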

Critically for the present study, we note that the period of validity of the FFH, τFFH, is an order of magnitude longer than τ0. As a consequence, we may expect that the temporal correlation seen in a series of roughly 10 consecutive short-exposure images, each of integration time ~τ0, will be well modeled by the effects of an atmosphere comprising a number of stacked layers, each propagating with its own wind vector, but otherwise unchanging. The FFH has been studied for use in AO to overcome system lag by predicting atmospheric turbulence a few milliseconds in advance [7,8], but to the best of our knowledge it has not been used in image post processing to capture the inherent temporal correlations that exist in high-cadence imagery of targets observed through atmospheric turbulence. In this paper we extend the technique of image deconvolution from wave front sensing (DWFS) by applying the FFH as a way to exploit temporal correlation.

DWFS is an image-reconstruction technique that uses a deconvolution algorithm to recover an estimate of an object from short-exposure focal plane images blurred by atmospheric turbulence. As a by-product, the technique also recovers the pupil-plane optical wave fronts. Wave-front sensor (WFS) data recorded simultaneously with the images are applied as constraints in the search for the optimal solution. In previous work, the WFS measurements have been used to estimate the PSFs for the observations [9,10]. These PSFs were then deconvolved from the respective short-exposure images to obtain a single estimate of the object intensity distribution. This approach for the object estimation, however, does not take into account any errors in the wave-front phase reconstruction process. An alternative, which overcomes this limitation, is to perform a joint estimation of the object and the wave fronts [11]. This is the approach we take here.

2. DWFS algorithm

By modeling the atmosphere as a series of vertically separated layers, one can in principle account for its non-isoplanatic nature, that is, correctly treat the dependence of the PSF on the line of sight. However, for simplicity in this early work, we adopt the isoplanatic, incoherent imaging model and treat the observed focal plane data, g(x), as a convolution between the object, f(x), and an atmospheric point-spread function h(x) that does not depend on the field position x. That is,

g(x) = (f ⊗ h)(x),   (1)

where ⊗ denotes convolution. This approach also allows us to model the total wave-front phase distortions in the pupil, Φ(x,t), using the FFH as

Φ(x, t+Δt) = Σ_i α_i(x − v_i Δt, t),   (2)

where α_i = φ_i ⊗ η_i are the phase delays caused by layer i at time t, v_i is the velocity of the layer, and φ_i are the model parameters. Spatial correlation is enforced by a Gaussian kernel η that also serves to prevent tears (unphysical wraps of 2π radians) in the recovered phases [12]. Without the FFH, Eq. (2) is replaced by Φ(x,t) = Σ_i α_i(x,t) and the phases at different times are treated as independent realizations of the atmosphere.
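The layer-shifting model of Eq. (2) can be sketched numerically as follows. The function name and the use of integer-pixel shifts via `np.roll` are our illustrative simplifications; a real implementation would interpolate sub-pixel shifts.

```python
import numpy as np

# Sketch of the frozen flow phase model of Eq. (2): the total pupil phase at
# frame k is the sum of static layer screens, each sampled at the position
# x - v_i * k * dt. With np.roll(a, s)[i] = a[i - s], sampling the screen at
# x - v*k corresponds to rolling its contents by +v*k pixels.
def pupil_phase(layers, velocities_pix_per_frame, k):
    """layers: list of 2-D screens (rad); velocities in (vx, vy) pixels/frame."""
    total = np.zeros_like(layers[0])
    for screen, (vx, vy) in zip(layers, velocities_pix_per_frame):
        total += np.roll(screen,
                         shift=(int(round(vy * k)), int(round(vx * k))),
                         axis=(0, 1))
    return total
```

Note that `np.roll` wraps the screen periodically; for a pupil cut from a screen much larger than the pupil diameter, as in the simulations below, the wrap-around does not reach the pupil during a short sequence.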

We estimate the values of φ by minimizing the error metric ε = ε_conv + ε_wfs, where

ε_conv = Σ_k Σ_x (g_k^obs(x) − g_k(x))²,   (3)

g_k^obs(x) = (f^o ⊗ h_k^o)(x) + n_k(x),   (4)

and

ε_wfs = Σ_k Σ_j Σ_x M [ (∇_x Φ_jk − S_jk^x)² + (∇_y Φ_jk − S_jk^y)² ].   (5)

Here g_k^obs(x) are the observed focal plane data for time index k, the superscript o denotes the “true” quantity, n(x) is additive noise, ∇ is the gradient operator, S^x and S^y are the measured x and y gradients of the phase, and M is a binary mask representing the location of each sub-aperture in the Shack-Hartmann sensor. The summation over j admits the case where the WFS data (i.e., measured phase gradients) are accumulated at a higher frame rate than the focal plane data (i.e., there are J WFS frames for each focal plane frame). We model the point-spread functions and object using

h_k(x) = (1/J) Σ_j |FT⁻¹{P_jk(u) exp[iΦ_jk(u)]}|²   (6)

and

f(x) ≈ FT⁻¹{ Σ_k G_k^obs(u) H_k^*(u) [ Σ_k H_k(u) H_k^*(u) + γ ]⁻¹ },   (7)

respectively. Here P(u) are the wave-front amplitudes at position u in the pupil (for the present assumed to be unity), FT⁻¹{·} is the inverse Fourier transform operator, γ is a regularization parameter, * denotes complex conjugation, and h(x) ⇔ H(u), g(x) ⇔ G(u) are Fourier transform pairs. That is, the point-spread functions are modeled via the wave-front phase estimates (Eqs. (2) and (6)) and the object is obtained via a Wiener filter of the observed data (Eq. (7)) [13]. We use a partial conjugate gradient approach [11] for the minimization of ε, where we alternate between minimizing Eqs. (3) and (5). For the work reported here we set γ = 10⁻⁶, which is commensurate with the precision of the calculations.
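The building blocks above — the slope-matching term of Eq. (5), the PSF model of Eq. (6), and the Wiener estimate of Eq. (7) — can be sketched as follows. The function names, the single-frame WFS term (J = 1), and the convention that the PSF array is centered are our assumptions, not the authors' code.

```python
import numpy as np

# WFS error term in the spirit of Eq. (5): compare finite-difference gradients
# of the estimated phase against measured Shack-Hartmann slopes, restricted
# to sub-aperture locations by a binary mask M.
def wfs_error(phi, s_x, s_y, mask):
    gy, gx = np.gradient(phi)  # gradients along rows (y) and columns (x)
    return np.sum(mask * ((gx - s_x) ** 2 + (gy - s_y) ** 2))

# Eq. (6) for one WFS frame: PSF as the squared modulus of the inverse FT of
# the pupil field P * exp(i * phi), zero-padded to n_fft for fine sampling.
def psf_from_phase(pupil, phi, n_fft):
    field = np.zeros((n_fft, n_fft), dtype=complex)
    n = pupil.shape[0]
    field[:n, :n] = pupil * np.exp(1j * phi)  # pupil position only shifts phase
    psf = np.abs(np.fft.ifft2(field)) ** 2
    return psf / psf.sum()  # normalize to unit volume

# Eq. (7): multi-frame Wiener estimate sum_k G_k H_k* / (sum_k |H_k|^2 + gamma).
# Assumes each PSF array is centered, hence the ifftshift before the FT.
def wiener_object(frames, psfs, gamma=1e-6):
    num = np.zeros(frames[0].shape, dtype=complex)
    den = np.zeros(frames[0].shape, dtype=float)
    for g, h in zip(frames, psfs):
        H = np.fft.fft2(np.fft.ifftshift(h))
        num += np.fft.fft2(g) * np.conj(H)
        den += np.abs(H) ** 2
    return np.real(np.fft.ifft2(num / (den + gamma)))
```

Because the denominator pools |H_k|² over all frames, a frequency suppressed in one PSF can still be recovered if another frame carries signal there — the property exploited for extended objects in Section 4.2.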

Of course, the FFH does not apply rigorously to the behavior of the real atmosphere; one expects the performance of the technique described here to degrade for sequences longer than τFFH/τ0 ~10 frames. But good performance can be maintained for longer sequences by a piece-wise extension of the method in which a new set of sequentially overlapping propagating phase screens is introduced every ~τFFH/τ0 frames. The geometry is illustrated in Fig. 1.


Fig. 1 Cartoon of a phase screen representing a single atmospheric layer propagating across the telescope pupil, depicted as the heavy black circle, with wind velocity v indicated by the arrow. The light and medium gray areas represent adjacent regions of wave-front phase for which the FFH is valid, each of length |v|τFFH. The phases in the dark gray area, the overlap between the two, are independently estimated for both screens.


We note that small deviations from frozen flow behavior, for example due to turbulent boiling, can be accounted for in practice by performing a second run of the DWFS algorithm in which each frame is treated as an independent realization of the atmosphere: this allows for the necessary small updates to the phase estimates. This trait allows some latitude in the number of frames used for each propagating phase screen (i.e., we can push to larger values of τFFH/τ0).

3. Simulations

Here we describe simulations of observations made at the 3.6 m AEOS telescope on Mount Haleakala without AO compensation and with zero read noise detectors. Parameters of the simulations are given in Tables 1 and 2. To simulate frozen flow, phase screens were computed at the start of each simulation on a square grid very much larger than the 240-pixel diameter of the telescope pupil. The phase screens were computed as random functions that obeyed the Kolmogorov power spectrum. Wind was modeled by dragging each screen across the pupil at each time step by an amount appropriate to the chosen wind velocity. The PSF of the optical system and simulated atmosphere was computed at each time step as the power spectrum of the electric field phasor in the entrance pupil. The pupil was embedded in a 512-pixel square raster, yielding PSF estimates at slightly better than critical sampling. For extended objects, image estimates were obtained through the convolution in Eq. (1) with the addition of photon noise (Eq. (4)). To implement DWFS, WFS measurement estimates were obtained by sampling the slope of the wave-front phase at the center of each of a square array of sub-apertures dividing the full pupil.
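The text does not specify how the Kolmogorov random screens were generated; one common approach is FFT-based filtering of white noise by the Kolmogorov power spectrum. The sketch below follows that approach; the 0.023 spectral constant is the standard Kolmogorov phase-PSD coefficient, but the sampling and overall normalization here are schematic assumptions.

```python
import numpy as np

# Schematic FFT-based generator of a phase screen obeying the Kolmogorov
# power spectrum: shape complex white noise by sqrt(PSD) in the Fourier
# domain and inverse-transform.
def kolmogorov_screen(n, r0_pix, seed=0):
    """n x n phase screen (rad) with Kolmogorov statistics, r0 given in pixels."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(f, f)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0  # avoid division by zero at the piston frequency
    # Kolmogorov phase PSD ~ 0.023 r0^(-5/3) k^(-11/3); amplitude is its sqrt.
    amp = np.sqrt(0.023) * r0_pix ** (-5.0 / 6.0) * k ** (-11.0 / 6.0)
    amp[0, 0] = 0.0  # remove the (unconstrained) piston term
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(amp * noise)) * n
```

In the simulation geometry described above, the screen side n would be chosen much larger than the 240-pixel pupil so that dragging the screen for the full sequence never runs off its edge.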


Table 1. Atmospheric parameters used in the simulations. The values for r0 and τ0 are given for a wavelength of 0.5 μm.


Table 2. Optical system and object parameters used in the simulations. Photon fluxes were calculated using [20]: Ω = E_sun × 1.944×10⁻¹¹ × 10^(−0.4 mv) × A t Δλ Δt / (hc/λ), where E_sun = 813 J/s/m²/μm at 0.94 μm and 1750 J/s/m²/μm at 0.6 μm, mv is the visual magnitude of the target, A is the area of the aperture, Δλ is the band pass (in μm), Δt is the integration time, t is the efficiency (assumed = 0.5), h is Planck’s constant, c is the velocity of light, and λ is the imaging wavelength.
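The photon-count formula from Table 2 can be evaluated directly. The band pass and integration time used in the example are our illustrative assumptions, not values quoted from Table 2.

```python
# Photon count per integration, following the Table 2 formula:
# Omega = E_sun * 1.944e-11 * 10^(-0.4 mv) * A * t * dlam * dt / (h c / lam)
h = 6.626e-34   # Planck's constant [J s]
c = 2.998e8     # speed of light [m/s]

def photon_count(E_sun, mv, area_m2, efficiency, dlam_um, dt_s, lam_m):
    """Detected photons in one integration; inputs follow Table 2's units."""
    energy = (E_sun * 1.944e-11 * 10 ** (-0.4 * mv)
              * area_m2 * efficiency * dlam_um * dt_s)
    return energy / (h * c / lam_m)  # divide by the energy of one photon

# Example: mv = 6 star, 3.6 m aperture, 0.5 efficiency, an assumed 0.2 um
# band pass and 2 ms exposure, imaging at 0.94 um where E_sun = 813 J/s/m^2/um.
A = 3.1415926 * (3.6 / 2) ** 2
n_photons = photon_count(813.0, 6.0, A, 0.5, 0.2, 2e-3, 0.94e-6)
```

With these assumed values the count comes out in the range of a few hundred thousand photons per frame, i.e. photon noise rather than readout noise dominates, consistent with the zero-read-noise detectors assumed in the simulations.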

We note that our model for this early work is idealized in that it does not take into account a number of real-world effects. Principally, the focal plane and WFS detectors are assumed to see exactly the same wave-front deformation. This amounts to an assumption that the cameras are well synchronized, with no differential aberrations between the two optical paths. Furthermore, the layers of turbulence are assumed to be thin, and their heights well known.

We first look at the imaging scenario where a single star (visual magnitude, mv = 6) is observed at 0.94 μm through strong turbulence, D/r0 = 50 (D/r0 = 100 at 0.5 μm), typical of daytime observing conditions at the AEOS telescope. Because the restoration of a point-like object is well known to be a simpler problem than that for an extended source, we then examine a case with an extended object, taking the Hubble Space Telescope (HST) as our example, under the same seeing conditions. In both cases, we model open-loop signals from the highest resolution mode of the AEOS Shack-Hartmann sensor, which has 30 sub-apertures across the pupil [14]. The sensor therefore provides full high-order wave-front information only for turbulence with D/r0 < 30, rather weaker than our model. This makes for an extremely challenging restoration problem. We note that the AEOS WFS system is currently the highest order system in use on a telescope.

The sheer complexity of the PSFs at this level of turbulence usually results in entrapment of any iterative deconvolution algorithm in a local minimum, and premature stagnation before a reasonable restoration has been obtained. Temporal correlation in the data, however, can mitigate the problem. We have noted that the non-FFH evolution of the wave front is slow compared to the required camera frame rate. A third time scale of interest is τc, the crossing time of a WFS sub-aperture. Typically, sub-apertures are sized to 2-3 r0 at the imaging wavelength, about the optimal scale found by Roddier [15] for fitting a wave front with a flat plane. For sub-apertures of size d = 3 r0, τc = d/v ≈ 10 τ0. Hence τc, like τFFH, is significantly longer than the time between WFS frames (constrained to be ~τ0). We can therefore expect that successive WFS frames will effectively sub-sample the wave front, allowing recovery of aberrations on scales smaller than those the WFS was designed to sense. Taking advantage of the frame-to-frame correlation of the WFS signals will therefore help constrain accurate reconstruction of the PSF speckle structure at correspondingly larger radii.

To implement an appropriate FFH model, we must know the number of layers with significant turbulence and their wind velocity vectors. For real data, these parameters can be determined through correlations of the WFS data themselves [1,16]. Under the isoplanatic assumption made here, the heights of the layers are unimportant. In future work treating the full anisoplanatic problem, the layer heights will of course be required to model the angular dependence of the PSF. In a real system, they can be determined with a number of methods that measure the profile of the atmospheric turbulence, for example scintillometry [17] or SCIDAR [18]. For the cases under study here, we rely on the results of Rimmele [19], who has shown in a site survey that the atmosphere above Mount Haleakala can be reasonably well approximated by two turbulent layers: one at ground level with a velocity of ~5 m/s, the other at a height of 11 km with a velocity of ~29 m/s.

In addition to modeling the inherent temporal correlations in the wave-front phases, use of the FFH reduces the number of variables required to characterize the imaging problem. This is valuable firstly in that it reduces the computation time to arrive at a solution (both the time per iteration of the gradient descent algorithm and the number of iterations needed), and secondly because it improves the robustness of the algorithm by reducing the density of local minima in the Hilbert space of the error metric. For the estimation of the wave-front phases, described in terms of a pixel basis set as we do here, the number of variables required to model K frames is D² Σ_i [(π/4) p_i + |v_i|(K−1) t_wfs/D] when using the FFH with p_i sequentially overlapping wave-front screens at layer i (in the sense of Fig. 1), and πKD²/4 when treating each data frame as an independent realization of the atmosphere. Here t_wfs is the time between WFS frames. Thus, for the parameters listed in Tables 1 and 2 for the single star (p1 = p2 = 1), using the FFH provides more than a four-fold reduction in the number of variables.
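The variable-count comparison can be checked numerically. The pupil sampling and layer velocities below follow the text (240-pixel, 3.6 m pupil; layers at ~5 and ~29 m/s; K = 10 frames); the WFS frame spacing t_wfs = 2 ms is our assumption for a ~τ0 cadence.

```python
import numpy as np

# Pixels needed for the phase model:
#   FFH:          D^2 * sum_i [ (pi/4) p_i + |v_i| (K-1) t_wfs / D ]
#   independent:  pi * K * D^2 / 4
D_pix = 240
D_m = 3.6
scale = D_pix / D_m          # pixels per meter
K = 10                       # number of frames
t_wfs = 2e-3                 # assumed WFS frame spacing [s]
v = [5.0, 29.0]              # layer wind speeds [m/s]
p = [1, 1]                   # one propagating screen per layer

# Each layer needs a pupil-sized disk plus the strip swept out by the wind.
n_ffh = sum(D_pix ** 2 * np.pi / 4 * pi_ + vi * scale * (K - 1) * t_wfs * D_pix
            for pi_, vi in zip(p, v))
n_indep = np.pi * K * D_pix ** 2 / 4

print(f"FFH: {n_ffh:.0f} variables, independent: {n_indep:.0f}, "
      f"ratio = {n_indep / n_ffh:.1f}")
```

With these numbers the reduction comes out a little above four-fold, consistent with the claim in the text.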

4. Results

We have performed two experiments in each of the point source and extended source cases. In the first experiment, the FFH was used to model the wave-front phases in two layers with the parameters of Table 1. In the second, temporal correlations between frames were ignored and the wave-front phase in each frame was treated as an independent realization of the atmosphere. In all cases, photon noise consistent with the fluxes listed in Table 2 was included in the simulations of the imaging camera and WFS frames, and spatial correlations were enforced in the estimated phases using a Gaussian kernel η with a full width at half maximum that is slightly larger than the combined r0 value for the atmosphere (which represents a lower bound on the values for the individual layers and which in practice can be ascertained from the WFS data). The same value was used for all the layers that were modeled, as there is no a priori information available to do otherwise. The initial estimates for the wave front phases were zero and the minimization was stopped when the change in the error metric ε was < 10⁻⁵.

4.1 Point source

Figure 2 shows the results of image restoration from the simulations of the stellar source and illustrates the dramatic difference in the quality of the object estimates derived from the two approaches. With the FFH, the point source is recovered with a Strehl ratio of 77%, while essentially no improvement is made in the quality of the image without it. As we discuss below, the fidelity of the estimate with the FFH is commensurate with that expected in an “ideal” case where we cleanly separate the object and PSF Fourier spectra at all spatial frequencies where the signal-to-noise level is above 1 in the observed data.


Fig. 2 The object estimates of a point source obtained from (left) DWFS using the FFH to model the inherent temporal correlations in the wave-front phases and (right) DWFS assuming no temporal correlation. Note, as Eq. (7) does not enforce positivity, negative values can be present in the restored object. Both images are shown with square root scaling.


Examples of the wave-front phase and corresponding PSF estimates obtained from one of the 10 frames in each of these simulations are shown in Fig. 3. The bottom row shows that DWFS using the FFH can produce high-fidelity estimates of the wave-front phases under strong turbulence conditions characterized by D/r0 = 50. DWFS that ignores the temporal correlations in the data, on the other hand, produces substantially lower-quality wave-front phase estimates.


Fig. 3 Top Left: truth point spread function (PSF) for first frame. Top Middle: estimated PSF using the FFH. The absence of speckle structure at large radial distances reflects the lack of high-spatial frequencies in the reconstructed wave-front phase estimates. Top Right: estimated PSF without using the FFH. Bottom row: wave front phases corresponding to the PSFs directly above. The top row is shown with square root scaling to emphasize the complex speckle structure. The bottom row is displayed with linear scaling (see color bar for scaling in radians).


The results are summarized quantitatively in Table 3 as the rms residual wave-front phase error after subtraction of the estimate. The uncorrected phase error, averaged over the pupil and all 10 frames, was 4339 nm. This was reduced to just 77 nm in the recovery using the FFH. Without the FFH, a much poorer estimate results with 1106 nm of residual phase error. We also show values of the residual phase error after explicit removal of the best-fit plane. This quantity is of greater importance in the present context because overall tip-tilt of the wave front displaces the PSF in the focal plane but does not change its shape. Even large errors in the tip-tilt component therefore do not affect the object estimate obtained by deconvolution. By contrast, because of the poor seeing, even modest fractional errors in the shape of the wave-front phase lead to large mismatches in the speckle structure of the estimated PSFs as we show in the top row of Fig. 3.
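The tilt-removed residual metric can be computed with a least-squares plane fit over the pupil; the helper below is our own sketch of that step, not the authors' code.

```python
import numpy as np

# rms residual wave-front error after removing the best-fit plane (piston,
# tip, tilt) over the pupil: tip-tilt only displaces the PSF, so it is
# excluded from the shape-error budget.
def rms_tilt_removed(residual, pupil_mask):
    """rms of the residual phase over the pupil, best-fit plane subtracted."""
    yy, xx = np.indices(residual.shape)
    m = pupil_mask.astype(bool)
    # Least-squares fit of a + b*x + c*y over the pupil pixels only.
    A = np.column_stack([np.ones(m.sum()), xx[m], yy[m]])
    coeffs, *_ = np.linalg.lstsq(A, residual[m], rcond=None)
    plane = coeffs[0] + coeffs[1] * xx + coeffs[2] * yy
    return np.sqrt(np.mean((residual - plane)[m] ** 2))
```

A residual that is purely a tilted plane yields an rms of zero under this metric, which is exactly the behavior wanted: such an error moves the PSF but leaves its speckle structure, and hence the deconvolved object, unchanged.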


Table 3. Residual wave-front phase errors (in nm) for the point source for tests both with and without the FFH. Top row shows rms values averaged over the pupil and all frames for the full wave-front error. Bottom row shows values with tip and tilt explicitly removed.

The use of the FFH improves the wave-front estimate by nearly an order of magnitude. The advantage of the method lies in its effective spatial sub-sampling of the wave front that arises from the movement of the phase screens between successive frames. The fine-scale wave-front phase information available in the direction of the wind vectors is exploited to overcome aliasing errors introduced by optical aberration at spatial frequencies finer than the WFS sub-apertures. These are quite apparent in the grid structure of the wave-front estimate in the lower right panel of Fig. 3, where the FFH was not used. The structure, commensurate with the sub-aperture spacing, is analogous to the classic 'waffle' pattern that arises as an unsensed mode in AO systems that deploy the Fried geometry between the deformable mirror and the Shack-Hartmann WFS [21]. While the slopes of the recovered wave front sampled on the sub-aperture grid closely match those in the WFS data, piston errors between them are largely unconstrained. Although the amplitude of the grid structure could be reduced by broadening the kernel η, the additional smoothing would preclude the reconstruction of the high-spatial frequencies in the phase that play a dominant role in the morphology of the speckle structure in the focal plane.

4.2 Extended source

We have modeled the HST seen through the same turbulence as the point source. For the present, we assume that the problem is isoplanatic. Given the maximum angular extent of HST seen from the ground, ~4.7 arcsec, about twice the expected size of the isoplanatic angle, this assumption may not always be well satisfied in practice for observations at visible wavelengths, but it allows a useful proof-of-principle test.

The HST model used in our experiments, shown in Fig. 4, displays structure on all spatial scales down to the pixel limit. For reasons discussed below, these experiments relied on 40 frames (80 ms) of data, a considerably longer period than for the point source. The results, also shown in Fig. 4, demonstrate that use of the FFH provides a restored image with a spatial resolution that is far superior to the restoration without inclusion of temporal correlations in the wave-front phase. In the former, the restored image has an equivalent Strehl ratio of 48%, while in the latter the restoration quality is so poor that little if any meaningful information can be obtained about the object.


Fig. 4 Left: Diffraction-limited image of the truth object, a model of the Hubble Space Telescope. Middle Left: Image after convolution of truth object with truth PSF shown in Fig. 3 and addition of photon noise. Middle Right: Restored image from 80 ms of focal plane and WFS data when using the FFH. Right: Restored image without use of the FFH. All images are shown with linear scaling.


The rms residual wave-front errors are summarized in Table 4 in the same manner as for the point source. In this experiment, the uncorrected wave front had rms aberration of 3717 nm. The aberrations without the FFH are reduced to about the same fraction of the uncorrected value as for the point source. Although correction with the FFH is not as good as it was for the point source, we note that the value of 144 nm rms after tilt removal is still better than is typically achieved by high-order AO systems working under conditions of much less severe turbulence.


Table 4. Residual wave-front phase errors for the HST extended source in nm for tests both with and without the FFH.

Good restorations of extended objects will typically require longer image sequences than are needed for point sources. This is because information for the restoration is derived preferentially from regions of frequency space where there is strong signal from both the object and the PSF. The spectrum of a point source of course has uniform amplitude, so low amplitude in areas of G(u), the spectrum of the image data, can be attributed to the PSF. The spectra of extended sources on the other hand can also be generally expected to include regions of low amplitude. The distinguishing characteristic in this case is that such regions in the spectrum of the object will not change, while those in the PSF spectrum will. Restoration of extended sources therefore requires data that span enough time that the ensemble Fourier spectrum of the data has no “spectral holes” (regions where the power is close to zero) at the locations where the object’s Fourier spectrum is above the noise level [22]. It is for this reason that we have used 80 ms of data in the estimation of the HST but only 20 ms for the point source. However, the former is still in the regime where the FFH is a reasonable model.
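The "spectral hole" criterion can be sketched as a Fourier-plane mask over the ensemble of frames. The flat noise floor used below — equal to the total photon count, as expected for Poisson noise in the Fourier domain — is our simplifying assumption.

```python
import numpy as np

# Mark Fourier frequencies where the ensemble power in the data exceeds the
# photon-noise floor; holes (False) at frequencies where the object spectrum
# is above the noise limit the achievable restoration.
def snr_coverage(frames):
    """Boolean mask of frequencies where ensemble |G_k|^2 exceeds the noise floor."""
    power = np.zeros(frames[0].shape)
    noise_floor = 0.0
    for g in frames:
        power += np.abs(np.fft.fft2(g)) ** 2
        noise_floor += g.sum()   # Poisson: E[|noise FT|^2] = total counts
    return power > noise_floor
```

Because the PSF zeros move from frame to frame while the object's zeros do not, summing the power over a long enough sequence fills in PSF-induced holes; this is the rationale for using 80 ms of data for the HST case.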

The imperfect spectral coverage of our simulated data imposes limitations on the quality of the image restoration that we explore in Fig. 5. In the lower panels we show the coverage in the Fourier plane for the point source and the HST where the combined SNR exceeds unity within the band limit imposed by the pupil. The upper panels show the truth objects after filtering by these Fourier masks, and represent the best that one could reasonably expect from our experiments. We find that the achieved results, in Fig. 2 and Fig. 4 respectively, are not quite as good as these limits but they are of comparable quality.


Fig. 5 Top row: the point source and extended object estimates obtained from inverse Fourier transforming the spectra of the truth objects multiplied by the binary masks shown beneath. These panels are of the central region of the frames displayed in Figs. 2 and 4 and show the best recovery that can be expected with the given data. Bottom row: the Fourier plane coverage of the object spectrum provided by the ensemble of data used in each restoration, defined as frequencies where the signal-to-noise ratio is > 1. The black circles indicate the Nyquist cut-off frequency for the observations.


5. Discussion

We have demonstrated the benefit of modeling the temporal correlations in the wave-front phases over short time intervals by using the FFH during the image restoration process. The FFH reduces variable count and allows for an effective sub-sampling of the wave-front phase that facilitates the reconstruction of the high-spatial frequencies in the phase that are not sampled by the sub-apertures in the WFS. This property allows us to extend the range of imaging conditions over which DWFS can be used to obtain high-quality restorations into the regime where r0 is less than the sub-aperture spacing, and where AO compensation loses its effectiveness. This enables high-Strehl imaging at shorter wavelengths and in stronger turbulence than has hitherto been possible.

We note that the additional sampling afforded by the FFH should also help in the reconstruction of the wave front when observing in extremely low-light conditions where photons may not be captured by every sub-aperture. Moreover, it may provide a way to use WFS intensity data (e.g., from a Shack-Hartmann sensor) for estimation of the wave-front amplitudes during the DWFS process. The characteristic spatial scale for (weak) scintillation in the telescope pupil is given by the Fresnel scale, r_F = (z/k)^(1/2), where k = 2π/λ and z is the distance to the scattering screen. For λ = 0.94 μm and z = 11 km, this gives a scale length of ~10 cm, which is commensurate with the sub-aperture spacing for the 30 x 30 WFS used at AEOS. As long as the scintillation time scale τ0(z = 11 km) is comparable to or longer than the camera frame period, the effective sub-sampling property brought by using the FFH should allow for accurate estimation of the wave-front amplitudes: an important consideration for the restoration of observations taken at low-elevation angles.

Finally, now that proof-of-concept has been made for the use of the FFH in DWFS, further work is needed to examine the sensitivity of the technique to errors in the knowledge of the number of layers, the wind amplitudes and directions, and static wave-front errors in the optical system, and to characterize its performance under different noise scenarios.

Acknowledgements

This study was supported by the Air Force Office of Scientific Research, award F9550-09-1-0216.

References and links

1. L. A. Poyneer, M. van Dam, and J.-P. Véran, “Experimental verification of the frozen flow atmospheric turbulence assumption with use of astronomical adaptive optics telemetry,” J. Opt. Soc. Am. A 26(4), 833–846 (2009).

2. E. Gendron and P. Léna, “Single layer atmospheric turbulence demonstrated by adaptive optics observations,” Astrophys. Space Sci. 239(2), 221–228 (1996).

3. M. Schöck and E. J. Spillar, “Method for a quantitative investigation of the frozen flow hypothesis,” J. Opt. Soc. Am. A 17(9), 1650–1658 (2000).

4. D. L. Fried, “Time-delay-induced mean-square error in adaptive optics,” J. Opt. Soc. Am. A 7(7), 1224–1225 (1990).

5. D. F. Buscher, J. T. Armstrong, C. A. Hummel, A. Quirrenbach, D. Mozurkewich, K. J. Johnston, C. S. Denison, M. M. Colavita, and M. Shao, “Interferometric seeing measurements on Mt. Wilson: power spectra and outer scales,” Appl. Opt. 34(6), 1081–1096 (1995).

6. J. W. Hardy, Adaptive Optics for Astronomical Telescopes, Oxford Series in Optical and Imaging Science (Oxford University Press, 1998), §9.4.3.

7. K. A. Page, “Exploiting the frozen flow hypothesis for linear predictions in adaptive optics,” presented in Session 145 on Astronomical Instruments and Analytical Tools at the AAS 199th meeting, Washington DC, Jan 10, 2002.

8. L. A. Poyneer, B. A. Macintosh, and J.-P. Véran, “Fourier transform wavefront control with adaptive prediction of the atmosphere,” J. Opt. Soc. Am. A 24(9), 2645–2660 (2007).

9. J. Primot, G. Rousset, and J. C. Fontanella, “Deconvolution from wave front sensing: a new technique for compensating turbulence-degraded images,” J. Opt. Soc. Am. A 7(9), 1598–1608 (1990).

10. B. M. Welsh and M. C. Roggemann, “Signal-to-noise comparison of deconvolution from wave-front sensing with traditional linear and speckle image reconstruction,” Appl. Opt. 34(12), 2111–2119 (1995).

11. L. M. Mugnier, C. Robert, J.-M. Conan, V. Michau, and S. Salem, “Myopic deconvolution from wave front sensing,” J. Opt. Soc. Am. A 18(4), 862–872 (2001).

12. S. M. Jefferies, M. Lloyd-Hart, E. K. Hege, and J. Georges, “Sensing wave-front amplitude and phase with phase diversity,” Appl. Opt. 41(11), 2095–2102 (2002).

13. R. H. T. Bates and M. J. McDonnell, Image Restoration and Reconstruction (Oxford University Press, 1986), Chap. 3, p. 77.

14. L. C. Roberts, Jr. and C. R. Neyman, “Characterization of the AEOS adaptive optics system,” Publ. Astron. Soc. Pac. 114(801), 1260–1266 (2002).

15. F. Roddier, M. Northcott, and J. E. Graves, “A simple low-order adaptive optics system for near-infrared applications,” Publ. Astron. Soc. Pac. 103, 131–149 (1991).

16. N. M. Milton, M. Lloyd-Hart, J. Bernier, and C. Baranec, “Real-time atmospheric turbulence profile estimation using modal covariance measurements from multiple guide stars,” in Astronomical Adaptive Optics Systems and Applications III, R. K. Tyson and M. Lloyd-Hart, eds., Proc. SPIE 6691, 66910B (2007).

17. S. G. Els, M. Schöck, J. Seguel, A. Tokovinin, V. Kornilov, R. Riddle, W. Skidmore, T. Travouillon, K. Vogiatzis, R. Blum, E. Bustos, B. Gregory, J. Vasquez, D. Walker, and P. Gillett, “Study on the precision of the multiaperture scintillation sensor turbulence profiler (MASS) employed in the site testing campaign for the Thirty Meter Telescope,” Appl. Opt. 47(14), 2610–2618 (2008).

18. S. E. Egner, E. Masciadri, and D. McKenna, “Generalized SCIDAR measurements at Mount Graham,” Publ. Astron. Soc. Pac. 119(856), 669–686 (2007).

19. T. Rimmele, “Haleakala turbulence and wind profiles used for adaptive optics performance modeling,” ATST Project Document RPT-0300 (1996).

20. S. R. Maethner, “Deconvolution from wave-front sensing using optimal wave-front estimators,” Thesis, Air Force Institute of Technology, AFIT/GSO/ENP/96D–01, pp. 42–43 (1996).

21. R. B. Makidon, A. Sivaramakrishnan, M. D. Perrin, L. C. Roberts, Jr., B. R. Oppenheimer, R. Soummer, and J. R. Graham, “An analysis of fundamental waffle mode in early AEOS adaptive optics images,” Publ. Astron. Soc. Pac. 117(834), 831–846 (2005).

22. D. A. Hope and S. M. Jefferies, “A Fourier-based constraint for blind restoration of imagery obtained through strong turbulence,” in Proceedings of the Advanced Maui Optical and Space Surveillance Technologies Conference, S. Ryan, ed. (The Maui Economic Development Board, 2006), p. E26.

Figures (5)

Fig. 1 Cartoon of a phase screen representing a single atmospheric layer propagating across the telescope pupil, depicted as the heavy black circle, with wind velocity v indicated by the arrow. The light and medium gray areas represent adjacent regions of wave-front phase for which the FFH is valid, each of length |v|τFFH. The phases in the dark gray area, the overlap between the two, are independently estimated for both screens.

Fig. 2 The object estimates of a point source obtained from (left) DWFS using the FFH to model the inherent temporal correlations in the wave-front phases and (right) DWFS assuming no temporal correlation. Note that, because Eq. (7) does not enforce positivity, negative values can be present in the restored object. Both images are shown with square-root scaling.

Fig. 3 Top left: truth point spread function (PSF) for the first frame. Top middle: estimated PSF using the FFH. The absence of speckle structure at large radial distances reflects the lack of high spatial frequencies in the reconstructed wave-front phase estimates. Top right: estimated PSF without using the FFH. Bottom row: wave-front phases corresponding to the PSFs directly above. The top row is shown with square-root scaling to emphasize the complex speckle structure; the bottom row is displayed with linear scaling (see color bar for the scale in radians).

Fig. 4 Left: diffraction-limited image of the truth object, a model of the Hubble Space Telescope. Middle left: image after convolution of the truth object with the truth PSF shown in Fig. 3 and addition of photon noise. Middle right: restored image from 80 ms of focal-plane and WFS data using the FFH. Right: restored image without use of the FFH. All images are shown with linear scaling.

Fig. 5 Top row: the point-source and extended-object estimates obtained by inverse Fourier transforming the spectra of the truth objects multiplied by the binary masks shown beneath. These panels show the central region of the frames displayed in Figs. 2 and 4 and represent the best recovery that can be expected with the given data. Bottom row: the Fourier-plane coverage of the object spectrum provided by the ensemble of data used in each restoration, defined as the frequencies where the signal-to-noise ratio is > 1. The black circles indicate the Nyquist cut-off frequency for the observations.
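The band-limited “best expected recovery” described in the Fig. 5 caption can be sketched numerically: multiply the truth object's spectrum by a binary Fourier mask and inverse-transform. This is a minimal sketch with a toy object and an assumed circular mask; in the paper the masks are the measured SNR > 1 Fourier coverage of the data, not a simple disc.

```python
# Sketch of the Fig. 5 "best expected recovery": keep only the object-spectrum
# frequencies inside a binary mask, then inverse Fourier transform.
# The toy object and the mask radius are illustrative assumptions.
import numpy as np

N = 64
f_true = np.zeros((N, N))
f_true[28:36, 28:36] = 1.0                    # toy extended object

# Binary mask standing in for the measured SNR > 1 Fourier coverage.
u = np.fft.fftfreq(N) * N                     # integer frequency grid
uu, vv = np.meshgrid(u, u)
mask = (np.hypot(uu, vv) < N / 6).astype(float)

# Band-limited recovery: all information outside the mask is lost.
best = np.fft.ifft2(np.fft.fft2(f_true) * mask).real
```

Because the mask keeps the zero frequency, the total flux of the object is preserved while its sharp edges are smoothed and ringed, which is the behavior visible in the top row of Fig. 5.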

Tables (4)

Table 1 Atmospheric parameters used in the simulations. The values for r0 and τ0 are given for a wavelength of 0.5 μm.

Table 2 Optical system and object parameters used in the simulations. Photon fluxes were calculated using [20]: Ω = E_sun × 1.944×10⁻¹¹ × 10^(−0.4 m_v) × A × t × Δλ × Δt / (hc/λ), where E_sun = 813 J s⁻¹ m⁻² μm⁻¹ at 0.94 μm and 1750 J s⁻¹ m⁻² μm⁻¹ at 0.6 μm, m_v is the visual magnitude of the target, A is the area of the aperture, Δλ is the band pass (in μm), Δt is the integration time, t is the optical efficiency (assumed to be 0.5), h is Planck's constant, c is the speed of light, and λ is the imaging wavelength.

Table 3 Residual wave-front phase errors (in nm) for the point source, for tests both with and without the FFH. The top row shows rms values averaged over the pupil and all frames for the full wave-front error; the bottom row shows values with tip and tilt explicitly removed.

Table 4 Residual wave-front phase errors (in nm) for the HST extended source, for tests both with and without the FFH.
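The photon-flux formula quoted in the Table 2 caption can be written out directly. The example inputs below (target magnitude, aperture size, band pass, exposure time) are illustrative assumptions, not the paper's exact simulation values.

```python
# Photon-flux formula from the Table 2 caption (Ref. [20]):
#   Omega = E_sun * 1.944e-11 * 10^(-0.4 m_v) * A * t * dlam * dt / (hc/lambda)
# The example numbers below are illustrative assumptions only.
import math

H = 6.626e-34   # Planck's constant [J s]
C = 2.998e8     # speed of light [m/s]

def photon_flux(e_sun, m_v, area, dlam, dt, lam, t=0.5):
    """Expected photon count for one exposure.

    e_sun : solar spectral irradiance at the imaging wavelength [J/(s m^2 um)]
    m_v   : visual magnitude of the target
    area  : aperture area A [m^2]
    dlam  : band pass [um];  dt : integration time [s]
    lam   : imaging wavelength [m];  t : optical efficiency
    """
    energy = e_sun * 1.944e-11 * 10 ** (-0.4 * m_v) * area * t * dlam * dt
    return energy / (H * C / lam)   # divide by the energy per photon

# Example: an m_v = 5 target on a 1.6 m aperture, 0.2 um band pass, 2 ms
# exposure at 0.94 um, where Table 2 gives E_sun = 813 J/(s m^2 um).
area = math.pi * (1.6 / 2) ** 2
n_photons = photon_flux(813.0, 5.0, area, 0.2, 2e-3, 0.94e-6)  # ~3e5 photons
```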

Equations (7)


$$g(\mathbf{x}) = (f \ast h)(\mathbf{x}),\tag{1}$$

$$\Phi(\mathbf{x}, t+\Delta t) = \sum_i \alpha_i(\mathbf{x} - \mathbf{v}_i \Delta t,\; t),\tag{2}$$

$$\varepsilon_{\rm conv} = \sum_k \sum_{\mathbf{x}} \left( g_k^{\rm obs}(\mathbf{x}) - g_k(\mathbf{x}) \right)^2,\tag{3}$$

$$g_k^{\rm obs}(\mathbf{x}) = (f^o \ast h_k^o)(\mathbf{x}) + n_k(\mathbf{x}),\tag{4}$$

$$\varepsilon_{\rm wfs} = \sum_k \sum_j \sum_{\mathbf{x} \in M} \left[ \left( \frac{\partial \Phi_{jk}}{\partial x} - S_{jk}^x \right)^2 + \left( \frac{\partial \Phi_{jk}}{\partial y} - S_{jk}^y \right)^2 \right],\tag{5}$$

$$h_k(\mathbf{x}) = \frac{1}{J} \sum_j \left| {\rm FT}^{-1}\left\{ P_{jk}(\mathbf{u}) \exp\left[ i \Phi_{jk}(\mathbf{u}) \right] \right\} \right|^2,\tag{6}$$

$$\hat{f}(\mathbf{x}) = {\rm FT}^{-1}\left\{ \sum_k G_k^{\rm obs}(\mathbf{u}) H_k^{\ast}(\mathbf{u}) \left[ \sum_k H_k(\mathbf{u}) H_k^{\ast}(\mathbf{u}) + \gamma \right]^{-1} \right\}.\tag{7}$$
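A minimal numerical sketch of Eqs. (6) and (7) on synthetic data may help: each frame's PSF is formed from a pupil function and a phase screen, and the object is then restored with the regularized multi-frame inverse filter. Here J = 1 phase screen per frame, the phases are white noise, and the grid size, frame count, and γ are arbitrary illustrative choices, not the paper's values.

```python
# Build PSFs from pupil phases (Eq. 6) and restore a point source with the
# regularized multi-frame inverse filter (Eq. 7). Synthetic toy data only.
import numpy as np

rng = np.random.default_rng(0)
N, K = 64, 5                                  # grid size, number of frames

# Circular pupil P(u) and one random phase screen Phi_k(u) per frame (J = 1).
u = np.arange(N) - N // 2
pupil = (np.hypot(*np.meshgrid(u, u)) < N // 4).astype(float)
psfs = []
for _ in range(K):
    phi = rng.normal(scale=0.5, size=(N, N))          # assumed phase [rad]
    field = np.fft.ifft2(np.fft.ifftshift(pupil * np.exp(1j * phi)))
    psf = np.abs(field) ** 2                          # Eq. (6) with J = 1
    psfs.append(psf / psf.sum())

# Truth object: one off-center point source; noiseless frames g_k = f * h_k.
f_true = np.zeros((N, N))
f_true[N // 2, N // 2 + 5] = 1.0
F = np.fft.fft2(f_true)
H = np.array([np.fft.fft2(h) for h in psfs])
G = H * F

# Eq. (7): f_hat = FT^-1{ sum_k G_k H_k* / (sum_k H_k H_k* + gamma) }
gamma = 1e-3
num = (G * H.conj()).sum(axis=0)
den = (H * H.conj()).sum(axis=0).real + gamma
f_hat = np.fft.ifft2(num / den).real
```

With noiseless data the filter reduces to a near-identity wherever Σ_k |H_k|² ≫ γ, so the recovered point source sits at its true position; with real, noisy frames γ trades residual noise against resolution.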