Abstract

In this paper, we present an algorithm for the restoration of images with an unknown, spatially-varying blur. Existing computational methods for image restoration require the assumption that the blur is known and/or spatially-invariant. Our algorithm uses a combination of techniques. First, we section the image, and then treat the sections as a sequence of frames whose unknown PSFs are correlated and approximately spatially-invariant. To estimate the PSFs in each section, phase diversity is used. With the PSF estimates in hand, we then use a technique by Nagy and O’Leary for the restoration of images with a known, spatially-varying blur to restore the image globally. Test results on star cluster data are presented.

©2006 Optical Society of America

1. Introduction

The mathematical model for image formation is given by the linear operator equation

$$d = Sf + \eta, \tag{1.1}$$

where d is the blurred, noisy image, f is the unknown true image, η is additive noise, and S is the blurring operator. In the case of spatially invariant blurs, Sf can be written as a convolution of the associated point spread function (PSF) s and the object f; that is,

$$(Sf)(u,v) = (s*f)(u,v) := \iint s(u-\xi,\,v-\eta)\,f(\xi,\eta)\,d\xi\,d\eta.$$

However, if the blur is spatially variant, $Sf$ cannot be written as a simple convolution. Spatially variant blur can occur from distortions due to telescope optics, as was the case for the original Hubble Space Telescope Wide-Field/Planetary Camera, which had a large amount of spatial variation because of errors in the shaping mirrors [1]. Other important examples include objects moving at different velocities [2] and turbulence outside the telescope pupil [3]. Accurately modeling the spatially variant PSF in these applications can lead to substantially better reconstructions of the image. On the other hand, allowing for a fully spatially-varying PSF results in a computationally intractable problem [4]. Thus one typically assumes that $s$ is spatially-invariant on subregions of the image; that is, the image $d$ can be broken up into regions $\{\Omega_n\}_{n=1}^{p}$ such that the blur in each region is spatially-invariant. Then if we let $c_n$ be the indicator function on $\Omega_n$, i.e. $c_n(u,v) = 1$ for $(u,v) \in \Omega_n$ and $0$ otherwise, and we define $s_n$ to be the (approximately) spatially-invariant PSF on $\Omega_n$, the PSF $s$ will be given by

$$s(u,v;\xi,\eta)=\sum_{n=1}^{p} s_n(u-\xi,\,v-\eta)\,c_n(\xi,\eta). \tag{1.2}$$

Substituting (1.2) into (1.1), we obtain

$$d=\sum_{n=1}^{p} s_n * f_n + \eta, \tag{1.3}$$

where $f_n(\xi,\eta) := c_n(\xi,\eta)\,f(\xi,\eta)$.
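To make model (1.3) concrete, the following sketch applies a sectioned blur by masking the object with each indicator $c_n$ and convolving with the corresponding PSF via the FFT. The image size, the two-region partition, and the delta/shift PSFs are illustrative choices, not data from the paper.

```python
import numpy as np

def blur_sectioned(f, psfs, masks):
    """Apply the sectioned blur model d = sum_n s_n * (c_n f) of Eq. (1.3).

    f     : 2-D object array
    psfs  : list of PSF arrays (same shape as f), one per region
    masks : list of 0/1 indicator arrays c_n (same shape as f)
    """
    d = np.zeros_like(f, dtype=float)
    for s, c in zip(psfs, masks):
        # FFT-based (circular) convolution of the masked object with the PSF
        d += np.real(np.fft.ifft2(np.fft.fft2(s) * np.fft.fft2(c * f)))
    return d

# Toy example: two point sources, image split into left/right halves.
f = np.zeros((8, 8)); f[4, 2] = 1.0; f[4, 6] = 1.0
left = np.zeros((8, 8)); left[:, :4] = 1.0
right = 1.0 - left
delta = np.zeros((8, 8)); delta[0, 0] = 1.0   # identity PSF on the left half
shift = np.zeros((8, 8)); shift[0, 1] = 1.0   # one-pixel shift on the right half
d = blur_sectioned(f, [delta, shift], [left, right])
# the left star stays at (4, 2); the right star moves to (4, 7)
```

Additive noise $\eta$ can then be simulated by perturbing $d$, e.g. with Gaussian samples.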

Computational methods for the restoration of images arising from (1.3) have been sought by several researchers, though only in the case that the sn ’s are known. Faisal et al. [5] apply the Richardson-Lucy algorithm, and discuss a parallel implementation. Boden et al. [6] also describe a parallel implementation of the Richardson-Lucy algorithm, and consider a model that allows for smooth transitions in the PSF between regions. Nagy and O’Leary [7] use a conjugate gradient (CG) algorithm with piecewise constant and linear interpolation, and also suggest a preconditioning scheme (for both interpolation methods) that can substantially improve rates of convergence.

In this paper, we present an algorithm for restoring images arising from model (1.3) when the sn ’s are unknown. In our approach, the PSF in each region is estimated using the technique of phase diversity [8]. Once the PSF estimates have been obtained, we use a global restoration scheme of Nagy and O’Leary mentioned in the previous paragraph to obtain the restored image.

The paper is organized as follows. We begin in Section 2 with a description of the technique of phase diversity. Our computational method is then presented in Section 3. Results on some tests using simulated star field images taken through atmospheric turbulence are provided in Section 4, while conclusions and comments on the spatially-varying blur removal problem are given in Section 5.

2. PSF estimation and phase diversity

As discussed in, e.g. [9], with atmospheric turbulence the phase varies both with time and position in space. Adaptive optics systems use deformable mirrors to correct for these phase variations. Errors in this correction process arise from a variety of sources, e.g., errors in the measurement of phase, inability of the mirror to conform exactly to the phase shape, and lag time between phase measurement and mirror deformation. Thus, a spatially variant model can potentially produce better restorations.

In this section we describe a phase diversity-based scheme to approximate the PSF associated with each section of a segmented image. For a sufficiently fine segmentation, the PSF can be assumed to be essentially spatially invariant, and thus a blind deconvolution scheme can be applied. The mathematics of this phase recovery process was first described by Gonsalves [8], and has been applied extensively for imaging through atmospheric turbulence [4].

Assuming that light emanating from the object is incoherent, the dependence of the PSF on the phase is given by

$$s[\phi]=\left|\mathcal{F}^{-1}\!\left(p\,e^{\iota\phi}\right)\right|^{2}, \tag{2.4}$$

where p denotes the pupil, or aperture, function, ι = √-1, and ℱ denotes the 2-D Fourier transform,

$$(\mathcal{F}h)(y)=\int_{\mathbb{R}^2} h(x)\,e^{-\iota 2\pi\, x\cdot y}\,dx,\qquad y\in\mathbb{R}^2. \tag{2.5}$$

The pupil function $p = p(x_1,x_2)$ is determined by the extent of the telescope’s primary mirror. In phase diversity-based blind deconvolution, the $k$th diversity image is given by

$$d_k=s[\phi+\theta_k]*f+\eta_k,\qquad k=1,\dots,K, \tag{2.6}$$

where $\eta_k$ represents noise in the data, $f$ is the unknown object, $s$ is the point spread function (PSF), $\phi$ is the unknown phase function, and $\theta_k$ is the $k$th phase diversity function.

In atmospheric optics [4], the phase $\phi(x_1,x_2)$ quantifies the deviation of the wave front from a reference planar wave front. This deviation is caused by variations in the index of refraction (wave speed) along light ray paths, and is strongly dependent on air temperature. Because of turbulence, the phase varies with time and position in space and is often modelled as a stochastic process.

Additional changes in the phase ϕ can occur after the light is collected by the primary mirror, e.g., when adaptive optics are applied. This involves mechanical corrections obtained with a deformable mirror to restore ϕ to planarity. By placing beam splitters in the light path and modifying the phase differently in each of the resulting paths, one can obtain more independent data. The phase diversity functions θk represent these deliberate phase modifications applied after light is collected by the primary mirror. The easiest to implement is defocus blur, modelled by a quadratic

$$\theta_k(x_1,x_2)=b_k\,(x_1^2+x_2^2), \tag{2.7}$$

where the parameters $b_k$ are determined by defocus lengths. In practice, the number of diversity images is often quite small, e.g., $K = 2$ in the numerical simulations to follow. In addition, one of the images, which we will denote using index $k = 1$, is obtained with no deliberate phase distortion, i.e., $\theta_1 = 0$ in (2.6).
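The map from phase to PSF in (2.4), together with the defocus diversity (2.7), is straightforward to prototype. In the sketch below the grid, the circular aperture radius, the toy aberration, and the defocus strength $b_2$ are all illustrative assumptions; only the formulas come from the text.

```python
import numpy as np

def psf_from_phase(pupil, phase):
    """s[phi] = |F^{-1}(p exp(i phi))|^2, Eq. (2.4), normalized to unit sum."""
    field = np.fft.ifft2(pupil * np.exp(1j * phase))
    s = np.abs(field) ** 2
    return s / s.sum()

n = 32
x = np.fft.fftfreq(n)                                 # pupil-plane coordinates (a choice)
X1, X2 = np.meshgrid(x, x, indexing="ij")
pupil = ((X1**2 + X2**2) <= 0.25**2).astype(float)    # circular aperture
phi = 0.5 * np.sin(8 * np.pi * X1)                    # toy aberration phase
b2 = 40.0                                             # illustrative defocus strength
thetas = [np.zeros_like(phi), b2 * (X1**2 + X2**2)]   # theta_1 = 0, theta_2 = defocus
psfs = [psf_from_phase(pupil, phi + t) for t in thetas]
```

Convolving the object with each `psfs[k]` then yields the diversity data $d_k$ of (2.6).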

2.1. The minimization problem

To estimate the phase $\phi$ and the object $f$ from the data (2.6), we consider the least squares fit-to-data functional

$$J_{\text{data}}[\phi,f]=\frac{1}{2K}\sum_{k=1}^{K}\bigl\|s[\phi+\theta_k]*f-d_k\bigr\|^2. \tag{2.8}$$

Here $\|\cdot\|$ denotes the standard $L^2$ norm. By the convolution theorem and the fact that the Fourier transform preserves the $L^2$ norm, one can express this in terms of the Fourier transforms $F = \mathcal{F}\{f\}$, $D_k = \mathcal{F}\{d_k\}$, and $S_k[\phi] = \mathcal{F}\{s[\phi + \theta_k]\}$:

$$J_{\text{data}}[\phi,F]=\frac{1}{2K}\sum_{k=1}^{K}\bigl\|S_k[\phi]\,F-D_k\bigr\|^2. \tag{2.9}$$

Since deconvolution and phase retrieval are both ill-posed problems, any minimizer of Jdata is unstable with respect to noise in the data. Hence we add regularization terms to obtain the full cost functional,

$$J_{\text{full}}[\phi,f]=J_{\text{data}}[\phi,f]+\gamma\,J_{\text{object}}[f]+\alpha\,J_{\text{phase}}[\phi]. \tag{2.10}$$

Here the regularization parameters γ and α are positive real numbers, and the regularization functionals Jobject and Jphase provide stability and incorporate prior information.

Because of atmospheric turbulence, variations in the refractive index, and hence the phase itself, can be modelled as a random process [4]. We apply the von Karman turbulence model, which assumes this process is second order, wide sense stationary, and isotropic with zero mean; see, e.g., [4]. It can be characterized by its power spectral density,

$$\Phi(\omega)=\frac{C_1}{\left(C_2+|\omega|^2\right)^{11/6}}, \tag{2.11}$$

where $\omega = (\omega_x,\omega_y)$ represents spatial frequency. Corresponding to this stochastic model for phase, we take the phase regularization functional

$$J_{\text{phase}}[\phi]=\frac{1}{2}\bigl\langle \Phi^{-1}\mathcal{F}\{\phi\},\,\mathcal{F}\{\phi\}\bigr\rangle, \tag{2.12}$$

where $\langle f,g\rangle = \int f(\omega)\,g^*(\omega)\,d\omega$, and the superscript $*$ denotes complex conjugation. For regularization of the object, we take the “minimal information prior”

$$J_{\text{object}}[f]=\frac{1}{2}\|f\|^2=\frac{1}{2}\|F\|^2. \tag{2.13}$$

Note that the object regularization functional (2.13) is quadratic and the dependence of the fit-to-data functional (2.8) on the object f is quadratic. Moreover, the Hessian with respect to the object of the full cost functional (2.10) is symmetric and positive definite with eigenvalues bounded below by γ. By setting the gradient with respect to the object equal to zero, one obtains a linear equation whose solution yields the object at a minimizer for Jfull [10]. From (2.9)–(2.10) and (2.13) one obtains the Fourier representation for the minimizing object,

$$F=\frac{P[\phi]^*}{Q[\phi]}, \tag{2.14}$$

where

$$P[\phi]=\sum_k D_k^*\,S_k[\phi],\qquad Q[\phi]=\gamma+\sum_k \bigl|S_k[\phi]\bigr|^2. \tag{2.15}$$

Substituting (2.14) back in (2.9)–(2.10), one obtains the cost functional that we will minimize,

$$J[\phi]=J_{\text{reduced data}}[\phi]+\alpha\,J_{\text{phase}}[\phi], \tag{2.16}$$

where

$$J_{\text{reduced data}}[\phi]=\sum_k\|D_k\|^2-\left\langle \frac{P[\phi]}{Q[\phi]},\,P[\phi]\right\rangle. \tag{2.17}$$

See Appendix B of [9] for a detailed derivation.
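In the Fourier domain, the closed-form object (2.14)–(2.15) and the reduced functional (2.17) are cheap to evaluate. The sketch below follows those formulas directly, with constant factors such as the $1/2K$ in (2.8) omitted to match (2.17) as written; the arrays $D_k$ and $S_k$ are assumed precomputed.

```python
import numpy as np

def object_and_reduced_J(D, S, gamma):
    """Closed-form minimizing object F = P*/Q (Eq. (2.14)) and the
    reduced fit-to-data value (Eq. (2.17)).

    D, S  : lists of 2-D Fourier-domain arrays D_k and S_k[phi]
    gamma : object regularization parameter (> 0)
    """
    P = sum(np.conj(Dk) * Sk for Dk, Sk in zip(D, S))   # Eq. (2.15)
    Q = gamma + sum(np.abs(Sk) ** 2 for Sk in S)        # Eq. (2.15)
    F = np.conj(P) / Q                                  # Eq. (2.14)
    # <P/Q, P> = sum |P|^2 / Q, since Q is real and positive
    J = sum(np.sum(np.abs(Dk) ** 2) for Dk in D) - np.sum(np.abs(P) ** 2 / Q)
    return F, float(J)

# Sanity check: noise-free data D_k = S_k F_true is fit exactly as gamma -> 0.
rng = np.random.default_rng(0)
Ftrue = rng.standard_normal((4, 4))
S = [np.ones((4, 4)), 2.0 * np.ones((4, 4))]
D = [Sk * Ftrue for Sk in S]
F, J = object_and_reduced_J(D, S, 1e-12)   # F ~ Ftrue, J ~ 0
```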

Our objective is to minimize (2.16) in order to obtain a reconstruction of the phase ϕ. In order to do that, we need an effective computational method. Such a method is discussed in the next section.

2.2. The minimization algorithm

To minimize J in (2.16) we will use, as in [11], a quasi-Newton, or secant, method known as limited memory BFGS (L-BFGS) [12]. A generic quasi-Newton algorithm with line search globalization is presented below. We denote the gradient of J at ϕ by grad J(ϕ) and the Hessian of J at ϕ by Hess J(ϕ).

Quasi-Newton / Line Search Algorithm

ν := 0;
ϕ_0 := initial guess;
begin quasi-Newton iterations
    g_ν := grad J(ϕ_ν);                     % compute gradient
    B_ν := SPD approximation to Hess J(ϕ_ν);
    d_ν := -B_ν^{-1} g_ν;                   % compute quasi-Newton step
    τ_ν := argmin_{τ>0} J(ϕ_ν + τ d_ν);     % line search
    ϕ_{ν+1} := ϕ_ν + τ_ν d_ν;               % update approximate solution
    ν := ν + 1;
end quasi-Newton iterations

In practice, the line search subproblem is solved inexactly [12].

The BFGS method [12] is one of the most popular quasi-Newton techniques. Given $B_\nu$, this method generates a new Hessian approximation $B_{\nu+1}$ in terms of the differences between successive approximations to the solution and to its gradient,

$$s_\nu \overset{\text{def}}{=} \phi_{\nu+1}-\phi_\nu, \tag{2.18}$$
$$y_\nu \overset{\text{def}}{=} \operatorname{grad}J(\phi_{\nu+1})-\operatorname{grad}J(\phi_\nu). \tag{2.19}$$

L-BFGS is based on a recursion for the inverse of the Bν ’s,

$$B_{\nu+1}^{-1}=\left(I-\frac{s_\nu y_\nu^T}{y_\nu^T s_\nu}\right)B_\nu^{-1}\left(I-\frac{y_\nu s_\nu^T}{y_\nu^T s_\nu}\right)+\frac{s_\nu s_\nu^T}{y_\nu^T s_\nu},\qquad \nu=0,1,\dots. \tag{2.20}$$

Given a vector $v \in \mathbb{R}^n$, computation of $B_\nu^{-1}v$ requires a sequence of inner products involving $v$ and the $s_\nu$’s and $y_\nu$’s, together with the application of $B_0^{-1}$. If $B_0$ is symmetric positive definite (SPD) and the “curvature condition” $y_\nu^T s_\nu > 0$ holds for each $\nu$, then each of the $B_\nu$’s is also SPD, thereby guaranteeing that $-B_\nu^{-1}\operatorname{grad}J(\phi_\nu)$ is a descent direction. The curvature condition can be maintained by implementing the line search correctly [12].

“Limited memory” means that at most $N$ vector pairs $\{(s_\nu,y_\nu),\dots,(s_{\nu-N+1},y_{\nu-N+1})\}$ are stored and at most $N$ steps of the recursion are taken, i.e., if $\nu \ge N$, apply the recursion (2.20) for $\nu, \nu-1, \dots, \nu-N$, and set $B_{\nu-N}^{-1}$ equal to an SPD matrix $M_\nu^{-1}$. We will refer to $M_\nu$ as the preconditioning matrix. In standard implementations, $M_\nu$ is taken to be a multiple of the identity [12]. For our application we choose an $M_\nu$ which has the operator representation, on an operand $\psi$,

$$M_\nu\,\psi=\mathcal{F}^{-1}\!\left(\Phi^{-1}\,\mathcal{F}(\psi)\right), \tag{2.21}$$

where Φ is defined in (2.11).
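The limited-memory recursion can be written as the familiar two-loop algorithm. The sketch below returns the quasi-Newton step $-B_\nu^{-1}g_\nu$ given the stored pairs and an arbitrary SPD operator standing in for $M_\nu^{-1}$; it is a generic textbook implementation [12], not code from the paper.

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list, apply_Minv):
    """Two-loop recursion: returns the L-BFGS step -B^{-1} g.

    s_list, y_list : stored pairs (oldest first), Eqs. (2.18)-(2.19)
    apply_Minv     : SPD operator playing the role of M^{-1}; the paper's
                     choice would apply Eq. (2.21) in the Fourier domain
    """
    q = np.array(g, dtype=float)
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
        rho = 1.0 / np.dot(y, s)                           # requires y.s > 0
        a = rho * np.dot(s, q)
        alphas.append(a)
        q -= a * y
    r = apply_Minv(q)
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / np.dot(y, s)
        b = rho * np.dot(y, r)
        r += (a - b) * s
    return -r

g = np.array([1.0, 2.0, 3.0])
d0 = lbfgs_direction(g, [], [], lambda v: v.copy())   # empty memory: steepest descent
s0 = np.array([1.0, 0.0, 0.0]); y0 = np.array([2.0, 0.0, 0.0])
d1 = lbfgs_direction(g, [s0], [y0], lambda v: v.copy())
# d1 is a descent direction: dot(d1, g) < 0
```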

Under mild assumptions, the local convergence rate for the BFGS method is superlinear [12]. In the limited memory case, this rate can become linear.

Before continuing, we make the observation that the above computational approach provides estimates of both the phase ϕ and object f (via (2.14)). In our application of removing spatially-varying blur, the main interest is in using this approach for obtaining the PSF as a function of the phase, as given in (2.4). The object f is then reconstructed in a second stage, which we now discuss.

3. Sectioning the image and applying a global restoration scheme

For many spatially variant blurs, in small regions of the image the blur can be well approximated by a spatially invariant PSF. This property has motivated several types of sectioning methods [2, 13, 14, 15] that partition the image, restoring each local region using its corresponding spatially invariant PSF. The results are then sewn together to obtain the restored image. To reduce blocking artifacts at the region boundaries, larger, overlapping regions are used, and then the restored sections are extracted from their centers. Trussell and Hunt [2] proposed using the Landweber iteration for the local deblurring, and suggested a complicated stopping criterion based on a combination of local and global convergence constraints. Fish, Grochmalicki and Pike [14] use a truncated singular value decomposition (TSVD) to obtain the local restorations.

An alternative scheme can be derived by partitioning the image into subregions on which the blur is assumed to be spatially invariant; but, rather than deblurring the individual subregions locally and then sewing the individual results together, we first sew (interpolate) the individual PSFs, and restore the image globally. A global reconstruction avoids the problem of boundary artifacts that can occur in traditional sectioning methods [2, 13, 14, 15]. In algebraic terms, the blurring matrix S, given in (1.1), can be written as

$$S=\sum_{i=1}^{p}\sum_{j=1}^{p} D_{ij}\,S_{ij}, \tag{3.21}$$

where $S_{ij}$ is a matrix representing the spatially invariant PSF in region (i, j), and $D_{ij}$ is a nonnegative diagonal matrix satisfying $\sum_{i}\sum_{j} D_{ij} = I$. For example, if piecewise constant interpolation is used, then the $\ell$th diagonal entry of $D_{ij}$ is 1 if the $\ell$th point is in region (i, j), and 0 otherwise. In principle, any partitioning scheme can be used. The most obvious approaches are to partition the image into rectangular subregions, or to use concentric annular regions in the case that the PSFs vary radially from the on-axis PSF.
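With a rectangular partition and piecewise constant interpolation, applying $S$ amounts to convolving the whole image with each regional PSF and then applying the diagonal weights $D_{ij}$ (the 0/1 indicators just described). A sketch with illustrative sizes; note that here the masking is applied after the convolution, in contrast with model (1.3):

```python
import numpy as np

def spatially_variant_blur(f, psfs, masks):
    """S f = sum_ij D_ij (s_ij * f): global convolutions, then masking.

    psfs  : list of regional PSFs s_ij (same shape as f)
    masks : list of 0/1 diagonal weights D_ij, summing to 1 pointwise
    """
    out = np.zeros_like(f, dtype=float)
    for s, m in zip(psfs, masks):
        out += m * np.real(np.fft.ifft2(np.fft.fft2(s) * np.fft.fft2(f)))
    return out

# Sanity check on a 2 x 2 partition: four identity PSFs reproduce the image.
rng = np.random.default_rng(1)
f = rng.random((8, 8))
delta = np.zeros((8, 8)); delta[0, 0] = 1.0
masks = []
for i in range(2):
    for j in range(2):
        m = np.zeros((8, 8)); m[4*i:4*i+4, 4*j:4*j+4] = 1.0
        masks.append(m)
out = spatially_variant_blur(f, [delta] * 4, masks)   # out == f
```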

Faisal et al. [5] use this formulation of the spatially variant PSF, apply the Richardson-Lucy algorithm with piecewise constant interpolation of the PSFs, and discuss a parallel implementation. Boden et al. [6] also describe a parallel implementation of the Richardson-Lucy algorithm, and consider piecewise constant as well as piecewise linear interpolation. Nagy and O’Leary [7] use a conjugate gradient algorithm with piecewise constant and linear interpolation, and also suggest a preconditioning scheme (for both interpolation methods) that can substantially improve the rate of convergence. Furthermore, it is shown in [16] that by generalizing overlap-add and overlap-save convolution techniques, very efficient FFT-based algorithms can be obtained when the image is partitioned into rectangular subregions. Although other partitioning schemes can be used, the computational cost will increase. If the PSFs vary smoothly, then a rectangular partitioning should provide a good approximation of the spatially variant blur. For these reasons, we assume throughout the paper that a rectangular partitioning of the image is used.

Note that, using (3.21), the image formation model can be written as:

$$d=\sum_{i=1}^{p}\sum_{j=1}^{p} D_{ij}\,(s_{ij}*f)+\eta, \tag{3.22}$$

where $s_{ij}$ is the PSF for region (i, j). This model assumes the PSFs are known, so the question is how to use it for blind deconvolution. The algorithm proceeds as follows:

  1. First determine a region (i, j) for which a good estimate of the PSF can be obtained. Use the L-BFGS method discussed in Section 2.2 on an extended region of (i, j) to obtain a reconstruction of the phase ϕ in that region. From ϕ, an approximation of the spatially invariant PSF for that region is obtained via (2.4). A reconstruction of the image contained in that region, if desired, can be computed by applying the inverse Fourier transform to F given in (2.14).
  2. Now, assuming that the overall space varying PSF varies slowly across the image, the PSFs for the regions neighboring region (i, j) should be similar to the one in region (i, j). That is, the PSF $s_{ij}$ should be a good estimate of the PSFs $s_{i,j+1}$, $s_{i,j-1}$, $s_{i+1,j}$ and $s_{i-1,j}$. Thus, one can use $\phi_{ij}$ as the initial guess $\phi_0$ in the L-BFGS iterations for a neighboring region, obtaining a reconstruction of its phase (and of its image, if desired). Once again, a reconstruction of the PSF is obtained via (2.4).
  3. Execute steps 1 and 2 for each region. When done, one hopefully has a set of good PSFs, and restored regions of the image. A global image restoration algorithm, with the individual (good) PSFs, can be used to restore the blurred image (as done by Nagy and O’Leary [7]). Note that the restored images in the regions can be pieced together, and used as an initial guess for the global restoration scheme.
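The sweep in steps 1–2 visits regions so that each new region inherits its initial phase guess from an already-processed neighbor. A minimal sketch of that ordering, as a breadth-first traversal from the seed region (the actual phase-diversity solve is abstracted away):

```python
from collections import deque

def sweep_order(p, seed):
    """Order the p x p grid of regions so that each region after the seed
    has a previously visited neighbor supplying its warm-start phase.

    Returns the visit order and each region's warm-start parent.
    """
    order, parent = [], {seed: None}
    queue = deque([seed])
    while queue:
        i, j = queue.popleft()
        order.append((i, j))
        for ni, nj in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if 0 <= ni < p and 0 <= nj < p and (ni, nj) not in parent:
                parent[(ni, nj)] = (i, j)   # initialize phi_0 from this neighbor
                queue.append((ni, nj))
    return order, parent

order, parent = sweep_order(3, (1, 1))   # seed = the region chosen in step 1
```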

4. Numerical examples

In this section we describe some numerical experiments to illustrate the potential advantages of the space variant approach discussed in this paper. All tests were done on a simulated star field image, shown in Fig. 1, which was obtained from the Space Telescope Science Institute (www.stsci.edu).

Fig. 1. Simulated image of a star field.

To simulate an image blurred by a spatially variant point spread function, we begin by generating 1024 different PSFs. The PSFs were created by generating a single pupil function and a moving phase screen, each on a 32 × 32 grid. Figure 2 shows the pupil function, two selected phase screens, and their corresponding PSFs.

The blurred image, shown in Fig. 3, was then created by convolving each of the 1024 PSFs with successive 32 × 32 pixel regions of the true image. By overlapping these regions, we obtain successive 8 × 8 pixel regions of the blurred image with virtually no blocking (boundary) artifacts; see Fig. 3.

4.1. Advantages of space variant model

To illustrate the potential advantages of using a space variant model, we construct certain average PSFs as follows:

  • By averaging all of the 1024 true PSFs, we obtain a single PSF which represents a spatially invariant approximation of the spatially variant blurring operator.
  • Next we section the image into 4 equally sized regions (that is, we use a 2 × 2 partitioning of the image). Note that because of the way we constructed the blurred image, there are 1024/4 = 256 true PSFs corresponding to each region. We obtain a single approximate PSF for each region by averaging the 256 true PSFs in their respective regions. We then use these 4 average PSFs to construct an approximation of the space variant blur, as described in Section 3.
    Fig. 2. Pupil function, two sample phases and their corresponding PSFs.

    Fig. 3. Simulated space variant blur of star field image; a linear scale is used to display the image on the left, and a log scale is used to display the image on the right.

  • Using an analogous approach on a 4 × 4 partitioning of the image, we construct 16 PSFs to approximate the space variant blurring operator, each obtained by averaging the 1024/16 = 64 true PSFs corresponding to each region.
  • Finally, we use a 16 × 16 partitioning of the image. In this case 1024/256 = 4 of the true PSFs are averaged to obtain a single PSF for each region.
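The averaging in the items above can be expressed compactly. Assuming the 1024 true PSFs are stored as a 32 × 32 grid (one per block of the image), the following sketch produces the p × p grid of average PSFs; the 3 × 3 PSF size in the toy run is illustrative only.

```python
import numpy as np

def average_psfs(psf_grid, p):
    """Average a g x g grid of PSFs down to a p x p grid of regional PSFs.

    psf_grid : array of shape (g, g, h, w), one h x w PSF per block
    p        : partition size (must divide g)
    """
    g = psf_grid.shape[0]
    step = g // p
    return np.array([[psf_grid[i*step:(i+1)*step, j*step:(j+1)*step].mean(axis=(0, 1))
                      for j in range(p)] for i in range(p)])

rng = np.random.default_rng(2)
grid = rng.random((32, 32, 3, 3))   # 1024 toy "PSFs" (3 x 3 for illustration)
avg = average_psfs(grid, 2)         # 2 x 2 partition: each PSF averages 256 blocks
```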

The conjugate gradient (CG) method was used to reconstruct the image with the various approximations of the PSF as described above. Efficient implementation of the matrix vector multiplications for the space variant cases (4 PSFs, 16 PSFs, and 64 PSFs) was done using the approach outlined in Section 3. Goodness of fit is measured using the relative error,

$$\frac{\|f_{\text{true}}-f_k\|_2}{\|f_{\text{true}}\|_2},$$

where $f_{\text{true}}$ is the (diffraction limited) true image, and $f_k$ is the (diffraction limited) solution at the $k$th CG iteration.

A plot of the relative errors for the special case of noise free data is shown in Fig. 4. Note that when more PSFs are used, a better approximation of the space variant blur is obtained. This is confirmed by the results in Fig. 4. The computed reconstructions at iterations corresponding to the minimum relative errors for each case are shown in Fig. 5.

Similar results occur if noise is added to the blurred image; however, the ill-conditioning of the problem means that the reconstructions will be more contaminated with noise. For example, with Poisson and Gaussian noise (mean 0, standard deviation 2) added to the blurred image, the relative errors for CG are shown in Fig. 6 and the corresponding reconstructions are shown in Fig. 7.

Fig. 4. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. These results were computed using a noise free blurred image.

4.2. Practical results

The results from the previous section clearly show that if we are able to get accurate approximations of the PSFs, then the space variant model will provide better reconstructions than by using a space invariant model. In this section we show results when the PSFs are reconstructed using a blind deconvolution algorithm. In particular, we use phase diversity, with one frame and two channels. The blind deconvolution (BD) algorithm we use is described in Section 2.2. Our regularization parameters were taken to be $\alpha = 1\times10^{-1}$ and $\gamma = 1\times10^{-4}$. We remark that the problem of choosing appropriate regularization parameters is one of the most difficult issues in the numerical solution of ill-posed problems. There are techniques for estimating parameters for simple noise models; see, for example [17, 18, 19]. In many cases good heuristics based on computational experience with the algorithm and the particular application can be useful. The parameters we use are similar to those reported in the literature; see [11].

Fig. 5. Reconstructions computed by the conjugate gradient algorithm, using accurate approximations of the PSF, and with a noise free blurred image. For comparison, we also show the true image and the true image convolved with the diffraction-limited PSF.

Fig. 6. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. These results were computed using a noisy (Poisson and Gaussian) blurred image.

Fig. 7. Reconstructions computed by the conjugate gradient algorithm, using accurate approximations of the PSF, with Poisson and Gaussian noise added to the blurred image. For comparison, we also show the true image and the true image convolved with the diffraction-limited PSF.

In the phase estimation step, regularization is applied before computing a minimum of the least squares functional. Thus it is reasonable to seek estimates of the phase and object that yield a gradient norm near zero. We therefore stopped the L-BFGS iterations, in all cases, once the norm of the gradient had been reduced by six orders of magnitude. Note that no noise amplification occurs provided the regularization parameters are properly chosen, and, as previously mentioned, the choice of these parameters depends on the data.

Once the PSFs are computed from the BD algorithm, we then use the CG algorithm to reconstruct the image, as was done in the previous subsection. Figure 9 shows plots of the relative errors when no noise is added to the data. At first glance these appear to be disappointing results: 4 PSFs produce lower relative errors than a single PSF, but additional PSFs actually increase the relative error. It is important to note, however, that to obtain more PSFs, the blind deconvolution algorithm must be implemented on small regions of the image. If the small regions do not contain significant object information, then we cannot hope to obtain good approximations of the PSFs. Consequently, given that the object we are using in our experiment has a significant black background, it is not surprising that several such small regions occur. This explains the poor results when using 64 PSFs.

On the other hand, some of the small regions may contain enough significant object information so that the blind deconvolution algorithm can reconstruct good PSFs. This motivates us to consider an approach where we refine the partitioning in each subregion only if further refinement produces good reconstructions of the PSFs. For example:

  • Suppose we have the partitioning shown on the left in Fig. 8, and that we have obtained reconstructed PSFs on each of the four regions.
  • Refine the partitioning of the image as shown in the middle of Fig. 8, and reconstruct PSFs for each of the (now smaller) subregions.
  • Determine whether each of these reconstructed PSFs is sufficiently accurate.
    - If a reconstructed PSF is not sufficiently accurate, then reject it and use a PSF from the previous step for this subregion. In this case, further refinement and reconstruction of PSFs for these regions is not needed.
    - If a reconstructed PSF is sufficiently accurate, then accept this PSF.
  • Repartition all of the subregions corresponding to accepted PSFs, and repeat the above process. The right plot in Fig. 8 shows a possible repartitioning of selected subregions.

Thus, this adaptive approach uses a “mix” of PSFs computed by the various partitionings of the image.
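The accept/reject refinement above can be organized as a recursion over subregions. In this sketch, `reconstruct` and `acceptable` are hypothetical callbacks standing in for the blind deconvolution solve and the (application-dependent) quality test; rejected subregions fall back to the coarser parent PSF, as in the bullets above.

```python
def adaptive_psfs(reconstruct, acceptable, region, depth, fallback=None):
    """Adaptive refinement of region partitions (quadtree-style sketch).

    reconstruct(region) -> PSF estimate on that region (hypothetical callback)
    acceptable(psf)     -> True if the estimate is good enough
    region = (row, col, size); returns a dict mapping regions to PSFs.
    """
    psf = reconstruct(region)
    if not acceptable(psf):
        # reject: keep the coarser (parent) estimate and stop refining here
        return {region: fallback if fallback is not None else psf}
    if depth == 0:
        return {region: psf}
    r, c, n = region
    h = n // 2
    out = {}
    for dr in (0, h):
        for dc in (0, h):
            out.update(adaptive_psfs(reconstruct, acceptable,
                                     (r + dr, c + dc, h), depth - 1, fallback=psf))
    return out

# Toy run: the "PSF" is just the region size; estimates on regions smaller
# than 4 pixels are deemed unacceptable, so those regions inherit size-4 PSFs.
result = adaptive_psfs(lambda reg: reg[2], lambda psf: psf >= 4, (0, 0, 8), 2)
```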

Fig. 8. Example of adaptive refinement of region partitions. The left plot shows an initial partitioning of an image, the middle shows a refinement of the partitioning, and the right plot shows what might happen in an adaptive approach where only certain subregions are refined further.

To automate the process of determining when a PSF is reconstructed sufficiently accurately, an appropriate measure must be used. Such a measure will be dependent on the application, and the type of images being reconstructed. In order to test the potential of this adaptive scheme, we used a systematic approach for choosing a good PSF by comparing the computed PSFs with the average PSFs described in the previous subsection. Of course in a realistic problem the average PSFs will not be available, and some other mechanism must be used to decide on the quality of the computed PSFs. In any case, our systematic approach shows that if the best computed PSFs can be found, then substantially better reconstructions can be obtained; see the bottom curve in Fig. 9. The reconstructed images are shown in Fig. 10.

Similar results are obtained with noisy data. For example, with Poisson and Gaussian noise (mean 0, standard deviation 2) added to the blurred image, the relative errors using the CG iterative method are shown in Fig. 11. The corresponding reconstructions are shown in Fig. 12, where we also include arrows to indicate regions in which the space variant approach produces significantly better reconstructions (compared to the diffraction limited true image) than the space invariant (1 PSF) model.

Fig. 9. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur. The PSFs used to approximate the space variant blur were computed using phase diversity-based blind deconvolution. These results were computed using noise free blurred image data.

5. Conclusions

We have presented a computational approach for solving image restoration problems in which the blur in the image is both unknown and spatially varying. The approach has three stages. The first stage involves sectioning the image into regions in which the blur is believed to be approximately spatially invariant. In the second stage, phase diversity-based blind deconvolution via the L-BFGS optimization algorithm is implemented in order to obtain an estimate of the phase in each region. From these reconstructed phases, the corresponding PSF in each region can be computed. In the final stage, with these PSFs in hand, the object can be reconstructed globally via the algorithm of Nagy and O’Leary.

Our numerical experiments show, first, that using a spatially varying model when a spatially varying blur is present does indeed provide more accurate results. Secondly, we find that in regions with little object information, the phase, and hence PSF, reconstructions can be inaccurate. This motivates a “PSF mixing” scheme, in which the object is divided into further subregions only in areas with enough object information. A conclusion of particular importance that follows from our numerical experiments is that in the presence of an unknown, spatially varying blur, our approach is much more effective than standard one-PSF phase diversity.

We remark that, in principle, our space variant approach should be applicable with other blind deconvolution methods, but as evidenced from our numerical results, the quality of the reconstructed PSFs is important. Thus we would expect that additional diversity information should improve the results. Similarly, we would expect a multi-frame scheme, coupled with our space variant technique, to outperform its single frame counterpart.

Fig. 10. Reconstructions computed by the conjugate gradient algorithm, using PSFs computed from a phase diversity blind deconvolution algorithm. The blurred image data in this case is noise free.

Fig. 11. Relative errors at each iteration of the conjugate gradient method, using increasingly more accurate approximations of the spatially variant blur, and with a noisy (Poisson and Gaussian) blurred image.

Fig. 12. Reconstructions computed by the conjugate gradient algorithm, using PSFs computed from a phase diversity-based blind deconvolution algorithm, and with a noisy (Poisson and Gaussian) blurred image.

The problems of optimally sectioning the image and of efficient implementation are worth further exploration. One might base the partitioning on a priori knowledge of how the PSF varies; for example, one could partition into concentric annular regions for a radially varying PSF. Another approach is to base the partitioning on the amount of information (e.g., signal-to-noise ratio) in various regions. In this case, one could use an approach where regions are repartitioned into smaller and smaller subregions until, for example, the mean intensity of a subregion falls below a predetermined threshold (which might be obtained through extensive simulations).

References and links

1. J. Biretta, “WFPC and WFPC 2 instrumental characteristics,” in The Restoration of HST Images and Spectra II, R. J. Hanisch and R. L. White, eds., pp. 224–235 (Space Telescope Science Institute, Baltimore, MD, 1994).

2. H. J. Trussell and S. Fogel, “Identification and restoration of spatially variant motion blurs in sequential images,” IEEE Trans. Image Proc. 1, 123–126 (1992).

3. R. G. Paxman, B. J. Thelen, and J. H. Seldin, “Phase-diversity correction of turbulence-induced space-variant blur,” Opt. Lett. 19(16), 1231–1233 (1994).

4. M. C. Roggemann and B. Welsh, Imaging Through Turbulence (CRC Press, Boca Raton, FL, 1996).

5. M. Faisal, A. D. Lanterman, D. L. Snyder, and R. L. White, “Implementation of a modified Richardson-Lucy method for image restoration on a massively parallel computer to compensate for space-variant point spread function of a charge-coupled device camera,” J. Opt. Soc. Am. A 12, 2593–2603 (1995).

6. A. F. Boden, D. C. Redding, R. J. Hanisch, and J. Mo, “Massively parallel spatially-variant maximum likelihood restoration of Hubble Space Telescope imagery,” J. Opt. Soc. Am. A 13, 1537–1545 (1996).

7. J. G. Nagy and D. P. O’Leary, “Restoring images degraded by spatially-variant blur,” SIAM J. Sci. Comput. 19, 1063–1082 (1998).

8. R. A. Gonsalves, “Phase diversity in adaptive optics,” Opt. Eng. 21, 829–832 (1982).

9. C. R. Vogel, T. Chan, and R. J. Plemmons, “Fast algorithms for phase diversity-based blind deconvolution,” in Adaptive Optical System Technologies, vol. 3353 (SPIE, 1998).

10. R. G. Paxman, T. Schulz, and J. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A 9, 1072–1085 (1992).

11. L. Gilles, C. R. Vogel, and J. M. Bardsley, “Computational methods for a large-scale inverse problem arising in atmospheric optics,” Inverse Probl. 18, 237–252 (2002).

12. J. Nocedal and S. J. Wright, Numerical Optimization (Springer-Verlag, New York, 1999).

13. H.-M. Adorf, “Towards HST restoration with space-variant PSF, cosmic rays and other missing data,” in The Restoration of HST Images and Spectra II, R. J. Hanisch and R. L. White, eds., pp. 72–78 (1994).

14. D. A. Fish, J. Grochmalicki, and E. R. Pike, “Scanning singular-value-decomposition method for restoration of images with space-variant blur,” J. Opt. Soc. Am. A 13, 1–6 (1996).

15. H. J. Trussell and B. R. Hunt, “Image restoration of space-variant blurs by sectional methods,” IEEE Trans. Acoust. Speech, Signal Processing 26, 608–609 (1978).

16. J. G. Nagy and D. P. O’Leary, “Fast iterative image restoration with a spatially varying PSF,” in Advanced Signal Processing Algorithms, Architectures, and Implementations VII, F. T. Luk, ed., vol. 3162, pp. 388–399 (SPIE, 1997).

17. H. W. Engl, M. Hanke, and A. Neubauer, Regularization of Inverse Problems (Kluwer Academic Publishers, Dordrecht, 2000).

18. P. C. Hansen, Rank-Deficient and Discrete Ill-Posed Problems (SIAM, Philadelphia, PA, 1997).

19. C. R. Vogel, Computational Methods for Inverse Problems (SIAM, Philadelphia, PA, 2002).




