Optica Publishing Group

Fitting freeform shapes with orthogonal bases

Open Access

Abstract

Orthogonality is exploited for fitting analytically-specified freeform shapes in terms of orthogonal polynomials. The end result is expressed in terms of FFTs coupled to a simple explicit form of Gaussian quadrature. Its efficiency opens the possibility of proceeding to arbitrarily many polynomial terms. This is shown to create promising options for quantifying and filtering the mid-spatial frequency structure within circular domains from measurements of as-built parts.

©2013 Optical Society of America

1. Introduction

Freeform optics play established roles for progressive ophthalmic lenses, in conformal and non-imaging applications, and across a diversity of display and projection systems. Several conventions have been developed and implemented for describing their surface shapes in commercial optical design codes. Perhaps the most standard option is a simple polynomial added to a conic section, but a range of others exists, see, e.g., [1–4]. While diamond turning machines can deliver parts with the desired figure and finish in one step for some applications (e.g. infrared and illumination), higher precision contexts generally require grind-and-polish processes. For this latter category of molds and parts, a specially tailored gradient-orthogonal basis was introduced to characterize shape [5]. While this basis was constructed to be robust and efficient and to facilitate the assessment of manufacturability, it was recently shown to deliver another benefit. In particular, the associated coefficients were found to be especially effective degrees of freedom during design [6]. Although they span the same space of surface shapes as other common representations, solutions with superior optical performance were discovered when optimizing in terms of this new description. Together with its efficiency, the gradient-orthogonal basis therefore offers significant promise for applications beyond those driven by consideration of “grind-and-polish manufacturability”.

Effective means for interconverting alternative descriptions of shape are valuable because no single convention can meet all needs. One of the advantages of an orthogonal basis is that fitting any particular shape in such a basis can often be performed more efficiently by exploiting orthogonality. Since the values of the associated fit coefficients are independent of the particular subset of the basis elements that is in use and follow directly from simple inner products, the resulting fitting process is sometimes referred to as an “orthogonal projection” onto the basis. A projection of this sort is reported here for the gradient-orthogonal basis of [5]. The process is constructed to avoid the need for determining derivatives and is shown to open the possibility of new applications of these ideas for the analysis of mid-spatial frequency structure (MSF) on manufactured parts.

Modern sub-aperture tools lack the automatic smoothing of traditional spherical fabrication processes. This has given unprecedented importance to both quantifying and tolerancing MSF, but these steps continue to be problematic. Although it may seem intuitive to use Fourier methods to determine a power spectral density (PSD), the lack of data beyond the part’s aperture causes the well-known phenomenon of “spectral leakage”. Various window functions have been introduced to control this corruption, but these inherently mask edge effects on the part that are oftentimes of critical interest. “Detrending” is another standard step for reducing the impact of spectral leakage. This involves subtracting a low-order polynomial from the data to help to suppress the artificial discontinuity at the aperture’s edge before applying a window and Fourier transform. The arbitrary aspects of these steps mean that interpretations of the end results are troubled with questions. It is the finite aperture that generates these complications so it is natural to seek a Fourier-like decomposition that is matched to the aperture at its foundation. The robustness and efficiency of the general sorts of processes described below create such an option for fitting manufactured part shapes entirely with polynomials. The coefficients in the resulting fit are shown to provide a new spectrum that enables familiar types of spatial frequency filtering operations and a promising quantification of MSF.

In keeping with [5], the sag of the freeform surfaces considered in this work is expressed in terms of cylindrical polar coordinates as $z=f(\rho,\theta)$ where, with $u=\rho/\rho_{\max}$,

$$ f(\rho,\theta)=\frac{c\rho^2}{1+\sqrt{1-c^2\rho^2}}+\frac{1}{\sqrt{1-c^2\rho^2}}\left\{u^2(1-u^2)\sum_{n=0}a_n^0\,Q_n^0(u^2)+\sum_{m=1}u^m\sum_{n=0}\left[a_n^m\cos m\theta+b_n^m\sin m\theta\right]Q_n^m(u^2)\right\}. \tag{1.1} $$
Here, $c$ is the curvature of the best-fit sphere, $\rho_{\max}$ is the semi-diameter of the cylinder that tightly encloses the surface, and $Q_n^m(x)$ is a polynomial of order $n$. Notice that only the lower limits on the indices of summation are specified in Eq. (1.1); their upper limits can be chosen in a number of ways. As described in [6], all terms of up to order $T$ are included if these sums span $t\le T$ where the Cartesian order $t$ is given by
$$ t=\begin{cases}2n+4, & m=0,\\ 2n+m, & m>0.\end{cases} \tag{1.2} $$
The expression in Eq. (1.1) is constructed so that the displacement along the normal to the best-fit sphere from that sphere over to the freeform surface is well approximated by the component within the braces. That component is referred to simply as the normal departure.

When fitting the parameters in Eq. (1.1) to some other representation of sag, say $z=S(\rho,\theta)$ where the function $S$ is regarded as a black box, the objective is to determine values for $a_n^m$, $b_n^m$, and $c$ to match $f(\rho,\theta)$ to $S(\rho,\theta)$. The first step is to ensure the best-fit sphere has the required sag values at $\rho=0$ and at $\rho=\rho_{\max}$. Because $f(0,\theta)$ evaluates to zero, the alternative sag description is given a constant bias of $S_0:=S(0,\theta)$, where “$:=$” means “is defined to be”. This offset in $z$ simply translates the surface without changing its shape. If averaging over angle is denoted by

$$ \langle g(\theta)\rangle_\theta:=\frac{1}{2\pi}\int_0^{2\pi}g(\theta)\,\mathrm{d}\theta, \tag{1.3} $$
then, because the component within braces in Eq. (1.1) averages to zero at $\rho=\rho_{\max}$ (i.e. at $u=1$), $c$ is chosen to satisfy
$$ \langle S(\rho_{\max},\theta)-S_0\rangle_\theta=\frac{c\rho_{\max}^2}{1+\sqrt{1-c^2\rho_{\max}^2}}. \tag{1.4} $$
Solving Eq. (1.4) for c yields the first of the fit parameters:
$$ c=\frac{2\,\langle S(\rho_{\max},\theta)-S_0\rangle_\theta}{\langle S(\rho_{\max},\theta)-S_0\rangle_\theta^2+\rho_{\max}^2}. \tag{1.5} $$
To complete matching the expression in Eq. (1.1) to $S(\rho,\theta)-S_0$, it is convenient to introduce
$$ \bar S(\rho/\rho_{\max},\theta):=\sqrt{1-c^2\rho^2}\left[S(\rho,\theta)-S_0-\frac{c\rho^2}{1+\sqrt{1-c^2\rho^2}}\right]. \tag{1.6} $$
The fitting process is thereby reduced to performing a linear least-squares fit for $a_n^m$ and $b_n^m$ to match these normal departures:

$$ \bar S(u,\theta)\approx u^2(1-u^2)\sum_{n=0}a_n^0\,Q_n^0(u^2)+\sum_{m=1}u^m\sum_{n=0}\left[a_n^m\cos m\theta+b_n^m\sin m\theta\right]Q_n^m(u^2). \tag{1.7} $$
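As a concrete illustration of Eqs. (1.4) and (1.5), the best-fit curvature can be recovered from sampled sag values alone. The following sketch in Python/NumPy (the function name and the sampling density are illustrative choices, not from the paper) checks that Eq. (1.5) exactly inverts the sag of a pure sphere:

```python
import numpy as np

def best_fit_curvature(S, rho_max, J=64):
    """Eq. (1.5): c = 2<s> / (<s>^2 + rho_max^2), where <s> is the rim sag
    averaged over angle after removing the bias S0 = S(0, theta)."""
    theta = (np.arange(1, J + 1) - 0.5) * 2 * np.pi / J   # uniform angular samples
    S0 = S(0.0, 0.0)                                      # bias so that f(0, theta) = 0
    s_mean = np.mean(S(rho_max, theta) - S0)              # <S(rho_max, theta) - S0>_theta
    return 2 * s_mean / (s_mean**2 + rho_max**2)

# Check on a pure sphere: S(rho) = c rho^2 / (1 + sqrt(1 - c^2 rho^2))
c_true, rho_max = 1.0 / 50.0, 10.0
sphere = lambda rho, theta: c_true * rho**2 / (1 + np.sqrt(1 - (c_true * rho)**2))
c_fit = best_fit_curvature(sphere, rho_max)
```

For a sphere the recovery is algebraically exact, so the residual normal departure of Eq. (1.6) would vanish identically.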

Fitting to a linear combination of, say E, basis elements generally requires the solution of a set of linear equations involving an E×E Gram matrix [7]. For an orthogonal basis, however, this process simplifies significantly. To exploit orthogonality for Eq. (1.7), the fit must be performed over the entire enclosing disc (i.e. 0<u<1 and 0<θ<2π) even if the surface’s clear aperture is only a subset of that disc. The simplification starts, for this case, by splitting the process into two phases. In particular, rather than solving for all of the fitting coefficients in a single step, it is significantly more efficient to break the operation into separate stages of azimuthal and radial conversions. First, for any specific value of u, it is straightforward to determine Am(u) and Bm(u) so that

$$ \bar S(u,\theta)\approx\sum_{m=0}\left[A_m(u)\cos m\theta+B_m(u)\sin m\theta\right]. \tag{1.8} $$
This is simply a Fourier series in each annular zone and, without loss of generality, $B_0(u)$ can be taken to be zero. It follows from Eqs. (1.7) and (1.8) that the fit can then be completed by choosing $a_n^m$ and $b_n^m$ to satisfy the relations

$$ A_0(u)\approx u^2(1-u^2)\sum_{n=0}a_n^0\,Q_n^0(u^2), \tag{1.9} $$
$$ A_m(u)\approx u^m\sum_{n=0}a_n^m\,Q_n^m(u^2),\qquad B_m(u)\approx u^m\sum_{n=0}b_n^m\,Q_n^m(u^2),\qquad\text{when }m>0. \tag{1.10a,b} $$

The Fourier series methods used for Eq. (1.8) are summarized in Sec. 2. That brief review includes a discussion of the impact of aliasing and, importantly, it gives an idea of an analogous issue in working with Eqs. (1.9) and (1.10) to complete the projection. A powerful method for solving Eq. (1.9) for $a_n^0$ has already been developed [8] and its end result is summarized below in Sec. 3. As discussed in Sec. 2, standard Fourier methods involve a particularly simple variant of Gaussian quadrature. Similarly, as pointed out in Sec. 4, the optimal solutions for Eqs. (1.9) and (1.10) involve what is known as Chebyshev-Gauss quadrature [9]. Demonstrations are presented in Secs. 5 and 6 before the concluding remarks of Sec. 7. Readers who seek further inspiration to take on the mathematical details may wish to jump first to the figures of Sec. 6.

2. Fourier methods for the azimuthal fit

Although the Fourier series is elementary, some of its aspects offer helpful insights for the sections that follow. For clearer access to the points of interest, I initially express the results in terms of complex exponentials; they are later recast in terms of explicitly real-valued functions. The Fourier series for a function of period 2π can be written as

$$ L(\theta)=\sum_{k=-\infty}^{\infty}\ell_k\exp(ik\theta). \tag{2.1} $$
In the notation of Eq. (1.3), these basis elements are readily seen to be orthogonal:
$$ \langle\exp(ij\theta)\exp(-ik\theta)\rangle_\theta=\frac{1}{2\pi}\int_\alpha^{\alpha+2\pi}\exp[i(j-k)\theta]\,\mathrm{d}\theta=\delta_{jk}. \tag{2.2} $$
Here, $j$ and $k$ are integers, $\delta_{jk}$ is the Kronecker delta and, because the integral is over a full period regardless, this equality holds for any value of $\alpha$.

When the coefficients, i.e. $\ell_k$, are chosen to minimize the mean squared modulus of the difference between the left- and right-hand sides of Eq. (2.1), it follows from Eq. (2.2) that the associated Gram matrix is an identity. These coefficients are therefore given directly by

$$ \ell_k=\frac{1}{2\pi}\int_\alpha^{\alpha+2\pi}L(\theta)\exp(-ik\theta)\,\mathrm{d}\theta. \tag{2.3} $$
That is, as a result of orthogonality, the computations typically required for building up a Gram matrix and solving the associated linear equations are avoided entirely. This is an example of a projection of the type described in the Introduction where the optimal value of any fit parameter is independent of the presence of others and, in this case, is given by the integral in Eq. (2.3). Finding analogous results for a projection to $Q_n^m$ is one of the objectives of what follows, but another significant simplification is also sought.

Notice that the dominant computation for the fit of Eq. (2.1) by using Eq. (2.3) lies in numerically approximating that integral for all the required values of $k$. Valuable results emerge if a uniformly distributed and weighted sum is used for that approximation. For example, consider the case of sampling at $\theta=j\Delta$ for $j=1,2,3,\ldots,J$ where $\Delta=2\pi/J$ (with, say, $\alpha=\Delta/2$ in mind). The integral in Eq. (2.3) is then approximated by

$$ \ell_k\approx\tilde\ell_k:=\frac{1}{2\pi}\sum_{j=1}^{J}L(j\Delta)\exp[-ikj\Delta]\,\Delta=\frac{1}{J}\sum_{j=1}^{J}L(j\Delta)\exp[-ikj\Delta]. \tag{2.4} $$
The key relation between $\ell_k$ and its approximant, $\tilde\ell_k$, now follows upon substituting Eq. (2.1) into Eq. (2.4):
$$ \tilde\ell_k=\frac{1}{J}\sum_{j=1}^{J}\left\{\sum_{k'=-\infty}^{\infty}\ell_{k'}\exp[ik'j\Delta]\right\}\exp[-ikj\Delta]=\sum_{k'=-\infty}^{\infty}\ell_{k'}\left\{\frac{1}{J}\sum_{j=1}^{J}\exp[i(k'-k)j2\pi/J]\right\}. \tag{2.5} $$
The sum inside the final braces of Eq. (2.5) is just a geometric series and, provided $\exp[i(k'-k)2\pi/J]\ne 1$, it is readily evaluated in closed form and shown to vanish since $(k'-k)$ is an integer. The only non-zero contributions arise when $\exp[i(k'-k)2\pi/J]=1$. These occur whenever $(k'-k)$ is a multiple of $J$, in which case the contents of those braces trivially evaluate to $J/J=1$. This observation gives the desired relation that

$$ \tilde\ell_k=\sum_{n=-\infty}^{\infty}\ell_{k+nJ}=\ell_k+(\ell_{k+J}+\ell_{k-J})+(\ell_{k+2J}+\ell_{k-2J})+\ldots. \tag{2.6} $$

The mixing of terms in Eq. (2.6) is referred to as aliasing. For band-limited functions, i.e. where $\ell_k$ vanishes whenever $|k|>B$ for some $B$, it follows that $\tilde\ell_k$ is precisely equal to $\ell_k$ over all of the non-zero band provided $J>2B$. That is, if we sample sufficiently densely, all the terms but $n=0$ vanish in Eq. (2.6) for $|k|\le B$. This requirement that $J>2B$ is the familiar Nyquist sampling condition. In most cases of interest to us, $|\ell_k|$ decays rapidly (oftentimes exponentially) as $|k|$ grows, so the functions are effectively band limited and aliasing has negligible impact when sufficiently dense sampling is used, i.e. $J$ is sufficiently large. The second objective in what follows is to find analogous summation-based processes that reduce the computational burden of the steps involved in the final stage of projecting to $Q_n^m$, i.e. for working with Eqs. (1.9) and (1.10).

Before proceeding, it is helpful to notice that when $L(\theta)$ is real valued, it follows from Eq. (2.3) that $\ell_{-k}=\ell_k^{\,*}$. Now, if $\ell_k$ is separated into real and imaginary parts as $\ell_k=\tfrac12(a_k-ib_k)$, then $a_{-k}=a_k$ and $b_{-k}=-b_k$, hence Eqs. (2.1), (2.3) and (2.4) can be expressed in real-valued terms as

$$ L(\theta)=\tfrac12 a_0+\sum_{k=1}\left(a_k\cos k\theta+b_k\sin k\theta\right), \tag{2.7} $$
$$ a_k=2\,\langle L(\theta)\cos k\theta\rangle_\theta,\qquad b_k=2\,\langle L(\theta)\sin k\theta\rangle_\theta, \tag{2.8a,b} $$
$$ \tilde a_k:=\frac{2}{J}\sum_{j=1}^{J}L(j\Delta)\cos(kj\Delta),\qquad \tilde b_k:=\frac{2}{J}\sum_{j=1}^{J}L(j\Delta)\sin(kj\Delta). \tag{2.9a,b} $$
Interestingly, Eq. (2.9) is a variant of Gaussian quadrature where, with $J$ discrete sample locations and $J$ weights (which turn out to be uniform), a weighted sum of sample values exactly matches the integral for a set of $2J-1$ functions of interest with one additional constraint. The constraint can be taken to be that the first sample sits at the origin. It is subsequently readily seen that the samples can all be translated and that, for any value of $\alpha$,
$$ \langle f(\theta)\rangle_\theta\equiv\frac{1}{J}\sum_{j=1}^{J}f(\alpha+j\Delta), \tag{2.10} $$
where “$\equiv$” means “is identically equal to” and $f(\theta)$ is any one (or linear combination) of $\{\cos k\theta,\ k=0,1,2,\ldots,J-1\}$ and $\{\sin k\theta,\ k=1,2,3,\ldots,J-1\}$. Equations (2.7) and (2.9) are used for estimating $A_m$ and $B_m$ of Eq. (1.8) and note that, for even more speed, the sums in Eq. (2.9) can be implemented as FFTs [10].

When the terms in Eq. (2.7) beyond $k=B$ all vanish, there are evidently $2B+1$ coefficients that can be seen to be determined exactly by using Eq. (2.9) with any $J\ge 2B+1$. (This follows more simply in the complex-valued representation where trigonometric product-to-sum rules are trivial.) In other words, the number of coefficients that can be determined exactly by way of a sum then matches the number of sample values of the original function. This sets an unbeatable efficiency target for the projections sought in the following sections. Because symmetric functions appear below, i.e. $L(-\theta)\equiv L(\theta)$, notice that only $a_k$ is non-zero in such cases and can be determined by a collapsed sum over just $0\le\theta\le\pi$. Again, the number of fitted coefficients can readily be seen to then match the number of samples.
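A minimal numerical sketch of these observations (all names and test values here are illustrative, not from the paper): for a band-limited $L(\theta)$, the uniformly weighted sums of Eq. (2.9) recover the Fourier coefficients exactly once $J\ge 2B+1$.

```python
import numpy as np

# Band-limited test function: random coefficients up to harmonic B.
B = 7                                        # band limit (example value)
rng = np.random.default_rng(0)
a = rng.standard_normal(B + 1)               # cosine coefficients a_0 .. a_B
b = rng.standard_normal(B + 1); b[0] = 0.0   # sine coefficients; b_0 unused

def L(theta):
    """Evaluate Eq. (2.7) truncated at k = B."""
    k = np.arange(B + 1)[:, None]
    return 0.5 * a[0] + np.sum(a[1:, None] * np.cos(k[1:] * theta)
                               + b[1:, None] * np.sin(k[1:] * theta), axis=0)

J = 2 * B + 1                                # minimal Nyquist-satisfying sample count
delta = 2 * np.pi / J
theta_j = delta * np.arange(1, J + 1)        # theta = j*Delta, j = 1..J
samples = L(theta_j)
k = np.arange(B + 1)
a_tilde = 2.0 / J * samples @ np.cos(np.outer(theta_j, k))   # Eq. (2.9a)
b_tilde = 2.0 / J * samples @ np.sin(np.outer(theta_j, k))   # Eq. (2.9b)
```

Since aliasing only mixes harmonics separated by multiples of $J$, the recovered `a_tilde` and `b_tilde` match `a` and `b` to machine precision.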

3. Rotationally symmetric radial fit (i.e. m = 0)

To account explicitly for the role of the best-fit sphere, a pre-factor is included on the m=0 term in Eq. (1.1) so this component must be handled separately. It is shown in [8] that the fit required for Eq. (1.9) can be achieved in two remarkably simple steps. The first is to perform a projection in terms of an auxiliary set of orthogonal polynomials, which leads to a set of coefficients given simply by

$$ \beta_n=(-1)^n\,\langle\bar A_0(\cos\theta)\cos[(2n+1)\theta]\rangle_\theta, \tag{3.1} $$
where, in terms of $A_0(u)$ of Eq. (1.9), $\bar A_0(u)$ is defined by
$$ \bar A_0(u):=A_0(u)/[u(1-u^2)]. \tag{3.2} $$
These follow from Eqs. (4.4) and (B.4) of [8] (where $\beta_n$ is written there as $b_n$) and reveal that we are once again essentially computing Fourier coefficients: cf. Eqs. (2.8a) and (3.1). As a result, $\{\beta_n,\ n=0,1,2,\ldots\}$ is a spectrum that typically decays rapidly. [Keep in mind that the selection of the best-fit sphere, together with Eqs. (1.6) and (1.8), means that $A_0(u)$ has a repeated root at $u=0$ as well as roots at $u=\pm 1$, so $\bar A_0(u)$ of Eq. (3.2) is well defined at these limits and, in fact, approaches zero at $u=0$.] The second of these two steps converts $\beta_n$ to the desired fit coefficients, i.e. $a_n^0$, by using Eq. (3.2) of [8], namely
$$ a_n^0=f_n\beta_n+g_n\beta_{n+1}+h_n\beta_{n+2}, \tag{3.3} $$
for $n=0,1,2,\ldots$. Simple expressions for $f_n$, $g_n$, and $h_n$ are given in Eqs. (A.14) to (A.16) of [8], which allow these entities to be pre-computed with just a few arithmetic operations each. To suppress aliasing, it is assumed that Eq. (3.3) is taken to sufficiently large values of $n$ that $|a_n^0|$ has trended to a negligibly small value. All but the terms of interest can then be discarded.

For Eqs. (3.1) and (3.2), notice from Eq. (1.8) that

$$ A_0(u)=\langle\bar S(u,\theta)\rangle_\theta\approx\tilde A_0(u):=\frac{1}{J}\sum_{j=1}^{J}\bar S(u,j\Delta), \tag{3.4} $$
where Eq. (2.4) has been used again for this approximation. More interestingly, the function to be averaged in Eq. (3.1) is evidently symmetric in $\theta$ and has period $\pi$, so we need only sample in one quadrant:
$$ \beta_n=(-1)^n\frac{2}{\pi}\int_0^{\pi/2}\bar A_0(\cos\phi)\cos[(2n+1)\phi]\,\mathrm{d}\phi\approx\frac{(-1)^n}{K}\sum_{k=1}^{K}\bar A_0(\cos\phi_k)\cos[(2n+1)\phi_k], \tag{3.5} $$
where
$$ \phi_k:=\frac{(2k-1)\pi}{4K}. \tag{3.6} $$
For extra speed, the sum in Eq. (3.5) can be implemented with an FFT, in particular with the variant of the discrete cosine transform called a DCT-IV. [Note that this result was given in Eq. (4.5) of [8], but there is an overall factor-of-two error in that equation that resulted from the same normalization error in Eqs. (B.6) and (B.7) of that work. The normalization is correct in the computational results reported there, however.] If $A_0(u)$ can be fitted exactly by using a sum up to $n=N$ in Eq. (1.9), then it follows from the results of Sec. 2 regarding summation of Fourier terms that the exact fit coefficients are given by the sum in Eq. (3.5) provided $K>N+1$. That is, this remarkably simple scheme almost matches the efficiency described at the end of Sec. 2: it is now necessary to have just one more sample than the number of coefficients being fitted. Of course, when working with functions that are not precisely band limited, it is better to add extra samples to further suppress aliasing.

4. Radial fit for m > 0

With the estimates derived in Sec. 2 for $A_m(u)$ and $B_m(u)$ of Eqs. (1.10a,b), the final step in the projection is to determine $a_n^m$ and $b_n^m$. These equations both have the same form, which can be written generically as

$$ C(u)=u^m\sum_{n=0}^{N}c_n\,Q_n^m(u^2), \tag{4.1} $$
where $C$ and $c_n$ stand for either $A_m$ and $a_n^m$ or $B_m$ and $b_n^m$, respectively. Because $m$ is a constant throughout this section, it is now suppressed as an index where possible.

The determination of $c_n$ in Eq. (4.1) can be expressed in terms of a sequence of elementary operations that involve matrices with just one off-diagonal band. It is convenient therefore to define an upper-triangular matrix, say $U_{pq}$, by

$$ U_{pq}:=\begin{bmatrix}p_0^m & q_0^m & 0 & \cdots & & 0\\ 0 & p_1^m & q_1^m & & & \vdots\\ 0 & 0 & p_2^m & q_2^m & & \\ & & & \ddots & \ddots & \\ & & & & p_{N-1}^m & q_{N-1}^m\\ 0 & \cdots & & & 0 & p_N^m\end{bmatrix}. \tag{4.2} $$
A lower-triangular matrix is also defined here by $L_{pq}:=U_{pq}^{\mathrm T}$. Notice that the computation of $\mathbf u:=U_{pq}\mathbf v$, where $\mathbf v:=(v_0,v_1,v_2,\ldots,v_N)$, proceeds from $n=0$ to $N-1$ simply with
$$ u_n=p_n^m v_n+q_n^m v_{n+1}, \tag{4.3} $$
and ends with $u_N=p_N^m v_N$. The inverse operation, i.e. $\mathbf v:=U_{pq}^{-1}\mathbf u$, need not explicitly invert the matrix but can proceed by starting with $v_N=u_N/p_N^m$ and working down in $n$ from $N-1$ to $0$ with the recurrence relation
$$ v_n=(u_n-q_n^m v_{n+1})/p_n^m. \tag{4.4} $$
Similarly, $\mathbf u:=L_{pq}\mathbf v$ starts with $u_0=p_0^m v_0$ and works from $n=1$ to $N$ with
$$ u_n=q_{n-1}^m v_{n-1}+p_n^m v_n, \tag{4.5} $$
so $\mathbf v:=L_{pq}^{-1}\mathbf u$ starts with $v_0=u_0/p_0^m$ and works from $n=1$ to $N$ with
$$ v_n=(u_n-q_{n-1}^m v_{n-1})/p_n^m. \tag{4.6} $$

In this notation, it is shown in the Appendix that the coefficients in Eq. (4.1) can be determined by a sequence of these simple recurrence-based operations, namely

$$ \mathbf c=U_{fg}\,U_{st}^{-1}\,U_{hk}^{-1}\,L_{hk}^{-1}\,\mathbf r, \tag{4.7} $$
where the elements of $\mathbf r$ follow from Eq. (A.4):
$$ r_n=\frac{2}{\pi}\int_0^1 C(u)\,P_n^{m+1}(u^2)\,\frac{u^m}{\sqrt{1-u^2}}\,\mathrm{d}u=\frac{2}{\pi}\int_0^{\pi/2}C(\cos\phi)\cos^m\!\phi\;P_n^{m+1}(\cos^2\phi)\,\mathrm{d}\phi\approx\frac{1}{K}\sum_{k=1}^{K}C(\cos\phi_k)\cos^m\!\phi_k\,P_n^{m+1}(\cos^2\phi_k). \tag{4.8} $$
Here, $\phi_k$ is given by Eq. (3.6) and $P_n^m(x)$ is the auxiliary polynomial of order $n$ introduced in [5]. Simple expressions for the matrix elements $h_n^m$ and $k_n^m$ as well as $s_n^m$ and $t_n^m$ for Eq. (4.7) are given in the Appendix; those for $f_n^m$ and $g_n^m$ appear in an appendix in [5]. An efficient implementation of Eq. (4.8) requires the recurrence relations given in Eqs. (A.2)–(A.6) of [5]. Those relations allow each polynomial to be evaluated robustly with just a handful of arithmetic operations regardless of its order. It follows from Eq. (4.1) that the first integrand in Eq. (4.8) is an even function of $u$, hence the second integrand is an even function of $\phi$ with period $\pi$. The sum over one quadrant in Eq. (4.8) therefore matches a sum over a full period of the summand and thus has all the striking properties discussed in Sec. 2. In particular, the $N+1$ coefficients of Eq. (4.1) can be extracted exactly by Eqs. (4.7) and (4.8) provided $K>N+1$. That is, the efficiency described in the previous section is matched yet again. As pointed out in the Appendix, Eq. (4.8) is an application of Chebyshev-Gauss quadrature. This is a critical advantage of these projections that enables such impressive performance.
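Equations (4.3)–(4.6) amount to bidiagonal matrix products and solves that never form a full matrix. The following sketch uses arbitrary stand-in values for $p_n$ and $q_n$ (the true entries come from the Appendix and [5]) and simply verifies that the recurrences invert one another:

```python
import numpy as np

def U_mul(p, q, v):
    """u = U_pq v via Eq. (4.3); ends with u_N = p_N v_N."""
    u = p * v
    u[:-1] += q[:-1] * v[1:]
    return u

def U_solve(p, q, u):
    """v = U_pq^{-1} u by the back-substitution of Eq. (4.4)."""
    v = np.empty_like(u)
    v[-1] = u[-1] / p[-1]
    for n in range(len(u) - 2, -1, -1):
        v[n] = (u[n] - q[n] * v[n + 1]) / p[n]
    return v

def L_solve(p, q, u):
    """v = L_pq^{-1} u by the forward substitution of Eq. (4.6)."""
    v = np.empty_like(u)
    v[0] = u[0] / p[0]
    for n in range(1, len(u)):
        v[n] = (u[n] - q[n - 1] * v[n - 1]) / p[n]
    return v

rng = np.random.default_rng(1)
N = 9
p = 1.0 + rng.random(N + 1)        # keep the diagonal safely away from zero
q = rng.standard_normal(N + 1)     # q_N is never used
v = rng.standard_normal(N + 1)
```

Each solve costs only a couple of arithmetic operations per element, which is why the full chain in Eq. (4.7) remains linear in the number of coefficients.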

5. Demonstration

An example of a freeform surface from the patent literature was considered in Sec. 2.2 of [6]. This surface has a plane of reflection symmetry and its conventional description involves Cartesian polynomial terms of the form $x^j y^k$ where $j+k\le 10$ and $j$ is always even on account of the symmetry and coordinate choice. The end result of expressing this surface in terms of $Q_n^m$ was also given in Fig. 4 of that work, but without derivation. An efficient means to achieve that conversion has been laid out above and it is now helpful for code verification to note some of the intermediate results. To give extra digits for numerical checks, all units in this section are expressed in femtometres (fm). For confirmation of the sag itself, now written with Cartesian arguments as $z=F(x,y)$, some sample values are

$$ \begin{aligned} F(\tfrac12\rho_{\max},0)&=7{,}989{,}797{,}240{,}841, & F(\tfrac12\rho_{\max},\tfrac12\rho_{\max})&=16{,}201{,}346{,}177{,}785,\\ F(0,\tfrac12\rho_{\max})&=7{,}996{,}883{,}312{,}162, & F(\tfrac12\rho_{\max},-\tfrac12\rho_{\max})&=16{,}038{,}915{,}342{,}886. \end{aligned} \tag{5.1} $$

The symmetry in this example means that it is convenient to choose the coordinate relation $(x,y)=\rho_{\max}(u\sin\theta,u\cos\theta)$ so that $B_m(u)\equiv 0$, hence $b_n^m\equiv 0$. With this, for each of the azimuthal averages [such as in Eq. (1.5) or the determination of $A_m(u)$ for Eqs. (3.4) and (4.1)] it is simplest to sample uniformly at $\theta_j=(j-\tfrac12)\pi/J$ for $j=1,2,\ldots,J$. It turns out that $J=11$ is sufficient for this tenth-order surface. When $N=4$ and $K=N+2$ is used in Eqs. (4.1) and (4.8), Eq. (4.7) reproduces the coefficients in Fig. 4 of [6] to sufficient accuracy to match the sag at the sub-nm level. This involves sampling at the 66 locations shown in Fig. 1 (associated with $\theta_j$ and $u=\cos\phi_k$ where $j=1,2,\ldots,J$ and $k=1,2,\ldots,K$) and yields $a_n^m$ for $m=0,1,2,\ldots,10$ and $n=0,1,2,\ldots,4$. Of those 55 fitted coefficients, only the 34 of order up to ten need to be retained and, as discussed in [6], these correspond to the original tenth-order Cartesian terms given in the patent. (The 21 higher-order terms that were dropped make only sub-nm contributions.)
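The 66 sample locations of Fig. 1 follow directly from $\theta_j=(j-\tfrac12)\pi/J$ and Eq. (3.6). A short sketch of that grid ($\rho_{\max}$ is set to unity here purely for illustration):

```python
import numpy as np

# J = 11 spokes over half the aperture (plane symmetry) and K = 6 rings
# at the Chebyshev radii u = cos(phi_k), as used in Sec. 5.
J, K = 11, 6
theta = (np.arange(1, J + 1) - 0.5) * np.pi / J          # theta_j = (j - 1/2) pi / J
phi = (2 * np.arange(1, K + 1) - 1) * np.pi / (4 * K)    # Eq. (3.6)
u = np.cos(phi)                                          # ring radii, all inside (0, 1)

# With (x, y) = rho_max (u sin theta, u cos theta), the sample points are:
rho_max = 1.0                                            # unit semi-diameter for display
x = rho_max * np.outer(u, np.sin(theta))
y = rho_max * np.outer(u, np.cos(theta))
```

All 66 points fall strictly inside the right half of the unit disc, matching the layout in Fig. 1.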

Fig. 1 Sample locations for a plane-symmetric conversion with eleven spokes and six rings. Reflection symmetry means that it is sufficient to sample in just one half of the aperture.

The effects of aliasing are suppressed by using larger values for $J$ and $N$ (hence also larger $K$ given $K=N+2$). Because the projection developed above is so efficient, there is little need to run with minimal values of these parameters for simple sag conversions: computing more coefficients allows confirmation of the decaying spectrum and then only the terms of significance are ultimately retained. Increasing $J$ has negligible impact in this example, but accuracy is further improved by using $N=9$ (so $K=11$). With this choice, when $m=1$, the vectors written in the Appendix as $\mathbf r$, $\mathbf e=U_{hk}^{-1}L_{hk}^{-1}\mathbf r$, $\mathbf d=U_{st}^{-1}\mathbf e$, and $\mathbf c=U_{fg}\mathbf d$ are found to be

$$ \{\mathbf r,\ \mathbf e,\ \mathbf d,\ \mathbf c\}=\left\{\begin{pmatrix}40{,}446{,}180{,}687\\ 53{,}565{,}404{,}125\\ 11{,}587{,}305{,}007\\ 328{,}228{,}051\\ 32{,}163{,}607\\ 366{,}902\\ 28{,}017\\ 321\\ 6\\ 0\end{pmatrix},\;\begin{pmatrix}687{,}615{,}292{,}982\\ 242{,}697{,}231{,}660\\ 9{,}224{,}094{,}424\\ 817{,}823{,}612\\ 8{,}029{,}264\\ 777{,}228\\ 9{,}268\\ 189\\ 5\\ 0\end{pmatrix},\;\begin{pmatrix}924{,}648{,}807{,}539\\ 474{,}067{,}029{,}112\\ 25{,}486{,}726{,}968\\ 2{,}023{,}663{,}245\\ 20{,}477{,}869\\ 1{,}728{,}692\\ 19{,}989\\ 399\\ 9\\ 0\end{pmatrix},\;\begin{pmatrix}225{,}290{,}889{,}213\\ 223{,}995{,}088{,}263\\ 12{,}915{,}787{,}282\\ 1{,}568{,}376{,}338\\ 19{,}668{,}263\\ 2{,}261{,}608\\ 31{,}561\\ 737\\ 20\\ 1\end{pmatrix}\right\}. \tag{5.2} $$
Notice that the last of these columns holds the spectral coefficients in the $m=1$ column of Fig. 4 in [6]. Twice the number of coefficients is given now and each is to six additional significant digits (i.e. rounded off in fm instead of nm). Similarly, as an additional check, these intermediate results for $m=5$ are
$$ \{\mathbf r,\ \mathbf e,\ \mathbf d,\ \mathbf c\}=\left\{\begin{pmatrix}98{,}410{,}809\\ 162{,}552{,}300\\ 23{,}414{,}045\\ 221{,}374\\ 2{,}464\\ 31\\ 1\\ 0\\ 0\\ 0\end{pmatrix},\;\begin{pmatrix}2{,}080{,}090{,}082\\ 262{,}103{,}956\\ 3{,}095{,}825\\ 39{,}133\\ 556\\ 11\\ 0\\ 0\\ 0\\ 0\end{pmatrix},\;\begin{pmatrix}2{,}145{,}718{,}876\\ 328{,}143{,}971\\ 4{,}317{,}814\\ 58{,}363\\ 865\\ 18\\ 0\\ 0\\ 0\\ 0\end{pmatrix},\;\begin{pmatrix}2{,}650{,}626{,}613\\ 801{,}139{,}998\\ 10{,}459{,}850\\ 151{,}133\\ 2{,}422\\ 55\\ 1\\ 0\\ 0\\ 0\end{pmatrix}\right\}. \tag{5.3} $$
Again, the last column in Eq. (5.3) gives the coefficients for $m=5$ in Fig. 4 of [6]. As discussed in Sec. 3, the $m=0$ column is handled differently and, in this case, I note just the final fit coefficients:
$$ \mathbf c=(70{,}530{,}723{,}064,\ 43{,}064{,}584,\ 81{,}637{,}144,\ 10{,}310{,}272,\ 546{,}272,\ 13{,}714,\ 464,\ 17,\ 1,\ 0). \tag{5.4} $$
Importantly, the result from Eq. (1.5) must be retained to full precision for use in Eqs. (1.6), (3.2), and (3.4) when determining the values in Eq. (5.4); it is only after this conversion that the best-fit curvature can be rounded to lower precision in keeping with the power tolerance associated with the sag.

The intermediate results given in Eqs. (5.2)–(5.4) are offered as an aid in the identification of coding errors. Ultimately, however, there is an acid test for verifying any code that implements such projections: plot and analyze the difference between the original sag expression and the new one. Similarly, there are analogous intermediate checks such as plotting the difference between the left and right sides of Eq. (A.9), etc. Notice too from Eqs. (5.2)–(5.4) how rapidly the coefficients decay and therefore what it means to say that such cases are “effectively band limited” (hence that aliasing can readily be suppressed). It is this property that makes it possible to use highly efficient sums in place of integrals in Eqs. (2.9), (3.5), and (4.8).

6. Fitting mid-spatial frequencies

As pointed out in Sec. 3 of [5], it is sometimes helpful to express the elements within the brackets on the second line of Eq. (1.1) in terms of their amplitude and phase, say

$$ a_n^m\cos m\theta+b_n^m\sin m\theta=\alpha_n^m\cos(m\theta-\phi_n^m), \tag{6.1} $$
and the mean square gradient of the normal departure [i.e. the component within braces in Eq. (1.1)] is then given by just the sum over $m$ and $n$ of the square of $\alpha_n^m$. What is more, as presented in [6], $\alpha_n^m$ alone appears in some low-order testability estimates; $\phi_n^m$ is of secondary importance in that regard. These properties are consistent with the observation that if we re-fit a part after rotating it through an angle $\psi$ about the centre of the aperture, i.e. $z=S(\rho,\theta)$ is replaced by $z=S(\rho,\theta-\psi)$, then $\alpha_n^m$ is invariant while $\phi_n^m$ simply changes by $m\psi$. This is the familiar property of translation of a function causing a phase change in the Fourier domain. It is natural to seek analogues of other basic Fourier properties in this context and to carry over as much intuition as possible from that realm. I begin this for specific sag functions while focusing primarily on the reduced form $\bar S(u,\theta)$ of Eqs. (1.6) and (1.7). In this way, the distractions of cosine factors and base spheres are largely suppressed and the results are always expressed in terms of a unit disk.

6.1 Projections of pure sinusoids

Consider, for example, a sag function that is perfectly sinusoidal. Because the orientation of the variations influences only $\phi_n^m$, it can be chosen arbitrarily and the invariant values of $\alpha_n^m$ are of primary interest. To be consistent with the example in Sec. 5, the sag is oriented to be symmetric in $x$ and $(x,y)=\rho_{\max}(u\sin\theta,u\cos\theta)$. As a result, $b_n^m\equiv 0$ and it follows that $\alpha_n^m=|a_n^m|$ with $\phi_n^m=[1-\mathrm{sgn}(a_n^m)]\pi/2$. Start then with a sag of unit peak value of the form

$$ z=\sin(\pi C y/\rho_{\max}+\delta)=\sin\delta\,\cos(\pi C y/\rho_{\max})+\cos\delta\,\sin(\pi C y/\rho_{\max}), \tag{6.2} $$
which has $C$ full cycles between $(x,y)=(0,\pm\rho_{\max})$ with a phase determined by $\delta$. When $\delta=\pi/2$, the sag in Eq. (6.2) is symmetric in $y$. Because changing the sign on $y$ alone is equivalent to replacing $\theta$ by $\pi-\theta$, it follows upon considering the impact of this change on $\cos m\theta$ that $a_n^m$ (hence also $\alpha_n^m$) must then vanish for all odd values of $m$. Given that the Cartesian order of the contribution associated with either $a_n^m$, $b_n^m$, or $\alpha_n^m$ is given by Eq. (1.2), only terms of even Cartesian order appear in such cases. Alternatively, for $\delta=0$ the sag is anti-symmetric in $y$ and $a_n^m$ is then zero for all even values of $m$. Further, if the sum of the squares of all $\alpha_n^m$ of Cartesian order $t$ is written as $S_t$, then alternate entries vanish in these two cases, i.e. $S_t$ vanishes when $t$ is either even or odd. Of course, the definitive property of $Q_n^m$ means that the sum over $t$ of $S_t$ is just the weighted mean square gradient of the fitted polynomial over the unit disc. I use the acronym PSS to refer to this partially summed spectrum, viz. $S_t$.
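The PSS bookkeeping itself is elementary: square each amplitude and accumulate it at the Cartesian order $t$ given by Eq. (1.2). A sketch with arbitrary stand-in amplitudes (not a fit of any surface discussed here):

```python
import numpy as np

def pss(alpha):
    """alpha: dict mapping (m, n) -> alpha_n^m. Returns dict t -> S_t,
    where t = 2n + 4 when m = 0 and t = 2n + m when m > 0 [Eq. (1.2)]."""
    S = {}
    for (m, n), amp in alpha.items():
        t = 2 * n + 4 if m == 0 else 2 * n + m
        S[t] = S.get(t, 0.0) + amp**2
    return S

# Illustrative amplitudes only: tilt, coma, trefoil, etc.
alpha = {(0, 0): 0.3, (1, 0): 1.2, (1, 1): -0.4, (2, 0): 0.5, (3, 0): 0.8}
S_t = pss(alpha)
rss = np.sqrt(sum(S_t.values()))   # rms gradient = root-sum-square of the alphas
```

Summing $S_t$ over $t$ then gives the weighted mean square gradient, which is the "rss" quantity quoted in the figures below.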

To clarify some of these observations, specific results are presented in Fig. 2 for the case $C=1$ where, to flatten the best-fit sphere of Eq. (1.5) and suppress the cosine effects of Eq. (1.6), I choose $\rho_{\max}=10^5$. (When the sag variation is much smaller than the aperture size, the part is essentially flat, so this aperture size is used in all the simple sinusoidal cases considered here.) Notice for the anti-symmetric case, i.e. for $\delta=0$ as presented in the left column of Fig. 2, that the polynomial is dominated by the tilt, coma, and trefoil terms, i.e. $(m,n)=(1,0)$, $(1,1)$, and $(3,0)$, respectively. The symmetric case, where $\delta=\pi/2$, is dominated by spherical aberration, astigmatism of second and fourth order, and tetrafoil, i.e. $(m,n)=(0,0)$, $(2,0)$, $(2,1)$, and $(4,0)$, respectively. In both cases, the magnitude of the largest tabulated coefficients is of the order of unity and the overall fit error in both cases is below $10^{-6}$ times the peak-to-valley of the sag. The PSS is evidently almost all at third order for the odd case, and at second and fourth order for the even case.

Fig. 2 A sinusoidal sag function is presented in each column. Both correspond to a single full period, i.e. C = 1 in Eq. (6.2), but one is symmetric while the other is antisymmetric. The sags are plotted in the first row with a linear color ramp between the displayed extreme values. The peak-to-valley and standard deviation are also shown. The spectra of fit coefficients are tabulated in the second row as functions of m and n and include all terms up to Cartesian order t = 11. The PSS’s are presented in the third row. The rms gradient is given by the square root of the sum of these values, and the result is shown as “rss” in those plots since it is the root-sum-square of all the tabulated coefficients.

When expressed in terms of a normalized coordinate, say $v$, and averaged uniformly over an integral number of periods, the rms slope of $\sin(\pi C v+\delta)$ is $2^{-1/2}\pi C\approx 2.22C$. For integral values of $C$ therefore, although it corresponds to a differently weighted average, the number given as “rss” in the plots in the third row can be expected to be approximately equal to $2.22C$. This estimate also applies when averaging over any region much larger than a period, so the resulting approximation should be valid for all $C$ much larger than unity.

It follows from Eq. (6.2) that the results for a sinusoid of different phase can be found by simply taking a linear combination of the coefficients in the second row of Fig. 2. For example, a phase of $\delta=\pi/4$ corresponds to an equal mixture of these even and odd cases, which interleaves the results so that the entries are then all non-zero and scaled by $\cos(\pi/4)=\sin(\pi/4)=2^{-1/2}$. Similarly, the PSS is found by interleaving the two plotted partially summed spectra. Keep in mind too that rotating the sag (i.e. giving the colored contours some other orientation in Fig. 2) means that $b_n^m$ then becomes non-zero in keeping with the comments made about rotation following Eq. (6.1), but $\alpha_n^m$ remains untouched. For any fixed frequency $C$, therefore, the entities $\alpha_n^m$ and $S_t$ for the case $\delta=\pi/4$ give access to a general characterization of such a sinusoidal sag of arbitrary orientation and phase. Some results are presented in Fig. 3 for the cases $C=5$ and $C=25$.

Fig. 3 Sinusoidal sag functions presented much as in Fig. 2 but now as a combination of the even and odd cases and for C = 5 and 25. The fit coefficient amplitudes are plotted on a logarithmic color scale in the second row as functions of m and n. Their peak value is shown next to the red block of the legend and each colored block below that represents a reduction by a factor of ten down to 10−7 times the maximum. The third row holds the PSS’s.

The spectral amplitudes, i.e. $\alpha_n^m$, are plotted in the second row of Fig. 3 because there are too many coefficients to tabulate. The color scale for these plots is logarithmic and ranges from red for the peak value down through seven orders of magnitude to purple. The coefficients of smaller value are not visible in this representation. In fact, $\alpha_n^m$ rapidly falls by orders of magnitude below the blue diagonal zone in those plots. It is striking that the contours of $\alpha_n^m$ coincide closely with contours of $t$ and that the highest ridge of that distribution occurs near $t\approx\pi C$. As a result, the PSS (found by summing along contours of $t$) peaks strongly near $t\approx\pi C$. More importantly, the majority of the PSS falls within a narrow interval around this peak. To emphasize this and clarify the trend, the PSS for $C=100$ with $\delta=\pi/4$ is plotted in Fig. 4. The rss values shown in Figs. 3 and 4 confirm that, as anticipated in the discussion of Fig. 2, the rms gradient is approximated well by $2.22C$.

Fig. 4 The PSS for C = 100 follows upon fitting over 150,000 coefficients.

6.2 Frequency filtering the Q-spectra for a sinusoid

It turns out that 60% of the blue area in Fig. 4 falls within the last four lobes of the distribution, i.e. 280 < t < 320, so the rss of these components alone is over 75% of the total (since √0.60 ≈ 0.77). More generally, sinusoidal variations with a period of λ = 2ρ_max/C are principally localized in their PSS around the region t ≈ πC. By combining these two relations, it follows that the abscissa for the PSS can be loosely interpreted as a spatial frequency through

t \approx 2\pi \rho_{\max} / \lambda. \tag{6.3}

A more complete appreciation of the associated frequency resolution follows upon separately summing different bands of these spectra. As an example, three sub-bands of the spectrum in the right-hand column of Fig. 3 are shown in Fig. 5. Notice that the spacing between the ridges that run from top to bottom near the centre of each plot is consistent with Eq. (6.3) given that the dominant components in each are near t equal to 30, 60, and 75, respectively. It is striking, however, that spatial variations are judged by this basis to be of lower frequency when they are nearer and parallel to the edge. That is, the dominant variations near the left and right edges of these plots appear in the two lower frequency bands (which fortuitously means that a part’s typical edge effects can be more readily captured in such spectra). Most of the original sinusoid is clearly in the rightmost plot in Fig. 5, however, and the rms slope of these filtered components doubles at each of the two steps from left to right in the figure. Together, these components constitute the associated sinusoidal form to within ± 0.0017; the balance falls in t>90 and that residual drops by more than a factor of ten with each few additional orders in t.
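Band-limited reconstructions like those in Fig. 5 amount to zeroing all coefficients outside a window before re-summing the series. A minimal sketch (the coefficient container keyed by (n, m) is hypothetical) that selects a sub-band by a predicate, so the same routine can discriminate on Cartesian order t, azimuthal order m, or radial order n:

```python
def filter_spectrum(coeffs, keep):
    # Return only the terms for which keep(n, m) is true; reconstructing the
    # sag from this subset gives the corresponding band-passed surface map.
    return {(n, m): c for (n, m), c in coeffs.items() if keep(n, m)}

def t_band(coeffs, t_lo, t_hi):
    # The Cartesian-order bands of Fig. 5, with t = 2n + 4 (m = 0) or 2n + m (m > 0).
    t = lambda n, m: 2 * n + 4 if m == 0 else 2 * n + m
    return filter_spectrum(coeffs, lambda n, m: t_lo <= t(n, m) <= t_hi)
```

Filtering on the azimuthal order alone, as in Fig. 6, is then just `filter_spectrum(coeffs, lambda n, m: 26 <= m <= 50)`, for example.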

Fig. 5 The data shown at top right in Fig. 3 is presented here after a low-, medium-, and high-pass filter has been applied in terms of the Cartesian order in the Q basis. These plots contain only the components in the data from (a) t = 1 to 30 at left, (b) t = 31 to 60 in the central plot, and (c) t = 61 to 90 at right. The rss (i.e. the rms gradient) for these three bands is 12.1, 24.9, and 48.5, respectively, and the standard deviation of each band appears in the plots.

These Q-spectra can be filtered not just in terms of t, but also in m (for separation of different angular frequencies) and n (for different radial frequencies). An example is given in Fig. 6 where the filtering is based on angular frequency. Because the results in Fig. 6 are a combination of filtering in t and m, they are also effectively filtering in n. That is, as can be appreciated by tracking the variations along either some diametral lines or concentric circles, both the radial and angular frequencies change as expected in each of the two steps from left to right. The capabilities of such processes are explored by analyzing more complex data in the next section.

Fig. 6 The data in the rightmost plot of Fig. 5 is now passed through a low-, medium-, and high-pass filter that discriminates based on the azimuthal order rather than the Cartesian order. These plots contain only the components in the data from (a) m = 0 to 25 at left, (b) m = 26 to 50 in the central plot, and (c) m = 51 to 90 at right. This division is chosen so that there is roughly an equal number of terms in each of these three sub-bands [two of the domains are parallelograms and one a trapezoid in the (m,n)-plane]. Together, these components exactly comprise the original function.

6.3 Demonstration with synthetic data

A better appreciation of the ideas in the previous subsection can be obtained by analyzing some synthetic data that mimics as-built part shapes. For this, a superposition of a sum of thousands of sinusoidal terms with a randomized, decaying Fourier spectrum is overlaid with some additional structure. The fitting process for this case determines a_n^m and b_n^m for n from 0 up to N = 75 and m from 0 up to M = 2N = 150. To achieve this, samples are taken at u = cos ϕ_k with k = 1, 2, ..., K, where ϕ_k is given in Eq. (3.6) and K = N + 2 = 77. Since there is no plane of symmetry, the sampling is no longer in a half-plane as in Fig. 1. Instead, the azimuthal averages used to determine A_m(u) and B_m(u) for Eq. (4.1) involve sampling at θ = jΔ for j = 1, 2, ..., J, where Δ = 2π/J and J = 2(M + 1) = 302. That is, 302 samples are taken on each of 77 rings so, in this case, 77 × 302 = 23,254 sag samples are used to determine (N + 1) × (2M + 1) = 22,876 coefficients. This is close to the one-for-one efficiency standard set by the discrete Fourier methods. As an aside, since the Fourier series on each ring is independent, an attractive and generally equivalent alternative is to sample on the k'th ring at θ = (j + k/φ)Δ, where φ = (√5 + 1)/2 ≈ 1.618 is the golden ratio. The distinction is displayed in Fig. 7 and is related to Fibonacci lattices [11]. Of course, the number of samples around these rings can be boosted to a power of two to increase the benefit delivered by FFT algorithms, if desired [10].
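The sampling rule just described can be generated directly. A minimal sketch (the function name and point layout are illustrative, not from the paper), assuming the M = 2N convention used in this section:

```python
import math

def sample_points(N, golden=False):
    # K = N + 2 rings at u_k = cos(phi_k) with phi_k = (2k - 1) pi / (4K), and
    # J = 2(M + 1) azimuthal samples per ring for M = 2N; the 'golden' option
    # offsets ring k by k/phi turns to avoid bunching in the outer zones.
    M = 2 * N
    K, J = N + 2, 2 * (M + 1)
    delta = 2 * math.pi / J
    phi = (math.sqrt(5) + 1) / 2  # the golden ratio, ~1.618
    pts = []
    for k in range(1, K + 1):
        u = math.cos((2 * k - 1) * math.pi / (4 * K))
        for j in range(1, J + 1):
            theta = (j + k / phi) * delta if golden else j * delta
            pts.append((u, theta))
    return pts
```

For N = 75 this yields the 77 × 302 = 23,254 samples quoted above; for N = 25 it gives the 2,754 locations shown in Fig. 7.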

Fig. 7 The default sample locations at left for N = 25 (hence M = 50, J = 102, and K = 27) involve 2,754 samples to extract (2M + 1)(N + 1) = 2,626 coefficients. This is appropriate for a general fit of a spectrum like that in the left column of Fig. 3. The Fibonacci-inspired and analytically equivalent option at right avoids unattractive bunching in the outer zones.

The spectrum for this example is plotted in Fig. 8 and the data itself for the normal departure appears in Fig. 9(a). At a demanding yet realistic level, if the units are taken to be millimeters and the departure needs to be characterized down to the nanometer level, the fit must be accurate to better than one part in 10^6. In this case, it is evident in the plot at left in Fig. 8 that only about half of the 23,000 fitted coefficients are large enough to be significant for these purposes, viz. those within t ≤ 140. (Remember that these fits must be taken to sufficiently high orders to suppress corruption by aliasing, so subsequently disregarding higher orders is an essential aspect of the process.)

Fig. 8 The peak coefficient in this spectrum is 2.3 and the color legend steps down to purple for an amplitude of 2.3 × 10^−7. The PSS is also plotted on a log scale as the green curve at right, while the blue curves give the results when the sampled sag values are corrupted by additive noise uniformly distributed between ±1.0 × 10^−3, ±1.0 × 10^−4, and ±1.0 × 10^−5. Only the highest level of noise can be distinguished from the noise-free case in the left half of the plot (t < 80). The purple curve is found by using interpolation from a 250 × 250 pixelated grid.

Fig. 9 (a) Normal departure extracted from a synthetic data set. The components in that data from (b) t = 1 to 10, (c) t = 11 to 24, (d) t = 25 to 64, (e) t = 65 to 84, and (f) t = 85 to 140.

It is instructive now to perform some filtering. First, it is clear from the plot at left in Fig. 8 that the spectrum is dominated by the terms satisfying t ≤ 10. Figure 9(b) reveals how well the shape is captured by those first 4 + 2 × 30 = 64 fit coefficients. The residual is almost indistinguishable from what is plotted in Fig. 9(c), which is the sum of the components in the spectrum within 11 ≤ t ≤ 24. Striking structures become visible at the higher spatial frequencies presented in Figs. 9(d), 9(e), and 9(f). Such MSF patterns are oftentimes found on parts manufactured with modern sub-aperture tools: spoking is evident in Fig. 9(d), two sets of raster marks at 60° to each other are clear in Fig. 9(e), and Fig. 9(f) suggests some high-order rings on the surface. As a further demonstration, the spoke and high-frequency ring components are more specifically isolated in Fig. 10, but the more impressive and central result is the separation of spatial frequencies demonstrated in Fig. 9: the capability to decompose the data in Fig. 9(a) into the five bands shown is both striking and useful.

Fig. 10 Plots of the components of the spectrum in Fig. 8 for the data in Fig. 9(a) with (a) m = 14, 28, 42,… 140 and (b) m = 0 and n > 7. Notice that the multiples of 14 stand out as vertical bands in the plot at left in Fig. 8 and that there are anomalously large values for large n when m = 0. These are the indicators that suggest these two final filtering steps.

More realistically, metrology is corrupted by noise of various types. An indication of the impact that noise has on these fitting processes is presented in the PSS in Fig. 8. It turns out that the spectral coefficients are all impacted at about the same absolute level. Further, it is pleasing that summing the difference between the clean and noisy spectra for coefficients above the noise floor yields a map whose peak value is comparable to the noise level. Another type of noise is encountered when interpolating from pixelated metrology to obtain the desired sample values. To explore this, I used a 250 × 250 pixel grid with bi-linear interpolation, and the resulting PSS is shown as the purple curve at right in Fig. 8. In this case, summing all terms within t ≤ 140 gave an absolute error less than 0.0005. This means that all the sub-bands in Fig. 9 except (f) can be extracted in this way. Interestingly, Fibonacci sampling yielded equivalent results, but gave slightly larger coefficients at high azimuthal orders for this interpolated case.
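The pixel-grid experiment relies only on elementary bi-linear interpolation. A self-contained sketch (the function name and pixel layout are illustrative) of the kind of interpolation used to read sag values at the polar sample locations:

```python
def bilinear(grid, x, y):
    # grid[iy][ix] holds samples on a unit-spaced pixel lattice; (x, y) are
    # continuous pixel coordinates strictly inside the grid.
    ix, iy = int(x), int(y)
    fx, fy = x - ix, y - iy
    return (grid[iy][ix] * (1 - fx) * (1 - fy)
            + grid[iy][ix + 1] * fx * (1 - fy)
            + grid[iy + 1][ix] * (1 - fx) * fy
            + grid[iy + 1][ix + 1] * fx * fy)
```

As noted above, the interpolation error behaves like an additional noise floor in the fitted spectrum.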

For parts with circular apertures, sampling processes of the type shown in Fig. 7 appear to be promising partners for coordinate measuring machines. When the metrology yields a pixelated map on the other hand, although interpolation is then a workable option, the process can in practice be complicated by the presence of regions of invalid data. In such cases and, more generally, for intentionally non-circular apertures there are useful modifications based on least-squares methods for the fitting processes developed above. While the factoring into azimuthal and radial stages remains a critical step, the details are not discussed further here.

7. Concluding remarks

The goal of this work was to develop a robust projection for expressing nominal shapes in terms of the gradient-orthogonal basis Q_n^m. Although the natural projection would involve derivatives for determining gradients, this could sometimes be problematic and was found to be unnecessary. Despite the fact that the underlying orthogonality is based on an integral, the primary results, namely Eqs. (3.3), (3.5), (4.7) and (4.8), are of comparable efficiency to discrete Fourier transforms. That is, they yield almost one fitted coefficient for every sample point evaluated within the circular aperture. Further, the first stage of this projection can be implemented by using FFTs, and the second uses Chebyshev–Gauss quadrature to deliver its remarkable overall efficiency. It is also worth noting that the projection developed in the Appendix to an intermediate basis can equally be followed by a change of basis using Eqs. (B.7)-(B.9) of [5] together with Salzer's recurrence-based methods of [12]. In this way, the results can be expressed in terms of other orthogonal polynomials if required, e.g. Zernike polynomials (although there can be stability problems at extreme orders in that case). That is, the methods developed above allow freeform shapes that are defined over a disc to be expressed in terms of a variety of standard orthogonal polynomials.

Because they readily extend to many thousands of terms, the methods developed here create possibilities for a novel characterization of MSF structure on as-built parts, regardless of whether those parts are nominally rotationally symmetric or intentionally freeform. This has long been thought to be beyond the reach of polynomials alone. It is also significant that early innovations for coping with the evolving challenges of tolerancing MSF led to a realization of the importance of an rms gradient metric in some applications [13]. It is noteworthy, then, that the projection to Q_n^m gives direct access to the rms gradient, and that this is connected to the rms radius of a ray spot diagram. Further, the power of the spatial frequency-filtering processes demonstrated in Sec. 6 is perhaps the most promising surprise to emerge from this work. It establishes that, on top of their benefits in nominal shape characterization and design, tailored orthogonal polynomials also have the potential to serve as a useful diagnostic and qualification tool for MSF-related aspects of optics production.

Appendix: Derivation of the radial fit

The weighted average used in the definition of Q_n^m gives the radial analogue to the azimuthal average of Eq. (1.3):

\langle g(u) \rangle_u := \frac{2}{\pi} \int_0^1 \frac{g(u)}{\sqrt{1-u^2}} \, du, \tag{A.1}

where the normalization factor in front of the integral ensures that ⟨1⟩_u = 1. The goal in this appendix is to determine the coefficients c_n of Eq. (4.1). This is broken down into three steps. In all the cases considered, g(u) is a symmetric function of u, so the integral in Eq. (A.1) is just one half of the same integral taken over (−1, 1); it therefore has the form required for the application of Chebyshev–Gauss quadrature [9], which enters this work in Eq. (4.8).
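Under the substitution u = cos ϕ, the weight in Eq. (A.1) cancels and the Chebyshev–Gauss rule reduces the average to a uniformly weighted sum over the nodes ϕ_k = (2k − 1)π/(4K) of Eq. (3.6). A small sketch (the function name is illustrative) that makes this concrete:

```python
import math

def radial_average(g, K):
    # <g(u)>_u = (2/pi) * int_0^1 g(u) / sqrt(1 - u^2) du, evaluated by
    # Chebyshev-Gauss quadrature: u = cos(phi) turns the weighted integral
    # into a plain average of g over the K nodes phi_k = (2k - 1) pi / (4K).
    return sum(g(math.cos((2 * k - 1) * math.pi / (4 * K)))
               for k in range(1, K + 1)) / K
```

With this, ⟨1⟩_u = 1 as required, and low-order even moments such as ⟨u²⟩_u = 1/2 are reproduced to machine precision even for modest K.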

The first step is to project onto the auxiliary basis of [5], in particular onto P_n^{m+1}, and this amounts to choosing the e_n to minimize the mean-square error given by

\Bigl\langle \Bigl[ C(u) - u^m \sum_{n=0}^{N} e_n P_n^{m+1}(u^2) \Bigr]^2 \Bigr\rangle_u. \tag{A.2}
The element in row n and column n′ of the associated Gram matrix is

\langle u^m P_n^{m+1}(u^2) \, u^m P_{n'}^{m+1}(u^2) \rangle_u = \frac{2}{\pi} \int_0^1 P_n^{m+1}(u^2) P_{n'}^{m+1}(u^2) \frac{u^{2m}}{\sqrt{1-u^2}} \, du = \frac{1}{\pi} \int_0^1 P_n^{m+1}(x) P_{n'}^{m+1}(x) \frac{x^{m-1/2}}{\sqrt{1-x}} \, dx, \tag{A.3}
and the n'th element of the vector on the right-hand side of the so-called normal equations is

r_n := \langle C(u) \, u^m P_n^{m+1}(u^2) \rangle_u = \frac{2}{\pi} \int_0^1 C(u) P_n^{m+1}(u^2) \frac{u^m}{\sqrt{1-u^2}} \, du. \tag{A.4}

Since P_n^{m+1}(x) is just a scaled version of the Jacobi polynomial P_n^{(-3/2,\,m-1/2)}(2x-1), it follows from Eq. (A.9) of [5] (with m replaced throughout by m+1) that P_n^{m+1}(x) is a linear combination of P_n^{(-1/2,\,m-1/2)}(2x-1) and P_{n-1}^{(-1/2,\,m-1/2)}(2x-1). It immediately follows from the orthogonality of the Jacobi polynomials that the Gram matrix of Eq. (A.3) is tri-diagonal.

By using Eq. (A.10) of [5] (again, with m replaced throughout by m+1), the above-diagonal elements of the Gram matrix, say K_n^m := \langle u^{2m} P_n^{m+1}(u^2) P_{n+1}^{m+1}(u^2) \rangle_u, are found to satisfy

K_n^m = \begin{cases} \dfrac{(2m+1)!!}{2\,(2m+2)!!}, & n = 0, \\[1ex] \dfrac{2(2m+3)m}{3(m+3)(m+2)} K_0^m, & n = 1, \\[1ex] \dfrac{(n+1)(m+2n-2)(m+2n-3)(2m+2n+1)}{(2n+1)(m+n-2)(m+2n+1)(m+2n)} K_{n-1}^m, & n > 1. \end{cases} \tag{A.5}
Similarly, the diagonal elements, say H_n^m := \langle u^{2m} [P_n^{m+1}(u^2)]^2 \rangle_u, can be expressed in terms of K_n^m as

H_n^m = \begin{cases} \dfrac{m+1}{2m+1} K_0^m, & n = 0, \\[1ex] \dfrac{3m+2}{m+2} K_0^m, & n = 1, \\[1ex] \dfrac{(m+2n-3)\bigl[(m+n-2)(4n-1)+5n\bigr]}{(m+n-2)(2n-1)(m+2n)} K_{n-1}^m, & n > 1. \end{cases} \tag{A.6}
As in Eq. (A.18) of [5], the elements of the Cholesky decomposition of this matrix are found by starting with h_0^m = \sqrt{H_0^m} and working progressively up from n = 1 to N by using

k_{n-1}^m = K_{n-1}^m / h_{n-1}^m, \quad \text{and} \quad h_n^m = \sqrt{H_n^m - (k_{n-1}^m)^2}. \tag{A.7a,b}
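These recurrences translate directly into code. The following sketch (function names are illustrative) builds K_n^m and H_n^m and then runs the bi-diagonal Cholesky sweep; the results can be checked against the tabulated low-order values given at the end of this appendix, e.g. H_2^1 = 17/144 and h_1^1 = √2/8:

```python
import math

def gram_elements(m, N):
    # Off-diagonal K_n^m and diagonal H_n^m of the tri-diagonal Gram matrix,
    # generated by the recurrences above; valid for m >= 1 and N >= 1.
    def dfact(p):  # double factorial p!! = p (p-2) (p-4) ...
        return math.prod(range(p, 0, -2))
    K = [dfact(2 * m + 1) / (2 * dfact(2 * m + 2))]
    H = [(m + 1) / (2 * m + 1) * K[0]]
    K.append(2 * (2 * m + 3) * m / (3 * (m + 3) * (m + 2)) * K[0])
    H.append((3 * m + 2) / (m + 2) * K[0])
    for n in range(2, N + 1):
        Kn = ((n + 1) * (m + 2 * n - 2) * (m + 2 * n - 3) * (2 * m + 2 * n + 1)
              / ((2 * n + 1) * (m + n - 2) * (m + 2 * n + 1) * (m + 2 * n))) * K[n - 1]
        Hn = ((m + 2 * n - 3) * ((m + n - 2) * (4 * n - 1) + 5 * n)
              / ((m + n - 2) * (2 * n - 1) * (m + 2 * n))) * K[n - 1]
        K.append(Kn)
        H.append(Hn)
    return K, H

def cholesky_factors(K, H):
    # h_0 = sqrt(H_0), then k_{n-1} = K_{n-1}/h_{n-1} and
    # h_n = sqrt(H_n - k_{n-1}^2), working up from n = 1.
    h, k = [math.sqrt(H[0])], []
    for n in range(1, len(H)):
        k.append(K[n - 1] / h[n - 1])
        h.append(math.sqrt(H[n] - k[-1] ** 2))
    return h, k
```

For m = 1 this reproduces K_2^1 = 7/160, H_2^1 = 17/144, h_1^1 = √2/8 and k_0^1 = 3√2/8.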
In terms of the notation of Eqs. (4.2)-(4.7), the Gram matrix of Eq. (A.3) is equal to L_hk U_hk, and the solution for e_n in Eq. (A.2) can therefore now be written as e = U_hk^{-1} L_hk^{-1} r. As an aside, I note that it can be useful to extend these results to include m = 0: first, when fitting to other polynomials such as Zernikes, the projection onto P_n^{m+1}(x) can be a helpful first step for all m and, second, P_n^1(x) is itself gradient orthogonal, so it can be scaled to replace x(1-x)Q_n^0(x) in non-standard applications. Accordingly, note that K_0^0 = 3/8, H_0^0 = 1/4, K_1^0 = 1/24, and H_1^0 = 19/32, while for all n > 1 they are given by

K_n^0 = (n^2 - 1)/(32n^2 - 8), \quad \text{and} \quad H_n^0 = \bigl[ 1 + (1 - 2n)^2 \bigr]/16. \tag{A.8a,b}

The second step is to change the basis from P_n^{m+1}(x) to P_n^m(x). That is, we now seek d_n such that

\sum_{n=0}^{N} e_n P_n^{m+1}(x) \equiv \sum_{n=0}^{N} d_n P_n^m(x). \tag{A.9}
By taking α = -3/2 and β = m - 1/2 in Eq. (22.7.19) of [9] and using Eq. (A.4) of [5], it follows that

P_n^m(x) = \begin{cases} s_0^m P_0^{m+1}(x), & n = 0, \\ s_n^m P_n^{m+1}(x) + t_{n-1}^m P_{n-1}^{m+1}(x), & n > 0, \end{cases} \tag{A.10}

where

s_n^m = \begin{cases} 1, & n = 0, \\ 1/2, & m = 1 \text{ and } n = 1, \\ \dfrac{m+n-2}{m+2n-2}, & \text{otherwise,} \end{cases} \tag{A.11}

t_n^m = \begin{cases} -1/2, & m = 1 \text{ and } n = 0, \\ \dfrac{(1-2n)(n+1)}{(m+2n)(2n+1)}, & \text{otherwise.} \end{cases} \tag{A.12}
It follows from Eq. (A.10) that the coefficients in Eq. (A.9) are related by d = U_st^{-1} e. The final step is to change basis from P_n^m(x) to Q_n^m(x), and Eq. (B.5) of [5] states that this step can be expressed as c = U_fg d. Finally, these three stages can be combined to arrive at the desired result for this appendix: c = U_fg U_st^{-1} U_hk^{-1} L_hk^{-1} r.
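Every factor in this product is bi-diagonal, so each inverse is applied as a two-term substitution sweep rather than a general matrix solve, and the whole conversion from r to c costs only O(N) operations per azimuthal order. A sketch of the two sweeps (list-based, with diagonal p and off-diagonal q in the spirit of the U_pq notation of Eq. (4.2)):

```python
def solve_upper_bidiagonal(p, q, rhs):
    # Solve U v = rhs for upper bi-diagonal U (diagonal p, superdiagonal q)
    # by back-substitution: v_n = (rhs_n - q_n v_{n+1}) / p_n.
    N = len(rhs)
    v = [0.0] * N
    v[-1] = rhs[-1] / p[-1]
    for n in range(N - 2, -1, -1):
        v[n] = (rhs[n] - q[n] * v[n + 1]) / p[n]
    return v

def solve_lower_bidiagonal(p, q, rhs):
    # Solve L v = rhs for lower bi-diagonal L (diagonal p, subdiagonal q)
    # by forward substitution: v_n = (rhs_n - q_{n-1} v_{n-1}) / p_n.
    v = [rhs[0] / p[0]]
    for n in range(1, len(rhs)):
        v.append((rhs[n] - q[n - 1] * v[n - 1]) / p[n])
    return v
```

Chaining a lower sweep followed by three upper sweeps then applies L_hk^{-1}, U_hk^{-1}, U_st^{-1} and U_fg (the last as a multiply) in turn.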

For verifying any code that implements these relations, it is helpful to have a sample of low-order values for these entities that will typically be pre-computed and stored:

\begin{pmatrix} H_0^1 & H_0^2 & H_0^3 & H_0^4 \\ H_1^1 & H_1^2 & H_1^3 & H_1^4 \\ H_2^1 & H_2^2 & H_2^3 & H_2^4 \\ H_3^1 & H_3^2 & H_3^3 & H_3^4 \end{pmatrix} = \begin{pmatrix} \frac{1}{8} & \frac{3}{32} & \frac{5}{64} & \frac{35}{512} \\ \frac{5}{16} & \frac{5}{16} & \frac{77}{256} & \frac{147}{512} \\ \frac{17}{144} & \frac{7}{48} & \frac{31}{192} & \frac{1045}{6144} \\ \frac{37}{400} & \frac{9}{80} & \frac{649}{5120} & \frac{7007}{51200} \end{pmatrix}, \tag{A.13}

\begin{pmatrix} K_0^1 & K_0^2 & K_0^3 & K_0^4 \\ K_1^1 & K_1^2 & K_1^3 & K_1^4 \\ K_2^1 & K_2^2 & K_2^3 & K_2^4 \end{pmatrix} = \begin{pmatrix} \frac{3}{16} & \frac{5}{32} & \frac{35}{256} & \frac{63}{512} \\ \frac{5}{96} & \frac{7}{96} & \frac{21}{256} & \frac{11}{128} \\ \frac{7}{160} & \frac{9}{160} & \frac{33}{512} & \frac{143}{2048} \end{pmatrix}, \tag{A.14}

\begin{pmatrix} h_0^1 & h_0^2 & h_0^3 & h_0^4 \\ h_1^1 & h_1^2 & h_1^3 & h_1^4 \\ h_2^1 & h_2^2 & h_2^3 & h_2^4 \\ h_3^1 & h_3^2 & h_3^3 & h_3^4 \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{2}}{4} & \frac{\sqrt{6}}{8} & \frac{\sqrt{5}}{8} & \frac{\sqrt{70}}{32} \\ \frac{\sqrt{2}}{8} & \frac{\sqrt{30}}{24} & \frac{3\sqrt{7}}{32} & \frac{\sqrt{105}}{40} \\ \frac{\sqrt{2}}{8} & \frac{\sqrt{70}}{40} & \frac{\sqrt{30}}{24} & \frac{5\sqrt{462}}{448} \\ \frac{\sqrt{2}}{8} & \frac{3\sqrt{14}}{56} & \frac{\sqrt{770}}{128} & \frac{\sqrt{3003}}{240} \end{pmatrix}, \tag{A.15}

\begin{pmatrix} k_0^1 & k_0^2 & k_0^3 & k_0^4 \\ k_1^1 & k_1^2 & k_1^3 & k_1^4 \\ k_2^1 & k_2^2 & k_2^3 & k_2^4 \end{pmatrix} = \begin{pmatrix} \frac{3\sqrt{2}}{8} & \frac{5\sqrt{6}}{24} & \frac{7\sqrt{5}}{32} & \frac{9\sqrt{70}}{160} \\ \frac{5\sqrt{2}}{24} & \frac{7\sqrt{30}}{120} & \frac{\sqrt{7}}{8} & \frac{11\sqrt{105}}{336} \\ \frac{7\sqrt{2}}{40} & \frac{9\sqrt{70}}{280} & \frac{33\sqrt{30}}{640} & \frac{13\sqrt{462}}{960} \end{pmatrix}, \tag{A.16}

\begin{pmatrix} s_0^1 & s_0^2 & s_0^3 & s_0^4 \\ s_1^1 & s_1^2 & s_1^3 & s_1^4 \\ s_2^1 & s_2^2 & s_2^3 & s_2^4 \\ s_3^1 & s_3^2 & s_3^3 & s_3^4 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 1 & 1 \\ \frac{1}{2} & \frac{1}{2} & \frac{2}{3} & \frac{3}{4} \\ \frac{1}{3} & \frac{1}{2} & \frac{3}{5} & \frac{2}{3} \\ \frac{2}{5} & \frac{1}{2} & \frac{4}{7} & \frac{5}{8} \end{pmatrix}, \quad \begin{pmatrix} t_0^1 & t_0^2 & t_0^3 & t_0^4 \\ t_1^1 & t_1^2 & t_1^3 & t_1^4 \\ t_2^1 & t_2^2 & t_2^3 & t_2^4 \end{pmatrix} = \begin{pmatrix} -\frac{1}{2} & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} \\ -\frac{2}{9} & -\frac{1}{6} & -\frac{2}{15} & -\frac{1}{9} \\ -\frac{9}{25} & -\frac{3}{10} & -\frac{9}{35} & -\frac{9}{40} \end{pmatrix}. \tag{A.17a,b}

References and links

1. A. Yabe, "Representation of freeform surfaces suitable for optimization," Appl. Opt. 51(15), 3054–3058 (2012).

2. I. Kaya and J. P. Rolland, "Hybrid RBF and local ϕ-polynomial freeform surfaces," Adv. Opt. Technol. 2(1), 81–88 (2012).

3. P. Jester, C. Menke, and K. Urban, "Wavelet Methods for the Representation, Analysis and Simulation of Optical Surfaces," IMA J. Appl. Math. 77, 357–363 (2012).

4. R. Steinkopf, L. Dick, T. Kopf, A. Gebhardt, S. Risse, and R. Eberhardt, "Data handling and representation of freeform surfaces," Proc. SPIE 8169, 81690X (2011).

5. G. W. Forbes, "Characterizing the shape of freeform optics," Opt. Express 20(3), 2483–2499 (2012).

6. C. Menke and G. W. Forbes, "Optical design with orthogonal representations of rotationally symmetric and freeform aspheres," Adv. Opt. Technol. 2(1), 97–109 (2012).

7. W. Press, S. Teukolsky, W. Vetterling, and B. Flannery, Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, 1992), Section 15.4.

8. G. W. Forbes, "Robust, efficient computational methods for axially symmetric optical aspheres," Opt. Express 18(19), 19700–19712 (2010).

9. M. Abramowitz and I. Stegun, Handbook of Mathematical Functions (Dover, 1978). See 25.4.38.

10. See Section 12.3 of [7], or Sec. 12.4 at http://apps.nrbook.com/empanel/index.html#

11. J. H. Hannay and J. F. Nye, "Fibonacci numerical integration on a sphere," J. Phys. A: Math. Gen. 37(48), 11591–11601 (2004).

12. G. W. Forbes, "Robust and fast computation for the polynomials of optics," Opt. Express 18(13), 13851–13862 (2010).

13. J. K. Lawson, J. M. Auerbach, R. E. English, Jr., M. A. Henesian, J. T. Hunt, R. A. Sacks, J. B. Trenholme, W. H. Williams, M. J. Shoup III, J. H. Kelly, and C. T. Cotton, "NIF optical specifications: the importance of the RMS gradient," Proc. SPIE 3492, 336–343 (1999).

Figures (10)

Fig. 1 Sample locations for a plane-symmetric conversion with eleven spokes and six rings. Reflection symmetry means that it is sufficient to sample in just one half of the aperture.
Fig. 2 A sinusoidal sag function is presented in each column. Both correspond to a single full period, i.e. C = 1 in Eq. (6.2), but one is symmetric while the other is antisymmetric. The sags are plotted in the first row with a linear color ramp between the displayed extreme values. The peak-to-valley and standard deviation are also shown. The spectra of fit coefficients are tabulated in the second row as functions of m and n and include all terms up to Cartesian order t = 11. The PSS’s are presented in the third row. The rms gradient is given by the square root of the sum of these values, and the result is shown as “rss” in those plots since it is the root-sum-square of all the tabulated coefficients.
