Optica Publishing Group

Imaging with partially coherent light: elementary-field approach

Open Access

Abstract

Numerical modeling of bright-field and dark-field imaging with spatially partially coherent light is considered. The illuminating field is expressed as a superposition of transversely shifted, fully coherent elementary fields of identical form. Examples of imaging under variable coherence conditions demonstrate the computational feasibility of the model even when the coherence area of the illumination is on the wavelength scale.

© 2015 Optical Society of America

1. Introduction

The bilinear nature of the object-image relationship under spatially partially coherent illumination implies that a direct solution of the imaging problem involves four-dimensional (4D) integrals [1,2], which leads to high computational effort. In non-separable imaging geometries, the problem generally becomes numerically prohibitive when the complexity of a two-dimensional object increases. Over several decades, various computational approaches have been developed to treat this problem [3–7], particularly in the field of projection lithography [8–10]. The computational task can be simplified by means of the coherent-mode decomposition of the cross-spectral density function (CSD) associated with the illuminating field [11], which reduces the 4D integrals into a sum of 2D integrals. Some initial steps in this direction have been taken [12,13], but all existing methods have some drawbacks as explained in [14].

We propose and demonstrate an alternative coherent-mode approach in which the incident CSD is represented in terms of spatially shifted and mutually uncorrelated elementary field modes [14–17]. This representation is applicable, for instance, to all imaging problems involving a spatially incoherent primary source and either critical or Köhler illumination of the object. Our numerical frequency-domain implementation involves the evaluation of a finite set of 2D integrals using the Fast Fourier Transform algorithm. We apply the method first to the classical edge-imaging problem and then evaluate partially coherent bright-field and dark-field (BFI and DFI) images of a 2D resolution target. In particular we demonstrate the feasibility of the method when the incident field is nearly incoherent.

2. Theory

In our formulation we assume Köhler illumination of the object and a telecentric imaging system illustrated in Fig. 1, but the method can be readily extended to other cases. As usual, we assume that the imaging takes place within the paraxial aplanatic region of the system.


Fig. 1 Köhler illumination of an object O with condenser LC, followed by a telecentric imaging system consisting of lenses (or lens systems) L1 and L2 and an aperture A. Though marked here in the S plane, the symbols R1 and R2 represent the limiting spatial frequencies generated by the annular primary source in the condenser system. Correspondingly, R denotes the maximum spatial frequency passed by the circular aperture of the imaging system.


Let us define the CSD of a stationary field at a plane z = constant as W(ρ1,ρ2) = 〈E* (ρ1)E(ρ2)〉, where ρ = (x,y), E(ρ) is a single field realization, and the brackets denote ensemble averaging. If a thin object defined by a complex-amplitude transmittance t(ρ′) is transilluminated with a field described by W0(ρ1,ρ2), the CSD of the field behind the object is

$$ W_0'(\rho_1',\rho_2') = t^*(\rho_1')\,t(\rho_2')\,W_0(\rho_1',\rho_2'). \tag{1} $$

If the object is imaged by a system with coherent impulse response K(ρ,ρ′), the image-plane CSD takes the form

$$ W(\rho_1,\rho_2) = \iint W_0'(\rho_1',\rho_2')\,K^*(\rho_1,\rho_1')\,K(\rho_2,\rho_2')\,\mathrm{d}^2\rho_1'\,\mathrm{d}^2\rho_2'. \tag{2} $$

The necessary and sufficient condition for the realizability of a spatially partially coherent field is that its CSD can be expressed in the (genuine) form [18]

$$ W(\rho_1,\rho_2) = \int p(\bar\rho)\,H^*(\rho_1,\bar\rho)\,H(\rho_2,\bar\rho)\,\mathrm{d}^2\bar\rho, \tag{3} $$
where p(ρ¯) is a real and non-negative function and H(ρ,ρ¯) is an arbitrary kernel function [19]. If W0(ρ1,ρ2) is of this type, with kernel H0(ρ,ρ¯), Eq. (2) reduces to Eq. (3) with
$$ H(\rho,\bar\rho) = \int t(\rho')\,H_0(\rho',\bar\rho)\,K(\rho,\rho')\,\mathrm{d}^2\rho'. \tag{4} $$

The calculation of the image-plane spectral density, S(ρ) = W(ρ,ρ), now requires only 2D integrals: first the system response H(ρ,ρ¯) is evaluated for each ρ¯ using Eq. (4) and then Eq. (3) is applied to integrate over all values of ρ¯. However, this does not immediately imply that the resulting numerical algorithm would be computationally feasible.

Let us next suppose that the incident CSD has the particular mathematical form

$$ W_0(\rho_1',\rho_2') = \frac{1}{(2\pi)^2}\int T(\kappa)\,\exp(\mathrm{i}\,\Delta\rho'\cdot\kappa)\,\mathrm{d}^2\kappa, \tag{5} $$
where T(κ) is real and non-negative and Δρ′ = ρ′2 − ρ′1, which corresponds to a Schell-model source. Equation (5) is valid in both critical and Köhler illumination conditions (see [20]), and implies that the complex degree of spectral coherence
$$ \mu_0(\rho_1',\rho_2') = \frac{W_0(\rho_1',\rho_2')}{\sqrt{S_0(\rho_1')\,S_0(\rho_2')}} = \frac{\int T(\kappa)\,\exp(\mathrm{i}\,\Delta\rho'\cdot\kappa)\,\mathrm{d}^2\kappa}{\int T(\kappa)\,\mathrm{d}^2\kappa} \tag{6} $$
depends only on the coordinate difference Δρ′. The CSD given by Eq. (5) can also be written as
$$ W_0(\rho_1',\rho_2') = \int e_0^*(\rho_1'-\bar\rho)\,e_0(\rho_2'-\bar\rho)\,\mathrm{d}^2\bar\rho, \tag{7} $$
where
$$ e_0(\rho') = \frac{1}{(2\pi)^2}\int \sqrt{T(\kappa)}\,\exp(\mathrm{i}\,\rho'\cdot\kappa)\,\mathrm{d}^2\kappa \tag{8} $$
is a fully coherent field, which we call the elementary field associated with W0(ρ′1,ρ′2). We note that Eq. (7) represents a genuine CSD with p(ρ̄) = 1 and H0(ρ′,ρ̄) = e0(ρ′ − ρ̄).
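To make the construction concrete, the following 1D numerical sketch (our own illustration under stated assumptions, not part of the paper's 2D implementation) builds the elementary field as the Fourier transform of the square root of an assumed top-hat source spectrum T(κ) — the square root being required so that the superposition of shifted copies, Eq. (7), reproduces the Schell-model CSD of Eq. (5):

```python
import numpy as np

# 1D toy check of Eqs. (5), (7), (8): the elementary field is the
# (scaled) Fourier transform of sqrt(T), and an incoherent superposition
# of its shifted copies reproduces the Schell-model CSD.
a = 1.0                                   # half-width of the top-hat T(kappa)
kappa = np.linspace(-a, a, 801)           # support of T
dk = kappa[1] - kappa[0]
T = np.ones_like(kappa)                   # T(kappa) = 1 on |kappa| <= a

def e0(x):
    """Elementary field, 1D analogue of Eq. (8)."""
    phase = np.exp(1j * np.outer(x, kappa))
    return (np.sqrt(T) * phase).sum(axis=1) * dk / (2 * np.pi)

x1, x2 = -0.25, 0.25
xbar = np.arange(-200.0, 200.0, 0.1)      # shift grid; must cover e0's tails
dxb = 0.1

# Eq. (7): incoherent superposition of shifted elementary fields
W0_elem = (np.conj(e0(x1 - xbar)) * e0(x2 - xbar)).sum() * dxb

# Eq. (5): direct Fourier transform of T
W0_direct = (T * np.exp(1j * kappa * (x2 - x1))).sum() * dk / (2 * np.pi)
print(W0_elem.real, W0_direct.real)       # the two values should agree closely
```

Tightening the grids drives the two numbers together; in 2D the same consistency check applies, with the Fourier transforms evaluated by FFT.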

Considering an isoplanatic region of an imaging system, such that K(ρ,ρ′) = K(ρ − ρ′), the image-plane CSD is

$$ W(\rho_1,\rho_2) = \int e^*(\rho_1-\bar\rho)\,e(\rho_2-\bar\rho)\,\mathrm{d}^2\bar\rho, \tag{9} $$
where
$$ e(\rho-\bar\rho) = \int t(\rho')\,e_0(\rho'-\bar\rho)\,K(\rho-\rho')\,\mathrm{d}^2\rho'. \tag{10} $$

The spectral density of the field, given by

$$ S(\rho) = W(\rho,\rho) = \int \left| e(\rho-\bar\rho) \right|^2 \mathrm{d}^2\bar\rho, \tag{11} $$
can now be computed by scanning e0(ρ′) across the object, calculating the resulting ‘elementary-field response’ e(ρ − ρ̄) for each ρ̄ by means of Eq. (10), and using Eq. (11) to construct the full partially coherent diffraction image. The expressions given above represent a reformulation of the Hopkins model [1]. For example, Eq. (3) in [21] may be interpreted as a critical-illumination equivalent of our Eq. (11), although the former does not involve the elementary-field representation explicitly.
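The scan-and-superpose procedure can be sketched in 1D as follows. This is our own minimal illustration under assumed ingredients (a Gaussian elementary spot and an ideal low-pass amplitude point-spread function K(x) = sin(Rx)/(πx); the paper's annular-source e0 and 2D geometry are not used), applied to a straight-edge object via Eqs. (10) and (11):

```python
import numpy as np

# 1D sketch of the elementary-field scan, Eqs. (10)-(11): shift e0 across
# the object, image each shifted field coherently, and add intensities.
R = 1.0
xo = np.arange(-30.0, 30.0, 0.1)          # object-plane grid (rho')
dxo = 0.1
x = np.linspace(-10.0, 10.0, 81)          # image-plane grid (rho)
xbar = np.arange(-20.0, 20.0, 0.5)        # elementary-field centres (rho-bar)
dxb = 0.5

t = (xo >= 0).astype(float)               # straight-edge object
# amplitude PSF K(u) = sin(R u)/(pi u); np.sinc(z) = sin(pi z)/(pi z)
K = np.sinc(R * (x[:, None] - xo[None, :]) / np.pi) * R / np.pi
E0 = np.exp(-0.5 * (xo[:, None] - xbar[None, :]) ** 2)   # e0(x' - xbar)

# Eq. (10): one coherent image per shifted elementary field
e = K @ (t[:, None] * E0) * dxo           # shape: (image points, shifts)

# Eq. (11): incoherent superposition over all shifts
S = (np.abs(e) ** 2).sum(axis=1) * dxb
print(S[0], S[-1])                        # dark side nearly zero, bright side finite
```

The columns of `e` are mutually uncorrelated coherent images, so the intensity sum in the last step is the discrete version of Eq. (11).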

Since Eq. (10) has the form of a convolution, we can apply the frequency analysis of coherent systems [22] to evaluate it. To this end we define the functions

$$ g(\xi,\bar\rho) = \int t(\rho')\,e_0(\rho'-\bar\rho)\,\exp(-\mathrm{i}\,\xi\cdot\rho')\,\mathrm{d}^2\rho' \tag{12} $$
and
$$ P(\xi) = \int K(\rho)\,\exp(-\mathrm{i}\,\xi\cdot\rho)\,\mathrm{d}^2\rho. \tag{13} $$

The pupil function P(ξ) = |P(ξ)|exp[ik0w(ξ)] accounts, as usual, for truncation and apodization of the field at the exit pupil, and w(ξ) contains the wave aberrations of the system. With this notation the elementary-field response is

$$ e(\rho-\bar\rho) = \frac{1}{(2\pi)^2}\int P(\xi)\,g(\xi,\bar\rho)\,\exp(\mathrm{i}\,\xi\cdot\rho)\,\mathrm{d}^2\xi. \tag{14} $$
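A minimal 1D sketch of this frequency-domain evaluation for a single shifted elementary field (again with an assumed Gaussian e0 and edge object, which are our own illustrative choices), cross-checked against direct convolution with the corresponding amplitude point-spread function K(u) = sin(Ru)/(πu):

```python
import numpy as np

# Frequency-domain evaluation of one elementary-field response,
# Eqs. (12)-(14), in 1D, with a hard pupil |xi| <= R.
R = 1.0
N, dx = 4096, 0.1
x = (np.arange(N) - N // 2) * dx          # object/image grid
xi = 2 * np.pi * np.fft.fftfreq(N, d=dx)  # spatial frequencies

xbar = 2.0                                # centre of this elementary field
u = (x >= 0) * np.exp(-0.5 * (x - xbar) ** 2)   # t(x) * e0(x - xbar)

# Eqs. (12)-(13): g = FFT of the transmitted field, P = pupil indicator
g = np.fft.fft(u)
P = (np.abs(xi) <= R).astype(float)

# Eq. (14): inverse transform of the pupil-filtered spectrum
e_fft = np.fft.ifft(P * g)

# Cross-check against direct convolution with K(u) = sin(Ru)/(pi u)
x_eval = np.linspace(-5.0, 5.0, 21)
Kmat = np.sinc(R * (x_eval[:, None] - x[None, :]) / np.pi) * R / np.pi
e_direct = Kmat @ u * dx
e_fft_eval = np.interp(x_eval, x, e_fft.real)
print(np.max(np.abs(e_fft_eval - e_direct)))    # small discretization error
```

Because the pupil only truncates the spectrum (|P| ≤ 1), the filtered field never carries more energy than the transmitted field, which is a useful sanity check on the FFT normalization.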

We proceed to numerical demonstrations of the technique described above by applying it to both BFI and DFI. We assume an annular incoherent primary source with T(κ) = T0 when R1 ≤ |κ| ≤ R2 and zero otherwise. The CSD is then obtained from Eq. (5), in accordance with the van Cittert–Zernike theorem [2]. According to Eq. (6),

$$ \mu_0(\Delta\rho') = \frac{2\left[R_2\,J_1(R_2|\Delta\rho'|) - R_1\,J_1(R_1|\Delta\rho'|)\right]}{(R_2^2 - R_1^2)\,|\Delta\rho'|}, \tag{15} $$
where J1 is the Bessel function of the first kind and order one. In view of Eq. (8), the incident elementary field is now
$$ e_0(\rho') = \frac{2\left[R_2\,J_1(R_2|\rho'|) - R_1\,J_1(R_1|\rho'|)\right]}{(R_2^2 - R_1^2)\,|\rho'|}. \tag{16} $$

A circular incoherent source, typically used in BFI, is obtained by choosing R1 = 0, and dark-field imaging conditions can be modeled by choosing R1 ≥ R, where R defines the pupil of the optical system in normalized coordinates. The pupil function is assumed to be circular, with P(ξ) = 1 when 0 ≤ |ξ| ≤ R and zero otherwise. In what follows, we normalize all quantities to R = k0NA, where k0 is the vacuum wave number and NA is the numerical aperture of the imaging system.
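Equations (15) and (16) are straightforward to evaluate numerically. The sketch below is NumPy-only, computing J1 from its integral representation (scipy.special.j1 would serve equally well); it checks two expected limits for a circular source (R1 = 0): unit degree of coherence at the origin, and a first zero at the Airy radius 3.832/R2:

```python
import numpy as np

# Evaluate Eq. (15) (Eq. (16) has the same functional form) for an
# annular source. J1 is computed from the integral representation
# J1(z) = (1/pi) * int_0^pi cos(theta - z*sin(theta)) dtheta.
def j1(z):
    z = np.atleast_1d(np.asarray(z, dtype=float))
    theta = (np.arange(2000) + 0.5) * np.pi / 2000       # midpoint rule
    return np.cos(theta[None, :] - np.outer(z, np.sin(theta))).mean(axis=1)

def mu0(d, R1, R2):
    """Degree of coherence, Eq. (15); d = |Delta rho'| > 0."""
    d = np.asarray(d, dtype=float)
    return 2 * (R2 * j1(R2 * d) - R1 * j1(R1 * d)) / ((R2**2 - R1**2) * d)

R1, R2 = 0.0, 1.0                 # circular source (BFI limit)
print(mu0(1e-6, R1, R2))          # approx. 1 near the origin
print(mu0(3.8317, R1, R2))        # approx. 0: first zero (Airy radius)
```

For the dark-field case (R1 ≥ R, thin ring in the limit R2 → R1), the same formula shows how slowly e0 decays, which is why the elementary-field window must be enlarged in DFI.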

The sampling distances in the two lateral directions are chosen on the basis of the properties of the illumination and the object. To obtain adequate convergence, the sampling distance of the object is taken as r/10, where r = 0.61/R is the resolution limit of the imaging system. The sampling of the incident elementary field e0(ρ′) must be dense enough that several sampling points fall between |ρ′| = 0 and |ρ′| = σ/R, where σ (a dimensionless number) indicates the first zero of e0(ρ′), which, in view of Eq. (15), can be associated with the transverse coherence length. In our examples the sampling distance of the elementary field was also taken to be r/10; this simplifies the simulation but is not obligatory, since one may always apply interpolation. We then scan the incident elementary field e0(ρ′ − ρ̄) with a discrete set of points ρ̄ across the object t(ρ′) to obtain the transmitted elementary fields. Each of these fields is propagated through the system by the FFT algorithm to find the set of image-plane elementary fields e(ρ − ρ̄), which are superposed incoherently by the discrete version of Eq. (11). Since the elementary fields are uncorrelated, the algorithm is easily parallelized.

Because the FFT algorithm is used, the computation time of our method is proportional to M²N² log N², where M² is the total number of elementary fields needed to represent the CSD correctly and N² is the number of sampling points required to cover the spatial region where the incident elementary field differs significantly from zero. In BFI we have, roughly, M ∝ 1 and N ∝ σ, so there is no significant dependence of the computational effort on the degree of spatial coherence. In DFI the effective extent of the elementary field is larger than in BFI (the extreme case being R2 → R1, which represents a thin primary-source ring), so in DFI the elementary field must be sampled over an area several times larger than in BFI.

3. Examples

As a first example we consider the problem of bright-field imaging of a straight edge [23, 24] with a circular primary source and pupil, treated semi-analytically in [25]. In Fig. 2, the black line shows the ideal geometrical image of the edge and the other curves illustrate diffraction images under different coherence conditions. The red curve with R2 = 0.1R is practically indistinguishable from the fully coherent case and the green curve (R2 = R) represents the case in which the image of the incoherent primary source matches the pupil.


Fig. 2 Edge imaging in partially coherent illumination.


Tables 1 and 2 illustrate the convergence of our numerical model. Here we consider the value of the image intensity at the geometrical image point of the edge (the point Rx = 0 in Fig. 2). Both the distance between adjacent elementary fields and the size of the elementary-field window affect convergence; the former is characterized by a parameter Δx and the latter by a parameter L. The results converge when Δx is reduced and L is increased. Table 1 shows convergence results when the density of the elementary fields is increased and the elementary-field window is chosen large enough (L = 4), while Table 2 demonstrates the convergence when the density of the elementary fields is high (Δx = 0.1) and the elementary-field window is enlarged. Irrespective of the degree of coherence of the illumination, a value Δx = 0.1 is seen to be adequate. Because of the oscillatory nature of the elementary-field modes, the results fluctuate somewhat when L is increased, but L = 4 is sufficient in bright-field imaging for most practical purposes.


Table 1. Convergence of the method when the distance (characterized by Δx, a dimensionless number) between the adjacent shifted elementary fields is reduced.


Table 2. Convergence of the method when the size (characterized by L, a dimensionless number) of the elementary-field window is increased.

We proceed to apply our approach to partially coherent imaging of the 2D resolution target illustrated in Fig. 3. The transverse scale of the object is expressed in dimensionless units Rx and Ry. In the BFI case, the white areas of the object have a complex-amplitude transmittance t = 1 and the black areas are opaque (t = 0). In DFI, we assume a low-contrast object with t = 0.99exp(iπ/4) and t = 1 for the white and black regions, respectively.


Fig. 3 The 2D resolution target used as the object in numerical simulations. The periods of the three four-bar gratings are r, 2r, and 5r, respectively, where r = 0.61/R is the resolution limit of the setup. The window (with the frame) contains 156 × 573 sampling points separated by a distance r/10.


Figures 4 and 5 illustrate the effect of varying the degree of spatial coherence in the imagery of the resolution target in BFI and DFI conditions, respectively. In BFI the resolution is best with the least coherent illumination, and strong fluctuations emerge when the coherence is high. In DFI we see the (expected) edge-enhancement effect particularly well in the largest object features. Table 3 summarizes the simulation parameters together with the required computation times.


Fig. 4 Bright-field imaging (R1 = 0). Top: R2 = 1.5R. Middle: R2 = 0.7R. Bottom: R2 = 0.1R. Visualization 1 demonstrates the diffraction image when R2 increases from 0.1R to 2R.



Fig. 5 Dark-field imaging (R1 = R). Top: R2 = 2R. Middle: R2 = 1.6R. Bottom: R2 = 1.1R. Visualization 2 shows the image when R1 = R and R2 varies from R to 2R.



Table 3. Simulation parameters for Figs. 4 and 5. Size is the number N² of sampling points of the elementary field, Sep. is the separation of their center points (the same in the x and y directions) in pixel units 0.61/R, Num. is the total number of elementary fields, and Time is the simulation time on a desktop computer (Intel Core i7-2600K processor).

The combined effect of spherical aberration and defocus, defined by a wave aberration function

$$ w(\xi) = a\,(|\xi|/R)^2 + b\,(|\xi|/R)^4, \tag{17} $$
is illustrated in Figs. 6 and 7 for BFI and DFI, respectively. In these examples b = 1, i.e., we assume one wave of pure spherical aberration at the edge of the aperture. The top views represent images in the geometrical focal plane (pure spherical aberration); the improvement in image quality obtained by balancing spherical aberration with defocus is clearly visible in the middle and bottom views. The simulation parameters are identical to those for the corresponding values of R1 and R2 in Table 3. The effect of aberrations on the imaging process has been analyzed in [26], and simple results for the 1D case that include only paraxial defocus can be found in the work of Subramanian [27].
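The balancing of spherical aberration by defocus can be illustrated with a small calculation of our own (minimum-RMS wavefront is one standard balancing criterion and need not coincide exactly with best perceived image quality): for b = 1, the RMS of w over the circular pupil, piston removed, is minimized at a = −1:

```python
import numpy as np

# Wave aberration of Eq. (17) over a circular pupil, b = 1. Scan the
# defocus coefficient a and report the RMS wavefront (piston removed).
def rms_wavefront(a, b=1.0, n=400):
    y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]   # pupil coordinates |xi|/R
    r2 = x**2 + y**2
    inside = r2 <= 1.0                          # circular pupil
    w = a * r2[inside] + b * r2[inside]**2      # Eq. (17)
    return np.sqrt(np.mean((w - w.mean())**2))

for a in (0.0, -0.75, -1.0, -1.25):
    print(a, rms_wavefront(a))                  # minimum near a = -1
```

Since (|ξ|/R)² is uniformly distributed over a filled circular pupil, the minimum can also be found analytically: the variance of au + u² with u ~ U[0,1] is minimized at a = −1, consistent with the bottom panels of Fig. 6.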

 figure: Fig. 6

Fig. 6 Bright-field imaging in the presence of spherical aberration and defocus (R1 =0 and R2 = 0.7R). Top: a = 0. Middle: a = −0.75. Bottom: a = −1. Visualization 3 demonstrates the effect of changing the value of a from zero to −2.

Download Full Size | PDF

 figure: Fig. 7

Fig. 7 Dark-field imaging in the presence of spherical aberration and defocus (R1 = R and R2 = 1.6R). Top: a = 0. Middle: a = 0.5. Bottom: a = 1. Visualization 4 shows the effect of changing a from zero to 2.

Download Full Size | PDF

4. Conclusions

In conclusion, we have approached the problem of partially coherent imaging by means of an elementary-field decomposition of the incident field. Here we have assumed an incoherent primary source at the input plane of the condenser. However, spatially partially coherent primary sources can be treated by the elementary-field approach without added difficulty, provided that the source CSD obeys the Schell model [14–17]. In order to bring out the essential concepts in simple terms, we presented the method in the domain of Fourier optics. Extensions to non-paraxial aplanatic systems (those obeying the sine condition) are straightforward: certain well-known geometrical apodization factors arise, which depend on the conjugate ratio [28]. In the non-paraxial domain the scalar treatment of the elementary illumination field must also be replaced by its electromagnetic counterpart [17]. In our paraxial-domain analysis the results were presented in dimensionless coordinates (xR, yR). However, when features of the object structure are on the wavelength scale, the imagery depends on the wavelength, and the transmission-function approach implied by Eq. (1) must be replaced by rigorous electromagnetic diffraction analysis. In this domain we would first evaluate the scattering matrix of the object using, e.g., the Fourier Modal Method [29]. Once this is done, the elementary-field response at each center point ρ̄ can be readily evaluated in the plane-wave basis, and the rest of the imaging analysis proceeds as described above.

Acknowledgments

The work was supported by the Academy of Finland (252910) and the strategic funding of the University of Eastern Finland.

References and links

1. H. H. Hopkins, “The concept of partial coherence in optics,” Proc. R. Soc. Lond. A 208, 263–277 (1951). [CrossRef]  

2. J. W. Goodman, Statistical Optics (Wiley, 2000).

3. E. C. Kintner, “Method for the calculation of partially coherent imagery,” Appl. Opt. 17, 2747–2753 (1978). [CrossRef]   [PubMed]  

4. J. van der Gracht, “Simulation of partially coherent imaging by outer-product expansion,” Appl. Opt. 33, 3725–3731 (1994). [CrossRef]  

5. K. Yamazoe, “Computation theory of partially coherent imaging by stacked pupil shift matrix,” J. Opt. Soc. Am. A 25, 3111–3119 (2008). [CrossRef]  

6. S. B. Mehta and C. J. R. Sheppard, “Phase-space representation of partially coherent imaging using the Cohen class distribution,” Opt. Lett. 35, 348–350 (2010). [CrossRef]   [PubMed]  

7. K. Yamazoe, “Two models for partially coherent imaging,” J. Opt. Soc. Am. A 29, 2591–2597 (2012). [CrossRef]  

8. A. E. Rosenbluth, S. J. Bukofsky, M. Hibbs, K. Lai, R. N. Singh, and A. K. K. Wong, “Optimum mask and source patterns to print a given shape,” J. Micro/Nanolith., Microfab., Microsyst. 1, 13–30 (2002).

9. R. L. Gordon and A. E. Rosenbluth, “Lithographic Image Simulation for the 21st Century with 19th-Century Tools,” Proc. SPIE 5182, 73–87 (2003). [CrossRef]  

10. R. L. Gordon, “Exact Computation of Scalar, 2D Aerial Imagery,” Proc. SPIE 4692, 517–531 (2002). [CrossRef]  

11. E. Wolf, “New theory of partial coherence in the space-frequency domain. Part I: Spectra and cross-spectra of steady-state sources,” J. Opt. Soc. Am. 72, 343–351 (1982). [CrossRef]  

12. B. E. A. Saleh and M. Rabbani, “Simulation of partially coherent imagery in the space and frequency domains and by modal expansion,” Appl. Opt. 21, 2770–2777 (1982). [CrossRef]   [PubMed]  

13. A. S. Ostrovsky, O. Ramos-Romero, and M. V. Rodríguez-Solís, “Coherent-mode representation of partially coherent imagery,” Opt. Rev. 3, 492–496 (1996). [CrossRef]  

14. A. Burvall, A. Smith, and C. Dainty, “Elementary functions: propagation of partially coherent light,” J. Opt. Soc. Am. A 26, 1721–1729 (2009). [CrossRef]  

15. F. Gori and C. Palma, “Partially coherent sources which give rise to highly directional laser beams,” Opt. Commun. 27, 185–188 (1978). [CrossRef]  

16. P. Vahimaa and J. Turunen, “Finite-elementary-source model for partially coherent radiation,” Opt. Express 14, 1376–1381 (2006). [CrossRef]   [PubMed]  

17. J. Tervo, J. Turunen, P. Vahimaa, and F. Wyrowski, “Shifted-elementary-mode representation for partially coherent vectorial fields,” J. Opt. Soc. Am. A 27, 2004–2014 (2010). [CrossRef]  

18. F. Gori and M. Santarsiero, “Devising genuine spatial correlation functions,” Opt. Lett. 32, 3531–3533 (2007). [CrossRef]   [PubMed]  

19. R. Martínez-Herrero, P. M. Mejías, and F. Gori, “Genuine cross-spectral densities and pseudo-modal expansions,” Opt. Lett. 34, 1399–1401 (2009). [CrossRef]   [PubMed]  

20. M. Born and E. Wolf, Principles of Optics, 7th ed. (Cambridge University, 1999), Sect. 10.6.2. [CrossRef]  

21. C. J. R. Sheppard and T. Wilson, “The theory of the direct-view confocal microscope,” J. Microsc. 124, 107–117 (1981). [CrossRef]   [PubMed]  

22. J. W. Goodman, Fourier Optics, 3rd ed. (Roberts & Company, 2005), Chap. 6.

23. R. E. Kinzly, “Images of coherently illuminated edged objects formed by scanning optical systems,” J. Opt. Soc. Am. 56, 9–11 (1966). [CrossRef]  

24. B. Möller, “Imaging of a straight edge in the partially coherent illumination in the presence of spherical aberration,” Opt. Acta 15, 223–236 (1968). [CrossRef]  

25. B. M. Watrasiewicz, “Theoretical calculations of images of straight edges in partially coherent illumination,” Opt. Acta 12, 391–400 (1965). [CrossRef]  

26. D. S. Goodman and A. E. Rosenbluth, “Condenser aberrations in Köhler illumination,” Proc. SPIE 922, 108–134 (1988). [CrossRef]  

27. S. Subramanian, “Rapid calculation of defocused partially coherent images,” Appl. Opt. 20, 1854–1857 (1981). [CrossRef]   [PubMed]  

28. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University, 2006). [CrossRef]  

29. H. Kim, J. Park, and B. Lee, Fourier Modal Method and Its Applications in Computational Nanophotonics (CRC Press, 2012).

Supplementary Material (4)

Name | Description
Visualization 1: MPEG (288 KB) | Demonstrates the diffraction image when R2 increases from 0.1R to 2R.
Visualization 2: MPEG (209 KB) | Shows the image when R1 = R and R2 varies from R to 2R.
Visualization 3: MPEG (191 KB) | Demonstrates the effect of changing the value of a from zero to −2.
Visualization 4: MPEG (333 KB) | Shows the effect of changing a from zero to 2.
