
Beam drift and partial probe coherence effects in EUV reflective-mode coherent diffractive imaging

Abstract

While the industrial implementation of extreme ultraviolet lithography for upcoming technology nodes is becoming ever more realistic, a number of challenges have yet to be overcome. Among them is the need for actinic mask inspection. We report on reflective-mode lensless imaging of a patterned multi-layer mask sample at extreme ultraviolet wavelength that provides a finely structured defect map of the sample under test. Here, we present the imaging results obtained using ptychography in reflection mode at 6° angle of incidence from the surface normal and 13.5 nm wavelength. Moreover, an extended version of the difference map algorithm is employed that substantially enhances the reconstruction quality by taking into account both long and short-term variations of the incident illumination.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Over the past decades, considerable effort has been spent on the development of extreme ultraviolet (EUV) lithography to make the transition from deep ultraviolet lithography in upcoming technology nodes [1] and it is now believed that the technology will be ready for the 7 nm node. In the meantime, however, a method for the reliable detection of mask defects remains a challenge [2, 3]. In this context, a defect is defined as any structure in the fabricated mask that will lead to a fault when copied to the wafer. Existing metrology tools, such as scanning electron microscopy, are reliable and well established, but because they use electrons or photons at wavelengths different from the EUV design wavelength, the aerial image will differ from that of the scanner and could include non-printable defects while missing others that could lead to device failure [4]. Since EUV radiation is absorbed by virtually all matter (except at grazing incidence, where total external reflection can occur), EUV masks are equipped with a multilayer mirror to maximize their reflectivity. Phase defects resulting from bumps and pits in the substrate below the multilayer or from faulty layers within the multilayer mirror itself pose a major challenge as they are difficult to detect and repair [5]. To avoid printing defects to the exposed wafers and to maximize yield, EUV metrology methods are essential.

By using coherent EUV radiation to illuminate the mask under test, defect inspection can be carried out with a lensless method, such as coherent diffraction imaging (CDI). This method applies iterative algorithms to reconstruct the sample image by solving the phase problem using a constrained solution space [6]. In its original form, CDI was considered impractical for experimental use because it was only applicable to heavily constrained samples. These constraints can be substantially relaxed when the sample (object) is scanned using a finite illumination (probe) while partially overlapping every position with the previous one to gather redundant data. Together with recent advances in algorithms, this form of scanning CDI called ptychography has rapidly gained popularity as it provides a stable, yet simple technique for lensless imaging ranging from hard X-rays to visible light and electrons [7–10]. The method is exploited by several research groups for EUV mask imaging to overcome the resolution limitations of lens-based actinic microscopes and to develop more effective or low-cost actinic imaging methods [11–14]. The excellent contrast derived by ptychography can also be used to render height-maps of reticles [15]. Recently, sub-wavelength resolution has been achieved for 13.5 nm wavelength incident illumination in a transmission experiment [16]. This was accomplished by using prior information of the illuminating wavefront through the measurement of the far-field amplitude distribution, which was added as an additional constraint during the reconstruction.

The reconstruction of EUV masks with a reflective-mode setup at 6° angle of incidence using ptychography has been successfully demonstrated [17]. Without the presence of a monochromator, a Zr filter was used to limit out-of-band radiation, which led to an exposure time of 5 s per image. A clear advantage of including the incident illumination in the iterative reconstruction could be shown [18]. With the same setup, but using a high-harmonic light source, the exposure time could be reduced to 300 ms per image [19]. This was used to successfully apply ptychography to the reconstruction of a 10 µm cross pattern with 2 µm wide bars as well as an 88 nm half-pitch grating.

We have recently presented defect maps, both from die-to-die and die-to-database comparison, showing a 50 × 200 nm defect with a high signal-to-noise ratio [20], underlining the feasibility of using ptychography with the difference map algorithm adapted to an EUV reflective-mode setup for reliable EUV defect inspection. In this paper, we discuss enhancements to the ptychographic reconstruction algorithm that substantially increase the sensitivity and resolution of our proposed imaging and inspection method.

2. Experimental setup

The measurement setup was installed at the SIM beamline of the Swiss Light Source [21]. Two 3.8 m Apple II type undulators provide photons with an energy range from 90 eV to approximately 2 keV with variable polarization. For our experiment, the beamline energy was set to 92 eV (13.5 nm wavelength) and linear polarization. The beam is then further monochromatized using a plane grating monochromator to provide a coherent beam of EUV light (λ = 13.5 nm) with a monochromaticity of λ/Δλ ≃ 2000 for the entry and exit slit settings used. The beam can be further monochromatized by adjusting the slit openings, albeit with a loss of intensity, cf. Fig. 1(a). In Fig. 1(b), a schematic diagram of the experimental chamber is shown. The chamber is placed in the divergent beam, 3300 mm downstream of the beamline intermediate focus position. To increase spatial coherence and clean the beam, a 100 µm pinhole was placed at the intermediate focus position as described in [22] and the beam passes through a 1 mm diameter Si3N4 membrane with a thickness of 100 nm before entering the chamber from the left. The flux was 5 × 10¹¹ photons/s with a spatial coherence length of about 140 µm and a temporal coherence length of 27 µm.
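For reference, the quoted temporal coherence length follows directly from the stated monochromaticity via the usual longitudinal-coherence estimate (a short worked relation, not taken from the original text):

\ell_c \approx \frac{\lambda^2}{\Delta\lambda} = \lambda \cdot \frac{\lambda}{\Delta\lambda} \approx 13.5\ \mathrm{nm} \times 2000 = 27\ \mu\mathrm{m}.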

Fig. 1 (a) Simplified schematic setup of the SIM beamline and end-station (see Ref. [21]). The energy of 92 eV is preselected by adjusting the undulator gap and further refined via a plane grating monochromator. (b) End-station optical setup. The beam is focused by a spherical multilayer mirror (M1) and then projected onto the sample by a flat mirror (M2) at 6° angle of incidence. The resulting diffraction pattern is recorded on the CCD camera.

The beam is then focused and reflected on the two Mo/Si multilayer mirrors M1 and M2, mounted at 4° and 37° from the beam axis, respectively. This leads to an angle of incidence of 6° from the sample normal, which is the same as used by the industrial exposure tool. M1 has a spherical surface with a radius of curvature equal to 220 mm, focusing the beam on the sample after reflection from M2 which has a flat surface and was cut into a crescent moon shape to allow the free passage of the diffracted beam from the sample to the detector. The sample was mounted on a piezo-electric 2D stage with a range of 200 µm in each direction.

The reflected beam from the sample was captured by a 2048 × 2048 pixel charge-coupled device (CCD) camera (Princeton Instruments, PI-MTE2048B) cooled to −40 °C. The dynamic range of the CCD for EUV is determined by the 16 bit electronics and the fact that each EUV photon creates approximately 25 electrons. This results in a dynamic range of ≈2600, which is too low to reach the maximum possible resolution. To achieve a high dynamic range, each position was imaged 3 times with exposure times of 65, 500 and 10000 ms, while ensuring that the image with the shortest exposure time contains no saturated pixels, as these would lead to severe artifacts in the reconstructed image. Prior to reconstruction, the corresponding detector background noise was subtracted from each image and the resulting image normalized to an exposure time of 1 s. The normalized images were then combined, starting from the one with the longest exposure time and successively replacing the saturated parts with data from the image with the next shorter exposure time. The number of saturated pixels does not correspond directly to the number of photons gathered, as charge bleeds into neighboring pixels during readout due to register spills. For this reason, the saturated areas are passed through an erosion filter before replacement: replacing saturated pixels without this step leaves a halo around the diffraction patterns, which can in turn lead to severe artifacts in the reconstruction.
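The exposure-combination procedure can be summarized in a short sketch. The following Python/NumPy illustration of the steps described above is a minimal sketch, not the production code; the array names, saturation threshold, and erosion margin are assumptions:

import numpy as np
from scipy.ndimage import binary_erosion

def combine_exposures(frames, darks, exposures_s, sat_level=60000, erode_px=3):
    """Build one high-dynamic-range diffraction pattern from several exposures.

    frames      : list of raw CCD images, longest exposure first
    darks       : matching detector background frames
    exposures_s : exposure times in seconds
    sat_level   : ADU value above which a pixel is treated as saturated (assumed)
    erode_px    : margin around saturated regions that is also replaced,
                  accounting for charge bleeding into neighbouring pixels
    """
    hdr, filled = None, None
    for raw, dark, t in zip(frames, darks, exposures_s):
        img = (raw.astype(float) - dark) / t                 # background-subtract, normalize to 1 s
        valid = raw < sat_level                              # unsaturated pixels of this exposure
        valid = binary_erosion(valid, iterations=erode_px)   # shrink valid area = grow saturated area
        if hdr is None:
            hdr, filled = img.copy(), valid.copy()
        else:
            replace = ~filled & valid                        # pixels still unfilled by longer exposures
            hdr[replace] = img[replace]
            filled |= valid
    return hdr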

Since this combination of multiple images relies on a normalized exposure time, it is important that the shutter rise-time is minimized so that the defined exposure times are close to the real ones. For this experiment, a fast piezo shutter (DSM, XRS1-900) was installed at the beamline. The rise-time depends on the gap between the two shutter blades when opened, the overlap when closed, and the applied voltage. Opening the shutter too rapidly leads to strong oscillations due to the nature of the piezoelectric amplification. We set the rise-time to 4 ms, as confirmed by measurements with a digital oscilloscope, which is sufficiently longer than the minimum possible rise-time of 1.5 ms to suppress oscillations. In this experiment, the rise-time was neglected during the creation of a high dynamic range image, but in previous experiments, where a shutter with a rise-time of 20 ms was used, it had to be taken into account when normalizing to an exposure time of 1 s [23].

The reticle test sample used for the data presented here was designed to include a range of different structures, both periodic and aperiodic, to test the performance of the reconstruction algorithm, cf. Fig. 2. It was fabricated in-house by spin-coating hydrogen silsesquioxane (HSQ) resist with a thickness of ≈60 nm onto a Si wafer coated with a Mo/Si multilayer. The HSQ absorber structures were then patterned by electron beam lithography and consist of several gratings with programmed defects, a Siemens star, L-shapes, square patterns, and several types of arbitrary structures. The four gratings shown have a half-pitch (hp) of 1 µm, 500, 250, and 100 nm from bottom to top and right to left, respectively. It should be noted that the sample is not an industrial grade EUV mask and exhibits poor contrast due to the high transparency of the HSQ resist at the EUV wavelength. This makes the reconstructions more challenging than with a state-of-the-art EUV photomask.

Fig. 2 (a) SEM image of the patterned mask. (b) Layout of the mask pattern according to the design file that was created in the GDSII image format in order to facilitate the subsequent electron beam exposure. The scale bars in both images correspond to 5 µm.

3. Reconstruction method

The object (patterned mask) is reconstructed from the CCD diffraction images employing a version of the difference map (DM) algorithm adapted for ptychography [8,9]. Ideally, the DM algorithm converges to a unique solution provided that the illuminating probe is both fully coherent at a given scan position and highly stable (i.e., without mechanical drifts) across the complete scan consisting of multiple positions. In practice, however, these conditions are difficult to establish and maintain experimentally. Coherence loss is caused by effects such as finite bandwidth, the point-spread function of the detector, and vibrations of the illuminating spot or the sample. The recorded diffraction patterns at each scan position are composed of incoherent sums of orthogonal coherent states and, due to long term drifts, the shape of the probe may vary through the course of the experiment and cannot be taken as fixed for all scan positions. Thus, the image reconstruction quality depends on probe variations occurring at different time scales:

  1. Probe coherence loss at short time-scales, τ < T
  2. Probe variations at long time-scales, τ > T

Here, T is the exposure time over which the diffraction images are captured and τ is the time-scale over which variations occur. Advanced ptychographic reconstruction algorithms are used to deconvolve the effects of probe coherence loss and allow the introduction of multiple independent probes to mitigate the long term illumination drift. Here, we briefly describe the original difference map algorithm and the additions to the algorithm used to mitigate these effects.

3.1. The difference map algorithm

The original DM algorithm assumes a fully coherent illumination of the object o(r) with a finite beam – the probe p(r). Central to this method is the concept of exit waves (or views) defined as the product of the probe and the object’s reflection function

\phi_j(r) = p(r - r_j)\, o(r),
where rj denotes the relative displacement of the probe from the object center r for the current scan position. Assuming no prior knowledge of the probe and the object functions, the challenge here is to retrieve both of them by only measuring the intensities Ij(q) of the far-field diffraction images. The phase is lost during the measurement and therefore must be recovered iteratively. The DM algorithm solves this inverse problem by recursively constraining the solution set Ω of views {ϕj} into a converging sub-set by iterative projections Π onto Ω. Here, Ω is a state vector in a multi-dimensional Euclidean space. The iteration consists of a projection in reciprocal space and one in real space,
  1. The Fourier projection ΠF
  2. The overlap projection ΠO.

ΠF ensures that at every iterative step, the exit waves comply with their far-field intensity distribution which is measured as the diffraction images. This amounts to a projection

\Pi_F : \phi_j \mapsto \phi_j^F,
where
\phi_j^F(r) = \mathcal{F}^{-1}\left[\sqrt{I_j(q)}\,\frac{\psi_j(q)}{|\psi_j(q)|}\right].

Here, ψj(q) = ℱ[φj(r)] and ℱ, ℱ−1 stand for the Fourier and inverse Fourier transform operators, respectively. ΠO exploits the data redundancy of ptychographic measurements with overlapping probes. The Fourier projected view ϕjF(r) is then subjected to the overlap projection

\Pi_O : \phi_j^F \mapsto \phi_j^O.

The projections of the views are evaluated imposing the condition that all the Fourier projected views also comply with the overlap constraint. This amounts to representing all the views {ϕjF} generated using ΠF by a single intermediate probe function p¯(r) and intermediate object o¯(r), such that for every scan position j:

\phi_j^O(r) = \bar{p}(r - r_j)\,\bar{o}(r).

The conditions under which the exit views can be modeled as the product of p̄ and ō are detailed in refs. [8] and [24]. ΠO ensures that, at every step of the iteration, the views generated by the overlap projection are self-consistent, sharing the same probe while sampling different regions of the object. The new state vector Ω̄ = {p̄(r − rj) ō(r)} is determined by minimizing the Euclidean distance d(Ω̄, Ω) = ‖Ω̄ − Ω‖, where Ω = {ϕjF} is the state vector generated by ΠF. Therefore,

d(\bar{\Omega},\Omega) = \left[\sum_j \int \left|\phi_j^F(r) - \bar{p}(r - r_j)\,\bar{o}(r)\right|^2 \mathrm{d}r\right]^{1/2}
is parametrized in both p̄(r) and ō(r). Minimizing d(Ω̄, Ω) with respect to p̄(r) and ō(r) provides the decomposed probe and object functions for each iteration as a set of coupled equations which can be solved numerically:
\bar{o}(r) = \frac{\sum_j \bar{p}^*(r - r_j)\,\phi_j^F(r)}{\sum_j |\bar{p}(r - r_j)|^2},
\bar{p}(r) = \frac{\sum_j \bar{o}^*(r + r_j)\,\phi_j^F(r + r_j)}{\sum_j |\bar{o}(r + r_j)|^2}.
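For discretely sampled fields, these two coupled updates take a simple form. The following NumPy sketch accumulates each Fourier-projected view into the object and probe estimates; the variable names, array layout, and the small regularisation constant are assumptions of this illustration, not the production code:

import numpy as np

def overlap_update(views_F, positions, probe, obj_shape, eps=1e-8):
    """Evaluate the coupled object/probe equations once.

    views_F   : complex array (J, Py, Px), Fourier-projected exit waves phi_j^F
    positions : list of integer pixel offsets (y_j, x_j) of each scan position
    probe     : complex array (Py, Px), current probe estimate p_bar
    obj_shape : shape of the object array o_bar
    """
    Py, Px = probe.shape

    # object update: o_bar = sum_j p_bar* phi_j^F / sum_j |p_bar|^2
    num = np.zeros(obj_shape, dtype=complex)
    den = np.zeros(obj_shape, dtype=float)
    for v, (y, x) in zip(views_F, positions):
        num[y:y + Py, x:x + Px] += np.conj(probe) * v
        den[y:y + Py, x:x + Px] += np.abs(probe) ** 2
    obj = num / (den + eps)

    # probe update: p_bar = sum_j o_bar* phi_j^F / sum_j |o_bar|^2
    num = np.zeros((Py, Px), dtype=complex)
    den = np.zeros((Py, Px), dtype=float)
    for v, (y, x) in zip(views_F, positions):
        patch = obj[y:y + Py, x:x + Px]
        num += np.conj(patch) * v
        den += np.abs(patch) ** 2
    probe = num / (den + eps)
    return obj, probe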

The projections ΠF and ΠO are applied until p̄(r) and ō(r) converge (i.e., the iteration reaches a fixed point). The essential step is the generation of new exit waves using the double projection alternating map A = ΠOΠF:

\phi_j(r) \rightarrow \bar{\phi}_j(r) = A[\phi_j(r)] = \Pi_O\big[\Pi_F[\phi_j(r)]\big].

While the alternating map is the simplest projection scheme, it suffers from stagnation: the iteration can reach points where A[ϕj(r)] = ϕj(r) even though d(Ω̄, Ω) has only reached a non-zero local minimum. At such points, the projections cannot drive the system toward a true solution but remain trapped at this local minimum of the distance metric. To circumvent this and ensure convergence without getting trapped in a local minimum, the difference map algorithm uses a mapping construct D[ϕj(r)] defined as:

\phi_j(r) \rightarrow \bar{\phi}_j(r) = D[\phi_j(r)] = \phi_j(r) + \gamma\Big[\Pi_O\big\{g_F\{\phi_j(r)\}\big\} - \Pi_F\big\{g_O\{\phi_j^F(r)\}\big\}\Big],
where
g_O\{\phi_j^F(r)\} = \Pi_O\{\phi_j^F(r)\} - \big[\Pi_O\{\phi_j^F(r)\} - \phi_j^F(r)\big]/\gamma,
g_F\{\phi_j(r)\} = \Pi_F\{\phi_j(r)\} - \big[\Pi_F\{\phi_j(r)\} - \phi_j(r)\big]/\gamma,
and γ ∈ ℝ, γ ≠ 0 is a parameter. This iterative procedure converges toward a solution that belongs to the set of fixed points of D, which corresponds to the intersection of the solution spaces defined by the two constraints. Due to the inherent presence of noise in the recorded images, there is always the possibility of non-unique solutions [25]. A simple way to mitigate this is to average the results of the last few iterations, which is the method applied here. A more refined method has recently been published that explicitly models the expected noise and uses the derived model to optimize the reconstruction in every step of the iteration [26].
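As a concrete illustration, one difference-map update of a single view can be sketched as follows. The sketch mirrors the equations as reconstructed above; the overlap projection is passed in as a callable, and the function names and the small numerical guard are assumptions of this illustration rather than of any particular published implementation:

import numpy as np

def fourier_project(view, sqrt_I, eps=1e-12):
    """Pi_F: impose the measured Fourier magnitude sqrt(I_j) on one exit wave."""
    psi = np.fft.fft2(view)
    psi = sqrt_I * psi / (np.abs(psi) + eps)
    return np.fft.ifft2(psi)

def dm_step(view, sqrt_I, overlap_project, gamma=1.0):
    """One difference-map update of the exit wave phi_j of a single scan position.

    overlap_project : callable implementing Pi_O for this position
    gamma           : real, non-zero relaxation parameter of the map
    """
    view_F = fourier_project(view, sqrt_I)        # Pi_F{phi_j} = phi_j^F
    g_F = view_F - (view_F - view) / gamma        # relaxed Fourier estimate
    O_of_F = overlap_project(view_F)              # Pi_O{phi_j^F}
    g_O = O_of_F - (O_of_F - view_F) / gamma      # relaxed overlap estimate
    # phi_j <- phi_j + gamma [ Pi_O{g_F} - Pi_F{g_O} ]
    return view + gamma * (overlap_project(g_F) - fourier_project(g_O, sqrt_I))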

3.2. Probe coherence loss

The DM algorithm described above assumes that the illuminating probe is highly coherent and can be described as a pure state or, equivalently, an eigenstate of the system. However, this is generally not the case under experimental conditions, where the probe is only partially coherent. The partial coherence of the probe could, for example, stem from a finite bandwidth or from mechanical vibrations with respect to the object. Irrespective of the origin of the coherence loss, a generic probe can be described as a sum of orthogonal coherent, but mutually incoherent, states. Employing the formalism used in the previous section, this amounts to a representation of a generalized probe P(r) in real space as

P(r) = \sum_n p_n(r),
where the pn(r) are real space projections of orthogonal coherent probe states pn. The exit waves at position rj with such a partially coherent probe are
\Phi_j(r) = o(r) \sum_n p_n(r - r_j) = \sum_n \phi_{j,n}(r),
where ϕj,n(r) = o(r) pn(r − rj) are the exit wave modes. The far-field diffraction pattern of the exit wave Φj(r) is given by:
\Psi_j(q) = \mathcal{F}[\Phi_j(r)] = \mathcal{F}\Big[o(r) \sum_n p_n(r - r_j)\Big] = \sum_n \mathcal{F}[\phi_{j,n}(r)] = \sum_n \psi_{j,n}(q).

Including the time variation, the complete form can be written as:

\Psi_j(q,t) = \sum_n \psi_{j,n}(q)\, e^{i\omega_n t},
where ωn is the temporal frequency associated with the pure probe state pn(r). The time-dependent Fourier spectral intensity ℐj(q, t) is then given as:
\mathcal{I}_j(q,t) = \Psi_j(q,t)\,\Psi_j^*(q,t) = \sum_{n,m} \psi_{j,n}(q)\, e^{i\omega_n t}\, \psi_{j,m}^*(q)\, e^{-i\omega_m t} = \sum_{n,m} \psi_{j,n}(q)\,\psi_{j,m}^*(q)\, e^{i(\omega_n - \omega_m)t}.

The measured intensity on the detector Ij(q) is the time average value of the above:

I_j(q) = \big\langle \mathcal{I}_j(q,t) \big\rangle = \sum_{n,m} \psi_{j,n}(q)\,\psi_{j,m}^*(q)\,\big\langle e^{i(\omega_n - \omega_m)t} \big\rangle = \sum_{n,m} \psi_{j,n}(q)\,\psi_{j,m}^*(q)\,\delta_{nm} = \sum_n I_{j,n}(q).

The measured Fourier intensities Ij(q) of the mixed-state exit wave Φj(r) are thus the sum of the pure-state Fourier intensities Ij,n(q). In other words, the measured far-field intensity of a mixture of orthogonal probe states is the incoherent sum of the pure-state Fourier spectra.

With this established, the individual exit waves ϕj,n are updated using the Fourier projection:

\Pi_F : \phi_{j,n} \mapsto \phi_{j,n}^F,
where, instead of assigning the full measured intensity to a single state, the projection weights each mode individually:
\phi_{j,n}^F(r) = \mathcal{F}^{-1}\left[\sqrt{I_j(q)}\,\frac{\psi_{j,n}(q)}{\sqrt{\sum_n |\psi_{j,n}(q)|^2}}\right].
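In a minimal NumPy sketch (array names and the numerical guard are assumptions of this illustration), this mode-weighted magnitude constraint reads as follows: all modes of one scan position are rescaled jointly by the same measured intensity.

import numpy as np

def fourier_project_modes(views_n, I_meas, eps=1e-12):
    """Mixed-state Fourier projection for one scan position.

    views_n : complex array (N_modes, Ny, Nx) of exit-wave modes phi_{j,n}
    I_meas  : measured diffraction intensity I_j(q), shape (Ny, Nx)
    """
    psi_n = np.fft.fft2(views_n, axes=(-2, -1))                 # psi_{j,n}(q)
    norm = np.sqrt(np.sum(np.abs(psi_n) ** 2, axis=0)) + eps    # sqrt(sum_n |psi_{j,n}|^2)
    psi_n = psi_n * (np.sqrt(I_meas) / norm)                    # joint renormalisation
    return np.fft.ifft2(psi_n, axes=(-2, -1))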

This is the major difference between pure state and state-mixture projections. The overlap projection for state-mixtures,

\Pi_O : \phi_{j,n}^F \mapsto \phi_{j,n}^O,
is defined as:
\phi_{j,n}^O(r) = \bar{p}_n(r - r_j)\,\bar{o}(r).

In principle, the same reasoning applies to the object, and it is in fact completely arbitrary whether the decoherence is assigned to the object or the probe, since all positioning is relative; in the case of fast vibrations, for example, the solution is exactly the same whether they are attributed to the probe or to the object. Here, we assume the object to be in a pure state, since a static sample such as a photomask does not alter its state within the acquisition time, and assign all sources of decoherence to the probe.

From the discussion above, the new distance metric d(ζ̄, ζ) = ‖ζ̄ − ζ‖ has to be minimized, where ζ = {ϕj,n} is the new global state vector defined over all scan positions j and the various probe states n. Thus,

d(\bar{\zeta},\zeta) = \left[\sum_j \sum_n \int \left|\phi_{j,n}(r) - \bar{p}_n(r - r_j)\,\bar{o}(r)\right|^2 \mathrm{d}r\right]^{1/2}.

Considerable effort has been spent on the mitigation of reconstruction artifacts that are due to partial coherence [27–30]. The proposed methods either make use of the polychromatic probe to reduce acquisition time, or model the system as a state-mixture and reconstruct the resulting states using multiple mutually incoherent modes for the incident illumination or the imaged sample. Note that the above-mentioned probe uncertainties at short time-scales describe variations with a period shorter than the exposure time (τ < T), whereas long time-scale changes in the experimental conditions, which vary slowly over the time it takes to acquire all positions of a complete scan (τ > T), cannot be corrected by the introduction of state-mixtures.

3.3. Probe beam drift and instability

While the mentioned enhancements to ptychographic reconstruction allow superior resolution for partially coherent systems, they do not take into account a slowly varying incident illumination. Recently, an advanced reconstruction algorithm has been proposed that can reconstruct a different probe for every scan position [31]. All probes are mutually orthogonal and are linked together by a projection into a lower dimensional space using a singular value decomposition (SVD). These can then be interpreted as eigenprobes of the system and substantially relax the single-probe constraint of the DM [8] or extended ptychographic iterative engine (ePIE) algorithms [10]. With this method, a separate probe for every single scan position could be used without significant loss in reconstruction quality. This is interesting, for example, for hard X-ray experiments at X-ray free-electron lasers, where the probe changes significantly from shot to shot.

For our experiment, we have implemented the DM algorithm including the capability of reconstructing mixed states. To cover the effects of probe variation, we follow an approach similar to, but simpler than, the one mentioned above. Assuming sufficient stability of the incident illumination as well as of all optical elements and the sample stage, it is safe to assume that the probe variation only becomes noticeable over the range of several minutes. We therefore allow the algorithm to keep several independent probes in memory, cf. Fig. 3, but forgo the linking SVD; a sketch of the corresponding bookkeeping is given below. This way, the probe instability during the scan duration is taken into account without any increase in computation time. Because the probes are completely independent in this case, a sufficient number of scan positions has to be covered by each of them to arrive at an artifact-free reconstruction. The exact number differs for each experiment and has to be chosen by the user. We note that each of these probes can also have several probe modes to cover partial coherence effects.
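The bookkeeping for the independent probes is simple; a short sketch, assuming positions are stored in acquisition order and that the number of probes is chosen by the user as stated above:

import numpy as np

def assign_probes(n_positions, n_probes):
    """Assign each scan position (in acquisition order) to one of n_probes
    independent probes, i.e. one probe per contiguous time slice of the scan."""
    return np.arange(n_positions) * n_probes // n_positions

# e.g. 1609 positions split over 40 probes -> roughly one probe per ~12 min of scan
probe_index = assign_probes(1609, 40)

In the overlap update, the probe sums then run only over the positions that share the same probe index, while the object update still uses all positions.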

Fig. 3 Schematic showing the probe positions where the algorithm uses independent probes for different sample areas. This image is only used to illustrate the method and does not accurately depict the actual probe positions during the experiment.

4. Results

Before attempting any reconstruction, it is worthwhile to consider the implications of the interdependency of the coupled object and probe update equations given in Section 3.1. Since all optical elements used in the experiment are known, a good initial guess of the probe can be achieved, while the object is more difficult to calculate from the layout, because shadowing and other mask 3D effects that alter the aerial image would have to be considered [32]. Therefore, we calculate an initial guess of the probe while starting with a zero matrix for the object. During the first few iterations, the probe is then kept constant. To calculate the initial guess of the probe, only the optical elements of the end-station were taken into account, while neglecting the preceding beamline. Considering the mirror configuration shown in Fig. 1 and a pinhole at the chamber entrance, the calculation is straightforward:

  1. assume a plane wave incident on the aperture
  2. propagate wave-field from pinhole to M1
  3. calculate Fourier transform to propagate wave-field to the focus and multiply with the appropriate quadratic phase term according to [33]
  4. final free-space propagation to allow for a defocused beam on the sample plane.

Since we assume plane wave incidence on the experimental chamber, neglecting the 100 µm pinhole at the intermediate focus position and the membrane before the chamber entrance as well as the aberrations due to the off-axis illumination of M1, the calculated initial probe guess constitutes an idealized model and differs significantly from the final reconstructed wave-field, cf. Fig. 4. However, the results obtained are substantially better than what could be achieved by using a Gaussian profile or random numbers for the initial guess. In the latter case, the reconstruction failed to converge for our experimental data.
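The following sketch illustrates the four steps listed above with an angular-spectrum free-space propagator. The grid size, sampling, propagation distances, and the use of f ≈ R/2 for the spherical mirror are assumptions of this illustration, and the change of sampling that accompanies the focal-plane Fourier transform is glossed over:

import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Free-space propagation over distance z via the angular-spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

wavelength = 13.5e-9
n, dx = 1024, 2e-6                                # grid size and sampling (assumed)
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)

# 1. plane wave clipped by the 1 mm diameter membrane acting as the entrance aperture
field = ((X**2 + Y**2) < (0.5e-3)**2).astype(complex)
# 2. propagate from the aperture to the focusing mirror M1 (placeholder distance)
field = angular_spectrum(field, dx, wavelength, 0.5)
# 3. Fourier transform to the focal plane of M1 (f ~ R/2 = 110 mm assumed) and multiply
#    by the corresponding quadratic phase term, cf. Goodman [33]
f = 0.110
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
field *= np.exp(1j * np.pi * (X**2 + Y**2) / (wavelength * f))
# 4. short free-space propagation to obtain a slightly defocused spot at the sample
probe_guess = angular_spectrum(field, dx, wavelength, 1e-3)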

Fig. 4 Due to neglecting optical elements of the beamline and aberrations, the initial guess of the probe differs in both magnitude (b) and phase (d) from the result of the reconstruction as depicted in (a) and (c). The scale is the same for all images.

We imaged a 90 × 90 µm2 area of the sample mask using a spiral pattern with a step size of 2 µm. For the elliptical probe with estimated radii of 3.5 µm and 4.5 µm, this ensures sufficient overlap between subsequent scan positions. We chose a spiral rather than a raster pattern to avoid the so-called raster grid pathology wherein the scan positions lie on a periodic grid which introduces an additional degree of freedom into the reconstruction and leads to artifacts [9]. Using the described scan pattern, a total of 4827 diffraction patterns were recorded - taking into account that each of the 1609 positions was captured with 3 different exposure times as described above.
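The scan-point generation itself is straightforward. A sketch of a Fermat-type spiral with roughly uniform point density is given below; the exact spiral parametrization used in the experiment is not specified in the text, so this is only an illustrative choice:

import numpy as np

def fermat_spiral(n_points, step_um=2.0):
    """Fermat spiral with ~uniform density and mean nearest-neighbour spacing ~step_um."""
    k = np.arange(n_points)
    c = step_um / np.sqrt(np.pi)               # radius scaling for the requested spacing
    r = c * np.sqrt(k)
    theta = k * np.pi * (3.0 - np.sqrt(5.0))   # golden angle
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

positions_um = fermat_spiral(1609)             # covers a field roughly 90 um across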

Unlike our previous experiments [23,34], where we had to reduce the incident bandwidth using a combination of a Fresnel zone-plate and a pinhole mounted close to the sample surface, in this experiment we used a plane grating monochromator capable of delivering a monochromaticity of λ/Δλ ≃ 2000. To further clean the beam, a 100 µm diameter circular pinhole was inserted at the intermediate focus position of the beamline, cf. Fig. 1, approximately 3300 mm upstream of the sample position. Due to the monochromatic beam, the reconstruction quality is improved compared to previous results where strong artifacts were visible in the background between the structures, cf. Fig. 5(a). Great care was taken to minimize vibrations in the experiment, which resulted in improved data quality, as evidenced by the only small improvement gained when using multiple probe modes, cf. Fig. 5(b). On the other hand, the long acquisition time of 8 hours cannot be accurately reproduced by using a constant probe throughout the experiment. Here we permit the algorithm to use 40 independent probes (one separate probe for every ≈12 min slice of the experiment). This strongly reduces the artifacts due to the variation in the incident beam and allows for increased resolution, as shown in Fig. 5(c). The highest quality reconstruction with increased contrast is achieved by using multiple probes, each with its own modes, cf. Fig. 5(d). Measuring the edge profile resulted in an overall resolution of 60 nm calculated by using the 10%–90% lineout [23]. Assuming a diffraction-limited system, the maximum achievable resolution with this setup is 35 nm for a detector NA of 0.18 defined by the detector size of 1″ and a distance to the sample of 70 mm. The pixel size at the object plane is slightly smaller at 34 nm. Due to a slightly off-center position of the specular reflection in our setup, we have used only 1800 of the available 2048 pixels, which limits the NA resolution to 39 nm with an object pixel size of 39 nm.
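The quoted resolution limits and pixel sizes can be checked from the geometry. The short calculation below assumes the 13.5 µm pixel pitch of the PI-MTE sensor (an assumption not stated in the text; the quoted NA of 0.18 refers to the nominal 1″ sensor width) and approximately reproduces the 35/34 nm and 39/39 nm figures:

import numpy as np

wavelength = 13.5e-9          # m
z = 70e-3                     # sample-detector distance, m
pixel = 13.5e-6               # assumed CCD pixel pitch, m

for n_used in (2048, 1800):
    half_width = n_used * pixel / 2
    na = np.sin(np.arctan(half_width / z))
    limit = wavelength / (2 * na)                 # diffraction-limited half-pitch
    dx_obj = wavelength * z / (n_used * pixel)    # pixel size in the object plane
    print(f"{n_used} px: NA ~ {na:.2f}, limit ~ {limit*1e9:.0f} nm, pixel ~ {dx_obj*1e9:.0f} nm")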

Fig. 5 Reconstructed sample magnitude after 300 iterations of the DM algorithm and subsequent averaging of the last few iterations. The resolution can be increased by taking into account both short and long term effects. As the beam is highly monochromatic, the use of multiple probe modes has only a small effect. However, due to the extended acquisition time, allowing for multiple sequential probes results in an enhanced reconstruction quality. (a) 1 mode, 1 probe (b) 1 probe, 3 modes (c) 40 probes, 1 mode (d) 40 probes, 3 modes. The insets depict a close-up of the 500 nm half-pitch grating.

The discrepancy between the theoretical and measured values for the resolution stems mainly from the poor contrast of the EUV sample used. In Fig. 6 we show a set of diffraction patterns recorded for one scan position. At 65 ms exposure time, the dynamic range of the detector is not fully utilized. In fact, the highest value is 14656, meaning that only about 22% of the available 16-bit range is used. The reason for keeping the shortest exposure time at 65 ms is that the scattering contrast of the sample varies so strongly from position to position that, at other scan positions, an exposure time of 65 ms already results in some pixels being very close to saturation. During the combination, all images are normalized to a 1 s exposure time as mentioned above and the image intensities are normalized to the average over all scan positions.

Fig. 6 One set of diffraction patterns for a single scan-position. The exposure times are (a) 65 ms, (b) 500 ms, and (c) 10000 ms. The combined image is shown in (d); (b) and (c) include saturated points and the effect of bleeding becomes evident at the longest exposure time.

We would also like to mention that the difference map is only one of multiple possible approaches that exploit projections on constraint sets to solve the phase problem. Another popular method is the (extended) ptychographical iterative engine (ePIE) [10]. While ePIE has been successfully applied to phase retrieval problems, it did not lead to convergence with our data, which could be due to the use of a sub-optimal update function. In future experiments, we aim to use a combination of different algorithms in sequence in order to maximize the reconstructed image fidelity.

To further illustrate the difference in contrast for the reconstruction methods discussed above, we take a look at a cross-section through the 100 nm half-pitch grating as shown in Fig. 7. The red lines depict the position at which the cross-section was taken. Since the position was the same for all images, this shows that the increased reconstruction fidelity was partly gained by an implicitly performed global shift of the pattern within the frame, cf. Figs. 7(c) and 7(e), the magnitude of which seems to depend mainly on the number of probe modes used. The cross-section appears more homogeneous when a higher number of probe modes is used, but the increase in contrast comes from the addition of multiple independent probes, each of which possesses its own probe modes.

Fig. 7 Cross-sections taken from the 100 nm half-pitch grating. (a) and (b) 1 mode, 1 probe (c) and (d) 1 probe, 3 modes (e) and (f) 40 probes, 1 mode (g) and (h) 40 probes, 3 modes. The range for relative magnitude was set to [0.0, 0.7] for all cross-sections to show the difference in contrast for the different reconstructions. There is also a global shift induced by the use of multiple probe modes.

5. Conclusions and outlook

We have shown that ptychography can be used to image absorber patterns on EUV photomask samples in reflection mode with negligible artifacts as well as high resolution and thereby demonstrated the feasibility of CDI for the inspection of defects on EUV photomasks. With the presented implementation of algorithmic additions to ptychography, various problems such as intensity instabilities and mechanical drifts or vibrations can be overcome.

In order to improve the throughput and sensitivity of the tool, all the components of the experimental setup require upgrading. The most important parts are the sample stage, which currently has a limited x, y range of 200 µm, and the CCD detector, because of its low frame-rate and narrow dynamic range. In the future, the CCD detector will be replaced by a hybrid silicon detector [35]. The new detector will allow frame-rates of up to 4 kHz and a dynamic range of ≈ 10⁷ electrons. The higher dynamic range will eliminate the need for the multiple exposures necessary when using a CCD. In theory, this would allow for an increase in speed by a factor of 12,000. However, to achieve this in an experimental setup, the upgrade of the camera alone is not sufficient. A high-speed, nanometer-precision stage is also required, as well as the necessary bandwidth to handle the acquired data without delay. In an intermediate step, it is planned to extend the stage range to approximately 1 cm in both the x and y directions and to add a z-stage to allow for through-focus scans.

The reconstruction was carried out on a dual-CPU Linux system with a total of 72 cores, 512 GB of RAM, and two GPU cards. Using a reconstruction size of 1800 pixels per diffraction pattern, it takes about 3 minutes to complete 300 DM iterations. The time increases by approximately 30% for each additional probe mode used but is not influenced by the number of independent probes. The code makes use of both the CPUs and GPUs of the system by exploiting SIMD (single instruction multiple data) capability via the 256-bit AVX2 registers for data-intensive operations and by calculating all Fourier transform operations on the GPUs. The execution time scales roughly inversely with the number of available compute nodes, but the limiting factors are much more likely to be found in the data acquisition hardware than in the software or computational hardware.

Acknowledgments

The authors would like to express their gratitude to Markus Kropf, Jonas Woitkowiak, and Istvan Mohacsi for their invaluable help with instrumentation and numerous suggestions that led to substantial improvements in the collected data, Jose Gabadinho for his assistance in automating the experimental setup and Pascal Schifferle for technical support at the SIM beamline of SLS. Further thanks go to Marco Calvi and Thomas Schmidt who have calibrated the undulators to enable 92 eV photon energy. Part of this work was performed at the Surface/Interface:Microscopy (SIM) beamline of the Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland.

References and links

1. E. Hendrickx, R. Gronheid, J. Hermans, G. Lorusso, P. Foubert, I. Pollentier, A.-M. Goethals, R. Jonckheere, G. Vandenberghe, and K. Ronse, “Readiness of EUV lithography for insertion into manufacturing: the IMEC EUV program,” J. Photopolym. Sci. Technol. 26, 587–593 (2013). [CrossRef]  

2. D. Uzzel, A. Garetto, K. Magnusson, and G. Tabbone, “A novel method for utilizing AIMS to evaluate mask repair and quantify over-repair or under-repair condition,” Proc. SPIE 8880, 888029 (2013). [CrossRef]  

3. K. A. Goldberg, I. Mochi, M. P. Benk, A. P. Allezy, M. R. Dickinson, C. W. Cork, D. Zehm, J. B. Macdougall, E. H. Anderson, F. Salmassi, W. L. Chao, V. K. Vytla, E. M. Gullikson, J. C. DePonte, M. S. Jones, D. Van Camp, J. F. Gamsby, W. B. Ghiorso, H. Huang, W. Cork, E. Martin, E. Van Every, E. Acome, V. Milanovic, R. Delano, P. P. Naulleau, and S. B. Rekawa, “Commissioning an EUV mask microscope for lithography generations reaching 8 nm,” Proc. SPIE 8679, 867919 (2013). [CrossRef]  

4. I. Mochi, K. A. Goldberg, B. La Fontaine, A. Tchikoulaeva, and C. Holfeld, “Actinic imaging of native and programmed defects on a full-field mask,” Proc. SPIE 7636, 76361A (2010).

5. K. A. Goldberg and I. Mochi, “Wavelength-specific reflections: a decade of extreme ultraviolet actinic mask inspection research,” J. Vac. Sci. Technol. B 28, C6E1–C6E10 (2010). [CrossRef]  

6. W. Hoppe, “Beugung im inhomogenen Primärstrahlwellenfeld. III. Amplituden- und Phasenbestimmung bei unperiodischen Objekten,” Acta Crystallogr. A 25, 508–514 (1969). [CrossRef]  

7. J. M. Rodenburg and H. M. L. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85, 4795 (2004). [CrossRef]  

8. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning X-ray diffraction microscopy,” Science 321, 379–382 (2008). [CrossRef]   [PubMed]  

9. P. Thibault, M. Dierolf, O. Bunk, A. Menzel, and F. Pfeiffer, “Probe retrieval in ptychographic coherent diffractive imaging,” Ultramicroscopy 109, 338–343 (2009). [CrossRef]   [PubMed]  

10. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109, 1256–1262 (2009). [CrossRef]   [PubMed]  

11. M. D. Seaberg, D. E. Adams, E. L. Townsend, D. A. Raymondson, W. F. Schlotter, Y. Liu, C. S. Menoni, L. Rong, C.-C. Chen, J. Miao, H. C. Kapteyn, and M. M. Murnane, “Ultrahigh 22 nm resolution coherent diffractive imaging using a desktop 13 nm high harmonic source,” Opt. Express 19, 22470 (2011). [CrossRef]   [PubMed]  

12. D. F. Gardner, B. Zhang, M. D. Seaberg, L. S. Martin, D. E. Adams, F. Salmassi, E. Gullikson, H. Kapteyn, and M. Murnane, “High numerical aperture reflection mode coherent diffraction microscopy using off-axis apertured illumination,” Opt. Express 20, 19050 (2012). [CrossRef]   [PubMed]  

13. L. Juschkin, L. Loetgering, D. Rudolf, R. Xu, S. Brose, S. Danylyuk, and J. Miao, “Tabletop coherent diffraction imaging with a discharge plasma EUV source,” Proc. SPIE 8849, 9 (2013).

14. C. L. Porter, M. Tanksalvala, M. Gerrity, G. Miley, X. Zhang, C. Bevis, E. Shanblatt, R. Karl, M. M. Murnane, D. E. Adams, and H. C. Kapteyn, “General-purpose, wide field-of-view reflection imaging with a tabletop 13 nm light source,” Optica 4, 1552–1557 (2017). [CrossRef]  

15. B. Zhang, D. F. Gardner, M. D. Seaberg, E. R. Shanblatt, H. C. Kapteyn, M. M. Murnane, and D. E. Adams, “High contrast 3D imaging of surfaces near the wavelength limit using tabletop EUV ptychography,” Ultramicroscopy 158, 98–104 (2015). [CrossRef]   [PubMed]  

16. D. F. Gardner, M. Tanksalvala, E. R. Shanblatt, X. Zhang, B. R. Galloway, C. L. Porter, R. Karl Jr., C. Bevis, D. E. Adams, H. C. Kapteyn, M. M. Murnane, and G. F. Mancini, “Subwavelength coherent imaging of periodic samples using a 13.5 nm tabletop high-harmonic light source,” Nat. Photonics 11, 259–263 (2017). [CrossRef]  

17. T. Harada, H. Hashimoto, T. Amano, H. Kinoshita, and T. Watanabe, “Phase imaging results of phase defect using micro-coherent extreme ultraviolet scatterometry microscope,” J. Micro. Nanolithogr. MEMS MOEMS 15, 021007 (2016). [CrossRef]  

18. T. Harada, M. Nakasuji, Y. Nagata, T. Watanabe, and H. Kinoshita, “Phase imaging of extreme-ultraviolet mask using coherent extreme-ultraviolet scatterometry microscope,” Jpn. J. Appl. Phys. 52, 06GB02 (2013). [CrossRef]  

19. D. Mamezaki, T. Harada, Y. Nagata, and T. Watanabe, “Imaging performance improvement of coherent extreme-ultraviolet scatterometry microscope with high-harmonic-generation extreme-ultraviolet source,” Jpn. J. Appl. Phys. 56, 06GB01 (2017). [CrossRef]  

20. I. Mochi, P. Helfenstein, I. Mohacsi, R. Rajeev, D. Kazazis, S. Yoshitake, and Y. Ekinci, “RESCAN: an actinic lensless microscope for defect inspection of EUV reticles,” J. Micro. Nanolithogr. MEMS MOEMS 16, 041003 (2017). [CrossRef]  

21. U. Flechsig, F. Nolting, A. Fraile Rodríguez, J. Krempaský, C. Quitmann, T. Schmidt, S. Spielmann, and D. Zimoch, “Performance measurements at the SLS SIM beamline,” AIP Conf. Proc. 1234, 319–322 (2010). [CrossRef]  

22. G. Olivieri, A. Goel, A. Kleibert, and M. A. Brown, “Effect of X-ray spot size on liquid jet photoelectron spectroscopy,” J. Synchrotron Radiat. 22, 1528–1530 (2015). [CrossRef]   [PubMed]  

23. P. Helfenstein, I. Mohacsi, R. Rajeev, and Y. Ekinci, “Scanning coherent diffractive imaging methods for actinic extreme-ultraviolet mask metrology,” J. Micro. Nanolithogr. MEMS MOEMS 15, 034006 (2016). [CrossRef]  

24. J. M. Rodenburg and R. H. T. Bates, “The theory of super-resolution electron microscopy via Wigner-distribution deconvolution,” Philos. Trans. Royal Soc. A 339, 521–553 (1992). [CrossRef]  

25. P. Thibault and M. Guizar-Sicairos, “Maximum-likelihood refinement for coherent diffractive imaging,” New J. Phys. 14, 063004 (2012). [CrossRef]  

26. M. Odstrčil, A. Menzel, and M. Guizar-Sicairos, “Iterative least-squares solver for generalized maximum-likelihood ptychography,” Opt. Express 26, 3108–3123 (2018). [CrossRef]  

27. G. J. Williams, H. M. Quiney, A. G. Peele, and K. A. Nugent, “Coherent diffractive imaging and partial coherence,” Phys. Rev. B: Condens. Matter 75, 104102 (2007). [CrossRef]  

28. B. Abbey, L. W. Whitehead, H. M. Quiney, D. J. Vine, G. A. Cadenazzi, C. A. Henderson, K. A. Nugent, E. Balaur, C. T. Putkunz, A. G. Peele, G. J. Williams, and I. McNulty, “Lensless imaging using broadband X-ray sources,” Nat. Photonics 5, 420–424 (2011). [CrossRef]  

29. P. Thibault and A. Menzel, “Reconstructing state mixtures from diffraction measurements,” Nature 494, 68–71 (2013). [CrossRef]   [PubMed]  

30. J. N. Clark, X. Huang, R. J. Harder, and I. K. Robinson, “Dynamic imaging using ptychography,” Phys. Rev. Lett. 112, 113901 (2014). [CrossRef]  

31. M. Odstrčil, P. Baksh, S. A. Boden, R. Card, J. E. Chad, J. G. Frey, and W. S. Brocklesby, “Ptychographic coherent diffractive imaging with orthogonal probe relaxation,” Opt. Express 24, 8360 (2016). [CrossRef]  

32. J. Finders and T. Hollink, “Mask 3D effects: impact on imaging and placement,” Proc. SPIE 7985, 79850I (2011). [CrossRef]  

33. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005), 3rd ed.

34. I. Mohacsi, P. Helfenstein, R. Rajendran, and Y. Ekinci, “Scanning scattering contrast microscopy for actinic EUV mask inspection,” Proc. SPIE 9778, 97781O (2016). [CrossRef]  

35. A. Mozzanica, A. Bergamaschi, S. Cartier, R. Dinapoli, D. Greiffenberg, I. Johnson, J. Jungmann, D. Maliakal, D. Mezza, C. Ruder, L. Schaedler, B. Schmitt, X. Shi, and G. Tinti, “Prototype characterization of the JUNGFRAU pixel detector for SwissFEL,” J. Instrum. 9, C05010 (2014). [CrossRef]  
