
Reflective Fourier ptychographic microscopy using a parabolic mirror

Open Access

Abstract

Fourier ptychography uses a phase retrieval algorithm to reconstruct a high-resolution image with a wide field-of-view. Reflective-type Fourier ptychographic microscopy (FPM) is expected to be very useful for surface inspection, but the reported methods have several limitations. We propose a darkfield illuminator for reflective FPM consisting of a parabolic mirror and a flat LED panel. This increases the signal-to-noise ratio of the acquired images because the normal beam of each LED is directed toward the object. Furthermore, the LEDs do not have to be far from the object because they are collimated by the parabolic surface before illumination. Based on this, a reflective FPM with a synthesized numerical aperture (NA) of 1.06 was achieved, which, to our knowledge, is the highest value achieved by reflective FPM. To validate this experimentally, we measured a USAF reflective resolution target and reconstructed a high-resolution image. The reconstruction resolved features down to a period of 488 nm, which corresponds to the synthesized NA. Additionally, an integrated circuit was measured to demonstrate the effectiveness of the proposed system for surface inspection.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Because the amount of information that can be transferred by an imaging system is limited, its spatial resolution and field-of-view (FOV) are in a trade-off relationship. Fourier ptychographic microscopy (FPM), an imaging technique developed in 2013, overcomes this limitation by applying a light-emitting diode (LED) panel to a conventional optical microscope with a low-magnification objective lens [1–16]. It has the same theoretical background as the synthetic aperture technique of digital holography [17–20], in that it illuminates an object with beams of various angles and combines them in the Fourier domain to extend the spectrum. In contrast to digital holography, however, FPM uses a phase retrieval algorithm, which eliminates the need for a reference beam and simplifies the system. Additionally, because it uses partially coherent LEDs, FPM exhibits much lower coherence noise than digital holography. With these advantages, much research has been done on FPM, such as pupil recovery [9,10], LED position error compensation [11–13], and data reduction [14–16].

FPM is divided into transmission (diascopic) and reflection (episcopic) types according to the illumination. Biological samples have low light absorption, and researchers are interested in their internal structure; thus, it is appropriate to observe them in transmission. In contrast, the surface metrology of metals, ceramics, integrated circuits, semiconductors, and plastics requires observation using reflected light. Nevertheless, most of the FPM papers published to date focus on the transmission type, and only a few papers have reported reflective types [21–24], owing to the difficulty of installing and aligning the illumination system. In this paper, we propose a new illumination system for reflection-type FPM and achieve a spatial resolution beyond previously reported values.

The extended spectrum by FPM has the synthesized numerical aperture (NA), NAsynth, given as NAobj + NAillum, where NAobj and NAillum denote the NA of the objective lens and the illumination, respectively. In the case of the reflection type, the objective lens also acts as a condenser lens, so NAillum is limited to NAobj. Therefore, the maximum value of NAsynth is 2NAobj. Pacheco et al. placed the aperture stop of an achromatic lens out of the illumination path [22] to overcome this limit, but the approach is not applicable to conventional objective lenses. To achieve NAsynth over 2NAobj with an objective lens, LEDs should be placed around the objective lens to acquire darkfield images [23,24]. In this case, they must be far enough from the object that the spherical wave can be approximated by a plane wave. Otherwise, the illumination beam angle differs across the object, which complicates the FPM processing.

The intensity of LED light peaks in the normal direction and decreases as the angle increases. When LEDs on a flat panel [24] are used to obtain darkfield images, each LED emits along the normal of the panel plane, so a large divergence angle is required to illuminate the object. Therefore, most of the light, including the normal beam, does not illuminate the FOV and can act as noise, reducing the signal-to-noise ratio (SNR) of the acquired image. In addition, direct illumination by a flat LED panel becomes problematic when trying to achieve a high NAillum, which requires a large panel size. To solve this problem, a dome-type LED array has been proposed [25–27]; however, it is not easy to fabricate, and in the reflective case, it is likely to interfere with the housing of the objective lens.

In this study, we used a parabolic concave mirror and a flat LED panel to achieve a NAillum above NAobj. The spherical wave from the LED panel is reflected and then propagates as a plane wave to illuminate an object. The normal beam of each LED is directed toward the object, which improves the SNR of the acquired image. We designed and fabricated a surface-mount device (SMD) LED panel to implement the proposed method experimentally. In Section 2, we describe the design process of the epi-illuminators. In Section 3, the experimental setup and data processing are described, and the experimental results are presented to validate the proposed method. In Section 4, we present the summary and discussion of this research.

2. Design of the epi-illuminators

The epi-illuminator of FPM consists of a brightfield illuminator (BI) for the range NAillum < NAobj and a darkfield illuminator (DI) for the range NAillum > NAobj. The BI was implemented by imaging the LEDs on the back focal plane of the objective lens. In the case of the DI, the parabolic mirror was positioned around the objective lens so that its focal plane and the object plane coincide, as shown in Fig. 1(a). Additionally, a flat LED panel for illumination was placed at the object plane, facing the mirror. The beams from the LEDs are then reflected by the parabolic mirror and collimated to illuminate the object, as shown by the red rays in Fig. 1(a). The arrangement of LEDs for FPM has been found to be more effective as a ring type than as a periodic grid type [28]; therefore, we adopted ring-type LED arrays for the design.


Fig. 1. (a) Result of the ray tracing for illuminating an object from the BI (blue lines) and the DI (red lines). (b) The spectral positions of the transfer functions for each LED in the BI (blue circles) and the DI (red circles).


Each LED has its own region in the Fourier domain corresponding to its illumination angle and NAobj, and there must be at least 40% overlap [15] between adjacent spectra to converge to the true value during the phase retrieval process. We first optimized the incident polar angles so that there is sufficient spectral overlap between the rings in the radial direction, and we then optimized the number of LEDs in each ring so that the spectra overlap sufficiently in the azimuthal direction. Both directions in the Fourier spectrum are indicated in Fig. 1(b). The spectra of the BI beams overlap sufficiently with each other in the Fourier domain, and the same holds for the DI beams. However, difficulties arise in the transition from the BI to the DI, especially for objective lenses with a wide FOV and a low NA. This is because the polar angle of the illumination beam from the DI’s first ring, which should overlap with the last ring of the BI in the spectrum, is not large enough; thus, the DI may interfere with the housing of the lens or partially obstruct the beam reflected from the object. By performing a ray tracing simulation on several objective lenses, we selected the Mitutoyo M Plan Apo 10× (focal length = 20 mm, NA = 0.28, working distance = 34 mm, FOV = 2.4 mm) and determined the polar angles so that the spectra overlap sufficiently in the radial direction. A detailed description and simulation results for other lenses are given in Appendix A. The number of LEDs in each ring was then determined so that they overlap each other sufficiently in the azimuthal direction. Table 1 summarizes the design parameters for each ring index i, including the polar angle θi of the beam incident on the object and the number of LEDs, for both illuminators. The expected NAsynth was calculated to be NAobj + NAillum = 0.28 + sin 51.4° = 1.06. Figure 1(b) shows the spectral position of the amplitude transfer function for each LED in the BI (blue circles) and the DI (red circles). The expected full-pitch resolution is calculated to be 486 nm by the Abbe criterion for our LED with a 515 nm wavelength.
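To make the 40% overlap criterion concrete, the short Python sketch below estimates the fractional overlap of two equal pupil circles in the Fourier domain for the illumination NAs of adjacent rings; the specific ring angles used in the example are placeholders for illustration, not the values of Table 1.

```python
import numpy as np

def circle_overlap_fraction(radius, separation):
    """Fractional overlap area of two equal circles of the given radius whose centers are
    'separation' apart (standard lens-area formula), normalized by one circle's area."""
    r, d = radius, separation
    if d >= 2 * r:
        return 0.0
    lens = 2 * r**2 * np.arccos(d / (2 * r)) - 0.5 * d * np.sqrt(4 * r**2 - d**2)
    return lens / (np.pi * r**2)

wavelength = 515e-9                 # m, LED peak wavelength
NA_obj = 0.28
theta_1, theta_2 = 24.0, 33.0       # polar angles of two adjacent rings, degrees (placeholders)
r_pupil = NA_obj / wavelength       # pupil radius in the Fourier domain (1/m)
d = abs(np.sin(np.radians(theta_2)) - np.sin(np.radians(theta_1))) / wavelength
print(f"radial overlap: {circle_overlap_fraction(r_pupil, d):.1%}")   # should exceed ~40%
```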


Table 1. Design parameters of the epi-illuminators

A parabolic mirror should be chosen with a focal length longer than the working distance of the lens so as not to obstruct the beam reflected from the object. Additionally, the diameter of the parabolic mirror should be large enough to cover the entire desired NAillum. We chose the Edmund Optics #68-763 (aluminum-coated parabolic mirror, focal length = 100 mm, diameter = 220 mm) from among off-the-shelf products that meet these conditions; a smaller parabolic mirror could also be custom-made for a more compact system. Figure 1(a) shows the ray tracing results of the normal beams from the LEDs of the BI (blue lines) and the DI (red lines), drawn on the product drawing with the actual dimensions. The rays from the DI do not interfere with the housing of the objective lens, and the mirror does not block the imaging path. Using a spherical mirror instead of a parabolic mirror could introduce aberrations into the beams emitted by the flat LED panel.

If the ith ring’s LED position imaged at the back focal plane of the objective lens is rBIi with respect to the optical axis, the polar angle θi is calculated by the following equation:

$$\sin \theta_i = -\frac{r_{\textrm{BI}i}}{f_{\textrm{obj}}},$$
where fobj is the focal length of the objective lens. In the case of the illumination by the DI using the parabolic mirror, the position rDIi of the LED from the optical axis has the following relationship with θi:
$$\sin \theta_i = -\frac{r_{\textrm{DI}i}}{\sqrt{r_{\textrm{DI}i}^2 + \left(-\frac{r_{\textrm{DI}i}^2}{4f_{\textrm{p}}} + f_{\textrm{p}}\right)^2}},$$
where fp is the focal length of the parabolic mirror. This relation is obtained by assuming that the reflecting surface of the parabolic mirror is expressed as $z = -\frac{r^2}{4f_{\textrm{p}}} + f_{\textrm{p}}$ with the object at the origin. The values needed to fabricate the LED panels for the BI and the DI were calculated from Eqs. (1) and (2), respectively, and are summarized in Table 1. Based on the calculated positions, both LED arrays were made in the form of printed-circuit boards with SMD LEDs having a peak wavelength of 515 nm, a bandwidth of 35 nm, and a viewing angle of 30°; a total of 88 LEDs were used. The parabolic mirror and the LED panels have central holes so that the objective lens and the object can be positioned, respectively.
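As a numerical illustration of Eqs. (1) and (2), the following Python sketch computes the LED radial positions for a given polar angle, using the focal lengths quoted above; the example angles other than 51.4° and the function names are our own placeholders, not values from Table 1.

```python
import numpy as np
from scipy.optimize import brentq

f_obj = 20.0    # mm, focal length of the Mitutoyo 10x objective
f_p = 100.0     # mm, focal length of the Edmund #68-763 parabolic mirror

def r_BI(theta_deg):
    """Eq. (1), magnitude only: LED image height at the back focal plane of the objective."""
    return f_obj * np.sin(np.radians(theta_deg))

def r_DI(theta_deg):
    """Eq. (2), magnitude only: LED radial position on the flat panel, found by numerically
    inverting sin(theta) = r / sqrt(r^2 + (f_p - r^2/(4 f_p))^2)."""
    s = np.sin(np.radians(theta_deg))
    g = lambda r: r / np.hypot(r, f_p - r**2 / (4.0 * f_p)) - s
    return brentq(g, 1e-9, 2.0 * f_p)   # the ratio grows monotonically from 0 to 1 on (0, 2 f_p]

# For this parabola, Eq. (2) also reduces to the closed form r_DI = 2 f_p tan(theta/2),
# which serves as a consistency check of the numerical inversion.
for theta in (20.0, 35.0, 51.4):        # 51.4 deg is the outermost ring; the others are placeholders
    print(f"{theta:4.1f} deg: r_BI = {r_BI(theta):5.2f} mm, r_DI = {r_DI(theta):6.2f} mm, "
          f"closed form = {2 * f_p * np.tan(np.radians(theta) / 2):6.2f} mm")
```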

3. Experiment

3.1 Experimental setup

We experimentally verified the epi-illuminator for FPM, and the schematic of the experimental setup is depicted in Fig. 2(a). The FPM system consists of the BI, the DI, and the microscope collecting light reflected from an object; the blue, red, and green beams in Fig. 2(a) indicate the respective beam paths. The microscope consists of the objective lens (Mitutoyo M Plan Apo 10×), a tube lens, and an image sensor (PCO pco.edge 4.2, 16-bit dynamic range, 6.5 µm pixel size, 2048×2048 resolution). The BI was based on a 4f imaging system for Köhler illumination, in which the beam from the LED array is imaged on the back focal plane of the objective lens through two convex lenses and a field stop. The beam is collimated and is then incident on the object at the angle given by Eq. (1). The LED array panel for the DI was placed to coincide with the object plane, and the LED beams were reflected and collimated by the parabolic mirror to illuminate the object. Figure 2(b) shows a photograph of the whole experimental setup of the reflective FPM, and Fig. 2(c) shows the objective lens and the LED array for the DI.


Fig. 2. (a) Experimental setup of reflective FPM. CMOS: Complementary metal–oxide–semiconductor image sensor, TL: tube lens, BS: beam-splitter, PM: parabolic mirror, OL: objective lens, CL: convex lens, FS: field stop, O: object. (b) Experimental setup of the reflective FPM. (c) The LED array for the darkfield illuminator and the objective lens.


Prior to measuring the object, the intensity of the incident beam of each LED was measured by placing a photodetector at the object position. The intensity data were applied to the FPM reconstruction. After the setup was completed, the object was placed, and 88 images with low resolution corresponding to 88 LEDs were acquired by the image sensor. Although the objective lens has a FOV of 2.4 mm in diameter, the actual FOV of the acquired images is limited to 1.33 mm due to the image sensor size (13.3 mm×13.3 mm).

3.2 Reconstruction of high-resolution images

The imaging system can simply be described as imposing a transfer function on the object field in the Fourier domain. Therefore, the low-resolution image acquired with the kth LED can be represented by Ik(r) = |F{P(f)O(f − fk)}|², where F denotes the Fourier transform operator, and P and O are the transfer function and the spectrum of the object field, respectively. fk represents the spectral shift caused by the angle of incidence on the object, which is the source of frequency information beyond NAobj/λ.
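On a discrete grid, this forward model can be sketched as follows; the array conventions and function name are ours, and the choice of forward versus inverse transform from the Fourier plane to the image plane is only a convention.

```python
import numpy as np

def lowres_image(obj_spectrum, pupil, shift_px):
    """Simulate I_k(r) = |F{P(f) O(f - f_k)}|^2 on a discrete grid.
    obj_spectrum: centered FFT of the high-resolution object field (M x M, complex).
    pupil       : transfer function P on the low-resolution grid (N x N).
    shift_px    : (sy, sx), spectral shift f_k of the k-th LED expressed in pixels,
                  i.e. (sin(theta)/lambda) divided by the spectral sampling interval."""
    N = pupil.shape[0]
    cy, cx = obj_spectrum.shape[0] // 2, obj_spectrum.shape[1] // 2
    sy, sx = shift_px
    # Crop the shifted sub-spectrum seen through the pupil.
    sub = obj_spectrum[cy + sy - N // 2: cy + sy + N // 2,
                       cx + sx - N // 2: cx + sx + N // 2] * pupil
    # Propagate to the image plane and take the intensity.
    field = np.fft.ifft2(np.fft.ifftshift(sub))
    return np.abs(field) ** 2
```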

The phase retrieval process is achieved by applying constraints on the image plane and the Fourier plane during an iterative process. The constraint on the image plane is that the field should have the same intensity as the measured image Ik, and the constraint on the Fourier plane is the transfer function P. Reconstruction of a high-resolution image begins with an initial spectrum obtained by Fourier transforming the amplitude of the interpolated on-axis image (square root of I1) with a uniform phase. Then, in this extended Fourier domain, a low-pass filter is applied to the area corresponding to the incident angle of the 2nd LED beam, and a Fourier transform is performed so that the amplitude can be replaced with the measured amplitude (square root of I2) while maintaining its phase. This field is inverse Fourier transformed to update the corresponding region of the extended spectrum. The process is repeated for the remaining low-resolution images, whose spectral regions overlap with the adjacent, already-updated regions in the Fourier domain. This constitutes a single iteration, and at least 5 iterations are performed to converge to the true value.
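A minimal sketch of this iterative update is given below, assuming the forward model above; normalization constants, sub-pixel shifts, and the pupil-recovery and segment-stitching refinements discussed next are omitted, and the function and variable names are ours.

```python
import numpy as np

def fpm_reconstruct(images, shifts, pupil, upsample=4, n_iter=5):
    """Basic FPM phase retrieval as described above.
    images : measured low-resolution intensities (N x N), ordered from on-axis outward.
    shifts : spectral shift (sy, sx) in pixels for each LED.
    pupil  : binary circular transfer function on the N x N grid."""
    N = images[0].shape[0]
    M = N * upsample
    # Initial spectrum: FFT of the interpolated on-axis amplitude with uniform phase.
    init = np.kron(np.sqrt(images[0]), np.ones((upsample, upsample)))
    spectrum = np.fft.fftshift(np.fft.fft2(init))
    cy = cx = M // 2
    for _ in range(n_iter):
        for I_k, (sy, sx) in zip(images, shifts):
            ys = slice(cy + sy - N // 2, cy + sy + N // 2)
            xs = slice(cx + sx - N // 2, cx + sx + N // 2)
            # Low-pass filter the region corresponding to the k-th illumination angle.
            sub = spectrum[ys, xs] * pupil
            field = np.fft.ifft2(np.fft.ifftshift(sub))
            # Image-plane constraint: replace the amplitude, keep the phase.
            field = np.sqrt(I_k) * np.exp(1j * np.angle(field))
            new_sub = np.fft.fftshift(np.fft.fft2(field))
            # Fourier-plane constraint: update only within the pupil support.
            spectrum[ys, xs] = np.where(pupil > 0, new_sub, spectrum[ys, xs])
    return np.fft.ifft2(np.fft.ifftshift(spectrum))   # complex high-resolution object field
```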

The above process assumes that the transfer function P of the optical system is a circle function. However, the acquired low-resolution images include loss and distortion of the actual object field, depending on the characteristics of the optical system. If the transfer function is distorted by aberrations, convergence becomes difficult because the object information is measured differently according to the illumination angle. This can be resolved by compensating with a pre-measured transfer function, but that is quite cumbersome. A useful method proposed by Ou et al. [9] recovers the transfer function without any pre-measurement or heavy computational burden, and it was applied to the results in this paper. Additionally, considering that the transfer function varies spatially and the coherence area of the LED is finite, the images were divided into small segments, processed individually, and then stitched [29].
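The segment-wise processing can be sketched as follows; the tile size, overlap, and blending window are illustrative choices rather than the values used in the paper, and reconstruct stands for any per-tile FPM routine such as the sketch above.

```python
import numpy as np

def tile_and_stitch(images, shifts, pupil, reconstruct, tile=256, overlap=64, upsample=4):
    """Split each low-resolution image into overlapping tiles, reconstruct each tile
    independently (aberrations and illumination angles are approximately constant within
    a tile), and stitch the complex results with a smooth blending window.
    In practice the spectral shifts may be recomputed per tile, since the illumination
    angle varies with position; edge tiles are ignored here for brevity."""
    H, W = images[0].shape
    step = tile - overlap
    out = np.zeros((H * upsample, W * upsample), dtype=complex)
    weight = np.zeros((H * upsample, W * upsample), dtype=float)
    win1d = np.hanning(tile * upsample)
    win = np.outer(win1d, win1d) + 1e-6          # blending window; offset avoids zero weights
    for y0 in range(0, H - tile + 1, step):
        for x0 in range(0, W - tile + 1, step):
            patch = [im[y0:y0 + tile, x0:x0 + tile] for im in images]
            rec = reconstruct(patch, shifts, pupil, upsample)      # complex high-res tile
            ys = slice(y0 * upsample, (y0 + tile) * upsample)
            xs = slice(x0 * upsample, (x0 + tile) * upsample)
            out[ys, xs] += rec * win
            weight[ys, xs] += win
    return out / np.maximum(weight, 1e-6)
```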

3.3 Noise reduction

FPM is very sensitive to noise, which can produce speckle-like patterns in the reconstructed image. If the SNR is not high enough, the algorithm may not converge. In the case of the BI, the field stop blocks unwanted light before it enters the imaging path. In the case of the DI, two points should be noted when using LEDs with a large divergence angle. The first is stray light that enters the objective lens directly from the LEDs. Although the objective lens is designed to block beams beyond its NA, such light can be internally reflected several times and reach the sensor, and even a small amount of stray light can adversely affect the image quality of the FPM. The noise from stray light is quantitatively analyzed in Appendix B. Because the stray light has a path-length difference from the signal beam that exceeds the coherence length of the LED (several micrometers), it adds incoherently and can simply be subtracted. Therefore, we subtracted a noise image taken without the sample from the sample image for each LED.

The second point is that, unlike the stray light entering the lens directly, the broad beam illuminating the area outside the FOV is scattered into the objective lens and acts as noise. We followed previous studies [30,31] to reduce this noise in the acquired images before FPM reconstruction. Before starting the algorithm, we calculated the average noise intensity in a blank area of the FOV for each image and then subtracted it from the entire area with a weighting factor. This is based on the assumption that the noise is spatially uniform within the FOV, which is reasonable given that the FOV is only 1.33 mm.
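The two subtraction steps of this section can be summarized in a short sketch; the blank-region mask and the value of the weighting factor are not specified in the text, so they appear here as placeholders.

```python
import numpy as np

def preprocess(raw, stray, blank_mask, weight=1.0):
    """Per-LED noise reduction before FPM reconstruction:
    (i)  subtract the stray-light image recorded without the sample (incoherent addition),
    (ii) subtract the mean intensity of a blank region of the FOV, scaled by a weighting
         factor, under the spatially uniform noise assumption."""
    img = raw.astype(float) - stray.astype(float)
    background = img[blank_mask].mean()           # blank_mask: boolean mask of an empty area
    return np.clip(img - weight * background, 0.0, None)
```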

3.4 Experimental results

The measurement target is a positive USAF-1951 resolution target, with chromium patterns on a quartz plate, covering Group 4 through Group 11. Figures 3(a) and 3(e) show the reconstructed high-resolution intensity image and the synthesized spectrum obtained by FPM, respectively. Figure 3(b) is a magnified image around the finest pattern, and Fig. 3(c) shows the same region when only the normal beam from the on-axis LED illuminates the target, for comparison. Figure 3(d) shows the intensity profiles of Element 1 in Group 11 (488 nm period; 2,048 lp/mm), which is clearly resolved in two perpendicular directions. This bar period is almost identical to the expected full-pitch resolution of 486 nm calculated from a NAsynth of 1.06. In contrast, in Fig. 3(c), patterns only down to Element 2 in Group 9 (870 nm period; 575 lp/mm) are resolved, in correspondence with the cutoff frequency given by NAobj/λ (544 lp/mm).


Fig. 3. (a) Reconstructed high-resolution image. Scale bar, 100 µm. (b) Magnified image around the finest pattern of the reconstructed image. Scale bar, 10 µm. The inset shows the magnified view of Element 1 in Group 11. (c) Magnified image when illuminated by a normal beam. Scale bar, 10 µm. (d) Intensity profiles for the bars in the x and y directions of Element 1 in Group 11 from (b). (e) Synthesized spectrum by FPM in logarithmic scale.


To prove the effectiveness of this system for surface inspection, we measured a high-density integrated circuit from a smartphone. Figure 4(a) shows the reconstructed image from the FPM process, and Figs. 4(b1), 4(c1), and 4(d1) show magnified images of three different positions indicated in Fig. 4(a) when only the on-axis LED is illuminated. Figures 4(b2), 4(c2), and 4(d2) show the high-resolution images obtained by the FPM process. For comparison, Figs. 4(b3), 4(c3), and 4(d3) show images obtained with a 100× objective lens having a NA of 0.8. Figure 4(b2) clearly shows patterns and a defect that were invisible under on-axis illumination. In Fig. 4(c2), we can observe dust particles (indicated by arrows) that were not seen with on-axis illumination. The area indicated by the arrow in Fig. 4(d2) shows a grating-shaped circuit pattern with a period of about 0.6 µm, which could not be resolved by the 0.8 NA lens, as shown in Fig. 4(d3).


Fig. 4. (a) Reconstructed high-resolution image of the integrated circuit. The scale bar represents 100 µm. (b1),(c1),(d1) Magnified images of the specified region obtained when only on-axis LED is illuminated. (b2),(c2),(d2) High resolution images by FPM process. (b3),(c3),(d3) Images obtained with a 100× objective lens. The scale bars in (b2), (c2), and (d2) represent 10 µm.


The FPM results are somewhat noisy because the measured images include not only the light reflected from the surface but also the light reflected from the inner-layer structure of the circuit, which affects the FPM processing. However, the result is sufficient for examining the overall quality and observing the micropatterns.

4. Summary and discussion

FPM overcomes the limitation of conventional imaging systems by achieving a wide FOV and a high spatial resolution simultaneously. Reflective FPM is useful for surface inspection, but in previously reported studies, the LEDs must be placed far from the object, and the normal beam from each LED is not used for illumination, which requires a large divergence angle and decreases the SNR. We designed and fabricated a darkfield illuminator using a parabolic mirror so that the normal beam from the LED panel is collimated and directed toward the object, resulting in a high SNR of the acquired low-resolution images.

We achieved a NAsynth of 1.06 with a 10× objective lens, which is, to our knowledge, the highest value implemented with reflective FPM. To validate the system experimentally, we acquired images of the USAF-1951 target for various illumination angles and reconstructed the high-resolution image with the FPM algorithm. From the result, a full-pitch resolution of 488 nm, corresponding to λ/NAsynth, was obtained. An integrated circuit was then measured to demonstrate its utility for surface inspection.

The parabolic mirror can be made smaller as long as it does not obstruct the imaging path or interfere with the objective lens. For the setup used in this paper, the minimum practical mirror size is estimated to be a 40 mm focal length and an 80 mm diameter; the LED panel would shrink accordingly, so a more compact system is expected.

The limited size of the image sensor causes a loss of space-bandwidth product (SBP). The best way to overcome this is to use an image sensor with a size similar to the field number of the lens (24 mm in this case). However, a scientific CMOS sensor with such a large area and a sufficiently small pixel size is very expensive. Instead, the loss can be reduced by shortening the focal length of the tube lens. Considering the cutoff frequency of the objective lens, the image sensor should satisfy the following equation to avoid aliasing:

$$p \le \frac{\lambda}{2\textrm{NA}_{\textrm{obj}}}\frac{f_{\textrm{tube}}}{f_{\textrm{obj}}},$$
where p is the pixel size of the image sensor and ftube is the focal length of the tube lens. According to this equation, the focal length of the tube lens (200 mm) can be reduced to 142 mm without aliasing, approximately doubling the SBP compared to the current configuration.
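As a quick arithmetic check of Eq. (3) with the values used here (515 nm wavelength, NAobj = 0.28, fobj = 20 mm, 6.5 µm pixels), the snippet below reproduces the 142 mm figure.

```python
wavelength = 515e-9   # m, LED peak wavelength
NA_obj = 0.28
f_obj = 20e-3         # m, objective focal length
pixel = 6.5e-6        # m, pco.edge 4.2 pixel size

# Eq. (3): maximum pixel size that avoids aliasing for a given tube-lens focal length.
p_max = lambda f_tube: wavelength / (2 * NA_obj) * f_tube / f_obj

print(f"f_tube = 200 mm -> p_max = {p_max(200e-3) * 1e6:.1f} um")   # ~9.2 um
print(f"f_tube = 142 mm -> p_max = {p_max(142e-3) * 1e6:.1f} um")   # ~6.5 um, matches the sensor pixel
```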

The proposed method can achieve a NAillum close to 1 by illuminating at angles close to 90 degrees even with a finite-size panel, and it is expected to be widely used in industries requiring precise surface inspection. The system can also be applied to the transmission type by inverting the LED panel and the parabolic mirror.

Appendix A: Ray tracing simulation of the darkfield illuminator

In this section, we show ray tracing simulation results for other objective lenses to test their applicability to our method. Figure 5 shows the ray tracing simulation results for the DI with plan semi-apochromats, performed based on the drawings provided by the manufacturer. The green beams represent the imaging beam paths calculated from the NA and FOV of each lens. The red beams represent the illumination beam paths of the DI and are designed to have the maximum polar angle while sufficiently overlapping with the outermost illumination beam of the BI. In the case of Fig. 5(a), the Olympus MPLFLN 5× with a NA of 0.15 was tested; because the imaging beam and the illumination beam are not sufficiently separated, installing a parabolic mirror may block the imaging beam and cause vignetting. In general, it is difficult to install the mirror with such low-NA lenses because of the small polar angle of the illumination beam. In the case of Fig. 5(b), the Olympus MPLFLN 10× with a NA of 0.30 was tested; the imaging beam and the illumination beam are sufficiently separated, so the mirror can be installed just under the lens. In this case, the system is compact, but there may not be enough space to place the object, and the narrow LED intervals make it difficult to fabricate the LED array. On the other hand, in the case of the Mitutoyo M Plan Apo 10× used in this research, as can be seen in Fig. 1(a), the illumination beams have polar angles large enough not to be blocked by the housing of the lens. Therefore, there is no limit on the size of the mirror, and the system can be flexibly configured.


Fig. 5. Simulation results of the ray tracing for (a) Olympus MPLFLN 5× and (b) Olympus MPLFLN 10×.


Appendix B: Noise analysis of stray light

We quantitatively analyzed the noise caused by stray light coming directly from the LEDs of the DI into the objective lens. Figures 6(a1)–6(a3) show darkfield images of the integrated circuit illuminated by a single LED of the first, second, and third rings of the DI, respectively. Figures 6(b1)–6(b3) show the stray light for each case, obtained after removing the object and its stage; the colormap was adjusted to increase visibility. Figure 6(c) shows a histogram of each darkfield image intensity, and Fig. 6(d) shows a histogram of the stray light for each ring and with all LEDs off. The histogram in Fig. 6(d) reveals the electrical noise of the image sensor, and the stray-light noise tends to increase for the inner rings. The stray-light noise is not very noticeable compared with the darkfield signal, but, especially for the first ring, it can adversely affect the FPM reconstruction, so we removed it by the method presented in Sec. 3.3.


Fig. 6. (a1)-(a3) Darkfield images of the integrated circuit when illuminated by a single LED of the (a1) first ring, (a2) second ring, and (a3) third ring of the DI, respectively. (b1)-(b3) Stray light by the LEDs corresponding to the case of (a1)-(a3). (c) The histogram of the darkfield images. (d) The histogram of the stray lights and when all LEDs are off. The scale bar in (a1) represents 100 µm.


Funding

Korea Research Institute of Standards and Science (KRISS-GP2019-0019).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

2. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015).

3. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015).

4. J. Chung, J. Kim, X. Ou, R. Horstmeyer, and C. Yang, “Wide field-of-view fluorescence image deconvolution with aberration-estimation from Fourier ptychography,” Biomed. Opt. Express 7(2), 352–368 (2016).

5. S. Sen, D. B. Desai, M. H. Alsubaie, M. V. Zhelyeznyakov, L. Molina, H. S. Sarraf, A. A. Bernussi, and L. G. de Peralta, “Imaging photonic crystals using Fourier plane imaging and Fourier ptychographic microscopy techniques implemented with a computer controlled hemispherical digital condenser,” Opt. Commun. 383, 500–507 (2017).

6. Y. Xue, S. Cheng, Y. Li, and L. Tian, “Reliable deep-learning-based phase imaging with uncertainty quantification,” Optica 6(5), 618–629 (2019).

7. A. Pan, C. Zuo, Y. Xie, M. Lei, and B. Yao, “Vignetting effect in Fourier ptychographic microscopy,” Opt. Lasers Eng. 120, 40–48 (2019).

8. M. Zhang, L. Zhang, D. Yang, H. Liu, and Y. Liang, “Symmetrical illumination based extending depth of field in Fourier ptychographic microscopy,” Opt. Express 27(3), 3583–3597 (2019).

9. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014).

10. J. Chung, G. W. Martinez, K. C. Lencioni, S. R. Sadda, and C. Yang, “Computational aberration compensation by coded-aperture-based correction of aberration obtained from optical Fourier coding and blur estimation,” Optica 6(5), 647–661 (2019).

11. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016).

12. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017).

13. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018).

14. L. Bian, J. Suo, G. Situ, G. Zheng, F. Chen, and Q. Dai, “Content adaptive illumination for Fourier ptychography,” Opt. Lett. 39(23), 6648–6651 (2014).

15. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014).

16. S. Li, Y. Wang, W. Wu, and Y. Liang, “Predictive searching algorithm for Fourier ptychography,” J. Opt. 19(12), 125605 (2017).

17. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006).

18. V. Mico, Z. Zalevsky, P. García-Martínez, and J. García, “Synthetic aperture superresolution with multiple off-axis holograms,” J. Opt. Soc. Am. A 23(12), 3162–3170 (2006).

19. T. M. Kreis and K. Schluter, “Resolution enhancement by aperture synthesis in digital holography,” Opt. Eng. 46(5), 055803 (2007).

20. T. R. Hillman, T. Gutzler, S. A. Alexandrov, and D. D. Sampson, “High-resolution, wide-field object reconstruction with synthetic aperture Fourier holographic optical microscopy,” Opt. Express 17(10), 7873–7892 (2009).

21. S. Pacheco, B. Salahieh, T. Milster, J. J. Rodriguez, and R. Liang, “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett. 40(22), 5343–5346 (2015).

22. S. Pacheco, G. Zheng, and R. Liang, “Reflective Fourier ptychography,” J. Biomed. Opt. 21(2), 026010 (2016).

23. K. Guo, S. Dong, and G. Zheng, “Fourier ptychography for brightfield, phase, darkfield, reflective, multi-slice, and fluorescence imaging,” IEEE J. Sel. Top. Quantum Electron. 22(4), 77–88 (2016).

24. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “Variable-illumination Fourier ptychographic imaging devices, systems, and methods,” U.S. patent 9,497,379 (2016).

25. S. Sen, I. Ahmed, B. Aljubran, A. A. Bernussi, and L. G. de Peralta, “Fourier ptychographic microscopy using an infrared-emitting hemispherical digital condenser,” Appl. Opt. 55(23), 6421–6427 (2016).

26. M. Alotaibi, S. Skinner-Ramos, A. Alamri, B. Alharbi, M. Alfarraj, and L. G. de Peralta, “Illumination-direction multiplexing Fourier ptychographic microscopy using hemispherical digital condensers,” Appl. Opt. 56(14), 4052–4057 (2017).

27. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018).

28. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23(5), 6171–6180 (2015).

29. L. Tian and L. Waller, “3D intensity and phase imaging from light field measurements in an LED array microscope,” Optica 2(2), 104–111 (2015).

30. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014).

31. Y. Zhang, A. Pan, M. Lei, and B. Yao, “Data preprocessing methods for robust Fourier ptychographic microscopy,” Opt. Eng. 56(12), 123107 (2017).
