Optica Publishing Group

All-in-focus large-FOV GRIN lens imaging by multi-focus image fusion

Open Access

Abstract

Gradient refractive index (GRIN) lenses are useful for miniaturized and in vivo imaging. However, the intrinsic field-dependent aberrations of these lenses deteriorate imaging resolution and limit the effective field of view. To correct these aberrations, adaptive optics (AO) has been applied, which inevitably requires additional hardware. Here we focus on field curvature aberration and propose a computational correction scheme that fuses a z-stack of images into a single in-focus image over the entire field of view (FOV), with no AO required. We validate our method by all-in-focus wide-field imaging of a printed letter sample and fluorescently labeled mouse brain slices. The method can also provide what we believe to be a new and valuable option for imaging enhancement in the scanning-modality use of GRIN lens microscopy.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

GRIN lenses rely on a parabolic refractive index profile along the radial axis to achieve optical focusing, as opposed to conventional lenses that utilize refractive interfaces. This unique characteristic enables the fabrication of miniature and lightweight GRIN lenses, leading to their widespread application in fields that prioritize compactness and low invasiveness, e.g., confocal and multiphoton micro-endoscopy [1,2], as well as miniaturized microscopy [3,4]. However, these advantages are accompanied by significant shift-variant aberrations caused by the refractive index profile and fabrication imperfections. Aberrations such as field curvature, coma, astigmatism, spherical aberration, and chromatic aberration are commonly observed.

To address these aberrations, usually at a specific field position in the FOV, adaptive optics (AO) [5–8] is a common solution, employing direct [1] or indirect [9] wavefront sensing methods based on considerations such as compactness, cost, speed, and sample properties regarding emission and scattering [10]. Although AO can handle field-dependent aberrations [2,11], a severe issue for GRIN lens imaging, by applying multiple corrections at different field positions, it requires additional hardware and complicates the imaging system. In this study, we provide a method to improve focus quality over a large FOV. Specifically, we focus on field curvature, show its significant impact, characterize its aberration features, and accordingly propose a multi-focus image fusion (MFIF) [12,13] algorithm that improves focus quality throughout a large FOV without any additional elements.

2. Method

In our imaging system, we mounted our GRIN lens (GRIN TECH, GT-IFRL-200-005-50-C1, NA: 0.5, paraxial magnification: 2.86) onto an inverted microscope (Nikon, ECLIPSE Ti2). To ensure a convenient and reliable mount, we glued the GRIN lens (Fig. 1(a)), which has a working distance (l) of 5 mm, a diameter (d) of 2 mm, a length (zl) of 5.84 mm, and a viewing angle (θ) of 30°, to a coverslip.


Fig. 1. Optical setup and aberration illustration. (a), the mount of the GRIN lens onto an inverted microscope. (b), image of a nanohole array. The PSFs at P1 and P3 are severely defocused due to the field curvature. Focus quality at these positions can be improved by axial shift, decreasing the distance l by 150 µm and 130 µm respectively in this case. Scale bar: 50 µm. (c), maximum projection of the color-coded 3D PSF at position P3. Scale bar: 10 µm. (d), cross section along the white line in (c). Scale bar: 50 µm. (e), detection of PSFs in (b). (f), PSF analysis: blue circles represent the standard deviation σ of Gaussian fitting for PSFs at the detected positions in (e), including a surface fit vs. field position, and a contour plot.


First, to visualize and analyze aberrations, we use a 10x objective (NA: 0.45) and perform GRIN lens imaging of a nanohole array (50-by-50 holes with a diameter of 200 nm and an interval of 20 µm), using LED illumination and a multiband filter (Chroma 89902). This allows us to capture PSFs (Fig. 1(b)) at various field positions throughout the FOV. Notably, compared to PSFs at positions close to the FOV center, such as P2, the PSFs at the FOV edges, namely P1 and P3, appear severely defocused because of the strong field curvature (also see the 3D PSF in Fig. 1(c) and the axial PSF in Fig. 1(d)). Next, we introduce axial shifts to the nanohole array by gradually decreasing the distance l between the sample and the GRIN lens, yielding better-focused PSF 1’ at P1 and PSF 3’ at P3, compared with their counterparts before the shifts. These two newly acquired PSFs also exhibit a lateral shift toward the FOV center, which will be analyzed in detail in the magnification-matching step of our image fusion algorithm.

The implementation above leads to several implications. First, the GRIN lens exhibits strong field-dependent aberrations that intensify with increasing radius, which can be quantitatively analyzed by Gaussian fitting (Fig. 1(e-f)). Second, field curvature plays a significant role in the overall aberration and its field dependence. The observation that axial shifts can bring originally defocused PSFs at the FOV edges into focus suggests that the focused PSFs, in the image domain, are located on a curved surface rather than on a plane. Visualization 1 clearly demonstrates that, starting from the FOV center, the in-focus region gradually shifts outward upon the axial shift. The improvement in PSF 1’ and PSF 3’, compared with PSF 1 and PSF 3, demonstrates the benefit of correcting field curvature. Moreover, PSF 1’ and PSF 3’ resemble PSF 2 more closely in shape than PSF 1 and PSF 3 do, indicating the mitigation of field dependence. Third, multiple-plane sampling proves highly effective in mitigating the impact of field curvature. Lastly, the GRIN lens exhibits depth-dependent magnification [14], as evidenced by the lateral shift of the PSFs (PSF 1’ and PSF 3’) upon axial shifts.
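
To make the width analysis concrete, the σ values plotted in Fig. 1(f) can be approximated without a full Gaussian fit by intensity-weighted second moments of each detected spot. The following sketch is illustrative only (the function name and the isotropy assumption are ours, not taken from the paper):

```python
import numpy as np

def psf_sigma(psf):
    """Estimate a per-axis Gaussian width from intensity-weighted second
    moments; a moment-based stand-in for the Gaussian fit in Fig. 1(f).
    Assumes a roughly isotropic, background-free spot."""
    psf = psf.astype(float) - psf.min()   # crude background removal
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = (ys * psf).sum() / total         # intensity-weighted centroid
    cx = (xs * psf).sum() / total
    var = (((ys - cy) ** 2 + (xs - cx) ** 2) * psf).sum() / total
    return float(np.sqrt(var / 2))        # 2D radial variance = 2*sigma^2
```

Applied to every spot detected in Fig. 1(e), such per-spot widths can then be surface-fitted against field position as in Fig. 1(f).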

To address the field curvature issue, we propose to fuse a z-stack of images with different in-focus regions to enhance the focus quality of GRIN lenses throughout the entire FOV. This approach draws inspiration from the MFIF concept in computer vision, which combines multiple images with different in-focus regions into a single all-in-focus image [12]. In traditional MFIF research, focus metrics are typically employed to guide the fusion process in either the spatial domain or other transformed domains. However, it is challenging to find a robust and universally applicable focus measurement metric. As a result, task-oriented focus measures, such as energy in the gradient, Laplacian, or Fourier domain, are often necessary [10].
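
As a generic illustration of such metrics (a sketch of two standard focus measures, not the specific implementation used in this work), gradient and Laplacian energies take only a few lines of numpy:

```python
import numpy as np

def gradient_energy(img):
    """Sum of squared first differences; larger means sharper."""
    gy, gx = np.gradient(img.astype(float))
    return float((gx ** 2 + gy ** 2).sum())

def laplacian_energy(img):
    """Energy of the discrete 5-point Laplacian; larger means sharper."""
    f = img.astype(float)
    lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
           + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
    return float((lap ** 2).sum())
```

Both metrics drop as an image is defocused, so comparing them across a z-stack flags the best-focused plane for each region.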

Based on the focus characteristics of the GRIN lens, we propose a customized fusion algorithm. The image stack to be fused contains n images (Fig. 2(a)) captured at n different axial/defocus distances of the GRIN lens. Specifically, the axial distances for the first, k-th, and last images are l, l-(k-1)Δl, and l-(n-1)Δl, respectively, where Δl is the constant axial shift between any two adjacent images. Additionally, the first image has its in-focus region at the FOV center, while the last image focuses on the FOV edge. The algorithm follows three main steps (Fig. 2). Note that the parameters involved in these steps are inherent to a specific GRIN lens, and determining them with a calibration measurement enables fully automatic image fusion.


Fig. 2. MFIF algorithm for GRIN lens imaging. (a), centered image stack (only showing a quarter of the whole FOV for visual convenience). (b), zoom-in views of window b in (a). Scale bar: 50 µm. (c), rendered RGB image by setting the three images in (b) as R, G, and B channel respectively. (d), the image stack after magnification matching. (e), zoom-in views of window e in (d). (f), rendered RGB image of (e). (g), a quarter of the fused image with its zoom-in view (i). (h), the corresponding quarter of the image at z1 in (a) for comparison, with its zoom-in view (j). Scale bar: 50 µm.


2.1 Image centering

First, we ensure that the physical FOV center aligns with the centers of the images to be fused. As we decrease the axial distance, the image contracts around the FOV center. To conveniently localize this center, we use a calibration object consisting of numerous point sources, such as a nanohole array or fluorescent microspheres, and take a z-stack of images. In a sum or maximum z-projection, radial traces converging toward the FOV center can be observed. We then crop the raw experimental images around this center and obtain a centered image stack (Fig. 2(a)). Note that n is 18 and only one quarter of each image is shown for visualization convenience. Zoom-in views of the bottom-left corner (Fig. 2(b)) reveal dislocation caused by depth-dependent magnification. To visualize this discrepancy, we assign three images to the red (R), green (G), and blue (B) channels and create a rendered image (Fig. 2(c)). Addressing this mismatch is crucial prior to the fusion process.
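
A minimal sketch of this centering step (array shapes, function names, and the pre-estimated center are our assumptions) could be:

```python
import numpy as np

def max_projection(stack):
    """Maximum z-projection of an (n, H, W) calibration stack; the
    point-source traces in the projection converge toward the FOV center."""
    return stack.max(axis=0)

def center_stack(stack, center, half):
    """Crop every image to a (2*half) x (2*half) window around the
    estimated FOV center, yielding a centered stack for fusion."""
    cy, cx = center
    return stack[:, cy - half:cy + half, cx - half:cx + half]
```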

2.2 Magnification matching

According to Fig. 1(a), as the axial distance decreases, the magnification M (yi/yo) decreases accordingly, and the physical pixel size p at the object plane increases. Suppose the square region of interest on the object spans $N \times N$ pixels; then:

$$pN/2 = l\tan \theta /M$$

With a Δl decrease in the axial distance, this becomes

$$p^{\prime}N^{\prime}/2 = (l - \Delta l)\tan \theta /M^{\prime}$$
where p′, N′, and M′ are the updated pixel size, pixel number, and magnification, respectively. Since the object-plane pixel size scales inversely with the magnification, the product pM is the same in Eq. (1) and (2), and combining the two yields:
$$N^{\prime} = N - ({{\Delta l} / l})N$$

Eq. (3) shows that a fixed step Δl in the working distance results in a fixed change in the number of pixels, ΔN = (Δl/l)N. This pixel-number adjustment ensures that different images within the stack cover the same FOV. To determine ΔN in practice, we calibrate it by comparing identical object features across different images. One option is to use a nanohole array for calibration and create rendered images like Fig. 2(c, f) to help fine-tune this parameter. We then crop the image stack accordingly and numerically resize the images to match the dimensions of the first image, as depicted in Fig. 2(d). The zoom-in views (Fig. 2(e)), along with the rendered RGB image (Fig. 2(f)), exhibit aligned object features. At this stage, the stack is ready for fusion.
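
Under the linear relation of Eq. (3), the matching step amounts to cropping the k-th image (0-indexed here) to N - kΔN central pixels and resizing it back to N x N. A nearest-neighbor sketch under these assumptions (a bilinear or spline resize would work equally well; ΔN is the calibrated per-step change) is:

```python
import numpy as np

def match_magnification(stack, delta_N):
    """Undo the depth-dependent magnification of an (n, N, N) stack:
    image k (0-indexed) is cropped to its (N - k*delta_N) central pixels,
    which cover the same physical FOV as the first image (Eq. (3)),
    and then resized back to N x N."""
    n, N, _ = stack.shape
    out = np.empty((n, N, N), dtype=float)
    for k, img in enumerate(stack):
        Nk = N - k * delta_N             # side length covering the same FOV
        m = (N - Nk) // 2                # symmetric crop margin
        crop = img[m:m + Nk, m:m + Nk]
        # nearest-neighbor upsampling back to N x N
        idx = (np.arange(N) * Nk / N).astype(int)
        out[k] = crop[np.ix_(idx, idx)]
    return out
```

The first image (k = 0) passes through unchanged, matching its role as the dimensional reference.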

2.3 Image fusion

One of the main tasks in MFIF is to determine the in-focus region. In the case of the GRIN lens, this determination requires establishing a mapping relationship between the defocus distance, or axial position, and the in-focus region. Detecting the curved surface of field curvature would be highly beneficial for this task, but it introduces additional complexity. Instead, we rely on prior knowledge, employ certain approximations, and simplify the determination of the in-focus region, which has proven to be effective. Specifically, we know that the in-focus region expands outward as the working distance decreases, and we assume that each axial position corresponds to a ring-shaped well-focused region (with the first one being circular). Additionally, the first image has its in-focus region at the FOV center, while the last image has it at the FOV edge. We approximate a linear relationship between these two extremes for the positions in between.

Based on the in-focus region determination, the following fusion steps are taken:

  • 1) Assign each image a best-focus radius corresponding to its in-focus ring. The radius is 0 for the first image and equal to the full FOV radius for the last image; for the images in between, the radii are equally spaced.
  • 2) Divide all the images into small patches and, from each image, select the well-focused patches around its best-focus radius; stitch the selected patches together to form the fused image.

While square segmentation is used here, alternative choices such as radial segmentation are also expected to yield favorable results.
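
The two fusion steps above can be sketched as follows (the patch size and the nearest-radius selection rule are illustrative choices of ours, applied to a centered, magnification-matched stack):

```python
import numpy as np

def fuse_by_radius(stack, patch=16):
    """Fuse an (n, N, N) stack: each patch is taken from the image whose
    best-focus radius is closest to the patch's distance from the FOV
    center. Radii are equally spaced from 0 (first image) to the FOV
    radius (last image)."""
    n, N, _ = stack.shape
    radii = np.linspace(0.0, N / 2, n)   # best-focus radius per image
    fused = np.empty((N, N), dtype=stack.dtype)
    c = (N - 1) / 2.0                     # FOV center in pixel coordinates
    for i in range(0, N, patch):
        for j in range(0, N, patch):
            # distance of the patch center from the FOV center
            r = np.hypot(i + patch / 2 - c, j + patch / 2 - c)
            k = int(np.argmin(np.abs(radii - r)))
            fused[i:i + patch, j:j + patch] = stack[k, i:i + patch, j:j + patch]
    return fused
```

Radial (ring-shaped) segmentation would replace the square patch loop with an angular one but follow the same nearest-radius rule.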

Comparing the fused image (Fig. 2(g)) with the original image (Fig. 2(h), the first image of the stack), as well as their zoom-in views (Fig. 2(i-j)), makes it evident that the fused image exhibits improved focus across the entire FOV.

3. Results

We initially applied our method to a 2D object, specifically a transparent plastic paper containing printed digits. The image stack comprises 9 images. Comparing the original image (Fig. 3(a)) with the fused image (Fig. 3(b)) demonstrates the feasibility of our approach and the noticeable improvement in focus quality.


Fig. 3. Imaging a 2D printed object. (a), original image with zoom-in views of (c) and (e). (b), fused image with zoom-in views of (d) and (f). Scale bars in (b), (d) and (f) are 50 µm, 10 µm and 50 µm respectively.


We then extended our method to imaging a mouse brain slice, 20 µm thick and labeled with DAPI. In this case, we observed the CA1 region of the mouse brain and fused 8 images. Compared with the original fluorescence image (Fig. 4(a)), the fused image (Fig. 4(b)) shows sharper and clearer CA1 neurons in the off-axis area, which is also apparent in the zoom-in views.


Fig. 4. Fluorescence imaging of a 20-µm-thick mouse brain slice. (a), original image with zoom-in views of (c) and (e). (b), fused image with zoom-in views of (d) and (f). Scale bars in (b), (d) and (f) are 50 µm, 10 µm and 50 µm respectively.


Finally, we performed GRIN lens imaging of a thicker brain slice (50 µm) with two fluorescent labels, mCherry and EGFP, fusing 10 images. Compared with the original image (Fig. 5(a)), the fused image (Fig. 5(b)) shows a better signal-to-noise ratio and presents a more informative visualization of the piriform cortex of the mouse brain.


Fig. 5. Fluorescence imaging of a 50-µm-thick mouse brain slice. (a), original image with zoom-in views of (c), (e) and (g). (b), fused image with zoom-in views of (d), (f) and (h). Scale bars in (b) and (d, f, h), are 50 µm and 10 µm respectively.


4. Summary and discussion

In conclusion, this work analyzes aberrations in GRIN lens imaging and highlights the substantial impact of field curvature. Based on these characteristics, we propose an approach that captures a z-stack of images and fuses them to achieve all-in-focus imaging. In contrast to AO-based methods, our approach requires no additional hardware. We have applied this method to a variety of samples, including a printed letter sample and mouse brain slices labeled with DAPI, mCherry, and EGFP. Comparing the fused images with their original counterparts, our method demonstrates noticeable improvement in focus quality throughout the whole FOV. However, this improvement comes at the cost of additional image acquisition at other axial planes.

While here we present our image fusion technique in the context of wide-field imaging, it is also applicable, and in fact inherently more efficient, in scanning microscopy. First, this study demonstrates that, due to the strong field curvature of GRIN lenses, plane scanning actually yields samples distributed on a curved surface, which matters for tasks demanding accurate depth information. Compensating for this field curvature entails focusing the excitation spot at a different z-position for each lateral position. In this scenario, a 3D scanning pattern (distributed on the curved focal surface, as shown in Fig. 1(a)) corresponds to a single 2D image layer. Second, combining our method’s focus enhancement with proper control of illumination power offers opportunities to gain more information and extend the valid FOV: scanning microscopy allows simple control of field-dependent illumination power, which can be increased at the FOV edge while minimizing photobleaching at the FOV center. Third, combining our method for field curvature with adaptive optics for the remaining aberrations could yield a hybrid aberration-correction technique for GRIN lens imaging, pushing the imaging resolution toward the diffraction limit.
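
As a toy illustration of such a field-curvature-compensated scan (assuming the same linear radius-to-defocus mapping used for the fusion radii; the maximum offset dz_max and the function name are our assumptions, with the true profile to come from calibration), the per-pixel axial offset could be generated as:

```python
import numpy as np

def curved_scan_offsets(N, dz_max):
    """Axial offset for each lateral scan position over an N x N field:
    0 at the FOV center, dz_max at the FOV edge, linear in radius
    (the same linear approximation used for the fusion radii)."""
    ys, xs = np.indices((N, N))
    c = (N - 1) / 2
    r = np.hypot(ys - c, xs - c)          # radial distance from FOV center
    return dz_max * np.clip(r / (N / 2), 0, 1)
```

Feeding these offsets to the axial focusing element during the lateral scan would place every excitation spot on the curved focal surface.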

Future research directions include the application to scanning microscopy and image fusion algorithms based on deep learning.

Appendix

Mice and ethics: We used adult C57Bl/6J female wildtype mice (Envigo), 6-8 weeks old. All mice were group-housed under standard conditions and provided standard chow and water ad libitum. All experimental procedures followed the National Institutes of Health guidelines and were approved by the Institutional Animal Care and Use Committee (IACUC) of the Technion - Israel Institute of Technology.

Fluorophore labelling and tissue preparation: For mCherry and EGFP labelling, we used retro-AAVs ssAAV-retro/2-hEF1α-mCherry-WPRE-bGHp(A) (Cat. #v212) and ssAAV-retro/2-hSyn1-chI-EGFP_2A_iCre-WPRE-SV40p(A) (Cat. #146) (VVF Zurich, Switzerland). Mice were anesthetized and maintained under anesthesia using isoflurane (SomnoSuite, Kent Scientific Corporation, Torrington, CT, USA; 0.2%). Each animal was placed on a stereotactic rig (Neurostar, Kopf Instruments) with its body temperature maintained at 37°C; ophthalmic ointment (Duratears, Alcon Couvreur NV, Belgium) was applied, the head was shaved, the scalp disinfected, the skull exposed, and the bregma-lambda distance recorded to correct coordinates. RetroAAV tracer viruses were unilaterally injected into the MOB (AP: 4.5 mm, ML: 0.5 mm, DV: 1.8 mm) and BLA (AP: −1.46 mm, ML: 3.10 mm, DV: 4.95 mm). Two weeks post-surgery, mice were sacrificed with an overdose of Ketamine/Xylazine, followed by transcardial perfusion with PBS and 4% PFA. Brains were extracted and prepared for cryosectioning. We collected 50 µm cryosections on slides, as well as 20 µm cryosections that were counterstained with Hoechst 33342 (H3570, Invitrogen) and mounted. Images were acquired with a 10x objective on an inverted microscope (Nikon, ECLIPSE Ti2). In the 50 µm sections, we observed the ipsilateral piriform cortex, where cells were labelled with both EGFP and mCherry. In the 20 µm sections, we observed CA1 cells stained with Hoechst 33342.

Funding

Zuckerman Foundation; HORIZON EUROPE European Research Council (802567).

Acknowledgment

This work was funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 802567, ERC-Five-Dimensional Localization Microscopy for Sub-Cellular Dynamics, and by the Zuckerman Foundation.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. W. M. Lee and S. H. Yun, “Adaptive aberration correction of GRIN lenses for confocal endomicroscopy,” Opt. Lett. 36(23), 4608–4610 (2011). [CrossRef]  

2. C. Wang and N. Ji, “Pupil-segmentation-based adaptive optical correction of a high-numerical-aperture gradient refractive index lens for two-photon fluorescence endoscopy,” Opt. Lett. 37(11), 2001–2003 (2012). [CrossRef]  

3. K. Yanny, N. Antipa, W. Liberti, et al., “Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy,” Light: Sci. Appl. 9(1), 171 (2020). [CrossRef]  

4. J. Greene, Y. Xue, J. Alido, et al., “Pupil engineering for extended depth-of-field imaging in a fluorescence miniscope,” Neurophotonics 10(4), 044302 (2023). [CrossRef]  

5. M. J. Booth, “Adaptive optical microscopy: the ongoing quest for a perfect image,” Light: Sci. Appl. 3(4), e165 (2014). [CrossRef]  

6. B. Ferdman, A. Saguy, D. Xiao, et al., “Diffractive optical system design by cascaded propagation,” Opt. Express 30(15), 27509–27530 (2022). [CrossRef]  

7. J. Mertz, H. Paudel, and T. G. Bifano, “Field of view advantage of conjugate adaptive optics in microscopy applications,” Appl. Opt. 54(11), 3498–3506 (2015). [CrossRef]  

8. F. Bortoletto, C. Bonoli, P. Panizzolo, et al., “Multiphoton Fluorescence Microscopy with GRIN Objective Aberration Correction by Low Order Adaptive Optics,” PLoS One 6(7), e22321 (2011). [CrossRef]  

9. N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-resolution imaging in biological tissues,” Nat. Methods 7(2), 141–147 (2010). [CrossRef]  

10. N. Ji, “Adaptive optical fluorescence microscopy,” Nat. Methods 14(4), 374–380 (2017). [CrossRef]  

11. C. Wang and N. Ji, “Characterization and improvement of three-dimensional imaging performance of GRIN-lens-based two-photon fluorescence endomicroscopes with adaptive optics,” Opt. Express 21(22), 27142–27154 (2013). [CrossRef]  

12. Y. Liu, L. Wang, J. Cheng, et al., “Multi-focus image fusion: A Survey of the state of the art,” Inf. Fusion 64, 71–91 (2020). [CrossRef]  

13. X. Zhang, “Deep learning-based multi-focus image fusion: a survey and a comparative study,” IEEE Trans. Pattern Anal. Mach. Intell. 44(9), 4819–4838 (2021). [CrossRef]  

14. B. Engelhard, J. Finkelstein, J. Cox, et al., “Specialized coding of sensory, motor and cognitive variables in VTA dopamine neurons,” Nature 570(7762), 509–513 (2019). [CrossRef]  

Supplementary Material (1)

Visualization 1: z-stack of a nanohole array taken with a GRIN lens.

