
Brightfield, fluorescence, and phase-contrast whole slide imaging via dual-LED autofocusing

Open Access

Abstract

Whole slide imaging (WSI) systems convert conventional biological slides into digital images. Existing commercial WSI systems usually require an expensive high-performance motorized stage for precise mechanical control, and the cost is prohibitive for most individual pathologists. In this work, we report a low-cost WSI system built from off-the-shelf components, including a computer numerical control (CNC) router, a photographic lens, a programmable LED array, a fluorescence filter cube, and a surface-mount LED. To perform real-time single-frame autofocusing, we use two elements of a programmable LED array to illuminate the sample from two different incident angles. The captured image then contains two copies of the sample with a separation determined by the defocus distance, and the defocus distance can be recovered by identifying the translational shift between the two copies. The reported WSI system reaches a resolution of ∼0.7 µm. The time to determine the optimal focus position for each tile is only 0.02 s, about an 83% improvement over our previous work. We quantified the focusing performance on 1890 different tissue tiles; the mean focusing error is ∼0.34 µm, well below the ±0.7 µm depth-of-field range of our WSI system. The reported system can handle both semitransparent and transparent samples, allowing us to demonstrate brightfield, fluorescence, and phase-contrast WSI. An automatic digital distortion correction strategy is also developed to avoid stitching errors. The affordable cost of the reported prototype can make it broadly available to individual pathologists and can promote the development of digital pathology.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Pathologists generally use a conventional optical microscope to analyze pathology slides. In the routine diagnostic process, to view a tissue feature clearly, a pathologist must manually adjust the focus knob of the microscope to reach the focal plane of the slide, and must also manually move the stage to different positions to view regions of interest. This traditional slide-reviewing process is labor-intensive and has low workflow efficiency, a significant disadvantage when diagnosing a large number of tissue slides [1]. To reduce the demand for labor and better understand the biological mechanisms of the disease process, WSI systems have been designed to replace the traditional microscope. WSI, also known as virtual microscopy, refers to scanning a conventional tissue slide to create a high-resolution digital image. The digital image of the slide can then be efficiently viewed, analyzed, stored, and shared with other pathologists.

A typical WSI system uses a high numerical aperture (NA) objective lens to capture the high-resolution digital image. Due to the high NA, the depth of field of a WSI system is typically on the micron level, posing a challenge for accurate focusing during the scanning process [2,3]. Existing autofocusing techniques can be roughly categorized into three groups: 1) pre-scan focus map, 2) real-time reflective autofocusing, and 3) real-time image-based autofocusing [1]. Creating a focus map prior to the scanning process is common in many commercial WSI systems. For each point on the map, a z-stack must be acquired by moving the sample to different axial positions; the best focus position is then inferred from an optimal figure of merit [4–6]. However, surveying the focus positions for every tile of the tissue slide consumes a lot of time. Besides, a mechanical system with high positional accuracy and repeatability is required to survey the focus map, which increases the cost of the entire WSI system. Reflective autofocusing techniques correct the focus drift of the system by repetitively finding the axial location of a reference plane and maintaining a constant distance between the objective lens and that plane [7–10], but they do not work well when the sample departs from that surface due to the natural tissue topography variations above the glass [3]. Real-time image-based autofocusing approaches need no focus map and can handle samples with varied topography; they include 1) independent dual-sensor scanning [2], 2) beam splitter arrays [11], 3) tilted sensors [12], 4) phase detection [13–15], 5) deep learning approaches [16–21], and 6) dual-LED illumination [22–26]. The first four methods rely on an additional optical path with different sensor configurations to acquire an image for tracking the defocus distance of the sample. The additional optical hardware can cause alignment issues, makes the system more complicated, and does not address the cost issue. Deep learning-based approaches allow single-frame autofocusing and require no additional optical hardware, but the relatively short focusing range and the need to retrain for specimen types not seen during training may limit their use in real applications. Dual-LED illumination-based autofocusing methods 1) enable real-time single-frame autofocusing [22,23,26], 2) can be performed with continuous sample motion [24,25], and 3) can be implemented with a cost-effective design [26]. However, they also have issues: 1) some require an additional camera and optical hardware [22,24], 2) focus map surveying is time-consuming [25], and 3) some cannot work with transparent samples [23,26]. Some commercial WSI systems only work in brightfield mode and perform poorly on transparent or low-contrast samples. A few commercial WSI systems work in both brightfield and fluorescence modes, but their cost is prohibitive for most individual pathologists.

Our recent work reported a cost-effective and high-throughput WSI system based on single-frame autofocusing and color-multiplexed illumination [26]. We used a red and a green LED to illuminate the sample and generate a translational shift between the camera's red and green channels. The gradient of the mutual information metric was used to identify this shift and recover the defocus distance of the sample. Compared to that work, this work makes four key improvements. First, whereas our previous system could only perform brightfield WSI, the reported system works in brightfield, fluorescence, and phase-contrast modes. In the reported system, we tested a Green Fluorescent Protein (GFP)-stained slide, and the corresponding GFP filter was needed in fluorescence mode. The filter would block the light emitted from the red LED; to avoid this blockage, we used two green LEDs for sample illumination in the autofocusing process. Second, we used the autocorrelation function instead of mutual information to analyze the defocus distance of the sample. The autocorrelation computation takes ∼0.02 s, while the mutual information computation takes ∼0.12 s, an ∼83% improvement in the time to determine the optimal focus position. Third, we apply an offset distance to the photographic lens before capturing each tile to handle transparent specimens. As such, we can perform brightfield, fluorescence, and phase-contrast WSI, significantly improving the imaging ability compared to our previous system. Fourth, we developed an automatic pincushion distortion correction strategy to tackle stitching errors. The photographic lens causes minor pincushion distortion at the edge of the captured image. In the previous work, we used a hole-array mask to measure the distortion: based on the difference between the distorted image captured through the photographic lens and a ground-truth image acquired under a standard microscope, the pincushion distortion could be corrected. This approach is limited when a hole-array mask is hard to obtain or make, or when no standard microscope is available. In our new strategy, multiple measurements of a specimen are captured using the reported platform and used to create a ground-truth image, replacing the one acquired under a standard microscope with a standard mask. The created ground-truth image is then used to fit the distortion coefficients with a mapping equation. This strategy does not rely on any standard mask or microscope; the reported platform can perform the correction process entirely by itself.

To merge the merits of existing WSI systems and avoid their disadvantages, in this work we have developed a low-cost DIY WSI system based on off-the-shelf components, including a computer numerical control (CNC) router, a photographic lens, a programmable LED array, a fluorescence filter cube, and a surface-mount LED. The reported system needs no second optical path or additional camera for the autofocusing process, and no additional alignment is needed. Real-time single-frame autofocusing is performed by dynamically tracking the focus position of the sample during scanning. Since no focus map is needed, there is no requirement for mechanical repeatability, allowing us to build a three-axis scanning platform using low-cost hardware. Our platform can handle both semitransparent and transparent samples, and brightfield, fluorescence, and phase-contrast WSI have been demonstrated. Such an affordable and capable microscopy imaging tool may help many individual pathology researchers, biologists, and microscopists.

2. Prototype and single-frame autofocusing via dual-LED illumination

The prototype of our low-cost WSI system is shown in Fig. 1(a). We used a Nikon 20X, 0.75 NA objective lens and a Canon 100 mm photographic lens to form a microscope system. We used the photographic lens instead of a conventional microscope tube lens to implement precise z-axis scanning and to reduce the cost of the system. A color camera was used for brightfield imaging, and a monochrome camera was used for fluorescence and phase-contrast imaging (DFK 33UX183 and DMK 33UX183, The Imaging Source). An 8 by 8 programmable LED array (Adafruit) provided sample illumination for brightfield imaging, phase-contrast imaging, and the autofocusing process. For fluorescence imaging, we developed a plug-and-play filter cube (Fig. 1(b1)). We placed it in the region between the objective lens and the tube lens (the infinity space) to form an episcopic illumination configuration. We designed 3D-printed parts to connect the fluorescence filter cube to an externally threaded coupler (CMT10, Thorlabs) and used a locking ring to fix the position of this coupler. We can then thread this coupler into a compatible lens tube (SM2A6, Thorlabs) attached to the front of the photographic lens. With such a design, on the one hand, the position of the fluorescence filter cube is fixed and vibration-free during sample scanning; on the other hand, the cube is easy to remove for brightfield acquisition, and cubes with different working wavelengths can easily be swapped in. A surface-mount white LED (XLamp CXB2540, CREE) was attached to a heatsink and used as the illuminator for fluorescence imaging (Fig. 1(b2)). A cooling fan was positioned at the back of the surface-mount LED, and a 50-mm Nikon photographic lens (f/1.8D, as a collector) was used to build the fluorescence illumination path (Fig. 1(b3)). A low-cost CNC router (Mysweety CNC router kit, Amazon) was modified for 3D sample positioning in our prototype. We performed coarse axial adjustment using the CNC router and fine adjustment using the ultrasonic motor ring of the photographic lens. The motors of the CNC router are driven by an Arduino board that communicates with the computer via serial commands. It is worth noting that we placed a Sorbothane isolation pad under the y-scanning stage to significantly reduce the vibration caused by fast stage actuation (Fig. 1(c)). We tested the system resolution using a USAF target; the system can resolve group 10, element 4, with a 0.35 µm half-pitch line width.
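The scanning stage in this design is driven over a plain serial link. As a minimal illustration, the sketch below sends absolute x-y moves in G-code, assuming the Arduino runs GRBL-style firmware; the port name, baud rate, and feed rate are placeholders rather than values from the paper.

```python
import time
import serial  # pyserial

def move_stage(cnc, x_mm, y_mm, feed_mm_min=500):
    """Send one absolute x-y move and read the firmware's reply lines."""
    for cmd in (b"G21\n",                                  # millimeter units
                b"G90\n",                                  # absolute positioning
                f"G1 X{x_mm:.3f} Y{y_mm:.3f} F{feed_mm_min}\n".encode()):
        cnc.write(cmd)
        cnc.readline()                                     # GRBL answers 'ok' per line

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=2) as cnc:
        time.sleep(2)                                      # controller resets on connect
        move_stage(cnc, x_mm=1.2, y_mm=0.8)
```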


Fig. 1. (a) The low-cost brightfield, fluorescence, and phase-contrast WSI system prototype. The inset is an expanded view of the fluorescence filter cube. (b1) The fluorescence filter cube and the designed 3D-printed parts constitute a plug-and-play component, which can easily be replaced by threading it into or out of an SM1-threaded lens tube. (b2) Surface-mount white LED. An e-switch MOSFET module connected to the Arduino board quickly turns the surface-mount LED on or off. (b3) Fluorescence illumination path. The surface-mount LED was placed at the back focal plane of a Nikon photographic lens, and a cooling fan was attached to the back of the surface-mount LED to remove heated air. (c) System integration, giving more details of the prototype.


If the sample is placed at an out-of-focus position, the captured image contains two copies of the sample (Fig. 2(a1)). The translational shift between these two copies is proportional to the defocus distance (Fig. 2(b1)-(b3)). The autofocusing scheme identifies this translational shift and then recovers the defocus distance from it. As shown in Fig. 2(a2), we calculate the autocorrelation of the image containing the two copies, and the separation x0 can be recovered from the distance between the two first-order peaks in Fig. 2(a3). Figure 2(c1) shows the relationship between the translational shift of the two copies and the lens ring position. Once we identify the translational shift between the two copies, we can recover the corresponding lens ring position from the curve in Fig. 2(c1). To infer the real defocus distance, we also measured the calibration curve between the lens ring positions and the defocus positions of the sample (Fig. 2(c2)). In this calibration process, we mounted the objective lens on a precise mechanical stage (ASI LS-50), moved the ultrasonic motor to different positions, and moved the objective lens back to the in-focus position using the precise mechanical stage. We set the illumination NA to ∼0.4. Although a larger illumination NA leads to a larger distance between the two copies, the content of the two copies also varies more at large illumination angles; 0.4 illumination NA is an appropriate compromise in our setting. An important point is that we set an offset distance for the Canon photographic lens before each autofocusing process. When a transparent sample is near the in-focus position, the contrast of the captured image is low and the distance between the two copies is small. In this case, the two first-order peaks of the autocorrelation do not show an obvious local maximum, making it difficult to identify the translational shift between the two copies correctly. In our implementation, we moved the ultrasonic motor ring to a pre-defined position to generate out-of-focus contrast for the transparent sample; in other words, even when the sample is in focus, there is still a translational shift between the two copies (Fig. 2(b2)).
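To make the peak-finding step concrete, here is a minimal sketch of the autocorrelation-based shift estimation, assuming the two LEDs displace the copies along the horizontal axis of a grayscale image. The autocorrelation is computed via the Wiener-Khinchin theorem, and the two calibration constants at the end are hypothetical placeholders for the measured curves of Figs. 2(c1) and 2(c2).

```python
import numpy as np

def copy_separation_px(img):
    """Estimate the pixel separation x0 between the two sample copies from
    the two first-order peaks of the image autocorrelation (Fig. 2(a2)-(a3))."""
    g = img.astype(float) - img.mean()                       # remove the DC background
    F = np.fft.fft2(g)
    ac = np.fft.fftshift(np.fft.ifft2(F * np.conj(F)).real)  # Wiener-Khinchin
    row = ac[ac.shape[0] // 2].copy()                        # line trace through the center
    c = row.size // 2
    row[c - 5:c + 6] = row.min()                             # suppress the zero-order peak
    left = int(np.argmax(row[:c]))                           # first-order peak, left side
    right = c + int(np.argmax(row[c:]))                      # first-order peak, right side
    return (right - left) / 2.0                              # the peaks sit at +/- x0

# Hypothetical linear calibrations standing in for the measured curves of
# Figs. 2(c1) and 2(c2); real coefficients come from the calibration step.
PX_PER_RING_STEP = 0.05   # copy separation (pixels) per lens-ring step
UM_PER_RING_STEP = 0.08   # defocus (micrometers) per lens-ring step

def defocus_um(img):
    """Map the measured copy separation to a defocus distance."""
    return copy_separation_px(img) / PX_PER_RING_STEP * UM_PER_RING_STEP
```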


Fig. 2. Single-frame autofocusing scheme via dual-LED illumination. (a1) The captured image with two copies of a sample. (a2) Autocorrelation function of (a1). (a3) The line trace of (a2) and the locations of the two first-order peaks. (b1)-(b3) The captured images when the ultrasonic motor ring is at different positions. The defocus distance can be recovered from the translational shift between the two copies. (c1) The calibration curve between the two-copy separation and the lens ring position (the three colored data points in (c1) correspond to the cases of (b1)-(b3)). (c2) The measured calibration curve between the lens ring positions and the defocus distances.


3. Autofocusing performance and automatic digital distortion correction

3.1 Autofocusing performance

To quantify the autofocusing performance of the reported scheme, we tested 5 different samples. For each sample, we tested 18 different sample positions. At each sample position, we acquired a z-stack from −10 µm to +10 µm with a step of 1 µm; the corresponding lens ring positions run from 100 to 360 with a step of 13 (21 defocus positions in total). The in-focus position of the sample was determined with an 11-point Brenner gradient method. We then calculated the defocus distance of each tile using the reported approach. The total number of tested tiles is 1890 (5 × 18 × 21). As shown in Fig. 3, the mean focusing error is ∼0.34 µm, well below the ±0.7 µm depth-of-field range.
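For reference, a minimal sketch of the Brenner gradient metric used to pick the ground-truth in-focus slice from each z-stack is shown below; the stack is assumed to be a list of 2-D grayscale arrays.

```python
import numpy as np

def brenner(img):
    """Brenner gradient: sum of squared intensity differences two pixels apart."""
    d = img[:, 2:].astype(float) - img[:, :-2].astype(float)
    return float(np.sum(d * d))

def best_focus_index(z_stack):
    """Index of the sharpest slice in a sequence of 2-D grayscale arrays."""
    return int(np.argmax([brenner(s) for s in z_stack]))
```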


Fig. 3. Autofocusing performance of the reported scheme. Five different samples (1890 tiles in total) were tested, including different stained and unstained samples. The average focusing error is ∼0.34 µm, well below the ±0.7 µm depth-of-field range.


3.2 Automatic digital distortion correction

In our reported system, the use of a photographic lens introduces minor pincushion distortion, leading to the stitching errors shown in Fig. 4(a). To correct this distortion, we previously used a standard microscope to capture an image of a hole-array mask and measure the pincushion distortion [26], and then corrected each captured image based on this measurement. Although this approach addresses the stitching errors in our design, the need for a hole-array mask and a standard microscope may be prohibitive when a hole-array mask is hard to obtain or no standard microscope is available. Here, we developed an automatic digital distortion correction strategy for our platform. This process needs no standard mask (such as a hole array) and no microscope to measure the distortion; all correction steps are completed using the reported platform itself. The key to replacing the hole array and the standard microscope is to acquire a ground-truth image corresponding to the captured distorted image. According to the experimental results in our previous work, the pincushion distortion mainly affects the quality at the edge of the captured image; therefore, the central area of a distorted image is equivalent to one captured under a standard microscope. By stitching the central areas of different distorted images, we can obtain a ground-truth image. We can then fit the distortion coefficients and apply them to correct the distortion of each raw captured image. In Fig. 4, we demonstrate the stitching performance with and without the reported distortion correction process; the stitching errors are eliminated after distortion correction. The details can be found in Supplement 1, and we also open-source the MATLAB code for interested readers [27].
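As a sketch of the correction step, the function below resamples an image with a one-coefficient radial model, r_d = r_u(1 + k r_u²) about the image center; this particular model is an illustrative assumption, not the paper's exact mapping equation. In practice, k would be fitted by maximizing the similarity between the corrected edge regions and the ground-truth image stitched from the tile centers.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort(img, k):
    """Resample a 2-D image with a one-coefficient radial model,
    r_d = r_u * (1 + k * r_u**2) about the image center (k > 0: pincushion)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    xn, yn = (xx - cx) / cx, (yy - cy) / cy              # normalized coordinates
    scale = 1.0 + k * (xn**2 + yn**2)                    # radial stretch factor
    xs, ys = cx + xn * scale * cx, cy + yn * scale * cy  # where each output pixel
    return map_coordinates(img, [ys, xs], order=1)       # sits in the distorted image
```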


Fig. 4. Pincushion distortion correction. (a) Image stitching errors due to pincushion distortion. (b) No stitching errors after digital distortion correction.


4. Brightfield, fluorescence and phase-contrast WSI

Figure 5 shows the different imaging modes of our system. We took out the fluorescence filter cube and used the color camera for brightfield acquisition, as shown in Fig. 5(a). The surface-mount LED was turned off, and the programmable LED array was switched between the brightfield imaging mode and the autofocusing mode at high speed. For fluorescence imaging, as shown in Fig. 5(b), the color camera was replaced by a monochrome camera for image acquisition, and the fluorescence filter cube was placed in the infinity space of the microscope system. The surface-mount LED was turned on, and the LED array was used for autofocusing. Light from the surface-mount LED was collected and directed into the filter cube, where it was filtered by the excitation filter and reflected onto the sample to excite the fluorescence signal. The fluorescence image can thus be acquired by the microscope system. For phase-contrast imaging, we used the transport of intensity equation (TIE) to recover the phase information of the sample [28–31]. As shown in Fig. 5(c), the surface-mount LED was turned off and a single green LED was turned on for sample illumination. The Canon photographic lens can be moved to different z-positions, so images at different focal planes can be captured for the recovery process. Figure 5(d) shows the autofocusing process, which uses two green LEDs for sample illumination.
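For intuition about the TIE recovery, here is a minimal single-step solver under a near-uniform-intensity assumption, inverting -k ∂I/∂z = I0 ∇²φ with an FFT-based Poisson solve (a common simplification of the solvers cited in [28–31]). The wavelength and effective pixel size defaults are placeholders, not the system's calibrated values.

```python
import numpy as np

def tie_phase(I_focus, I_defocus, dz, px=0.17e-6, lam=520e-9):
    """Single-step TIE phase recovery assuming near-uniform intensity I0:
    -k * dI/dz = I0 * laplacian(phi), inverted with an FFT Poisson solver."""
    k = 2.0 * np.pi / lam
    dIdz = (I_defocus.astype(float) - I_focus.astype(float)) / dz
    h, w = I_focus.shape
    uu, vv = np.meshgrid(np.fft.fftfreq(w, d=px), np.fft.fftfreq(h, d=px))
    lap = -4.0 * np.pi**2 * (uu**2 + vv**2)        # Fourier symbol of the Laplacian
    lap[0, 0] = 1.0                                # avoid division by zero at DC
    I0 = float(I_focus.mean())
    G = np.fft.fft2(-k * dIdz / I0)                # Fourier transform of laplacian(phi)
    G[0, 0] = 0.0                                  # phase is defined up to a constant
    return np.fft.ifft2(G / lap).real
```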


Fig. 5. Different operating modes of the reported WSI system. (a) Brightfield imaging. The filter cube was taken out to perform brightfield acquisition. (b) Fluorescence imaging. The surface-mount white LED was turned on, and the LED array was turned off. The different filter components are labeled with different colors to distinguish them. (c) TIE phase-contrast imaging. A single green LED was turned on for image acquisition. The ultrasonic motor ring was used to move the photographic lens axially; we acquired one focused image and one defocused image to perform TIE phase recovery. (d) The autofocusing process. Two green LEDs are used for sample illumination.


Figure 6 shows two brightfield whole slide images captured using the reported platform. The brightfield WSI workflow is as follows: 1) Turn on all 64 R/G/B LEDs and acquire a brightfield image of the sample. 2) Move the lens ring to the preset offset position and turn on the two green LEDs; at the same time, move the x-y stage to the next position. 3) Acquire the autofocusing image and identify the translational shift between the two copies in it. 4) Move the lens ring to the in-focus position according to the shift calculated in step 3. 5) Repeat steps 1–4. The lens ring preset offset is set to 200, which corresponds to ∼16 µm of real defocus distance. Moving the lens ring takes ∼0.15 s, and moving the x-y stage takes ∼0.2 s. The autofocusing process is performed after the x-y stage has moved to the next position, and it takes ∼0.02 s to identify the translational shift of the two copies on our desktop computer (Intel Core i7-7700K, 4.2 GHz, 32 GB RAM). We summarize the timing diagram in Supplement 1; the total time for one cycle of operation is ∼0.47 s. We used ImageJ to stitch all the captured tiles, with ∼13% overlap between neighboring tiles [32]. Figure 6(a) shows the whole slide image of a human blood smear with hematoxylin and eosin (H&E) stain; the acquisition time is ∼29 s for this 50-mm2 area. Figure 6(b) shows a Ki-67 section with immunohistochemistry (IHC) brown stain; the acquisition time is ∼20 s for this whole slide image.
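The scan cycle above can be summarized in code. The sketch below is pseudocode-style: every hardware call is a hypothetical no-op stub standing in for a real driver, ring_from_shift uses a placeholder calibration, and copy_separation_px is the estimator sketched in Section 2.

```python
import numpy as np

# Hypothetical no-op hardware stubs; real drivers would replace these.
def set_leds(pattern): pass
def move_ring(position): pass
def move_stage_next(): pass
def grab_frame(): return np.random.rand(512, 512)
def save_tile(pos, tile): pass
def ring_from_shift(shift_px): return int(shift_px / 0.05)  # placeholder calibration

RING_OFFSET = 200                          # preset lens-ring offset (~16 um defocus)

def scan(positions):
    for pos in positions:
        set_leds("all_64_rgb")             # step 1: brightfield illumination
        save_tile(pos, grab_frame())       #         acquire and store the tile
        move_ring(RING_OFFSET)             # step 2: preset offset, switch to the
        set_leds("two_green")              #         two green LEDs, and start the
        move_stage_next()                  #         x-y move (concurrent in practice)
        shift = copy_separation_px(grab_frame())    # step 3: measure the copy shift
        move_ring(ring_from_shift(shift))  # step 4: return to the in-focus position
                                           # step 5: the loop repeats
```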


Fig. 6. Brightfield whole slide images. (a) A human blood smear with hematoxylin and eosin (H&E) stain. The acquisition time is ∼29 s for this 50-mm2 area. (b) A Ki-67 section with immunohistochemistry (IHC) brown stain. The acquisition time is ∼20 s for this whole slide image.


Figure 7 shows the fluorescence whole slide image of a transparent mouse kidney section. The operating procedure can be found in Supplement 1. The time for one cycle of operation is ∼0.57 s; for a field of view of 11 mm by 7 mm, the acquisition time is ∼60 s. In our current prototype, it takes ∼0.15 s to acquire one fluorescence image, almost all of which is exposure time. The illuminator we used for fluorescence imaging is a low-cost surface-mount white LED; one can choose a higher-power source to reduce the acquisition time.


Fig. 7. The fluorescence whole slide image of a transparent mouse kidney section. The field of view is 11 mm by 7 mm and the acquisition time is ∼60 s.


For phase-contrast WSI, we used the TIE for phase recovery, capturing one focused image and one defocused image at each sample position. The total time for one cycle of operation is ∼0.67 s; details of the operating procedure are given in Supplement 1. To save operation time, we optimized the procedure: rather than moving the lens ring for both the defocused image and the autofocusing image, we move it only before acquiring the defocused image. The axial topography varies little between two neighboring tiles, so the autofocusing image still retains enough contrast. In phase-contrast imaging, the lens ring preset offset is set to 100, which corresponds to ∼8 µm of real defocus distance. We tested the mouse kidney section for phase-contrast WSI, as shown in Fig. 8. The acquisition time is ∼69 s for a field of view of 11 mm by 7 mm.


Fig. 8. The phase-contrast whole slide image of the mouse kidney section. The acquisition time is ∼69 s.


5. Conclusion

In summary, we report a low-cost WSI system based on dual-LED autofocusing and open-source hardware. The reported scheme has several important advantages: 1) It performs real-time autofocusing during sample scanning and does not need a scanning stage with high positional accuracy and repeatability; the acquisition time is also shorter than that of focus map-based WSI platforms. 2) No additional optical hardware or imaging sensor is needed; thus, no additional alignment or cost is required. 3) We used the autocorrelation function instead of mutual information to analyze the defocus distance of the sample. The time to determine the optimal focus position is ∼0.02 s, about an 83% improvement over our previous work. 4) We used low-cost off-the-shelf components to build the WSI system. The affordable budget of the proposed platform can make it broadly available to individual pathologists and researchers. In a conventional microscopy platform, axially moving the stage or the objective lens may perturb or even damage the sample; our scheme avoids this potential mechanical perturbation, which is meaningful for some biomedical experiments. 5) Our platform handles both semitransparent and transparent samples, a clear advantage over other existing methods. The implementation of brightfield, fluorescence, and phase-contrast WSI makes our system comparable to a high-end microscope. 6) To overcome stitching errors, we developed an automatic digital distortion correction strategy that needs no standard mask or microscope to measure the distortion; all correction steps are completed using the reported platform itself.

Looking ahead, with the recent advances of artificial intelligence (AI) in medical diagnosis, how to exploit AI to improve the performance of the reported WSI system deserves attention in future research. The progression from conventional microscopy to digital slide scanners to automated smart microscopy is an important branch of future technology development and will push digital pathology research to a new stage.

The parts list and cost estimate, focus control of the Canon lens, operating procedures, and automatic digital distortion correction can be found in Supplement 1. The demo code for automatic digital distortion correction and the 3D design files for this work can be downloaded as Dataset 1 in Ref. [27]: 1) demo code, 2) 3D design files.

Funding

111 project (B17035); China Scholarship Council (201806960045); National Natural Science Foundation of China (61975254).

Acknowledgment

The authors would like to thank Dr. Guoan Zheng for his insightful suggestions.

Disclosures

The authors declare no conflicts of interest.

Data availability

Parts of code and 3D design files can be downloaded at Dataset 1 in Ref. [27]. Additional data in this paper may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Z. Bian, C. Guo, S. Jiang, J. Zhu, R. Wang, P. Song, Z. Zhang, K. Hoshino, and G. Zheng, “Autofocusing technologies for whole slide imaging and automated microscopy,” J. Biophotonics 13(9), e202000227 (2020). [CrossRef]  

2. R. R. McKay, V. A. Baxi, and M. C. Montalto, “The accuracy of dynamic predictive autofocusing for whole slide imaging,” J. Pathol. Inform. 2(1), 38 (2011). [CrossRef]  

3. M. C. Montalto, R. R. McKay, and R. J. Filkins, “Autofocus methods of whole slide imaging systems and the introduction of a second-generation independent dual sensor scanning method,” J. Pathol. Inform. 2(1), 44 (2011). [CrossRef]  

4. S. Yazdanfar, K. B. Kenny, K. Tasimi, A. D. Corwin, E. L. Dixon, and R. J. Filkins, “Simple and robust image-based autofocusing for digital microscopy,” Opt. Express 16(12), 8670–8677 (2008). [CrossRef]  

5. A. Santos, C. Ortiz de Solórzano, J. J. Vaquero, J. M. Peña, N. Malpica, and F. del Pozo, “Evaluation of autofocus functions in molecular cytogenetic analysis,” J. Microsc. 188(3), 264–272 (1997). [CrossRef]  

6. F. C. A. Groen, I. T. Young, and G. Ligthart, “A comparison of different focus functions for use in autofocus algorithms,” Cytometry 6(2), 81–91 (1985). [CrossRef]  

7. Y. Liron, Y. Paran, G. Zatorsky, B. Geiger, and Z. Kam, “Laser autofocusing for high-resolution cell biological imaging,” J. Microsc. 221(2), 145–151 (2006). [CrossRef]  

8. G. Reinheimer, “Arrangement for automatically focussing an optical instrument,” US Patent 3,721,827 (1973).

9. A. Cable, J. Wollenzin, R. Johnstone, K. Gossage, J. S. Brooker, J. Mills, J. Jiang, and D. Hillmann, “Microscopy system with auto-focus adjustment by low-coherence interferometry,” US Patent 9,869,852 (2018).

10. J. S. Silfies, E. G. Lieser, S. A. Schwartz, and M. W. Davidson, “Nikon Perfect Focus System (PFS),” https://www.microscopyu.com/applications/live-cell-imaging/nikon-perfect-focus-system.

11. T. Virág, A. László, B. Molnár, A. Tagscherer, and V. S. Varga, “Focusing method for the high-speed digitalisation of microscope slides and slide displacing device, focusing optics, and optical rangefinder,” US Patent 7,663,078 (2010).

12. R. T. Dong, U. Rashid, and J. Zeineh, “System and method for generating digital images of a microscope slide,” US Patent 10,941 (2005).

13. K. Guo, J. Liao, Z. Bian, X. Heng, and G. Zheng, “InstantScope: a low-cost whole slide imaging system with instant focal plane detection,” Biomed. Opt. Express 6(9), 3210–3216 (2015). [CrossRef]  

14. J. Liao, L. Bian, Z. Bian, Z. Zhang, C. Patel, K. Hoshino, Y. C. Eldar, and G. Zheng, “Single-frame rapid autofocusing for brightfield and fluorescence whole slide imaging,” Biomed. Opt. Express 7(11), 4763–4768 (2016). [CrossRef]  

15. L. Silvestri, M. C. Muellenbroich, I. Costantini, A. P. Di Giovanna, L. Sacconi, and F. S. Pavone, “RAPID: Real-time image-based autofocus for all wide-field optical microscopy systems,” bioRxiv 170555 (2017).

16. S. Jiang, J. Liao, Z. Bian, K. Guo, Y. Zhang, and G. Zheng, “Transform- and multi-domain deep learning for single-frame rapid autofocusing in whole slide imaging,” Biomed. Opt. Express 9(4), 1601–1612 (2018). [CrossRef]  

17. Y. Rivenson, Z. Göröcs, H. Günaydin, Y. Zhang, H. Wang, and A. Ozcan, “Deep learning microscopy,” Optica 4(11), 1437–1443 (2017). [CrossRef]  

18. N. Dimitriou, O. Arandjelović, and P. D. Caie, “Deep learning for whole slide image analysis: an overview,” Front. Med. 6, 264 (2019). [CrossRef]  

19. H. R. Tizhoosh and L. Pantanowitz, “Artificial intelligence and digital pathology: challenges and opportunities,” J. Pathol. Inform. 9(1), 38 (2018). [CrossRef]  

20. H. Pinkard, Z. Phillips, A. Babakhani, D. A. Fletcher, and L. Waller, “Deep learning for single-shot autofocus microscopy,” Optica 6(6), 794–797 (2019). [CrossRef]  

21. P. Chen, K. Gadepalli, R. MacDonald, Y. Liu, S. Kadowaki, K. Nagpal, T. Kohlberger, J. Dean, G. S. Corrado, J. D. Hipp, C. H. Mermel, and M. C. Stumpe, “An augmented reality microscope with real-time artificial intelligence integration for cancer diagnosis,” Nat. Med. 25(9), 1453–1457 (2019). [CrossRef]  

22. J. Liao, Z. Wang, Z. Zhang, Z. Bian, K. Guo, A. Nambiar, Y. Jiang, S. Jiang, J. Zhong, M. Choma, and G. Zheng, “Dual light-emitting diode-based multichannel microscopy for whole-slide multiplane, multispectral and phase imaging,” J. Biophotonics 11(2), e201700075 (2018). [CrossRef]  

23. S. Jiang, Z. Bian, X. Huang, P. Song, H. Zhang, Y. Zhang, and G. Zheng, “Rapid and robust whole slide imaging based on LED-array illumination and color-multiplexed single-shot autofocusing,” Quant. Imaging Med. Surg. 9(5), 823–831 (2019). [CrossRef]  

24. J. Liao, S. Jiang, Z. Zhang, K. Guo, Z. Bian, Y. Jiang, J. Zhong, and G. Zheng, “Terapixel hyperspectral whole-slide imaging via slit-array detection and projection,” J. Biomed. Opt. 23(06), 1–7 (2018). [CrossRef]  

25. J. Liao, Y. Jiang, Z. Bian, B. Mahrou, A. Nambiar, A. W. Magsam, K. Guo, S. Wang, Y. K. Cho, and G. Zheng, “Rapid focus map surveying for whole slide imaging with continuous sample motion,” Opt. Lett. 42(17), 3379–3382 (2017). [CrossRef]  

26. C. Guo, Z. Bian, S. Jiang, M. Murphy, J. Zhu, R. Wang, P. Song, X. Shao, Y. Zhang, and G. Zheng, “OpenWSI: a low-cost, high-throughput whole slide imaging system via single-frame autofocusing and open-source hardware,” Opt. Lett. 45(1), 260–263 (2020). [CrossRef]  

27. C. Guo, “Brightfield, fluorescence, and phase WSI,” figshare (2021), https://doi.org/10.6084/m9.figshare.12330740.

28. C. Zuo, J. Li, J. Sun, Y. Fan, J. Zhang, L. Lu, R. Zhang, B. Wang, L. Huang, and Q. Chen, “Transport of intensity equation: a tutorial,” Opt. Laser Eng. 135, 106187 (2020). [CrossRef]  

29. C. Zuo, Q. Chen, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation: fast solution with use of discrete cosine transform,” Opt. Express 22(8), 9220–9244 (2014). [CrossRef]  

30. C. Zuo, Q. Chen, H. Li, W. Qu, and A. Asundi, “Boundary-artifact-free phase retrieval with the transport of intensity equation ii: applications to microlens characterization,” Opt. Express 22(15), 18310–18324 (2014). [CrossRef]  

31. C. Zuo, Q. Chen, Y. Yu, and A. Asundi, “Transport-of-intensity phase imaging using savitzky-golay differentiation filter-theory and applications,” Opt. Express 21(5), 5346–5362 (2013). [CrossRef]  

32. J. Schindelin, I. Arganda-Carreras, E. Frise, V. Kaynig, M. Longair, T. Pietzsch, S. Preibisch, C. Rueden, S. Saalfeld, B. Schmid, J. Y. Tinevez, D. J. White, V. Hartenstein, K. Eliceiri, P. Tomancak, and A. Cardona, “Fiji: an open-source platform for biological-image analysis,” Nat. Methods 9(7), 676–682 (2012). [CrossRef]  

Supplementary Material (2)

Dataset 1: MATLAB demo code for automatic distortion correction and 3D design files for building the reported WSI system.
Supplement 1: Supporting content.
