Abstract
LED array microscopy is a novel computational imaging technique that can achieve two-dimensional (2D) phase imaging and three-dimensional (3D) refractive index imaging with both high resolution and a large field of view. Although its experimental setup is simple, errors in the LED array position and the light source's central wavelength markedly degrade the quality of the reconstructed results. To solve this problem, comprehensive error parameters optimized by a phase smoothing criterion are put forward in this paper. The central wavelength error and a 3D misalignment model of the LED array with errors in six degrees of freedom are taken as the comprehensive error parameters when the spatial position and optical features of an arbitrarily placed LED array are unknown. A phase smoothing criterion is also introduced into the cost function for optimizing the comprehensive error parameters to improve the convergence results. Simulation and experimental results show that, compared with current system correction methods, the proposed method achieves the best reconstruction accuracy and can be readily applied to an LED array microscope system with unknown positional and optical features of the LED array.
© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement
1. Introduction
Traditional microscopic imaging systems can capture objects’ two-dimensional (2D) intensity information, but lose other important optical information, such as phase, polarization, and spectrum. In recent years, computational imaging technologies, such as quantitative phase imaging and polarization imaging, have enhanced the information acquisition ability by jointly designing light sources, optical devices, detectors, and post-processing algorithms in the imaging chain, attracting widespread attention from researchers [1]. LED array microscopy, a typical computational imaging technology, replaces the light source of a traditional microscopic imaging system with a programmable LED array. By collecting multiple images of a single object under angularly varying illumination and combining them with specific inversion algorithms, 2D phase imaging and three-dimensional (3D) refractive index imaging can be achieved with both high resolution and a large field of view [2–13]. Most significantly, this technology can be label-free, which is essential for biological studies where fluorescent probes or staining agents cannot be used or are difficult to apply [14,15].
In 2013, Zheng et al. introduced LED array microscopy, which iteratively stitches low-resolution (LR) images acquired under angle-varied illumination in the Fourier domain to create high-resolution (HR) complex images of specimens, and named the technique Fourier ptychography (FP) or Fourier ptychographic microscopy (FPM) [2]. Since then, numerous studies have been conducted to improve the original FPM method and widen its range of applications. Some methods have incorporated the aberrations present in the imaging system and recovered aberration information when reconstructing HR complex images, improving the reconstruction accuracy of FPM [3–5]. To reduce image acquisition time and improve data acquisition efficiency, approaches such as multi-coding illumination [6] and single-shot techniques [7,8] have been proposed. Other studies have focused on FPM algorithm improvement [9], noise suppression [10], and related aspects. Furthermore, in recent years, using the theory of optical diffraction tomography and drawing inspiration from FPM, researchers have achieved 3D refractive index imaging of biological samples with LED array microscopy [12,13]. This innovative imaging technology holds great potential for various biological and medical applications.
The experimental setup of LED array microscopy is very simple: only the light source of a commercial microscope needs to be replaced with a programmable LED array. However, the various post-processing reconstruction algorithms are sensitive to system errors in the experimental setup. The misalignment of the LED array, i.e., the deviation between its actual and ideal positions, is a serious problem. This is because parallel light at different illumination angles is produced by LED elements at different positions; when the position of an LED element deviates, the corresponding sub-aperture position in the Fourier domain is also misaligned, degrading the quality of the reconstructed image and even introducing serious artifacts. Although physical alignment can remove some apparent misalignment, positional errors remain that are difficult to adjust and measure in the system. Therefore, in response to the misalignment problem of the LED array, software-level correction algorithms have been proposed. According to their search domains, these algorithms can be divided into Fourier domain search and spatial domain search. Among Fourier domain search algorithms, some studies use the simulated annealing (SA) algorithm to search for the position of each sub-aperture in the Fourier domain, minimizing an amplitude-based cost function [16,17]. These methods have a long search time and are prone to falling into local minima because of the large number of sub-apertures searched. Moreover, no position misalignment model of the LED array is established, so the optimization search must be repeated for each reconstruction, resulting in high computational costs. Subsequently, Sun et al. established a position misalignment model for the LED array, using four position error parameters $(\Delta x,\Delta y,\Delta z,\theta )$ to describe the translational deviations in three directions and the rotational deviation around the $z$-axis.
After searching for the sub-aperture positions of the bright-field images in the Fourier domain using the SA algorithm, these four error parameters are obtained by nonlinear regression, and then all images are iterated for reconstruction. This method, named pcFPM, enhances iteration efficiency and correction accuracy [18]. Among spatial domain search algorithms, Zhou et al. proposed a method named misalignment-correction FPM (mcFPM) [19]. It uses the SA algorithm to directly search for two translation parameters $(\Delta x,\Delta y)$ and one rotation parameter $(\theta )$ in the spatial domain. Afterwards, Zhu et al. used particle swarm optimization to search for four parameters to correct positional errors and named the method SBC [20]. The above methods are all implemented in transmission LED array microscopy, and these position error correction methods can also be transplanted to reflective LED array microscopy to improve reconstruction accuracy [21]. However, the LED array position misalignment models used in the above methods assume that the LED array is perpendicular to the optical axis, with only 3 or 4 degrees of freedom of position deviation, which is sufficient in some cases. For a wider range of application scenarios, such as a modified inverted microscope where the LED array is suspended above the sample, the system may exhibit a positional deviation in 6 degrees of freedom, which makes the above methods less applicable. Therefore, Zheng et al. proposed the full-pose parameterization of the LED array, and used a physics-based model, built from general knowledge of the microscope and the brightfield-to-darkfield boundaries of the images, to solve for the 6 position error parameters of the misplaced LED array [22]. In addition, the center wavelength of commercial LED arrays carries fabrication errors; some studies have plotted a convergence index as a function of the assumed LED wavelength and maximized it to estimate the center wavelength [23]. However, when there are too many error parameters in the system, this approach is also greatly limited in the inversion of comprehensive error parameters.
In response to the issue of comprehensive error parameters when the spatial position and optical features of an arbitrarily placed LED array are unknown, a 3D misalignment model of the LED array is used in this paper. The full spatial freedom of the LED array (3 translational and 3 rotational degrees of freedom) is considered. At the same time, the center wavelength error of the LED light source is taken into account, and these 7 error parameters together describe all the errors generated by the LED array. The SA algorithm is then used to search for these 7 parameters in the spatial domain to minimize the cost function. Unlike previous cost functions, which contain only an amplitude data-fidelity term, we add a phase smoothing criterion to the cost function, exploiting the prior knowledge that the reconstructed phase is smooth. In addition, before the algorithm starts, a simple image preprocessing step provides the initial values and value ranges of some parameters, reducing the search range and accelerating convergence. Validation is then conducted on simulated and experimental data, and the results show that the algorithm optimizing 7 parameters achieves higher reconstruction quality than the algorithm using only 4 parameters, without significantly increasing the runtime. At the same time, by comparing results with and without the phase smoothing criterion in the cost function, we find that the reconstruction results with the criterion also perform better.
2. Principle
2.1 Comprehensive error parameters in LED array microscopy
This section describes the comprehensive error parameters in LED array microscopy and illustrates the necessity of correcting these system errors, using FP reconstruction as an example. An LED array microscopy system typically comprises an LED array, a microscope with a low-NA objective lens, and a monochromatic camera. The LED elements on the array light up sequentially from the center towards the periphery, illuminating the specimen from various angles. For the LED element located in row $m$ and column $n$, the wave vector of the illumination light is represented as ${{\mathbf {k}}_{m,n}} = \left ( {k_x^{m,n},k_y^{m,n}} \right )$, and the LR intensity image ${I_{m,n}}$ of the specimen captured by the camera can be described as:
In LED array microscopy, the wave vector $\left ( {k_x^{m,n},k_y^{m,n}} \right )$ of the angle-varied illuminating light, which defines the sub-aperture position, is determined by the central wavelength $\lambda$ and spatial position $\left ( {{x_{m,n}},{y_{m,n}},{z_{m,n}}} \right )$ of the LED, as illustrated in Eq. (2).
Assume that the spacing between adjacent LED elements in a commercial LED array is uniform and denote it as $d$. Then, for any pose of the LED array in Fig. 1(b), according to the coordinate transformation method, the actual spatial position $\left ( {{{x'}_{m,n}},{{y'}_{m,n}},{{z'}_{m,n}}} \right )$ of the LED element in the $m$-th row and $n$-th column can be obtained as:
Additionally, accounting for the center wavelength error $\Delta \lambda$ of the LED, the illumination wave vector of each LED element can be expressed as:
In the absence of system errors (Fig. 2(a1)-(c1)) or when the system errors are precisely known (Fig. 2(a5)-(c5)), the sub-aperture center frequencies align accurately, producing satisfactory reconstruction results. When there is a deviation in the central wavelength of the LED, the reconstructed center frequencies misalign and the cutoff frequency of $P\left ( {\mathbf {k}} \right )$ is altered, which adversely affects the mapping of the Fourier domain and reduces the reconstruction accuracy, as shown in Fig. 2(a2)-(c2). Positional errors in the LED array cause substantial distortion of the center frequencies during image generation and reconstruction, and may even shift the bright-field/dark-field boundaries, leading to significant inaccuracies in the reconstructed results, as illustrated in Fig. 2(a3)-(c3). If only 4 position error parameters are corrected, the reconstructed image quality improves greatly but still falls short of satisfactory results due to the residual misalignment of the center frequencies, as shown in Fig. 2(a4)-(c4). The two additional position error parameters, $\theta _x$ and $\theta _y$, control the shape of the Fourier domain map, making it more general than a merely translated or rotated square. In summary, it is necessary to correct the comprehensive error parameters in LED array microscopy.
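As an illustration of the comprehensive error model above, the following minimal Python sketch maps an LED's grid index, the 6 pose parameters, and the wavelength error to its illumination wave vector. The rotation order ($R_z R_y R_x$), the sign convention of the wave vector, and the array-frame origin at the central element are assumptions made for illustration, not taken verbatim from Eqs. (2)-(4).

```python
import numpy as np

def led_wave_vector(m, n, d=2e-3, h=50e-3, wavelength=520e-9,
                    errors=(0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0),
                    n_rows=15, n_cols=15):
    """Illumination wave vector (kx, ky) of the LED in row m, column n.

    errors = (dx, dy, dz, theta_x, theta_y, theta_z, dlambda).
    The rotation order R = Rz @ Ry @ Rx and the sign conventions
    are assumptions, not the paper's exact formulation.
    """
    dx, dy, dz, tx, ty, tz, dlam = errors
    # ideal position in the array frame, centered on the middle element
    p = np.array([(n - (n_cols - 1) / 2) * d,
                  (m - (n_rows - 1) / 2) * d,
                  0.0])
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    # rotated, translated LED position; h is the nominal array-sample distance
    x, y, z = Rz @ Ry @ Rx @ p + np.array([dx, dy, h + dz])
    r = np.sqrt(x**2 + y**2 + z**2)
    k0 = 2 * np.pi / (wavelength + dlam)   # wavenumber with wavelength error
    # light travels from the LED toward the sample at the origin
    return -k0 * x / r, -k0 * y / r
```

With all seven errors set to zero, the central element ($m=n=7$ on a $15\times 15$ grid) illuminates at normal incidence, i.e. $(k_x, k_y) = (0, 0)$.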
2.2 System error correction algorithm with phase smoothing criterion
In this section, we propose a system error correction algorithm with phase smoothing criterion based on the comprehensive error parameters model established in section 2.1. The spatial domain search correction is accomplished using the SA algorithm to minimize the cost function involving 7 error parameters.
In contrast to previous works, where the cost function considered only the data-fidelity term of the intensity images, we include a regularization term embodying the phase smoothing criterion, using the continuous, smooth variation of the phase image as prior knowledge for optimization. Before the correction algorithm starts, a simple image preprocessing step adaptively selects some of the initial values, reducing the influence of human experience and ensuring faster convergence. Fig. 3 illustrates the algorithm flow and the detailed steps of the proposed algorithm.
Step 1: The algorithm begins with a simple preprocessing step, which computes the total intensity value of each LR image. An adaptive threshold is then calculated using the Otsu algorithm, akin to threshold segmentation of an image, and each LR image is roughly classified as a bright-field (high total intensity) or a dark-field (low total intensity) image. The last bright-field image in the image sequence, numbered $N$, is selected. The result of the threshold segmentation is then used to create a matrix that maps to the corresponding LED positions and is used for circle finding via the Hough transform. The initial values $\Delta {x_0}$ and $\Delta {y_0}$ are obtained from the distance between the center of the circle and the center of the matrix, whereas the remaining 5 system error parameters are initialized to 0.
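A minimal sketch of this preprocessing step is given below, assuming a $15\times 15$ grid of per-image total intensities. For compactness, the centroid of the bright-field region stands in for the paper's Hough-transform circle fit, and the Otsu threshold is computed directly from a histogram of the total intensities; both substitutions are simplifications.

```python
import numpy as np

def otsu_threshold(values, n_bins=64):
    """1-D Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=n_bins)
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, n_bins):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (hist[:i] * centers[:i]).sum() / w0
        mu1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def initial_shift(total_intensity, d=2.0):
    """Estimate (dx0, dy0), in units of the LED pitch d, from a grid of
    per-image total intensities. The bright-field region's centroid
    stands in for the Hough circle-center fit used in the paper."""
    thr = otsu_threshold(total_intensity.ravel())
    bright = total_intensity > thr          # bright-field mask
    rows, cols = np.nonzero(bright)
    center = (np.array(total_intensity.shape) - 1) / 2
    dy0 = (rows.mean() - center[0]) * d
    dx0 = (cols.mean() - center[1]) * d
    return dx0, dy0
```

For an LED array shifted by one pitch along $x$, the bright-field disk in the intensity matrix shifts by one column, and the routine returns $\Delta x_0 \approx d$.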
Step 2: The sample Fourier spectrum ${O_j}\left ( {\mathbf {k}} \right )$ and pupil function ${ P_j}\left ( {\mathbf {k}} \right )\left ( {j = 0} \right )$ are initialized. Typically, the up-sampled bright field LR image’s Fourier transform is employed as the initial sample Fourier spectrum. In addition, the initial pupil function is defined as a circular low-pass filter with a cutoff frequency of ${{2\pi {\textrm{NA}}} \mathord {\left /{\vphantom {{2\pi {\textrm{NA}}} \lambda }} \right. } \lambda }$, where NA is the numerical aperture of the objective lens.
Step 3: Calculate the wave vector ${{\mathbf {k}}_{m,n}}$ of the illumination light at different angles using Eq. (6), and generate the corresponding LR image estimate ${\psi _{m,n}}\left ( {\mathbf {r}} \right )$ from ${O_j}\left ( {\mathbf {k}} \right )$:
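This generation step follows the standard FPM forward model: a pupil-limited sub-region of the HR spectrum, centered at the sub-aperture position, is inverse-transformed to give the LR field estimate. In the sketch below, the wave vector is assumed to have already been mapped to a pixel offset on the discrete Fourier grid.

```python
import numpy as np

def circular_pupil(n, cutoff_px):
    """Ideal circular low-pass pupil; cutoff_px is the cutoff
    radius 2*pi*NA/lambda mapped to pixels on the LR grid."""
    ky, kx = np.mgrid[-n // 2:n - n // 2, -n // 2:n - n // 2]
    return (kx**2 + ky**2 <= cutoff_px**2).astype(complex)

def lr_estimate(O_hr, pupil, shift_px):
    """LR complex-field estimate psi_{m,n}(r) from the HR spectrum.

    O_hr     : HR sample spectrum (fftshifted, DC at the center)
    pupil    : LR-sized pupil function P(k)
    shift_px : sub-aperture center offset (row, col) in pixels,
               i.e. the illumination wave vector on the grid
    """
    Nh = O_hr.shape[0]
    Nl = pupil.shape[0]
    r0 = Nh // 2 + shift_px[0] - Nl // 2
    c0 = Nh // 2 + shift_px[1] - Nl // 2
    sub = O_hr[r0:r0 + Nl, c0:c0 + Nl] * pupil   # band-limit by the pupil
    return np.fft.ifft2(np.fft.ifftshift(sub))    # LR complex field
```

For a constant HR field (a delta at DC) and zero shift, the generated LR field is uniform, as expected for on-axis illumination of a featureless sample.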
Step 5: Apply the EPRY algorithm [3] for the simultaneous update of the sample Fourier spectrum and pupil function:
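A minimal sketch of one EPRY-style update for a single measurement is shown below; the step sizes $\alpha, \beta$ and the max-modulus normalization follow the commonly used EPRY formulation [3] and are assumptions rather than the paper's exact expressions.

```python
import numpy as np

def epry_step(O_hr, pupil, shift_px, I_meas, alpha=1.0, beta=1.0):
    """One EPRY-style joint update of the HR spectrum and pupil for a
    single LR measurement. O_hr is fftshifted; shift_px is the
    sub-aperture offset in pixels; I_meas is the measured LR intensity."""
    Nh, Nl = O_hr.shape[0], pupil.shape[0]
    r0 = Nh // 2 + shift_px[0] - Nl // 2
    c0 = Nh // 2 + shift_px[1] - Nl // 2
    sub = O_hr[r0:r0 + Nl, c0:c0 + Nl]
    psi_k = sub * pupil
    psi_r = np.fft.ifft2(np.fft.ifftshift(psi_k))
    # replace the amplitude with the measurement, keep the phase
    psi_r2 = np.sqrt(I_meas) * np.exp(1j * np.angle(psi_r))
    psi_k2 = np.fft.fftshift(np.fft.fft2(psi_r2))
    delta = psi_k2 - psi_k
    # each unknown is corrected using the other's conjugate (EPRY style)
    O_new = sub + alpha * np.conj(pupil) / (np.abs(pupil).max() ** 2 + 1e-12) * delta
    P_new = pupil + beta * np.conj(sub) / (np.abs(sub).max() ** 2 + 1e-12) * delta
    O_out = O_hr.copy()
    O_out[r0:r0 + Nl, c0:c0 + Nl] = O_new
    return O_out, P_new
```

As a sanity check, with a uniform pupil, $\alpha = 1$, and the pupil update disabled ($\beta = 0$), one step makes the re-generated LR amplitude match the measured amplitude exactly.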
Step 7: After one round of update iteration, the SA method is employed to search 7 system error parameters in the spatial domain to minimize the cost function, which is defined as:
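A plausible instantiation of such a cost function, an amplitude data-fidelity term summed over the LR images plus a phase-smoothing regularizer on the reconstructed HR phase, can be sketched as follows; the squared-gradient form of the smoothing term and the weight $\gamma$ are assumptions and may differ from the paper's exact definition.

```python
import numpy as np

def cost(measured_amps, estimated_fields, phase_hr, gamma=0.1):
    """Amplitude data fidelity plus a phase-smoothing regularizer.

    measured_amps    : list of measured LR amplitudes sqrt(I_{m,n})
    estimated_fields : list of estimated LR complex fields psi_{m,n}
    phase_hr         : reconstructed HR phase image
    """
    fidelity = sum(np.sum((a - np.abs(psi)) ** 2)
                   for a, psi in zip(measured_amps, estimated_fields))
    gy, gx = np.gradient(phase_hr)
    smooth = np.sum(gx ** 2 + gy ** 2)   # penalizes non-smooth phase
    return fidelity + gamma * smooth
```

A perfectly matched amplitude with a spatially constant phase yields zero cost, while a rough phase raises the regularizer, which is the prior the SA search exploits.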
In the search process, the reconstructed image itself is taken into account, which aids the search for the system error parameters. Subsequently, the 7 system error parameters found are used as the new initial values, all illumination wave vectors are updated according to Eq. (6), and the cutoff frequency of ${P_j}\left ( {\mathbf {k}} \right )$ is changed to ${{2\pi {\textrm{NA}}} \mathord {\left / {\vphantom {{2\pi {\textrm{NA}}} {\left ( {\lambda + \Delta \lambda } \right )}}} \right. } {\left ( {\lambda + \Delta \lambda } \right )}}$.
Step 8: Repeat steps 3-7 $J$ times to optimize the comprehensive error parameters. Typically, the search process converges within about 10 iterations.
Step 9: Return the sample Fourier spectrum $O\left ( {\mathbf {k}} \right )$ and pupil function $P\left ( {\mathbf {k}} \right )$.
It should be noted that the search for system error parameters is a one-time process after building the system. After that, with the system settings unchanged, the 7 system error parameters can be used in steps 3-6 and updated iteratively to obtain the HR complex images. This eliminates the need for the optimization search process and reduces computational costs for future applications.
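The outer SA search over the 7 parameters can be sketched as follows; the Gaussian perturbations, geometric cooling schedule, and step sizes are illustrative assumptions, since the paper does not fix them here.

```python
import numpy as np

def sa_search(cost_fn, x0, lo, hi, n_iter=200, T0=1.0, cooling=0.97, seed=0):
    """Simulated-annealing search for the 7 error parameters.

    cost_fn maps a parameter vector to a scalar cost; lo/hi bound the
    search range (e.g. [dx0 - 0.5, dx0 + 0.5] for dx after the
    preprocessing step). Step sizes and cooling are assumptions."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    f = cost_fn(x)
    best_x, best_f = x.copy(), f
    T = T0
    step = (np.asarray(hi) - np.asarray(lo)) / 10.0
    for _ in range(n_iter):
        cand = np.clip(x + rng.normal(scale=step), lo, hi)
        fc = cost_fn(cand)
        # accept downhill moves always, uphill moves with prob exp(-d/T)
        if fc < f or rng.random() < np.exp(-(fc - f) / max(T, 1e-12)):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x.copy(), f
        T *= cooling
    return best_x, best_f
```

On a simple 7-dimensional quadratic cost the search quickly moves below the initial cost while staying within the supplied bounds.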
3. Simulations
To validate the effectiveness of our method, this section presents a simulation study before applying the method to system error correction of an actual LED array microscope. The system parameters selected for the simulation are identical to the settings of the subsequent experimental system. The light source used in the simulation is a $15\times 15$ programmable LED array, which provides angle-varied illumination. The distance between adjacent LED elements is $2mm$, and the nominal distance between the sample and the LED array is $50mm$. The objective lens has a magnification of $4\times$ and an NA of 0.13, while the detector has a pixel size of $6.5\mu m$.
Following the customary approach, two images, the cameraman and an aerial-view image, are used as the amplitude and phase of the simulated HR complex sample image. We artificially introduce comprehensive system errors by setting the 7 parameters described in section 2.1, and generate 100 sets of LR intensity images, each containing 225 images. The 7 system error parameters are varied randomly: the variation range of the 3 position translation parameters $\left ( {\Delta x,\Delta y,\Delta z} \right )$ is set as $\left [ { - 2mm,2mm} \right ]$, and the variation range of the 3 position rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$ is set as $\left [ { - {5^ \circ },{5^ \circ }} \right ]$. The nominal wavelength of the LED is fixed at $520nm$, and the variation range of the wavelength error $\Delta \lambda$ is set as $\left [ { -10nm,10nm} \right ]$. One group is chosen for thorough analysis and evaluation: its position translation parameters $\left ( {\Delta x,\Delta y,\Delta z} \right )$ are all $1.5mm$, its position rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$ are all $5^ \circ$, and its wavelength error $\Delta \lambda$ is $-10nm$.
In accordance with step 1 of the algorithm flowchart in section 2.2, image preprocessing operations are carried out. The total intensity of each image is calculated and normalized, as demonstrated in Fig. 4(a). The threshold, Thr, is determined by applying the Otsu method, yielding a value of 0.704. Images with a total intensity greater than Thr are classified as bright-field images and those with lower values as dark-field images. The number of the last bright-field image, $N$, is recorded as 77. After setting the LED positions corresponding to the bright-field images to 1, as shown in Fig. 4(b), a circle-finding operation is performed. The distance from the center of the circle to the center of the matrix yields the initial system error values $\Delta {x_0}$ and $\Delta {y_0}$; specifically, $1.53mm$ for $\Delta {x_0}$ and $1.81mm$ for $\Delta {y_0}$. These values are close to the set value of $1.5mm$. To analyze the discrepancy between the values obtained from the preprocessing method and the actual system errors, the 100 sets of images mentioned above are processed, and the disparities are depicted in a box plot in Fig. 4(c), with the $x$ and $y$ directions indicated in blue and red, respectively. The results indicate that the difference between the initial values obtained by our preprocessing method and the actual error values is generally within $\pm 0.2mm$, and never exceeds $\pm 0.5mm$. As a result, during calibration of the actual experimental system, the search ranges for these parameters can be limited to $\left [ {\Delta {x_0} - 0.5mm,\Delta {x_0} + 0.5mm} \right ]$ and $\left [ {\Delta {y_0} - 0.5mm,\Delta {y_0} + 0.5mm} \right ]$, respectively, reducing the search range and accelerating convergence.
Various algorithms are used to correct the system errors of the LED array microscopy after the image preprocessing step, and the reconstructed HR amplitude and phase are compared in Fig. 5. Fig. 5(a1) and (a2) show the ground-truth amplitude and phase images, respectively. Fig. 5(b1) and (b2) display the reconstructed HR amplitude and phase images using the original FPM algorithm, which does not correct for system errors. The reconstructed results clearly do not match the ground truth, indicating that the original FPM algorithm produces significantly worse results. The Fourier domain stitching result is depicted in Fig. 5(b3), where a regular square shape can be observed. Fig. 5(c1) and (c2) present the amplitude and phase results produced by pcFPM. This algorithm uses SA to search for the center frequencies corresponding to the 9 central LED elements in the Fourier domain and subsequently calculates the 4 position errors $\left ( {\Delta x,\Delta y,\Delta z,{\theta _z}} \right )$ by nonlinear fitting. Fig. 5(d1) and (d2) illustrate the results obtained with the SBC algorithm, which deploys particle swarm optimization in the spatial domain to search for the 4 position error parameters directly. Although pcFPM and SBC consider 4 position error parameters to correct the multi-parameter system errors in LED array microscopy, some artifacts are still present in the amplitude and phase images. Thus, these 4 parameters are insufficient to fully characterize the system errors, and the corrections are therefore limited. The image stitching results in the Fourier domain for these two methods, shown in Fig. 5(c3) and (d3), demonstrate added translation and rotation compared to Fig. 5(b3), while partly maintaining the square shape. The limited mapping of the image in the Fourier domain can be attributed to the insufficient parameters employed in these methods.
Moreover, these methods employ a cost function that considers only the data-fidelity term of the first $N$ images. Our method introduces a phase smoothing criterion into the cost function and optimizes the 7 system error parameters $\left ( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right )$ in the spatial domain to minimize the cost function. To determine the effectiveness of the phase smoothing criterion, the results of searching the 7 proposed system error parameters with and without the criterion are compared, as depicted in Fig. 5(e1)-(e3) and (f1)-(f3). Without the phase smoothing criterion, both the reconstructed amplitude and phase still exhibit noticeable artifacts, as seen in Fig. 5(e1) and (e2). Incorporating the phase smoothing criterion significantly improves the quality of the reconstructed amplitude and phase, as demonstrated in Fig. 5(f1) and (f2). Notably, the criterion yields a smoother phase image, showing that the smooth-phase assumption embedded in the cost function is reflected during the optimization search. The stitching result in the Fourier domain is shown in Fig. 5(f3). Evidently, after incorporating 7 system error parameters, the resulting stitched image is no longer a regular square but assumes a more arbitrary shape, which better reflects the frequency mapping typically encountered in LED array microscopy with 7-parameter system errors.
To measure the divergence between the HR amplitude and phase reconstructed by various methods and the ground truth, two distinct indicators, the root mean square error (RMSE) and the structural similarity index (SSIM), are used. RMSE calculates the difference between two images, and a smaller value implies less difference between the two images, whereas SSIM is an indicator that assesses the similarity between two images, considering aspects such as lightness, contrast, and image structure. The calculation formula for SSIM is as follows:
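For reference, the two indicators can be computed as below; the SSIM sketch uses the global (single-window) statistics with the standard constants $C_1=(0.01L)^2$ and $C_2=(0.03L)^2$, whereas practical implementations usually average the statistic over local windows.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((a - b) ** 2))

def ssim_global(a, b, L=1.0):
    """Global (single-window) SSIM; L is the dynamic range of the data.
    The paper's windowed variant would average this statistic over
    local patches."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a ** 2 + mu_b ** 2 + C1) * (va + vb + C2))
```

Identical images give RMSE 0 and SSIM 1; any distortion increases RMSE and pushes SSIM below 1.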
To illustrate the convergence results with and without the phase smoothing criterion, the above 100 groups of simulated images are used for verification. The convergence results for the 7 error parameters and $\Delta p$ are shown in Fig. 6, where $\Delta p$ represents the sum of the pixel-number differences between the ideal and true positions of all sub-aperture center frequencies. The convergence results with the phase smoothing criterion are better than those without it. In addition, since the initial values of $\Delta x$ and $\Delta y$ are determined and their search ranges are narrowed during image preprocessing, these two parameters converge quickly, as shown in Fig. 6(a)-(b). Comparing the error parameters after convergence shows that, among the 7 error parameters, $\Delta x$, $\Delta y$ and $\theta _z$ have the greatest influence on the sub-aperture center frequencies. The convergence results of the other 4 error parameters $\left ( {\Delta z,{\theta _x},{\theta _y},\Delta \lambda } \right )$ differ slightly from the set values, which indicates that the sub-aperture center frequencies are less sensitive to these 4 parameters. However, if these 4 parameters are not considered, the sub-aperture center frequencies form only a translated or rotated square, which cannot satisfy the frequency mapping of the LED array at an arbitrary pose. If these 4 parameters are taken into account, a satisfactory alignment of the sub-aperture center frequencies is achieved even if the fully accurate error parameters are not obtained.
To verify the universality and robustness of the proposed method, additive Gaussian white noise is applied to the 100 groups of LR images described earlier, to simulate the noise of an actual acquisition system. The standard deviation of the noise is increased from 0 to 0.08 with a step size of 0.01, and the resulting 900 sets of images are reconstructed using the aforementioned algorithms. The reconstructions under varying noise conditions are evaluated using the average SSIM and RMSE values, compared in Fig. 7: Fig. 7(a) and (b) show the average SSIM values of the amplitude and phase reconstructions, respectively, while Fig. 7(c) and (d) show the corresponding average RMSE values. For the amplitude reconstructions in Fig. 7(a) and (c), the proposed method clearly outperforms the other algorithms on both the SSIM and RMSE indicators, and the algorithms correcting 7 system error parameters (our method with or without the phase smoothing criterion) are also superior to those correcting only 4 position error parameters (pcFPM, SBC). The amplitude reconstruction quality degrades noticeably as the noise standard deviation increases. The phase recovery results in Fig. 7(b) and (d) vary less with noise than the amplitude; our method, both with and without the criterion, outperforms the original FPM, pcFPM, and SBC. Comparing the results with and without the phase smoothing criterion, the phase result with the criterion is slightly better at low noise levels, whereas at high noise levels the result without the criterion is slightly superior. This deviation in the search results may be attributed to the influence of noise on the smoothness of the reconstructed phase.
Nevertheless, based on the amplitude and phase results, increasing the number of system error parameters has a significant positive impact on the reconstruction, and adding the phase smoothing criterion to the cost function also has a positive effect. As for computation time, the optimization of 7 parameters (49.37s) and of 4 parameters (pcFPM, 28.85s; SBC, 43.26s) are on the same level, and all can be executed within a minute on a desktop computer (Intel Core i5-12600KF, 3.7GHz). Therefore, our method improves the accuracy of the algorithm without significantly increasing its running time. Moreover, once corrected, the system error parameters can be reused for subsequent processing, which further reduces the algorithm’s computational overhead.
4. Experiments
This section evaluates the effectiveness of the proposed algorithm by applying the aforementioned methods to experimental data obtained from an actual LED array microscope. The experimental system is a modified commercial inverted microscope whose light source has been replaced with a commercial LED array controlled by an Arduino Mega 2560 microcontroller. Each LED element is illuminated sequentially, and a scientific CMOS (sCMOS) camera (Dhyana 400BSI V2, Tucsen Photonics Co., Ltd, Fujian, China) with a pixel size of $6.5\mu m$ is synchronously controlled to record sample images. The system parameters match those described in Section 3: the magnification of the objective lens is $4\times$, the numerical aperture is 0.13, the distance between adjacent LED elements in the commercial LED array is $2mm$, and the distance between the LED array and the sample is set at $50mm$. Green LEDs are used for illumination, but their central wavelength is unspecified. Before evaluating the effectiveness of the proposed method in correcting 3D misalignment of the LED array, we align the LED array on the inverted microscope relatively accurately, but an unavoidable positional deviation in 6 degrees of freedom still remains in the system.
A USAF resolution target (Edmund Optics Inc., Barrington, NJ, USA) is then imaged with the constructed LED array microscope. The camera captures 225 LR intensity images, which are used to reconstruct the target with the various algorithms. Fig. 8(a1) shows the first LR image, while Fig. 8(a2) and (a3) display the enlarged regions within the boxes in Fig. 8(a1) and (a2), respectively. Fig. 8(b1)-(b3) display the amplitude reconstruction results of the original FPM. The reconstructed image suffers from poor quality due to the absence of system error correction, resulting in artifacts, as shown in Fig. 8(b2)-(b3). The results of the pcFPM and SBC reconstructions are presented in Fig. 8(c1)-(c3) and Fig. 8(d1)-(d3), respectively. Some artifacts can still be seen in the 8th and 9th line pairs in Fig. 8(c3) and (d3). We note that the 9-3 line pairs are clearly resolved in the original literature on pcFPM and SBC; however, because of the possible positional deviation in six degrees of freedom in our system, artifacts remain in the reconstruction results of these two methods. Both pcFPM and SBC consider only four positional error parameters; when multiple system errors are present, their mapping capability in the Fourier domain is insufficient. Fig. 8(e1)-(e3) show the reconstruction results obtained after accounting for the seven system error parameters without the phase smoothing criterion, while Fig. 8(f1)-(f3) show the reconstructed images with the phase smoothing criterion. In contrast to the original FPM, pcFPM, and SBC, the reconstructions considering multiple system errors (compare Fig. 8(e3)-(f3) with Fig. 8(b3)-(d3)) show a clear improvement in image quality. Comparing the results with and without the phase smoothing criterion (Fig. 8(e3) and (f3)) shows that the phase smoothing criterion enhances the optimization algorithm's ability to locate the system error parameters, resulting in higher-quality reconstructed images.
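The exact form of the phase smoothing criterion is defined earlier in the paper; as a rough illustration of the idea only, a total-variation-style smoothness measure penalizes the high-frequency phase artifacts that appear when the error parameters are wrong, so adding it to the cost function steers the parameter search toward artifact-free reconstructions. The `phase_smoothness` function below is a hypothetical stand-in, not the paper's exact criterion.

```python
import numpy as np

def phase_smoothness(phase):
    """Mean gradient magnitude of a reconstructed phase map.

    A hypothetical stand-in for a phase smoothing term: a smoother
    (artifact-free) phase gives a smaller value, so the quantity can be
    added to the cost that the error-parameter search minimizes.
    """
    gy, gx = np.gradient(phase)
    return np.mean(np.hypot(gx, gy))

# A smooth phase ramp scores lower than the same ramp with noisy artifacts.
rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
noisy = smooth + 0.5 * rng.standard_normal((64, 64))
```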
Moreover, a stained Corpus ventriculi section is tested to verify the method on a biological specimen. Fig. 9(a1) displays the first LR image captured by the camera, while Fig. 9(b1)-(f1) present the amplitudes reconstructed by the various methods, and Fig. 9(b2)-(f2) show the corresponding phase results. The amplitude results of the different methods show little difference and all perform well on the biological sample, as shown in Fig. 9(b1)-(f1). However, a significant disparity among the reconstructed phase results can be noticed by comparing Fig. 9(b2)-(f2). Owing to its inadequate system error correction capability, the original FPM yields notable inaccuracies in the reconstructed phase (Fig. 9(b2)). The phase results reconstructed by pcFPM and SBC (Fig. 9(c2) and (d2)) exhibit limited contrast and relatively similar features. Furthermore, significant phase mutations (such as the black dots in the figure) appear in both methods' phase reconstructions, which can be attributed to their consideration of only four positional errors and the resulting distorted mapping in the Fourier domain. The reconstructions without and with the phase smoothing criterion in Fig. 9(e1)-(e2) and (f1)-(f2) yield higher contrast and better results than those of the three previous methods in Fig. 9(b2)-(d2). Fig. 9(f2), obtained with the phase smoothing criterion in the cost function, shows the highest contrast among all the reconstruction methods.
5. Conclusion
This paper addresses the problem of comprehensive error parameters in LED array microscopy, including the central wavelength error and the positional error in six degrees of freedom. A system error correction algorithm with a phase smoothing criterion is proposed, which uses a simulated annealing (SA) algorithm to search for the seven error parameters in the spatial domain. At the beginning of the algorithm, the initial values of $\Delta x$ and $\Delta y$ are determined by simple image preprocessing, and their search range is thereby reduced. These two error parameters have the greatest influence on the sub-apertures' center frequencies, so this initialization greatly helps the convergence of the algorithm. The simulation and experimental results show that when comprehensive system errors exist in an LED array microscope, the reconstruction results obtained by our method are more accurate than those of existing methods, without a significant increase in computational time. Comparing the reconstruction results with and without the phase smoothing criterion in the cost function supports its effectiveness. Additionally, the search only needs to be performed once; the system error parameters it finds can be reused for various downstream applications as long as the system does not change, significantly reducing subsequent computational time. The method in this paper may therefore be especially suitable for unskilled users performing system correction at the software level. Of course, for skilled users with access to more accurate mechanical alignment methods and spectrometers to measure the central wavelength, our method can still complement these approaches.
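The structure of such a search can be sketched as a minimal simulated annealing loop over the seven error parameters. This is a generic SA skeleton, not the paper's implementation: in practice `cost` would run one FPM reconstruction with the candidate parameters and add the phase smoothing term, and the step sizes, cooling schedule, and toy quadratic cost below are purely illustrative.

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.95, iters=300):
    """Minimal SA search over error parameters (e.g. dx, dy, dz, three
    rotations, central wavelength). Worse candidates are accepted with a
    temperature-dependent probability to escape local minima."""
    x, fx = list(x0), cost(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(iters):
        # Perturb each parameter within its current step range.
        cand = [xi + random.uniform(-s, s) for xi, s in zip(x, step)]
        fc = cost(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # cool down: accept fewer uphill moves over time
    return best, fbest

# Toy quadratic cost standing in for the reconstruction-based cost.
target = [0.1, -0.2, 0.05, 0.01, -0.01, 0.02, 0.53]
cost = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
random.seed(1)
best, fbest = simulated_annealing(cost, [0.0] * 7, [0.1] * 7)
```

Narrowing the search range of $\Delta x$ and $\Delta y$ by image preprocessing, as described above, corresponds to shrinking their entries in `step` and starting `x0` near the preprocessed estimates.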
Funding
National Natural Science Foundation of China (61875160).
Acknowledgments
This study was funded by the National Natural Science Foundation of China (Grant No. 61875160). The authors would like to thank the reviewers and the associate editor for their comments that contributed to meaningful improvements in this paper. The authors would also like to thank Hao Li and Prof. Jinfeng Peng for their help in revising this paper.
Disclosures
The authors declare no conflicts of interest.
Data Availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
1. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]
2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]
3. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]
4. X. Chen, Y. Zhu, M. Sun, D. Li, Q. Mu, and L. Xuan, “Apodized coherent transfer function constraint for partially coherent Fourier ptychographic microscopy,” Opt. Express 27(10), 14099–14111 (2019). [CrossRef]
5. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]
6. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an led array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]
7. B. Lee, J. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]
8. J. Sun, Q. Chen, J. Zhang, Y. Fan, and C. Zuo, “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365–3368 (2018). [CrossRef]
9. J. Zhang, T. Xu, Z. Shen, Y. Qiao, and Y. Zhang, “Fourier ptychographic microscopy reconstruction with multiscale deep residual network,” Opt. Express 27(6), 8612–8625 (2019). [CrossRef]
10. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]
11. Z. Yang, L. Zhang, N. Lü, H. Wang, Z. Zhang, and L. Yuan, “Progress of three-dimensional, label-free quantitative imaging of refractive index in biological samples,” Chin. J. Laser 49, 0507201 (2022). [CrossRef]
12. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]
13. J. Li, N. Zhou, J. Sun, S. Zhou, Z. Bai, L. Lu, Q. Chen, and C. Zuo, “Transport of intensity diffraction tomography with non-interferometric synthetic aperture for three-dimensional label-free microscopy,” Light: Sci. Appl. 11(1), 154 (2022). [CrossRef]
14. Y. Rivenson, K. de Haan, W. D. Wallace, and A. Ozcan, “Emerging advances to transform histopathology using virtual staining,” BME Front. 2020, 9647163 (2020). [CrossRef]
15. D. Ryu, J. Kim, D. Lim, H.-S. Min, I. Y. Yoo, D. Cho, and Y. Park, “Label-free white blood cell classification using refractive index tomography and deep learning,” BME Front. 2021, 9893804 (2021). [CrossRef]
16. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]
17. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]
18. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]
19. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018). [CrossRef]
20. Y. Zhu, M. Sun, P. Wu, Q. Mu, L. Xuan, D. Li, and B. Wang, “Space-based correction method for led array misalignment in Fourier ptychographic microscopy,” Opt. Commun. 514, 128163 (2022). [CrossRef]
21. H. Lee, B. Chon, and H. Ahn, “Rapid misalignment correction method in reflective Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Lasers Eng. 138, 106418 (2021). [CrossRef]
22. C. Zheng, S. Zhang, D. Yang, G. Zhou, Y. Hu, and Q. Hao, “Robust full-pose-parameter estimation for the led array in Fourier ptychographic microscopy,” Biomed. Opt. Express 13(8), 4468–4482 (2022). [CrossRef]
23. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]