LED array microscopy system correction method with comprehensive error parameters optimized by phase smoothing criterion

Open Access

Abstract

LED array microscopy is a novel computational imaging technique that achieves two-dimensional (2D) phase imaging and three-dimensional (3D) refractive index imaging with both high resolution and a large field of view. Although its experimental setup is simple, errors in the LED array position and in the central wavelength of the light source markedly degrade the quality of the reconstructed results. To solve this problem, comprehensive error parameters optimized by a phase smoothing criterion are put forward in this paper. The central wavelength error and a 3D misalignment model with six degrees of freedom of the LED array are treated as the comprehensive error parameters when the spatial position and optical features of an arbitrarily placed LED array are unknown. A phase smoothing criterion is also introduced into the cost function for optimizing the comprehensive error parameters to improve convergence. Simulation and experimental results show that, compared with current system correction methods, the proposed method achieves the best reconstruction accuracy and can be readily applied to an LED array microscope system with unknown positional and optical features of the LED array.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Traditional microscopic imaging systems can capture objects’ two-dimensional (2D) intensity information, but lose other important optical information, such as phase, polarization, and spectrum. In recent years, computational imaging technologies, such as quantitative phase imaging and polarization imaging, have enhanced information acquisition by jointly designing light sources, optical devices, detectors, and post-processing algorithms in the imaging chain, attracting widespread attention from researchers [1]. LED array microscopy, as a typical computational imaging technology, replaces the light source of a traditional microscopic imaging system with a programmable LED array. By collecting multiple images of a single object under angularly varying illumination and combining them with specific inversion algorithms, 2D phase imaging and three-dimensional (3D) refractive index imaging can be achieved with both high resolution and a large field of view [2–13]. Most significantly, this technology can be label-free, which is essential for biological studies where fluorescent probes or staining agents cannot be used or are difficult to apply [14,15].

In 2013, Zheng et al. introduced LED array microscopy to iteratively stitch low-resolution (LR) images acquired under angle-varied illumination in the Fourier domain to create high-resolution (HR) complex images of specimens, and named it Fourier ptychography (FP) or Fourier ptychographic microscopy (FPM) [2]. Since then, numerous studies have been conducted to improve the original FPM method and widen its range of applications. Some methods have incorporated the aberrations present in the imaging system and recovered aberration information when reconstructing HR complex images, resulting in better reconstruction accuracy for FPM [3–5]. To reduce image acquisition time and improve data acquisition efficiency, approaches such as multi-coding illumination [6] and single-shot techniques [7,8] have been proposed. Moreover, some studies have focused on FPM algorithm improvement [9], noise suppression [10], and other related aspects. Furthermore, in recent years, utilizing the theory of optical diffraction tomography and drawing inspiration from FPM, researchers have achieved 3D refractive index imaging of biological samples using LED array microscopy [12,13]. This innovative imaging technology holds great potential for various biological and medical applications.

The experimental setup of LED array microscopy is very simple: only the light source of a commercial microscope needs to be replaced with a programmable LED array. However, the various post-processing reconstruction algorithms are sensitive to system errors in the experimental system. The misalignment of the LED array, i.e., the deviation between its actual and ideal position, is a serious problem, because parallel illumination at different angles is provided by LED elements at different positions. When the position of an LED element deviates, the corresponding sub-aperture position in the Fourier domain is also misaligned, decreasing the quality of the reconstructed image and even producing serious artifacts. Although physical alignment can remove some apparent misalignment, there are still positional errors that are difficult to adjust or measure in the system. Therefore, in response to the misalignment problem of the LED array, several software-level correction algorithms have been proposed. According to their search domains, these algorithms can be divided into Fourier domain search and spatial domain search. Among Fourier domain search algorithms, some studies use the simulated annealing (SA) algorithm to search for the position of each sub-aperture in the Fourier domain, minimizing an amplitude-based cost function [16,17]. These methods have a long search time and are prone to falling into local minima because of the large number of sub-apertures to be searched. Moreover, no position misalignment model is established for the LED array, so the optimization search must be repeated during each reconstruction, resulting in high computational cost. Subsequently, Sun et al. established a position misalignment model for the LED array, using four position error parameters $(\Delta x,\Delta y,\Delta z,\theta )$ to describe the translation deviation in three directions and the rotation deviation around the $z$-axis. After searching for the sub-aperture positions of the bright-field images in the Fourier domain with the SA algorithm, these four error parameters are obtained by nonlinear regression, and then all images are iterated for reconstruction. This method, named pcFPM, improves iteration efficiency and correction accuracy [18]. Among spatial domain search algorithms, Zhou et al. proposed a method named misalignment-correction FPM (mcFPM) [19]. It uses the SA algorithm to directly search for two translation parameters $(\Delta x,\Delta y)$ and one rotation parameter $(\theta )$ in the spatial domain. Afterwards, Zhu et al. used particle swarm optimization to search for four parameters to correct positional errors and named the method SBC [20]. The above methods were all implemented on transmission LED array microscopy, and these position error correction methods can also be transplanted to reflective LED array microscopy to improve reconstruction accuracy [21]. However, the LED array position misalignment models used in the above methods assume that the LED array is perpendicular to the optical axis, with only 3 or 4 degrees of freedom of position deviation, which is sufficient in some cases. For a wider range of application scenarios, such as modifying an inverted microscope where the LED array is suspended above the sample, the system may exhibit positional deviations in 6 degrees of freedom, which makes the above methods less applicable. Therefore, Zheng et al. proposed the full-pose-parameter model of the LED array and used a physics-based model, built on general knowledge of the microscope and the brightfield-to-darkfield boundaries of the images, to solve for the 6 position error parameters of the misplaced LED array [22]. In addition, the center wavelength of commercial LED arrays has fabrication errors; some studies plot a convergence index as a function of the LED wavelength and maximize it to estimate the LED center wavelength [23]. However, when there are too many error parameters in the system, this approach is also greatly limited for inverting the comprehensive error parameters.

To address the issue of comprehensive error parameters when the spatial position and optical features of an arbitrarily placed LED array are unknown, a 3D misalignment model of the LED array is used in this paper. The full freedom of spatial motion of the LED array (3 translational and 3 rotational degrees of freedom) is considered. At the same time, the center wavelength error of the LED light source is taken into account, and these 7 error parameters constitute all the errors generated by the LED array. The SA algorithm is then used to search for these 7 parameters in the spatial domain to minimize the cost function. Unlike previous cost functions, which contain only an amplitude data fidelity term, we add a phase smoothing criterion to the cost function, which uses the prior knowledge that the reconstructed phase is smooth. In addition, before the algorithm starts, a simple image preprocessing step provides initial values and value ranges for some parameters, reducing the search range and accelerating convergence. Validation on simulation and experimental data shows that optimized reconstruction with 7 parameters yields higher quality than reconstruction with only 4 parameters, without significantly increasing the algorithm runtime. At the same time, by comparing the results with and without the phase smoothing criterion in the cost function, we find that reconstructions with the phase smoothing criterion also perform better.

2. Principle

2.1 Comprehensive error parameters in LED array microscopy

This section describes the comprehensive error parameters in LED array microscopy and illustrates the necessity of correcting these system errors, using FP reconstruction as an example. An LED array microscopy system typically comprises an LED array, a microscope with a low-NA objective lens, and a monochromatic camera. The LED elements on the array light up sequentially from the center towards the periphery, illuminating the specimen from various angles. For the LED element located in row $m$ and column $n$, the wave vector of the illumination light is represented as ${{\mathbf {k}}_{m,n}} = \left ( {k_x^{m,n},k_y^{m,n}} \right )$, and the LR intensity image ${I_{m,n}}$ of the specimen captured by the camera can be described as:

$${I_{m,n}}\left( {\mathbf{r}} \right) = {\left| {o\left( {\mathbf{r}} \right){e^{ - i2\pi {{\mathbf{k}}_{m,n}}{\mathbf{r}}}} \otimes p\left( {\mathbf{r}} \right)} \right|^2} = {\left| {{\mathscr{F} ^{ - 1}}\left\{ {O\left( {{\mathbf{k}} - {{\mathbf{k}}_{m,n}}} \right)P\left( {\mathbf{k}} \right)} \right\}} \right|^2}$$
Where, ${\mathbf {r}}=\left ( {x,y} \right )$ denotes the coordinate related to spatial domain; $\otimes$ indicates the convolutional operator; ${\mathbf {k}} = \left ( {{k_x},{k_y}} \right )$ represents Fourier domain coordinate; $O\left ( {\mathbf {k}} \right ) = \mathscr {F} \left \{ {o\left ( {\mathbf {r}} \right )} \right \}$ refers to the sample Fourier spectrum, and $P\left ( {\mathbf {k}} \right ) = \mathscr {F} \left \{ {p\left ( {\mathbf {r}} \right )} \right \}$ represents the system’s pupil function. Additionally, $\mathscr {F}$ and ${\mathscr {F} ^{ - 1}}$ denote the 2D Fourier transform and 2D inverse Fourier transform, respectively. The FPM algorithms use an iterative approach to reconstruct HR complex images of the sample by dynamically swapping between the Fourier and spatial domains, which involves stitching together these LR images in the Fourier domain. Eq. (1) indicates that different illumination wave vectors correspond to specific sub-aperture positions in the Fourier domain, and hence precise knowledge of these aperture positions is essential to achieve high-fidelity reconstruction.
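As a concrete illustration of Eq. (1), a minimal numerical sketch is given below. It assumes NumPy, a centered (fftshifted) HR spectrum $O$, a pupil $P$ of the LR patch size, and an illumination offset already converted to Fourier-domain pixels; the function name and the pixel-offset convention are illustrative rather than part of the original method.

```python
import numpy as np

def simulate_lr_intensity(O, P, ky_px, kx_px):
    """Sketch of Eq. (1): one LR intensity image from the centered HR sample
    spectrum O and pupil P, for an illumination wave vector whose offset is
    already expressed in Fourier-domain pixels (ky_px, kx_px)."""
    n = P.shape[0]                       # LR patch size in pixels
    cy, cx = np.array(O.shape) // 2      # center of the HR spectrum
    top = cy - n // 2 + ky_px            # shift the pupil window by the
    left = cx - n // 2 + kx_px           # illumination wave vector (sub-aperture)
    sub_spectrum = O[top:top + n, left:left + n] * P
    lr_field = np.fft.ifft2(np.fft.ifftshift(sub_spectrum))
    return np.abs(lr_field) ** 2         # the camera records intensity only
```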

In LED array microscopy, the wave vector $\left ( {k_x^{m,n},k_y^{m,n}} \right )$ of the angle-varied illuminating light, which defines the sub-aperture position, is determined by the central wavelength $\lambda$ and spatial position $\left ( {{x_{m,n}},{y_{m,n}},{z_{m,n}}} \right )$ of the LED, as illustrated in Eq. (2).

$$\begin{aligned} k_x^{m,n} & ={-} \frac{{2\pi }}{\lambda }\frac{{{x_c} - {x_{m,n}}}}{{\sqrt {{{\left( {{x_c} - {x_{m,n}}} \right)}^2} + {{\left( {{y_c} - {y_{m,n}}} \right)}^2} + z_{m,n}^2} }} \\ k_y^{m,n} & ={-} \frac{{2\pi }}{\lambda }\frac{{{y_c} - {y_{m,n}}}}{{\sqrt {{{\left( {{x_c} - {x_{m,n}}} \right)}^2} + {{\left( {{y_c} - {y_{m,n}}} \right)}^2} + z_{m,n}^2} }} \end{aligned}$$
Where $\left ( {{x_c},{y_c}} \right )$ represents the center position of each sub-region. If the LED positions are misaligned, or if there is no prior knowledge of the LED center wavelength, the sub-aperture positions will be mismatched during the stitching process, reducing the reconstruction quality. Therefore, a comprehensive error parameter model for LED array microscopy is set up, in which a 3D position misalignment model of the LED array is used [22]. In the 2D case, the LED array is assumed to lie on a horizontal plane perpendicular to the optical axis, and 3 parameters $\left ( {\Delta x,\Delta y,\theta } \right )$ are used to characterize the actual position of the LED, as shown in Fig. 1(a). After accounting for movement along the $z$-axis, the model includes a 4th parameter $\Delta z$, so it describes only translation in 3 directions and yaw around the $z$-axis of the LED array, as shown in Fig. 1(b). However, these parameters do not consider roll around the $x$-axis and pitch around the $y$-axis. To accurately describe the spatial position of each LED in a wider range of LED array microscopy setups, such as when an LED array is hung above an inverted microscope, it is essential to consider all 6 degrees of freedom. This paper employs 6 position error parameters to describe the position misalignment of the LED array, consisting of 3 translation parameters $\left ( {\Delta x,\Delta y,\Delta z} \right )$ and 3 rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$. Together with the center wavelength error $\Delta \lambda$, these 7 parameters characterize all the system errors that can be generated by the LED array.

Fig. 1. The schematic diagram of a misaligned LED array. (a) 2D misalignment model of the LED array. (b) 3D misalignment model of the LED array.

Assume that the distance between adjacent LED elements in a commercial LED array is constant and denoted as $d$. Then, for any posture of the LED array in Fig. 1(b), according to the coordinate transformation method, the actual spatial position $\left ( {{{x'}_{m,n}},{{y'}_{m,n}},{{z'}_{m,n}}} \right )$ of the LED element in the $m$-th row and $n$-th column can be obtained as:

$$\left[ {\begin{array}{c} {{{x'}_{m,n}}}\\ {{{y'}_{m,n}}}\\ {{{z'}_{m,n}}} \end{array}} \right] = {R_x}{R_y}{R_z}\left[ {\begin{array}{c} {md}\\ {nd}\\ 0 \end{array}} \right] + \left[ {\begin{array}{c} {\Delta x}\\ {\Delta y}\\ {h + \Delta z} \end{array}} \right]$$
Where $h$ represents the nominal distance from the LED array to the sample. The matrices ${R_x}$, ${R_y}$, and ${R_z}$ represent rotation transformation around $x$, $y$ and $z$-axes, respectively. These matrices can be expressed as:
$${R_x} = \left[ {\begin{array}{ccc} 1 & 0 & 0\\ 0 & {\cos {\theta _x}} & {\sin {\theta _x}}\\ 0 & { - \sin {\theta _x}} & {\cos {\theta _x}} \end{array}} \right] \\ {R_y} = \left[ {\begin{array}{ccc} {\cos {\theta _y}} & 0 & { - \sin {\theta _y}}\\ 0 & 1 & 0\\ {\sin {\theta _y}} & 0 & {\cos {\theta _y}} \end{array}} \right] \\ {R_z} = \left[ {\begin{array}{ccc} {\cos {\theta _z}} & {\sin {\theta _z}} & 0\\ { - \sin {\theta _z}} & {\cos {\theta _z}} & 0\\ 0 & 0 & 1 \end{array}} \right]$$
Note that in Eq. (3), the LED array rotates first around the $z$-axis, then the $y$-axis, and finally the $x$-axis. If the rotation order is altered, the sequence of the three rotation transformation matrices in Eq. (3) will also change accordingly, and consequently the values of the 3 rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$ will change. To avoid confusion about the meaning of the parameters, this paper follows the rotation order specified in Eq. (3). Substituting Eq. (4) into Eq. (3), the actual spatial position of each LED element can be determined as:
$$\begin{aligned} {x'_{m,n}} & = md\cos {\theta _y}\cos {\theta _z} + nd\cos {\theta _y}\sin {\theta _z} + \Delta x \\ {y'_{m,n}} & = md\left( {\sin {\theta _x}\sin {\theta _y}\cos {\theta _z} - \cos {\theta _x}\sin {\theta _z}} \right) + nd\left( {\sin {\theta _x}\sin {\theta _y}\sin {\theta _z} + \cos {\theta _x}\cos {\theta _z}} \right) + \Delta y \\ {z'_{m,n}} & = md\left( {\cos {\theta _x}\sin {\theta _y}\cos {\theta _z} + \sin {\theta _x}\sin {\theta _z}} \right) + nd\left( {\cos {\theta _x}\sin {\theta _y}\sin {\theta _z} - \sin {\theta _x}\cos {\theta _z}} \right) + h + \Delta z \end{aligned}$$
In particular, when ${\theta _x}={\theta _y}={0^ \circ }$, Eq. (5) degenerates to the expression of the 2D position misalignment model. Therefore, the LED element positions obtained from the 3D position misalignment model can be used to correct the position errors of an LED array in any posture.
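For reference, a short sketch of Eqs. (3)-(5) is given below (NumPy assumed, angles in radians, rotation order $R_x R_y R_z$ as in Eq. (3); the function and variable names are illustrative).

```python
import numpy as np

def led_position(m, n, d, h, dx, dy, dz, theta_x, theta_y, theta_z):
    """Sketch of Eqs. (3)-(5): actual position of the (m, n) LED element of a
    misaligned array. Angles are in radians; the rotation order Rx @ Ry @ Rz
    matches Eq. (3)."""
    cx_, sx_ = np.cos(theta_x), np.sin(theta_x)
    cy_, sy_ = np.cos(theta_y), np.sin(theta_y)
    cz_, sz_ = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx_, sx_], [0, -sx_, cx_]])
    Ry = np.array([[cy_, 0, -sy_], [0, 1, 0], [sy_, 0, cy_]])
    Rz = np.array([[cz_, sz_, 0], [-sz_, cz_, 0], [0, 0, 1]])
    nominal = np.array([m * d, n * d, 0.0])     # ideal position on the array plane
    offset = np.array([dx, dy, h + dz])         # translation errors plus nominal height
    return Rx @ Ry @ Rz @ nominal + offset
```

For example, with all lengths in millimetres, `led_position(3, -2, 2.0, 50.0, 1.5, 1.5, 1.5, 0.087, 0.087, 0.087)` gives the perturbed position of the LED in row 3, column −2 for error values similar to those used later in Section 3.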

Additionally, accounting for the center wavelength error $\Delta \lambda$ of the LED, the illumination wave vector of each LED element can be expressed as:

$$\begin{aligned} k_x^{m,n} & ={-} \frac{{2\pi }}{{\lambda + \Delta \lambda }}\frac{{{x_c} - {{x'}_{m,n}}}}{{\sqrt {{{\left( {{x_c} - {x'_{m,n}}} \right)}^2} + {{\left( {{y_c} - {y'_{m,n}}} \right)}^2} + z_{m,n}^{'2}} }} \\ k_y^{m,n} & ={-} \frac{{2\pi }}{{\lambda + \Delta \lambda }}\frac{{{y_c} - {{y'}_{m,n}}}}{{\sqrt {{{\left( {{x_c} - {x'_{m,n}}} \right)}^2} + {{\left( {{y_c} - {y'_{m,n}}} \right)}^2} + z_{m,n}^{'2}} }} \end{aligned}$$
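Continuing the sketch above, the helper below maps an actual LED position and the central-wavelength error to the illumination wave vector of Eq. (6); consistent length units are assumed and the names are again illustrative.

```python
import numpy as np

def illumination_wave_vector(led_xyz, xc, yc, wavelength, d_lambda):
    """Sketch of Eq. (6): illumination wave vector (kx, ky) for one LED at
    position led_xyz = (x', y', z'), a sub-region center (xc, yc), and a
    central-wavelength error d_lambda. All lengths must share the same unit."""
    x, y, z = led_xyz
    r = np.sqrt((xc - x) ** 2 + (yc - y) ** 2 + z ** 2)   # LED-to-subregion distance
    factor = -2 * np.pi / (wavelength + d_lambda)
    return factor * (xc - x) / r, factor * (yc - y) / r
```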
To illustrate the impact of these system errors on the original FPM reconstruction, a simulated example is provided in Fig. 2. Specifically, Fig. 2(a1)-(a2) display the HR amplitude images reconstructed without system errors and with a center wavelength error of the LED, respectively. When position errors in all degrees of freedom exist, Fig. 2(a3)-(a5) show the HR amplitude reconstruction results using 0 position error parameters, 4 simulated position error parameters $(\Delta x,\Delta y,\Delta z,\theta _z)$, and 6 simulated position error parameters $(\Delta x,\Delta y,\Delta z,\theta _x,\theta _y,\theta _z)$, respectively. Corresponding to Fig. 2(a1)-(a5), Fig. 2(b1)-(b5) show the reconstructed HR phase images, while Fig. 2(c1)-(c5) illustrate the sub-aperture center frequencies corresponding to different LEDs in the Fourier domain. The red dots and green circles in Fig. 2(c1)-(c5) represent the actual center frequency of each LED under the erroneous condition and the center frequency used for reconstruction, respectively.

Fig. 2. A simulated FPM reconstruction example. (a1) The HR amplitude reconstruction results with no system error. (a2) The HR amplitude reconstruction results with a central wavelength error. (a3)-(a5) When the full degrees of freedom position error exist, the HR amplitude reconstruction results using 0 position error parameters, 4 simulated position error parameters, and 6 simulated position error parameters, respectively. (b1)-(b5) The corresponding HR phase reconstruction images for (a1)-(a5). (c1)-(c5) The sub-aperture central frequencies in the Fourier domain, where the red dot indicates the actual center frequency of each LED element with simulated errors, and the green circle represents the center frequency used for FPM reconstruction.

In the absence of system errors (Fig. 2(a1)-(c1)) or when the system errors are precisely known (Fig. 2(a5)-(c5)), the sub-aperture center frequencies align accurately, thereby producing satisfactory reconstruction results. When there is a deviation in the central wavelength of the LED, the reconstructed center frequencies misalign and the cutoff frequency of $P\left ( {\mathbf {k}} \right )$ is altered, which adversely affects the mapping in the Fourier domain and reduces the reconstruction accuracy, as shown in Fig. 2(a2)-(c2). Positional errors in the LED array cause substantial distortion of the center frequencies during the generation and reconstruction of the images, and may even affect the bright- and dark-field boundaries, which can lead to significant inaccuracies in the reconstructed results, as illustrated in Fig. 2(a3)-(c3). If only 4 position error parameters are corrected, the reconstructed image quality is greatly improved, but the results are still not satisfactory because of the remaining misalignment of the center frequencies, as shown in Fig. 2(a4)-(c4). The two additional position error parameters, $\theta _x$ and $\theta _y$, control the shape of the Fourier domain map, making it more general than a merely translated or rotated square. In summary, it is necessary to correct the comprehensive error parameters in LED array microscopy.

2.2 System error correction algorithm with phase smoothing criterion

In this section, we propose a system error correction algorithm with phase smoothing criterion based on the comprehensive error parameters model established in section 2.1. The spatial domain search correction is accomplished using the SA algorithm to minimize the cost function involving 7 error parameters.

In contrast to previous works, where the cost function considered only the data fidelity term of the intensity images, we include a regularization term based on the phase smoothing criterion and utilize the continuous, smooth variation of the phase image as prior knowledge for the optimization. Before the correction algorithm starts, a simple image preprocessing step is performed in which some of the initial values are adaptively selected, to reduce the influence of human experience and ensure faster convergence. Fig. 3 illustrates the algorithm flow and the detailed steps for implementing the proposed algorithm.

Fig. 3. Flowchart of the proposed algorithm.

Step 1: The algorithm begins by performing a simple preprocessing step, which involves computing the total intensity value of each LR image. An adaptive threshold using the Otsu algorithm is then calculated, akin to the threshold segmentation of the image. The LR image is roughly classified as a bright field (with high total intensity value) or a dark field (with low total intensity value) image. The last bright field image in the image sequence, numbered $N$, is selected. Then, the result of the threshold segmentation is used to create a matrix that maps to the corresponding LED positions and is used for circle finding via the Hough transform. The initial values $\Delta {x_0}$ and $\Delta {y_0}$ can be obtained from the distance between the center of the circle and the center of the matrix, whereas the remaining 5 system error parameters are initialized to 0.
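A simplified sketch of this preprocessing step is given below. It assumes the LR images are supplied together with the (row, column) grid index of the LED that produced each of them (since the LEDs are lit from the center outward, this mapping is an assumed input), and it approximates the Hough-circle center by the centroid of the bright-field mask; the row/column-to-$(x, y)$ mapping and sign conventions are also assumptions of this sketch.

```python
import numpy as np
from skimage.filters import threshold_otsu

def preprocess(lr_images, grid_rc, d, grid_size=15):
    """Simplified sketch of Step 1. lr_images: LR image stack in acquisition
    order; grid_rc: (row, col) grid index of the LED behind each image (an
    assumed input); d: LED pitch in mm. The paper finds the bright-field
    circle with a Hough transform; its center is approximated here by the
    centroid of the bright-field mask."""
    totals = lr_images.sum(axis=(1, 2))
    totals = totals / totals.max()                       # normalized total intensities
    thr = threshold_otsu(totals)                         # adaptive bright/dark threshold
    bright = totals > thr
    N = np.flatnonzero(bright).max() + 1                 # number of the last bright-field image
    mask = np.zeros((grid_size, grid_size), bool)
    mask[grid_rc[bright, 0], grid_rc[bright, 1]] = True  # map bright LEDs onto the grid
    rows, cols = np.nonzero(mask)
    center = (grid_size - 1) / 2.0
    dx0 = (rows.mean() - center) * d                     # offset of the circle center from
    dy0 = (cols.mean() - center) * d                     # the matrix center, in mm
    return dx0, dy0, N
```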

Step 2: The sample Fourier spectrum ${O_j}\left ( {\mathbf {k}} \right )$ and pupil function ${ P_j}\left ( {\mathbf {k}} \right )\left ( {j = 0} \right )$ are initialized. Typically, the Fourier transform of the up-sampled bright-field LR image is employed as the initial sample Fourier spectrum. In addition, the initial pupil function is defined as a circular low-pass filter with a cutoff frequency of ${2\pi \textrm{NA}}/{\lambda }$, where NA is the numerical aperture of the objective lens.

Step 3: Calculate the wave vector ${{\mathbf {k}}_{m,n}}$ of the illumination light at different angles using Eq. (6), and generate the corresponding LR image estimate ${\psi _{m,n}}\left ( {\mathbf {r}} \right )$ from ${O_j}\left ( {\mathbf {k}} \right )$:

$${\psi _{m,n}}\left( {\mathbf{r}} \right) = {\mathscr{F} ^{ - 1}}\left\{ {{O_j}\left( {{\mathbf{k}} - {{\mathbf{k}}_{m,n}}} \right){P_j}\left( {\mathbf{k}} \right)} \right\}$$
Step 4: Utilize the captured images for intensity constraint:
$${\phi _{m,n}}\left( {\mathbf{r}} \right) = \sqrt {{I_{m,n}}\left( {\mathbf{r}} \right)} \frac{{{\psi _{m,n}}\left( {\mathbf{r}} \right)}}{{\left| {{\psi _{m,n}}\left( {\mathbf{r}} \right)} \right|}}$$
Where, ${\phi _{m,n}}\left ( {\mathbf {r}} \right )$ and ${\psi _{m,n}}\left ( {\mathbf {r}} \right )$ are LR complex images with and without intensity constraints, respectively. The updated Fourier spectrum of the LR image is ${\Phi _{m,n}}\left ( {\mathbf {k}} \right ) = \mathscr {F} \left \{ {{\phi _{m,n}}\left ( {\mathbf {r}} \right )} \right \}$.

Step 5: Apply the EPRY algorithm [3] for the simultaneous update of the sample Fourier spectrum and pupil function:

$$\begin{aligned} {O_{j + 1}}\left( {\mathbf{k}} \right) & = {O_j}\left( {\mathbf{k}} \right) + \frac{{P_j^ * \left( {{\mathbf{k}} + {{\mathbf{k}}_{m,n}}} \right)}}{{\left| {{P_j}\left( {\mathbf{k}} \right)} \right|_{\max }^2}}\left[ {\Phi \left( {{\mathbf{k}} + {{\mathbf{k}}_{m,n}}} \right) - {O_j}\left( {\mathbf{k}} \right){P_j}\left( {{\mathbf{k}} + {{\mathbf{k}}_{m,n}}} \right)} \right] \\ {P_{j + 1}}\left( {\mathbf{k}} \right) & = {P_j}\left( {\mathbf{k}} \right) + \frac{{O_j^ * \left( {{\mathbf{k}} - {{\mathbf{k}}_{m,n}}} \right)}}{{\left| {{O_j}\left( {\mathbf{k}} \right)} \right|_{\max }^2}}\left[ {\Phi \left( {\mathbf{k}} \right) - {O_j}\left( {{\mathbf{k}} - {{\mathbf{k}}_{m,n}}} \right){P_j}\left( {\mathbf{k}} \right)} \right] \end{aligned}$$
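A compact sketch of one inner update (Steps 3-5, Eqs. (7)-(9)) for a single LED is shown below. It assumes a centered HR spectrum, an integer-pixel sub-aperture offset, and NumPy, and it is an illustrative re-implementation rather than the authors' code.

```python
import numpy as np

def update_one_led(O, P, I_meas, ky_px, kx_px):
    """Sketch of Steps 3-5: forward model (Eq. 7), intensity constraint (Eq. 8)
    and EPRY update of spectrum and pupil (Eq. 9). O is the centered HR
    spectrum, P the pupil, I_meas the captured LR image; (ky_px, kx_px) is the
    sub-aperture offset in pixels (an assumed convention)."""
    n = P.shape[0]
    cy, cx = np.array(O.shape) // 2
    top, left = cy - n // 2 + ky_px, cx - n // 2 + kx_px
    sub = O[top:top + n, left:left + n]                        # O_j(k - k_mn) window
    psi = np.fft.ifft2(np.fft.ifftshift(sub * P))              # Eq. (7)
    phi = np.sqrt(I_meas) * psi / (np.abs(psi) + 1e-12)        # Eq. (8), intensity constraint
    Phi = np.fft.fftshift(np.fft.fft2(phi))
    diff = Phi - sub * P
    # EPRY update, Eq. (9); the maxima are taken over the sub-region here
    O[top:top + n, left:left + n] += np.conj(P) / (np.abs(P) ** 2).max() * diff
    P += np.conj(sub) / (np.abs(sub) ** 2).max() * diff
    return O, P
```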
Step 6: Repeat steps 3-5 until the images of all illumination angles are updated.

Step 7: After one round of update iteration, the SA method is employed to search 7 system error parameters in the spatial domain to minimize the cost function, which is defined as:

$$f = \mathop {\arg \min }_{\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \left\{ {D\left( {\mathbf{r}} \right) + \tau R\left( {\mathbf{r}} \right)} \right\}$$
Where, $D\left ( {\mathbf {r}} \right )$ is the data fidelity term, $R\left ( {\mathbf {r}} \right )$ is the regularization term based on the phase smoothing criterion, and $\tau > 0$ controls the weight of the regularization term and balances the two terms. The data fidelity term $D\left ( {\mathbf {r}} \right )$ is expressed as:
$$D\left( {\mathbf{r}} \right) = \sum_{m,n}^N {{{\left| {{I_{m,n}}\left( {\mathbf{r}} \right) - {I_{m,n}}\left( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right)} \right|}^2}}$$
Eq. (11) measures the difference between the first $N$ captured images and the LR images computed in the presence of system errors. The regularization term $R\left ( {\mathbf {r}} \right )$ in Eq. (10) is expressed as:
$$R\left( {\mathbf{r}} \right) = \sqrt {\sum_{d = 1}^2 {\left| {{\partial _d}angle\left\{ {o\left( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right)} \right\} > {\rm T}} \right|} }$$
Where, $o\left ( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right )$ represents the complex amplitude obtained after one round of updating the first $N$ images with the 7 error parameters based on ${O_j}\left ( {\mathbf {k}} \right )$, and ${\partial _d}angle\left \{ {\cdot } \right \}$ denotes the gradient of the phase component along the $d$-th direction. T represents the threshold, which is empirically chosen to be 0.3 in this paper. Initially, the center frequencies have a large misalignment and the phase image has large artifacts; as the center frequencies are gradually aligned, the artifacts in the phase image decrease. The regularization term is based on the prior knowledge that the phase of the reconstructed HR image is continuous and smooth. As a demonstration, the phase smoothing values in Fig. 2(b1)-(b5) are 1.34, 40.85, 403.48, 5.61 and 2.91, respectively. The smaller the value, the smoother the image, which is consistent with direct observation of the images.
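A sketch of the cost function of Eqs. (10)-(12) is given below. It interprets $\left | \cdot \right |$ in Eq. (12) as the number of pixels whose phase-gradient magnitude exceeds T, which is consistent with the phase smoothing values quoted above, and it assumes two hypothetical helpers, `simulate_lr_stack` and `reconstruct_phase`, that return the first $N$ simulated LR intensities and the phase of $o$ for a given parameter vector; the value of $\tau$ shown is arbitrary.

```python
import numpy as np

def cost(params, I_meas_stack, simulate_lr_stack, reconstruct_phase,
         tau=0.1, T=0.3):
    """Sketch of Eqs. (10)-(12) for params = (dx, dy, dz, tx, ty, tz, d_lambda).
    simulate_lr_stack and reconstruct_phase are assumed helper functions."""
    I_sim = simulate_lr_stack(params)
    D = np.sum((I_meas_stack - I_sim) ** 2)            # data fidelity, Eq. (11)
    phase = reconstruct_phase(params)
    gy, gx = np.gradient(phase)                        # phase gradients along 2 directions
    R = np.sqrt(np.count_nonzero(np.abs(gy) > T) +     # phase smoothing criterion,
                np.count_nonzero(np.abs(gx) > T))      # Eq. (12), interpreted as a pixel count
    return D + tau * R                                 # Eq. (10)
```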

In this way, the reconstructed image itself is taken into account during the search, which aids the search for the system error parameters. Subsequently, the 7 system error parameters found are used as initial values, all illumination wave vectors are updated according to Eq. (6), and the cutoff frequency of ${P_j}\left ( {\mathbf {k}} \right )$ is changed to ${2\pi \textrm{NA}}/{\left ( {\lambda + \Delta \lambda } \right )}$.
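The Step-7 search could be sketched as follows. The paper uses an SA search; SciPy's `dual_annealing` is used here only as a readily available stand-in, with bounds taken from the simulation ranges of Section 3 and the preprocessing-narrowed ranges for $\Delta x$ and $\Delta y$, and `cost_fn` standing for the cost function sketched above.

```python
from scipy.optimize import dual_annealing

def search_error_parameters(cost_fn, dx0, dy0):
    """Sketch of the Step-7 parameter search. cost_fn(params) evaluates Eq. (10)
    for params = (dx, dy, dz, theta_x, theta_y, theta_z, d_lambda)."""
    bounds = [(dx0 - 0.5, dx0 + 0.5), (dy0 - 0.5, dy0 + 0.5),  # dx, dy (mm), from preprocessing
              (-2.0, 2.0),                                      # dz (mm)
              (-5.0, 5.0), (-5.0, 5.0), (-5.0, 5.0),            # theta_x/y/z (degrees)
              (-10.0, 10.0)]                                    # d_lambda (nm)
    result = dual_annealing(cost_fn, bounds, maxiter=100)       # annealing-type global search
    return result.x
```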

Step 8: Repeat steps 3-7 $J$ times to optimize the comprehensive error parameters. Typically, the search process converges after slightly more than 10 iterations.

Step 9: Return the sample Fourier spectrum $O\left ( {\mathbf {k}} \right )$ and pupil function $P\left ( {\mathbf {k}} \right )$.

It should be noted that the search for system error parameters is a one-time process after building the system. After that, with the system settings unchanged, the 7 system error parameters can be used in steps 3-6 and updated iteratively to obtain the HR complex images. This eliminates the need for the optimization search process and reduces computational costs for future applications.

3. Simulations

To validate the effectiveness of our method, a simulation study is conducted in this section before the method is applied to the system error correction of an actual LED array microscope. The system parameters selected for the simulation are identical to the settings of the subsequent experimental system. The light source used in the simulation is a $15\times 15$ programmable LED array, which provides angle-varied illumination. The distance between adjacent LED elements in the array is $2mm$, and the nominal distance between the sample and the LED array is $50mm$. The objective lens used in the simulation has a magnification of $4\times$ and an NA of 0.13, while the detector has a pixel size of $6.5\mu m$.

Following the customary approach, two images, the cameraman and an aerial view image, are used as the amplitude and phase of the simulated HR complex sample image. We artificially introduce comprehensive system errors by setting the 7 parameters described in section 2.1, and generate 100 sets of LR intensity images, each containing 225 images. The 7 system error parameters are changed randomly: the variation range of the 3 position translation parameters $\left ( {\Delta x,\Delta y,\Delta z} \right )$ is set to $\left [ { - 2mm,2mm} \right ]$, and the variation range of the 3 position rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$ is set to $\left [ { - {5^ \circ },{5^ \circ }} \right ]$. The wavelength of the LED is fixed at $520nm$, and the variation range of the wavelength error $\Delta \lambda$ is set to $\left [ { -10nm,10nm} \right ]$. One group is chosen for thorough analysis and evaluation: its position translation parameters $\left ( {\Delta x,\Delta y,\Delta z} \right )$ are $1.5mm$, its position rotation parameters $\left ( {{\theta _x},{\theta _y},{\theta _z}} \right )$ are $5^ \circ$, and its wavelength error $\Delta \lambda$ is $-10nm$.
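For reference, the random error parameter sets could be drawn as in the short sketch below (NumPy assumed; the seed and array layout are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(0)                    # arbitrary seed for reproducibility
n_sets = 100
error_sets = np.column_stack([
    rng.uniform(-2.0, 2.0, size=(n_sets, 3)),     # dx, dy, dz in mm
    rng.uniform(-5.0, 5.0, size=(n_sets, 3)),     # theta_x, theta_y, theta_z in degrees
    rng.uniform(-10.0, 10.0, size=(n_sets, 1)),   # d_lambda in nm, about the 520 nm nominal
])                                                # shape: (100, 7)
```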

In accordance with step 1 of the algorithm flowchart in section 2.2, the image preprocessing operations are carried out. The total intensity of each image is calculated and normalized, as demonstrated in Fig. 4(a). The threshold, Thr, is determined by applying the Otsu method, yielding a value of 0.704. Images with a total intensity greater than Thr are classified as bright field images, and those with lower values as dark field images. The number of the last bright field image, $N$, is recorded as 77. After setting the LED positions corresponding to the bright field images to 1, as shown in Fig. 4(b), a circle finding operation is performed. The distance from the center of the circle to the center of the matrix yields the initial values of the system errors $\Delta {x_0}$ and $\Delta {y_0}$; specifically, $1.53mm$ for $\Delta {x_0}$ and $1.81mm$ for $\Delta {y_0}$. The values of $\Delta {x_0}$ and $\Delta {y_0}$ obtained through image preprocessing are close to the set value of $1.5mm$. To analyze the discrepancy between the values obtained from the preprocessing method and the actual system errors, the 100 sets of images mentioned above are processed, and the disparities are depicted in a box plot, as seen in Fig. 4(c). The $x$ and $y$ directions are indicated by blue and red, respectively. The results indicate that the difference between the initial system error values obtained by our preprocessing method and the actual error values is generally within $\pm 0.2mm$ and never exceeds $\pm 0.5mm$. As a result, during the calibration of the actual experimental system, the search ranges for these system error parameters can be limited to $\left [ {\Delta {x_0} - 0.5mm,\Delta {x_0} + 0.5mm} \right ]$ and $\left [ {\Delta {y_0} - 0.5mm,\Delta {y_0} + 0.5mm} \right ]$, respectively, to reduce the search range and accelerate convergence.

Fig. 4. Preprocessing process and result analysis. (a) Calculate the total intensity value of each image, normalize it, and perform adaptive threshold segmentation. (b) Set the position of the LEDs to 1 for values greater than the threshold and perform a circular search operation to determine $\Delta {x_0}$ and $\Delta {y_0}$ by the distance between the center of the circle and the center of the matrix. (c) Box plot of the difference between 100 sets of initial values obtained from preprocessing and the set error value.

Various algorithms are used to correct the system errors of the LED array microscopy after the image preprocessing step, and the reconstructed HR amplitude and phase are compared in Fig. 5. Fig. 5(a1) and (a2) show the ground truth amplitude and phase images, respectively. Fig. 5(b1) and (b2) display the reconstructed HR amplitude and phase images using the original FPM algorithm, which does not correct for system errors. The reconstructed results clearly do not match the ground truth, indicating that the original FPM algorithm produces significantly worse results. The Fourier domain stitching result is depicted in Fig. 5(b3), where a regular square shape can be observed. Fig. 5(c1) and (c2) present the amplitude and phase results produced by pcFPM. This algorithm uses SA to search for the center frequencies corresponding to the 9 central LED elements in the Fourier domain and subsequently calculates the 4 position errors $\left ( {\Delta x,\Delta y,\Delta z,{\theta _z}} \right )$ by nonlinear fitting. Fig. 5(d1) and (d2) illustrate the results obtained with the SBC algorithm, which deploys particle swarm optimization in the spatial domain to search for the 4 position error parameters directly. Although pcFPM and SBC use 4 position error parameters to correct the multi-parameter system errors of LED array microscopy, some artifacts are still present in the amplitude and phase images. Thus, these 4 parameters are insufficient to fully characterize the system errors, and the corrections are therefore limited. The Fourier domain stitching results of these two methods, shown in Fig. 5(c3) and (d3), compared with Fig. 5(b3), exhibit additional translation and rotation while largely maintaining a square shape. This limited mapping of the image in the Fourier domain can be attributed to the insufficient number of parameters employed in these methods. Moreover, these methods employ a cost function that considers only the data fidelity term of the first $N$ images. Our method introduces a phase smoothing criterion into the cost function and optimizes the 7 system error parameters $\left ( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right )$ in the spatial domain to minimize the cost function. To determine the effectiveness of adding the phase smoothing criterion, the results of searching the 7 proposed system error parameters with and without the criterion are compared, as depicted in Fig. 5(e1)-(e3) and (f1)-(f3). Without the phase smoothing criterion, both the reconstructed amplitude and phase still exhibit noticeable artifacts, as shown in Fig. 5(e1) and (e2). Incorporating the phase smoothing criterion significantly improves the quality of the reconstructed amplitude and phase, as demonstrated in Fig. 5(f1) and (f2). Notably, the criterion yields a smoother phase image, showing that the phase smoothness assumption embedded in the cost function is reflected during the optimization search. The stitching result in the Fourier domain is shown in Fig. 5(f3). It is evident that, after incorporating 7 system error parameters, the resulting stitched image is no longer a regular square but assumes a more arbitrary shape, which better reflects the frequency mapping typically encountered in LED array microscopy with 7-parameter system errors.

Fig. 5. The reconstruction results of different methods for simulated data. (a1)-(a2) The ground truth amplitude and phase images, respectively. (b1)-(f1) The HR amplitude images recovered using the original FPM, pcFPM, SBC, our method without phase smoothing criterion, and our method with the criterion, respectively. (b2)-(f2) The corresponding HR phase images to (b1)-(f1). (b3)-(f3) The stitched images in the Fourier domain using different methods.

To measure the deviation of the HR amplitude and phase reconstructed by the various methods from the ground truth, two indicators, the root mean square error (RMSE) and the structural similarity index (SSIM), are used. RMSE quantifies the difference between two images, with a smaller value implying a smaller difference, whereas SSIM assesses the similarity between two images in terms of luminance, contrast, and structure. The calculation formula for SSIM is as follows:

$$\textit{SSIM}\left( {X,Y} \right) = \frac{{\left( {2{\mu _X}{\mu _Y} + {C_1}} \right)\left( {2{\sigma _{XY}} + {C_2}} \right)}}{{\left( {\mu _X^2 + \mu _Y^2 + {C_1}} \right)\left( {\sigma _X^2 + \sigma _Y^2 + {C_2}} \right)}}$$
Where, $\mu _X$ and $\mu _Y$ represent the means of images $X$ and $Y$, respectively; $\sigma _X^2$ and $\sigma _Y^2$ represent their variances; $\sigma _{XY}$ represents the covariance between images $X$ and $Y$; and $C_1$, $C_2$ are constants. The SSIM value ranges from 0 to 1 and is higher when the two images share more similar structural information. The RMSE and SSIM are computed for each image in Fig. 5 and displayed under the corresponding image. The amplitude and phase results reconstructed by the original FPM have the lowest SSIM values and the highest RMSE values, indicating the worst performance. The SSIM and RMSE values of pcFPM, SBC, and the reconstruction without the criterion lie in between. Our method with the phase smoothing criterion attains the highest SSIM values and the lowest RMSE values for both the amplitude and phase results. These results indicate that the difference between the reconstructed amplitude and phase images and the ground truth is minimal, which aligns with the intuitive observations.
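For reference, the two indicators can be computed as in the sketch below, using scikit-image's `structural_similarity` for Eq. (13) and NumPy for the RMSE; the images are assumed to be real-valued arrays on a comparable scale.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(recon, truth):
    """Sketch of the evaluation used in Fig. 5: RMSE and SSIM between a
    reconstructed image and the ground truth."""
    rmse = np.sqrt(np.mean((recon - truth) ** 2))
    ssim_val = ssim(truth, recon, data_range=truth.max() - truth.min())
    return rmse, ssim_val
```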

To illustrate the convergence behavior with and without the phase smoothing criterion, the 100 groups of simulation images described above are used for verification. The convergence results for the 7 error parameters and $\Delta p$ are shown in Fig. 6, where $\Delta p$ represents the total pixel difference, summed over all sub-apertures, between the ideal and true positions of the sub-aperture center frequencies. It can be seen that the convergence results with the phase smoothing criterion are better than those without it. In addition, since the initial values of $\Delta x$ and $\Delta y$ are determined and their search ranges are narrowed during image preprocessing, these two parameters converge quickly, as shown in Fig. 6(a)-(b). At the same time, by comparing the error parameters after convergence, it can be seen that among the 7 error parameters, $\Delta x$, $\Delta y$ and $\theta _z$ have the greatest influence on the sub-aperture center frequencies. The converged values of the other 4 error parameters $\left ( {\Delta z,{\theta _x},{\theta _y},\Delta \lambda } \right )$ deviate slightly from the set values, which indicates that the sub-aperture center frequencies are less sensitive to these 4 parameters. However, if these 4 parameters are not considered, the sub-aperture center frequencies can only form a translated or rotated square, which cannot satisfy the frequency mapping of an LED array in an arbitrary pose. If these 4 parameters are taken into account, a satisfactory alignment of the sub-aperture center frequencies is achieved even if the fully accurate error parameters are not obtained.

Fig. 6. Convergence results with or without adding the phase smoothing criterion. (a)-(h) The RMSE of 7 error parameters $\left ( {\Delta x,\Delta y,\Delta z,{\theta _x},{\theta _y},{\theta _z},\Delta \lambda } \right )$ and $\Delta p$ during iterations.

To verify the universality and robustness of the proposed method, additive Gaussian white noise is added to the 100 groups of LR images described earlier to simulate the noise of an actual acquisition system. The standard deviation of the noise is increased from 0 to 0.08 with a step size of 0.01. The resulting 900 sets of images are then reconstructed using the aforementioned algorithms, and the reconstructions under the different noise conditions are evaluated using the average SSIM and RMSE values. Fig. 7 presents the comparison: Fig. 7(a) and (b) show the average SSIM values of the amplitude and phase reconstruction results, respectively, while Fig. 7(c) and (d) show the corresponding average RMSE values. Regarding the amplitude reconstruction results in Fig. 7(a) and (c), the proposed method clearly outperforms the other algorithms in terms of both SSIM and RMSE, and the algorithms that correct 7 system error parameters (our method with or without the phase smoothing criterion) are also superior to those that correct only 4 position error parameters (pcFPM, SBC). As the noise standard deviation increases, the amplitude reconstruction results vary significantly. Although the phase recovery results in Fig. 7(b) and (d) vary less with noise than the amplitude results, our method, with or without the criterion, outperforms the original FPM, pcFPM, and SBC. Comparing the results with and without the phase smoothing criterion, the phase result with the criterion is slightly better at low noise levels, whereas at high noise levels the result without the criterion is slightly superior; this deviation in the search results may be attributed to the influence of noise on the smoothness of the reconstructed phase. Nevertheless, based on both the amplitude and phase results, increasing the number of corrected system error parameters has a significant impact on the reconstruction quality, and adding the phase smoothing criterion to the cost function also has a positive impact. As for the computation time, the optimization of 7 parameters (49.37s) and of 4 parameters (pcFPM, 28.85s; SBC, 43.26s) are on the same level, and all can be executed within a minute on a desktop computer (Intel Core i5-12600KF, 3.7GHz). Therefore, our method improves the accuracy of the algorithm without significantly increasing its running time. Moreover, the system error parameters, once corrected, can be reused for subsequent processing, which further reduces the computational overhead.
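The noise test described above can be reproduced with a sketch such as the following, assuming LR intensity images normalized to [0, 1].

```python
import numpy as np

def add_gaussian_noise(images, sigma, seed=None):
    """Additive Gaussian white noise with standard deviation sigma
    (0 to 0.08 in steps of 0.01 in the paper), for images normalized to [0, 1]."""
    rng = np.random.default_rng(seed)
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, None)   # keep intensities non-negative
```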

Fig. 7. Performance of different algorithms under noise conditions. (a) Average SSIM values of 100 sets of reconstructed amplitudes. (b) Average SSIM values of 100 sets of reconstructed phases. (c) Average RMSE values of 100 sets of reconstructed amplitudes. (d) Average RMSE values of 100 sets of reconstructed phases.

4. Experiments

This section evaluates the effectiveness of the proposed algorithm by applying the aforementioned methods to experimental data obtained from an actual LED array microscope. The experimental system is a modified commercial inverted microscope whose light source has been replaced with a commercial LED array controlled by an Arduino Mega 2560 microcontroller. Each LED element is illuminated sequentially, and a scientific CMOS (sCMOS) camera (Dhyana 400BSI V2, Tucsen Photonics Co., Ltd, Fujian, China) with a pixel size of $6.5\mu m$ is synchronously controlled to record the sample images. The system parameters match those described in Section 3: the magnification of the objective lens is $4\times$, the numerical aperture is 0.13, the distance between adjacent LED elements in the commercial LED array is $2mm$, and the distance between the LED array and the sample is set at $50mm$. Green LEDs are used for illumination, but their central wavelength is unspecified. Before evaluating the effectiveness of the proposed method in correcting the 3D misalignment of the LED array, we align the LED array on the inverted microscope relatively accurately, but inevitable position deviations in 6 degrees of freedom remain in the system.

A USAF resolution target (Edmund Optics Inc., Barrington, NJ, USA) is then tested using the constructed LED array microscope. The camera captures 225 LR intensity images, which are used to reconstruct the target with the different algorithms. Fig. 8(a1) shows the first LR image, while Fig. 8(a2) and (a3) display the enlarged regions within the boxes in Fig. 8(a1) and (a2), respectively. Fig. 8(b1)-(b3) display the amplitude reconstruction results using the original FPM. The reconstructed image suffers from poor quality owing to the absence of system error correction, resulting in artifacts, as shown in Fig. 8(b2)-(b3). The results of the pcFPM and SBC reconstructions are presented in Fig. 8(c1)-(c3) and Fig. 8(d1)-(d3), respectively. Some artifacts can still be seen in the group 8 and group 9 line pairs in Fig. 8(c3) and (d3). We note that the 9-3 line pairs are clearly resolved in the original pcFPM and SBC literature; however, owing to the possible 6-degree-of-freedom position deviations in our system, some artifacts remain in the reconstruction results of these two methods. Both pcFPM and SBC consider only 4 positional error parameters, and when multiple system errors are present, the Fourier domain mapping ability of these methods is insufficient. Fig. 8(e1)-(e3) show the reconstruction results obtained after accounting for 7 system error parameters without the phase smoothing criterion, while Fig. 8(f1)-(f3) exhibit the reconstruction results with the phase smoothing criterion. In contrast to the original FPM, pcFPM, and SBC, the reconstructions considering the comprehensive system errors (compare Fig. 8(e3)-(f3) with Fig. 8(b3)-(d3)) show an improvement in the quality of the reconstructed images. Comparing the results with and without the phase smoothing criterion (Fig. 8(f3) and (e3)) shows that the criterion enhances the optimization algorithm’s ability to locate the system error parameters, resulting in higher-quality reconstructed images.

Fig. 8. Experimental results with the USAF resolution target. (a1)-(a3) The first LR image captured by the camera. (b1)-(b3) The amplitude reconstruction results using the original FPM method. (c1)-(c3) The amplitude reconstruction results using the pcFPM method. (d1)-(d3) The amplitude reconstruction results using the SBC method. (e1)-(e3) The amplitude reconstruction results using our method without the phase smoothing criterion. (f1)-(f3) The amplitude reconstruction results using our method with the phase smoothing criterion.

Moreover, a stained Corpus ventriculi sec. specimen (a section of the stomach body) is tested as a biological experiment. Fig. 9(a1) displays the first LR image captured by the camera, while Fig. 9(b1)-(f1) present the amplitudes reconstructed by the various methods, and Fig. 9(b2)-(f2) show the corresponding phase results. The amplitude results reconstructed by the different methods show little difference and all perform well on this biological sample, as shown in Fig. 9(b1)-(f1). However, a significant disparity can be noticed among the phase results reconstructed by the various methods, by comparing Fig. 9(b2)-(f2). Owing to its inadequate system error correction capability, the original FPM yields notable inaccuracies in the reconstructed phase (Fig. 9(b2)). The phase results reconstructed by pcFPM and SBC (Fig. 9(c2) and (d2)) exhibit limited contrast and relatively similar features. Furthermore, there are significant phase mutations (such as the black dots in the figure) in both methods’ phase reconstructions, which can be attributed to these methods correcting only four positional error parameters and thus producing a distorted mapping in the Fourier domain. The phase reconstructions without and with the phase smoothing criterion in Fig. 9(e1)-(e2) and (f1)-(f2) show higher contrast and better quality than the reconstructions obtained by the three previous methods in Fig. 9(b2)-(d2). Fig. 9(f2), obtained with the phase smoothing criterion in the cost function, has the highest contrast among all the reconstruction methods.

Fig. 9. Experimental results with Corpus ventriculi sec. (a1) The first LR image captured by the camera. (b1)-(b2) The amplitude and phase reconstruction results using the original FPM method. (c1)-(c2) The amplitude and phase reconstruction results using the pcFPM method. (d1)-(d2) The amplitude and phase reconstruction results using the SBC method. (e1)-(e2) The amplitude and phase reconstruction results using our method without the phase smoothing criterion. (f1)-(f2) The amplitude and phase reconstruction results using our method with the phase smoothing criterion.

5. Conclusion

This paper addresses the problem of comprehensive error parameters in LED array microscopy, including the center wavelength error and position errors in six degrees of freedom. A system error correction algorithm with a phase smoothing criterion is proposed, which uses the SA algorithm to search for the 7 error parameters in the spatial domain. At the beginning of the algorithm, the initial values of $\Delta x$ and $\Delta y$ are determined by a simple image preprocessing step, and their search ranges are also reduced. These two error parameters have a strong influence on the sub-aperture center frequencies, so this initialization is very helpful for the convergence of the algorithm. The simulation and experimental results show that, when comprehensive system errors exist in LED array microscopy, the reconstruction results obtained by our method are more accurate than those obtained by existing methods, without a significant increase in computation time. Comparing the reconstruction results with and without the phase smoothing criterion in the cost function supports its effectiveness. Additionally, the search only needs to be performed once, and the system error parameters found can be used for various downstream applications as long as the system does not change, significantly reducing the subsequent computation time. The method in this paper may be especially suitable for less experienced users who perform system correction at the software level; for skilled users who can apply more accurate mechanical alignment and measure the central wavelength with a spectrometer, our method can still serve as a complement.

Funding

National Natural Science Foundation of China (61875160).

Acknowledgments

This study was funded by the National Natural Science Foundation of China (Grant No. 61875160). The authors would like to thank the reviewers and the associate editor for their comments that contributed to meaningful improvements in this paper. The authors would also like to thank Hao Li and Prof. Jinfeng Peng for their help in revising this paper.

Disclosures

The authors declare no conflicts of interest.

Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.

References

1. J. N. Mait, G. W. Euliss, and R. A. Athale, “Computational imaging,” Adv. Opt. Photonics 10(2), 409–483 (2018). [CrossRef]  

2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

3. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

4. X. Chen, Y. Zhu, M. Sun, D. Li, Q. Mu, and L. Xuan, “Apodized coherent transfer function constraint for partially coherent Fourier ptychographic microscopy,” Opt. Express 27(10), 14099–14111 (2019). [CrossRef]  

5. P. Song, S. Jiang, H. Zhang, X. Huang, Y. Zhang, and G. Zheng, “Full-field Fourier ptychography (FFP): Spatially varying pupil modeling and its application for rapid field-dependent aberration metrology,” APL Photonics 4(5), 050802 (2019). [CrossRef]  

6. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier ptychography with an led array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

7. B. Lee, J. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

8. J. Sun, Q. Chen, J. Zhang, Y. Fan, and C. Zuo, “Single-shot quantitative phase microscopy based on color-multiplexed Fourier ptychography,” Opt. Lett. 43(14), 3365–3368 (2018). [CrossRef]  

9. J. Zhang, T. Xu, Z. Shen, Y. Qiao, and Y. Zhang, “Fourier ptychographic microscopy reconstruction with multiscale deep residual network,” Opt. Express 27(6), 8612–8625 (2019). [CrossRef]  

10. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

11. Z. Yang, L. Zhang, N. Lü, H. Wang, Z. Zhang, and L. Yuan, “Progress of three-dimensional, label-free quantitative imaging of refractive index in biological samples,” Chin. J. Laser 49, 0507201 (2022). [CrossRef]  

12. R. Horstmeyer, J. Chung, X. Ou, G. Zheng, and C. Yang, “Diffraction tomography with Fourier ptychography,” Optica 3(8), 827–835 (2016). [CrossRef]  

13. J. Li, N. Zhou, J. Sun, S. Zhou, Z. Bai, L. Lu, Q. Chen, and C. Zuo, “Transport of intensity diffraction tomography with non-interferometric synthetic aperture for three-dimensional label-free microscopy,” Light: Sci. Appl. 11(1), 154 (2022). [CrossRef]  

14. Y. Rivenson, K. de Haan, W. D. Wallace, and A. Ozcan, “Emerging advances to transform histopathology using virtual staining,” BME Front. 2020, 9647163 (2020). [CrossRef]  

15. D. Ryu, J. Kim, D. Lim, H.-S. Min, I. Y. Yoo, D. Cho, and Y. Park, “Label-free white blood cell classification using refractive index tomography and deep learning,” BME Front. 2021, 9893804 (2021). [CrossRef]  

16. L.-H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, G. Tang, M. Soltanolkotabi, and L. Waller, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

17. R. Eckert, Z. F. Phillips, and L. Waller, “Efficient illumination angle self-calibration in Fourier ptychography,” Appl. Opt. 57(19), 5434–5442 (2018). [CrossRef]  

18. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]  

19. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018). [CrossRef]  

20. Y. Zhu, M. Sun, P. Wu, Q. Mu, L. Xuan, D. Li, and B. Wang, “Space-based correction method for led array misalignment in Fourier ptychographic microscopy,” Opt. Commun. 514, 128163 (2022). [CrossRef]  

21. H. Lee, B. Chon, and H. Ahn, “Rapid misalignment correction method in reflective Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Lasers Eng. 138, 106418 (2021). [CrossRef]  

22. C. Zheng, S. Zhang, D. Yang, G. Zhou, Y. Hu, and Q. Hao, “Robust full-pose-parameter estimation for the led array in Fourier ptychographic microscopy,” Biomed. Opt. Express 13(8), 4468–4482 (2022). [CrossRef]  

23. Z. Bian, S. Dong, and G. Zheng, “Adaptive system correction for robust Fourier ptychographic imaging,” Opt. Express 21(26), 32400–32410 (2013). [CrossRef]  
