
Forward imaging neural network with correction of positional misalignment for Fourier ptychographic microscopy

Open Access

Abstract

Fourier ptychographic microscopy (FPM) is a computational imaging technology used to achieve high-resolution imaging with a wide field-of-view. Existing FPM methods suffer from positional misalignment in the system, which degrades the quality of the recovered high-resolution image. In this paper, a forward neural network method with correction of the positional misalignment (FNN-CP) is proposed based on TensorFlow, which consists of two models. Both the spectrum of the sample and four global position factors, which are introduced to describe the positions of the LED elements, are treated as learnable weights in layers of the first model. By minimizing the loss function in the training process, the positional error can be corrected based on the trained position factors. In order to fit the wavefront aberrations caused by optical components in the FPM system for better recovery results, the second model is designed, in which the spectrum of the sample and the coefficients of different Zernike modes are treated as learnable weights in layers. After the training process of the second model, the wavefront aberration can be fitted according to the coefficients of different Zernike modes, and the high-resolution complex image can be obtained based on the trained spectrum of the sample. Both simulation and experiment have been performed to verify the effectiveness of the proposed method. Compared with state-of-the-art FPM methods based on forward neural networks, FNN-CP achieves the best reconstruction results.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Fourier ptychographic microscopy (FPM) [1–4] is a recently developed computational imaging technology, which can achieve both high resolution and a wide field-of-view (FOV) by overcoming the limitation of the optical spatial-bandwidth-product (SBP). This method combines the theory of synthetic aperture [5–9] with phase recovery technology [10–15] to recover both intensity and phase information. In a typical FPM system, an LED array is used to generate angle-varied illumination and an objective lens with a low numerical aperture (NA) and a wide FOV is used for imaging. A series of low-resolution images is recorded by the camera as the LEDs are sequentially turned on, each single LED corresponding to a unique area of the sample’s spectrum, which is determined by the illumination angle and the NA of the objective lens. After iteratively synthesizing these low-resolution images in the spectral domain, a high-resolution complex amplitude image is recovered with a wide FOV. The final NA of the system is equal to the sum of the NA of the objective lens and the illumination NA.

Due to its ability to achieve both high resolution and a wide FOV, FPM has attracted much attention since it was proposed in 2013, and its applications have been extended considerably in a short time [16–18]. To reduce the data acquisition time or increase the reconstruction speed, several methods and illumination strategies have been proposed [19–23], and a few optimization methods, such as Wirtinger flow optimization [24] and the Gauss-Newton method [19], have been developed. In addition, there has also been research focused on correcting the system aberration [25], suppressing noise [26–29] and correcting the LED positional misalignment [30–33].

In addition to the studies mentioned above, a few methods that solve FPM reconstruction problems with deep neural networks (DNN) have been proposed recently, because FPM reconstruction and DNN training share the goal of minimizing a non-linear loss function [34–36]. However, the training process of a DNN requires a large amount of training data, and the trained network only works correctly for the same FPM system, which makes these methods time-consuming, unstable and non-universal. Recently, a new FPM algorithm based on a feed-forward neural network has been proposed [37]. It utilizes a forward pass to model the real imaging process of FPM and a backward pass to train the model, so the reconstruction remains an iterative algorithm. By modeling the real imaging process, the method does not suffer from the problems of the DNN-based methods. Inspired by this research, Sun et al. proposed an FPM reconstruction model termed forward imaging neural network with pupil recovery (FINN-P) to suppress the pupil aberrations of the system [38].

However, none of the FPM methods based on neural networks takes into account the influence of the positional misalignment of the LED array, which degrades the quality of the final high-resolution image. In this paper, we propose an FPM reconstruction method based on forward neural network models with correction of the positional misalignment, termed FNN-CP. The proposed method consists of two models built on an open-source machine-learning library, TensorFlow. The serial number of the LED element turned on and the corresponding low-resolution image are introduced as the inputs of both models. In the first model, we set both the Fourier spectrum of the sample and four global position factors, which determine the positional misalignment of the LED array, as the learnable weights in layers. The system parameters of the second model are determined by the factors trained in the first model. Both the Fourier spectrum of the sample and the coefficients of different Zernike modes are set as the learnable weights in layers. The output of both models is modeled as the loss function, which indicates the difference between the updated and original complex wave. Both simulation and experiment clearly show that FNN-CP can correct the global positional misalignment of the LED array and achieve better reconstruction results.

The structure of this paper is arranged as follows. In section 2, the degradation of the recovery quality caused by the positional misalignment of the LED array is analyzed. In section 3, the FNN-CP is elaborated. Simulation and experiment are presented to prove the effectiveness of our method in section 4. Finally, we conclude our work and describe the prospects for future work in section 5.

2. Positional misalignment in FPM

As shown in Fig. 1, a traditional FPM system consists of an LED array, a sample plane, an objective lens with a low NA, a tube lens and a CCD camera.


Fig. 1. The diagram of a traditional FPM system


In the imaging process, LED elements on the LED array are sequentially turned on to produce illumination with different angles. With the nth LED turned on, a low-resolution intensity image is captured by the camera, which can be described as

$${I_n}(x,y) = {|{(o(x,y) \cdot \exp (i({k_{xn}}x + {k_{yn}}y))\ast PSF(x,y)} |^2},$$
where ‘·’ denotes element-wise multiplication, ‘∗’ denotes convolution, o(x, y) represents the complex sample, exp(i(kxn x + kyn y)) represents the illumination plane wave with wave-vector (kxn, kyn), and PSF(x, y) represents the point spread function (PSF) of the objective. In the FPM system, the coherent transfer function (CTF) is the Fourier transform of the PSF, which is defined as
$$CTF({k_x},{k_y}) = {\cal F}\{ PSF(x,y)\} = \left\{ {\begin{array}{cc} 1,\textrm{ } & if\textrm{ }{k_x}^2 + {k_y}^2 \le {{(NA \cdot \frac{{2\pi }}{\lambda })}^2}\\ 0,\textrm{ } & otherwise\textrm{ } \end{array}} \right.,$$
where NA is the numerical aperture of the objective and λ is the center wavelength of the LED element.
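To make the forward model concrete, the following is a minimal NumPy sketch of Eqs. (1) and (2); the convolution with the PSF is carried out as a product with the CTF in the Fourier domain, and the grid size, pixel size, wavelength and NA supplied by the caller are illustrative assumptions rather than the paper's parameters.

```python
# A minimal sketch of the forward model in Eqs. (1)-(2); grid size, pixel
# size, wavelength and NA are illustrative assumptions, not the paper's values.
import numpy as np

def simulate_low_res(obj, kxn, kyn, na, wavelength, dx):
    """Simulate one low-resolution intensity image under a tilted plane wave.

    obj        : complex high-resolution sample o(x, y)
    kxn, kyn   : illumination wave-vector components (rad per unit length)
    na         : numerical aperture of the objective
    wavelength : center wavelength of the LED
    dx         : sample-plane pixel size
    """
    n = obj.shape[0]
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)

    # Tilted illumination: o(x, y) * exp(i(kxn x + kyn y)), Eq. (1)
    field = obj * np.exp(1j * (kxn * X + kyn * Y))

    # Circular coherent transfer function, Eq. (2)
    k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    KX, KY = np.meshgrid(k, k)
    ctf = (KX**2 + KY**2) <= (na * 2 * np.pi / wavelength) ** 2

    # Convolution with the PSF, evaluated as a product in the Fourier domain
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    low_res = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(spectrum * ctf)))
    return np.abs(low_res) ** 2
```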

To better analyze the influence of the positioning error in FPM, four global position factors, namely the shift factors Δx and Δy along the x- and y-axes, the rotation factor θ and the height factor Δh, are introduced to describe the positions of the LED sources. The position of the nth LED can be described as

$$\begin{array}{l} {x_n} = d[\cos (\theta ){n_1} + \sin (\theta ){n_2}] + \Delta x,\\ {y_n} = d[ - \sin (\theta ){n_1} + \cos (\theta ){n_2}] + \Delta y, \end{array}$$
where n1 and n2 indicate that the nth LED element is located at row n1, column n2 of the LED array, and d represents the distance between adjacent LEDs. The coordinate of the LED element at the center of the LED array is defined as (x0, y0), so that the incident wave-vector (kxn, kyn) for the nth LED element can be written as
$$\begin{array}{l} {k_{xn}} ={-} \frac{{2\pi }}{\lambda }\frac{{{x_0} - {x_n}}}{{\sqrt {{{({x_0} - {x_n})}^2} + {{({y_0} - {y_n})}^2} + {{(h + \Delta h)}^2}} }},\\ {k_{yn}} ={-} \frac{{2\pi }}{\lambda }\frac{{{y_0} - {y_n}}}{{\sqrt {{{({x_0} - {x_n})}^2} + {{({y_0} - {y_n})}^2} + {{(h + \Delta h)}^2}} }}, \end{array}$$
where h represents the distance between the LED array and the sample plane.
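Eqs. (3) and (4) translate directly into code; the small helper below is a sketch with illustrative argument names, where the central LED is assumed to sit at (x0, y0) = (0, 0) unless other coordinates are supplied.

```python
# A direct transcription of Eqs. (3)-(4); argument names are illustrative.
import numpy as np

def led_wave_vector(n1, n2, d, theta, dx_shift, dy_shift, dh, h, wavelength,
                    x0=0.0, y0=0.0):
    """Return the incident wave-vector (kxn, kyn) for the LED at row n1,
    column n2 of the array, given the four global position factors."""
    # Eq. (3): in-plane LED position with rotation and shift factors
    xn = d * (np.cos(theta) * n1 + np.sin(theta) * n2) + dx_shift
    yn = d * (-np.sin(theta) * n1 + np.cos(theta) * n2) + dy_shift

    # Eq. (4): incident wave-vector, including the height factor dh
    r = np.sqrt((x0 - xn) ** 2 + (y0 - yn) ** 2 + (h + dh) ** 2)
    kxn = -2 * np.pi / wavelength * (x0 - xn) / r
    kyn = -2 * np.pi / wavelength * (y0 - yn) / r
    return kxn, kyn
```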

As shown in Eq. (1) and Eq. (4), the incident wave-vector is determined by the position of the LED element, so the captured low-resolution image In changes when the LED position deviates from its nominal value. It is impossible to utilize these changed low-resolution images to recover satisfactory high-resolution results. In other words, the recovery quality will be degraded by the positional misalignment of the LED array (as can be seen in Fig. 2). Thus, correcting the positional misalignment of the LED array is very important for achieving satisfactory recovery results in the FPM process.


Fig. 2. A simulation performed based on a misaligned FPM system. (a) A simplified model of the positional misalignment of the LED array. (b) The enlargement of the central red windowed part in (a), where Δx, Δy and θ represent the global position factors. (c1) and (c2) are used as the intensity and phase images of the high-resolution sample. (d1) and (d2) are the reconstruction results recovered by the misaligned FPM system.


3. Forward neural network model with correction of positional misalignment

The forward imaging process in FPM is given in Eq. (1), which can also be written as

$${I_n}(x,y) = {|{{{\cal F}^{ - 1}}\{ O({k_x},{k_y}) \cdot {\cal F}\{ PSF(x,y) \cdot exp ( - i({k_{xn}}x + {k_{yn}}y))\} \} } |^2},$$
where ‘F’ and ‘F−1’ denote the Fourier transform and inverse Fourier transform, respectively, and O(kx, ky) represents the spectrum of the sample. We employ PSFn(x, y) to denote PSF(x, y) · exp(−i(kxn x + kyn y)), and then Eq. (5) can be rewritten as
$${I_n}(x,y) = {|{{{\cal F}^{ - 1}}\{ O({k_x},{k_y}) \cdot {\cal F}\{ PS{F_n}(x,y)\} \} } |^2}.$$

Based on the forward imaging process of FPM and the neural network framework, the method called FNN-CP is proposed to correct the positional misalignment of the FPM system, which consists of two models. The first model is mainly used to correct the positional misalignment of the LED array, where the Fourier spectrum of the sample and four global position factors are set as trainable weights. The second model is used to fit the wavefront aberration of the optical components in the FPM system, where the Fourier spectrum of the sample and the coefficients of different Zernike modes are set as trainable weights. For each model, gradient back-propagation is used to optimize the learnable weights in two layers by alternately training one and keeping the other fixed during the reconstruction process, so that no pre-training is required in the proposed method. The flow chart of the FNN-CP is shown in Fig. 3.


Fig. 3. The flow chart of the FNN-CP.


In the first model, the Fourier spectrum of the sample and four global position factors are the learnable weights in two trainable layers. The initial value of the sample’s spectrum is predefined as the low-resolution complex wave in the Fourier domain under illumination from the center LED source, while the four global position factors are all initialized to zero.

The location of the nth LED is used as one of the inputs of the first model, so that the incident wave-vector (kxn, kyn) can be calculated in the trainable layer based on Eqs. (3) and (4), from which PSFn(x, y) can be obtained. Since the learnable weights in the TensorFlow implementation should be real-valued, the complex spectrum of the sample O(kx, ky) and PSFn(x, y) are separated into real and imaginary parts with subscripts ‘r’ and ‘i’, which can be expressed as

$$O({k_x},{k_y})\textrm{ = }{O_r}({k_x},{k_y})\textrm{ + }i \cdot {O_i}({k_x},{k_y}),$$
$$PS{F_n}(x,y) = PS{F_{nr}}(x,y) + i \cdot PS{F_{ni}}(x,y).$$
Then, CTFn(kx, ky) is used to denote the Fourier transform of PSFn(x, y), which is also separated as
$$CT{F_n}({k_x},{k_y}) = CT{F_{nr}}({k_x},{k_y}) + i \cdot CT{F_{ni}}({k_x},{k_y}).$$
Based on Eq. (7) and Eq. (9), the estimated low-resolution complex wave En in the Fourier domain can be obtained as
$$\begin{array}{ll} {E_n}({k_x},{k_y}) &= O({k_x},{k_y}) \cdot CT{F_n}({k_x},{k_y})\\ &\textrm{ = [}{O_r}({k_x},{k_y}) \cdot CT{F_{nr}}({k_x},{k_y}) - {O_i}({k_x},{k_y}) \cdot CT{F_{ni}}({k_x},{k_y})]\\ & \quad + i \cdot [{O_r}({k_x},{k_y}) \cdot CT{F_{ni}}({k_x},{k_y}) + {O_i}({k_x},{k_y}) \cdot CT{F_{nr}}({k_x},{k_y})]. \end{array}$$
After that, the captured low-resolution intensity image In is used as an input layer to update the complex wave En and the output of the model can be written as
$$E_n^{update}({k_x},{k_y}) = {\cal F}\{ \sqrt {{I_n}(x,y)} \cdot \frac{{{{\cal F}^{ - 1}}\{ {E_n}({k_x},{k_y})\} }}{{|{{{\cal F}^{ - 1}}\{ {E_n}({k_x},{k_y})\} } |}}\} .$$
The loss function of the first model is defined as the L2-norm of the difference between the updated Enupdate and the original En, which yields satisfactory reconstruction results under a fixed learning rate and can be written as
$$loss\textrm{ = }\frac{1}{{pixel}}\sum\limits_{pixel} {{{|{E_n^{update}({k_x},{k_y}) - {E_n}({k_x},{k_y})} |}^2}} .$$
The purpose of the training process is to minimize the loss function. Stochastic gradient descent with Nesterov acceleration (NAG) is employed as the optimizer to train the Fourier spectrum of the sample, while Adaptive Moment Estimation (Adam) is employed to train the four global position factors. A single epoch of the training process is completed only after all the captured low-resolution intensity images have been used to train the network. Once the training process of the first model is completed, the four trained global position factors are acquired.
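As an illustration of how this first model can be expressed in TensorFlow 2, the following sketch treats the real and imaginary parts of the spectrum and the four position factors as tf.Variables and applies Eqs. (3), (4) and (10)–(12) in a single forward/backward step. The grid size, learning rates and helper arguments (the constant PSF, coordinate grids, LED pitch and height) are assumptions for illustration and this is not the authors' implementation.

```python
# A condensed sketch of the first model (not the authors' code).  The constant
# psf is assumed to be the complex64 inverse Fourier transform of the binary
# CTF; X, Y are spatial coordinate grids; d, h, wavelength are system constants.
import numpy as np
import tensorflow as tf

N = 256                                   # high-resolution grid size (assumed)
spec_r = tf.Variable(tf.zeros([N, N]))    # O_r, initialised from the centre-LED image
spec_i = tf.Variable(tf.zeros([N, N]))    # O_i
pos = tf.Variable(tf.zeros([4]))          # [dx, dy, theta, dh], initialised to zero

opt_spec = tf.keras.optimizers.SGD(learning_rate=1.0, momentum=0.9, nesterov=True)
opt_pos = tf.keras.optimizers.Adam(learning_rate=1e-2)   # learning rates assumed

def train_step(n1, n2, measured, psf, X, Y, d, h, wavelength):
    """One forward/backward pass for the LED at row n1, column n2."""
    with tf.GradientTape(persistent=True) as tape:
        dx, dy, theta, dh = pos[0], pos[1], pos[2], pos[3]
        xn = d * (tf.cos(theta) * n1 + tf.sin(theta) * n2) + dx       # Eq. (3)
        yn = d * (-tf.sin(theta) * n1 + tf.cos(theta) * n2) + dy
        r = tf.sqrt(xn ** 2 + yn ** 2 + (h + dh) ** 2)
        kxn = 2 * np.pi / wavelength * xn / r                         # Eq. (4) with x0 = y0 = 0
        kyn = 2 * np.pi / wavelength * yn / r

        # PSF_n = PSF * exp(-i(kxn x + kyn y)); its FFT gives CTF_n and keeps
        # the graph differentiable with respect to the position factors
        arg = kxn * X + kyn * Y
        ctf_n = tf.signal.fft2d(psf * tf.complex(tf.cos(arg), -tf.sin(arg)))

        O = tf.complex(spec_r, spec_i)
        En = O * ctf_n                                                # Eq. (10)
        field = tf.signal.ifft2d(En)
        amp = tf.cast(tf.sqrt(measured), tf.complex64)
        En_up = tf.signal.fft2d(
            amp * field / tf.cast(tf.abs(field) + 1e-9, tf.complex64))  # Eq. (11)
        loss = tf.reduce_mean(tf.abs(En_up - En) ** 2)                # Eq. (12)

    # Spectrum trained with NAG, position factors with Adam, as in the text
    opt_spec.apply_gradients(zip(tape.gradient(loss, [spec_r, spec_i]),
                                 [spec_r, spec_i]))
    opt_pos.apply_gradients([(tape.gradient(loss, pos), pos)])
    return loss
```

Note that the paper alternates between training one set of weights while keeping the other fixed; the sketch above applies both updates in each step purely for brevity.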

After that, in order to fit the wavefront aberration of the system and obtain better reconstruction results, the second model is designed.

Different Zernike modes are used to indicate the wavefront aberration, which can be expressed as

$$Z({k_x},{k_y}) = {z_0}{Z_0}({k_x},{k_y}) + {z_1}{Z_1}({k_x},{k_y}) + {z_2}{Z_2}({k_x},{k_y})\textrm{ + } \cdots ,$$
where z0, z1, z2, … are the coefficients of different Zernike modes and are set as learnable weights in the second model with initial values of zero. In our method, the first 28 Zernike modes are selected, which is sufficient to represent the aberration of the optical system. In addition, the spectrum of the sample is also set as learnable weights. Similar to the first model, the initial value of the sample’s spectrum is also predefined as the low-resolution complex wave in the Fourier domain when the sample is illuminated by the center LED element.

The location of the nth LED is used as an input layer to calculate PSFn(x, y) based on the four corrected global position factors. After imposing Z(kx, ky), the pupil function in FPM can be written as

$${P_n}({k_x},{k_y}) = {\cal F}\{ PS{F_n}({x,y} )\} \cdot Z({k_x},{k_y})\textrm{ = }{P_{nr}}({k_x},{k_y})\textrm{ + }i \cdot {P_{ni}}({k_x},{k_y}).$$
The estimated low-resolution complex wave En in the Fourier domain can then be obtained as
$$\begin{array}{ll} {E_n}({k_x},{k_y}) &= \textrm{[}{O_r}({k_x},{k_y}) \cdot {P_{nr}}({k_x},{k_y}) - {O_i}({k_x},{k_y}) \cdot {P_{ni}}({k_x},{k_y})]\\ & + i \cdot [{O_r}({k_x},{k_y}) \cdot {P_{ni}}({k_x},{k_y}) + {O_i}({k_x},{k_y}) \cdot {P_{nr}}({k_x},{k_y})]. \end{array}$$
Then, the captured low-resolution intensity image In is used as an input layer; the output and the loss function of the second model are the same as in Eq. (11) and Eq. (12).

NAG is employed to train the spectrum of the sample, while Nesterov-accelerated Adaptive Moment Estimation (Nadam) is employed to train the coefficients of different Zernike modes. Once the training process of the second model is completed, the high-resolution complex amplitude image of the sample can be obtained by applying an inverse Fourier transform to the trained spectrum of the sample O(kx, ky).
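A sketch of how the trainable aberration of the second model can be parameterised is given below. It assumes a hypothetical helper zernike_basis(n_modes, N) that samples the first n_modes Zernike polynomials on an N × N pupil grid, and it applies the Zernike sum Z(kx, ky) as a phase factor on CTFn, which is one common reading of Eq. (14) rather than the authors' exact implementation.

```python
# Trainable Zernike coefficients for the second model (a sketch; zernike_basis
# is a hypothetical helper, not defined in the paper).
import tensorflow as tf

N_MODES = 28                                  # first 28 Zernike modes
zern = tf.constant(zernike_basis(N_MODES, 256), dtype=tf.float32)  # [28, N, N]
coeffs = tf.Variable(tf.zeros([N_MODES]))     # z_0 ... z_27, initialised to zero
opt_coef = tf.keras.optimizers.Nadam(learning_rate=1e-3)           # rate assumed

def aberrated_pupil(ctf_n):
    """Combine CTF_n with the Zernike-parameterised aberration, Eqs. (13)-(14)."""
    # Eq. (13): Z(kx, ky) = sum_j z_j Z_j(kx, ky)
    Z = tf.reduce_sum(tf.reshape(coeffs, [-1, 1, 1]) * zern, axis=0)
    # Applied here as a phase term exp(iZ) on the pupil (an assumption of this
    # sketch); the rest of the pass follows Eqs. (15), (11) and (12)
    return ctf_n * tf.complex(tf.cos(Z), tf.sin(Z))
```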

4. Performance of the methods

4.1 Simulation evaluation

In order to verify the effectiveness of the method, an FPM system with positional error is simulated to produce a series of low-resolution images, which are used for training the proposed network. The parameters of the FPM system are given in Table 1. Four global position factors are introduced in the system, with Δx of 0.6 mm, Δy of 1.0 mm, θ of 5 degrees and Δh of 1 mm.


Table 1. Main parameters of the simulated FPM system.

As shown in Figs. 2(c1) and 2(c2), two high-resolution images are employed as the amplitude and phase images of the sample in our work, each of which contains 256 × 256 pixels. A set of 225 low-resolution intensity images is generated by the simulated FPM system with positional error and then used to train FNN-CP. For comparison, two state-of-the-art forward neural networks, termed Jiang’s method [37] and FINN-P [38], are trained with the same dataset. A traditional FPM reconstruction algorithm based on the Gauss-Newton method [3] and a conventional positional misalignment correction method for FPM called pcFPM [30] are also used to recover high-resolution results. To emphasize the effectiveness of the first model of our method in correcting positional error, the simulated dataset is also trained only on the second model of our method, denoted FNN-Z, with the four global position factors fixed to zero.

To ensure the convergence of all methods, the two models in our method are separately trained for 50 epochs. The traditional FPM algorithm stops after 50 iterations and pcFPM stops after 12 iterations. FNN-Z and FINN-P are both trained for 50 epochs, while Jiang’s method stops after 200 steps. The reconstruction results for all the methods can be seen in Fig. 4(a). With the imposed positional misalignment of the LED array, the quality of the results reconstructed by the traditional algorithm, Jiang’s method, FINN-P and FNN-Z is degraded, while pcFPM and FNN-CP achieve satisfactory reconstruction results because of their ability to correct the positional misalignment. Compared with pcFPM, FNN-CP is designed based on the forward imaging neural network and can correct the wavefront aberration through the second model, so the high-resolution intensity and phase images recovered by FNN-CP are more similar to the ground truth. Based on the four global position factors trained in FNN-CP, the distribution map containing the ideal, simulated actual and corrected positions can be seen in Fig. 4(b), which indicates that almost all the erroneous positions have been corrected. As shown in Fig. 4, FNN-CP can correct the positional misalignment in the FPM system and achieve satisfactory reconstruction results in the presence of positional error.


Fig. 4. The reconstruction results for the traditional FPM algorithm, pcFPM, Jiang’s method, FINN-P, FNN-Z and FNN-CP. (a) The recovered high-resolution intensity and phase images. (b) The distribution map containing ideal, actual and corrected positions for the FNN-CP.


In addition, the reason why FINN-P and FNN-Z fail to reconstruct satisfactory results is that the locations of the LED elements at the edge of the LED array change much more than those of the LED elements at the center when the rotation factor θ is not zero. It is impossible to use only a single Z(kx, ky) or pupil function to represent the wavefront aberrations under illumination from different LED elements.

To verify the stability of our proposed method, 50 simulations with different random positional misalignments are performed. All system parameters are kept unchanged except the four global position factors, which are varied to simulate different positional misalignments. The values of the random shift factors and the height factor are predefined between −1000 µm and 1000 µm, while the value of the rotation factor is predefined between −5° and 5°. In each simulation, the results of our proposed method are compared with the traditional method, pcFPM, Jiang’s method, FINN-P and FNN-Z. For comparison, the structural similarity index (SSIM), mean-square error (MSE) and normalized mean square error (NMSE) are used as evaluation indicators. SSIM is calculated between the reconstructed high-resolution amplitude image and the ground truth to measure the similarity of the two images, while MSE is calculated between the amplitude or phase images of the reconstructed results and the ground truth to evaluate the degree of their difference. NMSE is calculated between the reconstructed spectrum and the spectrum of the ground truth to indicate the quality of the recovery result, which can be written as [3]

$$\textrm{NMSE} = \frac{{{{\sum\nolimits_{pixel} {\left|{{O_{truth}}({k_x},{k_y}) - \frac{{\sum\nolimits_{pixel} {{O_{truth}}({k_x},{k_y}){O^\ast }({k_x},{k_y})} }}{{\sum\nolimits_{pixel} {{{|{O({k_x},{k_y})} |}^2}} }}O({k_x},{k_y})} \right|} }^2}}}{{\sum\nolimits_{pixel} {{{|{{O_{truth}}({k_x},{k_y})} |}^2}} }}.$$
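Eq. (16) includes a global complex scale factor so that the metric is insensitive to a constant gain between the reconstructed and ground-truth spectra; a direct NumPy transcription is given below.

```python
# NMSE between the ground-truth spectrum O_truth and the reconstruction O,
# both 2-D complex arrays, following Eq. (16).
import numpy as np

def nmse(O_truth, O):
    # Global complex scale factor that best matches O to O_truth
    gamma = np.sum(O_truth * np.conj(O)) / np.sum(np.abs(O) ** 2)
    return np.sum(np.abs(O_truth - gamma * O) ** 2) / np.sum(np.abs(O_truth) ** 2)
```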
The reconstruction results are shown in Table 2. Due to its effectiveness in correcting both the positional misalignment and wavefront aberration in FPM, FNN-CP can yield better reconstruction results than other methods do.


Table 2. The mean value of reconstruction results for different methods.

It is worth mentioning that all the simulations are performed on a personal computer (Intel Core i7-8700 CPU, 3.20 GHz). Because the structures of FNN-Z and FINN-P differ from that of Jiang’s method, with a single trainable layer in Jiang’s method and two trainable layers in FNN-Z and FINN-P, a single epoch of FNN-Z and FINN-P takes about 16 seconds, while that of Jiang’s method takes about 8 seconds. In addition, the purposes of the two models in FNN-CP are explained in Section 3. For different raw inputs produced by the same system, there is no need to retrain the first model in FNN-CP. In this case, the time cost of FNN-CP is almost the same as that of FNN-Z and FINN-P, which will not limit the application of FNN-CP.

4.2 Experimental evaluation

In order to verify the effectiveness of the FNN-CP experimentally, the methods are tested on real experimental datasets. A microscope with an Olympus objective (magnification 4×, NA = 0.13) is used as the imaging system, while a 15×15 programmable LED array with an incident wavelength of 532 nm is used to provide angle-varied illumination. The distance between adjacent LEDs is 4 mm, and the distance between the LED matrix and the sample plane is 85 mm. A scientific CCD camera with a pixel size of 2.4 µm is used. By sequentially turning on the LED elements in the array, 225 low-resolution images are captured. A USAF target is used as the sample, and the entire FOV of a captured low-resolution image is shown in Fig. 5(a). In order to verify the effectiveness of the FNN-CP, one region of interest (ROI) in the entire FOV is selected for reconstruction, which is shown in Fig. 5(b). Based on the captured low-resolution images, the traditional method, pcFPM, Jiang’s method, FINN-P, FNN-Z and FNN-CP are used to recover high-resolution images of the sample. The reconstructed amplitude images recovered by each method are shown in Figs. 5(c)–5(h). Because the positional misalignment of the system is slight, the quality of the reconstructed images recovered by FINN-P, FNN-Z and FNN-CP is close. However, from the three green windowed parts in Figs. 5(f)–5(h), we can clearly see that the lines in Fig. 5(h) are smoother. Moreover, to show this more clearly, the intensity contrast profile of 55 pixels across Group 8 Element 6 in Figs. 5(f)–5(h) is shown in Fig. 5(i). Compared with the methods mentioned above, FNN-CP can correct the positional misalignment of the system and achieves better recovery quality.


Fig. 5. Experimental results of a USAF high-resolution target. (a) The entire FOV of an obtained low-resolution image. (b) The enlargement of the ROI in the entire FOV. (c) – (h) The reconstruction results recovered by traditional algorithm, pcFPM, Jiang’s method, FINN-P, FNN-Z and FNN-CP. (i) Intensity contrast profile of Group 8 Element 6 is illustrated by red dotted line in (f), blue dashed line in (g), orange solid line in (h).


In addition, a tissue slice of the filiform papilla of a cat’s tongue is also used as a sample in our experiments. A 7×7 programmable LED array is used to provide angle-varied illumination. Similar to Fig. 5, the entire FOV of a captured low-resolution image is shown in Fig. 6(a). Figures 6(b) and 6(c) show the reconstructed amplitude and phase images of two ROIs in the entire FOV, respectively recovered by the traditional algorithm, pcFPM, Jiang’s method, FINN-P, FNN-Z and FNN-CP. The results show that the images recovered by FNN-CP express the most details, which confirms the effectiveness of our method.


Fig. 6. Experimental results of a tissue slice of a filiform papilla of a cat’s tongue. (a) The entire FOV of an obtained low-resolution image. (b)-(c) The enlargements of two ROI in the entire FOV and the intensity and phase high-resolution images recovered by traditional algorithm, pcFPM, Jiang’s method, FINN-P, FNN-Z and FNN-CP, respectively.


5. Conclusion

In this paper, a Fourier ptychographic forward neural network with correction of the positional misalignment (FNN-CP) is proposed based on the open-source machine-learning library TensorFlow. The effectiveness of the proposed method is verified by both simulation and experiment. The FNN-CP consists of two models, which are designed to correct the positional error and the wavefront aberration of the system, respectively. Four global position factors are introduced to describe the positions of the LED elements and are set as learnable weights in the first model. The coefficients of different Zernike modes are set as learnable weights in the second model to fit the wavefront aberrations of the optical components in the FPM system. By using the forward imaging neural network with a specially designed workflow, FNN-CP can correct the positional misalignment in the FPM system and achieve better reconstruction results than other FPM reconstruction methods based on forward neural networks.

There are some points we will focus on in future work. Firstly, for each LED element, unique Zernike polynomials can be introduced to describe the wavefront aberration of the system, so that the reconstructed results can be more accurate. Secondly, because we use only four global position factors to describe the positions of the LED elements, it is difficult to correct the positional error of a single LED element with our proposed method. To solve this problem, the position (xn, yn) of each LED element can be modeled as learnable weights in a layer and updated during the network training process.

Funding

National Natural Science Foundation of China (61327902).

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

2. X. Ou, R. Horstmeyer, C. Yang, and G. Zheng, “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett. 38(22), 4845–4848 (2013). [CrossRef]  

3. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

4. X. Ou, R. Horstmeyer, G. Zheng, and C. Yang, “High numerical aperture Fourier ptychography: principle, implementation and characterization,” Opt. Express 23(3), 3472–3491 (2015). [CrossRef]

5. M. Bashkansky, R. L. Lucke, E. Funk, L. Rickard, and J. Reintjes, “Two-dimensional synthetic aperture imaging in the optical domain,” Opt. Lett. 27(22), 1983–1985 (2002). [CrossRef]  

6. S. A. Alexandrov, T. R. Hillman, T. Gutzler, and D. D. Sampson, “Synthetic aperture Fourier holographic optical microscopy,” Phys. Rev. Lett. 97(16), 168102 (2006). [CrossRef]

7. J. A. Jensen, S. I. Nikolov, K. L. Gammelmark, and M. H. Pedersen, “Synthetic aperture ultrasound imaging,” Ultrasonics 44, e5–e15 (2006). [CrossRef]

8. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007). [CrossRef]  

9. J. Fan, J. Suo, J. Wu, H. Xie, Y. Shen, F. Chen, G. Wang, L. Cao, G. Jin, and Q. He, “Video-rate imaging of biological dynamics at centimeter scale and micrometre resolution,” Nat. Photonics 13(11), 809–816 (2019). [CrossRef]  

10. V. Elser, “Phase retrieval by iterated projections,” J. Opt. Soc. Am. A 20(1), 40–55 (2003). [CrossRef]  

11. J. M. Rodenburg and H. M. Faulkner, “A phase retrieval algorithm for shifting illumination,” Appl. Phys. Lett. 85(20), 4795–4797 (2004). [CrossRef]  

12. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009). [CrossRef]  

13. J. R. Fienup, “Phase retrieval algorithms: a personal tour,” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

14. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Proc. Mag. 32(3), 87–109 (2015). [CrossRef]  

15. E. J. Candes, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: Theory and algorithms,” IEEE Trans. Inf. Theory 61(4), 1985–2007 (2015). [CrossRef]  

16. S. Dong, K. Guo, P. Nanda, R. Shiradkar, and G. Zheng, “FPscope: a field-portable high-resolution microscope using a cellphone lens,” Biomed. Opt. Express 5(10), 3305–3310 (2014). [CrossRef]  

17. L. Tian, Z. Liu, L. -H. Yeh, M. Chen, J. Zhong, and L. Waller, “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica 2(10), 904–911 (2015). [CrossRef]  

18. J. Chung, X. Ou, R. P. Kulkarni, and C. Yang, “Counting white blood cells from a blood smear using Fourier ptychographic microscopy,” PLoS One 10(7), e0133489 (2015). [CrossRef]  

19. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

20. A. Zhou, N. Chen, H. Wang, and G. Situ, “Analysis of Fourier ptychographic microscopy with half of the captured images,” J. Opt. 20(9), 095701 (2018). [CrossRef]  

21. X. He, C. Liu, and J. Zhu, “Single-shot aperture-scanning Fourier ptychography,” Opt. Express 26(22), 28187–28196 (2018). [CrossRef]  

22. X. He, C. Liu, and J. Zhu, “Single-shot Fourier ptychography based on diffractive beam splitting,” Opt. Lett. 43(2), 214–217 (2018). [CrossRef]  

23. B. Lee, J. -y. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

24. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

25. C. Shen, A. C. S. Chan, J. Chung, D. E. Williams, A. Hajimiri, and C. Yang, “Computational aberration correction of VIS-NIR multispectral imaging microscopy based on Fourier ptychography,” Opt. Express 27(18), 24923–24937 (2019). [CrossRef]

26. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

27. A. Konijnenberg, W. Coene, and H. Urbach, “Model-independent noise-robust extension of ptychography,” Opt. Express 26(5), 5857–5874 (2018). [CrossRef]  

28. M. Li, L. Bian, X. Cao, and J. Zhang, “Noise-robust coded-illumination imaging with low computational complexity,” Opt. Express 27(10), 14610–14622 (2019). [CrossRef]  

29. X. Tao, J. Zhang, C. Tao, P. Sun, R. Wu, and Z. Zheng, “Tunable-illumination for laser Fourier ptychographic microscopy based on a background noise-reducing system,” Opt. Commun. 468, 125764 (2020). [CrossRef]

30. J. Sun, Q. Chen, Y. Zhang, and C. Zuo, “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express 7(4), 1336–1350 (2016). [CrossRef]  

31. J. Liu, Y. Li, W. Wang, H. Zhang, Y. Wang, J. Tan, and C. Liu, “Stable and robust frequency domain position compensation strategy for Fourier ptychographic microscopy,” Opt. Express 25(23), 28053–28067 (2017). [CrossRef]  

32. A. Zhou, W. Wang, N. Chen, E. Y. Lam, B. Lee, and G. Situ, “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express 26(18), 23661–23674 (2018). [CrossRef]  

33. J. Zhang, X. Tao, P. Sun, and Z. Zheng, “A positional misalignment correction method for Fourier ptychographic microscopy based on quasi-Newton method with a global optimization module,” Opt. Commun. 452, 296–305 (2019). [CrossRef]

34. A. Kappeler, S. Ghosh, J. Holloway, O. Cossairt, and A. Katsaggelos, “PtychNet: CNN based Fourier ptychography,” in 2017 IEEE International Conference on Image Processing (ICIP), (IEEE, 2017), pp. 1712–1716.

35. N. Thanh, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach to Fourier ptychographic microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

36. J. Zhang, T. Xu, Z. Shen, Y. Qiao, and Y. Zhang, “Fourier ptychographic microscopy reconstruction with multiscale deep residual network,” Opt. Express 27(6), 8612–8625 (2019). [CrossRef]  

37. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

38. M. Sun, X. Chen, Y. Zhu, D. Li, Q. Mu, and L. Xuan, “Neural network model combined with pupil recovery for Fourier ptychographic microscopy,” Opt. Express 27(17), 24161–24174 (2019). [CrossRef]  
