Optica Publishing Group

Undersampled Fourier ptychography for reflective-based long range imaging

Open Access

Abstract

Fourier ptychography (FP) is a promising technique for long-range, high-resolution imaging. In this work, we explore reconstruction from undersampled data for meter-scale reflective Fourier ptychographic imaging. To reconstruct from undersampled captures, we propose a novel cost function for FP phase retrieval and design a new optimization algorithm based on gradient descent. To verify the proposed method, we perform high-fidelity reconstructions of targets with a sampling parameter less than one. Compared to the state-of-the-art alternating-projection-based FP algorithm, the proposed one achieves the same performance with much less data.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

High-resolution imaging from a long distance is important yet challenging in many applications such as surveillance, remote sensing, and astronomy. Lenses with large apertures are usually advocated to provide sufficient resolution in long-range imaging. However, they can be expensive, heavy, and bulky [1]. Originating from microscopic imaging, Fourier ptychography (FP) [2–4] offers an alternative way to create a synthetic aperture, which expands the passband of the imaging system and surpasses the resolution limit of a small-aperture lens. FP reproduces the Fourier spectrum of the object in front of the imaging lens. By shifting the camera, FP captures low-resolution images corresponding to different positions within the Fourier spectrum of the target [5–8]. FP reconstruction algorithms can then computationally synthesize these measurements to expand the synthetic aperture [9–15].

In an optical imaging system, light diffraction from the imaging aperture produces an Airy pattern on the imaging plane, which limits the imaging resolution. Image sensors (e.g., CMOS and CCD) digitally sample the recorded scene on the imaging plane [17]. The pixel size also plays an important role in imaging quality and subsequent image signal processing [16]. We define the sampling parameter (Ω) as the ratio between the Airy spot's radius and the pixel size of the camera: $\Omega = {{1.22\lambda F\# } / w}$, where λ is the wavelength, $F\#$ is the F-number of the optical system, and w is the pixel pitch. The parameter reflects the number of samples across the Airy spot. Given a fixed imaging aperture, increasing the sampling parameter requires finer pixels for better imaging quality. However, a large sampling parameter with finer pixels brings an extra burden to image storage and computation. As a reference, the data amount with Ω=4 is 16 times that with Ω=1.
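The definition of Ω and its effect on data volume can be sketched as follows; the function name is ours, and the numbers are taken from the simulation section of this paper (500 nm light, f = 75 mm, D = 7.5 mm, so F# = 10):

```python
import numpy as np

def sampling_parameter(wavelength, f_number, pixel_pitch):
    """Omega = 1.22 * wavelength * F# / pixel_pitch (same length unit throughout)."""
    return 1.22 * wavelength * f_number / pixel_pitch

# F# = f / D for the paper's simulated system.
f_number = 75e-3 / 7.5e-3
omega_fine = sampling_parameter(500e-9, f_number, 1.5e-6)    # ~4 at 1.5 um pixels
omega_coarse = sampling_parameter(500e-9, f_number, 6.0e-6)  # ~1 at 6 um pixels

# For a fixed field of view the pixel count, and hence the data volume,
# scales as Omega^2: going from Omega=4 to Omega=1 saves a factor of 16.
data_ratio = (omega_fine / omega_coarse) ** 2
print(round(omega_fine, 2), round(omega_coarse, 2), round(data_ratio, 1))
```
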

The sampling parameter Ω in Fourier ptychographic microscopy (FPM) is usually around one or two [2,18,19] to achieve high resolution and a large field of view simultaneously. In particular, a subsampled FP scheme has been proposed for FPM by introducing a subsampled mask in the reconstruction process [20], achieving good reconstructions when Ω is below 1. Unfortunately, speckle is inevitable in reflective coherent imaging [21]. Since the phase of the speckle fluctuates significantly, the reconstruction performance of reflective FP degrades dramatically as Ω decreases [5]. Notice that the speckle size equals the Airy spot's size [21], so the sampling parameter Ω also represents the number of samples across the speckle size. Recall the Gerchberg-Saxton trick [22,23] in the alternating projection (AP) FP algorithm [9,14], where the amplitude estimate is replaced by the square root of the recorded intensity. Since the recorded intensity is the integral of the speckle within the pixel, a large sampling parameter is required to sample the speckle patterns. For example, SAVI employs Ω≈7 to obtain five-fold superresolution [5]. If the sampling parameter Ω can be reduced to one, nearly 98% of the data may be saved.

In this paper, we investigate FP reconstruction under a low sampling parameter Ω for macroscopic reflective Fourier ptychography. We introduce the detector's sampling after the optical imaging, and formulate the FP forward model for long-range imaging. Based on this forward model, we propose a novel cost function for Fourier ptychographic phase retrieval, and design a gradient-descent-based optimization algorithm. We examine the proposed method at various sampling parameters Ω in synthetic experiments and demonstrate high-fidelity reconstructions under low sampling parameters. Furthermore, we experimentally demonstrate that our algorithm achieves good reconstructions even when the data are undersampled at Ω=0.8. By reducing the sampling parameter Ω, we show that it is possible to increase the capacity of the FP imaging system by reducing the data amount.

2. Generalized forward model for long-range Fourier ptychography

The scheme of long-range reflective Fourier ptychography is shown in Fig. 1. A source emits a spatially coherent optical field to illuminate the object. The field reflected from the object then propagates toward the imaging system, producing a low-resolution image therein. The reflected object field is a two-dimensional optical field $O({x,y} )$. After a sufficiently long distance, the far-field propagation of the object field $O({x,y} )$ can be computed by a Fourier transform, and thus the Fourier spectrum of the object is reproduced in front of the camera's lens. Hence, the optical field $\Psi ({u,v} )$ arriving at the lens aperture is given by [24],

$$\Psi ({u,v} )= \frac{{\exp ({jkz} )}}{{j\lambda z}}\exp \left( {\frac{{jk({{u^2} + {v^2}} )}}{{2z}}} \right)F{\{{O({x,y} )} \}_{u = \frac{x}{{\lambda z}},v = \frac{y}{{\lambda z}}}}$$
where $k = {{2\pi } / \lambda }$ is the wave number, $\lambda$ is the wavelength, and z is the propagation distance from the object to the lens. $({x,y} )$ and $({u,v} )$ are the spatial coordinates in the object plane and the aperture plane, respectively. Besides the object's spectrum $\tilde{O}({u,v} )= F\{{O({x,y} )} \}$, there are an additional quadratic phase factor and a constant phase factor in $\Psi ({u,v} )$.


Fig. 1. Schematic diagram of macroscopic reflective Fourier ptychographic imaging. A laser produces a coherent optical field to illuminate the object. The reflected field from the object is then captured by camera. Inside the camera, a low-resolution replica of the object with speckle is reproduced in front of the sensor. Due to sampling process of the sensor, the captured image will be pixelated when the sampling parameter is low.


Since the lens aperture has a finite diameter, only the part of the light field $\Psi ({u,v} )$ inside the aperture passes through and is imaged onto the sensor. Let the pupil function of the aperture be $P({u,v} )$; the above filtering process is then expressed as $\Psi ({u,v} )P({u,v} )$. For an aberration-free optical system, the pupil function $P({u,v} )$ is given by,

$$P\left( {u,v} \right) = \left\{\begin{array}{ll}1 & \sqrt {u^2 + v^2} \le {D / 2}\\ 0 & {{\rm else}} \end{array} \right.$$
where D is the diameter of the aperture.

The filtered field picks up the quadratic phase of the lens, and propagates a distance d to the image plane. Using the Fresnel diffraction formula, the field $\Gamma ({x,y} )$ arriving at the sensor plane is given by,

$$\begin{aligned} \Gamma ({x,y} )&= \frac{{\exp ({jkd} )}}{{j\lambda d}}F\left\{ {\Psi ({u,v} )P({u,v} )\exp \left( {\frac{{jk({{u^2} + {v^2}} )}}{2}\left( {\frac{1}{d} - \frac{1}{f}} \right)} \right)} \right\}\\& ={-} \frac{{\exp ({jk({z + d} )} )}}{{{\lambda ^2}zd}}F\left\{ {\widetilde O({u,v} )P({u,v} )\exp \left( {\frac{{jk({{u^2} + {v^2}} )}}{2}\left( {\frac{1}{z} + \frac{1}{d} - \frac{1}{f}} \right)} \right)} \right\}\\& = CF\{{\widetilde O({u,v} )P({u,v} )} \}\end{aligned}$$
where $C ={-} {{\exp ({jk({z + d} )} )} / {({{\lambda^2}zd} )}}$ is a constant term and f is the focal length of the lens. The object distance z and the image distance d satisfy the lens law of geometrical optics, ${1 / z} + {1 / d} = {1 / f}$. For simplicity, we drop the coordinate scaling from the object plane to the image plane.

As the camera shifts within the object's spectrum, the pupil function $P({u,v} )$ extracts a different portion of the light field $\widetilde O({u,v})$. Thus, the optical field ${\Gamma _i}({x,y} )$ at the image plane is given by:

$${\Gamma _i}\left( {x,y} \right) = CF\left\{ {\widetilde O\left( {u,v} \right)P\left( {u - {u_i},v - {v_i}} \right)} \right\},\,\,i = 1, \cdots ,N$$
where $({{u_i},{v_i}} )$ is the offset determined by the camera position.

A CCD or CMOS image sensor is placed in the image plane to record the incoming optical field ${\Gamma _i}({x,y} )$. The image sensor has a three-fold impact on the optical field [17]: firstly, it measures only the intensity of the field and loses the phase information; secondly, each pixel integrates the intensity over its pixel area; thirdly, the grid of pixels samples the intensity and outputs discrete values. Mathematically, the discretely sampled image ${I_i}({p,q} )$ is given by,

$${I_i}({p,q} )= \left( {{{|{{\Gamma _i}({x,y} )} |}^2} \otimes W\left( {\frac{x}{w},\frac{y}{w}} \right)} \right) \times \sum\limits_{p,q} {\delta ({x - pw,y - qw} )}$$
where w is the pixel size. We assume the fill factor of the pixel is 100%, so that the pixel pitch is also w. ${\otimes}$ denotes convolution, and $\delta ({x - pw,y - qw} )$ is the delta function representing sampling at an interval w. $W$ is the rectangular window function, defined as
$$W\left( {\frac{x}{w},\frac{y}{w}} \right) = \begin{cases} {1 / {{w^2}}}, & |x| < {w / 2},\; |y| < {w / 2}\\ 0, & \textrm{otherwise} \end{cases}$$

Without the pixel sampling, the intensity ${|{{\Gamma _i}({x,y} )} |^2}$ is a fully resolved speckle image. During the reconstruction, the alternating projection (AP) algorithms [9,14,25] use the square root of the recorded intensity, $|{{\Gamma _i}({x,y} )} |$, to replace or update the estimated amplitude and thereby enforce the object constraint. When the pixel sampling is involved, the speckle image ${|{{\Gamma _i}({x,y} )} |^2}$ is sampled by the detector's pixels, and it becomes more pixelated as the pixel size increases. Notice that the speckle size [21] is given by $\chi = 1.22\lambda F\#$, which has the same form as the Airy spot of the imaging system. When the sampling parameter Ω is less than 1, the speckle image ${|{{\Gamma _i}({x,y} )} |^2}$ is undersampled into the image ${I_i}({p,q} )$. Since there are significant phase fluctuations in the speckle, there is a large difference between ${I_i}({p,q} )$ and ${|{{\Gamma _i}({x,y} )} |^2}$. Therefore, the reconstruction performance degrades as Ω decreases.
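The pixel integration and sampling of Eq. (5) can be sketched numerically. With a 100% fill factor and a sample spacing equal to the pixel pitch, convolving with the w-by-w window and then delta-sampling on the pixel grid reduces to averaging non-overlapping blocks. The toy speckle field below (random phase, square aperture) is our own construction, not the paper's simulation:

```python
import numpy as np

def pixelate(intensity, b):
    """Average non-overlapping b-by-b blocks of a 2-D intensity image (Eq. (5))."""
    h, w = intensity.shape
    h, w = h - h % b, w - w % b          # crop so the image tiles exactly
    blocks = intensity[:h, :w].reshape(h // b, b, w // b, b)
    return blocks.mean(axis=(1, 3))

# Toy speckle: unit-amplitude field with uniform random phase, low-pass filtered
# by a small square aperture in the Fourier domain.
rng = np.random.default_rng(0)
field = np.exp(1j * rng.uniform(-np.pi, np.pi, (256, 256)))
spectrum = np.fft.fftshift(np.fft.fft2(field))
mask = np.zeros((256, 256))
mask[96:160, 96:160] = 1                 # small aperture -> resolvable speckle
speckle = np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask))) ** 2

coarse = pixelate(speckle, 4)            # 4x-undersampled capture
print(speckle.shape, coarse.shape)       # (256, 256) (64, 64)
```

Note that the block averaging preserves the total (mean) energy but destroys the sub-pixel speckle structure, which is why the square root of the coarse capture no longer approximates the field amplitude when Ω < 1.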

3. Reconstruction with undersampled data

3.1 Vectorization Notation

For multivariate optimization problems, it is convenient to reformulate the problem using linear algebra, and thus the functions need to be vectorized. Each captured image ${I_i}({p,q} )$, having $m \times m$ pixels, is raster-scanned into a vector ${\mathbf{I}_{\mathbf i}}$ of size ${m^2} \times 1$. Since the estimated object $O({x,y} )$ has a higher space-bandwidth product than the raw images, it should have $n \times n$ pixels, where n > m. For convenience, we actually solve for the Fourier spectrum of the object, $\widetilde O({u,v} )$, which is vectorized into a vector $\widetilde {\mathbf{O}}$ of size ${n^2} \times 1$. Before multiplying by the pupil function, the Fourier spectrum $\widetilde {\mathbf{O}}$ is translated by an ${n^2} \times {n^2}$ shift matrix ${\mathbf{Q}_{\mathbf i}}$. The matrix ${\mathbf{Q}_{\mathbf i}}$ performs the spectrum shift of $\widetilde O({u - {u_i},v - {v_i}} )$, so there is only one nonzero ('one') entry in each row. The pupil function ${P}({u,v} )$ is vectorized into a vector $\mathbf{P}$ of size ${n^2} \times 1$. The diag(·) operator puts the entries of a vector onto the diagonal of a matrix. The 2D Fourier transform and inverse transform operators are $n^2 \times n^2$ matrices denoted $\mathbf{F}$ and ${\mathbf{F}^{{- 1}}}$. Then, Eq. (4) can be rewritten as,

$$\begin{aligned} {{\mathbf \Gamma }_i}&{= }{{\mathbf F}^{{- 1}}}\textrm{diag}({\mathbf P} ){{\mathbf Q}_i}\widetilde {\mathbf O}\\& = {{\mathbf A}_i}\widetilde {\mathbf O} \end{aligned}$$
where ${{\mathbf A}_i} = {{\mathbf F}^{{- 1}}}\textrm{diag}({\mathbf P} ){{\mathbf Q}_i}$ is a linear operator that forms the optical field at the sensor plane.

The final image ${I_i}({p,q} )$ as defined in Eq. (5) can be rewritten as,

$${{\mathbf I}_i} = {\mathbf {SW}}{|{{{\mathbf A}_i}\widetilde {\mathbf O}} |^2}$$
where ${\mathbf W}$ is the convolution matrix with an all-one kernel that performs the pixel integration, and $\mathbf{S}$ is the downsampling matrix, which maps an ${n^2} \times 1$ vector to an ${m^2} \times 1$ vector by selecting values from the original vector; its entries are either 1 or 0 and each row contains at most one nonzero element. If we define the sensor matrix ${\mathbf D} = {\mathbf {SW}}$, the vectorized forward model is given by,
$${{\mathbf I}_i} = {\mathbf D}{|{{{\mathbf A}_i}\widetilde {\mathbf O}} |^2}$$
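The vectorized forward model ${\mathbf I}_i = {\mathbf D}|{\mathbf A}_i\widetilde{\mathbf O}|^2$ can be sketched in operator form, replacing the explicit $n^2 \times n^2$ matrices with FFTs: `np.roll` plays the role of ${\mathbf Q}_i$, elementwise multiplication by the pupil implements $\textrm{diag}({\mathbf P})$, and block averaging implements ${\mathbf D} = {\mathbf {SW}}$. The names, the circular pupil radius, and the binning factor are our own choices:

```python
import numpy as np

def forward(O_tilde, pupil, shift, b):
    """I_i = D |A_i O~|^2: O_tilde is the n x n object spectrum,
    pupil an n x n mask, shift = (du, dv) the camera offset in spectrum pixels,
    b the pixel-binning factor of the sensor operator D."""
    shifted = np.roll(O_tilde, shift, axis=(0, 1))           # Q_i
    gamma = np.fft.ifft2(np.fft.ifftshift(shifted * pupil))  # F^{-1} diag(P)
    intensity = np.abs(gamma) ** 2                           # |.|^2
    n = intensity.shape[0]
    return intensity.reshape(n // b, b, n // b, b).mean(axis=(1, 3))  # D = SW

n, b = 128, 2
rng = np.random.default_rng(1)
O_tilde = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
u = np.arange(n) - n // 2
pupil = ((u[:, None] ** 2 + u[None, :] ** 2) <= 20 ** 2).astype(float)
I_i = forward(O_tilde, pupil, (5, -3), b)
print(I_i.shape)   # (64, 64)
```
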

3.2 Fourier ptychographic reconstruction with undersampled data

As noted in Ref. [9], defining the cost function for FP phase retrieval is important and tricky. It is straightforward to define an intensity-based cost function using the least-squares error between the measurements ${\mathbf{I_{i}}}$ and the reprojected estimate $\mathbf{D}{|{\mathbf{{A}_{i}}\widetilde {\mathbf{O}}} |^2}$, which can be minimized via the Wirtinger flow algorithm [12] to recover the object $\mathbf{O}$. Unfortunately, intensity-based algorithms suffer from phase-amplitude leakage and phase blurring when the data are imperfect [9]. Although an amplitude-based cost function is preferable, it is not straightforward to define one directly from Eq. (9).

In this paper, we propose a new cost function, which is given by,

$$\mathop {\min }\limits_{\widetilde {\mathbf O}} L\left( {\widetilde {\mathbf O}} \right) = \mathop {\min }\limits_{\widetilde {\mathbf O}} \sum\limits_{i} {\frac{1}{2}\left\| {\frac{{{{\mathbf D}^{\mathbf T}}\left( {{{\mathbf I}_{\mathbf i}}{ - \mathbf D}{{\left| {{{\mathbf A}_{\mathbf i}}\widetilde {\mathbf O}} \right|}^2}} \right)}}{{\left| {{{\mathbf A}_{\mathbf i}}\widetilde {\mathbf O}} \right|}}} \right\|_2^2}$$

Here, we add two elements to the least-squares error $\left\|{{\mathbf{I}_{\mathbf i}}{- \mathbf D}{{|{{\mathbf{A}_{\mathbf i}}\widetilde {\mathbf O}} |}^2}} \right\|_2^2$. Firstly, we introduce an up-sampling operator ${\mathbf{D}^{T}}$, which measures the error in the original domain instead of the down-sampled domain. Secondly, we divide the error by $|{\mathbf{{A}_{i}}\widetilde {\mathbf O}} |$ to transform the cost function from the intensity-based type to the amplitude-based type.

The minimization of Eq. (10) can be performed with the sequential gradient descent method, where each measured image is treated as a separate optimization sub-problem. The cost function for each sub-problem is one component of Eq. (10) and is written as

$$\mathop {\min }\limits_{\widetilde {\mathbf O}} {L_{i}}\left( {\widetilde {\mathbf O}} \right) = \mathop {\min }\limits_{\widetilde {\mathbf O}} \frac{1}{2}\left\| {\frac{{{{\mathbf D}^{\mathbf T}}\left( {{{\mathbf I}_{\mathbf i}}{- \mathbf D}{{\left| {{{\mathbf A}_{\mathbf i}}\widetilde {\mathbf O}} \right|}^2}} \right)}}{{\left| {{{\mathbf A}_{\mathbf i}}\widetilde {\mathbf O}} \right|}}} \right\|_2^2$$

We rewrite the loss function ${L_{i}}$ as

$${L_{\mathbf{i}}}{\text{ = }}{{\mathbf{g}}_{\mathbf{i}}}^{\mathbf{T}}{{\mathbf{g}}_{\mathbf{i}}}\,\;\;\;\;\textrm {where}\,{{\mathbf{g}}_{\mathbf{i}}} = \frac{{{{\mathbf{D}}^{\mathbf{T}}}{{\mathbf{I}}_{\mathbf{i}}}{\mathbf{ - }}{{\mathbf{D}}^{\mathbf{T}}}{\mathbf{D}}{{\left| {{{\mathbf{A}}_{\mathbf{i}}}{\mathbf{O}}} \right|}^2}}}{{\left| {{{\mathbf{A}}_{\mathbf{i}}}{\mathbf{O}}} \right|}}$$
where the superscript $\mathbf{T}$ denotes the matrix transpose. Based on the chain rule, the derivative of ${L_{i}}$ with respect to $\mathbf{O}$ can be expressed as,
$$\frac{{\partial {L_{\mathbf i}}}}{{\partial {\mathbf O}}} = \frac{{\partial {{\mathbf g}_{\mathbf i}}}}{{\partial {\mathbf O}}}{{\mathbf g}_{\mathbf i}}\textrm{ = } - \frac{{\partial |{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}{{\partial {\mathbf O}}}\left( {\textrm{diag}\left( {\frac{{{{\mathbf D}^{\mathbf T}}{{\mathbf I}_{\mathbf i}}}}{{{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}^2}}}} \right)\textrm{ + }{{\mathbf D}^{\mathbf T}}{\mathbf D}} \right){{\mathbf g}_{\mathbf i}}.$$

Applying the same derivation as in Ref. [9], we can calculate the derivative of $|{\mathbf{{A}_{i}}{\mathbf O}} |$ with respect to $\mathbf{O}$ as,

$$\frac{{\partial |{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}{{\partial {\mathbf O}}} = \frac{{\partial ({{{({{{\mathbf A}_{\mathbf i}}{\mathbf O}} )}^\ast }{{\mathbf A}_{\mathbf i}}{\mathbf O}} )}}{{\partial {\mathbf O}}}\frac{{\partial |{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}{{\partial {{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}^2}}} = {\mathbf A}_{\mathbf i}^{\mathbf H}\textrm{diag}\left( {\frac{{({{{\mathbf A}_{\mathbf i}}{\mathbf O}} )}}{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}} \right). $$
where the superscript $\mathbf{H}$ denotes the conjugate (Hermitian) transpose.

Substituting Eq. (14) into Eq. (13), the gradient of the cost function ${L_{i}}$ with respect to the object $\widetilde {\mathbf O}$ can be expressed as

$$\frac{{\partial {L_{\mathbf i}}}}{{\partial {\mathbf O}}} ={-} {{\mathbf A}_{\mathbf i}}^{\mathbf H}\textrm{diag}\left( {\frac{{({{{\mathbf A}_{\mathbf i}}{\mathbf O}} )}}{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}} \right)\left( {\textrm{diag}\left( {\frac{{{{\mathbf D}^{\mathbf T}}{{\mathbf I}_{\mathbf i}}}}{{{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}^2}}}} \right) + {{\mathbf D}^{\mathbf T}}{\mathbf D}} \right)\frac{{{{\mathbf D}^{\mathbf T}}{{\mathbf I}_{\mathbf i}}{- }{{\mathbf D}^{\mathbf T}}{\mathbf D}{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}^2}}}{{|{{{\mathbf A}_{\mathbf i}}{\mathbf O}} |}}$$

Then, the object can be updated by the gradient descent scheme with the descent direction given by Eq. (15),

$${\widetilde {\mathbf O}_{({i + 1,iter} )}} = {\widetilde {\mathbf O}_{({i,iter} )}} + \beta {\left. {\frac{{\partial {L_i}}}{{\partial \widetilde {\mathbf O}}}} \right|_{\widetilde {\mathbf O} = {{\widetilde {\mathbf O}}_{({i,iter} )}}}}$$
where β is the step size. When Ω is above 1, the step size β can be set to $1/n^2$. For the undersampled imaging case with Ω≤1, the step size β can be scaled down by a factor of 1/2 after a few iterations.

The proposed algorithm can be implemented with the following steps.

[Algorithm listing (inline figure oe-31-8-13414-i001)]
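The per-image update of Eqs. (12)-(16) can be sketched in operator form. Here `D` is block summation (pixel integration plus sampling), its adjoint `Dt` replicates each low-resolution pixel over its b × b footprint, and unitary FFTs stand in for $\mathbf F$ and ${\mathbf F}^{-1}$; the function names, the `eps` regularizer guarding divisions by the amplitude, and the absorption of constant factors into β are our own choices:

```python
import numpy as np

fft2 = lambda x: np.fft.fft2(x, norm="ortho")
ifft2 = lambda x: np.fft.ifft2(x, norm="ortho")

def D(x, b):
    """Pixel integration + sampling: sum non-overlapping b x b blocks."""
    n = x.shape[0]
    return x.reshape(n // b, b, n // b, b).sum(axis=(1, 3))

def Dt(y, b):
    """Adjoint of D: replicate each low-res pixel over its b x b footprint."""
    return np.kron(y, np.ones((b, b)))

def fp_update(O_tilde, I_i, pupil, shift, b, beta, eps=1e-6):
    """One gradient step of Eq. (16) for a single captured image I_i."""
    A_O = ifft2(np.roll(O_tilde, shift, axis=(0, 1)) * pupil)   # A_i O~
    amp = np.abs(A_O) + eps
    g = (Dt(I_i, b) - Dt(D(amp ** 2, b), b)) / amp              # Eq. (12)
    # middle factor of Eq. (15): (diag(D^T I_i / |A_i O|^2) + D^T D) g
    inner = (Dt(I_i, b) / amp ** 2) * g + Dt(D(g, b), b)
    grad_field = -(A_O / amp) * inner
    # A_i^H = Q_i^H diag(P)* F for unitary transforms
    grad = np.roll(fft2(grad_field) * np.conj(pupil), [-s for s in shift], axis=(0, 1))
    return O_tilde + beta * grad                                # Eq. (16)
```

A full reconstruction would sweep `fp_update` over all captures `(I_i, shift_i)` repeatedly, halving β after a few sweeps when Ω ≤ 1, as described above.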

4. Numerical simulation results

In the simulations, the "cameraman" image with 256 × 256 pixels, shown in Fig. 2(a), is used as the amplitude of the object, and its physical size is specified as 9 mm × 9 mm. The phase of the object is assumed to be random, ranging from –π to π, to simulate the rough surface of the object. The working wavelength is 500 nm, and the imaging distance is 1.5 m. The focal length of the imaging system is 75 mm, and the aperture diameter is 7.5 mm. Thus, the speckle size produced by the imaging system is about 6 um. In the following two simulations, the synthetic aperture is set to 40 mm, which is 5.3 times the original 7.5 mm aperture.


Fig. 2. Original image and simulated reference image.


The peak signal-to-noise ratio (PSNR [26]), which measures the reconstruction fidelity in terms of mean square error, and the structural similarity index measure (SSIM [27]), a perceptual image quality metric, are used to quantitatively evaluate the reconstruction quality. Note that both PSNR and SSIM require a reference image for comparison. The original "cameraman" image cannot serve as the reference image, since speckle is always present in a real-world laser image. Therefore, we extract the portion of the object's spectrum that is covered by the Fourier ptychographic spectrum scanning, and the inverse Fourier transform of that portion is used as the reference image. Figure 2(b) shows the reference image simulated with a 40 mm aperture, the same as the Fourier ptychographic synthetic aperture.
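The reference-image construction described above (keep only the spectrum covered by the synthetic aperture, then inverse transform) can be sketched as follows; the function name, the circular-mask radius in spectrum pixels, and the random stand-in for the "cameraman" amplitude are our own assumptions:

```python
import numpy as np

def reference_image(obj_field, radius):
    """Low-pass the object's spectrum with a circular synthetic-aperture mask
    of the given radius (in spectrum pixels), then inverse transform."""
    n = obj_field.shape[0]
    u = np.arange(n) - n // 2
    mask = (u[:, None] ** 2 + u[None, :] ** 2) <= radius ** 2
    spectrum = np.fft.fftshift(np.fft.fft2(obj_field))
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Speckled object: amplitude image times a uniform random phase in [-pi, pi),
# mimicking the rough-surface assumption of the simulation.
rng = np.random.default_rng(3)
amp = rng.random((256, 256))            # stand-in for the "cameraman" amplitude
obj = amp * np.exp(1j * rng.uniform(-np.pi, np.pi, amp.shape))
ref = reference_image(obj, radius=80)
print(ref.shape)                        # (256, 256)
```

The resulting `ref` retains the speckle statistics of the band-limited field, so PSNR/SSIM comparisons against it penalize only reconstruction error, not speckle itself.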

In the first simulation, we analyze the reconstruction performance when Ω is larger than 1. Since the Airy spot size of the imaging system is 6 um, we vary the pixel size w from 1.5 um to 6 um, corresponding to Ω values from 4 to 1. For each sampling parameter Ω, 15 × 15 grid-sampled images with 70% overlap are simulated. Samples of the low-resolution images are shown in Fig. 3(c). We run the AP (alternating projection) algorithm [14] and the proposed algorithm on each simulated dataset. As shown in Fig. 3(a) and Fig. 3(b), the reconstruction quality of the AP algorithm degrades significantly as Ω decreases. In contrast, our algorithm achieves much improved reconstructions, with PSNR above 27 dB and SSIM above 0.75 for all Ω values. Reconstruction examples of the AP algorithm and our algorithm are shown in Fig. 3(d) and Fig. 3(e). Both the AP algorithm and ours reconstruct well when Ω is 4 (w = 1.5 um). However, when Ω is 1 (w = 6 um), the AP algorithm fails to produce satisfactory results, while our algorithm still shows good reconstruction, with little visual difference compared to the result at Ω=4. Notice that reducing the sampling parameter decreases the number of pixels in the recorded low-resolution images: the data amount at Ω=1 is one sixteenth of that at Ω=4. Therefore, the proposed algorithm is beneficial in terms of both reconstruction quality and data amount.


Fig. 3. The reconstruction performance of the AP algorithm and the proposed algorithm when the sampling parameter Ω is above 1. The PSNR (a) and SSIM (b) of the recovered images are plotted at various sampling parameters. The low-resolution images and the reconstructed images of the AP algorithm and the proposed algorithm are shown in (c), (d) and (e), where the left column is for Ω=4 and the right column is for Ω=1.


In the second simulation, we test our algorithm using undersampled data. We increase the pixel size to 8 um and 10 um, corresponding to Ω=0.75 and Ω=0.6. As shown in Fig. 4(a), the recorded images are severely undersampled. We increase the overlap ratio to 85%, and simulate two undersampled datasets, one for each sampling parameter. Each dataset contains 29 × 29 images. Besides the AP algorithm, we reproduce the subsampled alternating-projection (SAP) FP algorithm [20], which was used for subsampled reconstruction in FPM. SAP generates a subsampled mask by dividing each original pixel into 4 sub-pixels, and only 1 of the 4 sub-pixels is updated by the measurement during the update process. We ran the AP algorithm, the SAP algorithm, and our algorithm on the undersampled datasets. As shown in Fig. 4(b), the image reconstructed by the AP algorithm shows strong artifacts and poor resolution. For the SAP algorithm, the subsampled mask fails to work under strong speckle, introducing severe lattice-like artifacts as shown in Fig. 4(c). The results of our algorithm, shown in Fig. 4(d), present improved details and resolution, with little visual difference compared to the reconstructions at high sampling parameters.


Fig. 4. Reconstruction results when the sampling parameter is below 1. The low-resolution image and the reconstructed images of the AP algorithm, the SAP algorithm and the proposed algorithm are shown in (a), (b), (c) and (d), where the left column is for Ω=0.75 and the right column is for Ω=0.6. The PSNR and SSIM of images recovered by the AP algorithm and the proposed method are plotted over iterations.


5. Experimental results

Figure 5 shows the experimental setup for macroscopic Fourier ptychographic imaging. A 532 nm laser passes through a spatial filter and a focusing lens to produce a converging spherical wave. As suggested by SAVI [5], a convex lens (diameter 2 in., focal length 300 mm) images the pinhole of the spatial filter onto the lens aperture, which helps realize the Fraunhofer diffraction approximation over a short optical path. Two polarizers filter out the non-interfering light; one is placed after the pinhole, and the other is inserted before the camera. The camera is mounted on an XY translation stage (Zolix, PSA050-11-X) to scan the Fourier spectrum of the object.


Fig. 5. Macroscopic reflective Fourier ptychographic imaging setup.


In the first experiment, we demonstrate our algorithm on a USAF resolution target. The standoff distance is 1 m. We use an image sensor (IMX178) with 2048 × 3056 pixels and 2.4 um pixel pitch for recording, and a lens with 75 mm focal length and 5 mm diameter for imaging. The speckle size in the raw image is 9.6 um, so the sampling parameter of the raw image dataset is four. Adjacent camera positions are separated by 0.625 mm to obtain an 87.5% overlap ratio. A grid of 25 × 25 raw images is captured, producing a synthetic aperture of 20 mm. Using the raw images, we mimic the pixel integration by performing 4 × 4 and 5 × 5 pixel binning, obtaining new images with pixel sizes of 9.6 um and 12 um. In this way, we generate two new datasets, whose sampling parameters Ω are 1 and 0.8, respectively. Samples of the raw image and the images with Ω=1 and Ω=0.8 are shown in Fig. 6(a). We run the AP algorithm, the SAP algorithm, and the proposed algorithm on the three datasets. The reconstructed images are shown in Fig. 6(b, c, d), and detailed comparisons of pattern (1,3), pattern (2,1), and pattern (3,3) are shown in Fig. 7. The AP algorithm achieves good performance on the raw dataset with Ω=4, but at Ω=1 the reconstructed image becomes blurry. When the sampling parameter Ω is reduced to 0.8, the reconstructed image presents strong pixelated artifacts, and it is difficult to discriminate bar pattern (2,1). For the SAP algorithm, the reconstruction quality matches the AP algorithm when Ω=4; when the data are subsampled, or even at Ω=1, lattice-like artifacts appear, so the reconstruction quality is worse than that of the AP algorithm. Again, the bars in pattern (2,1) cannot be resolved by the SAP algorithm. The proposed algorithm demonstrates excellent performance on all three datasets. As shown in the right column of Fig. 7, there is little visual degradation between the reconstructed image using undersampled data (Ω=0.8) and that using oversampled data (Ω=4). Bar pattern (3,3) can still be resolved by our algorithm; therefore, our algorithm achieves 2.8 times higher resolution than the comparison algorithms. Notice that the data amount at Ω=0.8 is just 1/25th of that at Ω=4. The proposed algorithm is thus beneficial in terms of both reconstruction quality and data amount.
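The binning used to generate the lower-Ω datasets can be sketched as follows: averaging a b × b neighborhood of 2.4 um pixels acts like a single larger pixel (9.6 um for b = 4, 12 um for b = 5), dropping Ω from 4 to 1 and 0.8. The random array is a stand-in for a real capture:

```python
import numpy as np

def bin_pixels(img, b):
    """Average non-overlapping b x b blocks, cropping any ragged edge."""
    h, w = img.shape
    h, w = h - h % b, w - w % b
    return img[:h, :w].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

raw = np.random.default_rng(4).random((2048, 3056))  # stand-in for one capture
img_omega1 = bin_pixels(raw, 4)    # 512 x 764 pixels, Omega = 1
img_omega08 = bin_pixels(raw, 5)   # 409 x 611 pixels, Omega = 0.8
print(img_omega1.shape, img_omega08.shape)
```
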


Fig. 6. Experimental results on the USAF resolution target. (a) Samples of low-resolution images from the datasets with Ω=4, Ω=1, and Ω=0.8. The low-sampling-parameter images are generated by performing pixel binning on the raw image (top left). The reconstructed images of the AP algorithm, the SAP algorithm, and the proposed algorithm are shown in (b, c, d), where the left, central, and right columns are for Ω=4, Ω=1, and Ω=0.8.



Fig. 7. Zoomed bar-pattern images from Fig. 6. (a) Magnified regions of bar pattern (1,3). (b) Magnified regions of bar pattern (2,1). (c) Magnified regions of bar pattern (3,3). Inside the red box is the comparison of the three algorithms' reconstruction quality when Ω≤1. When Ω=0.8, neither the AP algorithm nor the SAP algorithm can resolve the bars in pattern (2,1), while our algorithm can discriminate the bars in pattern (3,3). Hence, our algorithm obtains 2.8 times higher resolution than the comparison algorithms.


In the second experiment, we examine our algorithm with undersampled raw data. We replace the image sensor with an sCMOS sensor (PCO.edge 3.1), keeping the rest of the setup the same as in the first experiment. The pixel size of the sCMOS sensor is 6.5 um, so the sampling parameter of the new setup is Ω=0.75. A ¥1 paper bill is used as the imaging object, and the captured raw image is shown in Fig. 8(a). Adjacent camera positions are separated by 0.625 mm to obtain an 87.5% overlap ratio, and a grid of 25 × 25 raw images is captured, producing a synthetic aperture of 20 mm. The AP algorithm, the SAP algorithm, and the proposed algorithm are run on this dataset. As shown in Fig. 8(b), the SAP algorithm gives the worst reconstruction quality, since lattice-like artifacts dominate the reconstructed image. As shown in Fig. 8(c), the AP algorithm presents improved image quality compared to the SAP algorithm; however, it shows pixelated artifacts that degrade the image textures and resolution. Our algorithm demonstrates the best reconstruction quality, recovering the object with detailed textures and clear edges, as shown in Fig. 8(d).


Fig. 8. Experimental results on a paper bill. The undersampled low-resolution image (a) is captured by the imaging system with sampling parameter Ω=0.75. The reconstructed images of the AP algorithm, the SAP algorithm, and the proposed algorithm are shown in (b, c, d).


6. Conclusion

This paper investigates FP reconstruction under a low sampling parameter for macroscopic reflective Fourier ptychography. By introducing the detector's sampling after the optical imaging, the generalized forward model of Fourier ptychography is formulated mathematically. Based on this generalized forward model, we propose a novel Fourier ptychographic cost function and design its optimization algorithm. Numerically and experimentally, we demonstrate that our algorithm can recover the object with good fidelity at sampling parameters below one. By reducing the sampling parameter, our algorithm increases the capacity of the FP imaging system by reducing the data amount.

Funding

Major project on High Resolution Earth Observation (GFZX04014307).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. B. G. Van, A. Meinel, and M. Meinel, “The scaling relationship between telescope cost and aperture size for very large telescopes,” Proc. SPIE 2004, 563–570 (2017). [CrossRef]  

2. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013). [CrossRef]  

3. P. C. Konda, L. Loetgering, K. C. Zhou, S. Xu, A. R. Harvey, and R. Horstmeyer, “Fourier ptychography: current applications and future promises,” Opt. Express 28(7), 9603–9630 (2020). [CrossRef]  

4. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

5. J. Holloway, Y. Wu, M. K. Sharma, O. Cossairt, and A. Veeraraghavan, “SAVI: Synthetic apertures for long-range, subdiffraction-limited visible imaging using Fourier ptychography,” Sci. Adv. 3(4), e1602564 (2017). [CrossRef]  

6. J. Wu, F. Yang, and L. Cao, “Resolution enhancement of long-range imaging with sparse apertures,” Opt. Laser Eng. 155, 107068 (2022). [CrossRef]  

7. J. Holloway, M. S. Asif, M. K. Sharma, N. Matsuda, R. Horstmeyer, and O. Cossairt, “Toward Long-Distance Subdiffraction Imaging Using Coherent Camera Arrays,” IEEE Trans. Comput. Imaging 2(3), 251–265 (2016). [CrossRef]  

8. S. Dong, R. Horstmeyer, R. Shiradkar, K. Guo, X. Ou, Z. Bian, X. Xin, and G. Zheng, “Aperture-scanning Fourier ptychography for 3D refocusing and super-resolution macroscopic imaging,” Opt. Express 22(11), 13586–13599 (2014). [CrossRef]  

9. L. H. Yeh, J. Dong, J. Zhong, L. Tian, M. Chen, and G. Tang, “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express 23(26), 33214–33240 (2015). [CrossRef]  

10. T. Nguyen, Y. Xue, Y. Li, L. Tian, and G. Nehmetallah, “Deep learning approach to Fourier ptychographic microscopy,” Opt. Express 26(20), 26470–26484 (2018). [CrossRef]  

11. S. Jiang, K. Guo, J. Liao, and G. Zheng, “Solving Fourier ptychographic imaging problems via neural network modeling and TensorFlow,” Biomed. Opt. Express 9(7), 3306–3319 (2018). [CrossRef]  

12. L. Bian, J. Suo, G. Zheng, K. Guo, F. Chen, and Q. Dai, “Fourier ptychographic reconstruction using Wirtinger flow optimization,” Opt. Express 23(4), 4856–4866 (2015). [CrossRef]  

13. M. Zhao, X. Zhang, Z. Tian, and S. Liu, “Neural network model with positional deviation correction for Fourier ptychography,” J. Soc. Inf. Disp. 29(10), 749–757 (2021). [CrossRef]  

14. X. Ou, G. Zheng, and C. Yang, “Embedded pupil function recovery for Fourier ptychographic microscopy,” Opt. Express 22(5), 4960–4972 (2014). [CrossRef]  

15. C. Zuo, J. Sun, and Q. Chen, “Adaptive step-size strategy for noise-robust Fourier ptychographic microscopy,” Opt. Express 24(18), 20724–20744 (2016). [CrossRef]  

16. R. D. Fiete, “Image Quality and λFN/p for Remote Sensing Systems,” Opt. Eng. 38(7), 1229–1240 (1999). [CrossRef]  

17. R. H. Vollmerhausen, D. A. Reago, and R. G. Driggers, Analysis and Evaluation of Sampled Imaging Systems (SPIE, 2010).

18. T. Aidukas, R. Eckert, A. R. Harvey, L. Waller, and P. C. Konda, “Low-cost, sub-micron resolution, wide-field computational microscopy using opensource hardware,” Sci. Rep. 9(1), 7457 (2019). [CrossRef]  

19. A. Pan, Y. Zhang, K. Wen, M. Zhou, J. Min, M. Lei, and B. Yao, “Subwavelength resolution Fourier ptychography with hemispherical digital condensers,” Opt. Express 26(18), 23119–23131 (2018). [CrossRef]  

20. S. Dong, Z. Bian, R. Shiradkar, and G. Zheng, “Sparsely sampled Fourier ptychography,” Opt. Express 22(5), 5455–5464 (2014). [CrossRef]  

21. J. W. Goodman, Speckle Phenomena in Optics: Theory and Applications (SPIE Press, 2007).

22. J. R. Fienup, “Phase Retrieval Algorithms: a Comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

23. J. R. Fienup, “Phase retrieval algorithms: a personal tour [Invited],” Appl. Opt. 52(1), 45–56 (2013). [CrossRef]  

24. J. W. Goodman, Introduction to Fourier Optics (Roberts and Company Publishers, 2005).

25. L. Tian, X. Li, K. Ramchandran, and L. Waller, “Multiplexed coded illumination for Fourier Ptychography with an LED array microscope,” Biomed. Opt. Express 5(7), 2376–2389 (2014). [CrossRef]  

26. H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. on Image Process. 15(11), 3440–3451 (2006). [CrossRef]  

27. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. on Image Process. 13(4), 600–612 (2004). [CrossRef]  


Figures (8)

Fig. 1.
Fig. 1. Schematic diagram of macroscopic reflective Fourier ptychographic imaging. A laser produces a coherent optical field to illuminate the object. The field reflected from the object is then captured by the camera. Inside the camera, a low-resolution, speckled replica of the object is formed in front of the sensor. Due to the sensor's sampling process, the captured image becomes pixelated when the sampling parameter is low.
Fig. 2.
Fig. 2. Original image and simulated reference image.
Fig. 3.
Fig. 3. The reconstruction performance of the AP algorithm and the proposed algorithm when the sampling parameter Ω is above 1. The PSNR (a) and SSIM (b) of the recovered images are plotted at various sampling parameters. The low-resolution images and the images reconstructed by the AP algorithm and the proposed algorithm are shown in (c), (d), and (e), where the left column is for Ω=4 and the right column is for Ω=1.
Fig. 4.
Fig. 4. Reconstruction results when the sampling parameter is below 1. The low-resolution images and the images reconstructed by the AP algorithm, the SAP algorithm, and the proposed algorithm are shown in (a), (b), and (c), where the left column is for Ω=0.75 and the right column is for Ω=0.6. The PSNR (d) and SSIM (e) of the images recovered by the AP algorithm and the proposed method are plotted over iterations.
Fig. 5.
Fig. 5. Macroscopic reflective Fourier ptychographic imaging setup.
Fig. 6.
Fig. 6. Experimental results on the USAF resolution target. (a) Samples of low-resolution images from the datasets with Ω=4, Ω=1, and Ω=0.8. The low-sampling-parameter images are generated by performing pixel binning on the raw image (top left). The images reconstructed by the AP algorithm, the SAP algorithm, and the proposed algorithm are shown in (b), (c), and (d), where the left, central, and right columns correspond to Ω=4, Ω=1, and Ω=0.8.
Fig. 7.
Fig. 7. Zoomed bar-pattern images from Fig. 6. (a) Magnified regions of bar pattern (1,3). (b) Magnified regions of bar pattern (2,1). (c) Magnified regions of bar pattern (3,3). The red box compares the three algorithms' reconstruction quality when Ω≤1. When Ω=0.8, neither the AP algorithm nor the SAP algorithm can resolve the bars in pattern (2,1), while our algorithm can discriminate the bars in pattern (3,3). Hence our algorithm achieves about 2.8 times higher resolution than the compared algorithms.

Equations (16)


$$\Psi(u,v)=\frac{\exp(jkz)}{j\lambda z}\exp\!\left(\frac{jk(u^{2}+v^{2})}{2z}\right)\mathcal{F}\{O(x,y)\}\Big|_{u=\frac{x}{\lambda z},\,v=\frac{y}{\lambda z}}$$

$$P(u,v)=\begin{cases}1, & \sqrt{u^{2}+v^{2}}\le D/2\\ 0, & \text{else}\end{cases}$$

$$\begin{aligned}\Gamma(x,y)&=\frac{\exp(jkd)}{j\lambda d}\,\mathcal{F}\!\left\{\Psi(u,v)P(u,v)\exp\!\left(\frac{jk(u^{2}+v^{2})}{2}\left(\frac{1}{d}-\frac{1}{f}\right)\right)\right\}\\&=\frac{\exp(jk(z+d))}{-\lambda^{2}zd}\,\mathcal{F}\!\left\{\tilde{O}(u,v)P(u,v)\exp\!\left(\frac{jk(u^{2}+v^{2})}{2}\left(\frac{1}{z}+\frac{1}{d}-\frac{1}{f}\right)\right)\right\}\\&=C\,\mathcal{F}\{\tilde{O}(u,v)P(u,v)\}\end{aligned}$$

$$\Gamma_{i}(x,y)=C\,\mathcal{F}\{\tilde{O}(u,v)P(u-u_{i},v-v_{i})\},\quad i=1,\dots,N$$

$$I_{i}(p,q)=\left(|\Gamma_{i}(x,y)|^{2}\ast W\!\left(\tfrac{x}{w},\tfrac{y}{w}\right)\right)\times\sum_{p,q}\delta(x-pw,\,y-qw)$$

$$W\!\left(\tfrac{x}{w},\tfrac{y}{w}\right)=\begin{cases}1/w^{2}, & |x|<w/2,\ |y|<w/2\\ 0, & \text{otherwise}\end{cases}$$

$$\Gamma_{i}=\mathcal{F}^{-1}\operatorname{diag}(P)\,Q_{i}\tilde{O}=A_{i}\tilde{O}$$

$$I_{i}=SW\,|A_{i}\tilde{O}|^{2}$$

$$I_{i}=D\,|A_{i}\tilde{O}|^{2}$$
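The discrete forward model — shift the spectrum (Q_i), low-pass with the pupil (diag(P)), inverse transform, take the intensity, and downsample with D = SW — can be sketched in a few lines of numpy. This is an illustrative toy: the image size, aperture radius, spectrum shift, and binning factor are assumed values, and D is realized as w×w pixel binning:

```python
import numpy as np

def forward(O_tilde, pupil, shift, bin_factor):
    """Simulate one undersampled FP capture: I_i = D |A_i O~|^2.

    O_tilde:    complex object spectrum (n x n), centered
    pupil:      binary aperture mask (n x n), centered
    shift:      (du, dv) integer spectrum shift for this camera position (Q_i)
    bin_factor: sensor pixel size in fine-grid pixels (the operator D = S W)
    """
    # Q_i: shift the object spectrum so the sub-aperture is centered
    shifted = np.roll(O_tilde, shift, axis=(0, 1))
    # diag(P): low-pass by the aperture, then F^{-1} to the image plane
    gamma = np.fft.ifft2(np.fft.ifftshift(pupil * shifted))
    intensity = np.abs(gamma) ** 2
    # D = S W: average over b x b pixel blocks (box blur W + sampling S)
    n, b = intensity.shape[0], bin_factor
    return intensity[:n - n % b, :n - n % b].reshape(
        n // b, b, n // b, b).mean(axis=(1, 3))

# toy example: 64 x 64 spectrum, quarter-radius pupil, 2 x 2 binning
n = 64
O_tilde = np.fft.fftshift(np.fft.fft2(np.random.rand(n, n)))
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx ** 2 + yy ** 2 <= (n // 4) ** 2).astype(float)
I = forward(O_tilde, pupil, (5, -3), 2)
print(I.shape)  # (32, 32)
```

The binning step makes the undersampling explicit: the recorded image has b² fewer pixels than the diffraction-limited intensity on the fine grid.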
$$\min_{\tilde{O}}L(\tilde{O})=\min_{\tilde{O}}\sum_{i}\frac{1}{2}\left\|\frac{D^{T}\big(I_{i}-D|A_{i}\tilde{O}|^{2}\big)}{|A_{i}\tilde{O}|}\right\|_{2}^{2}$$

$$\min_{\tilde{O}}L_{i}(\tilde{O})=\min_{\tilde{O}}\frac{1}{2}\left\|\frac{D^{T}\big(I_{i}-D|A_{i}\tilde{O}|^{2}\big)}{|A_{i}\tilde{O}|}\right\|_{2}^{2}$$

$$L_{i}=\frac{1}{2}g_{i}^{T}g_{i},\quad\text{where}\quad g_{i}=\frac{D^{T}I_{i}-D^{T}D|A_{i}O|^{2}}{|A_{i}O|}$$

$$\frac{\partial L_{i}}{\partial O}=\frac{\partial g_{i}}{\partial O}g_{i}=-\frac{\partial|A_{i}O|}{\partial O}\left(\operatorname{diag}\!\left(\frac{D^{T}I_{i}}{|A_{i}O|^{2}}\right)+D^{T}D\right)g_{i}.$$

$$\frac{\partial|A_{i}O|}{\partial O}=\frac{\partial\big((A_{i}O)^{*}\odot(A_{i}O)\big)/\partial O}{2\,|A_{i}O|}=A_{i}^{H}\operatorname{diag}\!\left(\frac{A_{i}O}{|A_{i}O|}\right).$$

$$\frac{\partial L_{i}}{\partial O}=-A_{i}^{H}\operatorname{diag}\!\left(\frac{A_{i}O}{|A_{i}O|}\right)\left(\operatorname{diag}\!\left(\frac{D^{T}I_{i}}{|A_{i}O|^{2}}\right)+D^{T}D\right)\frac{D^{T}I_{i}-D^{T}D|A_{i}O|^{2}}{|A_{i}O|}$$

$$\tilde{O}^{(i+1,\,\mathrm{iter})}=\tilde{O}^{(i,\,\mathrm{iter})}+\beta\left.\frac{\partial L_{i}}{\partial\tilde{O}}\right|_{\tilde{O}=\tilde{O}^{(i,\,\mathrm{iter})}}$$
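A sequential spectrum update of this kind can be sketched with numpy. This is not the paper's exact gradient — for brevity it uses a plain intensity-residual update through A_i, omitting the |A_iÕ| weighting and the D^T D preconditioning — and the shift, binning factor b, and step size β are assumed toy values, with the adjoint D^T realized as pixel replication:

```python
import numpy as np

def Dt(img, b):
    """Adjoint of b x b block averaging: replicate each pixel, rescale."""
    return np.repeat(np.repeat(img, b, axis=0), b, axis=1) / b ** 2

def update_spectrum(O_tilde, pupil, shift, I_meas, b, beta=1.0):
    """One sequential spectrum update for capture i (simplified sketch).

    Forward model: A_i O~ = F^{-1}{P * roll(O~, shift)}, I_i = D |A_i O~|^2,
    with D approximated by b x b binning (n must be divisible by b).
    """
    n = O_tilde.shape[0]
    shifted = np.roll(O_tilde, shift, axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(pupil * shifted))
    sim = np.abs(field) ** 2
    binned = sim.reshape(n // b, b, n // b, b).mean(axis=(1, 3))
    # residual lifted back to the fine grid: D^T (I_i - D |A_i O~|^2)
    r = Dt(I_meas - binned, b)
    # chain rule through |.|^2, then A_i^H (constants absorbed into beta)
    grad_spec = np.fft.fftshift(np.fft.fft2(r * field)) * pupil
    return O_tilde + beta * np.roll(grad_spec, (-shift[0], -shift[1]),
                                    axis=(0, 1))

# consistency check: a measurement generated by the same forward model gives
# zero residual, so the update leaves the spectrum unchanged
n, b = 32, 2
rng = np.random.default_rng(0)
O_tilde = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (xx ** 2 + yy ** 2 <= 8 ** 2).astype(float)
field = np.fft.ifft2(np.fft.ifftshift(pupil * np.roll(O_tilde, (3, -2),
                                                      axis=(0, 1))))
I_meas = (np.abs(field) ** 2).reshape(n // b, b, n // b, b).mean(axis=(1, 3))
assert np.allclose(update_spectrum(O_tilde, pupil, (3, -2), I_meas, b), O_tilde)
```

Cycling such updates over all N captures, and repeating for several outer iterations, is the usual sequential scheme for stitching the sub-apertures into the synthetic aperture.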