
Quantitative phase and amplitude imaging with an efficient support constraint

Open Access

Abstract

High-speed quantitative phase and amplitude imaging methods have led to numerous biological discoveries. For general samples, phase retrieval from a single diffraction pattern has been an algorithmic and experimental challenge. Here we present a quantitative phase and amplitude imaging method that applies an efficient support constraint to achieve rapid algorithmic convergence by removing the twin image and spatial shift ambiguities. Compared to previous complex-valued imaging methods, ours is lenslet-free and relies neither on assumptions of sample sparsity nor on interferometric measurements. Our method is robust for imaging in materials and biological science, while its rapid imaging capability will benefit the investigation of dynamical processes.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Information about phase and amplitude is of great importance for biomedical imaging, aberration measurement and correction in visual optics, and three-dimensional imaging. While traditional imaging microscopes only record the amplitude, an object’s phase information is highly desired in many situations. Recently, quantitative phase imaging (QPI) has emerged as an invaluable optical tool to image the optical thickness variation of living cells and tissues without the need for specific staining or exogenous contrast agents. Even though optics has been developed for more than a century, many restrictions remain on retrieving both the amplitude and phase of an object [1,2].

As detectors only capture light irradiance, two main groups of techniques have emerged to tackle the problem of recovering the complex value of an object. The first one obtains information in object space. Examples of such techniques include interferometric approaches [3], Fourier ptychography (FP) [4], and differential phase contrast (DPC) [5]. The interferometric approach measures the interference between light passing through the object and a reference beam. Although this measurement is extremely precise, its high sensitivity to environmental perturbations and high cost restrict its application. FP reconstructs high-resolution phase and amplitude images by fusing information from multiple images taken at different illumination angles. DPC is another effective way to acquire quantitative phase images, deconvolving phase gradient images with the phase transfer function.

As an alternative, a second group of techniques that record information in Fourier space has emerged, such as Shack-Hartmann (SH) wavefront sensors [6], coherent diffractive imaging (CDI) [7,8], and ptychography [9,10]. SH sensors combine a lenslet array with a pixelated detector. By placing the detector at the focal plane of the lenslet array, the positions of the foci generated by each lenslet can be linked to the phase of the wavefront in each region. However, the number of lenslets, their focal length, and their diameter limit the spatial resolution, dynamic range, and sensitivity of SH sensors. Apart from SH sensors, CDI can also recover the phase of a coherent wave given adequate measurements of its intensity distribution in the Fourier plane and some a priori information about the support domain. One of its central difficulties is that its results are often trapped in local minima. Ptychography is a particularly powerful form of CDI. The basic idea of ptychography is to illuminate a sample with a focused beam and repeatedly record its far-field diffraction pattern as a function of sample position. Iterative retrieval methods are applied to recover the sample’s complex value from this set of measurements.

Here, we present a quantitative phase and amplitude imaging method with an efficient support constraint. Whereas ptychography requires an array of partially overlapping probing spots on the object, i.e., significant redundancy in the measured data, to make phase retrieval more robust to stagnation [11], the support constraint of our method is imposed in object space so that intensity diffraction patterns in Fourier space from multiple illumination spots are recorded in a single exposure. Using the modulus constraint and the support constraint together, our method achieves rapid convergence and high-quality images because ambiguous results conflicting with these two constraints are rapidly eliminated during the iterative reconstruction. We analyze the performance of our method in simulation and experiment, showing that high resolution and rapid imaging capability are accessible.

2. Stagnation problem

The basic idea of our method is to use the relationship between the amplitude and phase of an object and its Fourier irradiance distribution. This idea is also at the heart of current CDI. Although the iterative transform algorithm in CDI works well for many cases of interest, there is no guarantee that it will converge to a solution [12]. If the number of independent equations equals the number of unknowns, the algorithm has a single solution. Unfortunately, the equations are difficult to solve, and have an enormous number of local minima. The presence of noise and limited prior knowledge (loose constraints) increases the number of solutions within the noise level and constraints range.

For certain types of objects, the iterative algorithm can stagnate on partially reconstructed images. As shown in Fig. 1, there are three kinds of stagnation, characterized by the twin image, spatial shift ambiguities, and oscillating stripes [13]. The first stagnation problem results from the fact that an object ρ(r) and its twin ρ*(−r) (the complex conjugated object rotated by 180°) have the same Fourier modulus and, for cases in which the support of the object is symmetric with respect to this rotation, have the same support. The stagnation problem also arises when the support constraint is used in a manner that has no obvious effect on the intensity variation in Fourier space. In that case, the reconstructed image can sit at a position shifted relative to the support constraint without the support truncating (cutting off spatially) any part of the image. Since ρ(r+r0) has the same Fourier modulus as ρ(r), the location of the object is then arbitrary. Figure 1(b) shows the reconstruction of the object in Fig. 1(a) with twin image and spatial shift ambiguities, caused by a centrosymmetric mask and the correspondingly insensitive Fourier intensity.
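These two ambiguities are easy to verify numerically: the 180°-rotated complex conjugate of a field and any shifted copy of it both have exactly the same Fourier modulus. A minimal NumPy check (the array size and shift offsets are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
obj = rng.random((64, 64)) * np.exp(1j * rng.random((64, 64)))  # complex-valued object

# Twin image: complex conjugate rotated by 180 degrees.
twin = np.conj(obj[::-1, ::-1])
# Spatially shifted copy (circular shift, so the field simply wraps around).
shifted = np.roll(obj, (5, -3), axis=(0, 1))

modulus = np.abs(np.fft.fft2(obj))
assert np.allclose(modulus, np.abs(np.fft.fft2(twin)))     # same Fourier modulus
assert np.allclose(modulus, np.abs(np.fft.fft2(shifted)))  # same Fourier modulus
```

Because the detector records only this modulus, nothing in the data distinguishes the object from its twin or from a shifted copy unless the support constraint breaks the symmetry.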


Fig. 1 Three kinds of stagnation problems. (a) Object ρ(r). (b) Reconstructed object with twin image and spatial shift ambiguities. (c) Object (left) and its Fourier transform (right). (d) Fourier transform without the high frequencies in the vertical direction (right) and the object recovered from it with oscillating stripes (left).


Another kind of stagnation problem is characterized by an output image that looks much like the true object but with a pattern of stripes superimposed. The pattern of stripes is approximately sinusoidal in one direction and constant in the orthogonal direction. Depending on the object itself and the support constraint, this problem frequently occurs to varying degrees. Figure 1(d) shows the reconstruction of the object in Fig. 1(c) with the oscillating stripes problem. The Fourier space away from the origin (high frequencies) encodes details of the image, whereas the Fourier space near the origin (low frequencies) encodes large-scale intensity variation. The loss of high frequencies in the vertical direction of Fourier space leads to the oscillating stripes problem. Notice that there are obvious stripes in the reconstructed image, whereas the overall contrast is still quite good since not too much power is thrown away. We attribute this to the very sharp cutoff of the “ideal” filter. A Butterworth or exponential filter with reasonably low order will not cause these artifacts, as exploited by the oversampling smoothness (OSS) algorithm [14].

3. Algorithm

Here, we describe the algorithm we have developed around the support constraint to jump over the stagnation hurdles, allowing the algorithm to move properly toward a solution. The flowchart of our algorithm is shown in Fig. 2. The support constraint is provided by a digital micromirror device (DMD), within whose finite transmissive region the wavefield is known to have nonzero values. The Fourier intensity image captured under the known support constraint is used to recover the object. Phase and amplitude reconstruction is performed by iteratively propagating a wavefield estimate between the object and Fourier spaces. The applied constraints and the necessary inputs in the two spaces are indicated in Fig. 2(a). Normally, we start with an initial guess in object space; a good starting point is a random intensity image with zero phase.


Fig. 2 (a) Iterative recovery process of phase retrieval with support constraint 1 of (b). Steps 1-5 illustrate the algorithm, following principles from phase retrieval. Step 1: initialize the object to be recovered. Step 2: the projection on the object space involves setting the components outside the support constraint to 0, while leaving the rest of the values unchanged. Step 3: the projection of the object onto the Fourier intensity set is accomplished by setting the modulus of the object in Fourier space to the measured Fourier intensity, and leaving the phase unchanged. Step 4: repeat the HIO algorithm based on steps 2-3 once more. (b) Two complementary support constraints. The length of the two patterns is n, the width of all the bars is 0.1n, and the side length of the equilateral triangles is 0.707n. The centroids of the equilateral triangles are at the center of the two patterns. The selection of specific parameters is shown in Fig. 3(c)-3(d).


Second, we apply the support constraint S in object space. A projection onto the support, P_S, drives toward zero all values outside S, which is a crucial step in resolving ambiguities during phase retrieval:

$$P_S\rho(r)=\begin{cases}\rho(r), & r\in S\\[2pt] 0, & r\notin S\end{cases}\tag{1}$$
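As a minimal sketch (the function name and the toy mask below are our own), the support projection is a single masked assignment:

```python
import numpy as np

def project_support(rho, support):
    """P_S: keep rho inside the boolean support S, zero it outside."""
    return np.where(support, rho, 0)

# Toy example: a 4x4 field whose support is the upper-left 2x2 block.
rho = np.ones((4, 4), dtype=complex)
support = np.zeros((4, 4), dtype=bool)
support[:2, :2] = True
projected = project_support(rho, support)  # nonzero only inside the support
```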
In our method, the support constraint is replaced with a non-centrosymmetric mask to remove the twin image. Figure 2(b) shows the two support constraints we select, which have a significant effect on the intensity variation in Fourier space and thereby resolve the spatial shift ambiguities. In this way, our support constraint provides more useful Fourier information than the traditional circular support constraint. The support constraint of our method is imposed in object space, which is equivalent to recording the intensity diffraction patterns in Fourier space from multiple illumination spots in a single exposure. In this sense, our method can be regarded as parallel ptychography.

Third, we apply the modulus constraint M in Fourier space. Since the low-frequency components of the Fourier modulus determine the rough appearance or energy distribution of the object, saturated pixels in Fourier space would seriously increase the reconstruction error. Our motive for using Eq. (2) is to exclude the effects of the saturated pixels in the algorithm. A projection onto the modulus, P_M, for unsaturated pixels in Fourier space involves applying the fast Fourier transform (FFT) to the object ρ(r), replacing the calculated Fourier amplitude |F[ρ(r)]| with the square root of the exactly oversampled diffraction intensity I_S(k), and then applying the inverse FFT to obtain an updated object:

$$E(k)=\begin{cases}\sqrt{I_S(k)}\,\dfrac{F[\rho(r)]}{|F[\rho(r)]|}, & I_S(k)<\max[I_S(k)]\\[6pt] F[\rho(r)], & I_S(k)=\max[I_S(k)]\end{cases}\tag{2}$$
$$P_M\rho(r)=F^{-1}\{E(k)\}\tag{3}$$
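A sketch of this saturation-aware modulus projection follows; the function name and the small guard against division by zero are our additions:

```python
import numpy as np

def project_modulus(rho, I_s):
    """P_M: impose sqrt(I_s) as the Fourier modulus on unsaturated pixels;
    pixels at the maximum of I_s are treated as saturated and keep the
    current Fourier estimate unchanged."""
    F = np.fft.fft2(rho)
    saturated = I_s == I_s.max()
    E = np.sqrt(I_s) * F / (np.abs(F) + 1e-12)  # replace modulus, keep phase
    E[saturated] = F[saturated]                  # exclude saturated pixels
    return np.fft.ifft2(E)
```

After the projection, the Fourier modulus of the updated object matches the measured √I_S everywhere except at the saturated pixels, which carry no reliable intensity information.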

Miao et al. clarified the sampling requirement for unique phase retrieval and pointed out that oversampling of the diffraction pattern usually guarantees convergence [15]. When the linear oversampling ratio is not less than two, the exactly oversampled diffraction intensity IS(k) can be obtained from the processed Fourier intensity I(k) through deconvolution [16]:

$$I_S(k)=F\left\{\frac{F^{-1}[I(k)]}{\operatorname{sinc}(r/M)}\right\}\tag{4}$$
where M is the number of sampling points in Fourier space.
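Under the convention that the measured pattern is the exact intensity blurred by the detector-pixel response, whose real-space counterpart is the sinc factor above, the deconvolution can be sketched as below. The grid ordering is our choice, and no regularization is needed because sinc(r/M) never reaches zero for |r| ≤ M/2:

```python
import numpy as np

def deconvolve_pixel_sinc(I):
    """Recover the exactly oversampled intensity I_S from the processed
    intensity I by dividing its autocorrelation by sinc(r/M) in real space."""
    M = I.shape[0]                      # number of sampling points per axis
    r = np.fft.fftfreq(M) * M           # real-space coordinates, FFT ordering
    sinc2d = np.outer(np.sinc(r / M), np.sinc(r / M))  # separable pixel response
    autocorr = np.fft.ifft2(I)          # autocorrelation of the object
    return np.real(np.fft.fft2(autocorr / sinc2d))
```

Since np.sinc(0.5) ≈ 0.637, the division is well conditioned over the whole grid.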

A large overall intensity in Fourier space leads to much more rapid convergence owing to the higher signal-to-noise ratio (SNR) in object space. The measured Fourier intensity IM(k) is normalized and multiplied by the product of the number of object pixels N2 and the weight w to obtain the processed data I(k):

$$I(k)=wN^2\,\frac{I_M(k)}{\max[I_M(k)]}\tag{5}$$

In the fourth step, the hybrid input-output (HIO) algorithm [17], based on the support and modulus constraints, is repeated until a self-consistent solution is achieved (usually within 200 iterations):

$$\rho_{n+1}(r)=\begin{cases}P_M\rho_n(r), & r\in S\\[2pt] \rho_n(r)-P_M\rho_n(r), & r\notin S\end{cases}\tag{6}$$
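Putting the two projections together, the HIO loop can be sketched as follows. This is a minimal version of ours: the feedback parameter is fixed to 1, the initialization is the random-amplitude, zero-phase guess described above, and saturation handling and OSS smoothing are omitted:

```python
import numpy as np

def hio(I_s, support, n_iter=200, seed=0):
    """Minimal HIO loop: modulus constraint in Fourier space,
    support feedback in object space."""
    rng = np.random.default_rng(seed)
    rho = rng.random(I_s.shape).astype(complex)      # random amplitude, zero phase
    for _ in range(n_iter):
        F = np.fft.fft2(rho)
        E = np.sqrt(I_s) * F / (np.abs(F) + 1e-12)   # impose measured modulus
        pm = np.fft.ifft2(E)                          # P_M rho_n
        rho = np.where(support, pm, rho - pm)         # HIO update rule
    return rho * support                              # final support projection
```

Fienup's original HIO scales P_Mρ_n by a feedback constant β (typically 0.5–0.9) outside the support [17]; we drop β here only for brevity.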

The oscillating stripes can be mitigated by employing the OSS algorithm, i.e., implementing a general smoothness constraint W(k) on the region outside the support [14]:

$$W(k)=\exp\left[-\frac{1}{2}\left(\frac{k}{\alpha}\right)^2\right]\tag{7}$$
By tuning the parameter α, the width of the Gaussian filter can be adjusted to reduce the influence of high-frequency information outside the support, while the density inside the support is not disturbed.
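One OSS step can be sketched as below (the frequency-grid convention and function name are our choices): filter the current estimate with W(k) in Fourier space and keep the filtered values only outside the support.

```python
import numpy as np

def oss_step(rho, support, alpha):
    """Gaussian-filter the estimate in Fourier space and apply the result
    only outside the support; the interior is left untouched."""
    n = rho.shape[0]
    k = np.fft.fftfreq(n) * n                    # frequency grid, FFT ordering
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    W = np.exp(-0.5 * k2 / alpha ** 2)           # W(k) = exp[-(k/alpha)^2 / 2]
    smoothed = np.fft.ifft2(np.fft.fft2(rho) * W)
    return np.where(support, rho, smoothed)
```

Shrinking α smooths the exterior more aggressively; the OSS schedule in [14] gradually tightens it over the course of the iterations.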

4. Support constraint

Reconstructing complex-valued objects is possible if one has a sufficiently tight support constraint belonging to one of a number of special types. The support of the object plays the most important role in determining whether the solution is unique. In [18,19], it was shown that all sampled objects whose supports have known convex hulls that are polygons with no parallel sides yield unique solutions. However, the presence of parallel sides does not imply that multiple solutions exist for the data in question, only that they might occur, and that knowledge of the convex hull is insufficient to ensure uniqueness. The library of supports for which the solution is known to be unique is growing as we learn more about this important constraint.

One would expect to be able to find other support constraints for which the iterative Fourier-transform algorithm performs successfully. In [20], Fienup showed the reconstruction of an object having triangular support. The sharpness of the object's edges and its zeroed corners influence the ability of the iterative algorithm to reconstruct an image using only a support constraint in object space. As a non-centrosymmetric mask, a random pattern also converges quickly. Similarly, coherent modulation imaging (CMI) is a recently developed single-shot CDI technique that adopts a highly random phase plate to diffract the light field under observation into a speckle pattern [21]. Although the random pattern yields rapid convergence, in experiments the SNR of the final reconstruction is limited by the inability to detect high-order diffraction, which is strongly scattered by the phase plate and too weak to be recorded by the detector.

To further explore the influence of more complex support-constraint parameters on the reconstruction of various types of objects, we selected the patterns in Fig. 2(b) for analysis. The two complementary support constraints in Fig. 2(b), viz. support constraint 1, which has separated parts, and support constraint 2, for which the object can be reconstructed by the recursive algorithm with latent reference points, can almost always guarantee the uniqueness of the solution for different types of objects.

The influence of support constraint 1 in Fig. 2(b) on the reconstruction of the complex-valued object can be compared with the properties of holography [20]. If one of the separated parts is sufficiently separated from the others, it acts as a holographic reference, and the complex-valued object can be easily extracted from its autocorrelation. The object ρ(r) has Fourier transform F(k):

$$\rho(r)=\rho_1(r)+\rho_2(r)\ast\delta(r-r_0)\tag{8}$$
$$F(k)=F_1(k)+F_2(k)\exp(-i2\pi k\cdot r_0)\tag{9}$$
where δ(r) is the Dirac delta function, ∗ denotes convolution, $F_1(k)=F[\rho_1(r)]=|F_1(k)|\exp[i\varphi_1(k)]$, and $F_2(k)=F[\rho_2(r)]=|F_2(k)|\exp[i\varphi_2(k)]$. The squared Fourier modulus of the object is

$$|F(k)|^2=|F_1(k)+F_2(k)\exp(-i2\pi k\cdot r_0)|^2=|F_1(k)|^2+|F_2(k)|^2+2|F_1(k)||F_2(k)|\cos[2\pi k\cdot r_0+\varphi_1(k)-\varphi_2(k)]\tag{10}$$

If the separation r0 is sufficiently large compared with the width of the object, the spatial modulation of the cosine fringes by the phases φ1(k) and φ2(k) yields a solution for the phase [22]; the holography condition is then satisfied. With sufficiently separated parts, there is a fringe-like structure in the squared Fourier modulus of the object. As one departs further from the holography condition, the fringes degrade into a speckle pattern, and the ability to decipher the phase from the degraded fringes diminishes. For a support constraint with a hole in the middle, the number of iterations required for convergence is similar to that of objects with separated supports: along any 1D cut through the center of the object, such a support does have two separated parts.
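The fringe expression for the squared Fourier modulus can be checked numerically; in the 1D sketch below, the object width (8 samples) and separation r0 = 40 are arbitrary choices of ours satisfying the holography condition:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
rho1 = np.zeros(n, dtype=complex)
rho2 = np.zeros(n, dtype=complex)
rho1[:8] = rng.random(8) * np.exp(1j * rng.random(8))   # narrow part 1
rho2[:8] = rng.random(8) * np.exp(1j * rng.random(8))   # narrow part 2
r0 = 40                                    # separation, large vs object width
rho = rho1 + np.roll(rho2, r0)             # rho2(r) convolved with delta(r - r0)

F1, F2 = np.fft.fft(rho1), np.fft.fft(rho2)
k = np.fft.fftfreq(n)
lhs = np.abs(np.fft.fft(rho)) ** 2
rhs = (np.abs(F1) ** 2 + np.abs(F2) ** 2
       + 2 * np.abs(F1) * np.abs(F2)
       * np.cos(2 * np.pi * k * r0 + np.angle(F1) - np.angle(F2)))
assert np.allclose(lhs, rhs)               # fringe term carries the phase
```

The cos(2πk·r0 + φ1 − φ2) term is the fringe pattern whose spatial modulation encodes the phase difference between the two parts.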

The holography condition demonstrates that having a support constraint consisting of (at least) two separated parts makes the reconstruction of the object by the iterative Fourier-transform algorithm much easier than reconstruction with a simply connected support constraint. Support constraint 1 in Fig. 2(b) has more than two separated parts, and the separation, i.e., the width of the bar, is large enough to satisfy the holography condition.

For support constraint 2 in Fig. 2(b), the objects contain reference points that do not satisfy the holography condition but satisfy weaker conditions. In [23], it was shown that if a sampled object is known to have triangular support (with nonzero corners) or certain other shapes including latent reference points, then the Fourier transform of the object satisfies Eisenstein's theorem, making it an irreducible 2D polynomial and guaranteeing that the solution to the phase retrieval problem is unique. Compared to simple shapes such as triangles, objects having complicated supports, e.g., constraint 2 in Fig. 2(b), tend to be easier to reconstruct than objects with convex symmetric support in the 2D case, since complicated supports impose more restrictions in object space and provide more useful information in Fourier space.

By applying support constraints of different shapes to the complex wave function, we generated a set of diffraction patterns with different degrees of Fourier modulus restriction. For F[ρ(r)]=0, the projection onto the modulus PM is multivalued. The phase vortices associated with these zeros cause stagnation in iterative algorithms. Our support constraints can effectively reduce the presence of zero values in Fourier space to guarantee the uniqueness of the solution for different types of objects.

Our method uses the two complementary support constraints in Fig. 2(b) to reconstruct the corresponding objects and stitch them together. To ensure that the number of unknowns is equal, the two complementary support constraints have equal transmission areas. Under this condition, given the width of the bars for a fixed shape, the scale of the equilateral triangle at the center of the pattern is determined. By applying bars of different widths, we find a substantial range of widths for which convergence to the correct solution is obtained, up to the usual ambiguities. Increasing the width of the bars, i.e., reducing the triangle region, degrades the convergence speed and increases the reconstruction error because the non-centrosymmetric property is weakened; reducing the width of the bars causes the separation to depart from the holography condition, and the triangle area of the complementary pattern becomes too large to be solved.
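The stitching step itself is straightforward once the two reconstructions are available. The checkerboard-style masks below are placeholders of ours standing in for the paper's bar/triangle patterns; any complementary partition with equal transmission areas behaves the same for this step:

```python
import numpy as np

rng = np.random.default_rng(2)
obj = rng.random((64, 64)) * np.exp(1j * np.pi * rng.random((64, 64)))

# Hypothetical complementary supports with equal transmission areas.
s1 = np.indices((64, 64)).sum(axis=0) % 2 == 0   # placeholder pattern
s2 = ~s1                                          # its complement

# Each exposure recovers the object inside its own support; stitching the
# two masked reconstructions restores the full field of view.
rec1, rec2 = obj * s1, obj * s2
stitched = rec1 + rec2
assert np.allclose(stitched, obj)
```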

As shown in Fig. 3, we carried out reconstruction tests for a wide range of complex-valued objects. Simulations were performed in which noise was added to the diffraction patterns and the patterns were digitized. This was done by adding 0.1% noise to the maximum pixel value in the diffraction pattern, with correspondingly larger relative noise on less intense pixels, where the number of counts is lower. The statistical error for each intensity value was calculated using a random deviate drawn from a Poisson distribution. The reconstructed intensity (Figs. 3(a1)-3(a4)) and phase (Figs. 3(b1)-3(b4)) images using our support constraints in Fig. 2(b) with an intermediate bar width (0.1n) are accurate and robust, even for objects with tapered edges and zeroed corners. Figures 3(c)-3(d) show the root mean square error (RMSE) and structural similarity (SSIM) of the above reconstruction results to quantify the reconstruction quality as a function of bar width.
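This noise model can be reproduced as follows: scaling the pattern so that the brightest pixel carries (1/0.001)² = 10⁶ counts gives it a relative Poisson error of 0.1%, with dimmer pixels correspondingly noisier. The helper name is ours:

```python
import numpy as np

def add_poisson_noise(I, peak_rel_error=1e-3, seed=0):
    """Draw Poisson deviates after scaling the peak pixel to
    (1/peak_rel_error)^2 counts, so its relative error is peak_rel_error
    and less intense pixels see proportionally larger relative noise."""
    rng = np.random.default_rng(seed)
    peak_counts = (1.0 / peak_rel_error) ** 2   # 1e6 counts for 0.1% noise
    scale = peak_counts / I.max()
    return rng.poisson(I * scale) / scale       # back to the original units
```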


Fig. 3 Image reconstructions after 200 iterations from simulated diffraction patterns using our support constraints in Fig. 2(b) with different bar widths. (a1-a4) are the recovered amplitude with intermediate bar width value (0.1n). (b1-b4) are the recovered phase with intermediate bar width value (0.1n). (c-d) are the variations of the RMSE and SSIM of (a1-b4) with the ratio of the bar width to the length of the pattern. The curves with different colors represent different objects, while the curves with two line types represent amplitude and phase, respectively.


We can see that our method effectively ameliorates the stagnation problems that occur in traditional iterative algorithms, without the need for significant redundancy in the measured data.

5. Simulation

Figures 4(a)-4(b) show the test data used to compare various support constraints. The test object consists of photos of peppers (intensity, varying from 0 to 1) and the mandrill (phase, varying from 0 to π). This is a complex data set, representing a difficult problem in phase retrieval. Figures 4(a1)-4(a3) and 4(b1)-4(b3) show the amplitude and phase of the reconstructions for different support constraints after 200 iterations. For ease of comparison, all support constraints have equal transmission areas, and all results are obtained using the HIO algorithm with the same parameters. The second and third columns are reconstructed with a single circular support constraint and with four overlapping circular support constraints (ptychography), respectively, while the last column applies the first support constraint of Fig. 2(b), which provides more Fourier information than the single circular support of the second column.


Fig. 4 Image reconstructions after 200 iterations from simulated diffraction patterns with different constraints. (a-b) are the original amplitude and phase. (a1-b3) are the recovered amplitude and phase with a single circular support constraint (a1, b1), four overlapping circular support constraints (a2, b2) and our support constraint (a3, b3). (c-d) are the variation of the RMSE and SSIM of (a1-b3) with iteration number. The curves with different colors represent different support constraints, while the curves with two line types represent amplitude and phase, respectively.


As shown in Figs. 4(c)-4(d), we calculate the RMSE and SSIM of the above reconstruction results to quantify the reconstruction quality versus the number of iterations. RMSE measures the difference between the values of the target image and those of the reconstructed image, while SSIM measures the similarity between two images, exploiting the fact that interdependencies between pixels, especially spatially close ones, carry important information about the structure of an image. It is clear that our method (blue lines), without overlapping redundant information, achieves comparable or even better results than ptychography (green lines) and is far superior to a single circular support constraint (red lines).
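For reference, RMSE and a single-window ("global") SSIM can be computed as below. Practical SSIM implementations use local sliding windows, so this simplified version is illustrative only; the constants follow the common K1 = 0.01, K2 = 0.03 choice:

```python
import numpy as np

def rmse(ref, rec):
    """Root mean square error between target and reconstruction."""
    return np.sqrt(np.mean((ref - rec) ** 2))

def ssim(ref, rec, L=1.0):
    """Global SSIM over a single window covering the whole image, a
    simplified stand-in for the windowed SSIM used in practice."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = ref.mean(), rec.mean()
    var_x, var_y = ref.var(), rec.var()
    cov = np.mean((ref - mu_x) * (rec - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)
            / ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))
```

Identical images give RMSE 0 and SSIM 1; both metrics degrade as the reconstruction departs from the target.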

Building on the above analysis of support-constraint characteristics, we carried out reconstruction tests on a wide range of complex-valued objects (up to tens of thousands of samples from the Visual Object Classes Challenge 2010 and other classic image databases), containing both periodic and non-periodic objects, and obtained similar results. The proposed algorithm clearly yields better results than the traditional phase retrieval algorithm.

6. Experiments

To demonstrate our method, we present the experimental setup shown in Fig. 5. Light from the laser (PGL-FS-532-10mW, CNI Laser) is expanded into a 30 mm diameter collimated beam with a beam expander (BE) and impinges on the DMD (DLP 4500, 912×1140 micromirror array, 7.6 µm micromirror pitch). Because the margin of the beam has much lower intensity than the center, we set the beam size larger than the DMD and block the outer region for better illumination uniformity. In experiments, we use groups of 5×5 micromirrors instead of single mirrors to ensure the best performance. Using the 4f system formed by lens L1 (focal length 180 mm) and lens L2 (focal length 36 mm), light is projected onto an object, which modifies the wavefront. After passing through the object, light traverses a condensing lens (CL) with a focal length of 36 mm. For convenience, we use the CL as an optical Fourier-transform element in the visible range. Our method can also be applied to lensless imaging, which is attractive for non-visible imaging [24]. In the Fourier plane of the CL, a CMOS sensor (Sony IMX253, active resolution 4112×3012, pixel size 3.45 µm) measures the irradiance in Fourier space. To retrieve the amplitude and phase information of the object, the corresponding support constraints of Fig. 2(b) are projected.


Fig. 5 Experimental verification of the proposed method. Captions: LS, laser source; BE, beam expander; L1, L2, lenses; OBJ, object; CL, condensing lens.


To check the validity of our system, we show results for a periodic object, a non-periodic object, and a transparent object, respectively. As a first example, a positive USAF test target is imaged with the above experimental setup, and the reconstructed intensity is shown in Fig. 6(a). In terms of resolvability, the line pairs remain resolved up to Group 7, Element 1, of which the line width is 3.9 µm. The intensity fluctuations along the marked profiles are shown in Fig. 6(b), demonstrating high contrast.


Fig. 6 (a) Reconstruction results of the positive USAF test target. The region of Group 7, Element 1 is zoomed in and shown in the upper right corner. (b) The profiles along the red and black line segments in the upper right corner.


We also demonstrate the performance of our method on a non-periodic object. Figure 7(a) shows a photograph of the negative amplitude mask with the three characters 'OPT'. Figure 7(b) is the reconstructed intensity image obtained with the proposed method. Objects consisting mostly of zero amplitudes urgently require tight support constraints; otherwise traditional phase retrieval methods tend to stagnate. Our support constraints provide more Fourier information and accurately recover such objects. Our reconstructions finished within 200 iterations, and the total calculation time was roughly 3 seconds for an image with a spatial resolution of 250×250 pixels. The reconstruction quality and calculation time of our method are thus satisfactory.


Fig. 7 (a) Photograph of the negative amplitude mask with three characters of ‘OPT’. (b) The reconstructed intensity image obtained with the proposed method.


Furthermore, a visible transmission grating (grating period 10 pairs/mm) is measured quantitatively using the same method and procedure. Figures 8(a)-8(b) show the reconstructed intensity and phase images of a 390 µm × 390 µm rectangular sub-region of the grating. Figure 8(c) shows a few periods of the associated red line profile without interpolation. The curve agrees well with the theoretical value. The quantitative characterization of the above samples further indicates the success and accuracy of our method.


Fig. 8 (a-b) Reconstructed intensity and phase images of the transmission grating with 10 pairs/mm period. (c) Measured quantitative phase line profile over a few grating periods.


7. Discussion

We have introduced a quantitative phase and amplitude imaging method based on patterned illumination produced by a DMD. Instead of resorting to an array of lenslets as in SH sensors or to an interferometric design, the spatial information is captured by illuminating the object with binary amplitude masks. In ptychography, the support constraint for phase retrieval is provided by the confined illumination probe in object space, so the sample or the probe must be mechanically scanned across the desired FOV. With our method, however, the support constraint is provided by the DMD, and intensity diffraction patterns from multiple illumination spots are recorded in a single exposure. In this regard, our method acts as parallel ptychography, avoiding both mechanical scanning and algorithm stagnation. The simple implementation of our system also offers the possibility of being an add-on module for a conventional microscope, thus allowing quantitative phase imaging with spatial resolution up to the diffraction limit. This method is well suited to low-light-level scenarios, where similar approaches such as single-pixel imaging have already been proposed to obtain amplitude information [25,26]. Last, given the technological challenge of manufacturing optical lenses that work outside the visible spectrum, the technique proposed here is a good candidate to operate in regions such as the IR and THz, where similar approaches have already been used to obtain amplitude images [27,28].

Funding

National Natural Science Foundation of China (NSFC) (No. 61327902).

Acknowledgments

We thank Jingtao Fan from Tsinghua University and Tao Yue from Nanjing University for discussion about this project.

References

1. F. Soldevila, V. Durán, P. Clemente, J. Lancis, and E. Tajahuerce, “Phase imaging by spatial wavefront sampling,” Optica 5(2), 164 (2018).

2. V. Micó, J. Zheng, J. Garcia, Z. Zalevsky, and P. Gao, “Resolution enhancement in quantitative phase microscopy,” Adv. Opt. Photonics 11(1), 135 (2019).

3. C. Zheng, R. Zhou, C. Kuang, G. Zhao, Z. Yaqoob, and P. T. So, “Digital micromirror device-based common-path quantitative phase imaging,” Opt. Lett. 42(7), 1448–1451 (2017).

4. G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics 7(9), 739–745 (2013).

5. H. Lu, J. Chung, X. Ou, and C. Yang, “Quantitative phase imaging and complex field reconstruction by pupil modulation differential phase contrast,” Opt. Express 24(22), 25345–25361 (2016).

6. H. Gong, T. E. Agbana, P. Pozzi, O. Soloviev, M. Verhaegen, and G. Vdovin, “Optical path difference microscopy with a Shack-Hartmann wavefront sensor,” Opt. Lett. 42(11), 2122–2125 (2017).

7. R. Horisaki, R. Egami, and J. Tanida, “Single-shot phase imaging with randomized light (SPIRaL),” Opt. Express 24(4), 3765–3773 (2016).

8. Y. Geng, J. Tan, C. Guo, C. Shen, W. Ding, S. Liu, and Z. Liu, “Computational coherent imaging by rotating a cylindrical lens,” Opt. Express 26(17), 22110–22122 (2018).

9. P. Thibault, M. Dierolf, A. Menzel, O. Bunk, C. David, and F. Pfeiffer, “High-resolution scanning X-ray diffraction microscopy,” Science 321(5887), 379–382 (2008).

10. A. Sun, X. He, Y. Kong, H. Cui, X. Song, L. Xue, S. Wang, and C. Liu, “Ultra-high speed digital micro-mirror device based ptychographic iterative engine method,” Biomed. Opt. Express 8(7), 3155–3162 (2017).

11. A. M. Maiden and J. M. Rodenburg, “An improved ptychographical phase retrieval algorithm for diffractive imaging,” Ultramicroscopy 109(10), 1256–1262 (2009).

12. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982).

13. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).

14. J. A. Rodriguez, R. Xu, C. C. Chen, Y. Zou, and J. Miao, “Oversampling smoothness: an effective algorithm for phase retrieval of noisy diffraction intensities,” J. Appl. Cryst. 46(2), 312–318 (2013).

15. J. Miao, D. Sayre, and H. N. Chapman, “Phase retrieval from the magnitude of the Fourier transforms of nonperiodic objects,” J. Opt. Soc. Am. A 15(6), 1662 (1998).

16. C. Song, D. Ramunno-Johnson, Y. Nishino, Y. Kohmura, T. Ishikawa, C.-C. Chen, T.-K. Lee, and J. Miao, “Phase retrieval from exactly oversampled diffraction intensity through deconvolution,” Phys. Rev. B 75(1), 012102 (2007).

17. S. Marchesini, “Invited article: a unified evaluation of iterative projection algorithms for phase retrieval,” Rev. Sci. Instrum. 78(1), 011301 (2007).

18. B. J. Brames, “Unique phase retrieval with explicit support information,” Opt. Lett. 11(2), 61 (1986).

19. T. R. Crimmins, “Phase retrieval for discrete functions with support constraints,” J. Opt. Soc. Am. A 4(1), 124–134 (1987).

20. J. R. Fienup, “Reconstruction of a complex-valued object from the modulus of its Fourier transform using a support constraint,” J. Opt. Soc. Am. A 4(1), 118–123 (1987).

21. X. Pan, C. Liu, and J. Zhu, “Coherent amplitude modulation imaging based on partially saturated diffraction pattern,” Opt. Express 26(17), 21929–21938 (2018).

22. T. R. Crimmins and J. R. Fienup, “Uniqueness of phase retrieval for functions with sufficiently disconnected support,” J. Opt. Soc. Am. 73(2), 218 (1983).

23. J. R. Fienup, “Reconstruction of objects having latent reference points,” J. Opt. Soc. Am. 73(11), 1421 (1983).

24. H. N. Chapman and K. A. Nugent, “Coherent lensless X-ray imaging,” Nat. Photonics 4(12), 833–839 (2010).

25. Y. Zhang, J. Suo, Y. Wang, and Q. Dai, “Doubling the pixel count limitation of single-pixel imaging via sinusoidal amplitude modulation,” Opt. Express 26(6), 6929–6942 (2018).

26. J. Shin, B. T. Bosworth, and M. A. Foster, “Single-pixel imaging using compressed sensing and wavelength-dependent scattering,” Opt. Lett. 41(5), 886–889 (2016).

27. N. Radwell, K. J. Mitchell, G. M. Gibson, M. P. Edgar, R. Bowman, and M. J. Padgett, “Single-pixel infrared and visible microscope,” Optica 1(5), 285 (2014).

28. N. Burdet, X. Shi, D. Parks, J. N. Clark, X. Huang, S. D. Kevan, and I. K. Robinson, “Evaluation of partial coherence correction in X-ray ptychography,” Opt. Express 23(5), 5452–5467 (2015).



Figures (8)

Fig. 1. Three kinds of stagnation problems. (a) Object ρ(r). (b) Reconstructed object with the twin-image and spatial-shift ambiguities. (c) Object (left) and its Fourier transform (right). (d) Fourier transform with the high frequencies in the vertical direction removed (right) and the object recovered from it, exhibiting oscillating stripes (left).
Fig. 2. (a) Iterative recovery process of phase retrieval with support constraint 1 of (b). Steps 1-5 illustrate the algorithm, following the principles of phase retrieval. Step 1: initialize the object to be recovered. Step 2: the projection onto the object space sets the components outside the support constraint to 0 while leaving the remaining values unchanged. Step 3: the projection of the object onto the Fourier-intensity set sets the modulus of the object in Fourier space to the measured Fourier modulus while leaving the phase unchanged. Step 4: repeat the HIO update of steps 2-3 once more. (b) Two complementary support constraints. The length of the two patterns is n, the width of all the bars is 0.1n, and the side length of the equilateral triangles is 0.707n. The centroids of the equilateral triangles lie at the centers of the two patterns. The selection of these parameters is examined in Figs. 3(c)-3(d).
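The projection loop of steps 1-4 in the Fig. 2 caption can be sketched in a few lines of NumPy. This is a minimal illustration rather than the authors' implementation: the square support mask, object size, iteration count, and feedback parameter beta are assumptions chosen only to make the loop runnable.

```python
import numpy as np

def support_projection(rho, S):
    # step 2: zero everything outside the support mask S
    return np.where(S, rho, 0)

def modulus_projection(rho, sqrt_I):
    # step 3: impose the measured Fourier modulus, keep the current phase
    F = np.fft.fft2(rho)
    return np.fft.ifft2(sqrt_I * np.exp(1j * np.angle(F)))

def hio_step(rho, S, sqrt_I, beta=0.9):
    # one HIO update; beta = 0.9 is an assumed feedback parameter
    pm = modulus_projection(rho, sqrt_I)
    return np.where(S, pm, rho - beta * pm)

# toy problem: a small complex object confined to a square support
rng = np.random.default_rng(0)
n = 32
S = np.zeros((n, n), dtype=bool)
S[4:12, 4:12] = True
obj = np.where(S, rng.random((n, n)) * np.exp(1j * rng.random((n, n))), 0)
sqrt_I = np.abs(np.fft.fft2(obj))         # "measured" Fourier modulus

rho = rng.random((n, n)).astype(complex)  # step 1: random initialization
for _ in range(200):                      # steps 2-4 repeated
    rho = hio_step(rho, S, sqrt_I)
rho = support_projection(rho, S)
err = np.linalg.norm(np.abs(np.fft.fft2(rho)) - sqrt_I) / np.linalg.norm(sqrt_I)
```

With the complementary bar-and-triangle masks of Fig. 2(b) substituted for the square mask, the same loop applies; the square support here is only for brevity.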
Fig. 3. Image reconstructions after 200 iterations from simulated diffraction patterns using the support constraints of Fig. 2(b) with different bar widths. (a1)-(a4) Recovered amplitudes at the intermediate bar width (0.1n). (b1)-(b4) Recovered phases at the intermediate bar width (0.1n). (c)-(d) Variation of the RMSE and SSIM of (a1)-(b4) with the ratio of the bar width to the pattern length. Colors distinguish the different objects; the two line types distinguish amplitude and phase.
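The RMSE and SSIM curves quantify reconstruction quality against the ground truth. A minimal sketch of the two metrics follows; note that the single-window SSIM here is a simplification (our assumption, not the paper's stated procedure) of the usual sliding-window SSIM, with the conventional constants k1 = 0.01 and k2 = 0.03.

```python
import numpy as np

def rmse(x, y):
    # root-mean-square error between recovered and ground-truth images
    return np.sqrt(np.mean((x - y) ** 2))

def ssim_global(x, y, data_range=1.0, k1=0.01, k2=0.03):
    # single-window SSIM over the whole image (a simplification of the
    # usual sliding-window SSIM); k1, k2 are the conventional constants
    c1, c2 = (k1 * data_range) ** 2, (k2 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

truth = np.linspace(0, 1, 64 * 64).reshape(64, 64)   # toy ground truth
noisy = truth + 0.05 * np.sin(50 * truth)            # toy reconstruction
```

Applied separately to the amplitude and to the phase of a reconstruction, these two scalars produce curves of the kind plotted in panels (c)-(d).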
Fig. 4. Image reconstructions after 200 iterations from simulated diffraction patterns with different constraints. (a)-(b) Original amplitude and phase. (a1)-(b3) Amplitude and phase recovered with a single circular support constraint (a1, b1), four overlapping circular support constraints (a2, b2), and our support constraint (a3, b3). (c)-(d) Variation of the RMSE and SSIM of (a1)-(b3) with iteration number. Colors distinguish the support constraints; the two line types distinguish amplitude and phase.
Fig. 5. Experimental setup for verifying the proposed method. LS, laser source; BE, beam expander; L1, L2, lenses; OBJ, object; CL, condensing lens.
Fig. 6. (a) Reconstruction results of the positive USAF test target; the region of Group 7, Element 1 is magnified in the upper right corner. (b) Profiles along the red and black line segments in the magnified region.
Fig. 7. (a) Photograph of the negative amplitude mask bearing the three characters “OPT”. (b) Intensity image reconstructed with the proposed method.
Fig. 8. (a)-(b) Reconstructed intensity and phase images of the transmission grating with a 10 pairs/mm period. (c) Measured quantitative phase line profiles over a few grating periods.

Equations (10)


$$P_S\,\rho(\mathbf{r}) = \begin{cases} \rho(\mathbf{r}), & \mathbf{r} \in S \\ 0, & \mathbf{r} \notin S \end{cases} \tag{1}$$

$$E(\mathbf{k}) = \begin{cases} \sqrt{I_S(\mathbf{k})}\,\dfrac{F[\rho(\mathbf{r})]}{\left|F[\rho(\mathbf{r})]\right|}, & I_S(\mathbf{k}) = \max(I_S(\mathbf{k})) \\ F[\rho(\mathbf{r})], & I_S(\mathbf{k}) < \max(I_S(\mathbf{k})) \end{cases} \tag{2}$$

$$P_M\,\rho(\mathbf{r}) = F^{-1}\{E(\mathbf{k})\} \tag{3}$$

$$I_S(\mathbf{k}) = F\left\{ F^{-1}[I(\mathbf{k})]\,\mathrm{sinc}(\mathbf{r}/M) \right\} \tag{4}$$

$$I(\mathbf{k}) = \frac{w N^2 I_M(\mathbf{k})}{\max[I_M(\mathbf{k})]} \tag{5}$$

$$\rho_{n+1}(\mathbf{r}) = \begin{cases} P_M\,\rho_n(\mathbf{r}), & \mathbf{r} \in S \\ \rho_n(\mathbf{r}) - P_M\,\rho_n(\mathbf{r}), & \mathbf{r} \notin S \end{cases} \tag{6}$$

$$W(\mathbf{k}) = \exp\left[-\tfrac{1}{2}(\mathbf{k}/\alpha)^2\right] \tag{7}$$

$$\rho(\mathbf{r}) = \rho_1(\mathbf{r}) + \rho_2(\mathbf{r}) \ast \delta(\mathbf{r} - \mathbf{r}_0) \tag{8}$$

$$F(\mathbf{k}) = F_1(\mathbf{k}) + F_2(\mathbf{k})\exp(-i 2\pi \mathbf{k}\cdot\mathbf{r}_0) \tag{9}$$

$$|F(\mathbf{k})|^2 = \left|F_1(\mathbf{k}) + F_2(\mathbf{k})\exp(-i 2\pi \mathbf{k}\cdot\mathbf{r}_0)\right|^2 = |F_1(\mathbf{k})|^2 + |F_2(\mathbf{k})|^2 + 2|F_1(\mathbf{k})||F_2(\mathbf{k})|\cos[2\pi \mathbf{k}\cdot\mathbf{r}_0 + \varphi_1(\mathbf{k}) - \varphi_2(\mathbf{k})] \tag{10}$$
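The last three equations state that a second copy of the object shifted by r0 imprints cosine fringes, with phase difference φ1 − φ2, on the measured Fourier intensity. A short 1-D discrete analogue (the array size, shift, and random test functions are arbitrary assumptions) confirms the three-term expansion numerically:

```python
import numpy as np

# 1-D discrete analogue of Eqs. (8)-(10): rho = rho1 + rho2 shifted by r0
n, r0 = 256, 40
rng = np.random.default_rng(1)
rho1 = np.zeros(n, complex); rho1[:16] = rng.random(16)
rho2 = np.zeros(n, complex); rho2[:16] = rng.random(16)
rho = rho1 + np.roll(rho2, r0)            # rho2(r) convolved with delta(r - r0)

k = np.arange(n) / n                      # normalized spatial frequency
F1, F2 = np.fft.fft(rho1), np.fft.fft(rho2)
# left-hand side: Fourier intensity of the combined object
lhs = np.abs(np.fft.fft(rho)) ** 2
# right-hand side: |F1|^2 + |F2|^2 plus the cosine cross term
rhs = (np.abs(F1) ** 2 + np.abs(F2) ** 2
       + 2 * np.abs(F1) * np.abs(F2)
       * np.cos(2 * np.pi * k * r0 + np.angle(F1) - np.angle(F2)))
```

By the discrete shift theorem, `np.roll(rho2, r0)` transforms to `F2 * exp(-2j*pi*k*r0)`, so `lhs` and `rhs` agree to numerical precision.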