
Single-shot pixel super-resolution phase imaging by wavefront separation approach


Abstract

We propose a novel approach for lensless single-shot phase retrieval, which provides pixel super-resolution phase imaging. The approach is based on a computational separation of the carrying and object wavefronts. The imaging task is to reconstruct the object wavefront, while the carrying wavefront corrects the discrepancies between the computational model and the physical elements of the optical system. To reconstruct the carrying wavefront, we perform two preliminary tests as a system calibration without an object. Noise, which is critical for phase retrieval, is suppressed by a combination of sparsity- and deep learning-based filters. The robustness of the proposed approach to discrepancies in the computational model and its pixel super-resolution are shown in simulations and physical experiments. We report an experimental computational super-resolution of 2 μm, which is 3.45× smaller than the resolution following from the Nyquist-Shannon sampling theorem for the used camera pixel size of 3.45 μm. For phase bio-imaging, we provide reconstructions of buccal epithelial cells with a quality close to that of a digital holographic system with a 40× magnification objective. Furthermore, the single-shot advantage makes it possible to record dynamic scenes, where the frame rate is limited only by the camera. We provide an amplitude-phase video clip of a moving live single-celled eukaryote.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Complex-valued object imaging has been studied in a wide range of tasks over the past decades and has led to significant developments in phase microscopy. It is of high importance in biomedical imaging, since a large number of biological specimens (e.g., epithelial cells, tissues, red blood cells, etc.) are so-called phase objects [1,2]. These cells are made of molecules with refractive indices very close to that of water. This means that such objects require special preparation if analyzed by traditional bright-field microscopy, because their amplitude alone cannot yield an adequate amount of information about the object’s internal structure. A noninvasive way to obtain the missing information is to recover the object’s phase characteristics using quantitative phase imaging methods [3–5]. The methods used for complex imaging are usually based on two main techniques: holography [6] or phase retrieval [7]. The history of phase retrieval goes back more than 50 years, to Sayre’s observations of Bragg diffraction [8]. He captured the diffraction pattern of a coherently illuminated sample and recognized that an adequately high sampling rate would result in a unique real-space image of the sample. This idea led to the early phase retrieval method of coherent diffraction imaging (CDI) [9]. Contrary to holography, where the phase reconstruction is made from a hologram, i.e., the interference pattern between the object and reference beams, in phase retrieval only a single beam is used. This single beam is disturbed by an object, and the intensity of the diffracted wavefront is captured by a sensor as a diffraction pattern. The fact that only the intensity of the light radiation can be captured while the phase is lost in all observations results in an ill-posed problem called the phase problem. The Gerchberg-Saxton algorithm [10] provides a good tool to solve this problem by iterative forward and backward propagation between the object and sensor planes. In each iteration, the amplitude of the wavefront is updated by the captured images; hence the errors from the ill-conditioning are reduced. The technique depends on two sets of images, one for each plane. This dependence can be bypassed by prior knowledge of support constraints (e.g., non-negativity or a known apodization diameter) on the planes. Applying these constraints in every iteration decreases the reconstruction error, so this generalization is called the error-reduction algorithm [7]. Afterward, Fienup proposed an additional time-domain correction step to improve the convergence rate, which led to the well-known hybrid input-output (HIO) algorithm [11].

At present, the literature provides several methods to solve the phase problem. As Fannjiang et al. state [12], the natural way to overcome the ill-posedness is to reduce the number of unknown parameters. One of the most common approaches is using the above-mentioned support constraints on the signal [13,14]. Another, more recent and extensively studied solution is sparsity [15]. It includes a special prior based on the assumption that the observed object $x$ has some known sparse representation, $x=\Psi \alpha$. The representation matrix $\Psi$ is called the sparsity basis, while $\alpha$ stands for the sparse vector. In the most basic case the object is composed of a few point sources, so $\Psi$ is an identity matrix.

In most approaches, the key to solving the phase problem is to capture the propagated object wavefront in several decorrelated diffraction patterns (e.g., [16]). The decorrelation can be generated by lateral shearing [17], using a tilt diffraction model [18], displacing the sensor plane along the optical axis [19], or by ptychography [20–22]. Another possibility is to extend the system with more illumination wavelengths. The advantages of the multi-wavelength method are the capability to work with spatially incoherent light [23] and the possibility of color imaging [24]. Special programmable devices can also be used to obtain decorrelated images, such as a Spatial Light Modulator (SLM) [25] or a Digital Micromirror Device (DMD) [26].

Most developed techniques use the far-field approach, since it requires a much simpler propagation model, defined by a Fourier transform only. However, with the trend of miniaturization in technology [27], far-field techniques are no longer adequate because they are not implementable in a compact system. Near-field techniques provide a good solution but require changes in the propagation model. In the far-field approach, spherical waves are flattened out far from the source and can therefore be treated as planar wavefronts; in the near-field regime this assumption is no longer valid, and the wavefronts have to be treated as spherical. The Angular Spectrum (AS) method solves the wave equation exactly for near-field diffraction, since its formula follows from the Rayleigh-Sommerfeld diffraction theory without approximation.

A significant advantage of the phase retrieval method is its lensless imaging capability, which allows a compact realization. Eschewing lenses makes the system light and cost-effective, free of lens aberrations, and provides a larger field of view [28]. The literature provides alternatives to imitate lenses with special modulation masks, so-called Fresnel lenses [29,30]. However, Fresnel lenses are sensitive to chromatic aberrations when used with broadband sources and distort the formed images. Another lensless approach, which we follow, is wavefront modulation with a modulation mask [31–33] or a diffuser [34]. These distort an illuminating carrying wavefront and provide a coded diffraction pattern on the sensor with known modulation. However, along with its positive effect, the mask is a source of additional errors and corrupts the phase reconstruction. It increases the loss from ill-conditioning; therefore, several observations with different masks are typically used for error compensation and better convergence [35].

In recent years, various state-of-the-art techniques have been developed to realize single-shot phase retrieval. Several single-shot ptychographic methods have been proposed: for instance, Lee et al. used a lens array to form the subimages [36] and achieved 3.1 $\mu$m resolution; He et al. applied a Dammann grating at a certain distance [37] to resolve 4.2 $\mu$m details; and Goldberger et al. presented a pinhole array-based 3D ptychography technique [38] with a theoretical resolution of 10 $\mu$m. The literature also provides single-shot wavefront modulation-based techniques with simpler hardware; however, this simplicity generally trades off against resolution. DiffuserCam [34] offers a compact, easy-to-build camera with a resolution of 79 $\mu$m, and Horisaki et al. presented a coded aperture imaging technique [39] to resolve 22 $\mu$m details of USAF targets.

In our previous paper [32], the design and modeling of a single-exposure lensless system with wavefront modulation by a random binary phase mask were proposed. The reconstruction algorithm developed there, named SR-SPAR, is based on a support constraint in the object plane and on a sparsity-based block-matching 3D (BM3D) [40] filter. We demonstrated pixel super-resolved reconstructions in simulations and physical experiments; however, the algorithm fails to reconstruct small phase values and generally provides noise-corrupted output, especially for experimental data.

The main contribution of the current paper is a new model approach. To overcome the problems of the previous solution, the problem formulation has been rethought. In the new formulation, we propose a computational separation of the carrying wavefront from the object’s wavefront and a specific calibration of the optical system. The separated carrying wavefront compensates the discrepancies between the computational model and the physical elements, providing a match between the computational model and the real physical system. This unique physics-based compensation is less demanding than, for example, the recently reported deep learning-based calibration system [41]. The developed Single-shot Super-Resolution Phase Retrieval (SSR-PR) algorithm provides quantitative phase imaging for a wider range of object phase values at high resolution and with substantially better noise suppression. In contrast to the state-of-the-art single-shot phase retrieval algorithms, the proposed approach not only overcomes the trade-off between hardware complexity and resolution but also surpasses these methods in achieved resolution. The formulation of the phase retrieval problem with this new approach is presented in Section 2. The calibration includes two prior experiments made without the studied object: the laser beam only, and the mask-diffracted laser beam. Using them together, we can estimate the corrected carrying wavefront in the object plane (Section 3.2). In Section 3 the SSR-PR algorithm is described, with the updated upsampling (Section 3.1), carrying wavefront calculation (Section 3.2), and noise suppression (Section 3.3). We provide simulation experiments in Section 4 and demonstrate high-quality pixel super-resolution reconstruction and robustness to errors of the mask and of the illumination wavefront (Section 4.2). Experimental results are in Section 5, where we show reconstructions of a phase target and biological objects.

2. Problem formulation

The typical lensless phase retrieval scheme includes a coherent light source (e.g., a laser beam) and an object to be reconstructed. If the illuminating (carrying) wavefront of the laser beam $u_{b,0}$ is diffracted by the complex object $u_{o}$, then the intensity pattern of the diffracted wavefront on the sensor plane can be written as

$$z=|P_d \left \{u_{o} \cdot u_{b,0}\right \}|^2.\tag{1}$$

Here $u_{o}$ and $u_{b,0}$ are 2D functions and the operator $P_{d}$ stands for free-space forward propagation over the distance $d$. Typically, the carrying wavefront is assumed to be well known and flat ($u_{b,0}=1$); in the proposed approach, however, we keep it in the formulation, assuming that it might not be equal to 1. The phase problem consists in the fact that only the intensity of the light radiation ($z$) can be captured while the phase is lost. This is an ill-posed problem; therefore, traditionally, quantitative phase reconstruction is not possible from a single diffraction pattern without prior knowledge such as a support constraint. As developed in our previous approach [32], we apply a random binary modulation phase mask $M$ in the system, as shown in Fig. 1. The proposed system is simple, without lenses and moving parts. The optical elements are arranged in an in-line configuration with small distances ($d<1$ cm) between them, resulting in a compact system. Taking the modulation into account, Eq. (1) can be rewritten as

$$z = \left | P_{d_{2}}\left \{ M\cdot P_{d_{1}}\left \{u_{o} \cdot u_{b,0} \right \}\right \} \right |^{2},\tag{2}$$
where $d_{1}$ and $d_{2}$ correspond to the object-mask and mask-sensor distances. The beam is diffracted by the mask, propagates forward, and spreads across the sensor, resulting in a coded intensity pattern. This coded pattern covers a larger area of the sensor; therefore, more data can be collected. More data, together with the prior-known mask pattern, offer enough information to improve the complex-valued object reconstruction. The wavefront propagation is modelled by the Rayleigh-Sommerfeld theory, in which the angular spectrum (AS) method [42] is defined as:
$$u(x,y,d)=\mathfrak{F}^{-1} \left \{ H(f_{x},f_{y},d)\cdot\mathfrak{F} \left \{ u(x,y,0) \right \} \right \},\tag{3}$$
$$H\left (f_{x},f_{y},d \right)=\begin{cases} e^{i\frac{2\pi }{\lambda }d\sqrt{1-\lambda ^{2}\left ( f_{x}^{2} + f_{y}^{2}\right )}}, & f_{x}^{2} + f_{y}^{2}\leq \frac{1}{\lambda ^{2}},\\ 0, & \text{otherwise}.\end{cases}\tag{4}$$

The method determines the free-space propagation of $u(x,y,0)$ over the distance $d$, resulting in $u(x,y,d)$, where the operators $\mathfrak {F}$ and $\mathfrak {F}^{-1}$ stand for the Fourier and inverse Fourier transforms. The AS transfer function $H(f_{x},f_{y},d)$ is defined by the distance $d$, the spatial frequencies $f_{x}$, $f_{y}$, and the wavelength $\lambda$.
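For concreteness, the following is a minimal MATLAB sketch of the AS propagation of Eqs. (3)-(4); the function and variable names are illustrative and not taken from the paper's published code (Code 1):

```matlab
function u_d = as_propagate(u0, dx, lambda, d)
    % Angular spectrum propagation, Eqs. (3)-(4).
    % u0     : complex field sampled at z = 0 (N1-by-N2 grid)
    % dx     : computational pixel size [m]
    % lambda : wavelength [m]
    % d      : propagation distance [m]; negative d propagates backward
    [N1, N2] = size(u0);
    fx = (-floor(N2/2):ceil(N2/2)-1) / (N2*dx);    % spatial frequencies, x
    fy = (-floor(N1/2):ceil(N1/2)-1) / (N1*dx);    % spatial frequencies, y
    [FX, FY] = meshgrid(fx, fy);
    arg = 1 - lambda^2*(FX.^2 + FY.^2);
    H = exp(1i*(2*pi/lambda)*d*sqrt(max(arg, 0))); % AS transfer function, Eq. (4)
    H(arg < 0) = 0;                                % drop evanescent components
    u_d = ifft2(ifftshift(H) .* fft2(u0));         % Eq. (3)
end
```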

Fig. 1. Sketch of the SSR-PR optical system: a coherent illumination source (laser), an object to be tested, a binary phase mask, and a sensor. The sketch was made with the 3doptix optical system design tool.

The coded diffraction pattern has to be decoded with the known phase mask and distances to achieve decent reconstructions. As we showed in our previous paper [32], if the system parameters are known perfectly, the ill-conditioning can be resolved by the modified SR-SPAR algorithm and the phase reconstructed unambiguously. The achieved resolution is then limited only by the transfer function (Eq. (4)). However, in physical experiments we have, at best, approximate values of the parameters. Due to the approximation errors, we had difficulties reconstructing small phase values, and the retrieved object was corrupted by the errors of the nominally precise inputs.

The proposed SSR-PR method uses a prior calibration of the optical system (Fig. 1) to provide an estimate of the carrying wavefront. The following algorithm has been developed to efficiently compensate the discrepancy between the real optical system and the computational models used for wavefront reconstruction. In Eq. (2), $u_{b,0}$ is replaced by a complex-valued compensated carrying wavefront on the object plane as

$$\widehat{u_{b}} = u_{b,0} \cdot CPO.\tag{5}$$

Here, $u_{b,0}$ and the Compensation by Prior Observation ($CPO$) are 2D functions calculated from the calibration diffraction patterns of two prior experiments, as presented in Section 3.2. These diffraction patterns allow us to calculate a complex-valued compensation ($CPO$). The compensation is able to correct the approximation errors caused by the nonideal optical elements, estimate the phase of the carrying wavefront, and increase the correspondence of the image formation model with reality. Moreover, we improved the initial diffraction pattern upsampling for the computational super-resolution reconstructions of SSR-PR. Several interpolation methods were analyzed, and the most adequate technique turned out to be stairstep interpolation with the Lanczos-3 kernel. It has already proved its effectiveness in previous research on remote sensing [43] and medical imaging [44].

3. SSR-PR algorithm

In this section, we describe the developed SSR-PR algorithm. In Section 3.1 the enhancements of the initial diffraction pattern up-sampling are presented, which enable pixel super-resolution reconstruction. Using the up-sampled diffraction patterns, we introduce a novel approach in Section 3.2 to approximate the compensated carrying wavefront. This compensated wavefront and the filters introduced in Section 3.3 are embedded into the SSR-PR algorithm (Section 3.4).

3.1 Diffraction pattern up-sampling for pixel super-resolution

The computational wavefront propagation requires discretization of the continuous diffraction patterns with a computational pixel size of $\Delta _{c} \times \Delta _{c}$. Physically, the sampling is performed by the sensor; therefore, the resolution is limited by the sensor pixel size of $\Delta _{s} \times \Delta _{s}$. This means that traditionally the highest resolution is achieved with $\Delta _{c}=\Delta _{s}$, which is also called pixel-wise resolution. We speak of computational super-resolution if the smallest resolved details are smaller than the sensor pixel size $\Delta _{s}$. A general approach to super-resolution imaging is to use multiple slightly shifted observations. If the captured patterns are merged properly, the result is an upsampled, super-resolved diffraction pattern, from which a super-resolved image of the object is retrievable [45]. The required subpixel resolution can also be achieved by special upsampling techniques developed for single-image processing [46].

In this paper, we followed a different approach based on the modeling of optical image formation and registration. We found that using upsampled diffraction patterns as input to the iterative method increases the resolution. In our preceding paper [32], we demonstrated that using this technique with an ideal support constraint and a BM3D filter, the resolution is limited only by the AS transfer function (Eq. (4)). This limitation occurs if the computational pixel size is too small, because then high frequencies are eliminated: the diagonal spatial frequency $\sqrt{2}/(2\Delta_{c})$ of the sampling grid exceeds the evanescent cutoff $1/\lambda$ of Eq. (4) once $\Delta_{c}<\lambda/\sqrt{2}$. Due to this elimination, the problem becomes more ill-posed and the reconstructions are not acceptable. In our system ($\lambda =532$ nm), this limit occurs if $\Delta _{c}<376$ nm. It corresponds to a super-resolution factor of $r_{s}=\Delta _{s}/\Delta _{c}=9.17$, which means that the diffraction patterns can be initially upsampled by at most 9$\times$. In the nonideal case, the reconstructions are disturbed by the errors of the optical system, and these errors accumulate in the up-sampling. We found that taking $r_{s}>4$ does not observably improve the resolution, but significantly increases the calculation time.

Previously, in SR-SPAR, the diffraction patterns were upsampled with a common box interpolation kernel and a super-resolution factor of $r_{s}$. However, we recognized that the resolution of the reconstructions depends on the interpolation method used for the up-sampling. For SSR-PR, we have tested the most commonly used interpolation kernels as follows:

1. box kernel: nearest-neighbor interpolation with pixel value duplication
2. triangle kernel: bilinear interpolation with distance-weighted averaging in the $2\times 2$ nearest neighborhood
3. cubic kernel: bicubic interpolation with distance-weighted averaging in the $4\times 4$ nearest neighborhood
4. Lanczos-2 kernel: Lanczos interpolation based on the 3-lobed Lanczos window function
5. Lanczos-3 kernel: Lanczos interpolation based on the 5-lobed Lanczos window function
The interpolation kernels were used to up-sample the input diffraction patterns of the SSR-PR algorithm (Section 3.4), and the Relative Root-Mean-Square Errors (RRMSE) of the reconstructions are shown in Fig. 2. The RRMSE is calculated as
$$RRMSE=\frac{\left \| \varphi_{o}-\widehat{\varphi_{o}} \right \|_{F}}{\left \| \varphi_{o} \right \|_{F}},\tag{6}$$
where $\left \| \cdot \right \|_{F}$ stands for the Frobenius norm, while $\varphi _o$ and $\widehat {\varphi _o}$ are the phases of the original and the reconstructed object.
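In MATLAB, Eq. (6) is a one-liner; `phi` and `phi_hat` below are illustrative names for the true and reconstructed phase maps:

```matlab
% RRMSE of a phase reconstruction, Eq. (6).
% phi, phi_hat: true and reconstructed phase maps (illustrative names).
rrmse = norm(phi - phi_hat, 'fro') / norm(phi, 'fro');
```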

Fig. 2. RRMSE of phase reconstructions by (a) different up-sampling kernels and (b) stairstep interpolation of the Lanczos-3 kernel with different increment-steps.

As shown, using the Lanczos-3 kernel the RRMSE decreases by 20% compared to the previously used box kernel. We then applied stairstep interpolation with the selected Lanczos-3 kernel. In this method, we repeatedly resize the pattern by the same increment until reaching 400% ($r_{s}=4$). If the ratio between the maximum size increase of 300% and the increment-step is not an integer, the last increment-step is adjusted to reach exactly the required upsampling size. The lowest RRMSE is achieved using an increment-step of 3%, giving $RRMSE<0.1$, which counts as a successful reconstruction.
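A minimal MATLAB sketch of this stairstep up-sampling is given below, assuming one plausible reading of the scheme (increments defined relative to the original size); the function and variable names are illustrative:

```matlab
function z_up = stairstep_upsample(z, rs, step)
    % Stairstep Lanczos-3 interpolation of a diffraction pattern z.
    % rs   : super-resolution factor (4 corresponds to 400%)
    % step : per-stage increment relative to the original size (0.03 = 3%)
    n0 = size(z);
    scales = 1+step : step : rs;
    if isempty(scales) || scales(end) < rs
        scales(end+1) = rs;          % last step lands exactly on the target
    end
    z_up = z;
    for s = scales
        z_up = imresize(z_up, round(n0*s), 'lanczos3');
    end
end
```

A call such as `z_up = stairstep_upsample(z, 4, 0.03);` reproduces the best-performing setting reported in Fig. 2(b).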

3.2 CPO calculation

Proper reconstruction requires precisely known prior knowledge about the optical system to solve the phase problem. However, discrepancies can appear between the computational model and reality. These unnoted discrepancies in the illumination, mask parameters, propagation modeling, and distances result in deficient parameter approximations and inaccurate image formation. Using a support constraint and a sparsity-based filter can moderately reduce the resulting noise, but the small phase values of small details are lost [32].

The SSR-PR method aims to expand our prior knowledge by providing a calibration of the optical system. We define the compensation $CPO$, which has a threefold function: give a proper estimate of the phase part of the carrying wavefront, compensate the corruptions appearing due to the errors of the modulation mask and the distances, and correct the approximations in the wavefront propagation. Two prior experiments are performed as follows. In the first experiment, the mask and the object are omitted and we capture the intensity pattern of the illumination source (beam) only, as $z_{b}$. This pattern gives a better approximation of the carrying wavefront, $u_{b,0}=\sqrt {z_{b}}$, instead of the general assumption $u_{b,0}=1$. We assume that the laser generates coherent plane waves with the same wavefront on all planes. In the second experiment, the mask is placed back and the diffraction pattern is captured as $z_{m}$. Applying the up-sampling techniques of Section 3.1 to the diffraction patterns $z$, $z_{b}$, and $z_{m}$ results in $\widetilde {z}$, $\widetilde {z_{b}}$, and $\widetilde {z_{m}}$.

Since we assume that the laser has the same amplitude on all planes, the carrying wavefront on the mask plane is taken as $\sqrt {z_{b}}$. The phase being unknown, this wavefront is just an approximation of the real carrying wavefront, which we aim to correct by $CPO$. The mask $M$ is applied to the up-sampled carrying wavefront on the mask plane ($M\cdot \sqrt {\widetilde {z_{b}}}$), and the result is propagated forward to the sensor plane. The amplitude is replaced by $\sqrt {\widetilde {z_{m}}}$, and the updated wavefront is propagated back to the object plane with the mask phase subtracted, resulting in $\widehat {u_{b}}$, from which $CPO$ can be calculated as

$$CPO=\frac{\widehat{u_{b}}}{\sqrt{\widetilde{z_{b}}}}.\tag{7}$$

The flowchart of this compensation calculation is shown in Fig. 3. Note that, as this equation shows, $\widehat {u_{b}}$ already contains the compensation; calculating $CPO$ explicitly is therefore for reference only.
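Read off Fig. 3, the calculation can be sketched in MATLAB as follows, reusing the `as_propagate` helper from Section 2; `zb_up` and `zm_up` denote the up-sampled calibration patterns $\widetilde{z_{b}}$ and $\widetilde{z_{m}}$, and all names are illustrative:

```matlab
% Compensated carrying wavefront and CPO (Fig. 3, Eq. (7)).
% M: complex mask transmittance exp(1i*phi_M); d1, d2, dx, lambda as in Fig. 1.
u_mask = M .* sqrt(zb_up);                        % modulated carrying wavefront
u_sens = as_propagate(u_mask, dx, lambda, d2);    % forward to the sensor plane
u_sens = sqrt(zm_up) .* exp(1i*angle(u_sens));    % amplitude update by z_m
u_mask = as_propagate(u_sens, dx, lambda, -d2);   % back to the mask plane
u_mask = u_mask .* conj(M);                       % subtract the mask phase (|M| = 1)
ub_hat = as_propagate(u_mask, dx, lambda, -d1);   % back to the object plane
CPO    = ub_hat ./ sqrt(zb_up);                   % Eq. (7), for reference only
```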

Fig. 3. Flowchart to calculate the $CPO$-compensated carrying wavefront.

3.3 Enhanced filters

The modulation mask scatters the beam, so the high intensities are spread across the sensor. Due to this scattering and wide spreading, a larger area of the sensor is in use; therefore, much more information can be collected, but more noise is also generated and accumulated. The sparsity-based BM3D filter has already proved its effectiveness for several optical phase recovery problems [47,48], and we also used it successfully to filter the reconstructed wavefront [32]. However, similar patches of the modulation mask can occasionally result in correlated noise, which corrupts the sparse representation. To overcome this problem, SSR-PR uses the latest version of BM3D [40], which focuses on the collaborative filtering of correlated noise.

However, BM3D might still fail occasionally, so a state-of-the-art deep learning-based plug-and-play image restoration (IR) filter, the Dilated-Residual U-Net (DRUNet) deep denoiser prior [49], was added. It effectively combines U-Net [50] and ResNet [51] to create a better precondition for the sparse filtering. We use the filter configuration trained for plug-and-play IR applications on a large dataset of almost 9000 images with different noise levels. Overall, DRUNet provides a better RRMSE than BM3D, though some details might be lost; these small details can be preserved by the sparsity-based BM3D filter. The experiments showed that applying DRUNet before BM3D decreases the reconstruction RRMSE compared with using the DRUNet filter alone.
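The resulting filtering order can be sketched as below; `drunet_denoise` and `bm3d_filter` are hypothetical stand-ins for the published DRUNet [49] and BM3D [40] implementations, and `sigma` is an assumed noise-level parameter:

```matlab
% Filtering cascade of Section 3.3: DRUNet first, BM3D after.
% drunet_denoise, bm3d_filter: hypothetical wrappers around [49] and [40].
phi = angle(u_o);                    % phase of the current object estimate
phi = drunet_denoise(phi);           % deep prior: strong overall denoising
phi = bm3d_filter(phi, sigma);       % sparsity prior: preserves small details
amp = bm3d_filter(abs(u_o), sigma);  % amplitude is filtered separately
u_o = amp .* exp(1i*phi);            % merge into a complex wavefront
```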

Previously, apodization with zero values was used to eliminate the noise appearing in the area outside the support constraint; the diameter of the support constraint corresponded to the size of the illumination beam. In SSR-PR, the wavefront on the object plane, $u_{o}\cdot \widehat {u_{b}}$, is suppressed (divided) by $\widehat {u_{b}}$. The remaining noise outside the support constraint is apodized by assuming a plane wavefront with zero phase shift.

3.4 SSR-PR iterations

The flowchart of the SSR-PR method is shown in Fig. 4. The initialization consists of the calculation of the compensated carrying wavefront $\widehat {u_{b}}$ and the initialization of the object as $u_{o}^{(0)} = 1$; their sizes correspond to the up-sampled diffraction pattern $\widetilde {z}$. The algorithm parameters include the distances ($d_{1}$, $d_{2}$), sensor and mask properties, mask position, wavelength, and filtering properties. The first step of the main reconstruction is a forward propagation of the complex wavefront $u_{o} \cdot \widehat {u_{b}}$, where $\widehat {u_{b}}$ contains the amplitude of the carrying wavefront and the compensation $CPO$, as in Eq. (5). The second step is a wavefront update with the up-sampled amplitude $\sqrt {\widetilde {z}}$, derived from the captured intensity of the mask-diffracted object observation. The third step starts with backward propagation of the updated wavefront, which is then divided by the compensated carrying wavefront $\widehat {u_{b}}$. To avoid division by zero values, we use regularization [52] on $\widehat {u_{b}}$ with the weight $\alpha$ as

$$u_{b}'=\frac{u_{b}^{*}}{\left ( 1-\alpha \right ) \left | u_{b} \right |^{2}+\alpha\left | u_{b} \right |_{max}^{2}},\tag{8}$$
where $u^{*}$ stands for the conjugate of $u$. In the fourth step, the amplitude and phase of the obtained object wavefront are filtered separately, then merged into a complex wavefront and apodized. Iterations run until a stopping criterion is met (e.g., number of iterations, RRMSE).
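Putting the pieces together, one SSR-PR iteration can be sketched in MATLAB as follows, using the `as_propagate` helper and filtering cascade sketched above; `z_up` is the up-sampled object observation $\widetilde{z}$, `alpha` is an illustrative regularization weight, and `filter_and_apodize` is a hypothetical helper wrapping Section 3.3:

```matlab
% Minimal sketch of the SSR-PR main loop (Fig. 4); names are illustrative.
alpha  = 0.01;                                    % regularization weight, assumed
ub_reg = conj(ub_hat) ./ ...                      % Eq. (8): regularized division
         ((1-alpha)*abs(ub_hat).^2 + alpha*max(abs(ub_hat(:)))^2);
u_o = ones(size(ub_hat));                         % object initialization u_o = 1
for it = 1:20
    % 1) forward propagation of the modulated object wavefront
    u_s = as_propagate(M .* as_propagate(u_o .* ub_hat, dx, lambda, d1), ...
                       dx, lambda, d2);
    % 2) amplitude update by the captured, up-sampled diffraction pattern
    u_s = sqrt(z_up) .* exp(1i*angle(u_s));
    % 3) backward propagation and separation of the carrying wavefront
    u_o = as_propagate(as_propagate(u_s, dx, lambda, -d2) .* conj(M), ...
                       dx, lambda, -d1) .* ub_reg;
    % 4) separate amplitude/phase filtering and apodization (Section 3.3)
    u_o = filter_and_apodize(u_o);                % hypothetical helper
end
```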

Fig. 4. Flowchart of the SSR-PR method.

4. Simulations

To demonstrate the advantages of the SSR-PR approach and algorithm, we provide a set of simulations with a self-made resolution phase target as the object $u_{o}$ (Fig. 5(a)). The object follows the structure of a physical USAF phase target, with parallel etched lines simulated for each element of each group. The simulated etch depth of the lines is 100 nm, which is more than 5$\times$ smaller than the used illumination wavelength of $\lambda =532$ nm, corresponding to a phase shift of only

$$\Delta\varphi = \frac{2 \pi (n-1) \Delta h}{\lambda}=\frac{2 \pi (1.4607-1)\cdot 100~\mathrm{nm}}{532~\mathrm{nm}} = 0.544~\mathrm{rad}.\tag{9}$$

The wavefront modulation is made by a single stationary binary phase mask $M$. The mask parameter selection is based on our previous research [32]: the experiments showed that the best performance is achieved if the mask pixels are half the size of the sensor pixels ($\Delta _{s}=3.5~\mu$m, $\Delta _{m}=1.75~\mu$m) and their maximum phase shifts are close to $\pi$. Correspondingly, the mask pixel heights are taken as binary random values with equal probabilities for 0 and 500 nm (0 and 2.72 rad). For the intensity pattern generation, it is desirable to take as small a computational pixel size $\Delta _{c}$ as possible, to give the best approximation of the continuous data. To simulate the behaviour of the sensor, the patterns are downsampled by averaging every $r_{s}\times r_{s}$ patch and cropping to the sensor size of $N_{1}\times N_{2}=2048\times 2448$. However, a restriction arises [35] due to the discretization of the continuous wavefront, which limits the size of the computational matrices $N_{c}\times N_{c}$. The restriction involves the summed propagation distance $d_{1}+d_{2}$, the computational pixel size $\Delta _{c}$, and the wavelength $\lambda$. The inequality defines the minimum size $N_{eff}$ of the computational matrices for effective sampling as

$$r_{s}\cdot N_{c}\geq N_{eff}=(d_{1}+d_{2})\lambda/\Delta_{c}^2.\tag{10}$$

The wavelength is given, and the distances are taken to imitate a physical scheme, $d_{1}=1$ mm and $d_{2}=8$ mm. Since correct mask placement requires a super-resolution factor that is a power of two, and this factor is limited to $r_{s}=9.17$, the smallest computational pixel size can be taken as $\Delta _{c}=0.4375~\mu$m ($r_{s}=8$), resulting in $N_{eff}=25015$. To satisfy the inequality with a safe margin, we took $r_{s}\cdot N_{c}=26000$. This way we have ideally modeled a high-resolution wavefront to be reconstructed from a low-resolution observation using Lanczos-3 up-sampling. The MATLAB demo codes of the SSR-PR algorithm are available in Code 1 (Ref. [53]).
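The sampling condition of Eq. (10) is easy to check numerically for the simulated scheme:

```matlab
% Sampling condition, Eq. (10), for the simulated scheme.
d1 = 1e-3; d2 = 8e-3;          % object-mask and mask-sensor distances [m]
lambda = 532e-9;               % wavelength [m]
dc = 0.4375e-6;                % computational pixel size [m]
N_eff = (d1 + d2)*lambda/dc^2  % = 25015, so r_s*N_c = 26000 suffices
```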

Fig. 5. Phase reconstructions of the simulated phase-only resolution target, with their corresponding cross-sections: (a) original phase image, (b) reconstruction with the previous SR-SPAR method [32], and (c) with the novel SSR-PR method.

4.1 Simulation results

The phase reconstructions of the complex-valued object after 20 iterations are shown in Fig. 5. We have compared the SSR-PR reconstruction (Fig. 5(c)) with the original object (Fig. 5(a)) and the previous SR-SPAR method [32] (Fig. 5(b)). Since the modulation mask corrupts the wavefront, SR-SPAR requires strong sparse filtering and a narrow support constraint window, which eliminate the small phase values. Due to the improvements presented in Section 3, the SSR-PR approach provides significantly better reconstructions, with a 3$\times$ smaller RRMSE than SR-SPAR, while also preserving small phase values. The cross-sections of the phase values demonstrate phase-correct pixel super-resolution by resolving the 6th element of group 9. The line thickness of this element is $0.875~\mu$m, which is 4$\times$ smaller than the sensor pixel size. Being able to resolve such small details is a significant advantage over diffraction-limited systems, since we overcome the diffraction-limited resolution $\Delta _{Abbe}$. This limit can be calculated by Abbe's criterion as

$$\Delta_{Abbe} = \frac{\lambda}{NA}\approx\frac{2d\lambda}{N\Delta_{s}}=\frac{2(d_{1}+d_{2})\lambda}{N\Delta_{s}}=1.34~\mu m.\tag{11}$$
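As a quick numerical check of Eq. (11), assuming the simulation values $N = 2048$ and $\Delta_{s} = 3.5~\mu$m:

```matlab
% Abbe limit, Eq. (11), with the simulation parameters assumed above.
d = 9e-3; lambda = 532e-9; N = 2048; ds = 3.5e-6;
delta_Abbe = 2*d*lambda/(N*ds)   % ~1.34e-6 m, i.e. 1.34 um
```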

4.2 Error compensation demonstration

One of the significant advantages of SSR-PR is robustness to setup errors between the ideal and real optical elements. Namely, the modulation mask is assumed to be precisely known, although errors can appear due to manufacturing tolerances; furthermore, the theoretically plane illumination beam can have a curvature in its phase. We simulated such errors separately and created the corresponding diffraction patterns. From these patterns the object is retrieved, and the RRMSEs of the previous and proposed reconstructions are shown in Fig. 6. In the first case (Fig. 6(a)), we assumed a uniform height difference on all modulation mask pixels in the interval from -25% to 25%. In the second case (Fig. 6(b)), dull pixel edges were simulated by applying a Gaussian filter with a given $\sigma$ to the up-sampled modulation mask. $\sigma <0.3$ means sharp edges, where all values correspond to the expected value. If $\sigma \geq 0.3$, the values within a single mask pixel change according to $\sigma$: the decrease is 20% at $\sigma =0.5$ and exceeds 67% at $\sigma >1$, at which point single pixels become dull bulges with less than 35% of the expected value. In the third case (Fig. 6(c)), we simulate the phase of an expanding wavefront, approximated by adding a spherical phase to the illumination beam $u_{b}$. The maximum phase shifts representing the wavefront curvature are shown on the x-axis; we assumed zero shift at the edges and the maximum at the center. In Fig. 6 we can see that the previous SR-SPAR method (red) is extremely sensitive to these errors and provides good results only when the system is near ideal, whereas the SSR-PR method (green) is capable of compensating the appearing errors and significantly surpasses SR-SPAR.

Fig. 6. RRMSE of phase reconstructions with errors of the optical system. In (a) and (b), mask errors are assumed as a uniform height difference and dull pixels, respectively, while in (c) the wavefront of the illumination beam is curved.

5. Physical experiments

Three sets of physical experiments have been done. In the first, we aimed to demonstrate the super-resolution power of the proposed SSR-PR by reconstructing a calibrated resolution phase test target. In the second set, the biomedical applicability is demonstrated by resolving a static biological sample, buccal epithelial cells, without any special preparation. In the third experiment, the video-microscope application is demonstrated by capturing and reconstructing a video of a living, moving single-celled eukaryote, a so-called protozoan. The reconstruction does not require any special preparation of the specimen.

For the static samples, Digital Holographic Microscopy (DHM) [54] was used to verify the results. DHM uses classical holographic principles; the off-axis hologram is recorded by a digital sensor through a 40$\times$ magnification objective. For a proper comparison, the phases are converted into height values by rearranging Eq. (9) as follows:

$$\Delta h = \frac{\Delta \varphi \lambda}{2\pi(n-1)}.\tag{12}$$

The refractive index $n$ of dry epithelial cells from the oral cavity was taken from [55].
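In code, this conversion applied to an unwrapped phase map could look as follows; the refractive index value here is illustrative, the actual value being taken from [55]:

```matlab
% Phase-to-height conversion, Eq. (12).
% phi_unwrapped: unwrapped phase map; n: refractive index (assumed value).
lambda = 532e-9;
n = 1.53;                                     % illustrative, see [55]
h = phi_unwrapped * lambda / (2*pi*(n - 1));  % height map [m]
```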

For each experiment, laser illumination with a wavelength of $\lambda =532$ nm was used. The light modulation was made with a random binary phase mask, similar to the simulations. The previously used mask [32] with 1 $\mu$m pixel size was replaced by a new mask with a pixel size of $\Delta _{m}=1.73~\mu$m. This size is half of the sensor's, so the mask can be fitted better if the super-resolution factor is a power of 2. As stated before (Section 3.1), it is not worth taking $r_{s}>4$, since the calculation time significantly increases while the resolution does not change noticeably, so we selected $r_{s}=4$. The phase mask was manufactured with an electron-beam lithography system on fused silica glass. The phase-shift difference between the binary mask pixels is expected to be 2.72 rad (500 nm). However, analyzing the physical mask with a Scanning Electron Microscope (SEM), we found that the mask exhibits the dullness errors of Section 4.2, as shown in Fig. 7. Furthermore, by measuring the surface of the mask, we found that a ca. 10% uniform mask height difference is present. The diffraction patterns were recorded with a CMOS sensor (FLIR Blackfly S BFS-U3-51S5M-BD) with a pixel size of $\Delta _{s}=3.45~\mu$m, a maximum resolution of $2448\times 2048$, and a 12-bit dynamic range. The calculation was made on a computer with 32 GB of RAM and a 3.41 GHz Intel Core i7-6700 CPU; the software was written in MATLAB 2019b. For each set, 20 iterations were made with a field of view (FOV) of ca. $1\times 1$ $mm^{2}$. The algorithm was run on the CPU and took around 50 seconds per iteration, but could be realized on a GPU with around 100 ms per iteration.

Fig. 7. Scanning electron microscope images of the modulation phase mask from (a) top view and (b) side view.

5.1 Resolution target

The reconstructions of the resolution target (Phasefocus PFPT01-16-127 [56]) are shown in Fig. 8, where the etch depth of each line is 127 nm. We have compared the result of DHM (Fig. 8(a)) with the reconstructions by the previous SR-SPAR (Fig. 8(b)) and the novel SSR-PR (Fig. 8(c)) methods. SR-SPAR provides correct phase values; however, as shown in the simulations (Section 4.2), it is very sensitive to the errors of the optical elements. These errors cause corruption in the reconstructions, which can be eliminated only by an effective compensation. We can see that the compensation-based SSR-PR approach is capable of resolving even the smallest group of the object, with a line width of 2 $\mu$m, which is almost 2$\times$ smaller than the sensor pixel size. This result is 3.45$\times$ smaller than the resolution following from the Nyquist-Shannon sampling theorem. Using $CPO$ results in a much clearer reconstruction of the object, with significantly better phase values than our previous approaches, such as SR-SPAR.

Fig. 8. Height maps of the calibrated Phasefocus resolution phase target [56] from the phase reconstructions, with their corresponding cross-sections: (a) digital holographic method, (b) previous SR-SPAR method, (c) SSR-PR method.

5.2 Static biological sample

The reconstructions of the biological sample (buccal epithelial cells) are shown in Fig. 9. The sample was taken from the mouth and applied to a sample slide without any preceding preparation. As in the previous experiment with the resolution phase target, we have compared the result of DHM (Fig. 9(a)) with the reconstructions by the previous SR-SPAR (Fig. 9(b)) and the novel SSR-PR (Fig. 9(c)) methods. The phase reconstructions were wrapped; therefore, the Phase Unwrapping via MAx flows (PUMA) algorithm [57] was used to unwrap them. SR-SPAR already provides a phase-correct reconstruction, but the result is generally noisy; consequently, the quality is inadequate and the unwrapping is challenging. In SSR-PR, the noise is compensated and filtered out properly, which results not only in better quality but also in correct unwrapping. Comparing the reconstructions, we can see that SSR-PR offers quality phase imaging in good correspondence with the DH system, with a much simpler optical setup.

Fig. 9. Height maps of buccal epithelial cells from the unwrapped phase reconstructions, with their corresponding cross-sections: (a) digital holographic method, (b) previous SR-SPAR method, (c) SSR-PR method.

5.3 Dynamic biological sample

One of the advantages of the SSR-PR approach is full wavefront reconstruction of dynamic scenes. We recorded the movement of a single-celled eukaryote in real time for 10 seconds, resulting in a set of diffraction patterns containing 287 frames. The frames were then post-processed using the proposed method. After reconstruction of the whole image set, spatio-temporal video filtering (RF3D) [58] was applied to the reconstructed footage to remove the random and fixed-pattern noise. RF3D is a modification of BM3D, following a similar sparsity-based filtering but taking several consecutive image frames into account.

This result is a breakthrough since, contrary to the existing phase retrieval-based algorithms [59,60], SSR-PR is capable of computationally super-resolved video reconstruction via phase retrieval at a high frame rate limited only by the camera. To the best of our knowledge, it is unique of its kind. In Fig. 10, a few frames from the video are presented, with the reconstructed amplitudes in the top row and phases in the bottom row. We have to mention that, in order to find a living eukaryote, we let a mixture of animal feces and water evolve. The sample was taken from this unprepared mixture, so a scattering effect from the surrounding medium corrupts the reconstructions. For the dried and clear epithelial cell sample (Fig. 9(c)) we do not observe this effect. The full reconstructed video can be seen in Visualization 1.

Fig. 10. Frames from the complex-valued dynamic object video reconstruction: amplitude (top row) and phase (bottom row) reconstructions of a moving single-celled eukaryote. The video footage (see Visualization 1) consists of 287 frames over 10 seconds, of which the 50th, 100th, 150th, and 200th frames are shown here.

6. Discussion and conclusion

A novel approach and algorithm are proposed for a single-shot lensless pixel super-resolution phase retrieval system using a computational separation of the carrying and object wavefronts. The separation of the carrying wavefront allows us to compensate discrepancies between the computational model and the physical optical setup. The compensation is based on preliminary tests of the optical elements, which can be treated as a prior system calibration. Namely, these tests were made without the object, one with the phase mask and one without (laser beam only). The latter was used to give an approximation of the carrying wavefront on the object plane, which is then corrected by the proposed compensation. As a result, the carrying wavefront bears corrections for the noise arising from the errors of the optical elements and of the light source. Without separation of the carrying wavefront, all these errors were concentrated in the object wavefront, making the reconstruction very challenging and demanding extensive noise suppression. Additionally, this novel approximation of the carrying wavefront is used in the SSR-PR method, resulting in high-quality super-resolved reconstructions. For upsampling, the default box kernel was replaced by the Lanczos-3 kernel with stairstep interpolation, providing better-quality reconstructions with a low RRMSE.

We adjusted the simulations to fit the physical system and reconstructed details 4$\times$ smaller than the sensor pixel size. Compared with the previous SR-SPAR method, the difference is especially marked for small phase values. This corresponds well to the physical experiments, in which we were able to resolve 2 $\mu$m wide lines, the smallest group of a calibrated test object, with a 3.45 $\mu$m sensor pixel size. The biological application was demonstrated by a highly detailed reconstruction of static buccal epithelial cells without any preceding preparation. A significant advantage of SSR-PR is the possibility of observing dynamic objects: we recorded the movement of a single-celled eukaryote and, after post-processing, obtained high-frame-rate super-resolved video footage.

A possibility for further work is to move in the multiwavelength direction with dedicated sensor filters to keep the single-exposure property; this direction could also provide color imaging capability. Furthermore, the simple optical setup and suitably small dimensions make the system fit for realization in extremely compact devices, e.g., mobile devices or a mini-microscope. Developing such integrated hardware could be the next milestone of this research.

Funding

Academy of Finland (320166); Teknologiateollisuuden 100-Vuotisjuhlasäätiö (CIWIL); Jane ja Aatos Erkon Säätiö (CIWIL).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. L. V. Wang and H.-I. Wu, Biomedical optics: principles and imaging (John Wiley & Sons, 2012).

2. J. Pawley, Handbook of biological confocal microscopy, vol. 236 (Springer Science & Business Media, 2006).

3. Y. Park, C. Depeursinge, and G. Popescu, “Quantitative phase imaging in biomedicine,” Nat. Photonics 12(10), 578–589 (2018). [CrossRef]  

4. Y. Jo, H. Cho, S. Y. Lee, G. Choi, G. Kim, H.-s. Min, and Y. Park, “Quantitative phase imaging and artificial intelligence: a review,” IEEE J. Sel. Top. Quantum Electron. 25(1), 1–14 (2018). [CrossRef]  

5. T. Cacace, V. Bianco, and P. Ferraro, “Quantitative phase imaging trends in biomedical applications,” Opt. Lasers Eng. 135, 106188 (2020). [CrossRef]  

6. D. Gabor, “A new microscopic principle,” Nature 161(4098), 777–778 (1948). [CrossRef]  

7. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21(15), 2758–2769 (1982). [CrossRef]  

8. D. Sayre, “Some implications of a theorem due to shannon,” Acta Crystallogr. 5(6), 843 (1952). [CrossRef]  

9. J. Miao, P. Charalambous, J. Kirz, and D. Sayre, “Extending the methodology of x-ray crystallography to allow imaging of micrometre-sized non-crystalline specimens,” Nature 400(6742), 342–344 (1999). [CrossRef]  

10. R. W. Gerchberg, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

11. J. R. Fienup, “Phase retrieval with continuous version of hybrid input-output,” in Frontiers in Optics (Optical Society of America, 2003), p. ThI3.

12. A. Fannjiang and T. Strohmer, “The numerics of phase retrieval,” arXiv preprint arXiv:2004.05788 (2020).

13. J. Kang, S. Takazawa, N. Ishiguro, and Y. Takahashi, “Single-frame coherent diffraction imaging of extended objects using triangular aperture,” Opt. Express 29(2), 1441–1453 (2021). [CrossRef]  

14. X. Dong, X. Pan, C. Liu, and J. Zhu, “Single shot multi-wavelength phase retrieval with coherent modulation imaging,” Opt. Lett. 43(8), 1762–1765 (2018). [CrossRef]  

15. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015). [CrossRef]  

16. I. A. Shevkunov, N. S. Balbekin, and N. V. Petrov, “Comparison of digital holography and iterative phase retrieval methods for wavefront reconstruction,” in Holography, Diffractive Optics, and Applications VI, vol. 9271 (International Society for Optics and Photonics, 2014), p. 927128.

17. A. de Beurs, X. Liu, G. Jansen, A. Konijnenberg, W. Coene, K. Eikema, and S. Witte, “Extreme ultraviolet lensless imaging without object support through rotational diversity in diffractive shearing interferometry,” Opt. Express 28(4), 5257–5266 (2020). [CrossRef]  

18. Z. Hu, C. Tan, Z. Song, and Z. Liu, “A coherent diffraction imaging by using an iterative phase retrieval with multiple patterns at several directions,” Opt. Quantum Electron. 52(1), 29 (2020). [CrossRef]  

19. H. Ling, “Three-dimensional measurement of a particle field using phase retrieval digital holography,” Appl. Opt. 59(12), 3551–3559 (2020). [CrossRef]  

20. L. Zhou, J. Song, J. S. Kim, X. Pei, C. Huang, M. Boyce, L. Mendonça, D. Clare, A. Siebert, C. S. Allen, E. Liberti, D. Stuart, X. Pan, P. D. Nellist, P. Zhang, A. I. Kirkland, and P. Wang, “Low-dose phase retrieval of biological specimens using cryo-electron ptychography,” Nat. Commun. 11(1), 2773 (2020). [CrossRef]  

21. G. Zheng, C. Shen, S. Jiang, P. Song, and C. Yang, “Concept, implementations and applications of Fourier ptychography,” Nat. Rev. Phys. 3(3), 207–223 (2021). [CrossRef]  

22. H. Yan, “Ptychographic phase retrieval by proximal algorithms,” New J. Phys. 22(2), 023035 (2020). [CrossRef]  

23. T. Tahara, T. Ito, Y. Ichihashi, and R. Oi, “Multiwavelength three-dimensional microscopy with spatially incoherent light, based on computational coherent superposition,” Opt. Lett. 45(9), 2482–2485 (2020). [CrossRef]  

24. J. Mariën, R. Stahl, A. Lambrechts, C. van Hoof, and A. Yurt, “Color lens-free imaging using multi-wavelength illumination based phase retrieval,” Opt. Express 28(22), 33002–33018 (2020). [CrossRef]  

25. Z. Wang, G.-X. Wei, X.-L. Ge, H.-Q. Liu, and B.-Y. Wang, “High-resolution quantitative phase imaging based on a spatial light modulator and incremental binary random sampling,” Appl. Opt. 59(20), 6148–6154 (2020). [CrossRef]  

26. L. Deng, J. D. Yan, D. S. Elson, and L. Su, “Characterization of an imaging multimode optical fiber using a digital micro-mirror device based single-beam system,” Opt. Express 26(14), 18436–18447 (2018). [CrossRef]  

27. M. Eisenstein, “On their best behavior,” Nat. Methods 16(1), 5–8 (2019). [CrossRef]  

28. S. Jiang, J. Zhu, P. Song, C. Guo, Z. Bian, R. Wang, Y. Huang, S. Wang, H. Zhang, and G. Zheng, “Wide-field, high-resolution lensless on-chip microscopy via near-field blind ptychographic modulation,” Lab Chip 20(6), 1058–1065 (2020). [CrossRef]  

29. J. Wu, H. Zhang, W. Zhang, G. Jin, L. Cao, and G. Barbastathis, “Single-shot lensless imaging with Fresnel zone aperture and incoherent illumination,” Light: Sci. Appl. 9(1), 53 (2020). [CrossRef]  

30. S. MiriRostami, V. Y. Katkovnik, and K. O. Eguiazarian, “Extended dof and achromatic inverse imaging for lens and lensless mpm camera based on wiener filtering of defocused otfs,” Opt. Eng. 60(5), 051204 (2021). [CrossRef]  

31. V. Boominathan, J. K. Adams, J. T. Robinson, and A. Veeraraghavan, “Phlatcam: Designed phase-mask based thin lensless camera,” IEEE Trans. Pattern Anal. Mach. Intell. 42(7), 1618–1629 (2020). [CrossRef]  

32. P. Kocsis, I. Shevkunov, V. Katkovnik, and K. Egiazarian, “Single exposure lensless subpixel phase imaging: optical system design, modelling, and experimental study,” Opt. Express 28(4), 4625–4637 (2020). [CrossRef]  

33. F. Zhang, B. Chen, G. R. Morrison, J. Vila-Comamala, M. Guizar-Sicairos, and I. K. Robinson, “Phase retrieval by coherent modulation imaging,” Nat. Commun. 7(1), 13367 (2016). [CrossRef]  

34. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “Diffusercam: lensless single-exposure 3d imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

35. V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Egiazarian, “Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments,” Optica 4(7), 786–794 (2017). [CrossRef]  

36. B. Lee, J.-y. Hong, D. Yoo, J. Cho, Y. Jeong, S. Moon, and B. Lee, “Single-shot phase retrieval via Fourier ptychographic microscopy,” Optica 5(8), 976–983 (2018). [CrossRef]  

37. X. He, X. Pan, C. Liu, and J. Zhu, “Single-shot phase retrieval based on beam splitting,” Appl. Opt. 57(17), 4832–4838 (2018). [CrossRef]  

38. D. Goldberger, J. Barolak, C. G. Durfee, and D. E. Adams, “Three-dimensional single-shot ptychography,” Opt. Express 28(13), 18887–18898 (2020). [CrossRef]  

39. R. Horisaki, T. Kojima, K. Matsushima, and J. Tanida, “Subpixel reconstruction for single-shot phase imaging with coded diffraction,” Appl. Opt. 56(27), 7642–7647 (2017). [CrossRef]  

40. Y. Mäkinen, L. Azzari, and A. Foi, “Collaborative filtering of correlated noise: Exact transform-domain variance for improved shrinkage and patch matching,” IEEE Trans. on Image Process. 29, 8339–8354 (2020). [CrossRef]  

41. E. Bostan, R. Heckel, M. Chen, M. Kellman, and L. Waller, “Deep phase decoder: self-calibrating phase microscopy with an untrained deep neural network,” Optica 7(6), 559–562 (2020). [CrossRef]  

42. J. W. Goodman, Introduction to Fourier optics (Roberts and Company Publishers, 2005).

43. B. Madhukar and R. Narendra, “Lanczos resampling for the digital processing of remotely sensed images,” in Proceedings of International Conference on VLSI, Communication, Advanced Devices, Signals & Systems and Networking (VCASAN-2013), (Springer, 2013), pp. 403–411.

44. T. Moraes, P. Amorim, J. V. Da Silva, and H. Pedrini, “Medical image interpolation based on 3d lanczos filtering,” Comput. Methods Biomech. Biomed. Eng. Imaging & Vis. 8(3), 294–300 (2020). [CrossRef]  

45. R. Gerchberg, “Super-resolution through error energy reduction,” Opt. Acta 21(9), 709–720 (1974). [CrossRef]  

46. Y. Rivenson, A. Stern, and B. Javidi, “Single exposure super-resolution compressive imaging by double phase encoding,” Opt. Express 18(14), 15094–15103 (2010). [CrossRef]  

47. V. Katkovnik and J. Astola, “High-accuracy wave field reconstruction: decoupled inverse imaging with sparse modeling of phase and amplitude,” J. Opt. Soc. Am. A 29(1), 44–54 (2012). [CrossRef]  

48. V. Katkovnik and J. Astola, “Sparse ptychographical coherent diffractive imaging from noisy measurements,” J. Opt. Soc. Am. A 30(3), 367–379 (2013). [CrossRef]  

49. K. Zhang, Y. Li, W. Zuo, L. Zhang, L. Van Gool, and R. Timofte, “Plug-and-play image restoration with deep denoiser prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence (2021).

50. O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in International Conference on Medical image computing and computer-assisted intervention, (Springer, 2015), pp. 234–241.

51. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2016), pp. 770–778.

52. A. Maiden, D. Johnson, and P. Li, “Further improvements to the ptychographical iterative engine,” Optica 4(7), 736–745 (2017). [CrossRef]  

53. P. Kocsis, “SSRPR_code,” figshare (2021), https://doi.org/10.6084/m9.figshare.16743727.

54. V. Katkovnik, I. Shevkunov, N. V. Petrov, and K. Egiazarian, “High-accuracy off-axis wavefront reconstruction from noisy data: local least square with multiple adaptive windows,” Opt. Express 24(22), 25068–25083 (2016). [CrossRef]  

55. A. Belashov, A. Zhikhoreva, V. Bespalov, O. Vasyutinskii, N. Zhilinskaya, V. Novik, and I. Semenova, “Determination of the refractive index of dehydrated cells by means of digital holographic microscopy,” Tech. Phys. Lett. 43(10), 932–935 (2017). [CrossRef]  

56. T. Godden, A. Muñiz-Piniella, J. Claverley, A. Yacoot, and M. Humphry, “Phase calibration target for quantitative phase imaging with ptychography,” Opt. Express 24(7), 7679–7692 (2016). [CrossRef]  

57. G. Valadao and J. Bioucas-Dias, “PUMA: phase unwrapping via max flows,” in Proceedings of Conference on Telecommunications–ConfTele (Citeseer, 2007), pp. 609–612.

58. M. Maggioni, E. Sánchez-Monge, and A. Foi, “Joint removal of random and fixed-pattern noise through spatiotemporal video filtering,” IEEE Trans. on Image Process. 23(10), 4282–4296 (2014). [CrossRef]  

59. D. Ryu, Z. Wang, K. He, G. Zheng, R. Horstmeyer, and O. Cossairt, “Subsampled phase retrieval for temporal resolution enhancement in lensless on-chip holographic video,” Biomed. Opt. Express 8(3), 1981–1995 (2017). [CrossRef]  

60. J. Zhang, Q. Chen, J. Li, J. Sun, and C. Zuo, “Lensfree dynamic super-resolved phase imaging based on active micro-scanning,” Opt. Lett. 43(15), 3714–3717 (2018). [CrossRef]  

Supplementary Material (2)

Code 1: MATLAB code for the SSR-PR algorithm.
Visualization 1: A moving single-celled eukaryote, a so-called protozoan, observed with the proposed optical system. We recorded an image set of 287 diffraction patterns over 10 seconds. These patterns are post-processed by the novel SSR-PR method, resulting in a 4× super-resolved video reconstruction.
