Abstract

In this paper, we consider computational super-resolution inverse diffraction phase retrieval. The optical setup is lensless, with a spatial light modulator for aperture phase coding. The paper is focused on experimental tests of the super-resolution sparse phase amplitude retrieval algorithm. We start from simulations and proceed to physical experiments. Both simulation tests and experiments demonstrate good-quality imaging for super-resolution with a factor of 4 and a significant advantage over diffraction-limited resolution as defined by Abbe’s criterion.

© 2017 Optical Society of America

1. INTRODUCTION

A. Problem Setup

The phase retrieval problem concerns the reconstruction of 2D and 3D complex-domain objects provided that only the intensity of light radiation is measured. Such problems appear in science and engineering, with a wide range of applications from astronomy and optics to medicine and biology (e.g., [1–3]).

The standard formalization of the phase retrieval problem is of the form

$$y_s = |P_s\{u_o\}|^2, \quad s = 1, \ldots, L, \tag{1}$$

where $u_o \in \mathbb{C}^{N\times N}$ is an N×N complex-valued 2D image of the object (specimen), $P_s\colon \mathbb{C}^{N\times N} \to \mathbb{C}^{N_1\times N_1}$ is a complex-valued forward propagation operator from the object plane to the sensor plane, $y_s \in \mathbb{R}_+^{N_1\times N_1}$ are N1×N1 intensity measurements at the sensor plane, L is the number of experiments, and $u_s = P_s\{u_o\}$ denotes the complex-valued wavefront at the sensor plane.

In this paper, we consider the setup with aperture modulation such that

$$u_s = P_s\{u_o\} = P\{M_s \circ u_o\}. \tag{2}$$

Here $M_s \in \mathbb{C}^{N\times N}$ are phase modulation masks, $M_s(k,l) = \exp(j\phi_{k,l}^{(s)})$, and “∘” stands for the entry-wise (Hadamard) product of two matrices.

The phases $\phi_{k,l}^{(s)}$ in $M_s$ can be deterministic or random. This results in observations known as coded diffraction patterns (e.g., [4]):

$$y_s = |P\{M_s \circ u_o\}|^2, \quad s = 1, \ldots, L. \tag{3}$$
The phase modulation can change the diffraction pattern of $P\{u_o\}$ dramatically, enabling a redistribution of the observed intensities from low to higher frequencies.
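
As a hedged illustration (our own MATLAB sketch, not the authors’ demo code; P is an abstract function handle for the propagation operator, realized via the angular spectrum below), coded diffraction patterns per Eq. (3) can be simulated as follows:

  % Sketch of Eq. (3): coded diffraction patterns for L random masks.
  % P is a function handle for the forward propagation operator;
  % Ms is a cell array of phase modulation masks, uo the object.
  ys = cell(L, 1);
  for s = 1:L
      ys{s} = abs(P(Ms{s} .* uo)).^2;   % intensity of the coded wavefront
  end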

Various methods have been developed to make the operators $P_s$ sufficiently different, providing the observation diversity needed to recover $u_o$ from the observations $\{y_s\}_1^L$. Defocusing of the registered images is one instrument for achieving sufficient phase diversity [5–9]. In more recent developments, a spatial light modulator (SLM) has been exploited for the defocusing (e.g., [10–12]).

Modulation of the wavefront is another efficient tool for achieving the desired diversity of observations (e.g., [13–15]).

For noisy observations, Eq. (3) is replaced by

$$z_s = G\{|u_s|^2\}, \quad s = 1, \ldots, L, \tag{4}$$
where G stands for the generator of random observations.

For the wavefront propagation from the object to the sensor plane, we use the Rayleigh–Sommerfeld model with the transfer function defined through the angular spectrum (AS) ([16], Eq. (3-74)):

$$u_s(x,y,z) = \mathcal{F}^{-1}\big[H(f_x,f_y,z)\cdot\mathcal{F}[u_s(x,y,0)]\big], \tag{5}$$

$$H(f_x,f_y,z) = \begin{cases} \exp\!\Big[i\,\dfrac{2\pi}{\lambda}\,z\sqrt{1-\lambda^2(f_x^2+f_y^2)}\,\Big], & f_x^2+f_y^2 \le \dfrac{1}{\lambda^2}, \\ 0, & \text{otherwise}, \end{cases} \tag{6}$$

where the operators $\mathcal{F}$ and $\mathcal{F}^{-1}$ stand for the Fourier transform (FT) and the inverse FT, respectively; us(x,y,z) is the wavefront propagated a distance z from the initial position us(x,y,0); H(fx,fy,z) is the AS transfer function; fx, fy are spatial frequencies; x, y denote spatial coordinates; and λ is the wavelength.
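
A minimal MATLAB sketch of this propagation on a discrete N×N grid with sampling step dx is given below (all names are ours; the centered FFT realizes the operators $\mathcal{F}$ and $\mathcal{F}^{-1}$):

  function u_z = as_propagate(u0, lambda, z, dx)
  % Angular spectrum propagation, Eqs. (5)-(6): u0 is the complex field
  % sampled on an N x N grid with pitch dx; z may be negative for
  % back-propagation.
  N  = size(u0, 1);
  fx = (-N/2 : N/2-1) / (N * dx);            % spatial frequency axis
  [FX, FY] = meshgrid(fx, fx);
  arg = 1 - lambda^2 * (FX.^2 + FY.^2);
  H   = exp(1i * 2*pi * (z/lambda) * sqrt(max(arg, 0)));
  H(arg < 0) = 0;                            % evanescent components set to zero
  U0  = fftshift(fft2(ifftshift(u0)));       % centered spectrum F[u0]
  u_z = fftshift(ifft2(ifftshift(U0 .* H))); % inverse FT of H * F[u0]
  end

With P = @(u) as_propagate(u, lambda, z, dc), this function realizes the abstract operator P used in the sketch of Eq. (3) above.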

In this paper, we assume that the observations have a Poissonian distribution typical for optical photon counting. This process is well modeled by independent Poissonian random variables of the form

$$p(z_s[l]=k) = \frac{\exp(-y_s[l]\chi)\,(y_s[l]\chi)^k}{k!}, \tag{7}$$

where p(zs[l]=k) is the probability that the random observation zs[l] takes an integer value k ≥ 0, and ys[l] is the intensity of the wavefront, defined by Eq. (1), at pixel l.

The parameter χ>0 in Eq. (7) is a scaling factor of the Poisson distribution. Defining the observation signal-to-noise ratio (SNR) as the ratio between the squared mean and the variance of zs, we have SNR = E²{zs}/var{zs} = ysχ. It follows that the relative noisiness of the observations becomes stronger as χ → 0 (SNR → 0) and vanishes as χ → ∞ (SNR → ∞). The latter case corresponds to the noiseless scenario, i.e., zs/χ → ys with probability 1. The scale parameter χ is important for modeling, as it allows controlling the level of noise in the observations.
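
For illustration, a minimal simulation of Eq. (7) in MATLAB (poissrnd is from the Statistics and Machine Learning Toolbox; variable names are ours):

  y = abs(u_s).^2;        % noiseless intensity at the sensor, Eq. (1)
  z = poissrnd(chi * y);  % integer photon counts with mean chi * y
  % The normalized data z / chi approach y as chi grows (noiseless limit),
  % in agreement with SNR = E^2{z} / var{z} = y * chi.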

Reconstruction of the complex-valued object uo (phase and amplitude) from noiseless or noisy observations is the phase retrieval problem. Here “phase” emphasizes that the object phase is the variable of first priority, whereas the amplitude may be an auxiliary variable, often useful only to improve the phase imaging.

B. Phase Retrieval Algorithms

There is a steady flow of publications on phase retrieval. Various versions of the Gerchberg–Saxton technique are quite universal for different setups (e.g., [7–9,17,18]). These algorithms, based on alternating projections between the object and observation planes, are flexible enough to incorporate any information available for the variables in these planes. Recent developments in this area, as well as a review, are provided in [19].

In contrast to these intuitive heuristic algorithms, variational formulations have a stronger mathematical background and lead to numerical algorithms for solving optimization problems (e.g., [20–22]).

A new variational algorithm for phase retrieval from noisy data, based on transform domain sparsity of the object phase and amplitude, is developed in [23]. The sparsity concept as a tool for phase retrieval is the topic of [24], where an original sparsifying learning transform is developed.

Phase retrieval from coded diffraction patterns is of special interest in recent publications (e.g., [14,25]). The uniqueness of phase retrieval in this scenario is proved in the latter paper.

The spatial resolution of phase retrieval is limited by two factors: low-pass filtering by the propagation operator P and the size of the sensor’s pixels. High-frequency spatial information is lost in intensity observations, which can be treated as corresponding to a sub-sampled true object uo. Super-resolution imaging allows compensating for this sub-sampling effect.

One of the straightforward approaches to overcoming pixel size limitations is to use a sequence of laterally shifted holograms (e.g., [26–28]) or sub-pixel scanning of speckle patterns (e.g., [29]).

The computational approach to super-resolution is different: the optical setup is fixed, and a super-resolution image must be extracted from the available data. Note that computational super-resolution is a hot topic in the real domain, with a steady flow of publications, whereas for phase/amplitude imaging in the complex domain the situation is quite different, with only a few publications, in particular on phase retrieval.

Compressed sensing (CS) is one of the comparatively new computational techniques for the restoration of sub-sampled data. Applications of this sort of technique in optics can be found in [30,31]. CS can be treated as super-resolution; however, there is a deep difference between the two problems: CS focuses on sub-sampling design with the smallest possible number of observations as a target, whereas in super-resolution the observations are given on a fixed, usually regular grid.

C. Contribution and Structure of This Paper

The transform domain phase/amplitude sparsity introduced in [23] was generalized to super-resolution phase retrieval in [32]. This paper is focused on experimental testing of the general sparsity-based approach developed in [23] and [32], where only simulation results have been demonstrated. We modify the algorithms from [32] for the lensless optical setup used in the experiments, with AS wavefront propagation [Eqs. (5) and (6)]; in contrast, a lenslet and the FT propagation model are used in [32]. We provide simulation and experimental tests demonstrating that the developed algorithm enables super-resolution phase retrieval with an up-sampling factor of up to 4 and a significant advantage over the diffraction-limited system with resolution defined by Abbe’s criterion.

In what follows, the proposed algorithm is presented in Section 2. The experiment details and results are discussed in Section 3.

2. SUPER-RESOLUTION PHASE RETRIEVAL

A. Complex Domain Sparsity

The complex domain sparsity for the object as used in this paper assumes that the object uo contains small fragments (patches) with similar features. Such similar patches can be found in different parts of the image. It follows that these patches, taken together, admit sparse representations: they can be well approximated by linear combinations of a small number of basis functions.

Complex domain images, such as the object $u_o = B_o \exp(i\varphi_o)$, are defined by two variables, the phase φo and the amplitude Bo. Accordingly, sparse representations can be imposed on the following pairs of real-valued variables:

  • (1) Phase φ (interferometric or absolute) and amplitude Bo.
  • (2) The real and imaginary parts of uo.

In this paper, we apply absolute phase/amplitude sparsity following [33,34] where it was already tested for various optical problems.

The success of the sparsity approach depends on how rich and redundant the dictionaries (sets of basis functions) used for image analysis and synthesis are. In this paper, we use the block-matching and 3D filtering (BM3D) algorithm for the sparsity implementation [35].

Let us mention the basic steps of this advanced technique, known as nonlocal self-similarity sparsity. At the first stage, the image is partitioned into small overlapping square patches. For each patch, a group of similar patches is collected and stacked together to form a 3D array (group). This stage is called grouping. The entire 3D group array is projected onto a predefined 3D transform basis. The spectral coefficients obtained from this transform are hard-thresholded (small coefficients are zeroed), and the inverse 3D transform gives the filtered patches, which are returned to their original positions in the image. This stage is called collaborative filtering. This process is repeated for all pixels of the image, and the resulting overlapping filtered patches are aggregated into the final image estimate. This last stage is called aggregation. Details of this algorithm are provided in [35].

The links of this algorithm with the general sparsity concept are revealed in [36]. It is shown that the grouping operations result in data-adaptive analysis and synthesis image transforms (frames), which, combined with the hard-thresholding of these transforms, define the BM3D algorithm. It is emphasized that sparsity is achieved mainly due to the grouping, which allows similar patches to be analyzed jointly and in this way guarantees sparsity (self-similarity of patches) at least within each of the 3D groups.

Note that the standard BM3D algorithm as presented in [35] is composed of two successive steps, thresholding and Wiener filtering. In this paper, we use a simplified version of BM3D that includes grouping, transforms, and thresholding, without the Wiener filtering. In this form, the algorithm follows from the variational setting of the wavefront retrieval problem.
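
The core of this simplified version can be sketched in a few MATLAB lines for a single group G (a psize×psize×K stack of similar patches found by grouping); block matching and aggregation are omitted, and for brevity a DCT is used along the group axis where the actual implementation uses a Haar transform. dct2/idct2 and dct/idct require the Image Processing and Signal Processing Toolboxes, and all names are illustrative.

  [p, ~, K] = size(G);
  S = zeros(size(G));
  for k = 1:K, S(:,:,k) = dct2(G(:,:,k)); end       % 2D transform per patch
  S = reshape(dct(reshape(permute(S, [3 1 2]), K, [])), [K p p]);
  S(abs(S) < th) = 0;                               % hard thresholding
  S = permute(reshape(idct(reshape(S, K, [])), [K p p]), [2 3 1]);
  G_hat = zeros(p, p, K);
  for k = 1:K, G_hat(:,:,k) = idct2(S(:,:,k)); end  % filtered patches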

It is shown in [23] that the nonlocal block-wise sparsity imposed on phase and amplitude results in separate BM3D filtering of phase and amplitude:

$$\hat{\varphi}_o = BM3D_{phase}(\varphi_o, th_\varphi), \tag{8}$$

$$\hat{B}_o = BM3D_{ampl}(B_o, th_B). \tag{9}$$

Here $\hat{\varphi}_o$ and $\hat{B}_o$ are, respectively, sparse approximations of φo and Bo. The indices “phase” and “ampl” of BM3D emphasize that the parameters of BM3D can differ for phase and amplitude; thφ and thB are the thresholding parameters of the algorithm. The phase in Eq. (8) can be interferometric or absolute, depending on the sparsity formulation.

B. Super-resolution SPAR Algorithm

Conventionally, the pixels are square: ΔSLM×ΔSLM for the SLM and ΔS×ΔS for the sensor. A continuous object is discretized by computational pixels Δc×Δc. This discretization is necessary for both digital data processing and the modeling of wavefront propagation and image formation. In contrast to the pixels of the SLM and sensor, which are fixed by the corresponding opto-electronic devices, the object pixels are computational, Δo=Δc, and may be taken arbitrarily small.

Assuming for a moment Δc=ΔS, the reconstruction of uo from the observations {zs} is the conventional phase retrieval with the resolution for the object dictated by the pixel size of the sensor. Let us term this case pixel (or pixel-wise) resolution imaging.

If Δc<ΔS, we arrive at the much more interesting case of sub-pixel resolution (or super-resolution). It is convenient to assume that ΔS=rS·Δc, where rS ≥ 1 is an integer pixel up-sampling factor, and ΔSLM=rSLM·Δc, where rSLM ≥ 1. Then rS^2 and rSLM^2 are the numbers of smaller Δc×Δc computational object pixels covering a single pixel of the sensor and of the SLM, respectively. Each SLM pixel gives the same modulation phase shift to the rSLM^2 object pixels it covers.

In order to study the effects of SLM phase modulation, we also introduce an SLM superpixel: a larger pixel of size M·ΔSLM×M·ΔSLM composed of M×M adjacent SLM pixels. In the phase modulation, each SLM superpixel provides the same phase shift to the M^2·rSLM^2 object pixels it covers.

In our experiments, the random phases $\phi_{n,m}^{(s)}$ of the SLM superpixels are generated as follows:

$$\phi_{n,m}^{(s)} = \epsilon_{n,m}^{(s)}, \tag{10}$$

where $\epsilon_{n,m}^{(s)}$ are independent and identically distributed zero-mean Gaussian random variables with a standard deviation of 1.

Then the corresponding phase modulation masks Ms in Eq. (2) are defined as

$$M_s(n,m) = \exp(j\phi_{n,m}^{(s)}\cdot\sigma), \tag{11}$$

where σ is a parameter defining the standard deviation of the random phase in the masks Ms.
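
A hedged MATLAB sketch of Eqs. (10) and (11) with superpixels (all names are illustrative; n_sp denotes the number of superpixels per side):

  eps_s = randn(n_sp);             % i.i.d. zero-mean, unit-std phases, Eq. (10)
  phi_s = kron(eps_s, ones(M));    % constant phase over each M x M superpixel
  Ms    = exp(1i * sigma * phi_s); % modulation mask, Eq. (11)
  Ms_c  = kron(Ms, ones(rSLM));    % mask mapped to the computational grid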

For computational wavefront reconstruction, the continuous-domain wavefront propagation [Eqs. (5) and (6)] is replaced by the corresponding discrete model based on pixelation of the object distribution uo. We thus arrive at a discrete model of the system at hand, linking the discrete values of the sensor output (observations) with the discrete values of the object, with the integral FT replaced by the fast FT. Note that all these calculations are performed with the computational sampling period Δc.

A discrete modeling of the continuous propagation in Eqs. (5) and (6) imposes restrictions on the computational sampling period Δc in connection with the distance z, the wavelength λ, and the computational aperture size. With reference to [37–39], for the considered scenario, these restrictions can be given as the inequality

$$z \le z_{max} = N\Delta_c^2/\lambda, \tag{12}$$

where zmax is an upper bound for the distance between the object and the sensor, and N defines the N×N computational support of the zero-padded object and sensor, with a computational aperture size of NΔc×NΔc.
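
A quick numeric check of this bound for the settings used later in the paper (N = 4992, λ = 532 nm):

  N = 4992; lambda = 532e-9;
  dc = 1.4e-6;  z_max = N * dc^2 / lambda  % 0.0184 m, i.e., 18.4 mm (rS = 1)
  dc = 0.35e-6; z_max = N * dc^2 / lambda  % 18.4/16 mm, about 1.15 mm (rS = 4)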

In order to simplify the algorithm presentation, we preserve the notation P for this discrete model initially introduced for continuous variables and integrals. The abbreviation SR-SPAR (super-resolution sparse phase amplitude retrieval) is used for this algorithm.

Let us emphasize that SR-SPAR originates from the variational formulation introduced for the optimal reconstruction of uo from the Poissonian observations {zs[k,l]}. The corresponding minus log-likelihood for the Poissonian observations, according to Eq. (7), is as follows:

$$\mathcal{L} = \sum_{s=1}^{L}\sum_{k,l}\big[\,|u_s[k,l]|^2\chi - z_s[k,l]\log(|u_s[k,l]|^2\chi)\,\big]. \tag{13}$$

This criterion should be minimized with respect to uo[k,l], subject to the equations linking uo and us and the restrictions imposed by the sparsity requirements.

The derivation of the algorithm is based on the technique developed in [23] for pixel-wise resolution phase retrieval and FT propagation. We show the block scheme of this algorithm in Fig. 1 referring to [23] and [32] for details.

 

Fig. 1. Flow chart of the SR-SPAR algorithm.

The inputs $\tilde{z}_s$ of this algorithm are the observations zs up-sampled by the factor rS. We use zero-order up-sampling, giving $\tilde{z}_s$ as piece-wise constant, with constant values over the computational pixels corresponding to each of the larger sensor pixels. At the initialization step of the algorithm, the initial estimate x0 of the object wavefront is created. At Step 1, the object wavefront estimate xt is multiplied by the phase mask Ms and propagated by the operator P to the sensor plane; the result is denoted by vst. These wavefronts are calculated on the N×N computational diffraction area denoted by SD.
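
In MATLAB, this zero-order up-sampling is a one-line sketch (names are ours):

  z_tilde = kron(z_s, ones(rS));  % each sensor pixel value replicated
                                  % over the rS x rS computational pixels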

At Step 2, this wavefront is updated to the variable ust by filtering the amplitude of vst according to the given observations $\tilde{z}_s$. The following formula, derived in [23], defines how the amplitude bs is calculated:

$$b_s = \frac{|v_s| + \sqrt{|v_s|^2 + 4\tilde{z}_s\gamma_1(1+\gamma_1\chi)}}{2(1+\gamma_1\chi)}. \tag{14}$$

These calculations are element-wise; γ1>0 is a parameter of the algorithm that defines the proportion between |vs| and $\tilde{z}_s$. This update is produced for the known observations $\tilde{z}_s$, i.e., for the pixels belonging to the sensor area SS, vs ∈ SS. The computational diffraction area is always equal to or larger than the sensor area, SS ⊆ SD. Outside the sensor area, the wavefront values are preserved: us = vs for vs ∈ SD\SS.
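
An element-wise MATLAB sketch of this Step 2 update (SS is a logical mask of the sensor pixels within the computational area; all names are ours):

  b = (abs(v) + sqrt(abs(v).^2 + 4 * z_tilde * gamma1 * (1 + gamma1*chi))) ...
      ./ (2 * (1 + gamma1*chi));           % amplitude update, Eq. (14)
  u = v;                                   % outside SS: values preserved
  u(SS) = b(SS) .* exp(1i * angle(v(SS))); % amplitude replaced, phase kept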

At Step 3, the estimates {ust} are back-propagated to the object plane and update the object wavefront estimate xt+1. Here Ms* denotes the complex conjugate of Ms, and P−1{ust} denotes the inverse of the model in Eq. (5).

Phase unwrapping, with reconstruction of the absolute phase in Step 4, is necessary only if the object phase range exceeds 2π [40]. Sparsification (filtering based on the sparse approximations) is performed in Step 5. At Step 6, the object reconstruction is updated. A condensed sketch of the full iteration is given below.
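
The following is our own condensed sketch of the SR-SPAR loop under the notation above (as_propagate is the earlier sketch; zt{s} are the up-sampled observations; sparsify_phase, sparsify_ampl, and unwrap_abs are hypothetical stand-ins for the BM3D filters of Eqs. (8) and (9) and the unwrapping of [40]):

  x = x0;                                        % initialization
  for t = 1:T
      acc = zeros(size(x));
      for s = 1:L
          v = as_propagate(Ms_c{s} .* x, lambda, z, dc);         % Step 1
          b = (abs(v) + sqrt(abs(v).^2 + 4*zt{s}*g1*(1 + g1*chi))) ...
              ./ (2 * (1 + g1*chi));                             % Eq. (14)
          u = v;  u(SS) = b(SS) .* exp(1i * angle(v(SS)));       % Step 2
          acc = acc + conj(Ms_c{s}) .* as_propagate(u, lambda, -z, dc); % Step 3
      end
      x   = acc / L;                                 % averaged back-projection
      phi = sparsify_phase(unwrap_abs(angle(x)), th_phi);        % Steps 4-5
      B   = sparsify_ampl(abs(x), th_B);
      x   = B .* exp(1i * phi);                                  % Step 6
  end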

C. Parameters of SR-SPAR

The performance of the SR-SPAR algorithm depends essentially on its parameters. The parameters could be optimized for each amplitude/phase distribution and noise level; however, in our study, they are fixed for all the experiments. The image patches in BM3D are square, 3×3. The group size is limited to 39 patches. The step between neighboring patches is equal to 2. Discrete cosine (for patches) and Haar (along the group) transforms are used for the 3D group data processing in BM3D.

The parameters defining the iterations of the algorithm are as follows: γ1=1/χ, thB=2.0, thφ=2.0. The number of iterations is fixed at 30.

For our experiments, we use MATLAB R2016b on a computer with 32 GB of RAM and a 3.40 GHz Intel(R) Core(TM) i7-3770 CPU.

The computational complexity of the algorithm is characterized by the time required for processing. For 30 iterations, L=10, and 64×64 images, zero-padded to 4992×4992, this time equals 2150 s.

We make the MATLAB demo codes (see Code 1, Ref. [41]) of the developed SR-SPAR algorithm publicly available at http://www.cs.tut.fi/sgn/imaging/sparse. These can be used to reproduce the experiments presented in this paper as well as for further tests.

3. SIMULATION AND EXPERIMENTS

A. Experimental Setup

The experimental setup, presented in Fig. 2, contains a laser LASOS GLK 3250 TS with a wavelength of λ=532 nm, a digital 8-bit CMOS camera EVS with 3264×2448 pixels and a pixel size of ΔS=1.4 μm, and a phase-only reflective SLM Holoeye LETO with 1920×1080 pixels, a pixel size of ΔSLM=6.4 μm, and a fill factor of 93%. The polarization of the light beam emitted from the laser source is changed from vertical to horizontal by the retardation plate λ/4 and the polarizer P. The beam is then expanded by the collimator formed by the two lenses L1 and L2, passes through the beamsplitter (BS) to the SLM, where the light wavefront acquires the desired phase retardation, and propagates to the CMOS camera through the same BS. The optical distance between the SLM and the CMOS is 21.55 mm. According to Eq. (2), on the SLM we implement the sum of the object phase and the phase modulation masks Ms. The SR-SPAR algorithm has been tested on two phase objects with the invariant amplitude B0=10 and the phase images “cameraman” and the logo “TUT” (see Figs. 3 and 4, middle images in the top rows). All the results presented further in Figs. 3–10 are shown in images of identical structure. Top row: reconstructed and original phases in the left and middle images, respectively, and reconstructed amplitude in the right image. Bottom row: phase (left) and amplitude (right) longitudinal cross sections.

 

Fig. 2. Optical setup. λ/4, retardation plate; P, polarizer; L1, L2, lenses; BS, beamsplitter; SLM, spatial light modulator; CMOS, registration camera.

 

Fig. 3. Simulation—“cameraman” test image, pixel-wise resolution: rS=1, ΔAbbe/Δc=4.8, RMSEphase=0.00253.

 

Fig. 4. Simulation—logo “TUT” test image, pixel-wise resolution: rS=1, ΔAbbe/Δc=4.8, RMSEphase=0.000852.

 

Fig. 5. Simulation—“cameraman” test image, sub-pixel resolution: rS=4, ΔAbbe/Δc=38.4, RMSEphase=0.00228.

 

Fig. 6. Simulation—logo “TUT” test image, sub-pixel resolution: rS=4, ΔAbbe/Δc=38.4, RMSEphase=0.000592.

 

Fig. 7. Experiment—phase object “cameraman,” pixel-wise resolution: rS=1, ΔAbbe/Δc=4.8.

 

Fig. 8. Experiment—phase object logo “TUT,” pixel-wise resolution: rS=1, ΔAbbe/Δc=4.8.

 

Fig. 9. Experiment—phase object “cameraman,” sub-pixel resolution: rS=4, ΔAbbe/Δc=38.4.

 

Fig. 10. Experiment—phase object logo “TUT,” sub-pixel resolution: rS=4, ΔAbbe/Δc=38.4.

In our experiments, both the phase object and the phase modulation masks are implemented on the SLM. Thus, the object’s and SLM’s pixels are equal, i.e., Δo=ΔSLM. The object size is 64×64 in SLM pixels. In both the modeling and the experiments, we use the same parameter values: propagation distance z=21.55 mm, ΔS=1.4 μm, ΔSLM=4×ΔS, Δo=ΔSLM.

The objects and sensor are zero-padded to 4992×4992 pixels, i.e., N=4992. This is the largest dimension allowed by the RAM of our computer. The experiments have been performed with super-resolution factors of rS=1,2,3,4. In what follows, we show the results only for rS=1 (pixel resolution) and rS=4 (sub-pixel resolution). The computational pixels for pixel resolution and sub-pixel resolution take the values Δc=1.4 μm and Δc=0.35 μm, respectively.

The propagation distance z=21.55 mm is the minimum for our optical setup, restricted by the reflective SLM used. For rS=1, the bound in Eq. (12) is zmax=18.4 mm. This upper bound is smaller than the used z=21.55 mm, but since the difference is not significant, the theoretical requirement [Eq. (12)] is approximately fulfilled.

We evaluate the diffraction-limited resolution using Abbe’s criterion:

$$\Delta_{Abbe} = \frac{\lambda}{NA} \cong \frac{2z\lambda}{N\Delta_S}, \tag{15}$$

where NA stands for the numerical aperture, here NA ≅ NΔS/(2z) for a sensor of size NΔS×NΔS at distance z.
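
For the parameters of our setup, Eq. (15) gives (a quick MATLAB check):

  lambda = 532e-9; z = 21.55e-3; dS = 1.4e-6;
  dAbbe_full = 2*z*lambda / (2448 * dS)  % 6.7e-06 m, full 2448 x 2448 sensor part
  dAbbe_crop = 2*z*lambda / (1224 * dS)  % 1.34e-05 m, 1224 x 1224 central part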

In the inverse imaging, the phase retrieval resolution is defined by the computational pixel Δc, provided that good-quality imaging is achieved.

Let us compare these experimentally observed resolutions with the theoretical upper bounds for super-resolution calculated according to Abbe’s criterion.

A square 2448×2448 part of the sensor is used for the pixel resolution imaging; then, according to Eq. (15), ΔAbbe=6.7 μm while Δc=1.4 μm, and it follows that ΔAbbe ≫ Δc.

A smaller central 1224×1224 part of the sensor is used for the sub-pixel resolution imaging with rS=4. According to Eq. (15), then, ΔAbbe=13.4 μm while Δc=1.4/4=0.35 μm, and once more ΔAbbe ≫ Δc.

Thus, super-resolution imaging, in terms of Abbe’s criterion, is demonstrated by SR-SPAR for both pixel and sub-pixel resolution imaging, provided, of course, that the reconstructed images are of good quality.

The ratio ΔAbbe/Δc compares the diffraction-limited and computational resolutions. In the case ΔAbbe/Δc ≫ 1, which is always fulfilled in our tests, the computational resolution is essentially higher than the diffraction-limited resolution as defined by Abbe’s criterion.

B. Simulation Experiments

In this section, we show the results for nearly noiseless observations with the Poissonian scale parameter χ=1000. The way that the SPAR algorithm suppresses the Poissonian noise can be seen in [23], where it is done for pixel resolution and the FT propagation model. Similar strong filtering can be demonstrated for the super-resolution by SR-SPAR. In what follows, the accuracy of reconstruction is evaluated by the root-mean-square error (RMSE) criterion.
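
For reference, the phase RMSE between the reconstructed and true phase distributions is computed as (a one-line sketch with illustrative names):

  rmse_phase = sqrt(mean((phi_hat(:) - phi_true(:)).^2));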

1. Pixel Resolution

In Figs. 3 and 4, we present the results for pixel resolution, i.e., the computational and sensor pixels have equal sizes: Δc=ΔS, Δo=ΔSLM, ΔSLM=4·ΔS, M=8. The latter means that for the modulation phase masks we use SLM superpixels with M=8. It follows from the figures that SR-SPAR demonstrates perfect results in phase reconstruction and negligible errors in the amplitude reconstructions. The amplitude reconstructions show clearly visible squares forming 8×8 grids; these squares are a mapping of the SLM superpixels, each of size 8×8 in SLM pixels. Thus, the total object size is 64×64 in SLM pixels and 256×256 in computational pixels Δc.

We can also note that the amplitude errors are definitely correlated with the phase variations: features of “cameraman” and “TUT” can be seen in the amplitude reconstructions. In the intensity measurements, the amplitude and phase of the object are tangled due to propagation. The algorithm tries to untangle them. The observed cross-talk in the phase/amplitude reconstructions means that this work is not done perfectly.

In order to optimize the accuracy of the reconstructions, we have studied the dependence of the RMSE on the following four parameters: the number of observations L, the number of iterations T, the standard deviation σ of the random phase in the phase modulation masks, and the size M of the SLM superpixels.

In what follows, we illustrate these results by calculating the RMSE for pixel-wise phase reconstruction, rS=1, in experiments with the “cameraman” phase test image.

Broadly similar results have been observed for the “TUT” phase test image with various super-resolution factors rS.

The dependence on the number of observations is presented in Fig. 11. The RMSE is a decreasing function of L, reaching a steady-state value at about L=10. We use this value of L in our simulations; however, for processing the experimental data, we take a larger value, L=20.

 

Fig. 11. Phase RMSE dependence on the number of observations L.

The standard deviation (STD) of the random zero-mean Gaussian phase masks is a crucial parameter for the reconstruction because it influences both the accuracy and the convergence of the algorithm. Figure 12 illustrates the RMSE as a function of the STD. The curve attains its minimum at σ=0.3π, which we adopt for our experiments.

 

Fig. 12. Phase RMSE dependence on the standard deviation σ of zero-mean Gaussian random phase masks in Eq. (11).

Another significant parameter is the SLM superpixel size M. Figure 13 clearly indicates that in simulation the best value is M=1. However, it was observed during the processing of experimental data that larger values of M provide better results. This happens due to cross-talk between adjacent SLM pixels [42]. We accepted M=8 as the optimal value and used it for processing both simulation and experimental data.

 

Fig. 13. Phase RMSE dependence on SLM superpixel size M.

Figure 14 illustrates the performance of the algorithm with respect to the number of iterations, T. The RMSE curve decreases monotonically with steady-state values achieved after about 15 iterations. Nevertheless, for experimental data, we take T=30.

 

Fig. 14. Phase RMSE dependence on iteration number T.

2. Sub-pixel Resolution

In this subsection, we show the results for the super-resolution performance of SR-SPAR with a super-resolution factor of rS=4, i.e., ΔS=rS·Δc, ΔSLM=4·ΔS, ΔAbbe=13.4 μm, Δo=ΔSLM, and Δc=0.35 μm. This computational pixel is smaller than the wavelength λ=532 nm; in this way, we arrive at sub-wavelength resolution. Figures 5 and 6 demonstrate nearly perfect reconstructions, similar to those obtained for pixel resolution.

C. Physical Experiments

The parameters used in the physical experiments correspond exactly to those of the simulation tests. The phase objects “cameraman” and “TUT” are implemented on the SLM with phase ranges of [0, π/5] and [0, π/6], respectively. The computational pixels have sizes of Δc=1.4 μm and Δc=0.35 μm for pixel and sub-pixel resolution, respectively.

In the pixel resolution case, we use a square 2448×2448 part of the sensor; therefore, Abbe’s resolution is ΔAbbe=6.7 μm. In the sub-pixel resolution case, the factor rS=4 means that we would need to operate with (4×2448)×(4×2448) computational pixels, which is not possible on our computer. As a result, we use only a central 1224×1224 part of the sensor; therefore, Abbe’s resolution for this case is ΔAbbe=13.4 μm.

1. Pixel Resolution

The pixel resolution images are shown in Figs. 7 and 8. Despite the large distance z between the object and sensor planes and the strong diffraction effects in the registered diffraction patterns, SR-SPAR gives reliable results, with a clearly visible “cameraman” image in Fig. 7. The phase images “cameraman” and “TUT” are also clearly visible in the amplitude reconstructions. This happens due to the effect discussed above: in the intensity measurements, the amplitude and phase of the object are tangled due to propagation, and the algorithm must untangle them. The observed cross-talk in the phase/amplitude reconstructions means that this work is not done perfectly, and also that the propagation model used is not exact. In the experimental data, additional effects follow from the cross-talk between the programmed phase and amplitude in the SLM.

This occurs, in particular, due to a non-ideality of the SLM, with some leakage from phase to amplitude modulation. The corresponding phase cross section demonstrates that the reconstruction follows the respective changes in the original image. The reconstructions for the “TUT” phase object are also quite acceptable, as can be seen from the phase cross section, which locates the peaks quite accurately. We can also see features of the phase object in the amplitude reconstruction; again, at least partially, this can be attributed to the non-ideality of the SLM. The blurred shape of the reconstructions, as compared with those obtained in the simulation tests, is due to the approximate modeling of the wavefront propagation embedded in SR-SPAR, as defined by Eqs. (5) and (6).

2. Sub-pixel Resolution

The results for sub-pixel super-resolution are presented in Figs. 9 and 10. Again, we can see quite good reconstructions for both the “cameraman” and “TUT” phase objects. The comments on the amplitude reconstructions are identical to those given above for the pixel resolution case. It is interesting that the reconstructions for sub-pixel resolution look very similar to those obtained for pixel resolution. The only difference, clearly visible in the cross sections, is that the sub-pixel resolution images are smoother than the pixel resolution ones. The blurred character of these reconstructions can be attributed to the approximate modeling of wavefront propagation by Eqs. (5) and (6), as well as to the much smaller number of sensor pixels used in these experiments: for pixel-wise imaging we use 2448×2448 sensor pixels, but only 1224×1224 for sub-pixel imaging. This reduction of the used sensor area is due to the increased number of computational pixels required for sub-pixel resolution with rS=4; in this case, 1224×1224 is the largest data array accepted by the RAM of our computer.

Note also that for sub-pixel resolution Δc=ΔS/4 and, for the fixed image support in computational pixels, N=4992, Eq. (12) gives zmax=18.4/16 mm ≈ 1.15 mm. This is much less than the used z=21.55 mm; thus, condition (12) is severely violated. The resulting discrepancy between reality and the model of wavefront propagation is what causes the additional blurring in Figs. 9 and 10.

4. CONCLUSION AND DISCUSSION

Computational sub-pixel super-resolution phase retrieval is considered for phase-coded intensity observations. An iterative algorithm is proposed for a lensless setup with the scalar Rayleigh–Sommerfeld model for wavefront propagation. One of the essential instruments of the algorithm is a sparsity hypothesis applied to both phase and amplitude. The efficiency of the algorithm is confirmed by simulation tests and experiments. It is shown that high-level super-resolution can be achieved with a super-resolution factor of up to 4, i.e., the pixel size of the reconstructed object is four times smaller than the pixel size of the sensor. In comparison with the wavelength, the achieved super-resolution is up to two-thirds the wavelength. The demonstrated computational resolution is essentially higher than the diffraction-limited resolution as defined by Abbe’s criterion.

Further work is planned. First, we are going to use a transmissive (not a reflective) SLM, which allows the propagation distance z to be made much smaller, fulfilling condition (12). Second, we plan to use in our experiments a physical phase test object, which allows observing details with resolution smaller than the SLM pixel size. Third, in the algorithm development, an adaptive correction of the transfer function for wavefront propagation is to be designed.

Finally, we wish to demonstrate that more impressive super-resolution can be achieved for a smaller free-space propagation distance. This is one of the arguments for using a transmissive SLM instead of a reflective one in our future work.

We show the simulation results obtained for z=21.55 mm and z=1 mm. The USAF resolution test chart is used for phase modeling in these experiments. In contrast to the experiments considered above, we assume that the object pixels are smaller than the SLM pixels, Δo=0.35 μm. Recall that ΔS=1.4 μm and ΔSLM=4×ΔS. For these experiments, we use only a central 150×150 part of the USAF chart, in order to reduce the size of the reconstructed images and to deal only with the smallest elements of the test chart. The results are shown in Fig. 15 for sub-pixel resolution with rS=4. As in our previous figures, we show the phase reconstructions and the true phase in the first row of images and the cross sections in the second row. It is clear that the reconstruction for the smaller distance z=1 mm is much sharper and much more accurate than the reconstruction for z=21.55 mm, which is actually quite blurred. A comparison of the cross sections allows further quantitative conclusions on the achieved resolution.

 

Fig. 15. Simulation. USAF test chart phase image with different propagation distances, sub-pixel resolution, and rS=4. Top row, phase images—on the left and right are reconstructions for z=21.55  mm and z=1  mm, respectively, while the original phase is in the middle. Bottom row—longitudinal cross sections: left for areas 2–4 of the USAF test chart (horizontal cross section of vertical lines) and right for areas 3–5 of the USAF test chart (vertical cross section of horizontal lines). The arrows (red) on the original phase image show the locations of the cross sections.

The left cross-section plot is taken along the horizontal cross section in areas 2–4 of the USAF image. These vertical lines have a width equal to two object pixels. It can be seen from the plots that SR-SPAR successfully resolves these lines for both distances, z=21.55 mm and z=1 mm. While the accuracy for z=1 mm is essentially better, we can claim that in both cases a resolution of 0.7 μm (the distance between two vertical lines) is demonstrated.

The right cross-section plot is taken along the vertical cross section in areas 3–5 of the USAF image, where the distance between the horizontal lines is equal to one pixel, i.e., 0.35 μm. We can see from this plot that SR-SPAR successfully resolves these lines for z=1 mm but fails to do so for z=21.55 mm. Thus, a better resolution of 0.35 μm is achieved for z=1 mm, as compared with the lower resolution of 0.7 μm achieved for z=21.55 mm.

Funding

Suomen Akatemia (287150); Ministry of Education and Science of the Russian Federation (Minobrnauka) (3.1893.2017/4.6); Horizon 2020 Framework Programme (H2020) (687328 - HOLO).

Acknowledgment

The authors would like to thank anonymous reviewers for helpful and stimulating comments.

REFERENCES

1. R. K. Tyson, Principles of Adaptive Optics, 4th ed. (CRC Press, 2014).

2. L. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

3. B. Kress and P. Meyrueis, Applied Digital Optics: from Micro-Optics to Nanooptics (Wiley, 2009).

4. E. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).

5. D. L. Misell, “An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations,” J. Phys. D 6, 2200–2216 (1973). [CrossRef]  

6. W. O. Saxton, “Correction of artefacts in linear and nonlinear high resolution electron micrographs,” J. Microsc. Spectrosc. Electron. 5, 665–674 (1980).

7. G. Pedrini, W. Osten, and Y. Zhang, “Wave-front reconstruction from a sequence of interferograms recorded at different planes,” Opt. Lett. 30, 833–835 (2005). [CrossRef]  

8. P. Almoro, G. Pedrini, and W. Osten, “Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field,” Appl. Opt. 45, 8596–8605 (2006). [CrossRef]  

9. P. Almoro, A. M. S. Maallo, and S. Hanson, “Fast-convergent algorithm for speckle-based phase retrieval and a design for dynamic wavefront sensing,” Appl. Opt. 48, 1485–1493 (2009). [CrossRef]  

10. C. Kohler, F. Zhang, and W. Osten, “Characterization of a spatial light modulator and its application in phase retrieval,” Appl. Opt. 48, 4003–4008 (2009). [CrossRef]  

11. L. Camacho, V. Micy, Z. Zalevsky, and J. Garcha, “Quantitative phase microscopy using defocussing by means of a spatial light modulator,” Opt. Express 18, 6755–6766 (2010). [CrossRef]  

12. M. Agour, P. Almoro, and C. Falldorf, “Investigation of smooth wave fronts using SLM-based phase retrieval and a phase diffuser,” J. Eur. Opt. Soc. Rapid. Publ. 7, 12046 (2012). [CrossRef]  

13. P. Almoro, G. Pedrini, P. N. Gundu, W. Osten, and S. G. Hanson, “Enhanced wavefront reconstruction by random phase modulation with a phase diffuser,” Opt. Lasers Eng. 49, 252–257 (2011). [CrossRef]  

14. E. J. Candès, X. Lib, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015). [CrossRef]  

15. P. Almoro, D. P. Quang Duc, I. Serrano-Garcia David, H. Satoshi, H. Yoshio, T. Mitsuo, and Y. Toyohiko, “Enhanced intensity variation for multiple-plane phase retrieval using spatial light modulator as convenient tunable diffuser,” Opt. Lett. 41, 2161–2164 (2016). [CrossRef]  

16. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).

17. R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

18. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21, 2758–2769 (1982). [CrossRef]  

19. C. Guo, S. Liu, and J. T. Sheridan, “Iterative phase retrieval algorithms. I: optimization,” Appl. Opt. 54, 4698–4708 (2015). [CrossRef]  

20. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015). [CrossRef]  

21. E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory 61, 1985–2007 (2015). [CrossRef]  

22. Y. Chen and E. J. Candès, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” 2015, http://statweb.stanford.edu/~candes/papers/TruncatedWF.pdf.

23. V. Katkovnik, “Phase retrieval from noisy data based on sparse approximation of object phase and amplitude,” 2015, http://www.cs.tut.fi/~lasip/DDT/pdfs/Single_column.pdf.

24. Q. Lian, B. Shi, and S. Chen, “Transfer orthogonal sparsifying transform learning for phase retrieval,” Digit Signal Process. 62, 11–25 (2017). [CrossRef]  

25. A. Fannjiang, “Absolute uniqueness of phase retrieval with random illumination,” Inverse Probl. 28, 075008 (2012). [CrossRef]  

26. W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011). [CrossRef]  

27. K. Guo, S. Dong, P. Nanda, and G. Zheng, “Optimization of sampling pattern and the design of Fourier ptychographic illuminator,” Opt. Express 23, 6171–6180 (2015). [CrossRef]  

28. A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013). [CrossRef]  

29. P. Almoro, G. Pedrini, and W. Osten, “Aperture synthesis in phase retrieval using a volume-speckle field,” Opt. Lett. 32, 733–735 (2007). [CrossRef]  

30. S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express 17, 23920–23946 (2009). [CrossRef]  

31. J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016). [CrossRef]  

32. V. Katkovnik and K. Egiazarian, “Sparse super-resolution phase retrieval from phase-coded noisy intensity patterns,” 2017, http://www.cs.tut.fi/sgn/imaging/sparse/.

33. V. Katkovnik and J. Astola, “High-accuracy wavefield reconstruction: decoupled inverse imaging with sparse modeling of phase and amplitude,” J. Opt. Soc. Am. A 29, 44–54 (2012). [CrossRef]  

34. V. Katkovnik and J. Astola, “Sparse ptychographical coherent diffractive imaging from noisy measurements,” J. Opt. Soc. Am. A 30, 367–379 (2013). [CrossRef]  

35. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007). [CrossRef]  

36. A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Process. 21, 1715–1728 (2012). [CrossRef]  

37. N. Verrier and M. Atlan, “Off-axis digital hologram reconstruction: some practical considerations,” Appl. Opt. 50, H136–H146 (2011). [CrossRef]  

38. F. Zhang, G. Pedrini, and W. Osten, “Reconstruction algorithm for high-numerical-aperture holograms with diffraction-limited resolution,” Opt. Lett. 31, 1633–1635 (2006). [CrossRef]  

39. A. F. Doval and C. Trillo, “Dimensionless formulation of the convolution and angular spectrum reconstruction methods in digital holography,” Proc. SPIE 7387, 73870U (2010). [CrossRef]  

40. J. M. Bioucas-Dias and G. Valadão, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16, 698–709 (2007). [CrossRef]  

41. Matlab Demo code, http://dx.doi.org/10.6084/m9.figshare.4833932.

42. P. Gemayel, B. Colicchio, A. Dieterlen, and P. Ambs, “Cross-talk compensation of a spatial light modulator for iterative phase retrieval applications,” Appl. Opt. 55, 802–810 (2016). [CrossRef]  

[Crossref]

Im, H.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Iwamoto, Y.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Jeong, S.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Katkovnik, V.

V. Katkovnik and J. Astola, “Sparse ptychographical coherent diffractive imaging from noisy measurements,” J. Opt. Soc. Am. A 30, 367–379 (2013).
[Crossref]

V. Katkovnik and J. Astola, “High-accuracy wavefield reconstruction: decoupled inverse imaging with sparse modeling of phase and amplitude,” J. Opt. Soc. Am. A 29, 44–54 (2012).
[Crossref]

A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Process. 21, 1715–1728 (2012).
[Crossref]

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

Khademhosseinieh, B.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013).
[Crossref]

Kohler, C.

Kress, B.

B. Kress and P. Meyrueis, Applied Digital Optics: from Micro-Optics to Nanooptics (Wiley, 2009).

Lee, H.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Li, X.

E. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).

E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory, 61, 1985–2007 (2015).
[Crossref]

Lian, Q.

Q. Lian, B. Shi, and S. Chen, “Transfer orthogonal sparsifying transform learning for phase retrieval,” Digit Signal Process. 62, 11–25 (2017).
[Crossref]

Lib, X.

E. J. Candès, X. Lib, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).
[Crossref]

Liu, S.

C. Guo, S. Liu, and J. T. Sheridan, “Iterative phase retrieval algorithms. I: optimization,” Appl. Opt 54, 4698–4708 (2015).
[Crossref]

Luckhart, S.

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Luo, W.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013).
[Crossref]

Maallo, A. M. S.

Meyrueis, P.

B. Kress and P. Meyrueis, Applied Digital Optics: from Micro-Optics to Nanooptics (Wiley, 2009).

Miao, J.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

Micy, V.

Misell, D. L.

D. L. Misell, “An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations,” J. Phys. D 6, 2200–2216 (1973).
[Crossref]

Mitsuo, T.

Mudanyali, O.

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Nanda, P.

Osten, W.

Ozcan, A.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013).
[Crossref]

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Pathania, D.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Pedrini, G.

Pivovarov, M.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Quang Duc, D. P.

Satoshi, H.

Saxton, W. O.

W. O. Saxton, “Correction of artefacts in linear and nonlinear high resolution electron micrographs,” J. Microsc. Spectrosc. Electron. 5, 665–674 (1980).

R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Segev, M.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, “Super-resolution and reconstruction of sparse sub-wavelength images,” Opt. Express 17, 23920–23946 (2009).
[Crossref]

Serrano-Garcia David, I.

Shechtman, Y.

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

Sheridan, J. T.

C. Guo, S. Liu, and J. T. Sheridan, “Iterative phase retrieval algorithms. I: optimization,” Appl. Opt 54, 4698–4708 (2015).
[Crossref]

Shi, B.

Q. Lian, B. Shi, and S. Chen, “Transfer orthogonal sparsifying transform learning for phase retrieval,” Digit Signal Process. 62, 11–25 (2017).
[Crossref]

Sikora, U.

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Soltanolkotabi, M.

E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory, 61, 1985–2007 (2015).
[Crossref]

E. J. Candès, X. Lib, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).
[Crossref]

E. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).

Song, J.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Su, T. W.

A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013).
[Crossref]

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Swisher, C. L.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Szameit, A.

Toyohiko, Y.

Trillo, C.

A. F. Doval and C. Trillo, “Dimensionless formulation of the convolution and angular spectrum reconstruction methods in digital holography,” Proc. SPIE 7387, 73870U (2010).
[Crossref]

Tyson, R. K.

R. K. Tyson, Principles of Adaptive Optics, 4th ed. (CRC Press, 2014).

Valadão, G.

J. M. Bioucas-Dias and G. Valadão, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16, 698–709 (2007).
[Crossref]

Verrier, N.

Wang, L.

L. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

Weissleder, R.

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

Wu, H.

L. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

Yaglidere, O.

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Yoshio, H.

Zalevsky, Z.

Zhang, F.

Zhang, Y.

Zheng, G.

Appl. Comput. Harmon. Anal. (2)

E. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).

E. J. Candès, X. Lib, and M. Soltanolkotabi, “Phase retrieval from coded diffraction patterns,” Appl. Comput. Harmon. Anal. 39, 277–299 (2015).
[Crossref]

Appl. Opt (1)

C. Guo, S. Liu, and J. T. Sheridan, “Iterative phase retrieval algorithms. I: optimization,” Appl. Opt 54, 4698–4708 (2015).
[Crossref]

Appl. Opt. (6)

Digit Signal Process. (1)

Q. Lian, B. Shi, and S. Chen, “Transfer orthogonal sparsifying transform learning for phase retrieval,” Digit Signal Process. 62, 11–25 (2017).
[Crossref]

IEEE Signal Process. Mag. (1)

Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, “Phase retrieval with application to optical imaging: a contemporary overview,” IEEE Signal Process. Mag. 32(3), 87–109 (2015).
[Crossref]

IEEE Trans. Image Process. (3)

K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, “Image denoising by sparse 3D transform-domain collaborative filtering,” IEEE Trans. Image Process. 16, 2080–2095 (2007).
[Crossref]

A. Danielyan, V. Katkovnik, and K. Egiazarian, “BM3D frames and variational image deblurring,” IEEE Trans. Image Process. 21, 1715–1728 (2012).
[Crossref]

J. M. Bioucas-Dias and G. Valadão, “Phase unwrapping via graph cuts,” IEEE Trans. Image Process. 16, 698–709 (2007).
[Crossref]

IEEE Trans. Inf. Theory (1)

E. J. Candès, X. Li, and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory, 61, 1985–2007 (2015).
[Crossref]

Inverse Probl. (1)

A. Fannjiang, “Absolute uniqueness of phase retrieval with random illumination,” Inverse Probl. 28, 075008 (2012).
[Crossref]

J. Eur. Opt. Soc. Rapid. Publ. (1)

M. Agour, P. Almoro, and C. Falldorf, “Investigation of smooth wave fronts using SLM-based phase retrieval and a phase diffuser,” J. Eur. Opt. Soc. Rapid. Publ. 7, 12046 (2012).
[Crossref]

J. Microsc. Spectrosc. Electron. (1)

W. O. Saxton, “Correction of artefacts in linear and nonlinear high resolution electron micrographs,” J. Microsc. Spectrosc. Electron. 5, 665–674 (1980).

J. Opt. Soc. Am. A (2)

J. Phys. D (1)

D. L. Misell, “An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations,” J. Phys. D 6, 2200–2216 (1973).
[Crossref]

Lab Chip (1)

W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011).
[Crossref]

Opt. Express (3)

Opt. Lasers Eng. (1)

P. Almoro, G. Pedrini, P. N. Gundu, W. Osten, and S. G. Hanson, “Enhanced wavefront reconstruction by random phase modulation with a phase diffuser,” Opt. Lasers Eng. 49, 252–257 (2011).
[Crossref]

Opt. Lett. (4)

Optik (1)

R. W. Gerchberg and W. O. Saxton, “A practical algorithm for the determination of phase from image and diffraction plane pictures,” Optik 35, 237–246 (1972).

Proc. SPIE (1)

A. F. Doval and C. Trillo, “Dimensionless formulation of the convolution and angular spectrum reconstruction methods in digital holography,” Proc. SPIE 7387, 73870U (2010).
[Crossref]

Sci. Rep. (2)

J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6, 24681 (2016).
[Crossref]

A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, “Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy,” Sci. Rep. 3, 1717 (2013).
[Crossref]

Other (8)

Y. Chen and E. J. Candès, “Solving random quadratic systems of equations is nearly as easy as solving linear systems,” 2015, http://statweb.stanford.edu/~candes/papers/TruncatedWF.pdf .

V. Katkovnik, “Phase retrieval from noisy data based on sparse approximation of object phase and amplitude,” 2015, http://www.cs.tut.fi/~lasip/DDT/pdfs/Single_column.pdf .

V. Katkovnik and K. Egiazarian, “Sparse super-resolution phase retrieval from phase-coded noisy intensity patterns,” 2017, http://www.cs.tut.fi/sgn/imaging/sparse/ .

J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company, 2005).

R. K. Tyson, Principles of Adaptive Optics, 4th ed. (CRC Press, 2014).

L. Wang and H. Wu, Biomedical Optics: Principles and Imaging (Wiley, 2007).

B. Kress and P. Meyrueis, Applied Digital Optics: from Micro-Optics to Nanooptics (Wiley, 2009).

Matlab Demo code, http://dx.doi.org/10.6084/m9.figshare.4833932.

Supplementary Material

Code 1: MATLAB demo code, http://dx.doi.org/10.6084/m9.figshare.4833932.


Figures (15)

Fig. 1. Flow chart of the SR-SPAR algorithm.
Fig. 2. Optical setup. λ/4, retardation plate; P, polarizer; L1, L2, lenses; BS, beamsplitter; SLM, spatial light modulator; CMOS, registration camera.
Fig. 3. Simulation, "cameraman" test image, pixel-wise resolution: r_S = 1, Δ_Abbe/Δ_c = 4.8, RMSE_phase = 0.00253.
Fig. 4. Simulation, logo "TUT" test image, pixel-wise resolution: r_S = 1, Δ_Abbe/Δ_c = 4.8, RMSE_phase = 0.000852.
Fig. 5. Simulation, "cameraman" test image, sub-pixel resolution: r_S = 4, Δ_Abbe/Δ_c = 38.4, RMSE_phase = 0.00228.
Fig. 6. Simulation, logo "TUT" test image, sub-pixel resolution: r_S = 4, Δ_Abbe/Δ_c = 38.4, RMSE_phase = 0.000592.
Fig. 7. Experiment, phase object "cameraman," pixel-wise resolution: r_S = 1, Δ_Abbe/Δ_c = 4.8.
Fig. 8. Experiment, phase object logo "TUT," pixel-wise resolution: r_S = 1, Δ_Abbe/Δ_c = 4.8.
Fig. 9. Experiment, phase object "cameraman," sub-pixel resolution: r_S = 4, Δ_Abbe/Δ_c = 38.4.
Fig. 10. Experiment, phase object logo "TUT," sub-pixel resolution: r_S = 4, Δ_Abbe/Δ_c = 38.4.
Fig. 11. Phase RMSE as a function of the number of observations L.
Fig. 12. Phase RMSE as a function of the standard deviation σ of the zero-mean Gaussian random phase masks in Eq. (11).
Fig. 13. Phase RMSE as a function of the SLM superpixel size M.
Fig. 14. Phase RMSE as a function of the iteration number T.
Fig. 15. Simulation. USAF test chart phase image for different propagation distances, sub-pixel resolution, r_S = 4. Top row, phase images: reconstructions for z = 21.55 mm (left) and z = 1 mm (right), with the original phase in the middle. Bottom row, longitudinal cross sections: left, areas 2–4 of the USAF test chart (horizontal cross section of vertical lines); right, areas 3–5 (vertical cross section of horizontal lines). The red arrows on the original phase image mark the locations of the cross sections.

Equations (15)

(1) $y_s = |P_s\{u_o\}|^2, \quad s = 1, \dots, L$
(2) $u_s = P_s\{u_o\} = P\{M_s \circ u_o\}$
(3) $y_s = |P\{M_s \circ u_o\}|^2, \quad s = 1, \dots, L$
(4) $z_s = G\{|u_s|^2\}, \quad s = 1, \dots, L$
(5) $u_s(x, y, z) = \mathcal{F}^{-1}\!\left[ H(f_x, f_y, z) \cdot \mathcal{F}[u_s(x, y, 0)] \right]$
(6) $H(f_x, f_y, z) = \begin{cases} \exp\!\left[ i \dfrac{2\pi}{\lambda} z \sqrt{1 - \lambda^2 (f_x^2 + f_y^2)} \right], & f_x^2 + f_y^2 \le \dfrac{1}{\lambda^2}, \\ 0, & \text{otherwise}, \end{cases}$
(7) $p(z_s[l] = k) = \dfrac{\exp(-y_s[l]\chi)\, (y_s[l]\chi)^k}{k!}$
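Eqs. (1)–(7) specify the complete simulated observation path: the object is multiplied by a phase mask (Eq. (2)), propagated to the sensor plane by the angular spectrum transfer function (Eqs. (5) and (6)), and registered as Poissonian photon counts (Eqs. (4) and (7)). The following NumPy sketch walks through these steps for a single mask; the grid size, wavelength, pixel pitch, propagation distance, mask strength, and scaling factor chi are illustrative assumptions, not the values of the reported experiments.

    import numpy as np

    def angular_spectrum(u0, wavelength, z, pitch):
        """Propagate u0 over distance z via the AS transfer function, Eqs. (5)-(6)."""
        n = u0.shape[0]
        fx = np.fft.fftfreq(n, d=pitch)
        fy = np.fft.fftfreq(n, d=pitch)
        FX, FY = np.meshgrid(fx, fy, indexing="ij")
        arg = 1.0 - wavelength**2 * (FX**2 + FY**2)
        # Evanescent frequencies (arg < 0) are zeroed, as in Eq. (6).
        H = np.where(arg >= 0,
                     np.exp(1j * 2 * np.pi / wavelength * z
                            * np.sqrt(np.maximum(arg, 0.0))),
                     0.0)
        return np.fft.ifft2(H * np.fft.fft2(u0))

    # Illustrative parameters (assumed, not the paper's experimental values).
    rng = np.random.default_rng(0)
    N, wavelength, pitch, z = 256, 532e-9, 3.45e-6, 21.55e-3
    chi = 1e4                                        # photon-count scaling of Eq. (7)

    u_o = np.exp(1j * 0.5 * rng.random((N, N)))      # toy phase object, |u_o| = 1
    M_s = np.exp(1j * rng.normal(0.0, 0.6, (N, N)))  # one random phase mask, Eq. (2)

    u_s = angular_spectrum(M_s * u_o, wavelength, z, pitch)  # sensor-plane wavefront
    y_s = np.abs(u_s)**2                             # noiseless intensity, Eq. (3)
    z_s = rng.poisson(y_s * chi)                     # Poissonian counts, Eqs. (4), (7)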
(8) $\hat{\varphi}_o = \mathrm{BM3D}_{\mathrm{phase}}(\varphi_o, th_{\varphi})$
(9) $\hat{B}_o = \mathrm{BM3D}_{\mathrm{ampl}}(B_o, th_B)$
(10) $\phi_{n,m}^{(s)} = \epsilon_{n,m}^{(s)}$
(11) $M_s(n, m) = \exp\!\left( j \phi_{n,m}^{(s)} \cdot \sigma \right)$
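Eqs. (10) and (11) define the SLM modulation: the mask phases are zero-mean Gaussian variables scaled by the standard deviation σ studied in Fig. 12, and in the superpixel regime of Fig. 13 each phase value is repeated over an M × M block of SLM pixels. A minimal sketch, assuming i.i.d. standard normal ε and illustrative values of σ, M, and the number of masks L:

    import numpy as np

    def random_phase_mask(n, sigma, superpixel=1, rng=None):
        """Random phase mask of Eqs. (10)-(11); superpixel > 1 repeats each phase
        over a superpixel x superpixel block of SLM pixels (cf. Fig. 13)."""
        rng = rng or np.random.default_rng()
        m = -(-n // superpixel)                # superpixels per side (ceil division)
        eps = rng.standard_normal((m, m))      # epsilon of Eq. (10)
        phi = np.kron(eps, np.ones((superpixel, superpixel)))[:n, :n]
        return np.exp(1j * sigma * phi)        # M_s of Eq. (11)

    # For example, L = 12 masks of size 256 x 256 with sigma = pi/2 and M = 2.
    masks = [random_phase_mask(256, np.pi / 2, superpixel=2) for _ in range(12)]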
(12) $z \le z_{\max} = N \Delta_c^2 / \lambda$
(13) $\mathcal{L} = \sum_{s=1}^{L} \sum_{k,l} \left[ |u_s[k,l]|^2 \chi - z_s[k,l] \log\!\left( |u_s[k,l]|^2 \chi \right) \right]$
(14) $b_s = \dfrac{|v_s| + \sqrt{|v_s|^2 + 4 \tilde{z}_s \gamma_1 (1 + \gamma_1 \chi)}}{2 (1 + \gamma_1 \chi)}$
(15) $\Delta_{\mathrm{Abbe}} = \dfrac{\lambda}{\mathrm{NA}} \simeq \dfrac{2 z \lambda}{N \Delta_S}$
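Eq. (13) is the negative log-likelihood of the Poissonian observations of Eq. (7) (up to an additive constant), Eq. (14) is the resulting closed-form update of the sensor-plane amplitude b_s from the predicted wavefront v_s and the observed counts z̃_s with regularization parameter γ1, and Eq. (15) is the Abbe limit for the lensless geometry. The sketch below is a minimal rendering of these formulas; the parameter values and the small constant guarding the logarithm are assumptions.

    import numpy as np

    def neg_log_likelihood(u_s, z_s, chi, eps=1e-12):
        """Poissonian fidelity term of Eq. (13) for a single observation s.
        eps guards log(0); it is a numerical safeguard, not part of Eq. (13)."""
        intensity = np.abs(u_s)**2
        return np.sum(intensity * chi - z_s * np.log(intensity * chi + eps))

    def amplitude_update(v_s, z_tilde_s, chi, gamma1):
        """Closed-form sensor-plane amplitude update of Eq. (14)."""
        a = np.abs(v_s)
        return (a + np.sqrt(a**2 + 4 * z_tilde_s * gamma1 * (1 + gamma1 * chi))) \
               / (2 * (1 + gamma1 * chi))

    # Abbe resolution limit of Eq. (15), with NA approximated by N * Delta_S / (2 z);
    # the geometry values are illustrative assumptions.
    N, wavelength, z, delta_S = 256, 532e-9, 21.55e-3, 3.45e-6
    delta_abbe = 2 * z * wavelength / (N * delta_S)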
