Optica Publishing Group

Adaptive sparse reconstruction for lensless digital holography via PSF estimation and phase retrieval

Open Access

Abstract

In-line lensless digital holography has great potential in multiple applications; however, reconstructing high-quality images from a single recorded hologram is challenging due to the loss of phase information. Typical reconstruction methods are based on solving a regularized inverse problem and work well under suitable image priors, but they are extremely sensitive to mismatches between the forward model and the actual imaging system. This paper aims to improve the robustness of such algorithms by introducing the adaptive sparse reconstruction method, ASR, which learns a properly constrained point spread function (PSF) directly from data, as opposed to solely relying on physics-based approximations of it. ASR jointly performs holographic reconstruction, PSF estimation, and phase retrieval in an unsupervised way by maximizing the sparsity of the reconstructed images. Like traditional methods, ASR uses the image formation model along with a sparsity prior, which, unlike recent deep learning approaches, allows for unsupervised reconstruction with as few as one sample. Experimental results in synthetic and real data show the advantages of ASR over traditional reconstruction methods, especially in cases where the theoretical PSF does not match that of the actual system.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Lensless digital holography is achieved by illuminating a specimen with partially coherent light and recording its diffraction pattern (or hologram) [1]. As the name suggests, lensless imaging systems obviate lenses, enabling the design of devices that are not only more compact but also less expensive than lens-based systems [2]. Conventional lensless imaging configurations have a magnification factor close to one, allowing for large fields of view. Moreover, since lensless imaging devices do not need to focus on a given distance along the optical path, they can record holograms that contain information about multiple planes across the specimen volume, and focusing can be done computationally post facto. All these features make lensless imaging a promising alternative to traditional lens-based systems in multiple applications [3,4], including those related to biomedical sciences, as it allows for the design of small, high-throughput, low-cost devices [5,6]. In fact, lensless imaging has already been successfully used to measure biological specimens such as blood [7–11], urine [12], semen [13], and pap smears [7,8,14,15].

However, despite these advantages and use cases, lensless digital holography still poses substantial challenges. For example, while the unit magnification factor conveniently results in a large field of view, it can also significantly limit the resolution of the system [1,11]. Additionally, since the diffraction pattern is measured at the sensor plane, reconstruction algorithms are needed to retrieve relevant data at the plane of interest. In principle, back-propagating the measured signal to the specimen plane by deconvolving with the point spread function (PSF) of the system for a given focal depth would lead to perfect reconstruction if the measured signal was the full complex-valued diffraction pattern and the PSF of the system was known. However, imaging sensors measure only the intensity of the diffraction pattern and in practice the PSF of the system can only be modeled or estimated as it is often not known exactly. The loss of phase information makes simple deconvolution highly prone to artifacts, for which common solutions involve modifying the optical system to an off-axis configuration [16] or acquiring multiple measurements of the same specimen at known distances [7,14,15,17]. Both approaches increase the cost and complexity of the imaging system, which, as a consequence, limit the potential of lensless digital holography in certain applications. Thus, we focus on improving the reconstruction algorithms for a single hologram acquired in the standard in-line setup.

As an inverse problem, holographic reconstruction has typically been addressed by estimating the parameters of a generative model. Since the physics of optical systems is relatively well understood, it is natural to leverage the image formation model for this purpose, which describes the recorded hologram as a function of the illumination wavefront, the modulation introduced by the specimen, and the PSF that models the diffraction from the specimen to the sensor. However, even if the illumination wavefront and the PSF were known, this image formation model is not invertible due to the lack of phase information at the sensor plane, and thus reconstruction algorithms must rely on priors to solve the problem [4,15]. Sparsity has been identified as a relevant prior in different areas, including biomedical applications, since critical specimens such as blood or urine are expected to correspond to reconstructed images in which most of the pixels represent background as opposed to objects. Several methods using sparse priors to perform lensless holographic reconstruction have been proposed [15,18,19]; however, they heavily rely on physics-based approximations of the PSF, and therefore are not robust to changes in the optical path or deviations from the nominal parameters of the system. In fact, recent works in lensless digital holography [20] and other imaging modalities [21–24] have emphasized the need for learning or adapting the PSF to the data at hand.

Initial attempts at PSF learning estimated parameters of a PSF with a specified form [21,22,24,25], which inherently restricts the space of PSF models that can be considered. For example, autofocus (AF) algorithms express the PSF as a function of the focal depth, evaluate a set of candidate depths, and select the best one according to a given criterion [25]. Alternatively, current efforts are moving towards learning PSF models with higher degrees of freedom [20,23]. For example, in [23] generative adversarial networks (GANs) are used to perform blind deconvolution in epifluorescence microscopy, in which the PSF is modeled as a one-channel convolutional neural network. Similarly, in [20] two GANs are jointly trained to perform reconstruction and PSF learning for lensless holography of natural images. Unlike our work, these approaches do not deal with the loss of phase information, either because of the imaging modality they are working with [23], or because of the assumption of the far-field regime [20]. Moreover, both rely on supervised training at the level of samples and/or distributions, therefore requiring large amounts of ground truth data, which are rarely available. Hence, although the need for adapting the PSF to data has been established, there is a lack of methods that can jointly perform phase retrieval, holographic reconstruction, and PSF estimation with limited (or no) supervision.

As an alternative to traditional holographic reconstruction methods, black-box approaches that use deep learning architectures to model the transformation from holograms to reconstructions have gained significant attention in recent years [3,8,26]. Given their black-box nature, these methods do not explicitly perform phase retrieval or PSF estimation; nevertheless, they have achieved high levels of reconstruction performance. However, since little to no prior knowledge is involved in the architectural design, large amounts of annotated data are needed to learn the transformation, which limits their performance on datasets different from those used for training.

In this context, the main contribution of this paper is the adaptive sparse reconstruction method (ASR), an unsupervised method that increases the reconstruction quality of lensless digital holograms by learning to adapt a properly constrained PSF in a data-driven manner. The ASR method jointly performs phase retrieval, holographic reconstruction, and PSF estimation by maximizing the sparsity of the reconstructed images. Similar to traditional holographic reconstruction methods, ASR leverages the image formation model along with a sparsity assumption, allowing for unsupervised training with as few as one sample hologram. However, notably different from traditional methods, ASR introduces a learnable PSF, which increases the richness of the model in a meaningful way. Results in synthetic and experimental data show the advantages of ASR over traditional unsupervised reconstruction and autofocus algorithms, especially in cases where the theoretical PSF does not match that of the actual system (e.g., in the presence of phase aberrations, spherical illumination, or deviations from nominal optical parameters). The code will be publicly available at https://github.com/carolina-pacheco/ASR following publication.

2. Methods

Let $H^2\in \mathbb {R}^{m\times n}$ denote a hologram corresponding to the intensity of a diffraction pattern recorded by an in-line lensless imaging device. Given its amplitude, $H$, the reconstruction task is to find a complex-valued matrix, $X\in \mathbb {C}^{m\times n}$, that represents the modulation introduced by the specimen. Here we are interested in specimens for which most pixels are expected to represent background, which allows the assumption that $X\in \mathbb {C}^{m\times n}$ is a sparse matrix. This assumption corresponds to its magnitude $|X|\in \mathbb {R}^{m\times n}$ being sparse. Notably, we consider the practical scenario where there is no access to the true value of $X$ during training (i.e., fully unsupervised reconstruction).

In this section we first describe the sparse phase recovery method (SPR) introduced in [19], which efficiently solves the holographic reconstruction problem by using sparse regularization and alternating minimization. We then present our proposed method, adaptive sparse reconstruction (ASR), which aims to improve over SPR by introducing a learnable PSF and a loss function that directly attempts to minimize the number of non-zero pixels in the reconstructed image.

2.1 Sparse phase recovery (SPR)

The SPR method [19] reconstructs the image $X$ by solving the following optimization problem

$$\min_{X,W,\mu} \dfrac{1}{2} \lVert H\odot W -\mu\textbf{1}-T\ast X\lVert_F^2 + \gamma \lVert X \lVert_1\quad \text{s.t.}\quad \lvert W \lvert = \textbf{1}.$$
Here $\odot$ denotes element-wise matrix product, $W\in \mathbb {C}^{m\times n}$ is the phase of the hologram, and therefore the constraint $\lvert W\lvert =\textbf {1}$ ensures that only the phase of $H$ is modulated, $\textbf {1}\in \mathbb {R}^{m\times n}$ is a matrix full of ones, $\ast$ denotes circular convolution, $\mu \in \mathbb {C}$ represents the constant background signal at the sensor plane, and $T\in \mathbb {C}^{m\times n}$ is the PSF of the system. $\lVert \cdot \lVert _F$ and $\lVert \cdot \lVert _1$ denote the Frobenius and $L_1$ matrix norms, respectively. The first term of the loss function in (1) is a data fidelity term, between the data at the sensor plane, $H\odot W$, and the model, $T\ast X +\mu \textbf {1}$, and the second term promotes the sparsity of the reconstructed image. The sparsity parameter, $\gamma \in \mathbb {R}^+$, modulates the relative importance of the sparsity-inducing regularizer in the optimization.

The SPR method assumes that the PSF of the system, $T\in \mathbb {C}^{m\times n}$, is known, and solves the optimization problem in (1) by alternating minimization, with a closed-form solution for each subproblem (see Section 3 in [19] for details). The PSF is obtained by using the wide angular spectrum approximation (WAS) [16,19], which requires precise knowledge of the optical system, including the wavelength of the light source, the pixel size of the sensor, and the distance between the sample and the sensor. As a result, SPR works well on sparse samples when the parameters of the optical system are known and fixed; however, its performance is extremely sensitive to changes in the nominal parameters of the system or the optical path.

2.2 Adaptive sparse reconstruction (ASR)

In this section we introduce the adaptive sparse reconstruction method, ASR, which extends SPR by allowing it to learn the PSF of the system in a data-driven fashion. Naturally, one might consider directly incorporating the PSF as an optimization variable in (1); however, this leads to degenerate solutions. For example, given a single non-zero pixel in $X$ and any $H$, the PSF, $T$, can be chosen such that the data fidelity term in (1) is $0$. To circumvent this problem, optical principles can be incorporated to constrain the model. In particular, one reasonable assumption for the PSFs of lensless imaging systems is that the linear operator they define, $\mathcal {T}(X)\equiv T\ast X$, is unitary, which corresponds to an energy-preserving transformation. This property is satisfied by physics-based approximations of the PSF [16], and is equivalent to requiring the Fourier coefficients of the PSF to lie on the unit circle, i.e. $\lvert \mathcal {F}\left \{T\right \}\lvert =\textbf {1}$, where $\mathcal {F}\left \{\cdot \right \}$ denotes the 2D Fourier transform. As a consequence of this constraint, the back-propagation from the sensor to the specimen plane is given by a convolution with the complex conjugate of $T$, denoted as $T^*$ – i.e., $\forall Y \in \mathbb {C}^{m \times n}, \ \ T^* * (T*Y) = Y$. Thus, $T^*$ can be understood as the inverse of the PSF operator. Moreover, due to the fact that convolutions with $T$ (and $T^*$) are unitary operators, for any given $W$, $\mu$, and $T$, one can find the optimal value for $X$ in (1) in closed form as,

$$X_{opt} = \mathop{\textrm{arg}\,\textrm{min}}\limits_X \frac{1}{2} \| (H \odot W - \mu \mathbf{1})*T^* - X\|_F^2 + \gamma \|X\|_1 = SFT_\gamma ((H \odot W - \mu \mathbf{1})*T^*)$$
where $SFT_\gamma (\cdot )$ denotes the complex-valued soft-thresholding operator applied entry-wise:
$$SFT_\gamma(a) = \begin{cases} 0 & \text{if } |a| \leq \gamma \\ a \frac{|a| - \gamma}{|a|} & \text{otherwise.} \end{cases}$$
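For concreteness, the entry-wise operator in Eq. (3) can be sketched in a few lines of NumPy (the function name is ours): entries with magnitude at most $\gamma$ are zeroed, while larger entries keep their phase and have their magnitude shrunk by $\gamma$.

```python
import numpy as np

# Minimal sketch of the complex-valued soft-thresholding operator SFT_gamma
# in Eq. (3), applied entry-wise. This is the proximal operator of the
# regularizer gamma * ||.||_1 over complex matrices.
def soft_threshold(A, gamma):
    mag = np.abs(A)
    # Shrink magnitudes by gamma where they exceed it; zero out the rest.
    scale = np.where(mag > gamma, (mag - gamma) / np.maximum(mag, 1e-12), 0.0)
    return A * scale

A = np.array([0.5 + 0.0j, 2.0j, -3.0 + 4.0j])
print(soft_threshold(A, 1.0))  # [0.+0.j, 0.+1.j, -2.4+3.2j]
```

Note that the phase of each surviving entry is preserved; only the magnitude is shrunk, which is what makes the operator well defined for complex-valued $X$.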

Substituting this solution for $X$ back into (1) results in a related optimization problem

$$\min_{W,\tilde{\mu}} \ell_\gamma\left(\left(H\odot W\right)\ast T^*-\tilde{\mu}\textbf{1}\right) \quad \text{s.t.}\quad \lvert W \lvert = \textbf{1},$$
where $\tilde {\mu }\in \mathbb {C}$, defined by $\tilde {\mu }\textbf {1}\equiv \left (\mu \textbf {1}\right )\ast T^*$, represents the constant background signal at the specimen plane, and $\ell _\gamma \left (\cdot \right )$ denotes the Huber loss defined as,
$$\ell_\gamma\left(Q \right) = \sum_{i,j} \begin{cases} \dfrac{1}{2}\lvert Q_{i,j} \lvert^2 & \text{if } \lvert Q_{i,j} \lvert \leq\gamma\\ \gamma\left(\lvert Q_{i,j} \lvert -\dfrac{1}{2}\gamma\right) & \text{otherwise.}\\ \end{cases}$$
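As a quick numerical sketch (names ours), the Huber loss of Eq. (5) over the entries of a complex-valued matrix can be written as:

```python
import numpy as np

# Huber loss of Eq. (5), summed over the entries of a complex matrix:
# quadratic where |Q_ij| <= gamma, linear beyond that.
def huber_loss(Q, gamma):
    mag = np.abs(Q)
    return np.sum(np.where(mag <= gamma,
                           0.5 * mag**2,
                           gamma * (mag - 0.5 * gamma)))

Q = np.array([[0.5, 3.0 + 4.0j]])
print(huber_loss(Q, 1.0))  # 0.5*0.5**2 + 1.0*(5.0 - 0.5) = 4.625
```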

Now, incorporating $T$ as an optimization variable in (4), we arrive at the ASR method, which performs phase retrieval, PSF estimation, and holographic reconstruction by solving the following optimization problem

$$\min_{W,\tilde{B},T} \ell_\gamma\left(\left(H\odot W\right)\ast T^*-\tilde{B}\right) \quad \text{s.t.}\quad \lvert W \lvert = \textbf{1}, \quad \lVert\mathcal{F}\left\{\tilde{B}\right\}\lVert_0\leq \beta, \quad \lvert\mathcal{F}\left\{T\right\}\lvert=\mathbf{1},$$
where we have additionally replaced the model of a constant background, $\tilde {\mu }\in \mathbb {C}$, with a more general background signal, $\tilde {B}\in \mathbb {C}^{m\times n}$, to allow for spatial variation of the background. Note that if unconstrained, $\tilde {B}$ can drive the loss function to $0$ regardless of other optimization variables. Thus, we only consider cases in which the background has limited frequency content by constraining the number of non-zero coefficients in Fourier domain, $\lVert \mathcal {F}\left \{\tilde {B}\right \} \lVert _0\leq \beta$. Also note that now the PSF is incorporated as an optimization variable with appropriate constraints as described above.
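The effect of the unit-modulus constraint $\lvert\mathcal{F}\{T\}\lvert=\mathbf{1}$ can be verified numerically: a PSF parameterized by a Fourier-domain phase defines a unitary circular convolution, so back-propagating with the conjugate transfer function exactly recovers the input. A small sketch, with all variable names ours:

```python
import numpy as np

# Numerical check: a PSF whose Fourier coefficients lie on the unit circle
# defines a unitary circular convolution, so back-propagation with the
# conjugate transfer function inverts the forward propagation exactly.
rng = np.random.default_rng(0)
m, n = 64, 64

Q = rng.uniform(-np.pi, np.pi, size=(m, n))  # Fourier-domain phase of the PSF
T_hat = np.exp(1j * Q)                       # F{T}, unit modulus by construction

def circ_conv(A, kernel_hat):
    # Circular convolution with the kernel whose 2D FFT is kernel_hat.
    return np.fft.ifft2(np.fft.fft2(A) * kernel_hat)

Y = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
forward = circ_conv(Y, T_hat)                    # propagate: T * Y
recovered = circ_conv(forward, np.conj(T_hat))   # back-propagate with T*

print(np.allclose(recovered, Y))  # True
```

Since $\lvert\hat{T}\rvert^2 = \mathbf{1}$, the product of the forward and conjugate transfer functions is the identity in the Fourier domain, which is exactly the energy-preserving property exploited by ASR.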

Unlike the SPR loss function, the ASR loss function in (6) is composed of just one term, which directly promotes the sparsity of the reconstructed image at the object plane by minimizing its Huber loss. Note also that the Huber loss, which emerges naturally from the SPR formulation, corresponds to the Moreau envelope of the $L_1$ norm, which results in desirable properties such as being once differentiable, continuous, and quadratic near the origin [27], and allows for gradient-based optimization methods to be employed for optimizing the model as we detail below.

To optimize our model in (6), we employ a series of alternating updates between $W$, $\tilde {B}$, and $T$. First, note that due to the constraints on $T$ in (6) we only need to search over the phase coefficients of the Fourier transform of $T$. As a result, we equivalently reparameterize $T$ as $T_Q = \mathcal {F}^{-1} \left \{\exp \left (iQ\right )\right \}$, where $Q\in \mathbb {R}^{m\times n}$ represents the phase of $T$ in the Fourier domain, which inherently satisfies the constraint on $T$. Following this reparameterization, similar to the SPR algorithm, we update $W$ and $\tilde {B}$ via closed-form alternating minimization solutions for the $W$ and $\tilde {B}$ subproblems, and update $T$ via a gradient descent step on $Q$ (which is unconstrained and inherently satisfies the constraints on $T$). Thus, the $k$-th update of the optimization variables is given by

$$\begin{aligned}Q^{(k)} &= Q^{(k-1)} - \epsilon \nabla_Q \ell_\gamma\left((H\odot W^{(k-1)})\ast T_{Q^{(k-1)}}^* - \tilde{B}^{(k-1)}\right) \\ W^{(k)} &= \exp\left(i\sphericalangle\left( \tilde{B}^{(k-1)}\ast T_{Q^{(k)}}\right)\right) \\ \tilde{B}^{(k)} &= \mathcal{F}^{{-}1}\left\{\mathcal{K}_{\beta}\left\{\mathcal{F}\left\{\left(H\odot W^{(k)}\right) \ast T^*_{Q^{(k)}}\right\}\right\}\right\}, \end{aligned}$$
where $\nabla _Q\ell _\gamma \left (\cdot \right )$ denotes the gradient of the function $\ell _\gamma \left (\cdot \right )$ with respect to the variable $Q$, $\epsilon$ corresponds to the learning rate of the gradient descent algorithm, and $\mathcal {K}_{\beta }\left \{\cdot \right \}$ denotes the thresholding operator that zeroes out all but the top-$\beta$ entries in terms of magnitude.
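Under this reparameterization, the closed-form $W$ and $\tilde{B}$ updates in (7), together with the top-$\beta$ operator $\mathcal{K}_{\beta}$, can be sketched as follows (all names ours; the gradient step on $Q$ is omitted, as in practice it would be taken with an automatic-differentiation framework):

```python
import numpy as np

def keep_top_beta(coeffs, beta):
    # K_beta: zero out all but the beta largest-magnitude Fourier coefficients.
    thresh = np.sort(np.abs(coeffs).ravel())[-beta]
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

def asr_closed_form_updates(H, Q, B_tilde, beta):
    T_hat = np.exp(1j * Q)  # F{T_Q}: unit modulus by construction
    # W update: phase of the background propagated forward to the sensor plane,
    # i.e. W = exp(i * angle(B_tilde * T_Q)).
    W = np.exp(1j * np.angle(np.fft.ifft2(np.fft.fft2(B_tilde) * T_hat)))
    # B-tilde update: band-limited part of the back-propagated hologram,
    # (H o W) * T_Q^*, keeping only the top-beta Fourier coefficients.
    back = np.fft.ifft2(np.fft.fft2(H * W) * np.conj(T_hat))
    B_tilde_new = np.fft.ifft2(keep_top_beta(np.fft.fft2(back), beta))
    return W, B_tilde_new
```

By construction, the returned $W$ satisfies $\lvert W\rvert = \mathbf{1}$ and the returned $\tilde{B}$ has at most $\beta$ non-zero Fourier coefficients, so both constraints in (6) are maintained at every iteration.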

The optimization problem in (6) performs phase retrieval, reconstruction, and PSF estimation given a single hologram. However, it is natural to consider that one can better learn the PSF of the system by leveraging information from multiple samples, provided these samples are acquired using the same optical system under the same conditions (e.g., a video of cells flowing through a micro-fluidic chamber). We note that our model naturally extends to such situations. Specifically, given $N$ holograms, $\left \{H_n\right \}_{n=1}^N$, we can solve for a unique $W$ and $\tilde {B}$ for each image, and a common PSF, $T$, shared across all of the holograms, resulting in the following model:

$$\min_{\{W_n,\tilde{B}_n\}_{n=1}^{N},T} \dfrac{1}{N}\sum_{n=1}^N\ell_\gamma\left(\left(H_n\odot W_n \right)\ast T^*-\tilde{B}_n\right) \hspace{0.1cm} \text{s.t.}\hspace{0.1cm} \lvert W_n \lvert = \textbf{1 }\forall n, \hspace{0.1cm} \lVert\mathcal{F}\left\{\tilde{B}_n\right\}\lVert_0\leq \beta\text{ }\forall n, \hspace{0.1cm} \lvert\mathcal{F}\left\{T\right\}\lvert=\mathbf{1}.$$
Similar to the single hologram case, this optimization problem is solved by alternating updates. The $W_n$ and $\tilde {B}_n$ subproblems are separable for each sample, thus the single hologram case is recovered and the closed-form solutions are those shown in (7). In the case of the $T$ subproblem, a stochastic gradient descent step of the loss function (8) is taken in the variable $Q$.

3. Results

To validate and evaluate our proposed algorithm we perform experiments with data from synthetic models as well as real urine and blood specimens. In order to illustrate its contribution in the context of holographic reconstruction in general, the PSF learned by ASR is used to perform inference with a traditional reconstruction method. Namely, reconstructed images obtained by SPR using a physics-based approximation for the PSF are compared to those obtained by the same method using the PSF learned via ASR instead (analogous experiments using back-propagation (BP) rather than SPR can be found in Section 5 of Supplement 1). In this way we can isolate and directly evaluate the effect of adapting the PSF, which is the main contribution of ASR. For this purpose, we perform two minor modifications to the SPR method presented in [19], solving the following problem instead

$$\min_{X,W,B} \dfrac{1}{2}\lVert H\odot W -B-T\ast X\lVert_F^2 + \gamma \lVert X \lVert_0\quad \text{s.t.}\quad \lvert W \lvert = \textbf{1}, \quad \lVert\mathcal{F}\left\{B\right\}\lVert_0\leq \beta,$$
where we use the $L_0$ pseudo-norm as the regularizer instead of its convex relaxation, $L_1$, and, similar to (6), we allow for a spatially variable background of limited frequency content, replacing $\mu \in \mathbb {C}$ by $B\in \mathbb {C}^{m\times n}$. Like (1), the optimization problem in (9) can also be solved by alternating minimization between $X$, $W$, and $B$, where each subproblem has a known closed-form solution. SPR can be implemented using either the $L_1$ norm or the $L_0$ pseudo-norm as the regularizer; $L_0$ is preferred here as it shows slightly better performance than $L_1$ in our experiments (see Tables S1 and S2 in Supplement 1 for details).
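Under the $L_0$ pseudo-norm, the closed-form $X$ update changes from soft to hard thresholding: entries of the back-propagated residual with magnitude at most $\sqrt{2\gamma}$ are zeroed and the rest are kept unchanged. A minimal sketch (names ours):

```python
import numpy as np

# Entry-wise hard thresholding: the proximal operator of gamma * ||.||_0.
# Entries with |r| <= sqrt(2*gamma) are zeroed; the rest are kept as-is
# (no shrinkage, unlike the soft thresholding used for the L1 relaxation).
def hard_threshold(R, gamma):
    return np.where(np.abs(R) > np.sqrt(2.0 * gamma), R, 0.0)

R = np.array([0.5 + 0.0j, 2.0j, -3.0 + 4.0j])
print(hard_threshold(R, 1.0))  # [0.+0.j, 0.+2.j, -3.+4.j]
```

The absence of shrinkage is one plausible reason for the slightly better intensities observed with the $L_0$ variant: surviving pixels retain their full back-propagated values rather than being biased toward zero.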

We also compare the proposed method to a commonly used approach: direct back-propagation, adapting the PSF with autofocus (BP+AF). Detailed description and results for BP-based approaches can be found in Supplement 1.

3.1 Experiments with synthetic data

To quantify the performance benefits of our proposed method we construct a synthetic dataset using a computational model of urine containing red blood cells (RBCs), white blood cells (WBCs), and bacteria. Details about the specimen model can be found in Supplement 1. With this generative model we can randomly draw hundreds of samples for the sparse matrix $X\in \mathbb {C}^{m\times n}$. Figure 1(a) shows one of these samples, which illustrates the differences between the simulated objects.


Fig. 1. Simulated data. (a) Example of a specimen containing RBCs, WBCs and bacteria, $15\times 15$ pixels crops illustrate shape and intensity differences between them. (b) Synthetic holograms of the same specimen imaged through different imaging models (bottom row), and the corresponding parameters of the models (top row).


A dataset of $250$ samples was generated by randomly sampling from the specimen model, and three imaging models are considered to simulate different configurations of the imaging system. As a baseline, we start with a planar illumination model given by

$$H = \left\lvert \left(X+\textbf{1}\right)\ast T_{WAS}\left(z,p,\lambda\right)\right\lvert,$$
where $H$ represents the square root of the hologram and $T_{WAS}\left (\cdot \right )$ denotes the wide angular spectrum approximation for the PSF [16], which explicitly depends on the focal depth, $z$, the pixel size of the sensor, $p$, and the wavelength of the light source, $\lambda$. This model corresponds to a constant illumination wavefront at the specimen plane, and it is considered a baseline because it coincides with the assumptions of the model used by SPR, ASR, and BP for reconstruction.
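Equation (10) can be simulated with the standard angular spectrum transfer function standing in for $T_{WAS}$ (the exact WAS expression is given in [16]); all function names and parameter values below are illustrative:

```python
import numpy as np

def angular_spectrum_tf(shape, z, p, lam):
    # Transfer function of free-space propagation over distance z, for a
    # sensor with pixel size p and wavelength lam (angular spectrum method).
    m, n = shape
    fy = np.fft.fftfreq(m, d=p)
    fx = np.fft.fftfreq(n, d=p)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / lam**2 - FX**2 - FY**2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.where(arg >= 0.0, np.exp(1j * z * kz), 0.0)  # drop evanescent waves

def simulate_hologram(X, z, p, lam):
    # Planar illumination model of Eq. (10): H = |(X + 1) * T|.
    T_hat = angular_spectrum_tf(X.shape, z, p, lam)
    field = np.fft.ifft2(np.fft.fft2(X + 1.0) * T_hat)
    return np.abs(field)

X = np.zeros((128, 128), dtype=complex)
X[64, 64] = -0.5  # a single weakly absorbing "object"
H = simulate_hologram(X, z=1e-3, p=1.67e-6, lam=340e-9)
```

With no object ($X=\mathbf{0}$), the model reduces to propagating a constant wavefront, so the simulated amplitude is uniform; adding a sparse $X$ produces the characteristic ring-shaped diffraction pattern around each object.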

We then consider a spherical illumination model given by

$$H = \left\lvert \left(S\odot \left(X+\textbf{1}\right)\right)\ast T_{WAS}\left(z,p,\lambda\right)\right\lvert,$$
where the illumination wavefront at the specimen plane, $S\in \mathbb {C}^{m\times n}$, corresponds to a radially increasing phase, as shown in Fig. 1(b). This model generates a magnification effect on the imaged objects, which is relevant in cases where the light source and the specimen are close [1]. Finally, we consider a phase interference model given by
$$H = \left\lvert \left(\left(\left(X+\textbf{1}\right)\ast {T}_{WAS}\left(\dfrac{z}{2},p,\lambda\right)\right)\odot\mathcal{P}\right)\ast {T}_{WAS}\left(\dfrac{z}{2},p,\lambda\right)\right\lvert,$$
in which $\mathcal {P}\in \mathbb {C}^{m\times n}$ denotes a phase object such that $\lvert \mathcal {P}\lvert = \mathbf {1}$ and the phase follows a polynomial model of second order. The random phase interference model assumes planar illumination, with a phase aberration located halfway between the specimen and sensor planes, hence the WAS approximation for the PSF is evaluated at $z/2$. Two examples of random phase interference where the coefficients of the second order polynomial were randomly drawn are presented in Fig. 1(b), where it can be seen that, unlike the spherical illumination case, the phase aberration is not centered or symmetric. The presence of such phase aberrations in the optical path generates spatially variable distortions, which can be challenging for holographic reconstruction algorithms.

The set of holograms obtained by the planar illumination model is used to study the robustness of the reconstruction algorithms to deviations from the nominal parameters of the optical system. In particular, three main cases are studied: initialization of the PSF by using the nominal parameters of the system, $T = T_{WAS}\left (z,p,\lambda \right )$; initialization of the PSF by using a pixel size $10\%$ larger than the one used in the simulation, $T = T_{WAS}\left (z,1.1 p,\lambda \right )$; and initialization of the PSF as if the parameters of the system were unknown, $\mathcal {F}\left \{T\right \}=\mathbf {1}$.

The reconstructed images are compared to the ground truth images by means of three well established full-reference image metrics [28]: the structural similarity index (SSIM), the peak signal-to-noise ratio (PSNR), and the root mean squared error (RMSE). Details about implementation and evaluation procedure can be found in Supplement 1. Figure 2(a) compares reconstructed images obtained by SPR using the initial PSF versus the PSF learned by ASR, along with reconstructions obtained by BP+AF. Table 1 presents statistics of the quality metrics evaluated in the entire dataset ($N=250$). For simplicity, in the figure and hereafter, the case that uses the PSF learned by ASR is referred to as “ASR”, while the baseline case that uses a physics-based approximation of the PSF is referred to as “SPR”.


Fig. 2. Results in simulated data. Comparison of SPR, ASR, and BP+AF reconstructions (a) in the planar illumination model for different PSF initializations, and (b) for the spherical and phase interference models initialized using nominal parameters for the PSF. In both cases, the first column shows the ground truth diffraction pattern, while the rows show SPR, ASR, and BP+AF reconstructions. Two enlarged regions are presented in each case for detailed comparison. Also, two color scales are used to enable visibility in cases of low-intensity reconstructions.



Table 1. Results in planar illumination model for different initializations. The mean (standard deviation) of each metric evaluated over $N=250$ samples are presented. In all cases SPR, ASR, and BP+AF metrics are statistically different with p-value $<0.005$.

As expected based on the results presented in [19], the SPR algorithm can successfully reconstruct the ground truth objects of the simulated specimen when it utilizes the exact PSF that generated the data (Nominal parameters case). However, the performance of SPR significantly decreases for even relatively small deviations from the nominal parameters (Larger pixel size case), in which the shape and intensity of the reconstructed objects do not resemble those of the ground truth objects. The same observation holds for the most extreme case in which the parameters of the system are assumed unknown. While the reconstructed images generated by ASR and SPR are almost identical when initialized using the nominal parameters, an evident improvement provided by ASR is observed for the cases in which the PSF used at initialization does not match the one used to generate the data. By learning to adapt the PSF, the proposed method significantly improves the quality of the reconstructed images and retrieves objects whose shapes and intensities closely resemble those of the ground truth objects, even when the parameters of the optical system are unknown. As shown in Table 1, these findings are supported by the similar performance of SPR and ASR in the Nominal parameters case, and a significant improvement of ASR over SPR in the Larger pixel size and Unknown parameters cases in all three image metrics, demonstrating the robustness of ASR to changes in, and even complete ignorance of, the parameters of the optical system. In fact, the performance achieved by ASR in terms of PSNR and RMSE is very similar regardless of the initialization.

Figure 2(a) and Table 1 show that the BP+AF method is also robust to deviations from the nominal parameters of the system; note, however, that for the Unknown parameters case it still requires an initial guess for the pixel size and wavelength. ASR outperforms BP+AF in all cases according to the SSIM and PSNR metrics, and although BP- and SPR-based reconstructions look similar, the difference in performance is largely due to the twin image artifact observed in BP, even when the nominal parameters are accurately known (see Fig. S3 in Supplement 1 with an adjusted colorscale). Note, however, that the BP+AF method outperforms SPR and ASR in terms of RMSE, since this metric only considers differences between pixel values, as opposed to the structure of the underlying image, and the intensity of the artifacts in these cases is small.

Data generated by the spherical illumination and random phase interference models are used to evaluate the robustness of the reconstruction algorithms to deviations from the assumed model. To this end, in all cases the PSF is initialized at the nominal parameters, i.e. using the PSF that generated the data. Figure 2(b) compares the reconstructed images obtained by SPR using nominal parameters for the PSF versus using the PSF learned by ASR, along with the BP+AF reconstructions. Table 2 shows the corresponding statistics of the quality metrics at the dataset level. Results show a significant decrease in the performance of SPR when the imaging model of the data does not match the planar illumination model, with the SSIM dropping from $0.73$ to less than $0.5$, the PSNR from $34.7$ dB to $31.4$ dB, and the RMSE increasing from $0.037$ to more than $0.054$. The sensitivity of SPR to the imaging model is also qualitatively observed, since the reconstructed objects lack definition and do not resemble the ground truth objects. Unlike SPR, in all three models ASR reconstructs objects whose intensities and shapes are similar to those of the ground truth objects. In fact, ASR outperforms SPR in all three evaluation metrics. Images reconstructed with BP+AF qualitatively show improvements with respect to the baseline case (SPR with nominal parameters); however, the performance of ASR is still significantly better (except for the RMSE in Phase interference #1, where the BP+AF error is slightly smaller, which can be attributed to the small intensity of the reconstructions and the sparsity of the ground truth).


Table 2. Results in spherical illumination model and phase interference model. The mean (standard deviation) of each metric evaluated over $N=250$ samples are presented. In all but one case (marked with *), SPR, ASR, and BP+AF metrics are statistically different with p-value $<0.005$.

In summary, experiments on simulated data show the advantages of the proposed method: it is robust not only to changes in, or even complete ignorance of, the parameters of the optical system, but also to unknown modifications of the imaging model. BP+AF shows competitive results for synthetic data, especially in the case of deviations from the nominal parameters of the system.

3.2 Experiments with real data

Next, to validate our algorithm on real data, we present qualitative results in a variety of experimental settings. Specifically, urine specimens imaged as static samples, as well as blood specimens imaged while flowing through a microfluidic device, are used to investigate the performance of the reconstruction algorithms in practical settings. Standard in-line lensless digital holography is utilized; a detailed description can be found in [1]. In this section we first describe two urine datasets and present the corresponding results, and then do likewise for two blood datasets. Despite using the same imaging principle, the datasets purposely differ in optical parameters and number of samples, allowing us to explore the performance of the reconstruction algorithms under different conditions.

3.2.1 Urine datasets

Urine samples were prepared using Liquichek level 2 (Bio-Rad Laboratories, CA, USA), a liquid, human-based urinalysis control that includes RBCs, WBCs, crystals, and casts. Approximately $15 \mu l$ of Liquichek were placed on a microscope slide with a $150\mu m$ glass cover slip on top, and left to dry for $24$ hours before being imaged. Details of the imaging system can be found in [12].

One small dataset, Urine Dataset #1, was generated by imaging two samples with a CMOS sensor (pixel size $1.67\mu m$, area $3664\times 2748$ pixels), illuminated by a $340 nm$ UV LED, through a $200 \mu m$ pinhole, at an estimated focal depth of $1630\mu m$. The light source was located $6 cm$ away from the samples, and the samples were placed directly on top of the sensor. This dataset is used to investigate the robustness of reconstruction algorithms to inaccurate knowledge of the optical parameters of the system. In particular, two cases are presented: initialization of the PSF by using a larger sample-to-sensor distance, $T=T_{WAS}(z+230\mu m,p,\lambda )$; and initialization of the PSF as if the parameters of the system were unknown, $\mathcal {F}\left \{T\right \}=\mathbf {1}$. Figure 3(b) shows images reconstructed by SPR using the initial PSF and the one learned by ASR for both cases studied, while Fig. 3(a) shows the SPR and BP+AF reconstructions initialized at the nominal parameters to serve as reference.
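The nominal PSF initialization $T=T_{WAS}(z,p,\lambda )$ can be illustrated by building a free-space transfer function at this dataset's nominal parameters ($\lambda=340$ nm, $p=1.67\mu m$, $z=1630\mu m$). The sketch below uses the standard angular-spectrum kernel, which may differ in detail from the paper's $T_{WAS}$; it is evaluated directly in the Fourier domain, where the unit-modulus constraint $|\mathcal{F}\{T\}|=1$ is visible.

```python
# Sketch of a free-space angular-spectrum transfer function F{T} on an n x n
# grid. Assumption: the paper's T_WAS is of this general form; details may vary.
import numpy as np

def angular_spectrum_tf(n, p, z, lam):
    """Fourier-domain propagation kernel for pixel size p [m], distance z [m]."""
    f = np.fft.fftfreq(n, d=p)                 # spatial frequencies [cycles/m]
    fx, fy = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 / lam**2 - fx**2 - fy**2
    kz = np.sqrt(np.maximum(arg, 0.0))         # axial frequency, propagating part
    return np.exp(2j * np.pi * z * kz) * (arg > 0)  # evanescent waves zeroed

# Nominal parameters of Urine Dataset #1.
T = angular_spectrum_tf(n=256, p=1.67e-6, z=1630e-6, lam=340e-9)
# At this pixel size every sampled frequency propagates, so |F{T}| = 1
# everywhere, consistent with the unit-modulus constraint on the learned PSF.
```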

Fig. 3. Results in Urine dataset #1 for different PSF initializations. (a) SPR and BP+AF reconstructions with nominal parameters as a reference, and (b) SPR and ASR reconstructions at the top and bottom, respectively, for different initializations. Two enlarged regions are presented in each case for detailed comparison.

Results show a decrease in the quality of the reconstructed images when the nominal parameters of the system are not accurately known. In particular, while SPR reconstructs most objects in the Larger sample-sensor distance case, their contours are blurry and they do not always resemble the shape of the objects reconstructed by SPR in the nominal case. Moreover, in the Unknown parameters case, most objects are not even visible. However, after the PSF has been adapted by ASR, the definition of the reconstructed objects is significantly improved in both cases, closely resembling those obtained by SPR with nominal parameters. Thus, even in the small-data regime, ASR learns a PSF that provides significant advantages in reconstruction quality when the optical parameters of the system are not accurately known. In contrast, the BP+AF algorithm is not able to properly estimate the focal depth, resulting in low-quality reconstructions even in the nominal parameters case (see autofocusing details in Section 5 of Supplement 1).

Aiming to study the performance of reconstruction algorithms in situations that deviate from the assumptions of the model but are still of practical interest, a dataset of $19$ samples, Urine Dataset #2, was generated in which the light source was placed significantly closer to the specimen ($1.5cm$). The modified distance is outside the recommended range for in-line lensless systems ($>2$–$3cm$) [1], which negatively affects spatial resolution. However, as moving the light source closer to the specimen has potential benefits in reducing the overall size of the imaging device, we investigate the performance of reconstruction algorithms in this regime. The sensor used in this case has a pixel size of $1.85\mu m$ and an area of $4000\times 3000$ pixels. The samples are illuminated with a $340 nm$ UV LED, through a $200 \mu m$ pinhole, at an estimated focal depth of $2550\mu m$. One of the samples in the dataset was also imaged with the light source at $6cm$ from the sample (within the recommended range), to serve as a reference when evaluating the reconstructions. ASR was trained using all samples in the dataset ($N=19$), and the PSF was initialized at the nominal parameters. Figure 4(a) compares images reconstructed by SPR using the PSF given by the nominal parameters and using the PSF learned by ASR, along with the BP+AF reconstruction. As a reference, Fig. 4(b) shows the SPR reconstruction of the same sample when imaged in a standard configuration.

Fig. 4. Results in Urine dataset #2. (a) Comparison of SPR, ASR, and BP+AF reconstructions in holograms acquired with a light source-to-specimen distance shorter than recommended (1.5cm). As a reference, (b) shows the SPR reconstruction obtained when the same specimen is illuminated within the recommended distance (6cm). Two enlarged regions are presented in each case for detailed comparison.

Placing the light source closer to the samples has a magnifying effect, but SPR is still able to reconstruct most of the objects. The sharpness and intensity of the reconstructed objects, however, decrease significantly, making it hard to distinguish smaller objects (see white arrow in Fig. 4). In this situation, the PSF learned by ASR not only increases the intensity of most objects, but also significantly improves their definition, which can be critical in the case of smaller particles. In contrast, the BP+AF method generates low-intensity reconstructions in which not even the shape of large objects is well resolved. The gap in BP+AF performance between synthetic and real data suggests that properly adapting the focal depth for BP reconstruction is significantly more challenging in real datasets.

3.2.2 Blood datasets

Anti-coagulated human blood samples were imaged in suspension while flowing through a $50\mu m$-tall microfluidic channel designed to create a thin layer of cells [11]. The samples were illuminated by a light source with $\lambda =637nm$ and measured by a sensor with a pixel size of $1.12\mu m$ and an area of $4096\times 3072$ pixels. Imaging samples while flowing through a microfluidic device is of great relevance in the development of point-of-care devices; in fact, methods to detect and count blood cells in such samples have been proposed in [11,29].

A set of $400$ holograms, Blood dataset #1, was collected while blood was flowing through the microfluidic imaging device. To illustrate the generalization capabilities of ASR, $20$ holograms were uniformly selected as the training set, and the reconstructed images were compared to those obtained from holograms excluded from training. Figure 5(a) shows images reconstructed with SPR from a hologram within the training set using the PSF given by the nominal parameters (estimated focal depth $800\mu m$), along with the results obtained with the PSF learned by ASR. In addition, it depicts the corresponding reconstruction obtained by the BP+AF method. Figure 5(b) shows those obtained from a hologram excluded from the training set.

Fig. 5. Results in Blood dataset #1. Comparison of SPR, ASR, and BP+AF reconstructions for (a) one image within the training set, and (b) one image excluded from the training set. BP-based reconstructions have reduced contrast, so two color scales are used to allow visibility of low-intensity reconstructions.

In this case, SPR is able to reconstruct objects with reasonable definition and intensity using the nominal parameters of the optical system. However, the PSF learned by ASR in some cases allows the reconstruction of objects with a greater level of detail. For example, the distinctive biconcave-disk shape of RBCs is visible in regions that were initially reconstructed mostly as blobs (see white arrow in the second enlarged region of Figs. 5(a) and 5(b)). These observations hold for both the hologram within the training set and the one excluded from it, attesting to the generalization ability of ASR. In fact, the similar behavior observed in seen and unseen samples suggests that the PSF learned by ASR effectively captures general features of the specimen and imaging system, as opposed to specific features of the images used for training. Figures 5(a) and 5(b) show that the BP+AF method is able to reconstruct objects resembling those reconstructed by SPR; however, their intensity and definition are significantly lower.

Some objects are visible in the SPR reconstruction but not when the PSF learned by ASR is used (see right edge of the second enlarged region in Fig. 5(b)). However, in the absence of ground truth data it is not clear whether a relevant object is being missed or debris is being removed.

To evaluate the performance of the algorithms in the reconstruction of small objects, blood samples containing immunolabeled platelets were jointly imaged through a lensless imaging device and a fluorescent microscope, as described in [11]. A subset of $40$ holograms paired with fluorescent images, Blood dataset #2, was used to study the quality of the reconstructed images. Figure 6(b) compares reconstructions obtained by SPR using a physics-based approximation of the PSF to those obtained by SPR using the PSF learned by ASR, along with the BP+AF reconstruction. As a reference, the corresponding fluorescent image is shown in Fig. 6(a) to indicate the presence of immunolabeled platelets.

Fig. 6. Results in Blood dataset #2. (a) Fluorescent image indicating the presence of platelets, and (b) the corresponding SPR, ASR, and BP+AF reconstructions. (c) shows the stability of the results in an enlarged crop for different values of $\gamma$.

On the one hand, similar to the results in Blood dataset #1, some of the objects reconstructed by ASR show a greater level of detail; for example, biconcave disks are observed, indicating the presence of RBCs. On the other hand, while SPR initially fails to retrieve some platelets, or does so with reduced intensity, the PSF learned by ASR increases the intensity of the reconstructed platelets, improving their chances of being detected (see white arrows pointing to platelets in Fig. 6(b)). Figure 6(c) shows that these advantages are not specific to a particular choice of the sparsity parameter $\gamma$, but are stable over a wide range. Moreover, the reconstructions themselves are more stable with the PSF learned by ASR than with the nominal PSF.

The BP+AF method is able to reconstruct larger objects in this dataset, albeit with an order of magnitude lower contrast than SPR. This becomes critical when reconstructing smaller objects, as BP+AF misses the platelets present in the two enlarged regions (see white arrows), which can have profound implications in biomedical applications [11].

4. Discussion and conclusion

ASR is an unsupervised method that improves the reconstruction quality of sparse samples imaged through lensless devices. Unlike recent black-box approaches, ASR builds upon traditional reconstruction methods by leveraging the image formation model and a sparsity prior, which allows for unsupervised learning with a limited number of samples. Moreover, ASR introduces a rich yet properly constrained PSF, which is adapted in a data-driven manner. The learnable PSF represents the forward model of a real-world optical system, which can deviate from the nominal PSF given by physics-based approximations, a distinction usually neglected by traditional reconstruction methods.

Experiments on synthetic data show that traditional reconstruction methods are extremely sensitive to precise knowledge of the optical parameters of the system: their performance degrades significantly even under small perturbations of the nominal parameters. The proposed method, conversely, is able to properly adapt the PSF of the system to reconstruct objects that resemble those that generated the synthetic data. The proposed method was also experimentally tested under a variety of conditions (urine and blood specimens, static and flowing samples, different pixel sizes, different dataset sizes, etc.), in all of which it showed favorable aspects with respect to the baseline approach. Although autofocusing is robust in simpler cases, such as deviations from nominal parameters in synthetic data, it fails to perform competitively on real datasets. Overall, the results suggest that ASR learns a PSF that effectively captures relevant features of the optical system, as opposed to specific features of the images in the training set, leading to reconstructed objects with improved definition and contrast. The proposed method is particularly advantageous when the model does not match the optical system, or when the nominal parameters of the system are not accurately known (or not known at all); however, it can provide advantages even in nominal conditions (for example, better distinction of the RBC shape or increased intensity of small platelets in the blood datasets).

Moreover, the contribution of ASR goes beyond performing reconstruction on a given set of images, as it learns a PSF that can be used to reconstruct new images with any traditional holographic reconstruction method without computational overhead (after training). To the best of the authors' knowledge, this is the first unsupervised method proposed to learn a PSF in the context of lensless imaging devices.

Unsupervised learning to jointly perform reconstruction, phase retrieval, and PSF estimation is an ambitious problem, hence the proposed method has been specifically designed for thin, sparse samples. Sparsity conveniently provides a strong prior to guide learning, while also being a relevant assumption in multiple areas including biomedical sciences. However, reconstructing dense and thick samples is also of practical interest, and thus the sparse and thin sample regime is an intrinsic limitation of the proposed method. That being said, although the results presented here are specific to samples that are sparse in the spatial domain, ASR could be extended to cases in which the samples are sparse in a different domain, as long as the applied transformation is invertible and differentiable. In the future we would also like to address more complex imaging settings by extending this approach beyond the single PSF case.
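The transformed-domain extension mentioned above can be illustrated with a toy example. The 2-D FFT below is only a stand-in for whichever invertible, differentiable transform a given application calls for; the paper does not prescribe one.

```python
# Toy example of a sample that is sparse in a transform domain rather than
# the spatial domain. The 2-D FFT here is an assumed, illustrative choice of
# invertible, differentiable transform.
import numpy as np

n = 64
coeffs = np.zeros((n, n), dtype=complex)
coeffs[3, 5] = 1.0                      # a single active transform coefficient

x = np.fft.ifft2(coeffs)                # dense in space, sparse in frequency

# A sparsity prior would be applied to the transform coefficients of x,
# not to x itself:
spatial_nnz = int(np.count_nonzero(np.abs(x) > 1e-12))
fourier_nnz = int(np.count_nonzero(np.abs(np.fft.fft2(x)) > 1e-9))
print(spatial_nnz, fourier_nnz)
```

Since the transform is invertible, reconstruction can proceed in the coefficient domain and map back to the spatial domain afterwards; differentiability keeps the gradient-based updates applicable.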

Funding

National Institute on Aging (1R01AG067396).

Acknowledgments

The authors thank Dr. Stuart Ray for useful discussions regarding this work. Some of the data for this project was provided by miDiagnostics.

Disclosures

CP, GNM, NJD, RV, and BDH: Johns Hopkins University (P).

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. Y. Wu and A. Ozcan, “Lensless digital holographic microscopy and its applications in biomedicine and environmental monitoring,” Methods 136, 4–16 (2018). [CrossRef]  

2. S. B. Kim, H. Bae, K.-i. Koo, M. R. Dokmeci, A. Ozcan, and A. Khademhosseini, “Lens-free imaging for biological applications,” J. Lab. Autom. 17(1), 43–49 (2012). [CrossRef]  

3. Z. Göröcs, M. Tamamitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Rivenson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018). [CrossRef]  

4. A. Berdeu, O. Flasseur, L. Méès, L. Denis, F. Momey, T. Olivier, N. Grosjean, and C. Fournier, “Reconstruction of in-line holograms: combining model-based and regularized inversion,” Opt. Express 27(10), 14951–14968 (2019). [CrossRef]  

5. Z. Göröcs and A. Ozcan, “On-chip biomedical imaging,” IEEE Rev. Biomed. Eng. 6, 29–46 (2013). [CrossRef]  

6. Z. Göröcs, D. Baum, F. Song, K. de Haan, H. C. Koydemir, Y. Qiu, Z. Cai, T. Skandakumar, S. Peterman, M. Tamamitsu, and A. Ozcan, “Label-free detection of giardia lamblia cysts using a deep learning-enabled portable imaging flow cytometer,” Lab Chip 20(23), 4404–4412 (2020). [CrossRef]  

7. A. Greenbaum and A. Ozcan, “Maskless imaging of dense samples using pixel super-resolution based multi-height lensfree on-chip microscopy,” Opt. Express 20(3), 3129–3143 (2012). [CrossRef]  

8. Y. Rivenson, Y. Zhang, H. Günaydın, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018). [CrossRef]  

9. F. Yellin, B. D. Haeffele, S. Roth, and R. Vidal, “Multi-cell detection and classification using a generative convolutional model,” in Proceedings of the IEEE conference on computer vision and pattern recognition, (2018), pp. 8953–8961.

10. F. Yellin, B. D. Haeffele, and R. Vidal, “Blood cell detection and counting in holographic lens-free imaging by convolutional sparse dictionary learning and coding,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), (IEEE, 2017), pp. 650–653.

11. B. D. Haeffele, C. Pick, Z. Lin, E. Mathieu, S. C. Ray, and R. Vidal, “Generative optical modeling of whole blood for detecting platelets in lens-free images,” Biomed. Opt. Express 11(4), 1808–1818 (2020). [CrossRef]  

12. G. N. McKay, A. Oommen, C. Pacheco, M. T. Chen, S. C. Ray, R. Vidal, B. D. Haeffele, and N. J. Durr, “Lens free holographic imaging for urinary tract infection screening,” arXiv:2203.09999 (2022).

13. T.-W. Su, A. Erlinger, D. Tseng, and A. Ozcan, “Compact and light-weight automated semen analysis platform using lensfree on-chip microscopy,” Anal. Chem. 82(19), 8307–8312 (2010). [CrossRef]  

14. A. Greenbaum, Y. Zhang, A. Feizi, P.-L. Chung, W. Luo, S. R. Kandukuri, and A. Ozcan, “Wide-field computational imaging of pathology slides using lens-free on-chip microscopy,” Sci. Transl. Med. 6(267), 267ra175 (2014). [CrossRef]  

15. Y. Rivenson, Y. Wu, H. Wang, Y. Zhang, A. Feizi, and A. Ozcan, “Sparsity-based multi-height phase recovery in holographic microscopy,” Sci. Rep. 6(1), 37862 (2016). [CrossRef]  

16. M. K. Kim, “Principles and techniques of digital holographic microscopy,” J. Photonics Energy 1(1), 018005 (2010). [CrossRef]  

17. J. Zhang, J. Sun, Q. Chen, J. Li, and C. Zuo, “Adaptive pixel-super-resolved lensfree in-line digital holography for wide-field on-chip microscopy,” Sci. Rep. 7(1), 1–15 (2017). [CrossRef]  

18. J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, “Sparsity-based pixel super resolution for lens-free digital in-line holography,” Sci. Rep. 6(1), 1–9 (2016). [CrossRef]  

19. B. D. Haeffele, R. Stahl, G. Vanmeerbeeck, and R. Vidal, “Efficient reconstruction of holographic lens-free images by sparse phase recovery,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, (Springer, 2017), pp. 109–117.

20. J. D. Rego, K. Kulkarni, and S. Jayasuriya, “Robust lensless image reconstruction via psf estimation,” in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, (2021), pp. 403–412.

21. F. Soulez, L. Denis, Y. Tourneur, and É. Thiébaut, “Blind deconvolution of 3d data in wide field fluorescence microscopy,” in 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), (IEEE, 2012), pp. 1735–1738.

22. B. Kim and T. Naemura, “Blind depth-variant deconvolution of 3d data in wide-field fluorescence microscopy,” Sci. Rep. 5(1), 9894 (2015). [CrossRef]  

23. S. Lim and J. C. Ye, “Blind deconvolution microscopy using cycle consistent cnn with explicit psf layer,” in International Workshop on Machine Learning for Medical Image Reconstruction, (Springer, 2019), pp. 173–180.

24. J. Page and P. Favaro, “Learning to model and calibrate optics via a differentiable wave optics simulator,” in 2020 IEEE International Conference on Image Processing (ICIP), (IEEE, 2020), pp. 2995–2999.

25. F. Dubois, C. Schockaert, N. Callens, and C. Yourassowsky, “Focus plane detection criteria in digital holography microscopy by amplitude analysis,” Opt. Express 14(13), 5895–5908 (2006). [CrossRef]  

26. A. Sinha, J. Lee, S. Li, and G. Barbastathis, “Lensless computational imaging through deep learning,” Optica 4(9), 1117–1125 (2017). [CrossRef]  

27. R. T. Rockafellar and R. J.-B. Wets, Variational Analysis (Springer-Verlag, 1997).

28. A. Horé and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in 2010 20th International Conference on Pattern Recognition, (2010), pp. 2366–2369.

29. F. Yellin, B. D. Haeffele, and R. Vidal, “Blood cell detection and counting in holographic lens-free imaging by convolutional sparse dictionary learning and coding,” in 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), (2017), pp. 650–653.

Supplementary Material (1)

Supplement 1: Additional information and details that might be relevant to the reader.

Figures (6)

Fig. 1. Simulated data. (a) Example of a specimen containing RBCs, WBCs, and bacteria; $15\times 15$ pixel crops illustrate shape and intensity differences between them. (b) Synthetic holograms of the same specimen imaged through different imaging models (bottom row), and the corresponding parameters of the models (top row).
Fig. 2. Results in simulated data. Comparison of SPR, ASR, and BP+AF reconstructions (a) in the planar illumination model for different PSF initializations, and (b) for the spherical and phase interference models initialized using nominal parameters for the PSF. In both cases, the first column shows the ground truth diffraction pattern, while the rows show SPR, ASR, and BP+AF reconstructions. Two enlarged regions are presented in each case for detailed comparison. Also, two color scales are used to enable visibility in cases of low-intensity reconstructions.

Tables (2)

Table 1. Results in planar illumination model for different initializations. The mean (standard deviation) of each metric evaluated over $N=250$ samples is presented. In all cases, SPR, ASR, and BP+AF metrics are statistically different with p-value $<0.005$.

Equations (12)

$$\min_{X,W,\mu}\ \frac{1}{2}\left\lVert H \odot W - \mu\mathbf{1} - T * X \right\rVert_F^2 + \gamma\lVert X \rVert_1 \quad \text{s.t.}\ |W| = 1.$$

$$X_{\mathrm{opt}} = \arg\min_X \frac{1}{2}\left\lVert (H \odot W - \mu\mathbf{1}) * T^{*} - X \right\rVert_F^2 + \gamma\lVert X \rVert_1 = \mathrm{SFT}_\gamma\!\left((H \odot W - \mu\mathbf{1}) * T^{*}\right)$$

$$\mathrm{SFT}_\gamma(a) = \begin{cases} 0 & \text{if } |a| \le \gamma \\ \dfrac{a}{|a|}\left(|a| - \gamma\right) & \text{otherwise.} \end{cases}$$

$$\min_{W,\tilde{\mu}}\ \ell_\gamma\!\left((H \odot W) * T^{*} - \tilde{\mu}\mathbf{1}\right) \quad \text{s.t.}\ |W| = 1,$$

$$\ell_\gamma(Q) = \sum_{i,j} \begin{cases} \frac{1}{2}|Q_{i,j}|^2 & \text{if } |Q_{i,j}| \le \gamma \\ \gamma\left(|Q_{i,j}| - \frac{1}{2}\gamma\right) & \text{otherwise.} \end{cases}$$

$$\min_{W,\tilde{B},T}\ \ell_\gamma\!\left((H \odot W) * T^{*} - \tilde{B}\right) \quad \text{s.t.}\ |W| = 1,\ \lVert \mathcal{F}\{\tilde{B}\} \rVert_0 \le \beta,\ |\mathcal{F}\{T\}| = 1,$$

$$\begin{aligned} Q^{(k)} &= Q^{(k-1)} - \epsilon\,\nabla_Q\,\ell_\gamma\!\left((H \odot W^{(k-1)}) * T^{*}_{Q^{(k-1)}} - \tilde{B}^{(k-1)}\right) \\ W^{(k)} &= \exp\!\left(i\,\angle\!\left(\tilde{B}^{(k-1)} * T_{Q^{(k)}}\right)\right) \\ \tilde{B}^{(k)} &= \mathcal{F}^{-1}\!\left\{\mathcal{K}_\beta\!\left\{\mathcal{F}\!\left\{(H \odot W^{(k)}) * T^{*}_{Q^{(k)}}\right\}\right\}\right\}, \end{aligned}$$

$$\min_{\{W_n,\tilde{B}_n\}_{n=1}^N,\,T}\ \frac{1}{N}\sum_{n=1}^N \ell_\gamma\!\left((H_n \odot W_n) * T^{*} - \tilde{B}_n\right) \quad \text{s.t.}\ |W_n| = 1\ \forall n,\ \lVert \mathcal{F}\{\tilde{B}_n\} \rVert_0 \le \beta\ \forall n,\ |\mathcal{F}\{T\}| = 1.$$

$$\min_{X,W,B}\ \frac{1}{2}\left\lVert H \odot W - B - T * X \right\rVert_F^2 + \gamma\lVert X \rVert_0 \quad \text{s.t.}\ |W| = 1,\ \lVert \mathcal{F}\{B\} \rVert_0 \le \beta,$$

$$H = \left|\,(X + \mathbf{1}) * T_{WAS}(z, p, \lambda)\,\right|,$$

$$H = \left|\,(S \odot (X + \mathbf{1})) * T_{WAS}(z, p, \lambda)\,\right|,$$

$$H = \left|\,\left(\left((X + \mathbf{1}) * T_{WAS}(z/2, p, \lambda)\right) \odot P\right) * T_{WAS}(z/2, p, \lambda)\,\right|,$$
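The complex soft-thresholding operator $\mathrm{SFT}_\gamma$ and the Huber-like loss $\ell_\gamma$ that appear in the equations above can be sketched directly in NumPy. This is an illustrative implementation of the standard definitions, not the authors' code.

```python
# Complex soft-thresholding and elementwise Huber-like loss, as used in
# sparse phase recovery. Illustrative NumPy sketch.
import numpy as np

def sft(a, gamma):
    """Complex soft-thresholding: shrinks |a| by gamma while preserving phase."""
    mag = np.abs(a)
    scale = np.where(mag > gamma, (mag - gamma) / np.maximum(mag, 1e-12), 0.0)
    return a * scale

def huber(q, gamma):
    """Huber-like loss: quadratic below gamma, linear above, summed elementwise."""
    mag = np.abs(q)
    quad = 0.5 * mag**2
    lin = gamma * (mag - 0.5 * gamma)
    return float(np.sum(np.where(mag <= gamma, quad, lin)))

a = np.array([0.5 + 0.0j, 0.0 + 2.0j])
print(sft(a, 1.0))  # small entry thresholded to 0; large entry shrunk toward 0
```

Note that the Huber-like loss is the minimum value of the soft-thresholding subproblem, which is what allows the sparse code to be eliminated from the objective before the PSF and phase updates.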