Abstract

We propose a new learning-based approach for 3D particle field imaging using holography. Our approach uses a U-net architecture incorporating residual connections, Swish activation, hologram preprocessing, and transfer learning to cope with challenges arising in particle holograms where accurate measurement of individual particles is crucial. Assessments on both synthetic and experimental holograms demonstrate a significant improvement in particle extraction rate, localization accuracy and speed compared to prior methods over a wide range of particle concentrations, including highly dense concentrations where other methods are unsuitable. Our approach can be potentially extended to other types of computational imaging tasks with similar features.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Particle fields consisting of bubbles, droplets, aerosols, biological cells, or other small objects are of interest across many scientific domains. In the past few decades, three-dimensional (3D) imaging has grown in popularity for measurements of particle size, shape, position, and motion in fields such as fluid dynamics [1], environmental science [2], chemical engineering [3,4], materials science [5], biology [6–8], medical sciences [9–11], and others. Digital holography (DH) has recently emerged as a powerful tool for such imaging tasks and is particularly useful for many in situ applications [12–16] owing to its simple and compact setup. DH encodes complex information from the particles (e.g., 3D position and size) onto a 2D image called a hologram by recording the interference between a reference light wave and light scattered from the particles. The information can subsequently be recovered from the hologram through a digital reconstruction process. Conventional reconstruction methods such as the angular spectrum method convolve the holograms with diffraction kernels such as the Rayleigh-Sommerfeld and Kirchhoff-Fresnel formulas [17], and extract particle positions using image segmentation [16,18] or focus metrics [19–21]. Image segmentation relies on prescribed intensity thresholds to distinguish the particles from the background, and its performance can deteriorate rapidly with increasing noise in the hologram, which can be caused by cross interference of scattered waves from adjacent particles as the particle concentration increases [22]. Focus metric methods employ various criteria (e.g., edge sharpness, intensity distribution) to characterize the focus level of particles. These criteria are usually sensitive to detailed features of particles and the noise level in the holograms, limiting their application to low-concentration particle fields with low background and cross-interference noise. Many approaches to overcome these issues focus on hardware design to improve the hologram quality or encode more information during the recording step of holography [23–25]. However, implementing these approaches requires sophisticated mechanical and optical components. Numerical approaches replace the mechanical complexity with computational complexity. Several inverse reconstruction methods such as deconvolution [26,27] and iterative optimization [28–31] have been proposed to improve particle reconstruction accuracy. The deconvolution approach models the blurring observed in the 3D reconstruction as the convolution of the true object field with a point spread function (PSF). The PSF must be modeled based on known diffraction formulas or obtained experimentally from a hologram of a point-like object. Iterative optimization methods employ hologram formation models to minimize the difference between the observed and modeled holograms [28–31] under a set of physical constraints such as sparsity and smoothness [30,31]. However, these advanced methods are computationally intensive and require fine-tuning of parameters to obtain optimal results. More importantly, the PSFs [26,27] and hologram formation models [28–31] do not incorporate dynamic noise characteristics associated with optical distortion and particle cross-interference, which substantially hampers the performance of these methods.

Recently, machine learning using deep neural networks (DNNs) has emerged as a prevailing tool for various image analysis tasks. Adoption of DNNs has drastically enhanced processing speed and yielded more accurate results than conventional inverse approaches for some applications [32]. However, compared to other fields of computational imaging, machine learning has been under-utilized in DH [33–44]. Machine learning in DH has been adopted for transforming hologram reconstructions into microscopic images similar to those commonly used in biological and medical examination [33–38], and for classification of the particle species captured in the hologram [39,40]. Only a handful of studies have implemented machine learning for particle imaging using holography, most of which deal with single-particle holograms and use learning-based regression to extract particle depth information [41–43]. In particular, Ren et al. [41] have demonstrated that a convolutional neural network (CNN) yields more accurate particle depth than conventional reconstruction methods and other machine learning approaches. To date, only Shimobaba et al. [44] have applied machine learning to multi-object particle field reconstruction from holograms, through a machine learning segmentation approach. They employed a U-net CNN architecture with an L1-regularized loss function and trained on synthetic holograms with particle concentrations varying from 4.7×10−5 particle/pixel (ppp) to 1.9×10−4 ppp. Their algorithm demonstrated good reconstruction results for low-concentration synthetic holograms in the presence of Gaussian noise, but its performance decayed rapidly with increasing particle concentration. Such concentration increases are typically required for many practical measurement applications. Furthermore, the regularization method employed in their approach tends to be unstable, affecting the convergence of the solution [45].

Based on the literature review and compared with other learning-based image processing tasks, we have identified three unique challenges associated with 3D particle imaging using DH. First, while the signal of an individual object can spread over a large region of the hologram, the reconstructed particle field usually consists of a group of sparse objects. When a learning-based approach is used to replace the reconstruction, this sparsity causes the training process to be highly unstable and to produce incorrect results. Second, 3D particle field reconstruction requires very accurate measurements for each particle, which differs from many conventional learning-based imaging tasks such as classification or global regression of the image [32]. Finally, the desired metrics, recording parameters, and hologram appearance are coupled, limiting the generalizability of a model trained on a specific set of data. It is worth noting that these challenges can also appear in light field imaging [6,46,47], imaging through diffusive media [48], defocus imaging [49,50] and other methods [8].

To address the above-mentioned issues, we present in this paper a specially-designed machine learning approach for 3D particle field reconstruction in DH, which can also be employed in other computational imaging tasks sharing similar traits. Section 2 describes our methodology, followed by the assessment of our method using both synthetic and experimental data in Section 3. Section 4 offers a conclusion and a discussion on the future extension of our method.

2. Methodology

Our machine learning approach for particle field reconstruction from a hologram uses a specially-designed U-net architecture, which takes the holograms as input and computes the 3D location of particles as output. U-net is a type of CNN developed for medical and biological image segmentation [51,52] that has also been used in learning-based image-to-image transformations [33–38,53] and multi-object classification from single images [54]. U-net consists of a series of encoder and decoder blocks, corresponding to solid and dashed black boxes in Fig. 1, respectively. In the encoder block, two consecutive sets of convolution layers and activation functions encode local features of the input images into channels. Two encoder blocks are connected by a maximum pooling layer which downsamples the feature maps in order to extract global features. The decoder block is similar but in reverse: two consecutive convolution layers decode the channels to form an image, and two decoder blocks are connected by up-convolution layers which resize the feature maps. Note that the output feature map of the final encoder block is connected to the first decoder block through an up-convolution layer. U-net also includes skip connections (black arrows in Fig. 1) whereby the encoder output is concatenated to the decoder block of the same size, combining the local and global features of the images for training in the deeper stages of the network. In comparison to a simple CNN architecture without skip connections, we suggest that U-net is more suitable for particle field reconstruction from a hologram because the skip connections make use of the spread of individual particle information over a large portion of the image (at both local and global scales). Our architecture uses 4 encoder and 3 decoder blocks, with 64 output channels in the first encoder and 512 in the last. As suggested in [51], a key feature of the U-net architecture is that it can be directly applied to images of arbitrary size (regardless of the training set image size) since there are no densely connected layers.
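To make the block structure concrete, the following is a minimal Keras sketch of one encoder and one decoder block. The kernel sizes and the placeholder ReLU activation are illustrative assumptions rather than the trained configuration, and the residual connections and Swish activation described next are omitted for brevity.

```python
from tensorflow.keras import layers

def encoder_block(x, channels):
    # two consecutive convolution + activation layers encode local features
    y = layers.Conv2D(channels, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(channels, 3, padding='same', activation='relu')(y)
    skip = y                          # saved for the decoder-side concatenation
    y = layers.MaxPooling2D(2)(y)     # downsample to extract global features
    return y, skip

def decoder_block(x, skip, channels):
    # up-convolution resizes the feature maps; the matching encoder output is
    # then concatenated (the skip connections, black arrows in Fig. 1)
    y = layers.Conv2DTranspose(channels, 2, strides=2, padding='same')(x)
    y = layers.Concatenate()([y, skip])
    y = layers.Conv2D(channels, 3, padding='same', activation='relu')(y)
    y = layers.Conv2D(channels, 3, padding='same', activation='relu')(y)
    return y
```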


Fig. 1. The specially-designed U-net architecture for holographic reconstruction of a 3D particle field.


Compared with the conventional U-net architecture, our U-net has a residual connection within each encoder and decoder block (green arrows in Fig. 1) and uses a Swish (Sigmoid-weighted Linear Unit) activation function (Eq. (1)) for all except the last layer. The residual connection increases the training speed and reduces the likelihood of the training becoming trapped at a local minimum [55]. Note that within an encoder the residual connection is achieved by connecting the channels from the maximum pooling layer to the output channels. In the decoder, the residual connection uses the channels from the previous decoder block connected by an up-convolution layer. Such a configuration provides the necessary shortcut connection (i.e., skipping one or two convolution layers) for a residual net [55]. Additionally, we replace the commonly-used Rectified Linear Unit (ReLU) activation function with the recently proposed Swish activation function (f(x), Eq. (1)) in our architecture [56]. Note that x in Eq. (1) corresponds to the outputs from the previous layer and f(x) is the input to the next layer.

$$f(x) = \frac{x}{1 + e^{-x}}$$
For particle holograms, the target images (Fig. 2) are usually sparse due to the small size of the particle centroids, which leads to the majority of values in the feature layers being equal to 0. During training, the parameters within the model therefore have a higher tendency to be 0, and subsequently become untrainable since the derivative of ReLU is 0 whenever its input is less than or equal to 0 [56]. This problem causes substantial degradation for deep CNN models using the ReLU activation function [56]. Comparatively, the Swish function is smooth and non-monotonic near 0, which increases the number of effective parameters in the training, especially for sparse targets. However, Swish may affect the accuracy of the model's predictions due to the inclusion of negative output values. To solve this problem, we use a Sigmoid activation in the final decoder block (magenta arrow in Fig. 1) to produce results within the range from 0 to 1.
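The following short TensorFlow check illustrates this point numerically (the sample inputs are arbitrary): for negative pre-activations, the gradient of ReLU is exactly zero, while Swish retains a small nonzero gradient that keeps the corresponding parameters trainable.

```python
import tensorflow as tf

x = tf.constant([-3.0, -1.0, 0.0, 1.0])

with tf.GradientTape() as tape:
    tape.watch(x)
    y_swish = x * tf.math.sigmoid(x)  # Swish, Eq. (1)
grad_swish = tape.gradient(y_swish, x)

with tf.GradientTape() as tape:
    tape.watch(x)
    y_relu = tf.nn.relu(x)
grad_relu = tape.gradient(y_relu, x)

print(grad_relu.numpy())   # [0. 0. 0. 1.] -- zero gradient wherever x <= 0
print(grad_swish.numpy())  # nonzero for x < 0, so those parameters keep training
```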


Fig. 2. A sample training input and training target consisting of 300 particles (i.e., a concentration of 0.018 ppp) with a hologram size of 128×128 pixels. The hologram is formed with a pixel resolution of 10 µm and a laser illumination wavelength of 632 nm.


The training input consists of three channels: the original hologram, and the corresponding images of pixel depth projection (i.e., the depth map) and maximum phase projection (Fig. 2). The original synthetic hologram is generated following the approach in [44] and [57]. The pixel resolution is 10 µm with a laser illumination wavelength of 632 nm and an image size of 128×128 pixels. The particles within the holograms are randomly distributed at distances between 1 mm and 2.28 mm from the sensor. The resolution in the z direction is also 10 µm, with 128 discrete depth levels. Compared with [44], the depth map and phase projection are additional information obtained by preprocessing the holograms. As suggested by [32], preprocessing is employed to incorporate existing hologram formation knowledge into the model and reduce the need for the model to fully learn the required physics during training. Additionally, incorporating known hologram formation physics rather than relying solely on model training avoids spurious and unphysical outputs from the trained model. Our initial tests have shown a noticeable improvement in particle extraction rate with these preprocessing steps, especially for high concentration cases, in comparison to training directly on the raw holograms. Using the angular spectrum method [58], a 3D complex optical field, $u_\textrm{p}(x,y,z)$, is generated from the original hologram I(x,y) (Eq. (2)), where λ is the wavelength, k is the wave number, j is the imaginary unit, and $\mathcal{F}$ is the Fourier transform operator. The depth map is generated by projecting the z locations where the pixels have the maximum intensity in $u_\textrm{p}(x,y,z)$ onto the xy plane (Eq. (3)), and the maximum phase projection is calculated from Eq. (4).

$$u_\textrm{p}(x,y,z) = \mathcal{F}^{-1}\left[\mathcal{F}(I(x,y)) \times \mathcal{F}\left(\frac{\exp(jkz)}{j\lambda z}\exp\left\{j\frac{k}{2z}(x^2 + y^2)\right\}\right)\right]$$
$$z_{\textrm{approx}}(x,y) = \arg\max_z \{u_\textrm{p}(x,y,z) \times \textrm{conj}[u_\textrm{p}(x,y,z)]\}$$
$$P(x,y) = \max_z \{\textrm{angle}[u_\textrm{p}(x,y,z)]\}$$
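The sketch below shows one way to implement this preprocessing in NumPy, following Eqs. (2)–(4) with the Fresnel kernel as written in Eq. (2). The function and variable names are our own, and the default parameters mirror the synthetic dataset described above (10 µm pixels, 632 nm wavelength, 128 depth levels between 1 mm and 2.28 mm).

```python
import numpy as np

def preprocess(hologram, wavelength=632e-9, dx=10e-6,
               z_planes=np.linspace(1e-3, 2.28e-3, 128)):
    """Build the depth map (Eq. (3)) and maximum phase projection (Eq. (4))."""
    ny, nx = hologram.shape
    k = 2 * np.pi / wavelength
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dx
    X, Y = np.meshgrid(x, y)
    H = np.fft.fft2(hologram)

    intensity = np.zeros((len(z_planes), ny, nx))
    phase = np.zeros_like(intensity)
    for i, z in enumerate(z_planes):
        # Eq. (2): convolve the hologram with the propagation kernel via FFT
        kernel = (np.exp(1j * k * z) / (1j * wavelength * z)
                  * np.exp(1j * k / (2 * z) * (X**2 + Y**2)))
        u = np.fft.ifft2(H * np.fft.fft2(np.fft.ifftshift(kernel)))
        intensity[i] = np.abs(u)**2   # u_p x conj(u_p)
        phase[i] = np.angle(u)

    depth_map = np.argmax(intensity, axis=0)  # Eq. (3), in depth-plane indices
    phase_proj = phase.max(axis=0)            # Eq. (4)
    return depth_map, phase_proj
```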
The training target consists of two output channels. The first is a grayscale channel in which the pixel intensity corresponds to the relative depth of each particle and the second is a binary image of the particle xy centroids (Fig. 2). While the particles are encoded as only a single pixel in the xy binary channel, doing the same for the depth-encoded grayscale channel produces a trained model which generates an output with inaccurate pixel intensities and substantial background noise. To prevent this, the labeled particles in the depth-encoded grayscale target are set to a size of 3×3 pixels.
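For illustration, the two target channels could be assembled from known particle locations as in the sketch below; normalizing the depth values to [0, 1] (to match the Sigmoid output of the final decoder block) is our assumption.

```python
import numpy as np

def make_targets(particles, shape=(128, 128), n_z=128):
    """particles: iterable of (row, col, z_index) ground-truth locations."""
    centroids = np.zeros(shape)
    depth = np.zeros(shape)
    for r, c, z in particles:
        centroids[r, c] = 1.0                   # single-pixel binary label
        r0, r1 = max(r - 1, 0), min(r + 2, shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, shape[1])
        depth[r0:r1, c0:c1] = z / (n_z - 1)     # 3x3-pixel depth-encoded label
    return depth, centroids
```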

Because of the differences between the two target channels, each channel uses a different type of loss function. Specifically, a Huber loss [59] is evaluated on the output channel encoding particle depth. As shown in Eq. (5), it uses a modified mean absolute error (MAE) of the prediction (Y) relative to the ground truth (X) as the training loss when the MAE is larger than a preset δ (0.002 for the synthetic dataset), and uses a mean squared error (MSE) when the MAE is less than δ. The Huber loss improves the training robustness and prediction accuracy by switching to MAE once the averaged pixel intensities are biased by outliers [59]. We suggest that the parameter δ in Eq. (5) can be determined based on the measurement accuracy requirements, with a smaller δ resulting in an improved particle depth resolution. However, too small a δ may lead to an unstable training process with multiple solutions, similar to using a pure MAE loss [45].

$$L = \begin{cases} \dfrac{1}{2}\|Y - X\|_2^2, & \|Y - X\|_1 \le \delta,\\[4pt] \delta\|Y - X\|_1 - \dfrac{1}{2}\delta^2, & \text{otherwise.} \end{cases}$$
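A per-pixel reading of Eq. (5) can be written as a custom Keras loss, as sketched below. Whether the quadratic/linear switch is applied per pixel or on the image-averaged error is our interpretation; tf.keras.losses.Huber provides the standard elementwise form.

```python
import tensorflow as tf

def huber_loss(delta=0.002):
    """Eq. (5): quadratic below delta, linear above (delta = 0.002, Section 2)."""
    def loss(y_true, y_pred):
        err = tf.abs(y_true - y_pred)
        quadratic = 0.5 * tf.square(err)
        linear = delta * err - 0.5 * delta**2
        return tf.reduce_mean(tf.where(err <= delta, quadratic, linear))
    return loss
```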
An MSE loss regularized by the total variation (TV) of the prediction is used for the xy centroid channel (Eq. (6)). As shown in Eq. (7), TV is the sum of first-order gradients over the image of size Nx×Ny.
$$L = (1 - \alpha)\|Y - X\|_2^2 + \alpha\|Y\|_{TV}^2$$
$$\|Y\|_{TV} = \sum_{i = 1}^{N_x}\sum_{j = 1}^{N_y}\sqrt{(Y_{i,j} - Y_{i-1,j})^2 + (Y_{i,j} - Y_{i,j-1})^2}$$
TV regularization has previously been adopted in iterative optimization methods for hologram reconstruction [30,31]. TV is robust to outliers in the images and causes the model to produce a smooth background in the output xy centroid channel. Such regularization reduces the likelihood of background pixels having non-zero values, which would result in the detection of ghost particles. The α in Eq. (6) is a parameter that determines the smoothness of the results. We suggest a small α (∼0.0001) for training since TV regularization acts as a low-pass filter and too much smoothing can degrade the accuracy of the results.
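A sketch of the TV-regularized loss of Eqs. (6) and (7) as a custom Keras loss is given below. The (batch, height, width, channel) tensor layout and the small constant added inside the square root for gradient stability are our assumptions.

```python
import tensorflow as tf

def tv_mse_loss(alpha=1e-4):
    def loss(y_true, y_pred):
        mse = tf.reduce_sum(tf.square(y_true - y_pred))
        # Eq. (7): first-order gradients along the two image axes
        dy = y_pred[:, 1:, :, :] - y_pred[:, :-1, :, :]
        dx = y_pred[:, :, 1:, :] - y_pred[:, :, :-1, :]
        tv = tf.reduce_sum(tf.sqrt(tf.square(dy[:, :, :-1, :])
                                   + tf.square(dx[:, :-1, :, :]) + 1e-12))
        # Eq. (6): MSE plus squared-TV penalty weighted by alpha
        return (1 - alpha) * mse + alpha * tf.square(tv)
    return loss
```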

The U-net architecture is implemented using Keras [60] with the TensorFlow backend. The training is conducted on an Nvidia RTX 2080Ti GPU. The Adam optimizer [61] is used with the default learning rate of 0.001. Thirteen datasets, with particle concentrations between 1.9×10−4 and 6.1×10−2 ppp, are generated to train and test the models. The highest particle concentration of the synthetic data is 305 times higher than the highest case (1.9×10−4 ppp) in the literature [44]. A base model is first trained on 2500 holograms with a particle concentration of 1.8×10−2 ppp for 480 epochs (13.5 hours in total). For each subsequent particle concentration, a dataset of 2000 holograms is trained for 120 epochs with the training initialized by the base model (2.7 hours for each case). This transfer learning approach substantially decreases the training requirements (i.e., dataset size and training time) for new hologram datasets [62]. To extract the particles from the model output, the predicted particle xy centroid map is first binarized with a threshold of 0.5 (equivalent to the maximum likelihood) to extract the xy centroids of the particles. Subsequently, the particle depths are read from the corresponding pixels of the depth-encoded grayscale output.
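The extraction step could look like the sketch below, using scikit-image's connected-component utilities to read out the centroids; inverting the [0, 1] depth encoding assumed for the training targets above is likewise our assumption.

```python
import numpy as np
from skimage.measure import label, regionprops

def extract_particles(centroid_map, depth_map, n_z=128):
    """Threshold the centroid channel at 0.5, then read the depth per particle."""
    particles = []
    for region in regionprops(label(centroid_map > 0.5)):
        r, c = [int(round(v)) for v in region.centroid]
        z = depth_map[r, c] * (n_z - 1)  # invert the [0, 1] depth encoding
        particles.append((r, c, z))
    return particles
```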

3. Results

3.1 Assessment using synthetic holograms with constant particle concentration

We first compare the performance of our method to an implementation of the approach proposed by Shimobaba et al. [44] on a test set of 100 holograms with the same particle concentration. Because the synthetic holograms are independent of specific experimental settings, and the voxel-based discretization resolution of the hologram largely determines the accuracy of our measurement system from the digital image processing point of view, we present all the results from synthetic holograms in units of voxels. The training of the Shimobaba approach is conducted on a dataset of 9000 holograms with particle concentrations from 1.9×10−4 to 6.1×10−2 ppp (the same as our synthetic datasets). The comparison is first made on a 300-particle hologram (Fig. 3). The pairing of predicted particles to the ground truth follows the method presented in [31]. As Fig. 3 shows, our proposed approach yields a significantly higher number of extracted particles compared to the Shimobaba approach. Our method yields an extraction rate of 98.7% (1.3% false negatives) with a false positive (unpaired particles from the prediction) rate of 2.3%. By comparison, the Shimobaba approach achieves a 40.4% extraction rate (59.6% false negatives) and 17.5% false positive rate, while the state-of-the-art regularized inverse holographic volumetric reconstruction (RIHVR) [31] achieves 88.4% (11.6% false negatives) and 2.3% for the extraction and false positive rates, respectively. All three methods have a median positioning error of less than 1 pixel in x and y. The z error for our method is 1.48 voxels, in comparison to 5.49 voxels for Shimobaba and 3.50 voxels for RIHVR.
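For reference, a simple stand-in for this pairing and the associated metrics is sketched below, using nearest-neighbor matching within a voxel tolerance; the exact pairing criterion of [31] may differ, and the tolerance value here is arbitrary.

```python
import numpy as np
from scipy.spatial import cKDTree

def pair_particles(pred, truth, tol=5.0):
    """pred, truth: (N, 3) arrays of (x, y, z) positions in voxels."""
    dist, idx = cKDTree(truth).query(pred, distance_upper_bound=tol)
    matched_truth = {i for i in idx if i < len(truth)}
    extraction_rate = len(matched_truth) / len(truth)  # paired ground truth
    false_positive_rate = np.mean(dist > tol)          # unpaired predictions
    return extraction_rate, false_positive_rate
```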


Fig. 3. Prediction results from the trained model using (a) our U-net architecture, (b) the method presented in Shimobaba et al. [44], and (c) Mallery and Hong [31] for the case of 0.018 ppp (300-particle holograms). The black dots are extracted true particles, red dots are false positives (i.e., unpaired particles from the prediction), and green dots are false negatives (unpaired particles from the ground truth).


To analyze the impact of the unique features of our U-net architecture on the training process, we compare the training loss decay over the first 200 epochs for different model variants on the 1.8×10−2 ppp dataset. As shown in Fig. 4, compared to the proposed method, removing the residual connections or using a conventional loss function (MSE) both destabilize the training process. The removal of residual connections (Fig. 4(b)) leaves the training susceptible to local minima, as discussed by He et al. [55]. Additionally, when using a loss function susceptible to outliers (such as MSE) without any regularization, the model is likely to produce trivial predictions in the output (i.e., spurious particles), which causes fluctuations of the loss curve even early in the training (Fig. 4(c)). As a result, the model does not converge to an optimal solution and, in the worst scenarios, produces very inaccurate pixel intensity predictions or white-noise outputs.


Fig. 4. Demonstration of the impact of the proposed model improvements on the training process over the first 200 epochs: (a) the proposed approach, (b) the U-net architecture without residual connections, and (c) mean squared error as the loss function. The loss is normalized by its initial value, and each case is randomly initialized 10 times to show the resultant instability of the training for cases (b) and (c). The green curves correspond to the maximum and minimum normalized loss at each epoch, the blue curves correspond to each initialization, and the shaded region is the range of loss.


Finally, replacing the ReLU activation functions with Swish improves the training process by avoiding untrainable parameters (i.e., dead neurons) [56]. In our tests, the cases without Swish can produce more than 6000 dead neurons in the last convolution layer by the end of the first epoch, which substantially degrades the model. As a result, the training is likely to reach a plateau in the first few epochs and the resulting models generate 2D white-noise images.

3.2 Assessment using synthetic holograms with variable particle concentration

The particle extraction rate and positioning accuracy using the proposed transfer learning approach (see Section 2) are assessed for particle concentrations varying from 1.9×10−4 ppp to 6.1×10−2 ppp. In Fig. 5, we present the case of a 100-particle hologram (6.1×10−3 ppp) and the case of a 1000-particle hologram (6.1×10−2 ppp). The lower concentration instance shown here has an extraction rate of 97.0% (both false positive and false negative rates of 3.0%) while the higher concentration case has a 95.0% extraction rate (false positive rate of 1.6% and false negative rate of 5.0%). From an assessment of 100 holograms for each particle concentration, the extraction rate is above 94.4% (false negatives less than 5.6%) over the range from 1.9×10−4 ppp to 6.1×10−2 ppp (Fig. 6(a)), with a median particle positioning accuracy of less than one voxel in x and y and less than 3.2 voxels in z for all concentrations (Fig. 6(b)). For all cases, the false positive rate of the model prediction is less than 10%. On the same test data, Shimobaba et al. [44] yields an extraction rate lower than 60% for particle concentrations higher than 1.8×10−3 ppp, and RIHVR [31] shows a substantial drop in particle extraction rate starting at a concentration of 3.0×10−2 ppp. Our machine learning approach shows no such drop within the studied range (Fig. 6(a)). We suggest that our preprocessing and transfer learning approaches significantly reduce the training requirements to yield a high extraction rate and positioning accuracy for new holograms, especially for high concentration cases. The increased extraction rate for high particle concentration holograms potentially enables improved spatial resolution for tracer-based flow diagnostic techniques such as particle image velocimetry (PIV) and particle tracking velocimetry (PTV) [1].


Fig. 5. Comparison of prediction results with a 100-particle hologram (a) and a 1000-particle hologram (b). The black dots are extracted true particles, red dots are false positives (i.e., unpaired particles from prediction) and green dots are false negatives (unpaired particles from the ground truth).



Fig. 6. (a) Extraction rate at different particle concentrations for the proposed method compared with Shimobaba et al. [44] and RIHVR [31], and (b) median position error of extracted particles for the proposed method at different particle concentrations. The dashed lines correspond to the particle concentration of the base model (1.8×10−2 ppp).


3.3 Assessment using experimental data

The proposed method is assessed using experimental holograms of fluorescently labeled particles embedded in a solid gel. We use 2 µm fluorescent particles (ThermoScientific) at a concentration of ∼5000 particles/mm3 (2.0×10−2 ppp) dispersed in a water-based gelatin placed between glass slides. The experimental holograms (Fig. 7(a)) are recorded on a Nikon Ti-Eclipse inverted microscope using a 10X Nikon objective lens and an Andor Zyla 5.5 sCMOS camera (0.65 µm/pixel) with collimated 660 nm diode laser illumination (QPhotonics; QLD-660-10S). The microscope can record multiple holograms of 2432×2048 pixels at distances spanning 0–200 µm below the volume, all of which are used to calculate an ensemble-averaged background for image enhancement. The ground truth measurement (Fig. 7(b)) is obtained by scanning the sample at the same location using the epi-fluorescence mode of the microscope at a step size of 2.5 µm over the entire depth at the same image size and resolution. The particle positions for the ground truth measurement are then obtained through manual thresholding and intensity-weighted centroid calculation. The training dataset consists of 7500 randomly cropped 128×128 pixel tiles from the enhanced hologram and their corresponding particle locations from the 3D fluorescence scan volume. Here the target images are saved with 16-bit precision to encode a large number of reconstruction planes. The Huber loss δ in Eq. (5) is set to 0.001, which is lower than in the synthetic cases since the pixel intensity in the labels encodes a higher resolution in depth (the z direction in the experimental case). We use the same method as in the synthetic case to pair the predicted particles to the ground truth. As shown in Fig. 7(c), despite the noisy input (Fig. 7(a)), the predicted results from the trained model match the ground truth well. A test of 100 randomly cropped 128×128 pixel tiles from a validation hologram imaging a different region of this sample yields a 90% extraction rate with a positioning error of less than 1 voxel (0.65 µm) in the x and y directions and 5.24 voxels (13.1 µm) in the z direction. Training on this dataset using the Shimobaba approach [44] does not converge to a low loss value and yields a model that generates strong background noise. It is difficult to apply RIHVR to this case because the background removal specified in [31] depends on the motion of particles to produce an accurate time-averaged background image. As such, RIHVR has a very high false detection rate for this case.


Fig. 7. (a) A 128×128-pixel enhanced hologram from the experimental data, (b) the corresponding volumetric image obtained by stacking fluorescent bright-field scans of the same sample to determine the ground truth, and (c) prediction results from the machine learning model. The black dots are extracted true particles, red dots are false positives (unpaired particles from the prediction), and green dots are false negatives (unpaired particles from the ground truth).


4. Summary and discussion

In the present paper, we introduce a new learning-based approach for 3D particle field reconstruction from holograms. For holograms, the information in the training input (i.e., the diffraction fringes of an individual particle) can spread significantly beyond the in-focus location of the particle, while the training target consists of a sparse particle field where accurate measurement of each particle is crucial. To handle these traits, our specially-designed U-net architecture has three input channels (the original hologram, depth map, and maximum phase projection map) and two output channels (depth and centroid maps of the reconstructed particles). Compared to simple feed-forward networks, U-net combines both local and global features, which is particularly suitable for particle hologram reconstruction where the reconstructed particle target is small but its information can spread significantly beyond its in-focus position in the hologram. The 2D depth and maximum phase projection map channels use the angular spectrum method from conventional holographic reconstruction to incorporate hologram formation knowledge into the U-net training and reduce the need to fully learn the required physics. In addition, our architecture employs residual connections and the Swish activation function to reduce the likelihood of the training becoming trapped in local minima or producing a large number of dead neurons. We use two types of loss functions (a Huber loss and a TV-regularized MSE loss) for the output channels of particle depth and particle centroids, respectively, in order to improve the prediction accuracy, produce a smooth background, and reduce ghost particles. Lastly, a transfer learning approach is adopted to reduce the training requirements for new hologram datasets. Through an assessment of synthetic holograms, our approach is shown to be suitable for processing much denser particle concentrations than prior approaches, with a 94% extraction rate at a concentration (6.1×10−2 ppp) 305 times higher than previously demonstrated with a machine learning approach [44] and 4 times higher than the 90% extraction limit of a state-of-the-art analytical method [31]. This improvement in maximum concentration comes while also achieving improved positioning accuracy (error of <3.2 voxels). Validating the proposed method on experimental holograms with a concentration of 0.020 ppp results in an extraction rate over 90% with a positioning error of 5.24 voxels (13.1 µm) for the depth measurement. These assessments demonstrate the unique power of machine learning for particle hologram reconstruction over a broad range of particle concentrations. Finally, our learning-based hologram reconstruction is more than 30 times faster than the analytical RIHVR method, even though minimal effort has been undertaken to optimize the speed of the current approach. We suggest that the proposed method can be generalized to other sparse field imaging tasks such as imaging individual brain neuron activities, particle localization with synthetic aperture or defocusing imaging, and imaging through diffusive media.

While our machine learning approach has equal or superior performance compared to state-of-the-art conventional hologram reconstruction methods, there remains room for improvement. First, the process of collecting experimental data with known particle locations and training the model must be repeated for each new experimental dataset. Nevertheless, the transfer learning approach detailed above can substantially reduce the training time needed to achieve accurate 3D particle field reconstruction for new datasets. For holograms collected with significantly different recording settings, the ground truth can be collected using experiments or through high-fidelity conventional reconstruction approaches such as RIHVR [31]. Ongoing work aims to synthesize holograms with sufficient fidelity to train a model suitable for processing experimental images. Such an approach can substantially reduce the cost of collecting ground truth measurements and has been proven effective for image classification tasks [63] and 2D shadow image particle segmentation [64]. Additionally, although our learning-based approach has achieved significant speed improvement in comparison to conventional reconstruction methods, more than 90% of our total processing time is consumed in pre- and post-processing steps. The processing speed of such steps can be readily enhanced through GPU processing and a more streamlined pipeline to attain real-time/onboard processing capacity for various applications.

Funding

Office of Naval Research (N00014-16-1-2755).

Acknowledgements

The authors would like to thank Prof. Xiang Cheng for access to the microscope used to generate the experimental training data.

Disclosures

The authors declare no conflicts of interest.

References

1. M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle Image Velocimetry: A Practical Guide (Springer, 2018).

2. M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).

3. J. Yu, C. Wu, S. P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).

4. A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).

5. K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coronado, and G. Van Tendeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).

6. R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014).

7. S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).

8. J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).

9. Y. S. Choi and S. J. Lee, “Three-dimensional volumetric measurement of red blood cell motion using digital inline holography,” Appl. Opt. 48(16), 2983–2990 (2009).

10. T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).

11. K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).

12. E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).

13. H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).

14. M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).

15. C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).

16. S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).

17. J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Annu. Rev. Fluid Mech. 42(1), 531–555 (2010).

18. J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).

19. L. Tian, N. Loomis, J. A. Domínguez-Caballero, and G. Barbastathis, “Quantitative measurement of size and three-dimensional position of fast-moving bubbles in air-water mixture flows using digital holography,” Appl. Opt. 49(9), 1549–1554 (2010).

20. D. R. Guildenbecher, J. Gao, P. L. Reu, and J. Chen, “Digital holography simulations and experiments to quantify the accuracy of 3D particle location and 2D sizing using a proposed hybrid method,” Appl. Opt. 52(16), 3790–3801 (2013).

21. S. Shao, C. Li, and J. Hong, “A hybrid image processing method for measuring 3D bubble distribution using digital inline holography,” Chem. Eng. Sci. 207, 929–941 (2019).

22. M. Malek, D. Allano, S. Coëtmellec, and D. Lebrun, “Digital in-line holography: influence of the shadow density on particle field extraction,” Opt. Express 12(10), 2270–2279 (2004).

23. V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).

24. B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).

25. J. Gao and J. Katz, “Self-calibrated microscopic dual-view tomographic holography for 3D flow measurements,” Opt. Express 26(13), 16708–16725 (2018).

26. T. Latychevskaia and H. W. Fink, “Holographic time-resolved particle tracking by means of three-dimensional volumetric deconvolution,” Opt. Express 22(17), 20994 (2014).

27. M. Toloui and J. Hong, “High fidelity digital inline holographic method for 3D flow measurements,” Opt. Express 23(21), 27159 (2015).

28. N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).

29. A. Berdeu, O. Flasseur, L. Méès, L. Denis, F. Momey, T. Olivier, N. Grosjean, and C. Fournier, “Reconstruction of in-line holograms: combining model-based and regularized inversion,” Opt. Express 27(10), 14951 (2019).

30. F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J. L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923 (2018).

31. K. Mallery and J. Hong, “Regularized inverse holographic volume reconstruction for 3D particle tracking,” Opt. Express 27(13), 18069 (2019).

32. G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019).

33. Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).

34. Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).

35. H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).

36. T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent imaging systems,” Sci. Rep. 9(1), 3926 (2019).

37. T. Liu, Z. Wei, Y. Rivenson, K. de Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).

38. K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, “Y-net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765–4768 (2019).

39. Z. Göröcs, M. Tamamitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Rivenson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).

40. V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V (SPIE, 2019), pp. 108870F-1–108870F-7.

41. Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018).

42. M. D. Hannel, A. Abdulali, M. O’Brien, and D. G. Grier, “Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles,” Opt. Express 26(12), 15221–15231 (2018).

43. K. Jaferzadeh, S. H. Hwang, I. Moon, and B. Javidi, “No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network,” Biomed. Opt. Express 10(8), 4276–4289 (2019).

44. T. Shimobaba, T. Takahashi, Y. Yamamoto, Y. Endo, A. Shiraki, T. Nishitsuji, N. Hoshikawa, T. Kakue, and T. Ito, “Digital holographic particle volume reconstruction using a deep neural network,” Appl. Opt. 58(8), 1900–1906 (2019).

45. C. Ding, D. Zhou, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of the 23rd ICML, Pittsburgh (ACM, 2006), pp. 281–288.

46. T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).

47. E. M. Hall, D. R. Guildenbecher, and B. S. Thurow, “Uncertainty characterization of particle location from refocused plenoptic images,” Opt. Express 25(18), 21801–21814 (2017).

48. M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).

49. F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).

50. P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mesquita, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).

51. O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Med. Image Comput. Comput. Assist. Interv. (Springer, 2015), pp. 234–241.

52. Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. (Springer, 2016), pp. 424–432.

53. P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR (IEEE, 2017), pp. 1125–1134.

54. L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the European Conference on Computer Vision (ECCV) (2018), pp. 801–818.

55. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR (IEEE, 2016), pp. 770–778.

56. P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for activation functions,” arXiv:1710.05941 (2017).

57. F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, “Inverse problem approach in particle digital holography: out-of-field particle detection made possible,” J. Opt. Soc. Am. A 24(12), 3708–3716 (2007).

58. T. Latychevskaia and H. W. Fink, “Practical algorithms for simulation and reconstruction of digital in-line holograms,” Appl. Opt. 54(9), 2424–2434 (2015).

59. P. J. Huber, “Robust estimation of a location parameter,” Ann. Math. Stat. 35(1), 73–101 (1964).

60. F. Chollet, Keras, GitHub repository (2015), https://github.com/fchollet/keras.

61. D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

62. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning (MIT Press, 2016).

63. J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR (IEEE, 2018), pp. 969–977.

64. Y. Fu and Y. Liu, “BubGAN: bubble generative adversarial networks for synthesizing realistic bubbly flow images,” Chem. Eng. Sci. 204, 35–47 (2019).

[Crossref]

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

K. Wang, J. Dou, Q. Kemao, J. Di, and J. Zhao, “Y-net: a one-to-two deep learning framework for digital holographic reconstruction,” Opt. Lett. 44(19), 4765–4768 (2019).
[Crossref]

K. Jaferzadeh, S. H. Hwang, I. Moon, and B. Javidi, “No-search focus prediction at the single cell level in digital holographic imaging with deep convolutional neural network,” Biomed. Opt. Express 10(8), 4276–4289 (2019).
[Crossref]

T. Shimobaba, T. Takahashi, Y. Yamamoto, Y. Endo, A. Shiraki, T. Nishitsuji, N. Hoshikawa, T. Kakue, and T. Ito, “Digital holographic particle volume reconstruction using a deep neural network,” Appl. Opt. 58(8), 1900–1906 (2019).
[Crossref]

Y. Fu and Y. Liu, “BubGan: Bubble generative adversarial networks for synthesizing realistic bubbly flow images,” Chem. Eng. Sci. 204, 35–47 (2019).
[Crossref]

2018 (8)

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Z. Ren, Z. Xu, and E. Y. Lam, “Learning-based nonparametric autofocusing for digital holography,” Optica 5(4), 337–344 (2018).
[Crossref]

M. D. Hannel, A. Abdulali, M. O’Brien, and D. G. Grier, “Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles,” Opt. Express 26(12), 15221–15231 (2018).
[Crossref]

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref]

F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J. L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923 (2018).
[Crossref]

J. Gao and J. Katz, “Self-calibrated microscopic dual-view tomographic holography for 3D flow measurements,” Opt. Express 26(13), 16708–16725 (2018).
[Crossref]

2017 (4)

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

E. M. Hall, D. R. Guildenbecher, and B. S. Thurow, “Uncertainty characterization of particle location from refocused plenoptic images,” Opt. Express 25(18), 21801–21814 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

2016 (3)

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).
[Crossref]

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).
[Crossref]

2015 (4)

M. Toloui and J. Hong, “High fidelity digital inline holographic method for 3D flow measurements,” Opt. Express 23(21), 27159 (2015).
[Crossref]

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).
[Crossref]

T. Latychevskaia and H. W. Fink, “Practical algorithms for simulation and reconstruction of digital in-line holograms,” Appl. Opt. 54(9), 2424–2434 (2015).
[Crossref]

2014 (4)

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).
[Crossref]

T. Latychevskaia and H. W. Fink, “Holographic time-resolved particle tracking by means of three-dimensional volumetric deconvolution,” Opt. Express 22(17), 20994 (2014).
[Crossref]

2013 (2)

D. R. Guildenbecher, J. Gao, P. L. Reu, and J. Chen, “Digital holography simulations and experiments to quantify the accuracy of 3D particle location and 2D sizing using a proposed hybrid method,” Appl. Opt. 52(16), 3790–3801 (2013).
[Crossref]

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

2012 (2)

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).
[Crossref]

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

2010 (2)

2009 (4)

J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).
[Crossref]

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Y. S. Choi and S. J. Lee, “Three-dimensional volumetric measurement of red blood cell motion using digital inline holography,” Appl. Opt. 48(16), 2983–2990 (2009).
[Crossref]

2007 (3)

F. Soulez, L. Denis, E. Thiébaut, C. Fournier, and C. Goepfert, “Inverse problem approach in particle digital holography: out-of-field particle detection made possible,” J. Opt. Soc. Am. A 24(12), 3708–3716 (2007).
[Crossref]

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).
[Crossref]

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).
[Crossref]

2004 (1)

2002 (1)

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

1999 (2)

E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).
[Crossref]

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).
[Crossref]

1964 (1)

J. P. Huber, “Robust estimation of a location parameter,” Ann. Math. Stat. 35(1), 73–101 (1964).
[Crossref]

Abdulali, A.

Abdulkadir, A.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Germany, (Springer, 2016), pp. 424–432.

Acuna, D.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Adam, H.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 801–818.

Adams, J. K.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Adams, M.

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).
[Crossref]

Agero, U.

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

Allano, D.

Alquaddomi, O.

E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).
[Crossref]

Amaral, F. T.

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

Anil, C.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Avants, B. W.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Ba, J.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” axXiv: 1412.6980 (2014).

Bäckman, J.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Bals, S.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Baraniuk, R. G.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Barbastathis, G.

Batenburg, K. J.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Beals, M. J.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

Bedrossian, M.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Bengio, Y.

I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (MIT Press, 2016).

Berdeu, A.

Bianco, G.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Bianco, V.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Birchfield, S.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Boochoon, S.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Boominathan, V.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Boppart, S. A.

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).
[Crossref]

Boyden, E. S.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Bramanti, A.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

Brophy, M.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Brox, T.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Med. Image Comput. Comput. Assist. Interv. 2015-Germany, (Springer, 2015), pp. 234–241.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Germany, (Springer, 2016), pp. 424–432.

Cameracci, E.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Carcagni, P.

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Carney, P. S.

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).
[Crossref]

Castano-Graff, E.

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).
[Crossref]

Chen, J.

Chen, L. C.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 801–818.

Chen, N.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Choi, Y. S.

Chollet, F.

F. Chollet, keras. GitHub repository (2015), https://github.com/fchollet/keras .

Çiçek, Ö.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Germany, (Springer, 2016), pp. 424–432.

Coëtmellec, S.

Coranado, E. A.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Courville, A.

I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (MIT Press, 2016).

Czerski, H.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Deming, J. W.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Denis, L.

Di, J.

Dib, E.

N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).
[Crossref]

Ding, C.

C. Ding, Z. Ding, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of 23rd ICML 2006-Pittsburg, (ACM, 2006), pp. 281–288.

Ding, Z.

C. Ding, Z. Ding, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of 23rd ICML 2006-Pittsburg, (ACM, 2006), pp. 281–288.

Distante, C.

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Domínguez-Caballero, J. A.

Donaghay, P.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Dong, H.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Dou, J.

Ekvall, M. T.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Elfros, A. A.

P. Isola, J. Y. Zhu, T. Zhou, and A. A. Elfros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR 2017-Honolulu, (IEEE, 2017), pp. 1125–1134.

Encina, E. R.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Endo, Y.

Fan, L. S.

A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).
[Crossref]

Faure, N.

Fernando, L. P.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Ferraro, P.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Fink, H. W.

Fischer, P.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Med. Image Comput. Comput. Assist. Interv. 2015-Germany, (Springer, 2015), pp. 234–241.

Flasseur, O.

Fournier, C.

Fu, Y.

Y. Fu and Y. Liu, “BubGan: Bubble generative adversarial networks for synthesizing realistic bubbly flow images,” Chem. Eng. Sci. 204, 35–47 (2019).
[Crossref]

Fugal, J. P.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

Gao, J.

Gharib, M.

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).
[Crossref]

Goepfert, C.

Goodfellow, I.

I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (MIT Press, 2016).

Grier, D. G.

Grosjean, N.

Gude, S.

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).
[Crossref]

Guildenbecher, D. R.

Günaydin, H.

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Gürücs, Z.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Haan, K.

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Haan, K. D.

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Hall, E. M.

Hannel, M. D.

Hansson, L. A.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Hartmann, H.-J.

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).
[Crossref]

He, K.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR 2016-Las Vegas, (IEEE, 2016), pp. 770–778.

He, X.

C. Ding, Z. Ding, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of 23rd ICML 2006-Pittsburg, (ACM, 2006), pp. 281–288.

Hernandez, J. C.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Hoffmann, M.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Hong, J.

S. Shao, C. Li, and J. Hong, “A hybrid image processing method for measuring 3D bubble distribution using digital inline holography,” Chem. Eng. Sci. 207, 929–941 (2019).
[Crossref]

K. Mallery and J. Hong, “Regularized inverse holographic volume reconstruction for 3D particle tracking,” Opt. Express 27(13), 18069 (2019).
[Crossref]

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).
[Crossref]

M. Toloui and J. Hong, “High fidelity digital inline holographic method for 3D flow measurements,” Opt. Express 23(21), 27159 (2015).
[Crossref]

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Hoshikawa, N.

Huber, J. P.

J. P. Huber, “Robust estimation of a location parameter,” Ann. Math. Stat. 35(1), 73–101 (1964).
[Crossref]

Hwang, S. H.

Isola, P.

P. Isola, J. Y. Zhu, T. Zhou, and A. A. Elfros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR 2017-Honolulu, (IEEE, 2017), pp. 1125–1134.

Ito, T.

Jaferzadeh, K.

Jampani, V.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Javidi, B.

Jolivet, F.

Jüptner, W.

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).
[Crossref]

Kähler, C. J.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Kaiser, U.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Kakue, T.

Kato, S.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Katz, J.

J. Gao and J. Katz, “Self-calibrated microscopic dual-view tomographic holography for 3D flow measurements,” Opt. Express 26(13), 16708–16725 (2018).
[Crossref]

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Annu. Rev. Fluid Mech. 42(1), 531–555 (2010).
[Crossref]

J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).
[Crossref]

E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).
[Crossref]

Kebbel, V.

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).
[Crossref]

Kemao, Q.

Kingma, D. P.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” axXiv: 1412.6980 (2014).

Kompenhans, J.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Koydemir, H. C.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Kübel, C.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Kumar, S. S.

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).
[Crossref]

Lam, E. Y.

Latychevskaia, T.

Le, Q.-V.

P. Ramachandran, B. Zoph, and Q.-V. Le, “Searching for activation functions,” axXiv: 1710.05941 (2016).

Lebrun, D.

Lee, S. J.

Li, C.

S. Shao, C. Li, and J. Hong, “A hybrid image processing method for measuring 3D bubble distribution using digital inline holography,” Chem. Eng. Sci. 207, 929–941 (2019).
[Crossref]

Li, G.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Lienkamp, S. S.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Germany, (Springer, 2016), pp. 424–432.

Lin, X.

Lindensmith, C. A.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Linke, H.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Linse, S.

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).
[Crossref]

Liu, T.

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Liu, Y.

Y. Fu and Y. Liu, “BubGan: Bubble generative adversarial networks for synthesizing realistic bubbly flow images,” Chem. Eng. Sci. 204, 35–47 (2019).
[Crossref]

Loomis, N.

Lu, J.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).
[Crossref]

Lyu, M.

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Malek, M.

Malkiel, E.

J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).
[Crossref]

E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).
[Crossref]

Mallery, K.

Mandracchia, B.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

Marashdeh, Q.

A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).
[Crossref]

Marié, J. L.

F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J. L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923 (2018).
[Crossref]

N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).
[Crossref]

Marks, D. L.

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).
[Crossref]

McFarland, M.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

McNeill, J.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Méès, L.

Memmolo, P.

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Mequitam, O. N.

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

Merola, F.

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

Midgley, P. A.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Momey, F.

Moon, I.

Mugnano, M.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

Nadeau, J. L.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Nayak, A. R.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Nishitsuji, T.

O’Brien, M.

Olivier, T.

Ozcan, A.

G. Barbastathis, A. Ozcan, and G. Situ, “On the use of deep learning for computational imaging,” Optica 6(8), 921–943 (2019).
[Crossref]

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).
[Crossref]

Pak, N.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Papandreou, G.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 801–818.

Paterson, D. M.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Paturzo, M.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

Pereira, F.

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).
[Crossref]

Perkins, R.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Pinston, F.

Player, M. A.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Prakash, P.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Prevedel, R.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Raffel, M.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Ralston, T. S.

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).
[Crossref]

Ramachandran, P.

P. Ramachandran, B. Zoph, and Q.-V. Le, “Searching for activation functions,” axXiv: 1710.05941 (2016).

Raskar, R.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Ren, S.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR 2016-Las Vegas, (IEEE, 2016), pp. 770–778.

Ren, Z.

Reu, P. L.

Rider, S.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Rines, J.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Riverson, Y.

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Robinson, J. T.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Roma, P. M. S.

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

Ronneberger, O.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Med. Image Comput. Comput. Assist. Interv. 2015-Germany, (Springer, 2015), pp. 234–241.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Germany, (Springer, 2016), pp. 424–432.

Roy, S.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Sahu, P.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Scarano, F.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Schrödel, T.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Schroff, F.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 801–818.

Serabyn, E.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Shao, S.

S. Shao, C. Li, and J. Hong, “A hybrid image processing method for measuring 3D bubble distribution using digital inline holography,” Chem. Eng. Sci. 207, 929–941 (2019).
[Crossref]

Shaw, R. A.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

Sheng, J.

J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Annu. Rev. Fluid Mech. 42(1), 531–555 (2010).
[Crossref]

J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).
[Crossref]

Shimizu, T. S.

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).
[Crossref]

Shimobaba, T.

Shindo, K.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Shiraki, A.

Showalter, G. M.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Sijbers, J.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Siman, L.

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mequitam, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).
[Crossref]

Situ, G.

Soulez, F.

Spuler, S. M.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

Stith, J. L.

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).
[Crossref]

Su, T.-W.

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).
[Crossref]

Sullivan, J.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Sun, H.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Sun, J.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR 2016-Las Vegas, (IEEE, 2016), pp. 770–778.

Sun, Y.

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).
[Crossref]

Szymanski, C.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Takahashi, T.

Talapatra, S.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Tamanitsu, M.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Tans, S. J.

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).
[Crossref]

Taute, K. M.

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).
[Crossref]

Teng, D.

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Thiébaut, E.

Thurow, B. S.

Tian, L.

To, T.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Toloui, M.

Tremblay, J.

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.

Twardowski, M.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Van Tenedeloo, G.

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coranado, and G. Van Tenedeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).
[Crossref]

Vaziri, A.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Veeraraghavan, A.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Vercosa, D. G.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Verrier, N.

N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).
[Crossref]

Wallace, J. K.

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).
[Crossref]

Wang, A.

A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).
[Crossref]

Wang, H.

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Wang, K.

Wang, W.

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).
[Crossref]

Wang, Z.

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).
[Crossref]

Watson, J.

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).
[Crossref]

Wei, Z.

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Wereley, S. T.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Wetzstein, G.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Willert, C. E.

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide. (Springer, 2018).

Wolf, P.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Wu, Y.

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Wum S, C.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Xu, Z.

Xue, L.

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).
[Crossref]

Yamamoto, Y.

Yanny, K.

Z. Gürücs, M. Tamanitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Riverson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).
[Crossref]

Ye, F.

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).
[Crossref]

Yoon, Y. G.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Yu, J.

J. Yu, C. Wum S, P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).
[Crossref]

Zeng, X.

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

Zha, H.

C. Ding, Z. Ding, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of 23rd ICML 2006-Pittsburg, (ACM, 2006), pp. 281–288.

Zhang, C.

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).
[Crossref]

Zhang, X.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR 2016-Las Vegas, (IEEE, 2016), pp. 770–778.

Zhang, Y.

T. Liu, K. D. Haan, Y. Riverson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).
[Crossref]

T. Liu, Z. Wei, Y. Riverson, K. Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).
[Crossref]

Y. Wu, Y. Riverson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).
[Crossref]

Y. Riverson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).
[Crossref]

Zhao, J.

Zhou, T.

P. Isola, J. Y. Zhu, T. Zhou, and A. A. Elfros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR 2017-Honolulu, (IEEE, 2017), pp. 1125–1134.

Zhu, J. Y.

P. Isola, J. Y. Zhu, T. Zhou, and A. A. Elfros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR 2017-Honolulu, (IEEE, 2017), pp. 1125–1134.

Zhu, Y.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 801–818.

Zimmer, M.

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,’,” Nat. Methods 11(7), 727–730 (2014).
[Crossref]

Zoph, B.

P. Ramachandran, B. Zoph, and Q.-V. Le, “Searching for activation functions,” axXiv: 1710.05941 (2016).

Zou, S.

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).
[Crossref]

Ann. Math. Stat. (1)

P. J. Huber, “Robust estimation of a location parameter,” Ann. Math. Stat. 35(1), 73–101 (1964).

Annu. Rev. Fluid Mech. (1)

J. Katz and J. Sheng, “Applications of holography in fluid mechanics and particle dynamics,” Annu. Rev. Fluid Mech. 42(1), 531–555 (2010).

Appl. Opt. (5)

Appl. Phys. Lett. (1)

P. M. S. Roma, L. Siman, F. T. Amaral, U. Agero, and O. N. Mesquita, “Total three-dimensional imaging of phase objects using defocusing microscopy: application to red blood cells,” Appl. Phys. Lett. 104(25), 251107 (2014).

Biomed. Opt. Express (1)

Can. J. Chem. Eng. (1)

A. Wang, Q. Marashdeh, and L. S. Fan, “ECVT imaging of 3D spiral bubble plume structures in gas-liquid bubble columns,” Can. J. Chem. Eng. 92(12), 2078–2087 (2014).

Chem. Eng. Sci. (2)

S. Shao, C. Li, and J. Hong, “A hybrid image processing method for measuring 3D bubble distribution using digital inline holography,” Chem. Eng. Sci. 207, 929–941 (2019).

Y. Fu and Y. Liu, “BubGAN: Bubble generative adversarial networks for synthesizing realistic bubbly flow images,” Chem. Eng. Sci. 204, 35–47 (2019).

Exp. Fluids (1)

F. Pereira, J. Lu, E. Castano-Graff, and M. Gharib, “Microscale 3D flow mapping with µDDPIV,” Exp. Fluids 42(4), 589–599 (2007).

J. Am. Chem. Soc. (1)

J. Yu, C. Wu, S. P. Sahu, L. P. Fernando, C. Szymanski, and J. McNeill, “Nanoscale 3D tracking with conjugated polymer nanoparticles,” J. Am. Chem. Soc. 131(51), 18410–18414 (2009).

J. Biophotonics (1)

T. Liu, Z. Wei, Y. Rivenson, K. de Haan, Y. Zhang, Y. Wu, and A. Ozcan, “Deep learning-based color holographic microscopy,” J. Biophotonics 12(11), e201900107 (2019).

J. Fluid Mech. (1)

J. Sheng, E. Malkiel, and J. Katz, “Buffer layer structures associated with extreme wall stress events in a smooth wall turbulent boundary layer,” J. Fluid Mech. 633, 17–60 (2009).

J. Opt. Soc. Am. A (1)

Lab Chip (1)

B. Mandracchia, V. Bianco, Z. Wang, M. Mugnano, A. Bramanti, M. Paturzo, and P. Ferraro, “Holographic microscope slide in a spatio-temporal imaging modality for reliable 3D cell counting,” Lab Chip 17(16), 2831–2838 (2017).

Light: Sci. Appl. (2)

Y. Rivenson, Y. Zhang, H. Günaydin, D. Teng, and A. Ozcan, “Phase recovery and holographic image reconstruction using deep learning in neural networks,” Light: Sci. Appl. 7(2), 17141 (2018).

Z. Göröcs, M. Tamamitsu, V. Bianco, P. Wolf, S. Roy, K. Shindo, K. Yanny, Y. Wu, H. C. Koydemir, Y. Rivenson, and A. Ozcan, “A deep learning-enabled portable imaging flow cytometer for cost-effective, high-throughput, and label-free analysis of natural water samples,” Light: Sci. Appl. 7(1), 66 (2018).

Meas. Sci. Technol. (4)

V. Kebbel, M. Adams, H.-J. Hartmann, and W. Jüptner, “Digital holography as a versatile optical diagnostic method for microgravity experiments,” Meas. Sci. Technol. 10(10), 893–899 (1999).

N. Verrier, N. Grosjean, E. Dib, L. Méès, C. Fournier, and J. L. Marié, “Improvement of the size estimation of 3D tracked droplets using digital in-line holography with joint estimation reconstruction,” Meas. Sci. Technol. 27(4), 045001 (2016).

E. Malkiel, O. Alquaddomi, and J. Katz, “Measurements of plankton distribution in the ocean using submersible holography,” Meas. Sci. Technol. 10(12), 1142–1152 (1999).

H. Sun, H. Dong, M. A. Player, J. Watson, D. M. Paterson, and R. Perkins, “In-line digital video holography for the study of erosion processes in sediments,” Meas. Sci. Technol. 13(10), L7–L12 (2002).

Nat. Commun. (1)

K. M. Taute, S. Gude, S. J. Tans, and T. S. Shimizu, “High-throughput 3D tracking of bacteria on a standard phase contrast microscope,” Nat. Commun. 6(1), 8776 (2015).

Nat. Methods (1)

R. Prevedel, Y. G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods 11(7), 727–730 (2014).

Nat. Phys. (1)

T. S. Ralston, D. L. Marks, P. S. Carney, and S. A. Boppart, “Interferometric synthetic aperture microscopy,” Nat. Phys. 3(2), 129–134 (2007).

Opt. Express (10)

E. M. Hall, D. R. Guildenbecher, and B. S. Thurow, “Uncertainty characterization of particle location from refocused plenoptic images,” Opt. Express 25(18), 21801–21814 (2017).

M. D. Hannel, A. Abdulali, M. O’Brien, and D. G. Grier, “Machine-learning techniques for fast and accurate feature localization in holograms of colloidal particles,” Opt. Express 26(12), 15221–15231 (2018).

H. Wang, M. Lyu, and G. Situ, “eHoloNet: a learning-based end-to-end approach for in-line digital holographic reconstruction,” Opt. Express 26(18), 22603–22614 (2018).

A. Berdeu, O. Flasseur, L. Méès, L. Denis, F. Momey, T. Olivier, N. Grosjean, and C. Fournier, “Reconstruction of in-line holograms: combining model-based and regularized inversion,” Opt. Express 27(10), 14951 (2019).

F. Jolivet, F. Momey, L. Denis, L. Méès, N. Faure, N. Grosjean, F. Pinston, J. L. Marié, and C. Fournier, “Regularized reconstruction of absorbing and phase objects from a single in-line hologram, application to fluid mechanics and micro-biology,” Opt. Express 26(7), 8923 (2018).

K. Mallery and J. Hong, “Regularized inverse holographic volume reconstruction for 3D particle tracking,” Opt. Express 27(13), 18069 (2019).

J. Gao and J. Katz, “Self-calibrated microscopic dual-view tomographic holography for 3D flow measurements,” Opt. Express 26(13), 16708–16725 (2018).

T. Latychevskaia and H. W. Fink, “Holographic time-resolved particle tracking by means of three-dimensional volumetric deconvolution,” Opt. Express 22(17), 20994 (2014).

M. Toloui and J. Hong, “High fidelity digital inline holographic method for 3D flow measurements,” Opt. Express 23(21), 27159 (2015).

M. Malek, D. Allano, S. Coëtmellec, and D. Lebrun, “Digital in-line holography: influence of the shadow density on particle field extraction,” Opt. Express 12(10), 2270–2279 (2004).

Opt. Lett. (1)

Optica (3)

Y. Wu, Y. Rivenson, Y. Zhang, Z. Wei, H. Günaydin, X. Lin, and A. Ozcan, “Extended depth-of-field in holographic imaging using deep-learning-based autofocusing and phase recovery,” Optica 5(6), 704–710 (2018).

PLoS One (2)

M. T. Ekvall, G. Bianco, S. Linse, H. Linke, J. Bäckman, and L. A. Hansson, “Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles,” PLoS One 8(11), e78498 (2013).

C. A. Lindensmith, S. Rider, M. Bedrossian, J. K. Wallace, E. Serabyn, G. M. Showalter, J. W. Deming, and J. L. Nadeau, “A submersible, off-axis holographic microscope for detection of microbial motility and morphology in aqueous and icy environments,” PLoS One 11(1), e0147700 (2016).

Proc. Natl. Acad. Sci. U. S. A. (1)

T.-W. Su, L. Xue, and A. Ozcan, “High-throughput lensfree 3D tracking of human sperms reveals rare statistics of helical trajectories,” Proc. Natl. Acad. Sci. U. S. A. 109(40), 16018–16022 (2012).

Proc. SPIE (1)

S. Talapatra, J. Sullivan, J. Katz, M. Twardowski, H. Czerski, P. Donaghay, J. Hong, J. Rines, M. McFarland, A. R. Nayak, and C. Zhang, “Application of in-situ digital holography in the study of particles, organisms and bubbles within their natural environment,” Proc. SPIE 8372, 837205 (2012).

Sci. Adv. (1)

J. K. Adams, V. Boominathan, B. W. Avants, D. G. Vercosa, F. Ye, R. G. Baraniuk, J. T. Robinson, and A. Veeraraghavan, “Single frame fluorescence microscopy with ultraminiature lensless flatscope,” Sci. Adv. 3(12), e1701548 (2017).

Sci. Rep. (3)

S. S. Kumar, Y. Sun, S. Zou, and J. Hong, “3D Holographic Observatory for Long-term Monitoring of Complex Behaviors in Drosophila,” Sci. Rep. 6(1), 33001 (2016).

T. Liu, K. de Haan, Y. Rivenson, Z. Wei, X. Zeng, Y. Zhang, and A. Ozcan, “Deep learning-based super-resolution in coherent image systems,” Sci. Rep. 9(1), 3926 (2019).

M. Lyu, W. Wang, H. Wang, H. Wang, G. Li, N. Chen, and G. Situ, “Deep-learning-based ghost imaging,” Sci. Rep. 7(1), 17865 (2017).

Science (1)

M. J. Beals, J. P. Fugal, R. A. Shaw, J. Lu, S. M. Spuler, and J. L. Stith, “Holographic measurements of inhomogeneous cloud mixing at the centimeter scale,” Science 350(6256), 87–90 (2015).

Ultramicroscopy (1)

K. J. Batenburg, S. Bals, J. Sijbers, C. Kübel, P. A. Midgley, J. C. Hernandez, U. Kaiser, E. R. Encina, E. A. Coronado, and G. Van Tendeloo, “3D imaging of nanomaterials by discrete tomography,” Ultramicroscopy 109(6), 730–740 (2009).

Other (13)

M. Raffel, C. E. Willert, F. Scarano, C. J. Kähler, S. T. Wereley, and J. Kompenhans, Particle image velocimetry: a practical guide (Springer, 2018).

C. Ding, Z. Ding, X. He, and H. Zha, “R1-PCA: rotational invariant L1-norm principal component analysis for robust subspace factorization,” in Proc. of 23rd ICML 2006-Pittsburgh, (ACM, 2006), pp. 281–288.

O. Ronneberger, P. Fischer, and T. Brox, “U-net: convolutional networks for biomedical image segmentation,” in Med. Image Comput. Comput. Assist. Interv. 2015-Germany, (Springer, 2015), pp. 234–241.

Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3D U-net: learning dense volumetric segmentation from sparse annotation,” in Med. Image Comput. Comput. Assist. Interv. 2016-Athens, (Springer, 2016), pp. 424–432.

P. Isola, J. Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in Proc. of the IEEE CVPR 2017-Honolulu, (IEEE, 2017), pp. 1125–1134.

L. C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, “Encoder-decoder with atrous separable convolution for semantic image segmentation,” in Proc. of the ECCV 2018-Munich, (Springer, 2018), pp. 801–818.

K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proc. of the IEEE CVPR 2016-Las Vegas, (IEEE, 2016), pp. 770–778.

P. Ramachandran, B. Zoph, and Q. V. Le, “Searching for activation functions,” arXiv:1710.05941 (2017).

V. Bianco, P. Memmolo, F. Merola, P. Carcagni, C. Distante, and P. Ferraro, “High-accuracy identification of micro-plastics by holographic microscopy enabled support vector machine,” in Quantitative Phase Imaging V, (SPIE, 2019), pp. 108870F-1–108870F-7.

F. Chollet, keras, GitHub repository (2015), https://github.com/fchollet/keras.

D. P. Kingma and J. Ba, “Adam: a method for stochastic optimization,” arXiv:1412.6980 (2014).

I. Goodfellow, Y. Bengio, and A. Courville, Deep learning (MIT Press, 2016).

J. Tremblay, P. Prakash, D. Acuna, M. Brophy, V. Jampani, C. Anil, T. To, E. Cameracci, S. Boochoon, and S. Birchfield, “Training deep networks with synthetic data: bridging the reality gap by domain randomization,” in Proc. of the IEEE CVPR 2018-Salt Lake City, (IEEE, 2018), pp. 969–977.



Figures (7)

Fig. 1. The specially designed U-net architecture for holographic reconstruction of a 3D particle field.
Fig. 2. A sample training input and training target consisting of 300 particles (i.e., a concentration of 0.018 ppp) with a hologram size of 128 × 128 pixels. The hologram is formed with a pixel resolution of 10 µm and a laser illumination wavelength of 632 nm.
Fig. 3. Prediction results from the trained model using (a) our U-net architecture, (b) the method of Shimobaba et al. [44], and (c) Mallery and Hong [31] for the case of 0.018 ppp (300-particle holograms). Black dots are correctly extracted particles, red dots are false positives (i.e., unpaired particles from the prediction), and green dots are false negatives (unpaired particles from the ground truth).
Fig. 4. Demonstration of the impact of the proposed model improvements on the training process over the first 200 epochs: (a) the proposed approach, (b) the U-net architecture without residual connections, and (c) mean squared error as the loss function. The loss is normalized by its initial value, and each case is randomly initialized 10 times to show the resulting instability of the training for cases (b) and (c). The green curves mark the maximum and minimum normalized loss at each epoch, the blue curves correspond to individual initializations, and the shaded region spans the range of loss.
Fig. 5. Comparison of prediction results for (a) a 100-particle hologram and (b) a 1000-particle hologram. Black dots are correctly extracted particles, red dots are false positives (i.e., unpaired particles from the prediction), and green dots are false negatives (unpaired particles from the ground truth).
Fig. 6. (a) Extraction rate of the proposed method under different particle concentrations, compared with Shimobaba et al. [44] and RIHVR [31]. (b) Median position error of the extracted particles for the proposed method under different particle concentrations. The dashed lines mark the particle concentration of the base model (1.8×10⁻² ppp).
Fig. 7. (a) A 128×128-pixel enhanced hologram from the experimental data and (b) the corresponding ground-truth volumetric image obtained by stacking fluorescent bright-field scans of the same sample. (c) Prediction results from the machine learning model. Black dots are correctly extracted particles, red dots are false positives (i.e., unpaired particles from the prediction), and green dots are false negatives (unpaired particles from the ground truth).

Equations (7)


$$f(x) = \frac{x}{1 + e^{-x}} \tag{1}$$
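
Eq. (1) is the Swish activation. A minimal sketch, assuming a TensorFlow/Keras backend (consistent with the Keras reference cited above); the commented Conv2D line is an illustrative placement, not the network's actual configuration:

```python
import tensorflow as tf

def swish(x):
    # Swish activation, Eq. (1): f(x) = x * sigmoid(x) = x / (1 + exp(-x))
    return x * tf.sigmoid(x)

# Illustrative placement inside a Keras model (layer parameters are placeholders):
# conv = tf.keras.layers.Conv2D(64, 3, padding="same", activation=swish)
```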
$$u_p(x, y, z) = \mathcal{F}^{-1}\left[ \mathcal{F}\{ I(x, y) \} \times \mathcal{F}\left( \frac{\exp(jkz)}{j\lambda z} \exp\left\{ \frac{jk}{2z}\left( x^2 + y^2 \right) \right\} \right) \right] \tag{2}$$
$$z_{\mathrm{approx}} = \underset{z}{\arg\max} \left\{ u_p(x, y, z) \times \mathrm{conj}\left[ u_p(x, y, z) \right] \right\} \tag{3}$$
$$P(x, y) = \max_{z} \left\{ \mathrm{angle}\left[ u_p(x, y, z) \right] \right\} \tag{4}$$
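
Eqs. (2)–(4) amount to propagating the hologram to a stack of depths with a Fresnel kernel, taking the depth of maximum intensity, and forming a maximum phase projection. The NumPy sketch below illustrates this preprocessing under stated assumptions: square pixels, a per-pixel reading of Eq. (3), and the recording parameters of Fig. 2 (10 µm pixel pitch, 632 nm wavelength) as defaults; the function names are ours, not taken from the paper's code.

```python
import numpy as np

def fresnel_propagate(hologram, z, wavelength=632e-9, dx=10e-6):
    """Complex field u_p(x, y, z) of Eq. (2): convolve the hologram I(x, y)
    with the Fresnel kernel via FFTs (convolution theorem)."""
    ny, nx = hologram.shape
    k = 2.0 * np.pi / wavelength
    x = (np.arange(nx) - nx // 2) * dx
    y = (np.arange(ny) - ny // 2) * dx
    xx, yy = np.meshgrid(x, y)
    kernel = (np.exp(1j * k * z) / (1j * wavelength * z)
              * np.exp(1j * k / (2.0 * z) * (xx ** 2 + yy ** 2)))
    return np.fft.ifft2(np.fft.fft2(hologram)
                        * np.fft.fft2(np.fft.ifftshift(kernel)))

def depth_and_phase(hologram, z_values):
    """Per-pixel depth of maximum intensity (Eq. (3)) and the maximum
    phase-angle projection over the scanned depths (Eq. (4))."""
    stack = np.stack([fresnel_propagate(hologram, z) for z in z_values])
    intensity = (stack * np.conj(stack)).real      # u_p x conj(u_p)
    z_map = np.asarray(z_values)[np.argmax(intensity, axis=0)]
    phase_map = np.angle(stack).max(axis=0)
    return z_map, phase_map
```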
$$L = \begin{cases} \dfrac{1}{2} \left\| Y - X \right\|_2^2, & \left\| Y - X \right\|_1 \le \delta \\ \delta \left\| Y - X \right\|_1 - \dfrac{1}{2} \delta^2, & \text{otherwise} \end{cases} \tag{5}$$
$$L = (1 - \alpha) \left\| Y - X \right\|_2^2 + \alpha \left\| Y \right\|_{TV}^2 \tag{6}$$
$$\left\| Y \right\|_{TV} = \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \sqrt{ \left( Y_{i,j} - Y_{i-1,j} \right)^2 + \left( Y_{i,j} - Y_{i,j-1} \right)^2 } \tag{7}$$
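
Eq. (5) is a Huber loss on the residual Y − X, and Eqs. (6)–(7) combine an L2 data term with a total-variation penalty on Y. A minimal TensorFlow sketch follows, with stated assumptions: the common elementwise form of the Huber condition, Y taken as the network output, and an illustrative weight α = 0.1 that is not a value from the paper (tf.keras.losses.Huber provides a built-in equivalent of the elementwise form of Eq. (5)).

```python
import tensorflow as tf

def huber_loss(y_true, y_pred, delta=1.0):
    # Eq. (5), elementwise form: quadratic for small residuals,
    # linear beyond the threshold delta.
    err = tf.abs(y_true - y_pred)
    quadratic = 0.5 * tf.square(err)
    linear = delta * err - 0.5 * delta ** 2
    return tf.reduce_sum(tf.where(err <= delta, quadratic, linear))

def tv_norm(y):
    # Eq. (7): isotropic total variation of a 2D prediction Y
    # (batch/channel axes omitted for clarity; epsilon stabilizes the sqrt).
    d_i = y[1:, 1:] - y[:-1, 1:]   # Y[i, j] - Y[i-1, j]
    d_j = y[1:, 1:] - y[1:, :-1]   # Y[i, j] - Y[i, j-1]
    return tf.reduce_sum(tf.sqrt(tf.square(d_i) + tf.square(d_j) + 1e-12))

def tv_regularized_loss(y_true, y_pred, alpha=0.1):
    # Eq. (6): weighted sum of an L2 data term and the squared TV penalty.
    l2 = tf.reduce_sum(tf.square(y_true - y_pred))
    return (1.0 - alpha) * l2 + alpha * tf.square(tv_norm(y_pred))
```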
