## Abstract

Light field microscopy, which enables snapshot large-scale three-dimensional (3D) fluorescence imaging, has attracted great interest in various biological applications, especially high-speed 3D calcium imaging. Traditional 3D deconvolution algorithms based on the beam-propagation model facilitate high-resolution 3D reconstruction. However, such a high-precision model is not robust to experimental data affected by system errors such as optical aberrations and background fluorescence, which introduce strong periodic artifacts and reduce image contrast. To solve this problem, we propose a phase-space deconvolution method for light field microscopy that fully exploits the smoothness prior in the phase-space domain. By modeling the imaging process in the phase-space domain, we convert the spatially-nonuniform point spread function (PSF) into a spatially-uniform one of much smaller size. Experiments on various biological samples and resolution charts verify the contrast enhancement, with far fewer artifacts and ten-times lower computational cost, achieved by our method without any hardware modifications.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Light field microscopy (LFM) has been widely applied in high-speed large-scale 3D fluorescence imaging [1,2] since it has the unique advantage of capturing the four-dimensional (4D) phase space [3–5] within a snapshot. Various biomedical applications, such as whole-animal calcium imaging [1] and 3D behavioral phenotyping [6], benefit from the unprecedented volumetric imaging speed. In the past decade, different LFM schemes [3,5,7–9] have been proposed to capture light field data with tradeoffs among spatial resolution, field of view (FOV) and imaging speed. Meanwhile, researchers have designed many algorithms [10–12] based on geometrical optics to reduce the artifacts in light-field-synthesized depth maps and digital refocusing in photography. However, very few algorithms based on wave optics, apart from the straightforward 3D Richardson-Lucy (RL) deconvolution [4], have been exploited to fully digest the 3D information embedded in the phase-space measurements of LFM. Although the resolution of LFM can be greatly enhanced by the 3D RL deconvolution algorithm [4], strong artifacts close to the native object plane are introduced in the reconstructed volume due to low spatial sampling, which can be even worse under unavoidable system aberrations [13] and low signal-to-noise ratio (SNR) conditions [14]. Especially for high numerical-aperture (NA) objective lenses with a small depth of field, many out-of-focus fluorescence signals are collected and sample-induced aberrations easily change the distribution of the point spread function (PSF) [15]. In this case, the periodic and sharp PSF used in 3D deconvolution for LFM, simulated by beam propagation after the phase modulation of a microlens array, easily overfits to complicated experimental data [16] and increases the artifacts. Such high-frequency, sample-dependent artifacts, which are hard to remove, greatly reduce the image contrast in 3D visualization.
With hardware modifications to the LFM, the artifacts can be removed and the resolution enhanced, at the sacrifice of depth of field [17] or FOV [5]. By exploiting the intrinsic properties of phase space, artifact-free high-speed 3D fluorescence imaging can be achieved [18,19], but the aperture coding in these setups reduces the light efficiency.

In addition, the 3D PSF in the traditional RL deconvolution of LFM is spatially non-uniform and usually represented as a large-scale matrix with many zero elements. It greatly raises the computational cost of the deconvolution process and restricts the reconstructed depth range due to the huge memory requirement [4]. In fact, the effective number of pixels containing most of the information is orders of magnitude smaller than the total pixel number of the PSF, which provides great potential to increase the reconstruction speed [20]. Although algorithms considering the spatiotemporal sparsity of time-lapse 3D video have shown great success in accelerating the process [2], they are still limited to certain applications such as calcium imaging and cannot be applied to the observation of 3D morphological changes.

To solve these problems, we propose a phase-space deconvolution framework, which eliminates the artifacts in the 3D reconstruction of LFM data and substantially increases the image contrast and the convergence speed. The microlens array in LFM samples the local coherence in a parallel way, capturing the 4D phase-space information within a snapshot. Since the semi-transparent assumption holds for most 3D fluorescence samples, the phase-space distribution in normal imaging conditions without occlusions and nonlinear effects should be smooth. However, traditional 3D deconvolution in LFM simply up-samples the solution space with a high-resolution volume, leading to a spatially-nonuniform 3D PSF, and never considers the smoothness of the phase-space distribution in linear beam propagation [3]. Therefore, the pixelated effect and reconstruction artifacts appear with the discontinuity in the phase-space domain, which is originally induced by the low spatial sampling. Here, rather than using the 3D PSF directly simulated by beam propagation with the phase modulation of a microlens array, we convert the forward imaging process into an equivalent model in the phase-space dimension, which produces spatially-uniform 3D PSFs for the different spatial frequency components with much smaller size. An up-sampling interpolation is then conducted in the phase-space domain to use the smoothness in phase space as a reconstruction prior [21]. We verify the algorithm's performance on various biological samples and a USAF-1951 resolution chart, showing its superiority over conventional algorithms in terms of artifact reduction, contrast enhancement, and convergence speed.

## 2. Method

To verify our algorithm, we built a traditional LFM setup following the schematic diagram shown in Fig. 1(a). We insert a microlens array at the image plane of a commercial microscope using an objective lens with matched NA (63 × / 1.4 NA oil-immersion objective lens, Zeiss Apochromat). A relay system relays the back focal plane of the microlens array to the sensor plane for light field data collection. The magnification of the relay system is chosen as 1:1 so that each microlens (2.1 mm focal length with 100 μm pitch size, corresponding to about 1.43 μm at the sample plane) covers a certain number of sensor pixels, setting the angular resolution (~13 × 13 here). To model the PSF, we need the theoretical influence of an arbitrary 3D point on the different sensor measurements, exploiting the incoherent property of fluorescence imaging [4]. For simplicity, we do not take the relay system into account during modeling. We set the 3D point at the axial plane $z$ with a lateral coordinate $\text{p}=\left({p}_{1},{p}_{2}\right)$. An arbitrary pixel on the sensor can be defined with a relative lateral coordinate $\text{u}=\left({u}_{1},{u}_{2}\right)$ to the center of the microlens with a center position $\text{x}=\left({x}_{1},{x}_{2}\right)$, as illustrated in Fig. 1(a). The sensor pixels with the same coordinates $\text{u}$ (marked with various colors in the inset of Fig. 1(a)) represent different spatial frequency components.

The PSF used for traditional light field deconvolution can then be written as ${h}_{l}(\text{x}+\text{u},\text{p},z)$ [4]. With increasing defocus distance, the wavefront of a single emitted source at each small area becomes closer to a plane wave, creating many zeros after each microlens (Fig. 1(b)). In addition, such a PSF is spatially nonuniform for sample points at different positions relative to the center of the microlens. To remove the redundancy and incorporate the smoothness constraint in phase space, we find that we can directly realign the pixels captured by LFM to obtain the 4D phase-space measurements without the image calibration required in previous methods [5], since every microlens works as a local Fourier transform in the schematic of LFM. For the model in phase space, the PSF can be represented as ${h}_{p}(\text{x},\text{p},z,\text{u})$ in a higher dimension, with the realignment process following the coordinates $\text{x}=\left({x}_{1},{x}_{2}\right)$ and $\text{u}=\left({u}_{1},{u}_{2}\right)$. The generation of the phase-space PSFs from the traditional PSF is also provided in Code 1 [22]. Figure 1(c) shows two PSFs of different spatial frequency components at z = 5 μm, illustrating a much smaller PSF size compared with the PSF in 3D (Fig. 1(b)). In addition, the PSF after realignment in phase space is smoother and more continuous than the 3D PSF, which has periodic patterns.
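The realignment described above amounts to collecting, for each relative offset $\text{u}$, the pixel with that offset under every microlens. A minimal sketch (assuming an axis-aligned array in which each microlens covers exactly `n_u` × `n_u` sensor pixels; the function name is hypothetical, not from Code 1):

```python
import numpy as np

def realign_to_phase_space(raw, n_u):
    """Realign a raw LFM image into 4D phase-space views.

    raw : 2D array of shape (n_x * n_u, n_y * n_u), one n_u x n_u
          pixel block behind every microlens.
    n_u : number of sensor pixels per microlens along one axis
          (~13 in the setup described in the text).

    Returns an array of shape (n_u, n_u, n_x, n_y); the view at a
    fixed (u1, u2) gathers the pixel with the same relative offset u
    under every microlens, i.e. one spatial frequency component.
    """
    n_x = raw.shape[0] // n_u
    n_y = raw.shape[1] // n_u
    views = np.empty((n_u, n_u, n_x, n_y), dtype=raw.dtype)
    for u1 in range(n_u):
        for u2 in range(n_u):
            # strided slicing picks the (u1, u2) pixel of every lenslet
            views[u1, u2] = raw[u1::n_u, u2::n_u]
    return views
```

Because the operation is a pure pixel permutation, it needs no interpolation or calibration, matching the claim above that the phase-space measurements are obtained by direct realignment.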

To analyze the PSF in phase space in detail, we describe the representation of ${h}_{p}(\text{x},\text{p},z,\text{u})$ in the coordinates $\text{x}$ and $\text{u}$ defined previously. The influence of one point emitter, located at lateral coordinate $\text{p}=\left({p}_{1},{p}_{2}\right)$ and fixed axial coordinate $z$, on one sensor pixel behind a specific microlens can then be represented as below:

With Parseval's theorem and a simple transformation, we can transform Eq. (1) into

The measurements $LF\left(\text{x}\right)$ captured in the light field data can then be represented as

For algorithm implementation, we further discretize the aforementioned imaging model; each pixel sample can then be represented as below:

Therefore, the imaging model can be discretized as

where ${X}_{\text{i}}^{(k,iter)}$ denotes the estimate after updating the $k$-th spatial frequency component in iteration $iter$; it is set to all ones for the first iteration, with ${X}_{\text{i}}^{(1,iter+1)}={X}_{\text{i}}^{({N}_{\text{k}}+1,iter)}$. The weight ${w}_{\text{k}}$ corresponding to the spatial frequency ${\text{u}}_{\text{k}}$ can be calculated based on the energy distribution of the PSFs as below:

After going through all the spatial frequency components, our algorithm enters the next large iteration until ${X}_{\text{i}}$ converges. It usually takes only 2 iterations for convergence. The whole algorithm pipeline is illustrated in Fig. 2. Benefiting from the spatially-uniform PSF for each spatial frequency component, we can use a discrete convolution operator to significantly reduce the computational cost. The detailed implementation of the phase-space deconvolution algorithm with a demo image can be found in Code 1 [22].
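The authors' exact update rule is in Code 1 [22]; as an illustrative sketch only, the sequential RL-style loop over spatial frequency components might look like the following. The energy-based weighting and the damped multiplicative update are assumptions for illustration, not the published formulas:

```python
import numpy as np
from scipy.signal import fftconvolve

def psf_weights(psfs):
    # assumption: w_k proportional to the total energy of the k-th PSF,
    # normalized so the weights sum to one
    e = np.array([np.sum(h) for h in psfs], dtype=float)
    return e / e.sum()

def phase_space_rl(views, psfs, weights, n_iter=2, eps=1e-8):
    """Sketch of a sequential RL update over spatial frequency components.

    views   : list of N_k 2D phase-space measurements L_k(x).
    psfs    : list of N_k 3D PSFs h_k(x, z); each is spatially uniform,
              so the forward model is a per-depth 2D convolution.
    weights : list of N_k scalars w_k.
    Returns the estimated volume X(x, z).
    """
    nz = psfs[0].shape[-1]
    X = np.ones(views[0].shape + (nz,))           # all-ones initialization
    for _ in range(n_iter):                        # "large" iterations
        for L, h, w in zip(views, psfs, weights):  # inner loop over u_k
            # forward projection: sum over depth of X(:,:,z) * h(:,:,z)
            fwd = sum(fftconvolve(X[..., z], h[..., z], mode="same")
                      for z in range(nz))
            ratio = L / np.maximum(fwd, eps)
            # back projection with the flipped PSF, one depth at a time
            for z in range(nz):
                corr = fftconvolve(ratio, h[::-1, ::-1, z], mode="same")
                # damped multiplicative update, weighted by w_k (assumed form)
                X[..., z] *= 1.0 + w * (corr - 1.0)
    return X
```

Because every `fftconvolve` uses one small, spatially-uniform kernel instead of a spatially-varying matrix, this structure reflects where the claimed computational savings come from.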

## 3. Results

For a quantitative analysis of both the reconstruction performance and the convergence speed, we conducted a numerical simulation of two 8-μm-diameter fluorescence beads separated by 20 μm laterally, with the same parameters as the experimental setup described above (63 × / 1.4 NA oil objective lens and f/21, 100 μm pitch size microlens). We ran both traditional 3D deconvolution and our phase-space deconvolution in MATLAB 2017 on a computer with an Intel Core i9 processor (2.8 GHz) and 64 GB of memory. The reconstructed SNR is calculated as below:
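The equation itself did not survive extraction; a plausible reconstruction, assuming the standard log-ratio definition (consistent with the dB values quoted in this section), is:

$$\mathrm{SNR}=10\log_{10}\frac{{\left\|X\right\|}^{2}}{{\left\|X-Y\right\|}^{2}}$$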

where $X$ represents the ground truth and $Y$ represents the reconstructed 3D volume. The reconstructed axial slices of both methods with the highest SNR are shown in Fig. 3(a), together with the ground truth for detailed comparison. Our phase-space deconvolution not only retrieves a sharper image with higher contrast across the whole depth range, but also produces far fewer artifacts close to the native object plane. The performance improvement is even more remarkable for samples with background fluorescence. Some halo effects can be observed around the beads, which may be introduced by the insufficient spatial sampling of LFM. Our deconvolution process further emphasizes this effect, which may reduce the quantification capability, but the intensity of the halo effects is much smaller than that of the useful information. In addition to the higher peak SNR and fewer artifacts, far fewer iterations are required for convergence by the phase-space deconvolution (Fig. 3(b)). The complicated and sharp PSF used in traditional 3D deconvolution easily overfits in the ill-posed high-resolution 3D reconstruction problem, as illustrated by the quickly decreasing SNR with increasing iteration number. On the contrary, the phase-space deconvolution is more robust with its spatially-uniform and smooth PSF. Due to the high convergence speed of phase-space deconvolution, we calculate the computation time over multiple inner iterations within the large iteration covering all the spatial frequency components for comparison, as shown in Fig. 3(c). Taking 6 dB SNR as the reference, the phase-space deconvolution (~80 seconds to reach 6 dB) is more than 11 times faster than the traditional 3D RL deconvolution (~940 seconds to reach 6 dB) for LFM reconstruction. In addition, we add TV regularization [26] to both algorithms for comparison.
Despite slight improvements in the SNR of both algorithms, with better robustness to overfitting, the advanced optimization increases the computational cost (Figs. 3(b) and 3(c)).

To experimentally verify the enhanced performance of our phase-space deconvolution, we imaged different biological samples with the LFM described above (Fig. 1(a)) and compared the results to wide-field microscopy (WFM) with axial scanning using the 63 × / 1.4 NA oil objective lens. The same excitation power and exposure time were used for each frame. First, we imaged static GFP-labeled B16 cells, which were seeded on cover slides to 35%-50% confluency and cultured in RPMI-1640 medium overnight. The orthogonal maximum intensity projections (MIP) of the volumes reconstructed by LFM with the 3D RL deconvolution using 2 iterations and 20 iterations are shown in Figs. 4(a) and 4(b), respectively. With a small iteration number, the cell looks blurry without detailed structures. Moreover, in the axial planes close to the native object plane (shown on the right side), the reconstructed results have quite low resolution and appear as large pixel blocks. When we increase the iteration number, as shown in Fig. 4(b), the images become sharper but strong artifacts can be observed close to the native object plane. The distribution of the artifacts is highly related to the sample itself in a complicated way, so they cannot be regarded as a fixed noise pattern and are thus hard to remove. On the contrary, almost no artifacts are left in Fig. 4(c) with the phase-space deconvolution (2 iterations). In addition, our result also reveals many more subcellular structures than the 3D RL deconvolution with only 2 iterations. Figure 4(d) shows the results reconstructed by WFM using 3D deconvolution with 300 iterations, illustrating subcellular structures similar to those reconstructed by our method.

Moreover, we imaged histone-labeled *C. elegans* as a demonstration of dynamic samples (also with the 63 × / 1.4 NA oil objective lens), as shown in Fig. 5 and Visualization 1. Figure 5 displays the results reconstructed by the 3D RL deconvolution with 2 iterations, the 3D RL deconvolution with 20 iterations, and the phase-space deconvolution with 2 iterations, respectively. The volume reconstructed by the 3D RL deconvolution is quite blurry without enough iterations. However, if 10 times more iterations of the 3D RL deconvolution are used for higher resolution in most axial planes, heavy artifacts are introduced simultaneously (marked with the yellow arrows in Fig. 5(b)) and the image contrast is greatly reduced. Our phase-space deconvolution exhibits clearer subcellular structures than the other methods, with no artifacts and a small iteration number (2 iterations). Due to the large size of the reconstructed volume (1079 × 1079 × 81 voxels, corresponding to ~100 megavoxels), the 3D RL deconvolution takes around 188.3 minutes to process a single frame with an acceptable resolution regardless of artifacts, while only 14.6 minutes are required for our phase-space deconvolution with much better performance, as shown in Fig. 5(c). Such a computational acceleration makes 3D reconstruction of a video with hundreds of frames feasible within one day, which is necessary for the practical use of LFM, especially for this kind of high-speed, large-scale, volumetric time-lapse imaging.

To further demonstrate the contrast enhancement of our proposed phase-space deconvolution algorithm, we imaged a USAF-1951 resolution chart placed at different axial positions away from the native object plane (z = 0 μm) with a 10 × / 0.3 NA dry objective lens. Figure 6(a) shows the reconstructed result (z = −150 μm) of the in-focus plane by the 3D RL deconvolution algorithm with 2 iterations. More iterations can further enhance the contrast, as shown in Fig. 6(b), while the results obtained by the phase-space deconvolution algorithm show even better contrast in Fig. 6(c). In addition, the line pairs (group 5 element 6, 57 cycles/mm), which can be distinguished by phase-space deconvolution, have no contrast with 3D RL deconvolution at the same iteration number. We further placed the resolution chart in the native object plane (z = 0 μm), as shown in Figs. 6(d)-6(f). Phase-space deconvolution shows higher contrast in the native object plane (z = 0 μm) without the pixelated effects that appear in 3D RL deconvolution. Additionally, we plot the modulation transfer function (MTF) of the reconstructed line pairs (group 5 element 6) with the resolution chart placed at different axial positions, which illustrates that both algorithms can achieve similar resolution for the defocused planes with enough iterations. However, in the results reconstructed with 3D RL deconvolution, there is a drop of the MTF close to the native object plane, while phase-space deconvolution achieves a smooth MTF curve. Our method cannot get around the low sampling density in the native focal plane, which is imposed by the LFM, and cannot improve the intrinsic resolution of LFM. But with the smoothness constraint applied in the phase-space domain, we can reduce the reconstruction artifacts and improve the image contrast, especially for the axial planes close to the native object plane.
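The per-depth MTF values plotted for the line-pair group are presumably the Michelson contrast of an intensity profile taken perpendicular to the bars; a minimal sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def line_pair_contrast(profile):
    """Michelson contrast of an intensity profile across a line-pair group,
    the quantity typically reported as the measured MTF value for a
    resolution-chart element such as group 5 element 6.

    profile : 1D array sampled perpendicular to the bars.
    """
    i_max, i_min = float(np.max(profile)), float(np.min(profile))
    return (i_max - i_min) / (i_max + i_min)
```

Evaluating this on profiles extracted from the reconstructions at each axial position would reproduce an MTF-versus-depth curve like the one discussed above.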

In addition, we find that strong background fluorescence aggravates the artifacts in traditional 3D RL deconvolution and degrades the imaging performance, which is quite common for *in vivo* imaging [2], whereas our phase-space deconvolution achieves better performance by reducing the artifacts. To verify this capability under strong background fluorescence, we imaged a GFP-labeled B16 tumor spheroid with multiple cells distributed in 3D, most of which contribute to the background. The GFP-B16 cells were maintained in RPMI-1640 medium supplemented with 10% FBS, 1% Pen/Strep antibiotics, and 1% NEAA (all from GIBCO). For the preparation of the GFP-B16 tumor spheroids, 4 × 10³ cells per well were seeded in round-bottomed 96-well plates (Corning) and cultured in RPMI-1640 medium supplemented with 10% FBS, 2% B-27 supplement (GIBCO), 2% methylcellulose (Sigma Aldrich), 1% Pen/Strep antibiotics, and 1% NEAA. After 2 days, the formed spheroids were picked and transferred, one spheroid per well, and cultured for another 2 days. Upon imaging, the GFP-B16 tumor spheroids were transferred to a Lab-Tek II cover-glass-bottomed 8-chamber slide and imaged in HBSS supplemented with 2% FBS (all from Invitrogen) on our LFM (also with the 63 × / 1.4 NA oil objective lens). As shown in Fig. 7, traditional 3D deconvolution with 2 iterations shows blurry images of the different axial planes with almost no difference between them. When we increase the iteration number, the images become sharper but strong artifacts can be clearly observed in the planes close to the native object plane. On the contrary, our phase-space deconvolution achieves sharp images for all the axial planes without obvious artifacts. Such a comparison shows the robustness of our algorithm for samples with strong background fluorescence.

## 4. Conclusions

To summarize, we propose a phase-space deconvolution algorithm for 3D reconstruction in LFM. Compared with the traditional 3D RL deconvolution, our method achieves much better image contrast, significantly reduces the reconstruction artifacts, and has ten-times lower computational cost. The sudden drop of resolution and the rapid increase of artifacts close to the native object plane greatly limit the effective depth range of traditional LFM methods. Such a discontinuity also prevents clear visualization of the 3D volume and poses great challenges to quantitative cell analysis [27] and most image-processing algorithms, such as segmentation [28] and tracking [29]. The proposed phase-space deconvolution successfully retrieves the corrupted information and facilitates a smooth 3D reconstruction. We believe the improvements in both performance and speed can further promote the use of LFM in various biological studies. In addition, since our method requires no hardware modifications, unlike other performance-enhancing LFM methods [5,30,31], the tremendous amount of data [1] captured by LFM in previous studies can be reinterpreted with our algorithm, which may yield more insights through enhanced contrast. The new modeling framework in the 4D phase-space domain may also inspire better optical designs for LFM and other computational optics.

## Funding

National Natural Science Foundation of China (NSFC) (No. 61327902, 61671265); Beijing Municipal Science & Technology Commission (BMSTC) (No. Z181100003118014).

## Acknowledgment

We thank Guangshuo Ou and Hao Zhu for providing the histone-labeled *C. elegans.*

## References

**1. **R. Prevedel, Y.-G. Yoon, M. Hoffmann, N. Pak, G. Wetzstein, S. Kato, T. Schrödel, R. Raskar, M. Zimmer, E. S. Boyden, and A. Vaziri, “Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy,” Nat. Methods **11**(7), 727–730 (2014). [CrossRef] [PubMed]

**2. **T. Nöbauer, O. Skocek, A. J. Pernía-Andrade, L. Weilguny, F. M. Traub, M. I. Molodtsov, and A. Vaziri, “Video rate volumetric Ca^{2+} imaging across cortex using seeded iterative demixing (SID) microscopy,” Nat. Methods **14**(8), 811–818 (2017). [CrossRef] [PubMed]

**3. **L. Waller, G. Situ, and J. W. Fleischer, “Phase-space measurement and coherence synthesis of optical beams,” Nat. Photonics **6**(7), 474–479 (2012). [CrossRef]

**4. **M. Broxton, L. Grosenick, S. Yang, N. Cohen, A. Andalman, K. Deisseroth, and M. Levoy, “Wave optics theory and 3-D deconvolution for the light field microscope,” Opt. Express **21**(21), 25418–25439 (2013). [CrossRef] [PubMed]

**5. **L. Cong, Z. Wang, Y. Chai, W. Hang, C. Shang, W. Yang, L. Bai, J. Du, K. Wang, and Q. Wen, “Rapid whole brain imaging of neural activity in freely behaving larval zebrafish (*Danio rerio*),” eLife **6**, e28158 (2017). [CrossRef] [PubMed]

**6. **M. Shaw, H. Zhan, M. Elmi, V. Pawar, C. Essmann, and M. A. Srinivasan, “Three-dimensional behavioural phenotyping of freely moving C. elegans using quantitative light field microscopy,” PLoS One **13**(7), e0200108 (2018). [CrossRef] [PubMed]

**7. **M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz, “Light field microscopy,” ACM Trans. Graph. **25**(3), 924–934 (2006). [CrossRef]

**8. **J. Wu, B. Xiong, X. Lin, J. He, J. Suo, and Q. Dai, “Snapshot hyperspectral volumetric microscopy,” Sci. Rep. **6**(1), 24624 (2016). [CrossRef] [PubMed]

**9. **X. Lin, J. Wu, G. Zheng, and Q. Dai, “Camera array based light field microscopy,” Biomed. Opt. Express **6**(9), 3179–3189 (2015). [CrossRef] [PubMed]

**10. **Z. Xiao, Q. Wang, G. Zhou, and J. Yu, “Aliasing Detection and Reduction Scheme on Angularly Undersampled Light Fields,” IEEE Trans. Image Process. **26**(5), 2103–2115 (2017). [CrossRef] [PubMed]

**11. **M. Martínez-Corral and B. Javidi, “Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems,” Adv. Opt. Photonics **10**(3), 512 (2018). [CrossRef]

**12. **E. Lee, S. Yang, M. Han, and J. Kim, “Depth-based refocusing for reducing directional aliasing artifacts,” Opt. Express **24**(24), 28065–28079 (2016). [CrossRef] [PubMed]

**13. **J. Demmerle, C. Innocent, A. J. North, G. Ball, M. Müller, E. Miron, A. Matsuda, I. M. Dobbie, Y. Markaki, and L. Schermelleh, “Strategic and practical guidelines for successful structured illumination microscopy,” Nat. Protoc. **12**(5), 988–1010 (2017). [CrossRef] [PubMed]

**14. **X. Huang, J. Fan, L. Li, H. Liu, R. Wu, Y. Wu, L. Wei, H. Mao, A. Lal, P. Xi, L. Tang, Y. Zhang, Y. Liu, S. Tan, and L. Chen, “Fast, long-term, super-resolution imaging with Hessian structured illumination microscopy,” Nat. Biotechnol. **36**(5), 451–459 (2018). [CrossRef] [PubMed]

**15. **T. L. Liu, S. Upadhyayula, D. E. Milkie, V. Singh, K. Wang, I. A. Swinburne, K. R. Mosaliganti, Z. M. Collins, T. W. Hiscock, J. Shea, A. Q. Kohrman, T. N. Medwig, D. Dambournet, R. Forster, B. Cunniff, Y. Ruan, H. Yashiro, S. Scholpp, E. M. Meyerowitz, D. Hockemeyer, D. G. Drubin, B. L. Martin, D. Q. Matus, M. Koyama, S. G. Megason, T. Kirchhausen, and E. Betzig, “Observing the cell in its native state: Imaging subcellular dynamics in multicellular organisms,” Science **360**(6386), 1392 (2018). [PubMed]

**16. **G. Vicidomini, S. W. Hell, and A. Schönle, “Automatic deconvolution of 4Pi-microscopy data with arbitrary phase,” Opt. Lett. **34**(22), 3583–3585 (2009). [CrossRef] [PubMed]

**17. **H. Li, C. Guo, D. Kim-Holzapfel, W. Li, Y. Altshuller, B. Schroeder, W. Liu, Y. Meng, J. B. French, K.-I. Takamaru, M. A. Frohman, and S. Jia, “Fast, volumetric live-cell imaging using high-resolution light-field microscopy,” Biomed. Opt. Express **10**(1), 29–49 (2018). [CrossRef] [PubMed]

**18. **H.-Y. Liu, J. Zhong, and L. Waller, “Multiplexed phase-space imaging for 3D fluorescence microscopy,” Opt. Express **25**(13), 14986–14995 (2017). [CrossRef] [PubMed]

**19. **H.-Y. Liu, E. Jonas, L. Tian, J. Zhong, B. Recht, and L. Waller, “3D imaging in volumetric scattering media using phase-space measurements,” Opt. Express **23**(11), 14461–14471 (2015). [CrossRef] [PubMed]

**20. **T. T. Do, L. Gan, N. H. Nguyen, and T. D. Tran, “Fast and efficient compressive sensing using structurally random matrices,” IEEE Trans. Signal Process. **60**(1), 139–154 (2012). [CrossRef]

**21. **Z. Zhang, Y. Liu, and Q. Dai, “Light field from micro-baseline image pair,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 3800–3809 (2015).

**22. **Z. Lu, “LightField-deconvolution,” GitHub (2019), https://github.com/bbncWLG/LightField-deconvolution.

**23. **R. G. Keys, “Cubic convolution interpolation for digital image processing,” IEEE Trans. Acoust. Speech Signal Process. **29**(6), 1153–1160 (1981). [CrossRef]

**24. **G. Wu, M. Zhao, L. Wang, Q. Dai, T. Chai, and Y. Liu, “Light field reconstruction using deep convolutional network on EPI,” in Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1638–1646 (2017).

**25. **G. Zheng, R. Horstmeyer, and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics **7**(9), 739–745 (2013). [CrossRef] [PubMed]

**26. **X. Ou, J. Chung, R. Horstmeyer, and C. Yang, “Aperture scanning Fourier ptychographic microscopy,” Biomed. Opt. Express **7**(8), 3140–3150 (2016). [CrossRef] [PubMed]

**27. **C. Brede, M. Friedrich, A. L. Jordán-Garrote, S. S. Riedel, C. A. Bäuerlein, K. G. Heinze, T. Bopp, S. Schulz, A. Mottok, C. Kiesel, K. Mattenheimer, M. Ritz, V. von Krosigk, A. Rosenwald, H. Einsele, R. S. Negrin, G. S. Harms, and A. Beilhack, “Mapping immune processes in intact tissues at cellular resolution,” J. Clin. Invest. **122**(12), 4439–4446 (2012). [CrossRef] [PubMed]

**28. **C. J. Niedworok, A. P. Y. Brown, M. Jorge Cardoso, P. Osten, S. Ourselin, M. Modat, and T. W. Margrie, “aMAP is a validated pipeline for registration and segmentation of high-resolution mouse brain data,” Nat. Commun. **7**(1), 11879 (2016). [CrossRef] [PubMed]

**29. **J. Y. Tinevez, N. Perry, J. Schindelin, G. M. Hoopes, G. D. Reynolds, E. Laplantine, S. Y. Bednarek, S. L. Shorte, and K. W. Eliceiri, “TrackMate: An open and extensible platform for single-particle tracking,” Methods **115**, 80–90 (2017). [CrossRef] [PubMed]

**30. **N. Cohen, S. Yang, A. Andalman, M. Broxton, L. Grosenick, K. Deisseroth, M. Horowitz, and M. Levoy, “Enhancing the performance of the light field microscope using wavefront coding,” Opt. Express **22**(20), 24817–24839 (2014). [CrossRef] [PubMed]

**31. **M. A. Taylor, T. Nöbauer, A. Pernia-Andrade, F. Schlumm, and A. Vaziri, “Brain-wide 3D light-field imaging of neuronal activity with speckle-enhanced resolution,” Optica **5**(4), 345 (2018). [CrossRef]