## Abstract

It is a great challenge in two-photon microscopy (2PM) to achieve a high volumetric imaging speed without sacrificing the spatial and temporal resolution in three dimensions (3D). The structure in 2PM images can be reconstructed with better spatial and temporal resolution by a proper choice of the data-processing algorithm. Here, we propose a method to reconstruct a 3D volume from 2D projections imaged by mirrored Airy beams. We verified that our approach can achieve high accuracy in 3D localization over a large axial range and is applicable to continuous and dense samples. The effective field of view after reconstruction is also expanded. It is a promising technique for rapid volumetric 2PM with axial localization at high resolution.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

In the past decades, two-photon microscopy (2PM) [1] has been widely used in biological research [2–5] thanks to its large penetration depth, high resolution and high specificity. However, its volumetric imaging speed with a Gaussian beam is greatly limited by the additional axial scanning, which is conventionally carried out by a motorized stage. Temporal resolution is crucial for functional brain imaging, as some neuronal dynamics are too fast to be captured otherwise. Several techniques have been proposed to increase the volumetric imaging speed, such as acousto-optic scanning [6], planar illumination [7], and spatial and temporal multiplexing [8]. They offer a much higher volumetric imaging speed than conventional laser-scanning 2PM using a Gaussian beam.

On the other hand, extended depth of focus (EDOF) is a promising technique for rapid volumetric imaging, as it illuminates a large volume at the same time. The nondiffracting Bessel and Airy beams, with their shape-preserving feature, are commonly used to excite two-photon fluorescence [9–11]. Although 2PM with a nondiffracting beam can capture a large volume in a single frame, the depth resolution is lost because the volumetric information is projected onto a single frame. This may hinder its application in areas that require axial localization. To address this issue, depth information can be encoded during imaging by manipulating the illumination patterns and then extracted by post-processing, which has been achieved in functional [12] and structural imaging [13]. In these studies, images of neurons at different depths were utilized to extract functional signals using an algorithm based on matching pursuit [12], or a convolutional neural network trained on samples of interest was used to correct for reconstruction artefacts [13]. These methods, however, require prior knowledge of the sample to optimize the post-processing, making them less versatile across different samples. Alternatively, by leveraging the self-accelerating property of the Airy beam, depth information can be retrieved using two Airy beams [14–16]. We previously developed a technique called mirrored Airy beams (MABs), which encodes the depth information by using two Airy beams with opposite accelerations [17]. The sample is imaged twice using Airy beams with opposite bending directions, such that the images are laterally shifted depending on the axial location of the sample structure and the bending direction of the Airy beam. Once the relationship between the lateral shift and the axial position is calibrated, the lateral shift can be calculated by comparing the two images, and hence the axial position of each structure can be recovered.
The nondiffracting property of the Airy beam also enables a deeper penetration depth in highly scattering media [11]. Although the implementation of MABs is simple, it requires tedious calibration and manual calculation to extract the depth information, which may become difficult for dense structures. Furthermore, only half of the Airy beam can be used to illuminate the sample due to its symmetry along the propagation direction. Therefore, a simple volumetric imaging tool is urgently needed.

Here, we further develop a new imaging scheme based on MABs together with a method for 3D reconstruction. The restriction of using only half of the Airy beam is eliminated by incorporating information contained in images taken at different focal planes. Dense samples, for which manually locating the image pairs was previously difficult, can now be reconstructed automatically. In addition to the axial localization, the effective field of view (FOV) is further expanded thanks to the self-accelerating property. By evaluating the performance of the reconstruction, we show that it is a promising technique for rapid volumetric imaging which not only preserves the depth information accurately, but also remains versatile across different applications.

## 2. Method

#### 2.1 Image formation

By assuming the response of the imaging system to be shift-invariant, the acquisition of signal *Y* at a 3D position ${\boldsymbol {p}} \in {{\mathbb R}^3}$ can be modelled as the convolution between the point-spread function (PSF) *H* of the imaging system and the sample distribution $X$. It is defined by

$$Y(\boldsymbol{p}) = (H \ast X)(\boldsymbol{p}) = \int_{{\mathbb R}^3} H(\boldsymbol{p} - \boldsymbol{q})\,X(\boldsymbol{q})\,d\boldsymbol{q}. \tag{1}$$
We can also consider a discretized model: ${\boldsymbol {y}} \in {{\mathbb R}^{\boldsymbol {m}}}$ and ${\boldsymbol {x}} \in {{\mathbb R}^{\boldsymbol {n}}}$ are vectors formed by raster scanning the sample at different focal planes and the sample volume, respectively. ${\boldsymbol {H}} \in {{\mathbb R}^{{\boldsymbol {m}} \times {\boldsymbol {n}}}}$ denotes a matrix such that the matrix multiplication ${\boldsymbol {Hx}}$ corresponds to the discrete convolution of Eq. (1). The final acquired images after noise corruption can be approximated by

$$\boldsymbol{y} \approx \boldsymbol{H}\boldsymbol{x} + \boldsymbol{n}, \tag{2}$$

where $\boldsymbol{n} \in {{\mathbb R}^{\boldsymbol {m}}}$ denotes the noise.
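As a minimal sketch of this discretized forward model (with hypothetical toy dimensions, not the actual imaging parameters of the paper), each acquired 2D image can be computed as the in-focus slice of the 3D convolution between the sample and the PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

def forward_model(sample, psf, focal_planes):
    """Project a 3D sample through a 3D PSF onto 2D images at given focal planes.

    sample       : (Z, Y, X) fluorophore distribution x
    psf          : (Z, Y, X) excitation PSF H, normalized so that psf.sum() == 1
    focal_planes : axial slice indices at which images are acquired
    """
    # Discrete 3D convolution H * x (the matrix product Hx in the model)
    blurred = fftconvolve(sample, psf, mode="same")
    # The image recorded at each focal plane is the corresponding z-slice
    return np.stack([blurred[z] for z in focal_planes])
```

For a delta-like PSF this reduces to slicing the sample itself, which is a quick sanity check of the implementation.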
PSFs of the imaging system were modelled using the squared intensity profile of the Airy beam [18] and normalized such that $\sum {\boldsymbol {H}} = 1$. The transverse scaling constant ${x_0}$ in the equation of the Airy beam [18] adjusts both the lateral beam size and the axial extent of the Airy beam. Experimentally, this is done by regulating the phase mask on a spatial light modulator (SLM) [17]. An Airy beam with a longer axial extent has a larger depth of field, which increases the axial imaging range. The voxel size was chosen as 0.1 ${\times} $ 0.1 ${\times} $ 0.1 µm^{3}, which approximately satisfies the Nyquist criterion for the typical lateral resolution of 2PM.
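For intuition, the transverse profile of a finite-energy Airy beam [18] and the corresponding two-photon excitation profile can be sketched as follows; the scaling `x0_um` and apodization factor `a` are illustrative values, not the parameters used in the paper:

```python
import numpy as np
from scipy.special import airy

def two_photon_airy_profile(x_um, x0_um=1.0, a=0.05):
    """1D transverse field of a finite-energy Airy beam at the launch plane,
    converted to a two-photon excitation profile (intensity squared)."""
    # Finite-energy Airy field: Ai(x/x0) * exp(a*x/x0); airy() returns (Ai, Ai', Bi, Bi')
    field = airy(x_um / x0_um)[0] * np.exp(a * x_um / x0_um)
    intensity = field ** 2   # one-photon excitation ~ |field|^2
    return intensity ** 2    # two-photon signal ~ intensity^2
```

The fourth power sharpens the main lobe and suppresses the side lobes relative to the one-photon case, which is why the squared intensity profile is the relevant PSF for 2PM.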

#### 2.2 Imaging using MABs

The procedure of 2PM using MABs is shown in Fig. 1(a). The sample is imaged sequentially using two different Airy beams with opposite bending directions [17]. In order to remove the ambiguity arising from the symmetry of the Airy beam (images taken at z_{1}), we capture additional images at other focal planes for extra information. This can be regarded as taking a z-stack but with a much larger z-step than with a conventional Gaussian beam. Intuitively, the lateral shifts of the upper and lower sections of the MAB (the two candidate solutions) change differently with the focal plane. For example, the lateral shift of the upper section of the MABs will decrease while that of the lower section will increase when the focal plane is shifted upwards. Setting the z-step size to half of the axial full width at half maximum (FWHM) was empirically found to maximize the axial range without suffering from the symmetry of the Airy beam. The orientations of the MABs were chosen such that the lateral shifts lie along the horizontal axis, which ensures that any sample in the FOV appears in at least one of the MABs’ images.

#### 2.3 Volumetric reconstruction

Reconstruction of the 3D volume from 2D images can be reframed as a 3D deconvolution problem [19,20], as shown in Fig. 1(b). Since only the observed 2D images are available in a real application, a simple approach is to minimize a loss function that measures the discrepancy between the observed images ${\boldsymbol {y}}$ and the estimated images ${\boldsymbol {H\hat{x}}}$, where $\hat{{\boldsymbol {x}}} \in {{\mathbb R}^{\boldsymbol {n}}}$ denotes the raster scanning of the reconstructed volume. We chose the squared error as the loss function

$$L(\hat{\boldsymbol{x}}) = \frac{1}{2}\|\boldsymbol{y} - \boldsymbol{H}\hat{\boldsymbol{x}}\|_2^2. \tag{3}$$
Gradient descent (GD) was used to minimize the loss function. ${\hat{{\boldsymbol {x}}}_{k + 1}}$ at iteration $k + 1$ was updated according to

$$\hat{\boldsymbol{x}}_{k+1} = \hat{\boldsymbol{x}}_k - \alpha_k \boldsymbol{H}^{T}(\boldsymbol{H}\hat{\boldsymbol{x}}_k - \boldsymbol{y}), \tag{4}$$
where ${\alpha _k}$ denotes the step size at iteration *k* and ${{\boldsymbol {H}}^{\boldsymbol {T}}}$ denotes the adjoint of ${\boldsymbol {H}}$, which corresponds to discrete convolution with the flipped PSF. ${\alpha _k}$ was set empirically to ${10^3} \times \max ({{{0.9}^{k - 1}},\; 0.01} )$. The choice of the initial step size is insignificant as all images are normalized; it was chosen such that the highest value of the reconstruction is around 1. A decay term was added to stabilize the convergence as the process iterates. The decay rate and the minimum decay were chosen empirically by testing different values, and the reconstruction result was found to be not highly sensitive to them. The nonnegativity constraint was imposed by projecting all negative values to zero after each iteration. The result was improved and more consistent across different axial positions when the gradients of all images were batched together into a single update instead of updating once per image. However, this scheme had slow convergence and often oscillated significantly as the error propagated across a large volume. A momentum term [21] was therefore introduced to improve the rate of convergence and the performance. The update became

$$\boldsymbol{v}_{k+1} = \beta \boldsymbol{v}_k + \boldsymbol{H}^{T}(\boldsymbol{H}\hat{\boldsymbol{x}}_k - \boldsymbol{y}), \qquad \hat{\boldsymbol{x}}_{k+1} = \hat{\boldsymbol{x}}_k - \alpha_k \boldsymbol{v}_{k+1}, \tag{5}$$

where $\boldsymbol{v}_k$ is the accumulated gradient (velocity) and $\beta$ is the momentum coefficient.
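The full update loop can be sketched as follows. This is a minimal illustration rather than the authors' exact implementation: `forward` and `adjoint` stand in for multiplication by $\boldsymbol{H}$ and $\boldsymbol{H}^T$, and the initial step size is exposed as a parameter (the text uses $10^3$ for normalized images):

```python
import numpy as np

def reconstruct(y, forward, adjoint, n_iter=300, alpha0=1e3, beta=0.9):
    """Nonnegative gradient descent with momentum for min ||y - Hx||^2, x >= 0."""
    x = np.zeros_like(adjoint(y))  # start from an empty volume
    v = np.zeros_like(x)           # momentum (velocity) term [21]
    for k in range(1, n_iter + 1):
        alpha = alpha0 * max(0.9 ** (k - 1), 0.01)  # decaying step size with a floor
        grad = adjoint(forward(x) - y)              # batched gradient over all images
        v = beta * v + grad
        x = np.maximum(x - alpha * v, 0.0)          # update + nonnegativity projection
    return x
```

With regularization, `grad` gains the corresponding extra term ($\lambda\hat{\boldsymbol{x}}$ for Tikhonov, $\lambda\,\mathrm{sign}(\hat{\boldsymbol{x}})$ for the sparsity constraint).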
Since deconvolution is an ill-conditioned inverse problem, regularization terms were also introduced. One common choice is the ${\ell _2}$ norm, also known as Tikhonov regularization. The loss function and its gradient became

$$L(\hat{\boldsymbol{x}}) = \frac{1}{2}\|\boldsymbol{y} - \boldsymbol{H}\hat{\boldsymbol{x}}\|_2^2 + \frac{\lambda}{2}\|\hat{\boldsymbol{x}}\|_2^2, \qquad \nabla L(\hat{\boldsymbol{x}}) = \boldsymbol{H}^{T}(\boldsymbol{H}\hat{\boldsymbol{x}} - \boldsymbol{y}) + \lambda\hat{\boldsymbol{x}}, \tag{6}$$

where $\lambda$ is the regularization weight.
Since the fluorescent signal from many biological samples is usually sparse, we can exploit a sparsity constraint by using the ${\ell _1}$ norm as the regularization [22], also known as LASSO. This is particularly important for volumetric imaging using MABs, as a single pixel of an image will propagate along the Airy beam’s trajectory into a large volume. By enforcing a sparsity constraint, the artefacts due to the Airy beam’s trajectory can be reduced. The loss function and its gradient became

$$L(\hat{\boldsymbol{x}}) = \frac{1}{2}\|\boldsymbol{y} - \boldsymbol{H}\hat{\boldsymbol{x}}\|_2^2 + \lambda\|\hat{\boldsymbol{x}}\|_1, \qquad \nabla L(\hat{\boldsymbol{x}}) = \boldsymbol{H}^{T}(\boldsymbol{H}\hat{\boldsymbol{x}} - \boldsymbol{y}) + \lambda\,\mathrm{sign}(\hat{\boldsymbol{x}}). \tag{7}$$
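Assuming the standard Tikhonov and LASSO forms of the regularized losses, the two gradient computations can be sketched as below (a subgradient is used for the non-differentiable ${\ell _1}$ term):

```python
import numpy as np

def l2_gradient(x, y, forward, adjoint, lam):
    """Gradient of 0.5*||y - Hx||^2 + 0.5*lam*||x||_2^2 (Tikhonov)."""
    return adjoint(forward(x) - y) + lam * x

def l1_gradient(x, y, forward, adjoint, lam):
    """Subgradient of 0.5*||y - Hx||^2 + lam*||x||_1 (LASSO)."""
    return adjoint(forward(x) - y) + lam * np.sign(x)
```

A proximal alternative would replace the ${\ell _1}$ subgradient step with soft-thresholding after each data-fidelity update; the subgradient form shown here matches the plain gradient-descent scheme of the text.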
## 3. Results

#### 3.1 Comparison of different methods

Samples consisting of 1-µm fluorescent beads at different densities were simulated. Images at three focal planes were generated using MABs with an axial FWHM of 20 µm. Maximum intensity projections (MIPs) of the reconstruction results using different methods are shown in Fig. 2. Reconstruction using GD without momentum and regularization was unsuccessful, as the result oscillated between over-addition and zero. Richardson-Lucy deconvolution [25,26] had a much shorter effective axial range than ${\ell _1}$- and ${\ell _2}$-GD. It was also more susceptible to noise, as shown in Fig. 2(b), which may be due to its noise-amplification property [24]. For ${\ell _1}$- and ${\ell _2}$-GD, the lateral and axial positions of the beads matched the ground truth well within a large axial range, which demonstrates the localization capability of volumetric imaging using MABs despite a low SNR. The beads appeared laterally smaller and less smooth than the ground truth, which may be due to the lateral beam size and the shrinking effect of both the ${\ell _1}$ and ${\ell _2}$ regularizations. Running more iterations further reduces the lateral size of the beads, especially with ${\ell _1}$ regularization. The same holds in the axial dimension, which is particularly useful for minimizing the axial elongation caused by the bending MABs. The axial elongation can be interpreted as the uncertainty of the axial position, since the axial position is located by intersecting two Airy beams. Furthermore, ${\ell _1}$ regularization yields higher contrast and better suppression of artefacts from the Airy beam’s trajectory than ${\ell _2}$ regularization, as shown in the zoomed-in inset in Fig. 2(b). This indicates that sparsity constraints such as ${\ell _1}$ regularization should perform better in volumetric imaging using MABs. The remaining results in this paper were reconstructed using ${\ell _1}$-GD.
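For comparison, the Richardson-Lucy baseline [25,26] is multiplicative rather than additive. A minimal sketch of one iteration, again with `forward` and `adjoint` standing in for $\boldsymbol{H}$ and $\boldsymbol{H}^T$:

```python
import numpy as np

def richardson_lucy_step(x, y, forward, adjoint, eps=1e-12):
    """One Richardson-Lucy update: x <- x * H^T(y / Hx).

    The multiplicative ratio y / Hx is what tends to amplify noise in
    low-signal regions [24]; eps guards against division by zero."""
    return x * adjoint(y / (forward(x) + eps))
```

Because the update is multiplicative, a zero voxel stays zero forever, which keeps the estimate nonnegative without an explicit projection but also makes the iteration sensitive to noisy low-intensity pixels.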

#### 3.2 Expansion of the effective FOV

One advantage of using MABs is the expansion of the effective FOV thanks to their bending nature. In Fig. 2, some beads that were outside the scanning area were also reconstructed successfully, as they were effectively shifted into the FOV. Although these beads appeared in only one of the MABs’ images, they also appeared in another image at a different focal plane, which made reconstruction possible. Furthermore, regions close to the edges of the FOV could be reconstructed by the same means, without the need to crop the edges. Due to the orientations of the MABs, the FOV was mainly expanded in the $x$-direction; however, expansion in the $y$-direction is also possible due to the side lobes of the Airy beam.

#### 3.3 Localization accuracy

The performance of reconstructing different areas of the effective FOV using ${\ell _1}$ regularization was further evaluated by creating artificial test samples made up of point sources at different axial positions. The lateral and axial errors of the reconstructions are shown in Fig. 3. The lateral and axial positions were accurately located over a large axial range, with FWHMs of ∼0.3 µm and ∼0.65–0.7 µm in the lateral and axial dimensions, respectively [Fig. 3(a)]. This shows that volumetric imaging using MABs performs consistently at different positions within the effective axial range. The performance of reconstructing outside the FOV was the first to degrade at both ends of the z-stack. This is because the object appeared in only one image, taken at the 10-µm focal plane, as the lateral shift at the 0-µm focal plane was insufficient and the SNR at the 20-µm focal plane was too low. The performance of reconstructing the edges was slightly worse than at the center, because the object was shifted out of the FOV in one of the MABs’ images; in other words, this case was similar to reconstructing using only one Airy beam at multiple focal planes.

The axial accuracy was less isotropic than the lateral accuracy because the degree of bending of the Airy beam differs at different depths. The axial elongations were more severe near the first and the last focal planes than near the middle focal plane and the tails of the z-stack. This can be explained by the lower degree of bending near the focus of the Airy beam compared with its tails, which leads to a larger axial intersection. Quantitatively, the slope $\mathrm{\Delta }z/\mathrm{\Delta }x$ is larger near the focus [17] and thus, for a given sampling rate in the lateral direction (due to the limited resolution), the axial uncertainty is larger near the focus. It should be noted that the lateral resolution using an Airy beam is inherently anisotropic because of the low eccentricity of its main lobe.

#### 3.4 Effect of different MABs’ sizes

To demonstrate the effect of different trajectories of the Airy beam, the tests were repeated using MABs with double the axial FWHM. The overall trend was similar to that of the shorter MABs. This is important because a consistent performance among different MABs allows us to optimize the geometry of the MABs for different imaging tasks. With the axial FWHM doubled, the effective axial range increased by a factor of two. However, the performance also decreased due to the larger beam size and the weaker bending: a smaller lateral shift for a given axial position leads to a lower axial resolvability. The FWHMs in the lateral and axial dimensions were ∼0.3 µm and ∼0.8–0.9 µm, respectively. The performance in the lateral dimension did not degrade much, as the main lobe’s lateral FWHM of the longer Airy beam increased by only ∼30%. It is important to note that the lateral and axial widths of the reconstruction will further decrease with more iterations due to the ${\ell _1}$ regularization.

The effective axial range was around $({0.5 + 0.5N} )\times FWHM$ for Airy beams of different lengths, where *N* denotes the number of focal planes included (Fig. 3). It should be noted that the effective axial range depends on the density of the fluorescent signals in the projected images, which is related to both the sample density and the axial FWHM of the MABs. Assuming a step size of 0.4 µm for taking a z-stack with a Gaussian beam, the number of images required to image the same volume was ∼17 and ∼33 times that required using 20-µm and 40-µm MABs, respectively. Since the major bottleneck in rapid volumetric imaging is the axial scanning speed, the imaging speed can be improved by a similar factor using MABs, without sacrificing the 3D position information.
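The quoted speedups can be recovered from this relation, under our interpretation (stated here as an assumption) that each of the two mirrored Airy beams is imaged once per focal plane, i.e. $2N$ images in total:

```python
def speedup(fwhm_um, n_planes=3, gaussian_step_um=0.4):
    """Frames needed by a Gaussian z-stack divided by frames needed with MABs."""
    axial_range = (0.5 + 0.5 * n_planes) * fwhm_um    # effective range (0.5 + 0.5N) x FWHM
    gaussian_frames = axial_range / gaussian_step_um  # one frame per 0.4-um step
    mab_frames = 2 * n_planes                         # two mirrored beams per focal plane
    return gaussian_frames / mab_frames
```

With `n_planes=3`, `speedup(20)` gives ≈ 16.7 and `speedup(40)` gives ≈ 33.3, matching the ∼17× and ∼33× figures in the text.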

#### 3.5 Effect of a sample’s density

The effects of the sample density and the axial length of the MABs were further investigated. As shown in Fig. 4(a), denser fluorescent objects degraded the quality of the reconstructions, because there was a higher chance for different objects to overlap with each other after being projected onto an image. Without sufficient sampling in the axial direction, the volume could not be uniquely reconstructed. As a result, reconstructions using MABs with a longer axial length had lower quality than those using shorter MABs, as more objects were projected onto a single image, leading to more occurrences of overlapping. However, longer MABs had a longer effective axial range [Fig. 4(b)]. Comparing Figs. 4(c) and 4(d), reconstruction using shorter MABs had better contrast, smaller axial elongation and fewer false positives. This illustrates one of the advantages of MABs: they can be optimized for different samples and applications.

#### 3.6 Reconstructing a sample with a complex structure

Lastly, an experimental PSF was produced by converting a Gaussian beam into an Airy beam using an SLM placed before the scanning system of a custom-built microscope [Fig. 5(a)]; details of the experimental setup can be found in [17]. The measured PSFs were applied to a simulated network of microtubules in a cell [27]. Such a sample is normally unsuitable for techniques utilizing EDOF, as the network is densely distributed in the axial dimension and is likely to overlap after projection. However, it could still be reconstructed using MABs, as shown in Fig. 5(b-g), thanks to their bending nature. Sparser features, such as the tips of the microtubules, had better reconstruction quality than denser regions, such as the central region. The axial positions of these features were also located precisely. The reconstruction was slightly affected by the noise present in the images, leading to artefacts such as empty holes in the structures and some background spots. There were also axial elongations which degraded the axial resolvability. Although the point response of the reconstruction is not shift-invariant and introduced artefacts into the reconstructed volume, the amount of artefacts was not severe and a meaningful interpretation of the reconstructed volume was still possible as long as sufficient focal planes were sampled. It should be noted that even though the experimental MABs were not perfectly aligned, complex and continuous structures like microtubules could still be reconstructed successfully. This work can potentially be extended to other depth-encoding EDOF techniques.

#### 3.7 Computational specifications

Reconstructions were executed in MATLAB (2019b) on a commercial laptop equipped with an Intel Core i7-9750H, 16 GB of RAM and an NVIDIA GeForce RTX 2060 for accelerating the FFT. The computation time per iteration (with 3 focal planes) was around 36 and 83 seconds for MABs of 20-µm and 40-µm axial FWHM, respectively. One major bottleneck is the memory limitation, as there were nearly one billion voxels for the longer MABs. However, the memory footprint and computation time can be significantly reduced by using 16-bit floating-point precision (MATLAB 2019b does not offer GPU support for 16-bit) or a larger voxel size, especially in the axial dimension, without a significant reduction in performance.
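A back-of-envelope check of the memory figure is straightforward. The 100 × 100 µm lateral FOV below is an assumed value for illustration only; the 80-µm axial extent corresponds to the effective range of the 40-µm MABs with three focal planes:

```python
def volume_memory_gb(fov_um=(80.0, 100.0, 100.0), voxel_um=0.1, bytes_per_voxel=4):
    """Memory footprint of the reconstruction volume in GB (single precision).

    fov_um is (z, y, x) in micrometres; the lateral extent is an assumed
    illustrative value, not a parameter reported in the paper."""
    voxels = 1
    for extent in fov_um:
        voxels *= round(extent / voxel_um)  # 0.1-um voxels along each axis
    return voxels * bytes_per_voxel / 1e9
```

This gives 8×10^8 voxels ("nearly one billion") and ~3.2 GB at single precision; 16-bit floats would halve the footprint, consistent with the savings noted above.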

## 4. Discussion

2PM has played an important role in recent advances in neuroscience. However, its low temporal resolution limits its applications in large-scale studies of neuronal dynamics. Many advanced techniques have been proposed to greatly improve the temporal resolution.

Despite their robust performance, these techniques may not be commonly adopted by biologists for a number of reasons, such as high cost and the need for complex modifications. Thus, an easily accessible tool for volumetric imaging with high temporal resolution is highly valuable.

Here, we demonstrated that images acquired with MABs can be reconstructed into a 3D volume with high localization accuracy and minimal loss in resolution. This offers a much higher volumetric imaging speed than using a Gaussian beam to image an equivalent volume. Instead of cropping the edges of the FOV, a common step in deconvolution to reduce artefacts, MABs retain the full FOV with a further expansion of the effective FOV, which is beneficial for large-scale studies. The nondiffracting feature of the Airy beam also enables a deeper imaging depth, which is important for deep in vivo brain imaging. The setup only requires the addition of a modulator such as an SLM [18] or a digital micromirror device (DMD) [28], and the computational requirements can be satisfied by a common commercial computer. Not only is prior knowledge of the sample not required, but the size of the MABs can also be adjusted for different features of the sample. Rather than being restricted to imaging sparse samples, as with conventional EDOF, the MABs can be adjusted to accommodate denser samples for better reconstruction quality. This versatility allows MABs to be applied to a wide range of applications, and we anticipate that these advancements will foster the application of Airy beams in biological research, such as monitoring neuronal activity spanning a large volume.

In the future, applications of MABs in three-photon microscopy (3PM) can also be explored. 3PM using a Bessel beam has been shown to have significantly higher contrast than 2PM due to background suppression, and to reach an imaging depth beyond 1.0 mm [9]. With MABs, depth information of the sample could also be retrieved with similar imaging performance. The Airy beam can be further engineered to have reduced side lobes [29]. Since our method can be applied to any excitation beam that encodes depth information, it can potentially be combined with other nondiffracting beams or PSF engineering to optimize the encoding-decoding process and further improve the image quality and reconstruction. Recently, a PSF designed using deep learning outperformed the traditional Tetrapod PSF in 3D localization [30]. It would be interesting to apply our work to other nondiffracting beams and PSF engineering techniques.

## 5. Conclusion

We have demonstrated a reconstruction method for volumetric 2PM using MABs. By leveraging the self-accelerating property of the Airy beam, both axial and lateral positions can be retrieved with high accuracy over a large axial range. The method has significant benefits over a Gaussian beam in terms of volumetric imaging speed. Although EDOF techniques are mostly suited to sparse samples, our work is also applicable to dense samples. The size of the MABs can also be optimized for different sample densities, trading off the imaging range against the reconstruction quality. We foresee these advancements laying the foundation for a wide variety of applications in optical microscopy.

## Funding

Research Grants Council, University Grants Committee (CityU T42-103/16-N, E-HKU701/17, HKU 17200219, HKU 17209018, HKU C7047-16G); National Natural Science Foundation of China (N_HKU712/16).

## Disclosures

The authors declare no conflicts of interest.

## References

**1. **W. Denk, J. H. Strickler, and W. W. Webb, “Two-photon laser scanning fluorescence microscopy,” Science **248**(4951), 73–76 (1990). [CrossRef]

**2. **F. Helmchen and W. Denk, “Deep tissue two-photon microscopy,” Nat. Methods **2**(12), 932–940 (2005). [CrossRef]

**3. **H. Cheng, S. Tong, X. Deng, H. Liu, Y. Du, C. He, P. Qiu, and K. Wang, “Deep-brain 2-photon fluorescence microscopy in vivo excited at the 1700nm window,” Opt. Lett. **44**(17), 4432–4435 (2019). [CrossRef]

**4. **W. Denk, K. R. Delaney, A. Gelperin, D. Kleinfeld, B. W. Strowbridge, D. W. Tank, and R. Yuste, “Anatomical and functional imaging of neurons using 2-photon laser scanning microscopy,” J. Neurosci. Methods **54**(2), 151–162 (1994). [CrossRef]

**5. **K. Svoboda and R. Yasuda, “Principles of Two-Photon Excitation Microscopy and Its Applications to Neuroscience,” Neuron **50**(6), 823–839 (2006). [CrossRef]

**6. **G. Katona, G. Szalay, P. Maák, A. Kaszás, M. Veress, D. Hillier, B. Chiovini, E. S. Vizi, B. Roska, and B. Rózsa, “Fast two-photon in vivo imaging with three-dimensional random-access scanning in large tissue volumes,” Nat. Methods **9**(2), 201–208 (2012). [CrossRef]

**7. **T. V. Truong, W. Supatto, D. S. Koos, J. M. Choi, and S. E. Fraser, “Deep and fast live imaging with two-photon scanned light-sheet microscopy,” Nat. Methods **8**(9), 757–760 (2011). [CrossRef]

**8. **A. Cheng, J. T. Gonçalves, P. Golshani, K. Arisaka, and C. Portera-Cailliau, “Simultaneous two-photon calcium imaging at different depths with spatiotemporal multiplexing,” Nat. Methods **8**(2), 139–142 (2011). [CrossRef]

**9. **B. Chen, X. Huang, D. Gou, J. Zeng, G. Chen, M. Pang, Y. Hu, Z. Zhao, Y. Zhang, Z. Zhou, H. Wu, H. Cheng, Z. Zhang, C. Xu, Y. Li, L. Chen, and A. Wang, “Rapid volumetric imaging with Bessel-Beam three-photon microscopy,” Biomed. Opt. Express **9**(4), 1992–2000 (2018). [CrossRef]

**10. **G. Thériault, M. Cottet, A. Castonguay, N. McCarthy, and Y. De Koninck, “Extended two-photon microscopy in live samples with Bessel beams: steadier focus, faster volume scans, and simpler stereoscopic imaging,” Front. Cell. Neurosci. **8**, 139 (2014). [CrossRef]

**11. **X.-J. Tan, C. Kong, Y.-X. Ren, C. S. W. Lai, K. K. Tsia, and K. K. Y. Wong, “Volumetric two-photon microscopy with a non-diffracting Airy beam,” Opt. Lett. **44**(2), 391–394 (2019). [CrossRef]

**12. **A. Song, A. S. Charles, S. A. Koay, J. L. Gauthier, S. Y. Thiberge, J. W. Pillow, and D. W. Tank, “Volumetric two-photon imaging of neurons using stereoscopy (vtwins),” Nat. Methods **14**(4), 420–426 (2017). [CrossRef]

**13. **A. F. Valle and J. D. Seelig, “Two-photon Bessel beam tomography for fast volume imaging,” Opt. Express **27**(9), 12147–12162 (2019). [CrossRef]

**14. **P. Zammit, A. R. Harvey, and G. Carles, “Extended depth-of-field imaging and ranging in a snapshot,” Optica **1**(4), 209–216 (2014). [CrossRef]

**15. **S. Jia, J. C. Vaughan, and X. Zhuang, “Isotropic three-dimensional super-resolution imaging with a self-bending point spread function,” Nat. Photonics **8**(4), 302–306 (2014). [CrossRef]

**16. **Y. Zhou, P. Zammit, V. Zickus, J. M. Taylor, and A. R. Harvey, “Twin-Airy Point-Spread Function for Extended-Volume Particle Localization,” Phys. Rev. Lett. **124**(19), 198104 (2020). [CrossRef]

**17. **H. He, C. Kong, X.-J. Tan, K. Y. Chan, Y.-X. Ren, K. K. Tsia, and K. K. Y. Wong, “Depth-resolved volumetric two-photon microscopy based on dual Airy beam scanning,” Opt. Lett. **44**(21), 5238–5241 (2019). [CrossRef]

**18. **G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, “Observation of Accelerating Airy Beams,” Phys. Rev. Lett. **99**(21), 213901 (2007). [CrossRef]

**19. **J. G. McNally, T. Karpova, J. Cooper, and J. A. Conchello, “Three-Dimensional Imaging by Deconvolution Microscopy,” Methods **19**(3), 373–385 (1999). [CrossRef]

**20. **D. S. C. Biggs, “3D Deconvolution Microscopy,” Curr. Protoc. Cytom. **52**(1), 12.19.1 (2010). [CrossRef]

**21. **N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural Networks **12**(1), 145–151 (1999). [CrossRef]

**22. **N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica **5**(1), 1–9 (2018). [CrossRef]

**23. **J. B. de Monvel, S. Le Calvez, and M. Ulfendahl, “Image Restoration for Confocal Microscopy: Improving the Limits of Deconvolution, with Application to the Visualization of the Mammalian Hearing Organ,” Biophys. J. **80**(5), 2455–2470 (2001). [CrossRef]

**24. **N. Dey, L. Blanc-Feraud, C. Zimmer, P. Roux, Z. Kam, J.-C. Olivo-Marin, and J. Zerubia, “Richardson–Lucy algorithm with total variation regularization for 3D confocal microscope deconvolution,” Microsc. Res. Tech. **69**(4), 260–266 (2006). [CrossRef]

**25. **W. H. Richardson, “Bayesian-Based Iterative Method of Image Restoration,” J. Opt. Soc. Am. **62**(1), 55–59 (1972). [CrossRef]

**26. **L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. **79**, 745 (1974). [CrossRef]

**27. **D. Sage, L. Donati, F. Soulez, D. Fortun, G. Schmit, A. Seitz, R. Guiet, C. Vonesch, and M. Unser, “DeconvolutionLab2: An open-source software for deconvolution microscopy,” Methods **115**, 28–41 (2017). [CrossRef]

**28. **Q. Xu, Y. Wang, S. Y. Siew, J. Lin, and Y. Zhang, “Generating self-accelerating Airy beams using a digital micromirror device,” Appl. Phys. B **117**(1), 141–144 (2014). [CrossRef]

**29. **S. Barwick, “Reduced side-lobe Airy beams,” Opt. Lett. **36**(15), 2827–2829 (2011). [CrossRef]

**30. **E. Nehme, D. Freedman, R. Gordon, B. Ferdman, L. E. Weiss, O. Alalouf, T. Naor, R. Orange, T. Michaeli, and Y. Shechtman, “DeepSTORM3D: dense 3D localization microscopy and PSF design by deep learning,” Nat. Methods **17**(7), 734–740 (2020). [CrossRef]