
3D computational cannula fluorescence microscopy enabled by artificial neural networks

Open Access

Abstract

Computational cannula microscopy (CCM) is a minimally invasive approach to high-resolution widefield fluorescence imaging deep inside tissue. Rather than using conventional lenses, a surgical cannula acts as a lightpipe for both excitation and fluorescence emission, and computational methods are used for image visualization. Here, we enhance CCM with artificial neural networks to enable 3D imaging of cultured neurons and fluorescent beads, the latter inside a volumetric phantom. We experimentally demonstrate a transverse resolution of ∼6µm, a field of view of ∼200µm and axial sectioning of ∼50µm for depths down to ∼700µm, all achieved with a computation time of ∼3ms/frame on a desktop computer.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Neurons are distributed in the brain in a complex manner in 3D space. Therefore, neural imaging requires data in all 3 spatial dimensions, ideally with fast acquisition. Computational cannula microscopy (CCM) has been shown to be effective for imaging fluorescence deep inside the brain with minimal trauma [1,2]. In CCM, the cannula acts as a lightpipe delivering excitation light deep inside tissue. Fluorescence is then collected by the same cannula and redirected to a sensor for recording. Since the cannula is a non-imaging element, the images are obtained by computational reconstruction. Because no scanning is involved, CCM is very fast and can achieve diffraction-limited resolution. Previous work also demonstrated the potential of computational refocusing to achieve quasi-3D imaging (in air) [3]. Recently, we also demonstrated the application of artificial neural networks (ANNs) to drastically improve the speed of image reconstructions in CCM [4]. Here, we extend our previous work to enable ANNs to perform fluorescence imaging in a 3D volume. Specifically, we investigated three different ANNs to enable 3D CCM, and demonstrated 3D imaging using cultured neurons and fluorescent beads, both in air and inside a volumetric phantom.

3D imaging of neurons in vivo in the intact brain is typically achieved with 2-photon imaging. Although impressive resolution, field of view and speed have been demonstrated recently [5], such mesoscopes require fairly expensive equipment and complex procedures, and are typically limited to depths of a few hundred micrometers. Many other approaches exist for imaging fixed/dead neurons, including clearing tissue to render it transparent [6] and utilizing polymeric expansion techniques [7]. Lightsheet microscopy [8,9] and structured-illumination approaches [10] have also been successfully employed for fast high-resolution volumetric imaging of transparent samples. Tomography from multiple 2D images using deep convolutional ANNs has been demonstrated for semiconductor-device inspection using reflected light [11]. Alternative machine-learning approaches have been combined with tomography [12] and light-field microscopy [13]. Computational refocusing can also achieve 3D imaging, either with optics-free setups [14–16] or with diffractive masks [17]. Recent work has applied similar principles to 3D wide-field fluorescence microscopy of clear samples using miniaturized mesoscopes [18]. In contrast to these approaches, CCM is able to image deep inside opaque or highly scattering tissue such as the mouse brain [2,19]. Furthermore, CCM has the advantage that the ratio of field of view to probe diameter is close to 1, thereby allowing for minimally invasive imaging. In our experiments, the probe (cannula) diameter is 220µm. Alternative approaches to imaging through multi-mode fibers have also been described [20–23], but most of these rely on the temporal coherence of the excitation light, and thereby require more complex equipment and computational methods.

2. Experiment

The schematic of our CCM setup is shown in Fig. 1(a) (and in Supplement 1, Fig. S1). The cannula (FT200EMT, Thorlabs) can be inserted into the sample. The excitation light (LED with center wavelength = 470 nm, M470L3, Thorlabs) is coupled to the cannula through an objective lens. Fluorescence from the sample is collected by the same cannula (therefore, an epi-configuration). The top (distal) end of the cannula is imaged onto an sCMOS camera (C11440, Hamamatsu). Reflected excitation light is rejected by a dichroic mirror and an additional filter. An exemplary image is shown in the top right inset. A reference microscope is placed underneath the sample to image the same region as the cannula, and the corresponding image is shown in the bottom right inset of Fig. 1(a). First, we performed a series of experiments to determine that the volume of interest (constrained by the fluorescence collection and excitation efficiencies) is limited to approximately 100µm from the bottom surface of the cannula; at larger distances, we experimentally determined that the fluorescent signal was too weak (Supplement 1, section 1 and Figs. S2 and S3). Subsequently, we restricted our experiments to 3 layers spaced by 50µm in close proximity to the cannula (as illustrated in Fig. 1(a)).


Fig. 1. Overview of computational-cannula microscopy (CCM). (a) Simplified schematic of our microscope. Signals arising from 3 layers inside the sample are captured during training of the ANNs. Fluorescence from deeper layers was too weak. These layers are spaced by 50µm from the proximal end of the cannula. Right insets show recorded images with CCM (top) and with the reference microscope (bottom). (b) Details of ANN1_r, which is trained to take the input CCM image and output the reconstructed image of 1 layer. A modified version of this network, ANN2, outputs 3 images, one for each layer (see Supplement 1). (c) Details of ANN1_c, which classifies the input CCM image into one of the 3 layers.


Similar to our previous work [4], here we used both mouse primary hippocampal cultured neurons and slides with fluorescent beads to create a dataset for training the ANNs. However, unlike our previous work, we acquired this dataset for 3 layers as illustrated in the left inset of Fig. 1(a). Details of sample preparation are described in Supplement 1, section 2. A total of 16,700 images from each layer were recorded.
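As an illustration of how such a paired dataset could be organized for training, the minimal PyTorch sketch below pairs each recorded CCM frame with its reference (ground-truth) image and a layer index. The directory layout, file format and the PairedCCMDataset class are hypothetical assumptions for illustration, not details from this work.

```python
# Hypothetical sketch of a paired CCM/reference dataset loader (PyTorch).
# Directory layout and file names are illustrative assumptions only.
import glob
import torch
from torch.utils.data import Dataset
from PIL import Image
import torchvision.transforms.functional as TF

class PairedCCMDataset(Dataset):
    """Pairs a raw CCM frame with its reference image and layer index (0, 1 or 2)."""
    def __init__(self, root):
        # assumed layout: root/layer1/ccm/*.png and root/layer1/ref/*.png, etc.
        self.items = []
        for layer in (1, 2, 3):
            ccm_files = sorted(glob.glob(f"{root}/layer{layer}/ccm/*.png"))
            ref_files = sorted(glob.glob(f"{root}/layer{layer}/ref/*.png"))
            self.items += list(zip(ccm_files, ref_files, [layer - 1] * len(ccm_files)))

    def __len__(self):
        return len(self.items)

    def __getitem__(self, idx):
        ccm_path, ref_path, layer = self.items[idx]
        ccm = TF.to_tensor(Image.open(ccm_path).convert("L"))  # 1 x H x W in [0, 1]
        ref = TF.to_tensor(Image.open(ref_path).convert("L"))
        return ccm, ref, torch.tensor(layer)
```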

Figure 1(b) shows the architecture of ANN1_r, which is used to convert the input CCM image into the fluorescence image. It consists of dense blocks that prevent the gradients from vanishing too quickly. Each dense block includes 3 individual layers: 2 convolutional layers with ReLU activation functions followed by a batch-normalization layer. The overall structure is a typical U-net, with skip connections that concatenate the encoder and decoder outputs [4]. A second ANN, referred to as ANN1_c (Fig. 1(c)), was used to classify the images into the 3 layers (the layer index is stored as metadata in the images). It includes 8 blocks and a final classifier. Each block consists of one 2D convolutional layer with a ReLU activation function followed by a batch-normalization layer. We add a max-pooling operation between every two blocks, which down-samples the input and prevents overfitting by taking the maximum value in each 2 × 2 filter region. For both ANNs, the loss function is the pixel-wise cross-entropy, defined as:

$$L = \frac{1}{N}\sum_i \left[ -g_i \log p_i - (1 - g_i)\log(1 - p_i) \right],$$
where $g_i$ and $p_i$ represent the ground-truth and predicted pixel intensities, respectively. This loss function imposes sparsity. An alternative ANN, referred to as ANN2, which outputs the 3 layer images simultaneously, was also explored and is described in Supplement 1, section 3.
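For concreteness, the PyTorch sketch below shows one possible rendering of the building blocks described above: the dense block used in ANN1_r, the convolutional block and 2 × 2 max-pooling arrangement of ANN1_c, and the pixel-wise cross-entropy of Eq. (1) expressed as a binary cross-entropy. Channel widths, kernel sizes and the classifier head are assumptions, and the full U-net encoder-decoder of ANN1_r is omitted for brevity.

```python
# Minimal PyTorch sketch of the building blocks described in the text; layer
# widths and depths are illustrative assumptions, not the authors' exact design.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Two conv+ReLU layers followed by batch normalization, as in ANN1_r."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.body(x)

class ClassifierBlock(nn.Module):
    """One 2D conv+ReLU followed by batch normalization, as in ANN1_c."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        return self.body(x)

class ANN1c(nn.Module):
    """8 classifier blocks with a 2x2 max-pool after every two blocks, then a 3-way classifier."""
    def __init__(self, num_layers=3, width=16):
        super().__init__()
        chans = [1, width, width, 2*width, 2*width, 4*width, 4*width, 8*width, 8*width]
        blocks = []
        for i in range(8):
            blocks.append(ClassifierBlock(chans[i], chans[i + 1]))
            if i % 2 == 1:                      # max-pool between every two blocks
                blocks.append(nn.MaxPool2d(2))
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(8*width, num_layers))

    def forward(self, x):
        return self.head(self.features(x))

# Pixel-wise cross-entropy of Eq. (1): binary cross-entropy between the predicted
# image p and the ground-truth image g, averaged over all N pixels.
pixelwise_loss = nn.BCELoss()      # expects p in (0, 1), e.g., after a sigmoid
```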

Data for both training and testing the ANNs were acquired by placing various samples under the cannula and capturing the CCM and reference images simultaneously, while varying both the transverse (x,y) position of the sample relative to the optical axis of the cannula and the depth z between the face of the cannula and the top of the sample. Details of the image acquisition, training and testing procedures are described in section 4 of Supplement 1. It is important to clarify the differences between the ANNs used here. ANN1_r outputs a single 2D image, which can be located at any one of the 3 planes used for training. The companion network ANN1_c is used to classify the input image into one of the 3 planes. Without ANN1_c, ANN1_r alone is not able to predict the z location of the image; this is the significance of ANN1_c (i.e., the need for classification). We note that ANN1_c performs an optical-sectioning operation similar to a confocal microscope, but with no scanning. Therefore, ANN1_c and ANN1_r together are able to produce the 3D information. ANN2, on the other hand, produces three 2D images, one located at each of the three planes used for training.
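The way ANN1_c and ANN1_r are combined at inference time can be summarized by the hedged sketch below: the classifier assigns the input frame to one of the three planes, and the reconstructed 2D image is placed at that plane in an otherwise empty 3-layer volume. The function name and tensor shapes are assumptions for illustration.

```python
# Hypothetical inference sketch combining ANN1_c and ANN1_r to obtain 3D
# information from a single CCM frame.
import torch

@torch.no_grad()
def reconstruct_3d(ccm_frame, ann1_r, ann1_c, num_layers=3):
    """ccm_frame: 1 x 1 x H x W tensor. Returns (volume, layer_index)."""
    recon = ann1_r(ccm_frame)                      # 1 x 1 x H x W reconstructed image
    layer = int(ann1_c(ccm_frame).argmax(dim=1))   # which of the 3 planes it lies in
    volume = torch.zeros(num_layers, *recon.shape[-2:])
    volume[layer] = recon.squeeze()                # place the 2D image at its axial plane
    return volume, layer

# ANN2, by contrast, outputs all three planes at once:
#   volume = ann2(ccm_frame).squeeze(0)            # 3 x H x W
```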

3. Results

Imaging results using ANN1_r and ANN1_c for cultured neurons and fluorescent beads are summarized in Fig. 2(a). The ground-truth images were obtained by the reference microscope as described earlier and confirm the accuracy of the ANN outputs. The layer index predicted by ANN1_c was verified against the metadata of the corresponding reference images. The structural similarity index (SSIM) and mean absolute error (MAE) of ANN1_r, both averaged over 1000 test images, were 90% and 1%, respectively. The classification accuracy of ANN1_c, averaged over 1000 images, was 99.8%. Figures 2(b) and 2(c) show results from ANN2 with cultured neurons and fluorescent beads, respectively.
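For reference, the reported metrics can be computed along the lines of the following sketch (using scikit-image for SSIM); reporting both quantities as percentages and the use of a unit data range are assumptions about the exact conventions.

```python
# Sketch of averaged SSIM and MAE over a test set; conventions are assumptions.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def evaluate(preds, truths):
    """preds, truths: lists of 2D float arrays scaled to [0, 1]."""
    ssims, maes = [], []
    for p, g in zip(preds, truths):
        ssims.append(ssim(g, p, data_range=1.0))
        maes.append(np.mean(np.abs(g - p)))       # mean absolute error per image
    return 100 * np.mean(ssims), 100 * np.mean(maes)   # report as percentages
```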


Fig. 2. Experimental results of 3D CCM. (a) Fluorescence samples were reconstructed using ANN1_r, while the layer index was predicted by ANN1_c. The first 3 rows show cultured neurons, while the last row shows fluorescent beads (diameter=4µm). (b) ANN2 produces 3 reconstructed images, one for each layer; this is an example where layer 2 contains the neuron. (c) Another example from ANN2, where layer 3 contains the neuron. Many additional examples from all networks are included in Supplement 1.


The performance of ANN2, averaged over 1000 test images with 3 layers per image, was 96% (SSIM) and 0.4% (MAE). We further evaluated the computation time on a computer equipped with an Intel Core i7-4790 CPU (clock frequency of 3.60 GHz, 16.0 GB memory) and an NVIDIA GeForce GTX 970 GPU. The average reconstruction times per frame for ANN1_r and ANN2 were 3.3ms and 3.4ms, respectively. The average classification time per frame for ANN1_c was 3.6ms.
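A per-frame timing measurement of this kind can be sketched as follows; the warm-up and averaging strategy are assumptions rather than the authors' exact benchmarking procedure.

```python
# Minimal per-frame GPU timing sketch; batch size and warm-up count are assumptions.
import time
import torch

@torch.no_grad()
def time_per_frame(model, frame, n_warmup=10, n_runs=1000):
    device = next(model.parameters()).device
    frame = frame.to(device)
    for _ in range(n_warmup):                 # warm up kernels and caches
        model(frame)
    if device.type == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    for _ in range(n_runs):
        model(frame)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return 1e3 * (time.perf_counter() - t0) / n_runs   # milliseconds per frame
```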

We also compared the performance of the two reconstruction ANNs by applying them to the same input image, shown in Fig. 3(a), with the corresponding ground-truth image in Fig. 3(b) (layer index 2). The reconstructed result from ANN1_r is shown in Fig. 3(c); the output of ANN1_c is 2. The corresponding outputs from ANN2 are shown in Figs. 3(d)–(f) for layers 1, 2 and 3, respectively.


Fig. 3. (a)-(f) Comparison of results from the different reconstruction ANNs using the same input image. In (c), the output of ANN1_c is labeled as 2 on top. (g)-(l) Images of a single fluorescent bead (diameter=4µm) obtained using the 2 networks (the bead is in layer 1). Images (a)-(f) are the same size, as are images (g), (i) and (k).


In order to estimate resolution, we imaged a single fluorescent bead; its ground-truth image and a cross-section through the bead are shown in Figs. 3(g) and 3(h), respectively. The measured bead diameter was 5.9µm. The corresponding outputs from ANN1_r and ANN2 are shown in Figs. 3(i)–(l), and the corresponding measured bead diameters were 6.6µm and 5.6µm, respectively.
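The apparent bead diameter can be estimated from the full width at half maximum (FWHM) of the cross-section through the bead, as in the sketch below; the use of linear interpolation at the half-maximum crossings is an assumption about the exact measurement procedure.

```python
# Sketch of a FWHM-based bead-diameter estimate from a 1D line profile.
import numpy as np

def fwhm_um(profile, pixel_size_um):
    """profile: 1D intensity cross-section through the bead center."""
    profile = profile - profile.min()
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]

    # refine both edges by linear interpolation between neighboring samples
    def interp(i_lo, i_hi):
        y0, y1 = profile[i_lo], profile[i_hi]
        return i_lo + (half - y0) / (y1 - y0)

    x_left = interp(left - 1, left) if left > 0 else float(left)
    x_right = interp(right, right + 1) if right < len(profile) - 1 else float(right)
    return (x_right - x_left) * pixel_size_um
```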

Finally, we fabricated a phantom made of agarose dispersed with fluorescent beads, as illustrated by the photographs in Fig. 4(a) and in Supplement 1. The cannula was carefully inserted into the phantom while CCM images were recorded. ANN1_r was retrained with a synthetic dataset created by combining the 3-layer CCM images into a single “synthetic” CCM image (see section 4 of Supplement 1). We refer to this new network as ANN1_r*; it is trained to reconstruct an image comprised of the projection, onto a single plane, of the fluorescence signal from within 100µm of the proximal end of the cannula. The CCM images and corresponding output images of ANN1_r* at various depths are shown in Fig. 4(b). Only a subset of the images is shown here; the complete set is included in Supplement 1, section 5. This stack of 2D images can then be combined into a reconstructed 3D image, as shown in Fig. 4(c) (Supplement 1).
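The two data-handling steps described here, forming a single synthetic input for ANN1_r* from the 3-layer CCM frames and stacking its per-depth outputs into a volume, could be sketched as follows. The summation rule, normalization and insertion step size are assumptions.

```python
# Hedged sketch of forming synthetic single-plane inputs and stacking per-depth
# reconstructions into a 3D volume; combination rule and step size are assumptions.
import numpy as np

def synthesize_ccm(layer_frames):
    """layer_frames: list of 3 CCM images (2D arrays) of the same field of view."""
    combined = np.sum(np.stack(layer_frames, axis=0), axis=0)
    return combined / combined.max()            # renormalize to [0, 1]

def stack_volume(recon_slices, dz_um=50.0):
    """recon_slices: ANN1_r* outputs recorded at successive insertion depths."""
    volume = np.stack(recon_slices, axis=0)     # (z, y, x)
    z_um = dz_um * np.arange(len(recon_slices)) # placeholder insertion step
    return volume, z_um
```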


Fig. 4. Imaging inside a volumetric phantom: (a) Photographs of the phantom. The bottom image shows the cannula being inserted into the phantom; the blue light is the excitation. (b) Output of ANN1_r*, trained on a synthetic dataset (see text for details), at various depths z inside the phantom (see Visualization 1). (c) Reconstructed 3D image. The streaks in the z direction indicate that the cannula pushed some beads down inside the phantom during the experiment (see Visualization 2).


One of the advantages of ANNs over previous approaches that utilize singular-value decomposition (SVD) [1–3] is the much higher computation speed. In Table 1, we summarize the performance of the 2 ANN approaches and the SVD method. The performance of the 2 ANN approaches is similar; the values were averaged over 1000 test images, each containing 3 layers. Classification accuracy was defined as the ratio of the number of images with a correctly predicted layer index to the total number of images tested (1000). The data used for SVD came from a single layer; hence, classification accuracy is not applicable.
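For context, a linear SVD-based reconstruction of this kind typically inverts a calibrated transfer matrix between the object and the CCM measurement using a truncated pseudo-inverse, as in the hedged sketch below; the calibration procedure and truncation threshold are assumptions, not details from the cited work.

```python
# Hedged sketch of a linear (SVD-based) reconstruction: a calibration matrix A
# maps the vectorized object x to the vectorized measurement y = A @ x, and a
# truncated pseudo-inverse of A recovers x from a new measurement.
import numpy as np

def truncated_pinv(A, rcond=1e-3):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s.max()                  # drop weak singular values (noise)
    return Vt[keep].T @ np.diag(1.0 / s[keep]) @ U[:, keep].T

def svd_reconstruct(y, A_pinv, shape):
    return (A_pinv @ y.ravel()).reshape(shape)
```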


Table 1. Comparison of the performance of each ANN and the linear (SVD) algorithm.

4. Conclusion

In conclusion, we demonstrated 3D imaging of fluorescent beads in a volumetric phantom using a surgical cannula as the lightpipe for both excitation and fluorescence. Image reconstructions in multiple planes were achieved using trained artificial neural networks. We trained two types of neural networks on both experimental and quasi-experimental data (the latter augmented by synthesizing multiple plane images together). The system was able to achieve a lateral resolution of ∼6µm, axial sectioning of ∼50µm and imaging depths as large as 0.7mm (limited primarily by the length of the cannula). The field of view was approximately equal to the diameter of the cannula, thereby allowing a wide area to be imaged with minimally invasive surgery.

Funding

National Science Foundation (1533611); National Institutes of Health (1R21EY030717).

Disclosures

The authors declare no conflicts of interest.

See Supplement 1 for supporting content.

References

1. G. Kim, N. Nagarajan, M. Capecchi, and R. Menon, “Cannula-based computational fluorescence microscopy,” Appl. Phys. Lett. 106(26), 261111 (2015). [CrossRef]  

2. G. Kim, N. Nagarajan, E. Pastuzyn, K. Jenks, M. Capecchi, J. Shepherd, and R. Menon, “Deep-brain imaging via epi-fluorescence computational cannula microscopy,” Sci. Rep. 7(1), 44791 (2017). [CrossRef]  

3. G. Kim and R. Menon, “An ultra-small 3D computational microscope,” Appl. Phys. Lett. 105(6), 061114 (2014). [CrossRef]  

4. R. Guo, Z. Pan, A. Taibi, J. Shepherd, and R. Menon, “Computational Cannula Microscopy of neurons using neural networks,” Opt. Lett. 45(7), 2111–2114 (2020). [CrossRef]  

5. R. Lu, Y. Liang, G. Meng, P. Zhou, K. Svoboda, L. Paninski, and N. Ji, “Rapid mesoscale volumetric imaging of neural activity with synaptic resolution,” Nat. Methods 17(3), 291–294 (2020). [CrossRef]  

6. C. M. Franca, R. Riggers, J. L. Muschler, M. Widbiller, P. M. Lococo, A. Diogenes, and L. E. Bertassoni, “3D-imaging of whole neuronal and vascular networks of the human dental pulp via CLARITY and light sheet microscopy,” Sci. Rep. 9(1), 10860 (2019). [CrossRef]  

7. F. Chen, P. W. Tillberg, and E. S. Boyden, “Expansion Microscopy,” Science 347(6221), 543–548 (2015). [CrossRef]

8. M. B. Bouchard, V. Voleti, C. S. Mendes, C. Lacefield, W. B. Grueber, R. S. Mann, R. M. Bruno, and E. M. C. Hillman, “Swept confocally-aligned planar excitation (SCAPE) microscopy for high speed volumetric imaging of behaving organisms,” Nat. Photonics 9(2), 113–119 (2015). [CrossRef]  

9. V. Voleti, K. B. Patel, W. Li, C. P. Campos, S. Bharadwaj, H. Yu, C. Ford, M. J. Casper, R. W. Yan, W. Liang, C. Wen, K. D. Kimura, K. L. Targoff, and E. M. C. Hillman, “Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0,” Nat. Methods 16(10), 1054–1062 (2019). [CrossRef]  

10. Z. Li, Q. Zhang, S.-W. Chou, Z. Newman, R. Turcotte, R. Natan, Q. Dai, E. Y. Isacoff, and N. Ji, “Fast widefield imaging of neuronal structure and function with optical sectioning in vivo,” Sci. Adv. 6(19), eaaz3870 (2020). [CrossRef]  

11. A. Goy, G. Rughoobur, S. Li, K. Arthur, A. I. Akinwande, and G. Barbastathis, “High-resolution limited-angle phase tomography of dense layered objects using deep neural networks,” Proc. Natl. Acad. Sci. U. S. A. 116(40), 19848–19856 (2019). [CrossRef]  

12. U. S. Kamilov, I. N. Papadopoulos, M. H. Shoreh, A. Goy, C. Vonesch, M. Unser, and D. Psaltis, “Learning approach to optical tomography,” Optica 2(6), 517–522 (2015). [CrossRef]  

13. N. C. Pégard, H.-Y. Liu, N. Antipa, M. Gerlock, H. Adesnik, and L. Waller, “Compressive light-field microscopy for 3D neural activity recording,” Optica 3(5), 517–524 (2016). [CrossRef]  

14. E. Scullion, S. Nelson, and R. Menon, “Optics-free imaging of complex, non-sparse QR-codes with Deep Neural Networks,” arXiv:2002.11141 [eess.IV] (2020).

15. G. Kim, K. Isaacson, R. Palmer, and R. Menon, “Lensless photography with only an image sensor,” Appl. Opt. 56(23), 6450–6456 (2017). [CrossRef]  

16. A. Ozcan and E. McLeod, “Lensless imaging and sensing,” Annu. Rev. Biomed. Eng. 18(1), 77–102 (2016). [CrossRef]  

17. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018). [CrossRef]  

18. Y. Xue, I. G. Davison, D. A. Boas, and L. Tian, “Single-Shot 3D Widefield Fluorescence Imaging with a Computational Miniature Mesoscope,” arXiv:2003.11994 (2020).

19. G. Kim and R. Menon, “Numerical analysis of computational cannula microscopy,” Appl. Opt. 56(9), D1–D7 (2017). [CrossRef]  

20. E. Kakkava, B. Rahmani, N. Borhani, U. Teğin, D. Loterie, G. Konstantinou, C. Moser, and D. Psaltis, “Imaging through multimode fibers using deep learning: The effects of intensity versus holographic recording of the speckle pattern,” Opt. Fiber Technol. 52, 101985 (2019). [CrossRef]  

21. M. Plöschner, T. Tyc, and T. Čižmár, “Seeing through chaos in multimode fibres,” Nat. Photonics 9(8), 529–535 (2015). [CrossRef]  

22. T. Čižmár and K. Dholakia, “Exploiting multimode waveguides for pure fibre-based imaging,” Nat. Commun. 3(1), 1027 (2012). [CrossRef]  

23. N. Shabairou, E. Cohen, O. Wagner, D. Malka, and Z. Zalevsky, “Color image identification and reconstruction using artificial neural networks on multimode fiber images: towards an all-optical design,” Opt. Lett. 43(22), 5603–5606 (2018). [CrossRef]  

Supplementary Material (3)

Supplement 1: Supplementary Material
Visualization 1: Animation showing multiple frames.
Visualization 2: Visualization of 3D image.
