High-speed volumetric imaging remains a challenge in microscopy. We demonstrate a technique for acquiring volumetric images based on extended depth-of-field microscopy with a fast focal scan and modulated illumination. By combining two frames acquired with different illumination ramps, we can perform local depth ranging of the sample at speeds of up to half the camera frame rate. Our technique is light efficient, provides diffraction-limited resolution, enables axial localization that is largely independent of sample size, and can be operated as a simple add-on with any standard widefield microscope based on fluorescence or darkfield contrast. We demonstrate the accuracy of the axial localization and apply the technique to a variety of dynamic extended samples, including the in-vivo mouse brain.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
There is a need for microscopes that provide high-speed volumetric imaging. Such microscopes could be used to characterize the fast dynamics of samples distributed in three dimensions, including bacteria in natural environments [1–3], tracer particles [4,5], or neuronal activity in brains [6–8]. Standard techniques for volumetric imaging are generally limited in speed because they require the acquisition of image stacks, as in laser scanning [9] or light-sheet microscopy [10,11]. Other techniques, such as light-field [12] or tomographic [13] microscopes, offer limited resolution or require iterative inversion algorithms. These techniques can also impose illumination and detection geometries that are sometimes inconvenient or sacrifice light efficiency.
A simple technique for fast volumetric microscopy is based on the use of extended depth-of-field (EDOF) imaging [14–16]. Part of the attractiveness of this technique is that it can often be implemented as an add-on to a conventional microscope, either scanning [17,18] or widefield [19–21]. But EDOF microscopy offers only a partial solution to the problem of 3D imaging: while it can provide diffraction-limited lateral resolution, it offers no axial resolution. Here we describe an EDOF-based technique that retains the simplicity of implementation with a standard widefield microscope while also providing depth information in the form of axial localization. Our technique is light efficient and versatile, able to provide a user-defined depth of field (DOF) of up to a few hundred microns, and can operate with fluorescent or non-fluorescent samples.
The core of our technique involves sweeping the focal plane of the microscope, located at axial position zs, over a scan range D. There are many ways of implementing a focal sweep system; while any such system would be acceptable, our method is to insert a deformable mirror (DM) into the back focal plane, or pupil plane, of an otherwise standard epi-fluorescence microscope. The curvature of the DM can be swept from positive to negative, causing the focal plane of the microscope to sweep over the full scan range D during a single camera exposure. The key benefits of using a DM are light efficiency, achromaticity, speed (we use a DM that provides a 20kHz update rate), and scan range (we have shown that even with a modest DM stroke of 2.5–3μm, we can extend the standard depth of field of the microscope by ≈ 70×). In existing focal-sweep systems, the EDOF image is a single-shot projection of the sample along the axial direction. But, as noted above, a single EDOF image contains no axial information: it is impossible to determine the depth of objects from the EDOF intensity alone. Moreover, if two objects are distributed along the same z axis, it is impossible to determine which is above the other, or indeed whether there are two objects at all.
As we demonstrate, all that is required to recover axial information is control of the illumination power during the focal sweep. This is experimentally straightforward since our illumination source is an LED, and control can be achieved with a simple function generator synchronized to the DM modulation rate (see Fig. 1). While the technique itself is straightforward, the recovery of axial localization information from our images is somewhat less so. In this paper, we describe a fast (open-loop) deconvolution algorithm that simultaneously recovers diffraction-limited lateral resolution and axial localization, largely independent of the sample geometry. We present the theory behind our technique and example applications involving both non-fluorescence and fluorescence imaging, including in-vivo imaging of mouse neuronal activity.
The basic principle behind our technique is that a modulation of the illumination power during the focal sweep encodes depth information into the recorded images, since each depth in the sample is weighted by the illumination power applied when that depth is in focus. Our method uses the simplest such modulation, a linear ramp. We acquire two EDOF images in rapid succession, one with the ramp increasing with focal depth (image I+) and the other with the ramp decreasing with focal depth (image I−). Now consider the quantity Zest = (I+ − I−)/(I+ + I−), evaluated at each pixel. If a point-like object is situated at the midpoint of the scan range, then Zest = 0. If it is deeper than the midpoint, then Zest > 0, and if it is shallower, Zest < 0. In fact, the value of Zest provides an excellent estimate of the object depth (scaled by the scan range D). However, there is a major caveat associated with this strategy: it only works for point-like objects. If the object is large, for example a uniform plane occupying the entire field of view (FOV), then Zest = 0 regardless of the axial location of the plane. In effect, the scaling factor relating Zest to object depth is not a constant but depends on the lateral extent of the sample. To ensure that our strategy works independently of the sample geometry, we must understand this size-dependent scaling factor and correct for it.
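The behavior of this ratio estimator can be checked with a short numerical sketch (Python/NumPy). The linear-ramp form and the Gaussian in-focus weighting below are illustrative assumptions, not the exact imaging model of the paper:

```python
import numpy as np

D = 60.0                                 # focal scan range (depth units)
zs = np.linspace(-D / 2, D / 2, 1001)    # focal-plane position during the sweep
m_plus = 0.5 * (1 + 2 * zs / D)          # illumination ramp rising with depth
m_minus = 0.5 * (1 - 2 * zs / D)         # ramp falling with depth

def z_est(z0, width=1.0):
    """Ratio estimator Zest for a point-like object at depth z0, modeling
    its in-focus contribution as a narrow Gaussian window in zs."""
    w = np.exp(-0.5 * ((zs - z0) / width) ** 2)
    i_plus = np.sum(m_plus * w)
    i_minus = np.sum(m_minus * w)
    return (D / 2) * (i_plus - i_minus) / (i_plus + i_minus)

print(round(z_est(15.0), 2))   # deeper than mid-range -> positive, near 15
print(round(z_est(0.0), 2))    # at mid-range -> 0
print(round(z_est(-10.0), 2))  # shallower -> negative, near -10
```

Rescaling the raw ratio by D/2 makes the estimate read directly in depth units, which is the convention used in this sketch.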
To begin, we denote as M±(zs) the normalized illumination intensities used to acquire the two EDOF images, one increasing linearly with focal scan depth zs and the other decreasing. That is, we write

M±(zs) = (1/2) (1 ± 2zs/D),    −D/2 ≤ zs ≤ D/2,    (1)

so that the two ramps sum to a constant and the total illumination dose is the same for the two frames.
Denoting as I±(ρ) the corresponding EDOF image intensities recorded at the camera position ρ, we have then

I±(ρ) = ∫ dzs M±(zs) ∫ d²ρ0 dz0 PSF(ρ − ρ0; zs − z0) O(ρ0, z0),    (2)

or, upon a 2D Fourier transform with respect to ρ (with lateral frequency κ⊥),

Ĩ±(κ⊥) = ∫ dzs M±(zs) ∫ dz0 OTF(κ⊥; zs − z0) Õ(κ⊥, z0).    (3)
We can further define the sum and difference EDOF images to be

ĨΣ(κ⊥) = Ĩ+(κ⊥) + Ĩ−(κ⊥),    (4)

ĨΔ(κ⊥) = Ĩ+(κ⊥) − Ĩ−(κ⊥).    (5)
Let us first evaluate ĨΣ (Eq. (4)). If we assume that the object of interest is located well within the focal scan range (i.e. z0 ≪ D), we can make the approximation

ĨΣ(κ⊥) ≈ D EOTF(κ⊥; D) ∫ dz0 Õ(κ⊥, z0),    (6)

where EOTF(κ⊥; D) is the extended optical transfer function [23] associated with the scan range D, defined by

EOTF(κ⊥; D) = (1/D) ∫_{−D/2}^{D/2} OTF(κ⊥; z) dz.    (7)
This extended OTF is discussed in detail in Ref. [23], where it is expressed in exact and approximate forms. We note that the right-hand side of Eq. (6) contains the projection of the object along the axial direction. In other words, the sum image ĨΣ(κ⊥) is equivalent to a conventional EDOF image acquired with constant illumination power.
From Eq. (6) we recover an expression for the extended object stripped of axial information, given by

ÕΣ(κ⊥) = ĨΣ(κ⊥) / [D EOTF(κ⊥; D)].    (8)
The evaluation of ĨΔ (Eq. (5)) can be pursued in a similar manner. Following a change of variable, z = zs − z0, this may be recast as
Making use of the fact that OTF(κ⊥; z) is symmetric in z (and hence that zOTF(κ⊥; z) is anti-symmetric) leads to
Using this modulated OTF, we can obtain an expression that recovers axial localization information:
We recall that ρ corresponds to a pixel coordinate on the camera. In other words, Eq. (14) provides an estimate of the object depth at every pixel.
Plots of EOTF (κ⊥; D) and MOTF (κ⊥; D) are shown in Fig. 2, where the lateral frequency κ⊥ is normalized to the diffraction-limited bandwidth of the microscope defined by 2NA/λ, NA being the microscope numerical aperture and λ being the optical wavelength in vacuum.
Two important features of EOTF(κ⊥; D) and MOTF(κ⊥; D) are apparent. First, both transfer functions decay to zero at the diffraction limit. This is expected, since no spatial frequencies beyond this limit can be transferred from the object to the camera. However, it poses a difficulty when implementing Eqs. (8) and (13), since any high-frequency noise in the EDOF images becomes detrimentally amplified. The standard solution to this problem [24] is to introduce a regularization parameter δ² in the denominators of both Eqs. (8) and (13), and rewrite these as
This regularization scheme works well at high frequencies, but we are faced with the difficulty that another zero occurs in MOTF(κ⊥; D) in the low-frequency limit κ⊥ → 0. The origin of this zero is just as fundamental. Consider a laterally uniform object, such that O(ρ0, z0) is constant, independent of ρ0. In this case, the image intensity recorded at the camera for both increasing and decreasing ramps is also uniform, with an intensity value independent of z0. This reflects the principle that a standard widefield microscope fundamentally cannot resolve the depth of a laterally uniform object. As a result, another regularization parameter must be introduced in Eq. (13) to prevent the denominator from vanishing at low frequencies. The optimal value of this parameter depends on the sample in question: the larger the value, the more immune the reconstruction is to noise, but the less accurate the depth ranging becomes for objects of large lateral extent (i.e. small κ⊥). In practice, the low- (δ0) and high- (δ1) frequency regularization parameters need not be the same, and we can make use of a frequency-dependent regularization parameter defined by
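The regularized inversion described above can be sketched as a Tikhonov-style inverse filter with a frequency-dependent regularization parameter that blends a larger δ0 at low frequencies into a smaller δ1 at high frequencies (Python/NumPy). The Gaussian transfer function and the Gaussian blending profile are placeholders for illustration; they are not the actual EOTF or MOTF of the paper:

```python
import numpy as np

def radial_freq(shape):
    """Radial spatial-frequency magnitude on an FFT grid."""
    ky = np.fft.fftfreq(shape[0])
    kx = np.fft.fftfreq(shape[1])
    kxx, kyy = np.meshgrid(kx, ky)
    return np.hypot(kxx, kyy)

def reg_param(kappa, delta0=0.2, delta1=0.02, k0=0.05):
    """Blend a large low-frequency delta0 into a small high-frequency delta1."""
    w = np.exp(-(kappa / k0) ** 2)
    return delta0 * w + delta1 * (1.0 - w)

def deconv(img, otf, delta):
    """Tikhonov-regularized inverse filter: F * conj(H) / (|H|^2 + delta^2)."""
    f = np.fft.fft2(img)
    return np.real(np.fft.ifft2(f * np.conj(otf) / (np.abs(otf) ** 2 + delta ** 2)))

kappa = radial_freq((128, 128))
otf = np.exp(-(kappa / 0.15) ** 2)   # toy low-pass transfer function

obj = np.zeros((128, 128))
obj[64, 64] = 1.0                    # point object
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))
restored = deconv(blurred, otf, reg_param(kappa))

print(restored[64, 64] > blurred[64, 64])  # regularized inversion sharpens the peak
```

The regularization keeps the filter bounded both where the transfer function vanishes at high frequency and near κ⊥ = 0, at the cost of attenuating the lowest spatial frequencies.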
3. Experimental validation
To begin, we experimentally evaluate the accuracy of axial localization with our modulated-illumination EDOF (MI-EDOF) strategy. In particular, we evaluate the effect of the deconvolution corrections used in Eqs. (15) and (16). For this comparison, we introduce the uncorrected depth estimator, given simply by
We measured the 3D position of 1μm diameter fluorescent beads while translating the beads axially with a calibrated mechanical stage. A Thorlabs M470L3-C blue LED, calibrated to produce linearly modulated intensity, was used to generate fluorescence at about 500nm, and an Olympus BX51 microscope with a 20×, 0.5NA objective was used to collect the light, providing a conventional depth of field of about 2μm. The camera was a PCO Edge 4.2LT, with a full-frame rate of up to 40Hz. A 140-actuator Multi-DM from Boston Micromachines Corporation (BMC) with up to 3.5μm stroke was used to achieve a total focal-scan range of about D = 60μm.
For isolated beads (Fig. 3), both Z(ρ) (Eq. (14)) and Zu(ρ) (Eq. (18)) provide accurate axial localization over the focal-scan range. Outside this range, the accuracy of the axial localization decreases dramatically, as expected. However, for larger lateral structures such as groups or rafts of beads (Fig. 4), axial localization values are only accurately recoverable when deconvolution is applied. In other words, while deconvolution is not required for sparse, point-like objects, it becomes critical for laterally extended objects.
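The need for deconvolution with laterally extended objects can be reproduced in a toy one-dimensional model (Python/NumPy), in which the uncorrected ratio estimator is applied to a point and to a uniform plane. The Gaussian defocus blur and all parameter values are illustrative assumptions:

```python
import numpy as np

D = 60.0
zs = np.linspace(-D / 2, D / 2, 301)     # focal-plane positions during the sweep
m_plus = 0.5 * (1 + 2 * zs / D)          # rising and falling illumination ramps
m_minus = 0.5 * (1 - 2 * zs / D)
x = np.linspace(-50.0, 50.0, 401)        # 1-D lateral coordinate
dx = x[1] - x[0]

def defocus_psf(z, sigma0=0.5, slope=0.3):
    """Toy 1-D defocus blur whose width grows with defocus |z|."""
    s = sigma0 + slope * abs(z)
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def z_uncorrected(profile, z0):
    """Uncorrected estimator Zu at the central pixel for a 1-D object
    `profile` located at depth z0."""
    i_plus = i_minus = 0.0
    for wp, wm, z in zip(m_plus, m_minus, zs):
        img = np.convolve(profile, defocus_psf(z - z0), mode="same") * dx
        c = img[len(x) // 2]             # central-pixel intensity at this focus
        i_plus += wp * c
        i_minus += wm * c
    return (D / 2) * (i_plus - i_minus) / (i_plus + i_minus)

point = np.zeros_like(x)
point[len(x) // 2] = 1.0 / dx            # point-like object
plane = np.ones_like(x)                  # laterally uniform object

print(round(z_uncorrected(point, 12.0), 1))  # retains most of the depth signal
print(round(z_uncorrected(plane, 12.0), 1))  # collapses toward 0 at any depth
```

For the point, the estimate tracks the true depth (attenuated somewhat by the blur tails), while for the uniform plane it collapses to zero regardless of depth, reproducing the size-dependent scaling factor discussed above.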
As pointed out in the introduction, our MI-EDOF technique also works with non-fluorescence imaging, provided the sample is illuminated with spatially incoherent light, in which case the imaging properties of the microscope are similar to those obtained with fluorescence. To demonstrate this, we imaged 4μm fluorescent beads suspended in PDMS with both fluorescence and darkfield contrast. A Thorlabs M625L3 red LED (calibrated for linearity) provided darkfield illumination from below the sample through a Cassegrain condenser with a specially cut optical block, allowing a straightforward comparison between the two imaging modes (Fig. 5). Fluorescence and darkfield images yield identical relative axial localization for each of the 4μm bead samples, with an offset between the two imaging modalities of about 3.2μm (Fig. 5). This offset, slightly larger than the conventional depth of field of the microscope, resulted from an apparent shift in the nominal focal plane, possibly caused by the change in imaging wavelength from 500nm to 625nm.
Figures 3, 4, and 5 illustrate the capacity of our MI-EDOF technique to perform axial localization of both fluorescent and non-fluorescent objects. A crucial requirement for this localization, however, is that the objects do not overlap one another in the axial direction. When such overlap occurs, our technique returns an intensity-weighted average of the depths. For example, if two punctate objects of equal brightness lie at the same lateral position but at different depths z1 and z2, our technique returns an image of only a single object located at an apparent depth (z1 + z2)/2. This intensity-weighted axial localization is apparent in Fig. 6, which shows a darkfield image of Cylindrospermum algae (Carolina Biological Supply) suspended in water. The algae are generally sparse enough that the depth profiles of individual strands can be identified; where strands overlap, the depth is reported as an intensity-weighted average.
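The intensity-weighted averaging for axially overlapping objects follows directly from the linearity of the ramp weights, as a few lines of Python make explicit (the linear-ramp model is an illustrative assumption):

```python
D = 60.0  # focal scan range

def ramp_weights(z):
    """Illumination weights seen by an object at depth z under rising
    and falling linear ramps (illustrative linear-ramp model)."""
    return 0.5 * (1 + 2 * z / D), 0.5 * (1 - 2 * z / D)

def apparent_depth(depths, intensities):
    """Depth reported for axially overlapping objects: because the ramp
    weights add linearly in intensity, the estimator returns the
    intensity-weighted average of the individual depths."""
    i_plus = sum(a * ramp_weights(z)[0] for z, a in zip(depths, intensities))
    i_minus = sum(a * ramp_weights(z)[1] for z, a in zip(depths, intensities))
    return (D / 2) * (i_plus - i_minus) / (i_plus + i_minus)

# two equal-brightness objects at z1 = 10 and z2 = -20 -> midpoint
print(apparent_depth([10.0, -20.0], [1.0, 1.0]))
# unequal brightness (3:1) pulls the apparent depth toward the brighter object
print(apparent_depth([10.0, -20.0], [3.0, 1.0]))
```

Because the ramp weights are linear in depth, the returned value is exactly the intensity-weighted mean of the individual depths.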
An application of our MI-EDOF technique is the 3D tracking of dynamic objects within large volumes. For example, we imaged E. coli swimming in water using a 40×, 0.8NA Olympus objective over an extended volume (Fig. 7). The E. coli were fluorescently labeled with GFP. The axial range was chosen to be about 30μm, and the frame rate was 8Hz. This frame rate was limited not by the camera but by the weak fluorescence intensity of the E. coli, which imposed an SNR constraint on the exposure time. Video images of the E. coli (Fig. 7) show two bacterial trajectories that intersect laterally. Without the axial information supplied by our technique, it would be difficult to determine with certainty whether the E. coli pass close enough to interact. From the modulated-intensity video (see Visualization 1), we find that the two trajectories are in fact at different depths, separated by about 20μm (see Fig. 7). In other words, the added depth information provided by our modulated-illumination technique facilitates the disambiguation of the trajectories. We emphasize that the 3D trajectories of every E. coli bacterium in our FOV can be monitored in parallel in this manner, and that our technique can be applied to large numbers of bacteria over large FOVs.
Careful observation reveals a slight apparent depth gradient across the bacterial body, particularly for the dark-blue E. coli in Fig. 7(c). We attribute this to lateral translation, which can lead to a motion-induced edge artifact. Specifically, because our technique is based on frame-by-frame subtraction, any motion faster than the camera frame rate can appear as a change in depth. While this nominally restricts the use of MI-EDOF to sample dynamics slower than the camera frame rate, knowledge of the sample trajectory can potentially be used to correct, or at least identify, this artifact.
Another important application of our MI-EDOF technique is in functional brain imaging. For example, when performing widefield epi-fluorescence imaging of neurons labeled with a calcium indicator, it is not uncommon that neurons situated one on top of another are difficult to distinguish. In such cases, intensity variations of the indicators that are signatures of neuronal activity can be difficult to associate with specific neurons, leading to erroneous signal interpretation. Our technique of axial localization provides an added dimension into which signal is encoded, thus increasing signal diversity and facilitating the identification of signal origin. As an example of this, we imaged GCaMP-labeled mouse striatum neurons in vivo. We used a 20× 0.4NA Mitutoyo objective, and acquired videos at a frame rate of 22Hz, easily fast enough to capture the GCaMP fluorescence dynamics. To correct for motion artifacts, we registered the brain images prior to deconvolution. Axial localization was performed at each frame, followed by binning and temporal filtering to reduce noise. This resulted in spatio-temporally filtered axial-localization videos that could be superposed onto the conventional intensity maps of brain activity.
Figure 8 shows the neuronal activity of two distinct overlapping neurons recorded in an MI-EDOF video (see Visualization 2). Intensity plots of the overlapping (purple) and non-overlapping (green, blue) regions show that the overlap region exhibits calcium transients associated with either neuron (Fig. 8(a)–(b)). However, the overlap intensity alone would not be sufficient to associate a particular transient with a particular neuron without recourse to statistical correlations over non-overlapping regions [25, 26]. Using our axial localization technique, analysis of the axial positioning data (Fig. 8(c)) indicates that when the green neuron is active, the apparent depth of the overlap increases (Fig. 8(d1)), whereas when the blue neuron is active, the apparent depth decreases (Fig. 8(d3)). When both neurons are simultaneously active, the depth appears unchanged (Fig. 8(d2)), since our technique provides the intensity-averaged axial position, as indicated in Fig. 6. In other words, the association of calcium transients with specific neurons can be achieved locally, using information obtained from a single image point, rather than requiring delocalized cross-correlations obtained from spatially separated image points.
This advantage becomes particularly evident when there is a slight time delay in the activity between the two neurons. Since the fluorescence intensities from the neurons add, only a single transient is apparent in the overlap region (Fig. 8(e1)). That is, using only the intensity obtained from the overlap region, it would appear that both neurons were active simultaneously. From the axial localization trace, however, there is a clear time delay between the activity of the two neurons: first the apparent depth drops, indicating the blue neuron is active, and then the depth rises, indicating the green neuron becomes active (Fig. 8(e2)). This sequence of neuron activity is verified by the intensity traces obtained from the non-overlapping regions of each neuron (Fig. 8(f1)–8(f3)), confirming a time delay between the two neurons of about 0.5s.
We have presented a modulated-illumination technique that provides axial localization in volumetric samples at video-rate acquisition times. The technique works in combination with EDOF imaging, which can be implemented as a simple add-on with a standard widefield microscope, operating with fluorescence or darkfield contrast. The signature advantage of our technique is speed. EDOF by itself provides quasi-volumetric imaging at kilohertz rates, while sacrificing axial resolution. Our modulated-illumination technique recovers axial information while only moderately slowing our EDOF acquisition.
We emphasize that our technique does not provide axial resolution per se. Rather, it provides axial localization information, in the form of an intensity-weighted average along the depth axis at each pixel. While alternative strategies to obtain such axial localization have been described [27], they do not benefit from the extended range provided by EDOF. Axial information beyond a simple intensity-weighted average could, of course, be obtained with more sophisticated modulated-illumination strategies involving the acquisition of more than two image frames, but this would undermine our speed advantage. As it is, the addition of axial localization information alone is generally sufficient to resolve ambiguous signals, such as those obtained when tracking overlapping particles or filaments, or when monitoring overlapping neuronal signals.
An important component of our technique is the implementation of a deconvolution algorithm that renders our axial localization accuracy largely independent of object lateral extent (provided the object remains smaller than the microscope FOV). This algorithm is not restricted to focal scanning with a DM, and can be applied more generally to any focal-scanning strategy for obtaining EDOF, such as stage scanning or scanning with a tunable acoustic gradient lens [5]. As with any deconvolution strategy, detection noise can lead to erroneous results, which are somewhat exacerbated in our case since we rely on image subtraction. We used a simple regularization strategy to mitigate the effects of this noise, though our strategy remains largely subjective as implemented. Significant detection noise can also motivate a somewhat different implementation of modulated illumination, in which the illumination ramps do not taper all the way to zero but instead taper to a finite value (the depth-ranging algorithm must be modified accordingly, but the modification is straightforward). Moreover, issues can arise when the illumination control is not linear, in which case a compensating lookup table may be required to recover linearity in the illumination ramps.
In either case, whether one seeks depth-ranging accuracy or only the signal diversity needed to facilitate signal disambiguation, our strategy of modulated illumination remains easy to implement, making it attractive as a general tool for widefield microscopy.
National Science Foundation Industry/University Cooperative Research Center for Biophotonic Sensors and Systems (IIP-1068070); National Institutes of Health (R21EY027549).
We thank Lei Tian and Anne Sentenac for helpful discussions. E. coli were supplied by the Mo Khalil laboratory. Mice were supplied by the Xue Han laboratory.
T. Bifano acknowledges a financial interest in Boston Micromachines Corporation.
References and links
2. W. Bishara, U. Sikora, O. Mudanyali, T.-W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, “Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array,” Lab Chip 11, 1276–1279 (2011). [CrossRef] [PubMed]
3. A. Wang, R. F. Garmann, and V. N. Manoharan, “Tracking E. coli runs and tumbles with scattering solutions and digital holographic microscopy,” Opt. Express 24, 23719–23725 (2016).
4. P. Memmolo, L. Miccio, M. Paturzo, G. DiCaprio, G. Coppola, P. A. Netti, and P. Ferraro, “Recent advances in holographic 3D particle tracking,” Adv. Opt. Photon. 7, 713–755 (2015). [CrossRef]
5. T. H. Chen, J. T. Ault, H. A. Stone, and C. B. Arnold, “High-speed axial-scanning wide-field microscopy for volumetric particle tracking velocimetry,” Exp. Fluids 58, 1–7 (2017). [CrossRef]
6. Y. Gong, C. Huang, J. Z. Z. Li, B. F. Grewe, Y. Zhang, S. Eismann, and M. J. Schnitzer, “High-speed recording of neural spikes in awake mice and flies with a fluorescent voltage sensor,” Science 350, 1361–1366 (2015). [CrossRef] [PubMed]
7. N. Ji, J. Freeman, and S. L. Smith, “Technologies for imaging neural activity in large volumes,” Nat. Neurosci. 19, 1154–1164 (2016). [CrossRef]
8. W. Yang and R. Yuste, “In vivo imaging of neural activity,” Nat. Meth. 14, 349–359 (2017). [CrossRef]
9. J. E. Pawley, Handbook of Biological Confocal Microscopy, 3rd. Ed. (Springer, 2006). [CrossRef]
10. J. Huisken, J. Swoger, F. Del Bene, J. Wittbrodt, and E. H. K. Stelzer, “Optical Sectioning Deep Inside Live Embryos by Selective Plane Illumination Microscopy,” Science 305, 1007–1009 (2004). [CrossRef] [PubMed]
11. M. B. Ahrens, M. B. Orger, D. N. Robson, J. M. Li, and P. J. Keller, “Whole-brain functional imaging at cellular resolution using light-sheet microscopy,” Nat. Meth. 10, 413–420 (2013). [CrossRef]
12. M. Levoy, “Light fields and computational imaging,” IEEE Computer 39, 46–55 (2006). [CrossRef]
13. P. Llull, X. Yuan, L. Carin, and D. J. Brady, “Image translation for single-shot focal tomography,” Optica 2, 822–825 (2015). [CrossRef]
14. W. T. Welford, “Use of Annular Apertures to Increase Focal Depth,” J. Opt. Soc. Am. 50, 749–753 (1960). [CrossRef]
15. G. Hausler, “A method to increase the depth of focus by two step image processing,” Opt. Commun. 6, 38–42 (1972). [CrossRef]
17. P. Dufour, M. Piché, Y. De Koninck, and N. McCarthy, “Two-photon excitation fluorescence microscopy with a high depth of field using an axicon,” Appl. Opt. 45, 9246–9252 (2006). [CrossRef] [PubMed]
18. R. Lu, W. Sun, Y. Liang, A. Kerlin, J. Bierfeld, J. D. Seelig, D. E. Wilson, B. Scholl, B. Mohar, M. Tanimoto, M. Koyama, D. Fitzpatrick, M. B. Orger, and N. Ji, “Video-rate volumetric functional imaging of the brain at synaptic resolution,” Nat. Neurosci. 20, 620–628 (2017). [CrossRef] [PubMed]
19. S. Abrahamsson, S. Usawa, and M. Gustafsson, “A new approach to extended focus for high-speed, high-resolution biological microscopy,” Proc. SPIE 6090, 60900N (2006). [CrossRef]
21. B. F. Grewe, F. F. Voigt, M. van ’t Hoff, and F. Helmchen, “Fast two-layer two-photon imaging of neuronal cell populations using an electrically tunable lens,” Biomed. Opt. Express 2, 2035–2046 (2011). [CrossRef] [PubMed]
24. M. Bertero and P. Boccacci, Introduction to Inverse Problems in Imaging (IOP Publishing, 1998). [CrossRef]
26. L. Theis, P. Berens, E. Froudarakis, J. Reimer, M. R. Roson, T. Baden, T. Euler, A. S. Tolias, and M. Bethge, “Benchmarking Spike Rate Inference in Population Calcium Imaging,” Neuron 90, 471–482 (2016). [CrossRef] [PubMed]
27. M. Watanabe and S. K. Nayar, “Rational filters for passive depth from defocus,” Int. J. Comp. Vision 27, 203–225 (1998). [CrossRef]