Optica Publishing Group

Experimental characterization of 3D localization techniques for particle-tracking and super-resolution microscopy

Open Access

Abstract

Three-dimensional (3D) particle localization at the nanometer scale plays a central role in 3D particle tracking and 3D localization-based super-resolution microscopy. Here we introduce a localization algorithm that is independent of theoretical models and therefore generally applicable to a large number of experimental realizations. Applying this algorithm and a convertible experimental setup we compare the performance of the two major 3D techniques based on astigmatic distortions and on multiplane detection. In both methods we obtain experimental 3D localization accuracies in agreement with theoretical predictions and characterize the depth dependence of the localization accuracy in detail.

©2009 Optical Society of America

1. Introduction

Imaging a point-like source of light with conventional lenses results in a focus of diffraction-limited size, typically half of the observed wavelength wide (x- and y-directions) and about two to three times that value along the optical (z-) axis. The focal intensity distribution usually resembles an Airy-disk shape in the image plane with a full width at half maximum (FWHM) of d = 0.61 λ/(n sin α), where λ denotes the wavelength, n the refractive index, and α the half opening angle of the objective lens. The center of this intensity distribution can be determined with an accuracy, or standard deviation, σ, usually much better than the diffraction limit d. For negligible background noise and sampling bin sizes in the detection process, σ is simply proportional to d·Ndet^(−1/2), where Ndet is the number of detected photons. This fact has been utilized successfully in particle-tracking experiments for decades to follow objects at or below the resolution limit with accuracies down to the single-nanometer level [1–3]. Recently, this concept has also entered the emerging field of super-resolution microscopy [4, 5]. In these techniques, called ‘FPALM’, ‘PALM’, ‘STORM’, or ‘PALMIRA’, biological samples are labeled with photoactivatable fluorescent molecules. Only a sparse distribution of single fluorophores is activated, and hence imaged, at any time by a sensitive camera. This allows the diffraction-limited intensity distributions of practically every fluorescing molecule to be spatially separated and localized individually, with σ typically in the 10 nm range. By bleaching or deactivating the fluorescing molecules during the read-out process while simultaneously activating additional fluorophores, a large fraction of the probe molecules is imaged over a series of many image frames. A super-resolved image at typically 20–30 nm resolution (measured as the FWHM of a distribution; ~2.4σ) is finally assembled from the determined single-molecule positions.
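As a back-of-the-envelope illustration of the scaling σ ∝ d·Ndet^(−1/2), the following sketch evaluates the FWHM formula and the resulting localization accuracy. The parameter values (emission wavelength, NA, photon count) are chosen purely for illustration, matching typical numbers from this paper:

```python
import math

def airy_fwhm(wavelength_nm, n, alpha_rad):
    """Lateral focal-spot FWHM d = 0.61 * lambda / (n * sin(alpha))."""
    return 0.61 * wavelength_nm / (n * math.sin(alpha_rad))

def localization_accuracy(d_nm, n_det):
    """sigma ~ d / sqrt(N_det), valid for negligible background and pixelation."""
    return d_nm / math.sqrt(n_det)

# Illustrative numbers: 560 nm emission, NA 1.2 water-immersion objective (n = 1.33)
alpha = math.asin(1.2 / 1.33)            # half opening angle from NA = n*sin(alpha)
d = airy_fwhm(560, 1.33, alpha)          # ~285 nm diffraction-limited FWHM
sigma = localization_accuracy(d, 400)    # ~14 nm for 400 detected photons
```

With the ~400 photons per bead and frame reported in Section 3, this simple estimate already lands in the 10–20 nm range quoted for localization-based super-resolution methods.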

Recently, particle tracking of sub-cellular fluorescent components and localization-based super-resolution microscopy have advanced from two-dimensional (2D) imaging to the third dimension. Localization in the z-direction is complicated by the fact that camera images are 2D. Different z-positions do not result in easily detectable shifts of the center of mass, as lateral displacements do in the 2D case. The axial position must instead be deduced from the defocused 2D intensity distributions, taking into account the complex dependence of the focal intensity distribution on the axial coordinate. Analyzing the diameter of the rings appearing in defocused images, for example, allows the particle's z-position to be inferred [6, 7]. A major obstacle is the axial symmetry of the intensity distribution (in a perfect microscope): for an observed 2D image, an axial position of z0 is just as consistent with the data as −z0. To break this symmetry, two concepts have been developed and successfully demonstrated in particle tracking and localization-based super-resolution microscopy.

  • Introducing astigmatism into the detection path (typically by a cylindrical lens) leads to intensity distributions that are elliptically stretched along one lateral axis or the other, depending on the axial position of the imaged particle. First used by Kao and Verkman to track fluorescent particles in 3D [8], this approach has recently been applied to track single quantum dots in cells [9] as well as in localization-based 3D super-resolution microscopy [10].
  • Recording images in different focal planes simultaneously also provides a means to determine the axial position of a particle uniquely. This multi-plane detection approach has been used successfully in slightly varying arrangements to track particles down to single quantum dots within cells [11–15] and has recently been applied to localization-based 3D super-resolution microscopy [16].

Determining the 3D position of a particle by either of these methods relies on fitting a model function to the experimental data. From the parameters that best fit the experimental data, according to a chosen figure of merit, the particle position (and typically also its brightness and a background value) can be deduced. In most cases, an analytical function is used to model the characteristics that dominantly encode the 3D particle position, e.g. the diameter of the defocused image or its ellipticity in the case of astigmatism. Mapping the determined fit parameters to real spatial positions is achieved by calibrating the model function with imaged particles located at known positions. This indirect method, in particular the derivation of the z-position from abstract fit parameters such as ellipticity or ring diameter, is however prone to artifacts in the analysis process. Experimental deviations from the theoretical description by the model function can lead to divergence between real and measured particle positions. Additionally, every model function is limited to a certain optical setup and weighs the information content of the raw data differently. This prevents a direct comparison of different optical setups.

In this publication, we describe a novel method that utilizes experimentally obtained 3D point-spread functions (PSFs) to fit data sets obtained by either a multi-plane or an astigmatism approach. Practically all raw data that contributes to the image of a particle is taken into account by the fitting process according to its statistical weight, which is especially important for photon-limited applications such as imaging single molecules. Additional calibration steps are not required because the raw data and the fit-PSF are acquired with the same setup.

This new approach allows us to methodically compare the experimental performance of both optical approaches which we realized in the same microscopy setup. For the comparison presented here, we have chosen weak signal conditions similar to those observed in particle-tracking experiments of dim fluorescent particles or localization-based super-resolution microscopy.

2. Setup

2.1 Optical setup

We have realized a setup that is easily convertible between multiplane detection and an astigmatic detection scheme as shown schematically in Fig. 1. The microscope is based on a commercial inverted microscope stand (Axio Observer D1, Carl Zeiss MicroImaging, Inc., Thornwood, NY). The beam of a 532 nm diode-pumped solid state laser (AiXiZ, Houston, TX) is expanded and illuminates a rectangular field aperture (FA) of ~3 mm × 6 mm size. The laser power can be adjusted by reflective neutral density filters (Edmund Optics, Barrington, NJ). The illuminated field aperture is then projected into the microscope sample by a 500 mm focal length singlet lens and a 63×/1.2 NA water immersion objective (Plan-Apo 63×/1.2 w, Carl Zeiss MicroImaging, Inc., Thornwood, NY), resulting in a rectangular illumination field of about 15 μm × 30 μm size. The laser light enters the stand at the back through the port usually used as the lamp port. All optical elements apart from the dichroic beamsplitter (FF552-Di02, Semrock, Rochester, NY), which reflects the laser light into the objective, have been removed from the illumination path in the stand. By adjusting the mirrors (M1, M2) located between the field aperture and the stand, we centered the illumination field in the middle of the field of view as seen through the eyepieces, and centered the focused laser beam in the middle of the objective's back aperture, to ensure illumination conditions nearly independent of the axial sample position. To scan the sample axially, the objective was mounted on a piezo actuator (PIFOC P-721.CLQ, Physik Instrumente L.P., Irvine, CA).


Fig. 1. Schematic of the combined biplane and astigmatism 3D localization setup. FA – field aperture; L – lens; obj – objective; N – neutral beamsplitter cube; M – mirrors; CL – cylindrical lens; D – dichroic; TL – tube lens; FP – focal plane; F – band pass filter. Not shown: laser beam expansion, shutter and mirrors in front of the field aperture. The dashed box in (a) can be configured as shown in (b) and (c).


Fluorescence is collected through the objective, passing the dichroic beamsplitter, a bandpass filter also mounted in the filter cube (FF01-585/40, Semrock), and the 1× tube lens, leaving the stand through one of the side ports. The intermediate focal plane at the side port exit is then imaged onto an electron-multiplying CCD camera (DU897DCS-BV, Andor Technology, South Windsor, CT; 16 μm × 16 μm pixel size) by achromat lenses L1 and L2 (AC254-050-A1, f = 50 mm and AC254-200-A1, f = 200 mm, Thorlabs, Newton, NJ) mounted on a rail system (MDR, Siskiyou Corp., Grants Pass, OR). For multiplane detection, a neutral beamsplitter cube, N, (BS016, Thorlabs) between the camera and L2 can be flipped into the beam, reflecting about 42% of the light towards a mirror, M4, which directs it back to the camera as described in our recent publication [16] and shown in Fig. 1(c). For astigmatic detection, an f = 1000 mm plano-convex cylindrical lens, CL, (LJ1516L1-A, Thorlabs) between the intermediate focal plane and L1 can alternatively be introduced into the beam as depicted in Fig. 1(b). CL was oriented vertically, resulting in additional focusing in the x-direction in the image plane.

Moving CL along the beam path relative to L1 allows for adjustment of the degree of astigmatism. To alter the distance between the planes in biplane mode, we move M4 closer or further away from N and adjust its angle accordingly. Apart from small variations due to these adjustments, the overall optical magnification in the detection path is 252× corresponding to a camera pixel size equivalent to 63 nm in the sample.
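The quoted magnification follows directly from the objective and the relay formed by L1 and L2; a quick arithmetic check (no assumptions beyond the focal lengths and pixel size given above):

```python
# Detection-path magnification: 63x objective times the L2/L1 relay (f = 200 mm over f = 50 mm)
objective_mag = 63
relay_mag = 200 / 50                     # 4x relay magnification
total_mag = objective_mag * relay_mag    # 252x overall
pixel_nm = 16_000 / total_mag            # 16 um camera pixels map to ~63 nm in the sample
```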

Switching of the setup between the two imaging modes can be performed in less than a minute. Illumination, especially the field of view and the intensity, as well as the camera and all components in the stand and the sample remain unchanged.

2.2 Software

Image acquisition software that also controls camera and piezo actuator parameters was written in LabVIEW 8.2 (National Instruments, Austin, TX) under Windows XP. Recorded data is stored in a raw data format that is later analyzed by separate analysis software programmed in C which runs on a Linux computer cluster (31 compute nodes, each equipped with two dual-core AMD Opteron processors and 16 GB of RAM and connected by a single gigabit Ethernet network). Alternatively, the code, embedded in a LabVIEW environment, can be executed on regular personal computers. Currently, the complete localization process requires about three seconds of single processor time per particle which makes use of a cluster preferable. Since the major fraction of computing time is required to calculate multiple Fourier transforms (see Subsection 2.3), the recent development of using inexpensive graphics processing units (GPUs) for numerical computations is a promising step to accelerate processing dramatically (see for example [17]).

In brief, regions of interest (ROIs) corresponding to the illuminated field of view are cut out automatically. In biplane detection mode, the two ROIs in every frame representing the two detected planes are co-registered by slightly tilting and magnifying one ROI according to calibration values determined earlier. Particles are identified as the brightest pixels in smoothed versions of the ROIs. For every identified particle, one ROI (two ROIs) of 15 × 15 pixels at 2 × 2 binning, corresponding to 1.9 μm × 1.9 μm in the sample, is cut out from the non-smoothed data centered on the identified brightest pixel in the astigmatic (biplane) detection mode. This data is then corrected for an electronic offset in the signal stemming from the camera, translated from counts into numbers of photons, and fed into the fit algorithm. The algorithm (see Subsection 2.3) outputs the best estimates for the three spatial particle coordinates as well as the amplitude and a background value. The results are then stored, together with other parameters that indicate the quality of the fit (χ2-values, number of iterations before convergence, etc.), in ASCII data lists. The lists are later compiled into the data presented below using Microsoft Excel, Origin (OriginLab, Northampton, MA), and LabVIEW.
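The offset subtraction and counts-to-photons conversion step can be sketched as follows. This is an illustrative re-implementation, not the authors' code: the offset value is a placeholder assumption, while the gain of 52 counts per detected photon is the calibration factor quoted in Section 3:

```python
import numpy as np

CAMERA_OFFSET = 100.0       # electronic offset in counts (placeholder assumption)
COUNTS_PER_PHOTON = 52.0    # EM-gain calibration factor quoted in Section 3

def counts_to_photons(roi_counts, offset=CAMERA_OFFSET, gain=COUNTS_PER_PHOTON):
    """Subtract the camera's electronic offset and convert counts to photons."""
    photons = (np.asarray(roi_counts, dtype=float) - offset) / gain
    return np.clip(photons, 0.0, None)  # clip negative values caused by read noise

# A 2x2 toy ROI: 100, 620, 1140 and 100 counts become 0, 10, 20 and 0 photons
roi = np.array([[100.0, 620.0], [1140.0, 100.0]])
photon_roi = counts_to_photons(roi)
```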

The same software, and in particular the same fit algorithm, was applied to data from both imaging modes, with the only difference being that in biplane mode two ROIs are used instead of one. Combined with the minimal changes required in the optical setup, this ensures optimal conditions for a direct and thorough comparison of the two 3D localization methods.

2.3 Fit algorithm

For every identified particle, our localization routine performs a least-squares fit based on the Nelder-Mead downhill simplex method [18]. In short, this method finds the best fit by successively contracting a polytope (a “simplex” with m + 1 vertices) around the minimum of the figure-of-merit function in the m-dimensional parameter space.

The figure-of-merit function, χ2, is calculated as the squared, error-weighted differences between the observed numbers of photons, nj, and a model function, F, which depends on a set of fit parameters (v, b, a; see below), summed over all pixels j that describe the image of an identified particle:

\[
\chi^2(v,b,a)=\sum_j\left(\frac{n_j-F_{v,b,a}(x_j)}{\sigma_j}\right)^{2}
\tag{1}
\]

The 3D positions, xj, describe the coordinates in the sample and correspond to the lattice of 15 × 15 × 2 or 15 × 15 × 1 extracted pixels in biplane and astigmatic detection mode, respectively. Alternatively, they can represent any distribution matching the experimental imaging conditions. σj is the estimated statistical error of nj, which we assume to be nj^(1/2) since shot noise is our main error contribution. To fit the particle data, our model function F_{v,b,a}(x) = v·ha(x) + b depends on m = 5 parameters, namely the particle's 3D position a = (ax, ay, az), the number of photons, v, detected at the intensity maximum over the area of one pixel, and the number of background photons, b, per pixel. ha(x) describes the normalized instrument response at point x for a particle located at position a. It resembles a normalized PSF shifted by a and is derived from experimentally obtained PSFs as described in Subsection 2.5.
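A minimal version of this weighted least-squares fit can be sketched with SciPy's Nelder-Mead implementation. To keep the sketch self-contained, a 2D Gaussian stands in for the measured PSF ha(x) and only four parameters (v, b, ax, ay) are fitted; all numbers are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def h(x, y, ax, ay, w=1.5):
    """Stand-in instrument response: a 2D Gaussian of width w pixels
    (an assumption for this sketch; the paper uses measured PSFs)."""
    return np.exp(-((x - ax) ** 2 + (y - ay) ** 2) / (2 * w ** 2))

yy, xx = np.mgrid[0:15, 0:15].astype(float)   # 15 x 15 pixel ROI, as in the paper

def chi2(p, n_obs):
    """Shot-noise-weighted chi^2 of Eq. (1): sigma_j^2 = n_j (floored at 1
    photon as a numerical safeguard, an assumption of this sketch)."""
    v, b, ax, ay = p
    model = v * h(xx, yy, ax, ay) + b
    return np.sum((n_obs - model) ** 2 / np.maximum(n_obs, 1.0))

# Synthetic "observed" frame: particle at (7.3, 6.8), 50 peak photons, 1 photon bg
rng = np.random.default_rng(0)
n_obs = rng.poisson(50 * h(xx, yy, 7.3, 6.8) + 1.0).astype(float)

fit = minimize(chi2, x0=[40.0, 0.5, 7.0, 7.0], args=(n_obs,), method="Nelder-Mead")
v_fit, b_fit, ax_fit, ay_fit = fit.x          # sub-pixel position estimates
```

In the actual algorithm, ha(x) is interpolated from the experimental 3D PSF (Subsection 2.5) and the parameter space is five-dimensional, including az.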

Our PSF h0(ξl) is defined on a lattice of voxel coordinates ξl, which are obtained from the original PSF data set as described in Subsection 2.5. The required values ha(xj) have to be determined from h0(ξl) by interpolation, making use of the fact that, for a translationally invariant system, ha(x) = h0(x − a). PSF values therefore have to be interpolated from the discrete function h0 at the positions x − a. Simple linear interpolation and related methods generate points of non-differentiability, which can prevent proper convergence of the simplex method and induce localization artifacts. To address this issue, we developed the following interpolation method based on Fourier transforms.

Estimating the value of a sampled function at a point of interest, x − a, can be interpreted as determining the function value at the nearest node ξl after shifting the whole function by the amount necessary to make x − a coincide with ξl. This shift can be achieved by convolving the function with a Dirac delta distribution:

\[
h_a(x_j)=h_0(\xi_l)\otimes\delta\bigl(\xi_l-D_a(x_j)\bigr)
\tag{2}
\]

where Da(xj) = ξl − (xj − a) is the vector between the position xj − a and its nearest neighbor ξl. In Fourier space, the convolution takes the simple form of a multiplication,

\[
\mathrm{FT}\{h_a(x_j)\}=H_0(\kappa_l)\,e^{-i\kappa_l\cdot D_a(x_j)}
\tag{3}
\]

H0(κl) is the optical transfer function (OTF) of the system and is calculated once at the beginning of the fit procedure as the Fourier transform of h0(ξl). Multiplication with the parameter-dependent phase factor exp(−iκl·Da(xj)) and an inverse Fourier transform yield the shifted PSF ha(xj).

The spacing of the pixel positions, xj, can easily be chosen as an integer multiple of the spacing of the PSF nodes, ξl. In this case, Da(xj) is independent of j, and a single inverse Fourier transform of Eq. (3) is sufficient to find all the PSF values required to calculate χ2 for a given shift a.
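The shift-by-phase-ramp interpolation of Eq. (3) can be sketched in a few lines of NumPy. This is an independent re-implementation for illustration, not the authors' C code; the FFTW calls of the original are replaced by numpy.fft:

```python
import numpy as np

def fourier_shift(psf, shift):
    """Shift a sampled PSF by a (possibly fractional) number of voxels by
    multiplying its Fourier transform with a phase ramp, cf. Eq. (3)."""
    otf = np.fft.fftn(psf)  # H_0(kappa); computed once per fit in practice
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in psf.shape], indexing="ij")
    phase = np.exp(-2j * np.pi * sum(f * s for f, s in zip(freqs, shift)))
    return np.fft.ifftn(otf * phase).real  # band-limited interpolation of h_a

# Sanity check: a whole-voxel shift reproduces np.roll exactly
psf = np.zeros(16)
psf[8] = 1.0
shifted = fourier_shift(psf, (3,))
```

Unlike linear interpolation, this shift varies smoothly with the shift parameter, which is what avoids the non-differentiability and the associated convergence failures of the simplex search mentioned above.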

Because of small experimental differences between the two biplane detection PSFs, we use a slight modification of the described method which performs the PSF translation described by Eq. (3) simultaneously for both PSFs. Values ha(xj) are extracted from the appropriate OTF according to the detection plane in which xj is located. To calculate the discrete Fourier transforms, our routine uses the freely available FFTW library by M. Frigo and S. G. Johnson [19].

2.4 Sample

In all experiments, we imaged fluorescent latex beads of 100 nm diameter with an emission maximum at 560 nm (F-8800, Invitrogen, Carlsbad, CA). Beads were adhered to poly-L-lysine-coated (Sigma-Aldrich, St. Louis, MO) cover slips, immersed in water, and mounted on a slide. The bead density was chosen low enough that only about eight to twelve beads were visible in the field of view. This guaranteed that fluorescence from neighboring particles did not influence the analysis.

2.5 Generation of the point-spread function

In our localization algorithm, an experimentally obtained PSF replaces the theoretical models used elsewhere. To rule out localization artifacts caused by this PSF, great care was exercised in its generation.

The same bead samples as in the later experiments were imaged at maximum electron-multiplying gain of the camera without pixel binning. Single frames were recorded with acquisition times of 30 ms at 50 nm axial piezo steps over a range of 10 μm. Typically, ~3,000 photons were detected from each bead at each z-position near the focal plane. Single beads were then identified visually from the recorded data stacks, and ROIs of 3.8 μm × 3.8 μm size centered on the signal maximum were extracted. In the case of biplane detection, stacks were cut out for both recorded planes resulting in two correlated PSFs. The extracted stacks were then loaded into the data processing software Imspector (written by Dr. Andreas Schoenle, Max Planck Institute for Biophysical Chemistry, Goettingen, Germany, available via Max-Planck-Innovation GmbH, Munich, Germany).


Fig. 2. Experimental point-spread functions and their profiles for biplane detection (a-c) and astigmatic detection (d-f). (a) Scanning a single point-like particle in the z-direction with the biplane detection scheme yields two 3D data stacks, depicted by their center cross sections. Due to the different image planes of the beam paths, the particle is in focus at different axial positions. (b) Profiles of the PSFs along the optic axis for the transmitted (black) and reflected (red) light paths. Due to the 42:58 splitting ratio of the beamsplitter cube, the reflected PSF shows a lower intensity. (c) FWHM of both PSFs in the x- and y-directions as a function of the axial position. (d), (e) and (f) show the same information as depicted in (a), (b) and (c), respectively, for the astigmatic detection case. Note that in this case only one PSF is created. The small cuts in (d) show sections in y-z and x-z orientation through the PSF center. Scale bar in (a) and (d): 500 nm. The data displayed in (c) and (f) is averaged over 12 PSFs. The color tables in (a) and (d) are normalized to the individual maximum values in the 3D stacks.


In Imspector, the background was removed and the data was corrected for bleaching that occurred during the imaging process by dividing the data set values by exp(−λz), where z is the axial coordinate of every pixel. The constant λ, a measure of the degree of bleaching, was determined by comparing the intensity values of two 3D data sets of the same bead recorded immediately after one another. The PSFs were then cut to a size of approx. 3.8 μm × 3.8 μm × 7.5 μm with the PSF centered in the middle. In biplane mode, both PSFs were cut in an identical way, so that the stack centers were located axially in the middle between the two PSF centers and the PSF centers maintained their original axial distance. The PSFs were then resampled to voxel sizes close to the resolution limit (x = y = 127 nm, z = 200 nm), which reduced noise. This also assured maximum processing speed in the fit algorithm, which, due to the included Fourier transformation steps, depends strongly on the number of PSF voxels, without altering the optical characteristics of the original PSF. The PSF was further normalized to a maximum value of 1 for easier determination of reasonable start parameters for the fit algorithm. In biplane mode, the brighter PSF was normalized to 1 and the other to an accordingly lower value.
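The bleach correction can be sketched as follows. The exponential decay model and the estimation of λ from two consecutive recordings follow the description above; the function names and all numbers are illustrative assumptions:

```python
import numpy as np

def bleach_correct(stack, z_step_um, lam_per_um):
    """Divide each z-slice of a (z, y, x) stack by exp(-lambda * z) to undo
    the exponential bleaching accumulated during the axial scan."""
    z = np.arange(stack.shape[0]) * z_step_um
    return stack / np.exp(-lam_per_um * z)[:, None, None]

def estimate_lambda(stack_a, stack_b, z_range_um):
    """Estimate lambda from two back-to-back recordings of the same bead:
    the second stack is dimmer overall by exp(-lambda * z_range)."""
    ratio = stack_b.sum() / stack_a.sum()
    return -np.log(ratio) / z_range_um
```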

Figure 2 shows typical PSFs obtained in both modes and displays their axial profiles as well as the variation of the lateral FWHMs measured at different z-positions of the 3D PSF. The single 3D PSFs recorded by the biplane setup resemble conventional widefield PSFs axially shifted according to the distance between the two image planes. The PSF recorded in astigmatic detection mode shows the expected anti-symmetric relationship between a center x-z and a center y-z section caused by the astigmatism. For future reference, we denote both the shift between the two PSFs in biplane mode and the distance between the plane of maximum x-focusing and that of maximum y-focusing by the letter Δ (see Figs. 2(c) and 2(f)).

Table 1 shows the obtained FWHM values of the recorded PSF intensity profiles. For different axial distances between the two detection planes in the biplane detection scheme these values varied only slightly. Considering the detection wavelength range of 565 nm to 605 nm, all obtained FWHM values are in good agreement with theoretical predictions for regular widefield detection PSFs. The best lateral FWHM values in the astigmatic mode are comparable to the best values in biplane detection.


Table 1. FWHM of measured PSFs

3. Experimental results

To quantify the localization accuracy and axial localization range of biplane and astigmatic detection, we imaged and analyzed beads over a range of defined axial objective positions. In contrast to theoretical approaches, we cannot easily derive a lower bound for the localization accuracy, such as the Cramér-Rao bound [20], from experimental data. Instead, we repeatedly measured and analyzed each fluorescent bead 100 times and used the standard deviation of the distribution of measured particle positions as an indicator of the achieved localization accuracy. This approach is very close to the practical application, which has the advantage of avoiding the assumptions and simplifications of a more theoretical treatment, but it is also influenced by additional factors such as drift over the course of the measurement. To reduce temporal effects, we minimized image acquisition times. No significant drift or intensity variations could be observed over the course of 100 recorded frames.
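The accuracy measure used throughout Section 3 is simply the per-axis standard deviation of the repeated localizations; as a sketch with synthetic positions (the scatter values are illustrative only):

```python
import numpy as np

def localization_accuracy(positions):
    """Per-axis standard deviation of repeated position measurements of the
    same immobilized bead -- the accuracy indicator used in Section 3."""
    positions = np.asarray(positions, dtype=float)
    return positions.std(axis=0, ddof=1)

# 100 simulated localizations of one bead: ~10 nm lateral, ~25 nm axial scatter
rng = np.random.default_rng(1)
xyz = rng.normal(loc=[0.0, 0.0, 0.0], scale=[10.0, 10.0, 25.0], size=(100, 3))
sx, sy, sz = localization_accuracy(xyz)
```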

A z-range of 4 μm, roughly centered on the middle between the two planes and on the astigmatic focal plane, respectively, was scanned in 50 nm steps. One hundred images were recorded at each axial position. The EM-CCD camera was set to maximum electron-multiplying gain, corresponding to a calibration factor of 52 counts per detected photon, and 2 × 2 binning. With a laser intensity of ~200 mW/cm2 and an acquisition time of 30 ms/frame, typically Ndet = 400 photons/bead were detected in every frame for positions within a few hundred nanometers of the focal planes. Approximately 1 photon/pixel was measured as the standard deviation of the background noise. These numbers represent typical values detected from single photoactivatable fluorescent proteins in cultured cells used in FPALM or PALM [21, 22] and also correspond to noise levels observed in Biplane FPALM imaging. For particles far out of focus, a decrease in signal resulting from blurring of the intensity distribution over areas larger than the ROIs could be observed. The recorded frames were then fed into our particle identification and localization algorithm described in Subsections 2.2 and 2.3.

3.1 Biplane detection

In biplane detection mode, we compared the performance achieved with distances between the two planes of 400 nm, 500 nm and 600 nm in the sample. These values can be realized by placing mirror M4 approximately 25 mm, 32 mm and 38 mm away from the center of the beamsplitter cube, respectively.

Figure 3 shows results from localizing beads with these three arrangements. At about 400 detected photons per bead and frame, the beads could be localized over a range at least twice as large as the plane distance. Over the ~5 min recording time, we observed a small sample drift of less than 100 nm. In real particle-tracking or super-resolution microscopy applications, this can be compensated, for example, by imaging fiduciary markers fixed to the cover slip in parallel with the particles of interest, which allows sample drift to be monitored. Subtracting the piezo-actuator-driven z-movement from the determined z-positions (gray data points in Figs. 3(a), 3(c) and 3(e)) shows that, even for particles located beyond the space contained between the two focal planes, the algorithm does not introduce any significant distortion in localizing particles axially.

Lateral localization accuracies show a similar axial dependence to the lateral FWHM (shown for example in Fig. 2 for Δ = 500 nm), but the curves are flatter. The reason is that at every z-position the sharper of the two detected images contributes more strongly to the lateral localization. The x- and y-localization accuracies are nearly identical; small differences can be explained by a minor lateral anisotropy of the PSFs. z-localization is generally less accurate than lateral localization. In fact, the ratio between the two values is about a factor of 2 to 3, which agrees with the ratio of axial to lateral FWHMs of our experimental PSFs. This shows that the approximate proportionality between localization accuracy and FWHM can be generalized to three dimensions.

In our experiment, the observed axial localization accuracy is worse for negative axial positions than for positive ones. The reason is a slight axial asymmetry of our PSFs: a particle at positive axial positions shows distinct intensity rings in the more distant detection plane, while a particle positioned on the other side shows a more homogeneous intensity distribution in its distant detection plane. The strong intensity modulation in the first case makes it easier to locate the particle axially, since even small z-shifts lead to distinct changes in the ring pattern. This effect is strongest if the particle is nearly in focus in one of the planes: the in-focus lateral intensity distribution is insensitive to small axial shifts, and the axial localization therefore depends in this case solely on the out-of-focus detection plane.


Fig. 3. Localization of fluorescent beads with the biplane detection scheme at plane distances of 400 nm (a-b), 500 nm (c-d) and 600 nm (e-f) in the sample. 100 images were taken at each objective z-position with 50 nm z-steps in between. (a, c, e) show the results of a single bead localized by the fit algorithm. Determined x, y and z-positions are shown in blue, green and red, respectively. The dark gray points show the z-position corrected for the actual z-movement of the objective. All position values are offset by a constant, arbitrary value to avoid overlap between the curves. The insets show the area depicted by the black rectangles enlarged to give a better view of the data. (b, d, f) show the localization accuracy determined as standard deviation σ from the 100 images taken at each z-position. The gray area denotes the fraction Φ of images in which the fit properly converged and the bead could be located.


Our fit algorithm converges in close to 100% of the cases (see gray areas in Fig. 3) over a range of 1 to 2 μm, as denoted in Figs. 3(b), 3(d) and 3(f) by the fraction Φ of images in which the particle could be localized correctly. Φ depends on Δ and the number of detected photons, Ndet (see Subsection 3.3). Only for axial particle positions far away from either of the focal detection planes does our algorithm fail to converge properly.

3.2 Astigmatic detection

For astigmatic detection mode, we placed a cylindrical lens as described in Subsection 2.1 into the detection path. By placing the lens ~10 mm, ~20 mm and ~30 mm in front of the lens L1, the distance Δ between the plane of maximum focusing in the x-direction and the plane of maximum y-focusing could be adjusted to ~375 nm, ~475 nm and ~575 nm in the sample, respectively. For all three realizations, we recorded beads at different axial objective positions and determined their 3D position in the same way as for the biplane measurements described in the previous subsection.


Fig. 4. Localization of fluorescent beads with the astigmatic detection scheme at distances Δ of 375 nm (a-b), 475 nm (c-d) and 575 nm (e-f) between the planes of maximum x and y-focusing in the sample. Data was analyzed and is presented as described for Fig. 3.


Figure 4 shows the experimental results of one bead for each Δ. The beads could be localized over a range of about 1.2 μm. The axial objective position is nicely reproduced by the axial fit results. As expected, the dependence of the localization accuracy on the axial particle position differs from the biplane detection scheme. In the lateral direction, it closely follows the trend given by the lateral FWHM of the PSF, as seen in Fig. 2(f). Lateral localization accuracy therefore depends more strongly on the axial particle position than in biplane detection.

The best localization accuracies for the individual directions are found at different axial particle positions. This limits the shared window in which the localization accuracies in all directions stay within certain limits, for example within a factor of two of their minimum values. The best axial localization accuracy is again about two to three times the best lateral values, reflecting the axially elongated shape of the PSF. The overall minima in lateral localization accuracy are slightly smaller (by about 20%) than in biplane detection. The x- and y-localization accuracies, however, vary much more strongly with the axial particle position.

3.3 Signal dependence of localization accuracy and range

3D localization accuracy as well as the axial range over which particles can efficiently be detected and localized depend on the number of detected photons. To investigate these relationships, we compared the data obtained from beads of different brightness imaged with the two detection schemes.

Beads were recorded in the same way as described above. To determine the number of detected photons, the signal of a bead located close to the detection planes was summed over an ROI of 9 × 9 pixels (for biplane detection, in both planes) and corrected for the background level determined from the surrounding pixels. The determined signal level was averaged over 100 recorded frames of the bead at the same position. Figure 5 summarizes the results. The axial localization range (ALR) is defined as the range of axial positions in which the bead could be identified and localized correctly in at least 50 of the 100 recorded frames.
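This background-corrected photon count can be sketched as follows; the function name and the toy frame are illustrative, and the frame is assumed to be a float array already converted from camera counts to photons.

```python
import numpy as np

def detected_photons(frame, cx, cy, half=4):
    """Sum the signal in a (2*half+1) x (2*half+1) ROI around (cx, cy)
    and subtract the background estimated from the one-pixel-wide border
    surrounding the ROI. `frame` must be a float array in photon units."""
    roi = frame[cy - half:cy + half + 1, cx - half:cx + half + 1]
    border = frame[cy - half - 1:cy + half + 2, cx - half - 1:cx + half + 2].copy()
    border[1:-1, 1:-1] = np.nan          # mask out the ROI, keep only the ring
    bg_per_px = np.nanmean(border)       # mean background per pixel
    return roi.sum() - bg_per_px * roi.size

# toy frame: flat background of 2 photons/px plus a 100-photon point source
frame = np.full((21, 21), 2.0)
frame[10, 10] += 100.0
n = detected_photons(frame, 10, 10)
```

For biplane detection the same sum would be taken in both planes and added before the per-frame average over the 100 recordings.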


Fig. 5. Localization accuracy, σ, and axial localization range, ALR, as a function of particle brightness. Every data point represents the results of the imaging series of a single bead. (a, b) The localization accuracies in x, y and z-direction for the biplane and the astigmatic detection scheme, respectively, follow an inverse-square-root dependence on the number of detected photons for 180 photons and above. The values stem from beads centered between the detection planes (biplane) or in the plane where ellipticity is small (astigmatism). (c, d) The axial range over which each particle could be properly localized in at least 50% of the recorded frames. The different symbols used in (a-d) represent measurements at different focal plane distances Δ as specified in the figure legends.


We observe roughly an inverse-square-root dependence of the x, y and z-localization accuracy on the number of detected photons for both detection schemes (Figs. 5(a) and 5(b)), as expected. Below ~180 detected photons, σ deviates from this behavior as a result of the non-negligible background level. In biplane and astigmatic detection, the axial localization accuracy is about 2.5-fold and 3-fold worse than in the lateral direction, respectively. No strong Δ-dependence of the localization accuracy can be observed within the tested range of Δ; variations between the different realizations stay mostly within 10%. The largest difference is observed for the axial localization accuracy, which is about 20% worse in astigmatic detection than in biplane detection.
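The scaling itself is easy to verify numerically: in a log-log plot of σ versus Ndet, a slope of -1/2 indicates the shot-noise-limited regime. The data below is synthetic and only illustrates the ideal case without background.

```python
import numpy as np

# hypothetical (N_det, sigma) pairs in the shot-noise-limited regime
n_det = np.array([200.0, 400.0, 800.0, 1600.0, 3200.0])
sigma = 300.0 / np.sqrt(n_det)   # nm; ideal sigma = const / sqrt(N_det)

# fit the slope in log-log space; -0.5 indicates sigma ~ N^(-1/2)
slope, intercept = np.polyfit(np.log(n_det), np.log(sigma), 1)
```

With a non-negligible background, σ flattens off at low Ndet, which is exactly the deviation reported below ~180 photons.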

The ALR also depends notably on the number of detected photons (Figs. 5(c) and 5(d)). With an increasing number of detected photons, the localization range grows for both biplane and astigmatic detection. A steep increase can be observed up to about 300 detected photons; above this number the dependence is weaker but still noticeable. Generally, biplane detection offers a significantly larger ALR than astigmatic detection (~2200 nm vs. ~1200 nm at 500 detected photons). The ALR in the biplane detection scheme is significantly smaller for Δ=400 nm than for Δ=500 nm; however, it does not increase notably further for Δ=600 nm and 700 nm. No Δ-dependence can be observed for astigmatic detection within the tested range.
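Following the ALR definition given above (successful localization in at least 50% of the recorded frames at a given objective position), the quantity can be computed as in this sketch; the success counts are hypothetical.

```python
import numpy as np

def axial_localization_range(n_success, n_frames=100, z_step=50.0):
    """ALR as defined in the text: total axial extent (in nm) over which
    the bead was localized correctly in at least 50% of the frames.
    n_success holds the number of successful frames per objective z-step."""
    ok = np.asarray(n_success) >= 0.5 * n_frames
    return ok.sum() * z_step          # counts the qualifying 50-nm steps

# toy data: 25 objective positions, >=50% success within +/-500 nm of z=600
z = np.arange(25) * 50.0
success = np.where(np.abs(z - 600) <= 500, 80, 10)
alr = axial_localization_range(success)
```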

4. Conclusions

To compare the experimental performance of the biplane and astigmatic detection modes for 3D localization, we set up a microscope which can easily be switched between both modes while sharing most of its components between them. The measured FWHMs of the obtained PSFs feature very similar minimum values for both detection modes and are in agreement with theoretical predictions. This setup, combined with the new algorithm, which could be successfully applied to both modes, therefore enables a performance comparison within the same framework. Differences in the setup, the sample or the localization algorithm could thus be ruled out as sources of performance differences.

We chose samples of sparsely distributed fluorescent beads illuminated at low laser intensities. The ~400 photons per bead typically detected in each frame mimic conditions observed for single fluorescent molecules as used in FPALM, PALM or STORM. The particle density of <0.05 particles/μm2 ensures that artifacts induced by signal stemming from neighboring particles are avoided, even for out-of-focus particles about 1 μm away from the detection planes.

Considering the different approaches to 3D localization used by the astigmatic and biplane detection modes, the obtained results are remarkably similar and in good agreement with recently published theoretical predictions [23]. The best localization accuracy values in x, y and z-direction for a given number of detected photons agree within 20%. The axial localization accuracy is in both cases about 2.5- to 3-fold worse than the lateral one. Over a range of Ndet = 200 to >2,000, the results scale approximately with Ndet -1/2 as predicted by simple models. Differences arise in the dependence of the localization accuracy on the axial particle position: in astigmatic detection, the axial positions for best x and y-localization differ, depending on the degree of astigmatism; in biplane mode, these values are relatively constant over a larger axial range. The axial localization accuracies show similar degrees of variation as a function of the axial position in both schemes. A major difference arises in our experiments in the ALR: biplane detection is capable of localizing particles over a range nearly twice as large as achievable by astigmatic detection. This is a key feature in imaging of thick biological samples, and biplane mode therefore seems favorable for these applications. The fact that the signal in biplane detection is spread over twice the number of pixels does not have a detectable negative effect in our setup; for cameras with non-negligible readout noise, astigmatic detection might, however, be advantageous. Imaging beads which contain multiple dye molecules in random orientations, we could not observe polarization-dependent effects. For single molecules without significant rotational freedom, however, the fixed dipole orientation leads to significant variations in the observed signal; among other shapes, elliptically deformed intensity distributions have been reported for this case [24]. A stringent investigation of how this influences the determined axial particle position, especially in astigmatic detection mode, which takes advantage of similar changes of the intensity distribution, still needs to be carried out.

The presented comparison was facilitated by a novel 3D particle localization algorithm which works independently of a theoretical model function and instead uses experimentally obtained PSFs. This enables the algorithm to account for experimental deviations from perfect theoretical descriptions, which are often difficult to include accurately in theoretical models, and reduces artifacts in the localization process. In fact, the use of a matching PSF to fit the experimental data is crucial for accurate position determination: fitting the experimental bead data used in Fig. 3 with a theoretical PSF, which was calculated for the correct wavelength and objective but disregarded the non-ideal behavior of the setup, resulted in standard deviations about 1.5-fold larger than the values achieved with the matching experimental PSF.
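The core idea of such a model-free fit can be sketched as follows: a measured PSF stack serves as the model function, interpolated in z and shifted in x and y, with amplitude and background as free parameters. This simplified version shifts bilinearly in real space rather than via the Fourier-domain shift of the original algorithm, uses unweighted least squares, and all names and the toy data are illustrative.

```python
import numpy as np
from scipy.ndimage import shift as ndshift
from scipy.optimize import least_squares

def fit_with_experimental_psf(image, psf_stack, z_step=50.0):
    """Fit (x, y, z, amplitude, background) against an experimentally
    measured PSF stack of shape (nz, ny, nx): linear interpolation in z,
    bilinear sub-pixel shift in x and y."""
    nz = psf_stack.shape[0]

    def model(p):
        x, y, z, amp, bg = p
        iz = min(max(z / z_step, 0.0), nz - 1.001)   # clamp to stack range
        k, f = int(iz), iz - int(iz)
        psf_z = (1 - f) * psf_stack[k] + f * psf_stack[k + 1]
        return amp * ndshift(psf_z, (y, x), order=1) + bg

    p0 = [0.0, 0.0, z_step * (nz // 2), image.max(), image.min()]
    return least_squares(lambda p: (model(p) - image).ravel(), p0).x

# toy PSF stack: Gaussians broadening away from focus (synthetic data)
yy, xx = np.mgrid[-7:8, -7:8]
widths = np.array([3.2, 2.6, 2.0, 2.4, 3.0])
stack = np.array([np.exp(-(xx**2 + yy**2) / (2 * w**2)) for w in widths])

# synthetic image of a bead at x = -0.3 px, y = 0.5 px, z = 100 nm
img = 2.0 * ndshift(stack[2], (0.5, -0.3), order=1) + 1.0
fit = fit_with_experimental_psf(img, stack)
```

Because the model is the measured PSF itself, any aberrations of the real setup are automatically part of the fit, which is the point made above about theoretical PSFs performing ~1.5-fold worse.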

Using experimentally obtained PSFs requires, however, careful consideration of the data acquisition conditions and proper data processing: in simulations where we added white noise to the PSF later used for fitting, the localization accuracy worsened significantly. A good signal-to-noise ratio in the PSF recording and careful smoothing in the processing step as described above are therefore highly recommended. It is worth pointing out that the optical quality of the PSF, on the other hand, plays a less significant role within certain boundaries. The achieved localization accuracy scales with the FWHM of the PSF; a setup featuring a PSF which is a few percent larger than an optimized one will therefore yield slightly poorer localization accuracies, but not dramatically worse results.

Our algorithm currently assumes a spatially invariant PSF throughout the imaged volume. Depth-dependent spherical aberrations, for example, can therefore cause systematic localization errors [25]. A possible solution is an expansion of the algorithm, which selects the matching PSF out of a larger set of PSFs according to a first axial position estimate.
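The proposed depth-aware extension could look like the following two-pass sketch, where `localize` stands in for any 3D fitting routine and all names are hypothetical.

```python
def localize_depth_aware(image, psf_sets, depths, localize):
    """Sketch of the proposed extension: a first fit with a default PSF
    yields a coarse axial estimate, which then selects the depth-matched
    PSF (measured or simulated at that depth) for the final fit.
    `localize(image, psf_stack)` is any 3D fit returning (x, y, z, ...)."""
    coarse = localize(image, psf_sets[len(psf_sets) // 2])   # default PSF
    z_est = coarse[2]
    # pick the PSF stack recorded closest to the estimated depth
    idx = min(range(len(depths)), key=lambda i: abs(depths[i] - z_est))
    return localize(image, psf_sets[idx])

# toy demo: three placeholder PSF stacks recorded at different depths
calls = []
def toy_localize(img, psf):          # stands in for the 3D fit
    calls.append(psf)
    return (0.0, 0.0, 420.0)         # pretends the bead sits at z = 420 nm

final = localize_depth_aware(None, ["psf_0nm", "psf_500nm", "psf_1000nm"],
                             [0, 500, 1000], toy_localize)
```

In practice one would record (or simulate) PSF stacks at several depths in the sample medium so that depth-dependent spherical aberration is captured by the matched PSF rather than biasing the fit.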

Our approach of interpolating the PSF values required by the fit algorithm from a discrete data set also enables the use of complex theoretical models by numerically generating a PSF to feed into the algorithm. Small systematic deviations of the determined positions from the actual ones, which can stem from differences between the used PSFs and the data generated during imaging, or from the finite size of the PSF data set, can readily be corrected by proper calibration curves.
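Interpolating PSF values from a discrete data set, whether measured or numerically generated, can be done with a standard regular-grid interpolator; the grid spacing and the Gaussian PSF below are synthetic.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# hypothetical PSF stack sampled on a regular (z, y, x) grid, in nm
z = np.arange(0, 2000, 100.0)
y = np.arange(0, 1500, 100.0)
x = np.arange(0, 1500, 100.0)
psf = np.exp(-((z[:, None, None] - 1000)**2 / 5e5
               + (y[None, :, None] - 700)**2 / 5e4
               + (x[None, None, :] - 700)**2 / 5e4))

# trilinear interpolation supplies PSF values at arbitrary fit positions
psf_at = RegularGridInterpolator((z, y, x), psf, method="linear")
value = psf_at([[1000.0, 700.0, 700.0]])[0]
```

The same interpolator works unchanged whether `psf` was recorded from beads or computed from a vectorial diffraction model, which is what makes the approach agnostic to the PSF's origin.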

Importantly, acquiring PSFs experimentally allows general applicability to a range of 3D localization strategies without the need to develop individual theoretical model functions for every case. Our new approach can readily be applied to the recently reported detection schemes of the 4Pi-microscopy-related method iPALM [26] and of double-helix PSFs [27]. While a fitting approach is significantly slower than direct position determination such as calculation of the centroid [26, 27], it typically provides more accurate results [27].

We want to point out that our algorithm is neither limited to the five fitting parameters used here (x, y and z-position, amplitude and background) nor to 3D PSFs. It can easily be expanded to include parameters such as interference phase, polarization or wavelength by providing experimental PSF data sets which include the necessary information in a fourth, fifth or even higher dimension. Fewer parameters and dimensions are also possible, making an application to 2D imaging straightforward.

Acknowledgments

The authors thank Mark Lessard for help with sample preparation and setup of the microscope and Joachim Spatz for support.

References and links

1. J. Gelles, B. J. Schnapp, and M. P. Sheetz, “Tracking kinesin-driven movements with nanometre-scale precision,” Nature 331, 450–453 (1988). [CrossRef]   [PubMed]  

2. A. Yildiz, J. N. Forkey, S. A. McKinney, T. Ha, Y. E. Goldman, and P. R. Selvin, “Myosin V walks handover-hand: single fluorophore imaging with 1.5-nm localization,” Science 300, 2061–2065 (2003). [CrossRef]   [PubMed]  

3. K. Murase, T. Fujiwara, Y. Umemura, K. Suzuki, R. Iino, H. Yamashita, M. Saito, H. Murakoshi, K. Ritchie, and A. Kusumi, “Ultrafine membrane compartments for molecular diffusion as revealed by single molecule techniques,” Biophys. J. 86, 4075–4093 (2004). [CrossRef]   [PubMed]  

4. S. W. Hell, “Far-field optical nanoscopy,” Science 316, 1153–1158 (2007). [CrossRef]   [PubMed]  

5. S. W. Hell, “Microscopy and its focal switch,” Nature Methods 6, 24–32 (2009). [CrossRef]   [PubMed]  

6. M. Speidel, A. Jonas, and E. L. Florin, “Three-dimensional tracking of fluorescent nanoparticles with subnanometer precision by use of off-focus imaging,” Opt. Lett. 28, 69–71 (2003). [CrossRef]   [PubMed]  

7. M. Wu, J. W. Roberts, and M. Buckley, “Three-dimensional fluorescent particle tracking at micron-scale using a single camera,” Exp. Fluids 38, 461–465 (2005). [CrossRef]  

8. H. P. Kao and A. S. Verkman, “Tracking of single fluorescent particles in three dimensions: use of cylindrical optics to encode particle position,” Biophys. J. 67, 1291–1300 (1994). [CrossRef]   [PubMed]  

9. L. Holtzer, T. Meckel, and T. Schmidt, “Nanometric three-dimensional tracking of individual quantum dots in cells,” Appl. Phys. Lett. 90, 053902 (2007). [CrossRef]  

10. B. Huang, W. Wang, M. Bates, and X. Zhuang, “Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy,” Science 319, 810–813 (2008). [CrossRef]   [PubMed]  

11. P. Prabhat, S. Ram, E. S. Ward, and R. J. Ober, “Simultaneous imaging of different focal planes in fluorescence microscopy for the study of cellular dynamics in three dimensions,” IEEE Trans. Nanobiosci. 3, 237–242 (2004). [CrossRef]  

12. S. Ram, J. Chao, P. Prabhat, E. S. Ward, and R. J. Ober, “A novel approach to determining the three-dimensional location of microscopic objects with applications to 3D particle tracking,” Proc. SPIE 6443, 1–7 (2007).

13. P. Prabhat, Z. Gan, J. Chao, S. Ram, C. Vaccaro, S. Gibbons, R. J. Ober, and E. S. Ward, “Elucidation of intracellular recycling pathways leading to exocytosis of the Fc receptor, FcRn, by using multifocal plane microscopy.,” Proc. Natl. Acad. Sci. USA 104, 5889–5894 (2007). [CrossRef]   [PubMed]  

14. E. Toprak, H. Balci, B. H. Blehm, and P. R. Selvin, “Three-dimensional particle tracking via bifocal imaging,” Nano Lett. 7, 2043–2045 (2007). [CrossRef]   [PubMed]  

15. S. Ram, P. Prabhat, J. Chao, E. S. Ward, and R. J. Ober, “High Accuracy 3D Quantum Dot Tracking with Multifocal Plane Microscopy for the Study of Fast Intracellular Dynamics in Live Cells,” Biophys. J. 95, 6025–6043 (2008). [CrossRef]   [PubMed]  

16. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, “Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples,” Nature Methods 5, 527–529 (2008).

17. http://www.nvidia.com/object/cuda_home.html.

18. W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes: The Art of Scientific Computing, 3rd ed. (Cambridge University Press, 2007).

19. M. Frigo and S. G. Johnson, “FFTW,” http://www.fftw.org/ (2008).

20. S. Ram, E. S. Ward, and R. J. Ober, “How accurately can a single molecule be localized in three dimensions using a fluorescence microscope?,” Proc. SPIE 5699, 426–435 (2005). [CrossRef]   [PubMed]  

21. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, “Ultra-High Resolution Imaging by Fluorescence Photoactivation Localization Microscopy,” Biophys. J. 91, 4258–4272 (2006). [CrossRef]   [PubMed]  

22. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, “Imaging intracellular fluorescent proteins at nanometer resolution,” Science 313, 1642–1645 (2006). [CrossRef]   [PubMed]  

23. C. von Middendorff, A. Egner, C. Geisler, S. W. Hell, and A. Schoenle, “Isotropic 3D Nanoscopy based on single emitter switching,” Opt. Express 16, 20774–20788 (2008). [CrossRef]   [PubMed]  

24. J. Enderlein, E. Toprak, and P. R. Selvin, “Polarization effect on position accuracy of fluorophore localization,” Opt. Express 14, 8111–8120 (2006). [CrossRef]   [PubMed]  

25. Y. Deng and J. W. Shaevitz, “Effect of aberration on height calibration in three-dimensional localization-based microscopy and particle tracking,” Appl. Opt. 48, 1886–1890 (2009). [CrossRef]   [PubMed]  

26. G. Shtengel, J. A. Galbraith, C. G. Galbraith, J. Lippincott-Schwartz, J. M. Gillette, S. Manley, R. Sougrat, C. M. Waterman, P. Kanchanawong, M. W. Davidson, R. D. Fetter, and H. F. Hess, “Interferometric fluorescent super-resolution microscopy resolves 3D cellular ultrastructure,” Proc. Natl. Acad. Sci. USA 106, 3125–3130 (2009). [CrossRef]   [PubMed]  

27. S. R. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. USA 106, 2995–2999 (2009). [CrossRef]   [PubMed]  



Figures (5)

Fig. 1. Schematic of combined biplane and astigmatism 3D localization setup. FA – Field aperture; L – lens; obj – objective; N – neutral beamsplitter cube; M – mirrors; CL – cylindrical lens; D- dichroic; TL – tube lens; FP – focal plane; F – band pass filter. Not shown: laser beam expansion, shutter and mirrors in front of the field aperture. The dashed box in (a) can be configured as shown in (b) and (c).
Fig. 2. Experimental point-spread functions and their profiles for biplane detection (a-c) and astigmatic detection (d-f). (a) Scanning a single point-like particle in z-direction with the biplane detection scheme yields two 3D data stacks, depicted by their center cross sections. Due to the different image planes of the beam paths, the particle is in focus at different axial positions. (b) Profiles of the PSFs along the optic axis for the transmitted (black) and reflected (red) light paths. Due to the 42:58 splitting ratio of the beamsplitter cube, the reflected PSF shows a lower intensity. (c) FWHM of both PSFs in x and y-direction as a function of the axial position. (d), (e) and (f) show the same information as (a), (b) and (c), respectively, for the astigmatic detection case. Please note that in this case only one PSF is created. The small cuts in (d) show sections in y-z and x-z orientation through the PSF center. Scale bar in (a) and (d): 500 nm. The data displayed in (c) and (f) is averaged over 12 PSFs. The color tables in (a) and (d) are normalized to the individual maximum values of the 3D stacks.
Fig. 3. Localization of fluorescent beads with the biplane detection scheme at plane distances of 400 nm (a-b), 500 nm (c-d) and 600 nm (e-f) in the sample. 100 images were taken at each objective z-position with 50 nm z-steps in between. (a, c, e) show the results of a single bead localized by the fit algorithm. Determined x, y and z-positions are shown in blue, green and red, respectively. The dark gray points show the z-position corrected for the actual z-movement of the objective. All position values are offset by a constant, arbitrary value to avoid overlap between the curves. The insets show the area depicted by the black rectangles enlarged to give a better view of the data. (b, d, f) show the localization accuracy determined as standard deviation σ from the 100 images taken at each z-position. The gray area denotes the fraction Φ of images in which the fit properly converged and the bead could be located.

Tables (1)


Table 1. FWHM of measured PSFs

Equations (3)


\[ \chi^2(v,b,a) = \sum_j \left( \frac{n_j - F_{v,b,a}(x_j)}{\sigma_j} \right)^2 \]
\[ h_a(x_j) = h_0(\xi_l) \ast \delta\!\left(\xi_l - D_a(x_j)\right) \]
\[ \mathrm{FT}\!\left\{ h_a(x_j) \right\} = H_0(\kappa_l)\, e^{-i \kappa_l D_a(x_j)} \]
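The second and third equations express that shifting the reference PSF h0 by an offset Da (convolution with a displaced delta function) becomes multiplication with a phase factor in Fourier space. A minimal numerical sketch of this Fourier-shift step and of the weighted χ² criterion follows; the function names are ours.

```python
import numpy as np

def shift_psf_fft(h0, dy, dx):
    """Shift a 2D PSF slice by (dy, dx) pixels via the Fourier-shift
    theorem: FT{h_a} = H_0 * exp(-i k . D_a)."""
    H0 = np.fft.fft2(h0)
    ky = np.fft.fftfreq(h0.shape[0])[:, None]   # cycles per pixel, y-axis
    kx = np.fft.fftfreq(h0.shape[1])[None, :]   # cycles per pixel, x-axis
    phase = np.exp(-2j * np.pi * (ky * dy + kx * dx))
    return np.fft.ifft2(H0 * phase).real

def chi2(n, model, sigma):
    """Weighted chi-square of measured counts n against the model,
    as in the first equation above."""
    return np.sum(((n - model) / sigma) ** 2)

# integer shifts are exact: a unit delta at (3, 3) moves to (4, 5)
h0 = np.zeros((8, 8))
h0[3, 3] = 1.0
h_shift = shift_psf_fft(h0, 1.0, 2.0)
```

For non-integer offsets the same phase multiplication performs band-limited sub-pixel interpolation, which is why the shift is done in Fourier space rather than by pixel indexing.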