Optica Publishing Group

Limits of 3D dipole localization and orientation estimation for single-molecule imaging: towards Green’s tensor engineering

Open Access

Abstract

The 3D orientation and location of individual molecules are important markers for the local environment and the state of a molecule. Therefore, dipole localization and orientation estimation is important for biological sensing and imaging. Precise dipole localization is also critical for superresolution imaging. We propose and analyze wide-field microscope configurations to simultaneously measure these parameters for multiple fixed dipole emitters. Examination of the images of radiating dipoles reveals how information transfer and precise detection can be improved. We use an information theoretic analysis to quantify the performance limits of position and orientation estimation through comparison of the Cramer-Rao lower bounds in a photon-limited environment. We show that bi-focal and double-helix polarization-sensitive systems are attractive candidates for simultaneously estimating the 3D dipole location and orientation.

©2012 Optical Society of America

1. Introduction

The photo-physical properties of individual fluorophores depend on both the orientation and location of the molecule with respect to its environment. Therefore, direct measurement of these properties is of interest [1, 2] for sampling the local environment, detecting chemical reactions, measuring molecular motions, sensing conformational changes, and as a means to realize optical resolution beyond the diffraction limit [3–7]. Furthermore, using wide-field microscopy for single-molecule detection allows for parallelized information throughput from a three-dimensional volume, potentially containing many events of interest. However, previously reported single-molecule orientation techniques normally operate within a reduced depth of defocus [8–10] and/or on one molecule at a time [10, 11], significantly limiting their applicability. While some of these techniques could be extended to operate over a longer depth range, our study below shows that they are not optimal or sensitive enough. These limitations restrict the number of available degrees of freedom to analyze a three-dimensional (3D) volume containing a multitude of molecules.

Single molecules that freely rotate can be modeled as point emitters as a result of the rapid and random orientation changes on a time scale much shorter than the integration time of the detection device. The limitations of standard optical microscopes in localizing isotropically emitting molecules in all three dimensions have been overcome with the use of point spread functions (PSFs) engineered specifically for 3D localization of isotropic emitters. Techniques that use multiple defocused image planes [12, 13], astigmatic optics [3], and Double-Helix PSFs [4, 14, 15] have been particularly successful in demonstrating that the optical system response can be tailored to enhance 3D localization performance [16]. Efficient estimators have demonstrated experimentally the possibility of reaching the fundamental limit of 3D localization precision provided by the Cramer-Rao Lower Bound (CRLB) [17, 18]. The use of an accurate system model, proper estimators, and calibration is critical to achieving the localization precision limit and avoiding bias [15, 17–19]. However, the application of these techniques to dipole emitters such as fixed single molecules, where the isotropic assumption is not valid, is not straightforward, and if the proper model and estimator are not used they can lead to orientation-dependent systematic errors [20–23]. If present, this bias can be eliminated by proper system design and matched reconstruction.

This paper addresses the design of optical microscope systems for the specific task of estimating the location and/or 3D orientation of multiple fixed dipoles in a wide-field system. The goal is to create a system (or systems) that can precisely distinguish among different dipole positions and orientations in 3D space. The response of a system to a dipole input for different positions and orientations is the dipole spread function or, more precisely, the Green’s tensor. Thus Green’s tensor engineering for the estimation of dipole localization and orientation is the generalization of PSF engineering for the case of isotropic emitters (see Fig. 1(a)). The key difference is the a priori assumption about the nature of the emitting particles and its implications for the optical system design. PSF engineering assumes the imaging of point emitters and has demonstrated the possibility of generating information-efficient responses that encode the desired parameters. Similarly, Green’s tensor engineering addresses the possibility of shaping the optical response to fixed dipoles at varying orientations. With the additional degrees of freedom in dipole orientation, the prior PSF designs may no longer provide optimum information-efficient solutions, hence opening opportunities for novel task-specific designs. In this paper, solutions based on polarization-encoded imaging are presented and shown to overcome the limitations of polarization-insensitive systems currently in use [8–10, 19, 20, 24]. In section 2, we describe the analytic expressions used to model microscope systems that image the field distributions of fixed dipoles. We present the field distributions for representative dipole orientations and for specific microscope systems. In section 3, we use the CRLB to compare these systems based on their ultimate capacity to estimate the location and 3D orientation of fixed dipoles.
In section 4, we compare the 3D localization limits of the fixed dipole emitter with that of an isotropic point emitter.


Fig. 1 Point Spread Function Engineering versus Green’s tensor engineering: (a) The PSF is the response of the system to a point source whereas the Green’s tensor is the response of the system to a dipole input. The output of the Green’s tensor system is a vector function of the dipole orientation. Each of the rows, shown at the output, corresponds to a unique dipole orientation, showing the total intensity and the intensity of two transverse orthogonal components of the electric field. (b) The position and orientation of a dipole with respect to an objective lens defines the input space. The origin (0,0,0) is at the focal point of the objective lens with the z-axis parallel to the optical axis. Here (x0, y0, z0) represent the position of the dipole and (Θ,Φ) represent the polar and azimuthal orientation angles respectively.


2. Optical system model and analysis

The electric field distribution resulting from dipole radiation has known analytic solutions [25]. Given the position (x0,y0,z0) and orientation (Θ,Φ) (see Fig. 1(b)) of a dipole immersed in a medium of refractive index n1, the far-field radiation pattern in the spherical coordinate system is given by,

$$E_\theta^{o}=\Pi(\theta)\left[\cos\Theta\sin\theta+\sin\Theta\cos\theta\cos(\varphi-\Phi)\right]\tag{1}$$
$$E_\varphi^{o}=-\Pi(\theta)\,\sin\Theta\sin(\varphi-\Phi)\tag{2}$$
where E_θ^o and E_φ^o denote the polar and azimuthal field components, respectively, and the angles θ and φ denote the polar and azimuthal spherical coordinates. The superscript o represents fields on the object side; similarly, fields at the back aperture are denoted by the superscript b. Π(θ) is the phase factor introduced by the position of the dipole and is given by
$$\Pi(\theta)=\exp\left[ikn_1\left(x_0\sin\theta\cos\varphi+y_0\sin\theta\sin\varphi+z_0\cos\theta\right)\right]\tag{3}$$
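The far-field expressions above can be evaluated directly. The following is a minimal numerical sketch (not the authors' code); the function name and the sign conventions of the reconstructed equations are assumptions:

```python
import numpy as np

def dipole_far_field(theta, phi, Theta, Phi, x0=0.0, y0=0.0, z0=0.0,
                     n1=1.52, wavelength=532e-9):
    """E_theta, E_phi of a fixed dipole on the object-side reference sphere
    (arbitrary units). (Theta, Phi) is the dipole orientation; (theta, phi)
    is the observation direction."""
    k = 2 * np.pi / wavelength
    # Position-dependent phase factor Pi(theta)
    Pi = np.exp(1j * k * n1 * (x0 * np.sin(theta) * np.cos(phi)
                               + y0 * np.sin(theta) * np.sin(phi)
                               + z0 * np.cos(theta)))
    E_theta = Pi * (np.cos(Theta) * np.sin(theta)
                    + np.sin(Theta) * np.cos(theta) * np.cos(phi - Phi))
    # Overall sign convention of E_phi is assumed from the reconstruction
    E_phi = -Pi * np.sin(Theta) * np.sin(phi - Phi)
    return E_theta, E_phi
```

As a sanity check, a dipole along the z-axis (Θ = 0) yields the familiar sin θ pattern with no azimuthal component, and no field radiates along the dipole axis.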
The lens acts as a coordinate transformation element that maps the fields from spherical coordinates (object space) to cylindrical coordinates (pupil plane). Accordingly, the fields at the pupil plane of the microscope objective are [25]
$$\begin{bmatrix}E_\varphi^{b}\\ E_\rho^{b}\end{bmatrix}=\begin{bmatrix}E_\varphi^{o}\\ E_\theta^{o}\end{bmatrix}\sqrt{\frac{n_2}{n_1\cos\theta}}\tag{4}$$
where n2 is the refractive index after the microscope objective and the factor involving n2/(n1 cos θ) is required for energy conservation. The resulting E_ρ^b and E_φ^b fields are in cylindrical coordinates, but for convenience we decompose them into orthogonal linear polarizations.
$$E_y^{b}=E_\rho^{b}\sin\varphi+E_\varphi^{b}\cos\varphi\tag{5}$$
$$E_x^{b}=E_\rho^{b}\cos\varphi-E_\varphi^{b}\sin\varphi\tag{6}$$
The Green’s tensor can now be modified by placing a polarization element described by the Jones matrix J_OE as follows
$$\begin{bmatrix}E_x^{b'}\\ E_y^{b'}\end{bmatrix}=J_{OE}\begin{bmatrix}E_x^{b}\\ E_y^{b}\end{bmatrix}\tag{7}$$
Furthermore, a phase/amplitude mask with transmittance function P_Mask(x, y) in the Fourier plane can also modify the transfer function: E_x^{b'} = E_x^{b} P_Mask(x, y) and E_y^{b'} = E_y^{b} P_Mask(x, y). In either case, the field at the pupil plane is then focused onto the detector by a tube lens, which performs, to a good approximation, a scaled Fourier transform (FT) of the pupil field, i.e., E_x = FT{E_x^{b'}}|_{λf} and E_y = FT{E_y^{b'}}|_{λf}. The total intensity at the detector is given by
$$I(r,\varphi;\Theta,\Phi)\propto E_x E_x^{*}+E_y E_y^{*}\tag{8}$$
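The complete forward model of this section (far-field pattern, lens mapping, Cartesian decomposition, optional mask, Fourier transform) can be sketched numerically. This is an illustrative implementation, not the authors' code: the function name, grid size, pupil sampling, and the sign conventions of the reconstructed equations are all assumptions.

```python
import numpy as np

def dipole_image(Theta, Phi, z0=0.0, NA=1.4, n1=1.52, n2=1.0,
                 wavelength=532e-9, N=256, mask=None):
    """Detected |Ex|^2, |Ey|^2 (arbitrary units) for a dipole at (0, 0, z0)."""
    k = 2 * np.pi / wavelength
    u = np.linspace(-1, 1, N)                      # normalized pupil coords
    X, Y = np.meshgrid(u, u)
    rho = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    aperture = rho <= 1.0
    theta = np.arcsin(np.clip(rho * NA / n1, 0.0, 1.0 - 1e-9))

    Pi = np.exp(1j * k * n1 * z0 * np.cos(theta))  # defocus phase (x0 = y0 = 0)
    E_t = Pi * (np.cos(Theta) * np.sin(theta)
                + np.sin(Theta) * np.cos(theta) * np.cos(phi - Phi))
    E_p = -Pi * np.sin(Theta) * np.sin(phi - Phi)  # sign convention assumed

    apod = np.sqrt(n2 / (n1 * np.cos(theta)))      # energy-conservation factor
    E_rho, E_phi = E_t * apod, E_p * apod          # theta-hat -> rho-hat
    Ex = (E_rho * np.cos(phi) - E_phi * np.sin(phi)) * aperture
    Ey = (E_rho * np.sin(phi) + E_phi * np.cos(phi)) * aperture
    if mask is not None:                           # optional Fourier-plane mask
        Ex, Ey = Ex * mask, Ey * mask

    # Tube lens ~ scaled Fourier transform of the pupil field
    ex = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Ex)))
    ey = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Ey)))
    return np.abs(ex) ** 2, np.abs(ey) ** 2
```

With this model, a dipole along ŷ puts nearly all of its detected energy into the |Ey|² channel, as discussed below for Fig. 2.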
From the above equations, it is clear that the emission pattern of the dipole, the total intensity I, and the intensities of the two linear polarizations |Ex|2 and |Ey|2 depend on the dipole orientation. Figure 2 shows the associated intensity distributions for a dipole located at the focal plane (Fig. 2(a)) and at 0.2 μm from the focal plane (Fig. 2(b)). Each row provides the resulting intensity distributions for a unique dipole orientation, namely, O1: dipole along y^ (Θ = 90°, Φ = 90°); O2: dipole along z^ (Θ = 0°, Φ = 0°); O3: dipole along (Θ = 45°, Φ = 45°). Figure 2 also shows different intensity distributions demonstrating the variability when using either the total intensity or two different polarization state decompositions. Here, all systems under consideration have been standardized to use an objective lens with numerical aperture (NA) of 1.4, and the emission wavelength of the emitter is assumed to be λ = 532 nm. The dipole is assumed to be immersed in a medium of refractive index n1 = 1.52.


Fig. 2 Simulation of the dipole spread function for three representative orientations: (I) is the detected total intensity (single-channel system); |Ex|2, |Ey|2, |E1|2, and |E2|2 (left to right) correspond to images obtained in two-channel systems when using either orthogonal polarizers (|Ex|2, |Ey|2) or a quarter-wave plate with orthogonal polarizers (see text for details), whereas |ExDH|2 and |EyDH|2 represent the intensity distributions of the two orthogonal linear polarizations using the double-helix phase mask (DH). In (a), the dipole is at the focal plane, while in (b) it is located 0.2 μm from the focal plane. The dipole is oriented along (from top to bottom) y^ (Θ = 90°, Φ = 90°), z^ (Θ = 0°, Φ = 0°), and (Θ = 45°, Φ = 45°).


For a dipole oriented along y^, the intensity |Ex|2 is zero and all the energy lies in |Ey|2. This is because the electric field of a dipole is linearly polarized along the dipole axis, which holds even when the dipole is defocused, implying that |Ex|2 carries no information about the z-position of a dipole oriented along y^. To ensure that neither of the two orthogonal polarization states has zero intensity, irrespective of the dipole orientation, we propose a set of elliptical polarization images. The elliptical polarizations are obtained by superposing the orthogonal linear polarizations and can be realized by using a quarter-wave plate with its principal axis at 45° to the x-axis, followed by polarizers along the x and y axes,

$$E_1^{b}=\left(E_x^{b}+iE_y^{b}\right)/\sqrt{2}\tag{9}$$
$$E_2^{b}=\left(E_x^{b}-iE_y^{b}\right)/\sqrt{2}\tag{10}$$
From the above equations it can be seen that E1b and E2b form an orthogonal basis set. The intensity distributions of these two elliptical polarizations are shown in the fourth and fifth columns of Fig. 2(a) and Fig. 2(b). It can be seen that, for the dipole orientations considered here, the elliptical polarization method results in a more uniform energy distribution between the two images.

The Green’s tensor response can also be tailored by using phase masks. For instance, the last two columns of Fig. 2 show a polarization-sensitive (PS) system that uses a double-helix (DH) phase mask [14] in the Fourier plane. The DH phase mask has been extensively used for 3D localization of isotropic emitters over an extended depth range. Here we analyze its use for Green’s tensor engineering applied to fixed dipoles. Owing to the design of the DH mask, it generates two lobes that rotate as the dipole is defocused, but for fixed dipoles the relative strength and shape of the lobes are significantly affected by the dipole orientation.

The simulated images in Fig. 2 reveal that dipole localization/orientation information is carried in the images at the orientations investigated and that the total intensity microscope, the linear polarization microscope (with or without the phase mask), and the elliptical polarization microscope are worth investigating as potential candidate solutions.

In what follows we compare different optical systems designed to employ either total intensity images, linear polarization images, or elliptical polarization images (Fig. 3) as a means to retrieve information for localization/orientation estimation. In addition to investigating the utility of polarization modulation, we propose including the bi-focal microscope configuration, i.e., simultaneously capturing images at two different focal planes. This configuration has already been demonstrated to be useful for axial localization of isotropic emitters [12, 13]. It is noteworthy that a myriad of different systems could be realized. The systems considered here represent an interesting subset and act as a proof of principle of the possibilities available for Green’s tensor engineering. Also, because of the inherently low signal collection in single-molecule imaging, each system is selected so that no photons exiting the objective pupil are lost, apart from the neglected minor losses at the passive components (polarizers, waveplates, lenses, and beam splitters).


Fig. 3 Schematic of the systems considered for dipole location and orientation estimation: (a) A traditional microscope system with a signal processing unit for Green’s tensor engineering. Category A shows three signal processing units that focus at the same plane, whereas Category B shows three signal processing units that focus at two different planes, leading to a bifocal system. Parts (b) and (e) represent systems that measure the total intensity; parts (c) and (f) represent systems with two orthogonal polarization channels, imaging the intensities |Ex|2 and |Ey|2 separately; and parts (d) and (g) represent systems with two polarization channels imaging the intensities of the elliptical polarization components, |Ex + iEy|2/2 and |iEx + Ey|2/2, separately. (h) Shows the linear polarization system with a double-helix phase mask in the Fourier plane. (i) Shows, on a unit sphere, the five dipole orientations used to compare these systems. TL – tube lens; L1, L2 – relay lenses; OL1 – objective lens; DM – dichroic mirror; PBS – polarizing beam splitter; QWP – quarter-wave plate with fast axis at 45° from the x-axis.


The schematic in Fig. 3(a) shows the excitation laser and the microscope objective and represents the signal processing unit as a black box. Seven optical signal processing systems are split into three categories for analysis purposes. Category A (Fig. 3(b), Fig. 3(c), and Fig. 3(d)) collects information from a single focal plane. Category B (Fig. 3(e), Fig. 3(f), and Fig. 3(g)) uses two images located at two different focal depths. Also, as shown, the systems in Fig. 3(b) and Fig. 3(e) image the total intensity without polarization sensitivity. The systems in Fig. 3(c) and Fig. 3(f) use two imaging channels with orthogonal linear polarization states, where the dipole emission is collected by a microscope objective and split by a polarizing beam splitter in the pupil plane. The systems in Fig. 3(d) and Fig. 3(g) use two imaging channels that employ orthogonal elliptical polarizations as described in Eqs. (9) and (10). The emission light goes through a quarter-wave plate with fast axis aligned at 45° and is then split using a polarizing beam splitter; each channel is imaged separately using a pair of tube lenses. Category C considers the use of the linear polarization system with the addition of phase masks. In particular, Fig. 3(h) shows the PS-DH system [4, 14] with a DH phase mask placed in the Fourier plane.

The intensity distributions in Fig. 2 show that for dipoles oriented along O1: (Θ = 90°, Φ = 90°) and O2: (Θ = 0°, Φ = 0°), some of the systems in Fig. 3 might lack information about the dipole’s position and/or orientation, whereas there is always finite information for dipoles oriented along O3: (Θ = 45°, Φ = 45°). We further consider the results for two intermediate orientations, O4: (Θ = 30°, Φ = 30°) and O5: (Θ = 60°, Φ = 60°). Figure 3(i) shows these five dipole orientations. These orientations were chosen as a representative set of the full 4π steradian solid angle.

3. Cramer-Rao lower bound

References [26] and [27] introduced information theoretic analyses for the study of the limits of precision in dipole orientation estimation. In Ref. [26], we analyzed the 5D dipole estimation problem for the polarization system of Fig. 3(c). Meanwhile, Ref. [27] performed Fisher information calculations for orientation estimation using configurations that allow estimation of only one molecule at a time; therefore, that analysis did not include localization estimation or the effects of defocus. In contrast, here we analyze both the orientation and localization precision limits as functions of defocus and orientation for multiple configurations. All the systems analyzed allow for wide-field imaging and hence the estimation of the location and orientation of multiple dipoles in parallel.

We evaluate the performance of single-molecule localization/orientation estimation by use of Cramer-Rao lower bound (CRLB) analysis. The CRLB is a fundamental quantity associated with the lowest variance realizable by any unbiased estimator of the parameters of interest [28]. The lowest possible standard deviation of an unbiased estimator is found from the square root of the variance (CRLB):

$$\sigma_{LB}=\sqrt{\mathrm{CRLB}}\tag{11}$$
The standard deviation directly yields the error lower bound in the same units as the estimated parameter. We assume the imaging systems to be shift-invariant in the transverse direction, which is a good approximation in the central region of the field of view. Hence, the CRLB remains constant under transverse shifts. For 3D imaging and localization, we are interested in the minimum localization volume. One measure of this uncertainty volume is
$$\sigma_{3D}=\frac{4\pi}{3}\sigma_x\sigma_y\sigma_z\tag{12}$$
Here, σx, σy, and σz represent the lower-bound standard deviations along the three Cartesian coordinates, and σ3D is the volume of the ellipsoid generated by using these standard deviations as the three semi-principal axes. Similarly, for estimating the orientation of a dipole, we can define the solid angle error as
$$\sigma_{\Omega}=\sin\Theta\,\sigma_{\Theta}\,\sigma_{\Phi}\tag{13}$$
Here, σΘ and σΦ are the lower-bound standard deviations for the polar and azimuthal angles, and σΩ represents the solid angle of the cone generated using these values as the polar and azimuthal extents. Fixed dipoles lead to a 5-parameter estimation problem, and defining the quantities in the above equations facilitates the analysis, comparison, and visualization. Appendix A presents further details of the CRLB calculation.
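Appendix A gives the full calculation; the structure of the computation can be sketched generically. For Poisson (shot-noise-limited) pixel counts μ_p(θ), the Fisher information matrix is F_jk = Σ_p (1/μ_p)(∂μ_p/∂θ_j)(∂μ_p/∂θ_k), and the CRLB for parameter j is [F⁻¹]_jj. The sketch below is an assumed illustration, not the paper's code; a one-dimensional Gaussian spot stands in for the dipole image:

```python
import numpy as np

def fisher_crlb(forward_model, params, h=1e-4):
    """Lower-bound standard deviations for Poisson (shot-noise) data.
    forward_model(params) -> expected photon counts per pixel (1D array)."""
    params = np.asarray(params, dtype=float)
    mu = forward_model(params)
    grads = []
    for j in range(params.size):
        dp = np.zeros_like(params)
        dp[j] = h
        # Central finite-difference derivative of the mean image
        grads.append((forward_model(params + dp)
                      - forward_model(params - dp)) / (2 * h))
    F = np.array([[np.sum(gi * gj / mu) for gj in grads] for gi in grads])
    return np.sqrt(np.diag(np.linalg.inv(F)))

# Toy check: localizing a 1D Gaussian spot of width s with N photons should
# approach the textbook limit sigma_x = s / sqrt(N).
x = np.linspace(-5.0, 5.0, 2001)

def spot(p, s=1.0, N=5000.0):
    g = np.exp(-((x - p[0]) ** 2) / (2 * s ** 2))
    return N * g / g.sum()        # normalized to N detected photons

sigma_x = fisher_crlb(spot, [0.0])[0]
```

The same machinery applies to the 5-parameter dipole problem: the forward model becomes the vectorial image of Eqs. (1)–(8), the parameter vector is (x0, y0, z0, Θ, Φ), and σ3D and σΩ are assembled from the diagonal of F⁻¹.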

3.1 Estimation error bounds as a function of defocus

We compare the CRLB for dipole position and orientation in the shot-noise limit using 5000 photons per image for the systems previously discussed. Figures 4(a) and 4(b) show the average of the standard deviation for 3D position estimation (σ3D) and solid angle estimation (σΩ), respectively, over the five dipole orientations shown in Fig. 3(i). (For solid angle estimation, we average over four dipole orientations because for a dipole along the optical axis (Θ = 0°), sinΘ = 0 and the solid angle error is indeterminate.) These are respectively denoted avg(σ3D) and avg(σΩ). It can be seen from Fig. 4 that, for an in-focus molecule, the avg(σ3D) and avg(σΩ) for the single-channel system [TI np: Fig. 3(b)] and the linear polarization system [Lin pol: Fig. 3(c)] increase rapidly, whereas for the elliptical system [Elp pol: Fig. 3(d)] they have relatively smaller values. These high averages are due to the fact that, near focus, these three systems carry either no or very little information about z-position variations of dipoles that lie in the x-y plane (Θ = 90°) and dipoles that are oriented along the optical axis (Θ = 0°). Also, far from focus, the linear and elliptical polarization systems exhibit more precise localization than the total intensity system. On the other hand, the PS-DH system shows a finite avg(σ3D) and avg(σΩ) over the complete defocus range, although over a smaller defocus range away from focus its avg(σ3D) is less precise than that of the clear-aperture polarization-sensitive systems. As for the solid angle error, the PS-DH system has the lowest and most uniform avg(σΩ). Thus, if uniform performance is needed over a defocus range of −z to +z, the widely used single-channel system [8–10, 19, 20, 24] will not be the best candidate.


Fig. 4 Estimation error bounds as a function of defocus: (a) average of the volume localization error σ3D = (4π/3)σxσyσz and (b) average of the solid angle error σΩ = sinΘ σΘ σΦ for the five representative dipole orientations, with respect to the axial position of the dipole. For the bifocal systems, the two focal planes were offset by 0.4 μm and the x-axis represents the center of the two planes. The legends represent the systems compared here, namely single-measurement total intensity (TI np: solid blue), linear polarization (Lin pol: green o), elliptical polarization (Elp pol: red dash-dot), bi-focal total intensity (Bf-TI np: solid cyan), bi-focal with linear polarization (Bf-Lin pol: magenta dash), bi-focal with elliptical polarization (Bf-Elp pol: yellow +), and the linear polarization system that uses a double-helix phase mask in the Fourier plane (PS-DH-Lin pol: black ∆).


In order to analyze the bi-focal systems, a defocus of 0.4 μm was chosen by optimizing the average CRLB for the dipole oriented along O3: (Θ = 45°, Φ = 45°). This orientation was chosen since it gives a finite CRLB for all the different systems and for all defocus values. Among the three bi-focal systems, the system that measures the total intensity has a substantially higher avg(σ3D) and avg(σΩ) throughout the defocus region than the bi-focal systems that employ polarization. The bi-focal systems with linear and elliptical polarization present a more uniform curve in the region of interest, with the linear polarization system showing a lower CRLB than the elliptical one for solid angle estimation. It is noteworthy that the bi-focal linear curve is asymmetric about z0 = 0 and has a spike at a defocus of −0.2 μm. Since the radiation of a dipole is linearly polarized, the Ex channel of the bi-focal linear system, for a dipole along y^ (Θ = 90°, Φ = 90°), carries no information at focus, and this, coupled with the Ey channel at z0 = −0.4 μm, results in a spike in the CRLB curve at z0 = −0.2 μm. Thus, for 3D localization, depending on the region of interest, either the bi-focal elliptical system or one of the single-plane linear or elliptical systems would be a suitable candidate. However, for orientation estimation, the PS-DH system shows the lowest CRLB among the systems considered, followed closely by the bi-focal linear polarization system.

3.2 Estimation error bounds as a function of azimuthal and polar angles

Localization and orientation estimation of a dipole are functions of both the dipole’s position and orientation. In Fig. 5, we show the lower bound of the standard deviation for volume localization (σ3D) and the orientation solid angle (σΩ) for a dipole, with respect to the azimuthal and polar angles. Figures 5(a) and 5(b) show σ3D and σΩ, respectively, with the top row displaying them as a function of the angle Φ for Θ = 90°, and the bottom row displaying them as a function of the angle Θ for Φ = 0°. Note that at Θ = 90° an in-focus dipole has a rapidly increasing CRLB; thus these plots were made for a defocus of z0 = 0.1 μm to gain qualitative insight. Both σ3D and σΩ are nearly constant for all seven systems as a function of the azimuthal angle Φ. For estimation of the solid angle, the Lin pol, Bf-Lin pol, and PS-DH systems show the lowest CRLB, followed by the Elp pol and TI np systems. Indeed, as the dipole rotates in Φ, the intensity distributions of the (Bf-)TI np and (Bf-)Elp pol systems rotate, thus rotating the major axis of the elliptical pattern of the dipole emission, whereas for the (Bf-)Lin pol and PS-DH systems there is energy exchange between the two channels. As for volume localization, the Elp pol and Lin pol systems that focus at a single plane have better precision, but only over a short range in the axial dimension (see Fig. 4(a)).


Fig. 5 Estimation error bounds as a function of dipole orientation angle: (a) volume localization error σ3D = (4π/3)σxσyσz and (b) solid angle error σΩ = sinΘ σΘ σΦ as a function of the azimuthal angle Φ (top row) and polar angle Θ (bottom row). For the plots of category A, the system collects information from a single focal plane chosen at a defocus of z0 = 0.1 μm. For the plots against Φ, the angle Θ = 90°; for the plots against Θ, the angle Φ was chosen to be 0°. For the bifocal systems, the two focal planes were offset by 0.4 μm and the x-axis represents the center of the two planes. The legends represent the systems compared here, namely single-measurement total intensity (TI np: solid blue), linear polarization (Lin pol: green o), elliptical polarization (Elp pol: red dash-dot), bi-focal total intensity (Bf-TI np: solid cyan), bi-focal with linear polarization (Bf-Lin pol: magenta dash), bi-focal with elliptical polarization (Bf-Elp pol: yellow +), and the linear polarization system that uses a double-helix phase mask in the Fourier plane (PS-DH-Lin pol: black ∆).


The bottom row in Fig. 5 shows the volume and solid angle estimation precision with respect to the polar angle Θ. As shown in Fig. 5(a), for volume localization with respect to Θ, all systems except the non-polarization-sensitive ones provide a fairly uniform and low CRLB, implying a better lower bound on the estimation error. On the other hand, from Fig. 5(b) it can be seen that, as a function of Θ, estimation of the solid angle becomes difficult using the non-polarization-sensitive systems, whereas the PS-DH system has the smallest lower bound for estimating the solid angle. Thus, overall, the PS-DH system has the lowest and most uniform σΩ for solid angle estimation, followed closely by the Bf-Lin pol system.

3.3 Estimation error bound (σLB) as a function of defocus and polar angle

For shift-invariant systems, σ3D and σΩ are in general functions of Θ, Φ, and z. Therefore, they can be represented in a 3D space for joint optimization. The cross sections presented in Fig. 4 and Fig. 5 are representative of the behavior of the systems and help identify the best ones. To further clarify the power of the CRLB analysis we show, in Fig. 6(a) and Fig. 6(b), surface plots of the lower bounds for the commonly used single-channel imaging system [8–10, 19, 20, 24] and the best two-channel systems identified above. These plots show the striking improvement in precision achievable by design via the CRLB metric: typical improvements are threefold in 3D position estimation and fourfold in orientation estimation. Also, from Fig. 4(a) it can be seen that the Bf-Elp pol system has a more uniform CRLB than the Lin pol system, although it performs worse near focus. We compare the volume localization of these two systems in Fig. 6(c). Similarly, for the solid angle error (Fig. 5), the PS-DH system and the Bf-Lin pol system are the strongest contenders. In Fig. 6(d) we compare these two systems as functions of defocus and polar angle Θ. This analysis can be extended to include parametric surfaces as a function of specific system parameters, such as the number of photons, background noise, etc., which could be used for further system optimization.


Fig. 6 3D localization and orientation estimation design via the CRLB: Parts (a) and (c) show the volume localization lower bound σ3D = (4π/3)σxσyσz. Parts (b) and (d) show the solid angle lower-bound error σΩ = sinΘ σΘ σΦ as a function of the polar angle Θ and defocus (z0). The systems compared in the above plots are the single-channel total intensity system (TI np: blue surface), the linear polarization system (Lin pol: green surface), the bifocal linear polarization system (Bf-Lin pol: red surface), the bifocal elliptical polarization system (Bf-Elp pol: brown surface), the linear polarization system with the DH mask (PS-DH: yellow surface), and the DH system for the isotropic emitter (Iso-DH: cyan surface). For the bifocal systems, the two focal planes are separated by the distance dz = 0.4 μm and z0 represents the center of the two planes.


4. Localization of an isotropic point emitter vs. dipole emitter

A freely and randomly rotating dipole can be modeled as an isotropic point source emitter. The localization of isotropic emitters constitutes a different problem than that of the localization of fixed dipoles because isotropic emitters lead to a three-parameter estimation problem, while fixed dipoles require the estimation of five parameters. Therefore, because the prior knowledge about the object to be localized is different, special care has to be taken in understanding the limitations of a performance comparison.

Here we compare the CRLB of dipole localization with that of a point-source emitter. A point source emits a spherical wave, making the intensity equal in all directions, unlike a dipole, whose intensity varies as sin2θ, where θ is the angle measured from the axis of the dipole. We assume that the total number of photons emitted by the point source and the dipole is equal. Thus, for a dipole oriented perpendicular to the optical axis, we derive the ratio of the number of photons captured by the lens as

$$\frac{\text{Dipole Photon Count}}{\text{Point-source Photon Count}}=1+\cos t_m+\cos^2 t_m\tag{14}$$
where tm is the half-angle of the cone captured by the objective lens. For a system with NA = 1.4 and an immersion-medium index n1 = 1.52, tm ≈ 67° and the above ratio is ≈ 1.5. Thus, if the number of detected photons for a fixed dipole oriented perpendicular to the optical axis is 5000, for the isotropic case it will be ≈ 3333. The CRLB is calculated in a similar way using Poisson noise (see details in Appendix A), and the lower bound of the volume localization error is calculated as in Eq. (12).
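The stated numbers follow directly from the geometry. A short numeric check of the ratio, using the paper's parameters (variable names are illustrative):

```python
import numpy as np

NA, n1 = 1.4, 1.52
t_m = np.arcsin(NA / n1)                 # half-angle of the collection cone
ratio = 1 + np.cos(t_m) + np.cos(t_m) ** 2
detected_iso = 5000 / ratio              # isotropic count when the dipole gives 5000
```

Evaluating gives t_m ≈ 67° and a ratio of roughly 1.5, reproducing the values quoted above.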

Figure 7 shows the volume localization error for a fixed dipole compared with that of the isotropic emitter. The localization of the isotropic emitter is clearly independent of the dipole orientation (Θ, Φ). Because we assume the fixed dipole and the isotropic emitter emit the same number of photons, the fixed dipole can be localized more precisely at orientations close to perpendicular to the optical axis, which are the directions of maximum radiation. Conversely, fixed dipoles oriented between 0° and 50° from the optical axis have poorer localization precision. The relative difference is explained by the fact that the number of photons detected for the dipole is larger when the dipole is oriented closer to the transverse plane (Θ = 90°), while the difference in image shape has only a second-order effect.


Fig. 7 Comparison of the CRLB for 3D localization of a fixed dipole and an isotropic emitter: The 3D volume localization lower bound σ3D = (4π/3)σxσyσz is plotted as a function of the polar angle Θ and defocus distance (z0). The systems compared are the linear polarization system with the DH mask (PS-DH: blue surface) and the DH system for the isotropic emitter (Iso-DH: green surface). For this comparison it is assumed that both emitters emit the same number of photons, leading to a varying number of detected photons.


5. Conclusion

In conclusion, the CRLB analysis provides a powerful tool for the design of fixed-dipole localization/orientation imaging systems. The main conclusion from this analysis is that, when imaging fixed dipoles under shot-noise-limited conditions, polarization-sensitive systems are stronger candidates for estimating the 3D position and 3D orientation of the dipole. In particular, we have shown that the commonly used systems that acquire the total intensity of a single defocused image provide the poorest localization and orientation performance among the systems considered here. This is primarily because the light emitted from a fixed dipole is polarized. Hence, splitting the emitted radiation into orthogonal polarization states makes the system more sensitive to changes in position or orientation and therefore helps estimate these parameters more efficiently.

Furthermore, we quantified the performance limits of a set of candidate imaging systems by comparing their CRLB, and demonstrated the importance of multifocal imaging in terms of the CRLB for localization and orientation estimation. The CRLB analysis establishes that position estimation can be uniformly improved by using a two-channel bi-focal polarization-sensitive system, while a single-focal-plane polarization system might provide a lower CRLB over a short defocus range. On the other hand, the orientation of a dipole is best estimated using either a two-channel bi-focal linear-polarization-sensitive system or a polarization-sensitive double-helix system. These results open further possibilities for solving the five-parameter estimation problem for fixed dipoles via Green's tensor function engineering.

6. Appendix A: Cramer-Rao lower bound calculations

In this section we present the details of the Cramer-Rao lower bound (CRLB) calculation for the various systems described in the paper. The CRLB is the inverse of the Fisher Information (FI) matrix and is given by [28]

$$\mathrm{CRLB}_\psi[m] = \mathrm{FI}_\psi^{-1}[m,m],$$

where ψ is the vector of unknown parameters to be estimated, which in the case of dipole estimation comprises the position and orientation of the dipole. Thus $\psi = [x_0, y_0, z_0, \Theta, \Phi]$ and m is 1, 2, 3, 4, or 5. The FI matrix is a 5×5 matrix calculated as follows:

$$\mathrm{FI}_\psi[m,n] = \sum_{i,j} E\!\left[\frac{\partial \ln p_{i,j}(k|\psi)}{\partial \psi[m]}\,\frac{\partial \ln p_{i,j}(k|\psi)}{\partial \psi[n]}\right],$$

where $p_{i,j}(k|\psi)$ is the probability density function (PDF) of the photon count k at the pixel in the ith row and jth column, E denotes expectation, ln the natural logarithm, and the indices m, n are 1, 2, 3, 4, or 5. The FI is additive, so the summation adds the FI contributions of all detector pixels. For the multiple-channel systems described in the main text, the FI of the system is calculated by adding the FI of each channel. Different noise sources can be modeled through an appropriate choice of the PDF.
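For Poisson-distributed photon counts, as assumed in this paper, the expectation above can be evaluated in closed form. Writing $\mu_{i,j}(\psi)$ for the mean photon count at pixel (i, j), a standard result gives

```latex
\mathrm{FI}_\psi[m,n] \;=\; \sum_{i,j} \frac{1}{\mu_{i,j}(\psi)}
\,\frac{\partial \mu_{i,j}(\psi)}{\partial \psi[m]}
\,\frac{\partial \mu_{i,j}(\psi)}{\partial \psi[n]},
```

so that only the noiseless mean images and their derivatives with respect to the parameters are needed in the calculation.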

To calculate the CRLB, we first calculate the image at the detector as described in Section 2. A Poisson noise model is then assumed for the images. The derivative of the natural logarithm of these images (the PDF) is taken with respect to each of the five variables. Finally, the expectations from all pixels are summed to obtain the FI. This procedure is applied to every channel of the system, and the total FI is obtained by adding the per-channel contributions. The CRLB is then obtained by inverting the FI matrix and taking the respective diagonal element for each unknown parameter; the standard deviation σ is the square root of the CRLB. The number of photons captured depends on the orientation of the dipole with respect to the objective lens because the intensity of dipole radiation varies as sin²θ, where θ is the angle from the dipole axis. Thus, a dipole perpendicular to the optical axis yields more detected photons than any other orientation as long as the half-angle of the captured cone is less than 90°. Among the representative orientations considered, the dipole along ŷ (Θ = 90°, Φ = 90°) therefore has the most photons captured, and the PDF is normalized with respect to this orientation for the CRLB calculations. We use a total of 5000 photons for the dipole along ŷ.
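As a concrete illustration of this pipeline, the sketch below replaces the dipole image of Section 2 with a simple 2D Gaussian spot and computes the Poisson-noise FI from finite-difference derivatives of the mean image; `gaussian_image` and `fisher_information` are hypothetical names for this toy example, not the paper's code.

```python
import numpy as np

def gaussian_image(x0, y0, n_photons=5000, s=1.0, npix=101, width=10.0):
    """Mean photon count per pixel for a 2D Gaussian spot (a toy stand-in
    for the dipole images of Section 2)."""
    a = width / npix                          # pixel size
    c = (np.arange(npix) + 0.5) * a - width / 2
    X, Y = np.meshgrid(c, c, indexing="ij")
    g = np.exp(-((X - x0)**2 + (Y - y0)**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return n_photons * g * a**2

def fisher_information(model, psi, h=1e-4):
    """Poisson-noise FI: FI[m,n] = sum_ij (dmu/dpsi_m)(dmu/dpsi_n)/mu_ij,
    with derivatives taken by central finite differences."""
    psi = np.asarray(psi, dtype=float)
    derivs = []
    for m in range(psi.size):
        dp = np.zeros_like(psi)
        dp[m] = h
        derivs.append((model(*(psi + dp)) - model(*(psi - dp))) / (2 * h))
    mu = model(*psi)
    keep = mu > 1e-12 * mu.max()              # ignore essentially empty pixels
    return np.array([[np.sum(dm[keep] * dn[keep] / mu[keep]) for dn in derivs]
                     for dm in derivs])

fi = fisher_information(gaussian_image, [0.0, 0.0])  # psi = (x0, y0)
crlb = np.linalg.inv(fi)
sigma = np.sqrt(np.diag(crlb))                # lower bounds on std(x0), std(y0)
```

For this toy model the bound reproduces the familiar localization result σ_x ≈ s/√N; the dipole case differs only in that the mean images and their five-parameter derivatives come from the vectorial Green's tensor model.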

The CRLB for the isotropic point emitter is calculated in the same way using the Poisson noise model. However, since localization of an isotropic point source is a three-parameter problem, the unknown parameter vector in this case is $\psi = [x_0, y_0, z_0]$, leading to a 3×3 FI matrix.

Acknowledgments

We thankfully acknowledge support from NSF awards DBI-0852885 and DGE-0801680.

References and links

1. W. E. Moerner, "New directions in single-molecule imaging and analysis," Proc. Natl. Acad. Sci. U.S.A. 104(31), 12596–12602 (2007).

2. E. Toprak and P. R. Selvin, "New fluorescent tools for watching nanometer-scale conformational changes of single molecules," Annu. Rev. Biophys. Biomol. Struct. 36(1), 349–369 (2007).

3. B. Huang, W. Wang, M. Bates, and X. Zhuang, "Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy," Science 319(5864), 810–813 (2008).

4. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, "Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function," Proc. Natl. Acad. Sci. U.S.A. 106(9), 2995–2999 (2009).

5. E. Betzig, G. H. Patterson, R. Sougrat, O. W. Lindwasser, S. Olenych, J. S. Bonifacino, M. W. Davidson, J. Lippincott-Schwartz, and H. F. Hess, "Imaging intracellular fluorescent proteins at nanometer resolution," Science 313(5793), 1642–1645 (2006).

6. S. T. Hess, T. P. K. Girirajan, and M. D. Mason, "Ultra-high resolution imaging by fluorescence photoactivation localization microscopy," Biophys. J. 91(11), 4258–4272 (2006).

7. M. J. Rust, M. Bates, and X. Zhuang, "Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM)," Nat. Methods 3(10), 793–796 (2006).

8. M. Böhmer and J. Enderlein, "Orientation imaging of single molecules by wide-field epifluorescence microscopy," J. Opt. Soc. Am. B 20(3), 554 (2003).

9. K. I. Mortensen, L. S. Churchman, J. A. Spudich, and H. Flyvbjerg, "Optimized localization analysis for single-molecule tracking and super-resolution microscopy," Nat. Methods 7(5), 377–381 (2010).

10. M. A. Lieb, J. M. Zavislan, and L. Novotny, "Single-molecule orientations determined by direct emission pattern imaging," J. Opt. Soc. Am. B 21(6), 1210 (2004).

11. M. R. Foreman, C. M. Romero, and P. Török, "Determination of the three-dimensional orientation of single molecules," Opt. Lett. 33(9), 1020–1022 (2008).

12. M. F. Juette, T. J. Gould, M. D. Lessard, M. J. Mlodzianoski, B. S. Nagpure, B. T. Bennett, S. T. Hess, and J. Bewersdorf, "Three-dimensional sub-100 nm resolution fluorescence microscopy of thick samples," Nat. Methods 5(6), 527–529 (2008).

13. S. Ram, J. Chao, P. Prabhat, E. S. Ward, and R. J. Ober, "A novel approach to determining the three-dimensional location of microscopic objects with applications to 3D particle tracking," Proc. SPIE 6443, 64430D (2007).

14. S. R. P. Pavani, J. G. DeLuca, and R. Piestun, "Polarization sensitive, three-dimensional, single-molecule imaging of cells with a double-helix system," Opt. Express 17(22), 19644–19655 (2009).

15. G. Grover, S. Quirin, C. Fiedler, and R. Piestun, "Photon efficient double-helix PSF microscopy with application to 3D photo-activation localization imaging," Biomed. Opt. Express 2(11), 3010–3020 (2011).

16. G. Grover, S. R. P. Pavani, and R. Piestun, "Performance limits on three-dimensional particle localization in photon-limited microscopy," Opt. Lett. 35(19), 3306–3308 (2010).

17. F. Aguet, S. Geissbühler, I. Märki, T. Lasser, and M. Unser, "Super-resolution orientation estimation and localization of fluorescent dipoles using 3-D steerable filters," Opt. Express 17(8), 6829–6848 (2009).

18. S. Quirin, S. R. P. Pavani, and R. Piestun, "Optimal 3D single-molecule localization for superresolution microscopy with aberrations and engineered point spread functions," Proc. Natl. Acad. Sci. U.S.A. 109(3), 675–679 (2012).

19. D. Patra, I. Gregor, and J. Enderlein, "Image Analysis of Defocused Single-Molecule Images for Three-Dimensional Molecule Orientation Studies," J. Phys. Chem. A 108(33), 6836–6841 (2004).

20. A. P. Bartko and R. M. Dickson, "Imaging Three-Dimensional Single Molecule Orientations," J. Phys. Chem. B 103(51), 11237–11241 (1999).

21. J. Engelhardt, J. Keller, P. Hoyer, M. Reuss, T. Staudt, and S. W. Hell, "Molecular orientation affects localization accuracy in superresolution far-field fluorescence microscopy," Nano Lett. 11(1), 209–213 (2011).

22. J. Enderlein, E. Toprak, and P. R. Selvin, "Polarization effect on position accuracy of fluorophore localization," Opt. Express 14(18), 8111–8120 (2006).

23. S. Stallinga and B. Rieger, "Accuracy of the Gaussian Point Spread Function model in 2D localization microscopy," Opt. Express 18(24), 24461–24476 (2010).

24. E. Toprak, J. Enderlein, S. Syed, S. A. McKinney, R. G. Petschek, T. Ha, Y. E. Goldman, and P. R. Selvin, "Defocused orientation and position imaging (DOPI) of myosin V," Proc. Natl. Acad. Sci. U.S.A. 103(17), 6495–6499 (2006).

25. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University Press, 2006), Chap. 10.

26. A. Agrawal, S. Quirin, G. Grover, and R. Piestun, "Limits of 3D Dipole Localization and Orientation Estimation with Application to Single-Molecule Imaging," in Computational Optical Sensing and Imaging, OSA Technical Digest (CD) (Optical Society of America, 2011), paper CWA4.

27. M. R. Foreman and P. Török, "Fundamental limits in single-molecule orientation measurements," New J. Phys. 13(9), 093013 (2011).

28. S. M. Kay, Fundamentals of Statistical Signal Processing, Volume I: Estimation Theory (Prentice Hall, 1993), Chap. 3.


