
High-speed, 3-dimensional, telecentric imaging

Open Access

Abstract

The design, testing and operation of a system for telecentric imaging of dynamic objects are presented. The simple system is capable of rapid electronic scanning of a single focal plane within a specimen, or of simultaneous focusing on multiple planes whose depth and relative spacing within the specimen can be changed electronically. Application to studies of dynamic processes in microscopy is considered.

©2006 Optical Society of America

1. Introduction

Imaging 3-dimensional structures usually involves re-focusing the imaging system in order to record a series of images on planes at different depths through the object. Such a ‘through focal series’ delivers a series of images with equal magnification (the object and image distances remain constant as the imaging system, or the specimen, is physically translated to focus on a new layer in the object). However, in the case of applications in which the object may have dynamic properties, the lengthy ‘through focal series’ process loses valuable information about the time-dependent 3-dimensional object structure.

For some applications the use of wave-front coding [1,2] provides an alternative solution, but this process can severely compromise the signal-to-noise ratio and the images recorded are not easily interpreted without signal processing. The need to process all images in order to decide whether the data contain interesting features can be a significant drawback to such an approach. For laser-illuminated specimens holographic recording techniques [3] may be used, but such a scheme cannot work with self-luminous objects (e.g. fluorescence microscopy), requires high-resolution camera systems and, as an interferometric method using highly-coherent beams, suffers from laser speckle and from the effects of instrumental vibration. Scanning confocal microscopy is now widely used for 3-dimensional imaging, but in the context of the imaging of living biological structures the flux inefficiency of this technique can lead to specimen damage through over-exposure. A review of these various approaches for live-cell imaging applications has been published [4].

A recent development has facilitated simultaneous recording, on a single focal plane, of 2-dimensional slices at different depths through the object [5]. However, in the method described to date the object-to-image distance changes slightly for each plane imaged, with the result that the images of the different object layers are recorded at different magnifications [6]. The need to re-scale the images of each layer when reconstructing the 3-dimensional object not only complicates the data processing, but introduces the possibility of error. Lens systems that provide equal magnification for all object distances are described as telecentric and are widely used in metrology in order to avoid such magnification changes and the associated re-scaling.

When imaging dynamic processes such as virus attack [7], transport across cell membranes, the dynamics of cell division [8] or micro-fluidics [9], an ability simultaneously to image multiple depths within the specimen, or rapidly to change the depth at which the specimen is in focus, offers access to additional information about the dynamics of the process studied. Achieving this at constant magnification simplifies interpretation.

Here we describe a system in which multiple depths within the specimen can be imaged simultaneously, or in rapid succession, with constant magnification and under electronic control. In the former case the method is intrinsically a narrow-band method, suitable for imaging fluorescent markers or laser-illuminated specimens. In the latter case the method is also suitable for full-colour imaging. The two implementations may be combined to provide flexible, programmable 3-dimensional imaging. Electronic control or simultaneous multi-depth imaging in such a system can be used in vitro or in vivo and in dynamic imaging applications, or in high-precision metrology where it is necessary to make very accurate measurements and to define accurately the plane to which measurements refer.

In the following we will firstly formulate a description of the telecentric 3-dimensional imaging system, showing how these properties are achieved, and then consider some of the applications for which the system is suited.

2. Basic optical design

The basic lens equation for optical image formation may be written

\frac{1}{f} = \frac{1}{u} + \frac{1}{v}, \tag{1}

where f, u and v represent the lens focal length, the object distance and the image distance respectively. The sign convention used [10] is that the focal length, f, is positive for a converging lens, and the object and image distances are positive with the object to the left of the lens and the image plane to the right of the lens (note that u may be treated as negative in some sign conventions). The magnification of the image may be written [10] as shown in Eq. (2), where a positive magnification indicates an erect image

m = -\frac{v}{u} = \frac{f - v}{f}. \tag{2}

In the case that the lens is a compound system (such as an achromat) rather than a simple lens, the object and image distances are to be measured from the First and Second Principal planes of the compound system respectively.
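
As a quick illustration of Eqs. (1) and (2), the following sketch (in Python) computes the image distance and magnification for a thin lens; the focal length and object distance used are illustrative assumptions, not values from the experiments described later.

```python
# Minimal numeric sketch of Eqs. (1) and (2) for a thin lens.
# f and u are illustrative values in mm, not the paper's hardware.
f, u = 50.0, 62.5

v = 1.0 / (1.0 / f - 1.0 / u)          # image distance from 1/f = 1/u + 1/v
m = -v / u                             # Eq. (2): negative value => inverted image
assert abs(m - (f - v) / f) < 1e-9     # the two forms of Eq. (2) agree

print(f"v = {v:.1f} mm, m = {m:.2f}")  # v = 250.0 mm, m = -4.00
```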

A general compound lens system may be treated by evaluating the combination focal length and the positions of the principal planes relative to one of the lenses. Simple formulae for this are given below and may also be found in lens catalogues [11]. The combination focal length, fc, for a system consisting of two lenses of focal lengths f1 and f2 may be expressed

f_c = \frac{f_1 f_2}{f_1 + f_2 - s}, \tag{3}

where s is the unsigned separation between the two lenses. Either or both of these lenses may, of course, be a compound lens. The positions of the First (p1) and Second (p2) Principal Planes of the compound system relative to the First and Second Principal Planes of lens f1 may be written

p_1 = \frac{s f_1}{f_1 + f_2 - s} \tag{4}

and

p_2 = \frac{s (f_1 - s)}{f_1 + f_2 - s} \tag{5}

respectively. In both cases the positive direction is to the right (i.e. image side) of the lens.

To express the positions of the Principal Planes with respect to lens f2 one merely reverses the roles of f1 and f2 in Eqs. (4) and (5). The compound system acts as a lens having the focal length defined in Eq. (3), but the compound lens is effectively located in plane p1 as far as the object side of the system is concerned and in plane p2 as far as the image side of the system is concerned. The distance between the planes p1 and p2 may be disregarded in terms of the imaging equations – it matters only when considering the physical space occupied by the imaging system.
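
A minimal numeric sketch of Eqs. (3)–(5) follows; the routine and the example focal lengths and separation are illustrative assumptions rather than parameters of the apparatus described later.

```python
# Sketch of Eqs. (3)-(5) for a two-lens combination; values in mm.
def combination(f1, f2, s):
    """Return (fc, p1, p2): the combined focal length and the positions of the
    First and Second Principal Planes of the combination, measured from the
    corresponding principal planes of lens f1 (positive towards the image side)."""
    d = f1 + f2 - s
    fc = f1 * f2 / d          # Eq. (3)
    p1 = s * f1 / d           # Eq. (4)
    p2 = s * (f1 - s) / d     # Eq. (5)
    return fc, p1, p2

print(combination(50.0, 100.0, 30.0))  # ~ (41.67, 12.5, 5.0)
print(combination(50.0, 100.0, 50.0))  # s = f1 gives (50.0, 25.0, 0.0)
```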

From Eq. (1) applied to a compound lens we have 1/fc = 1/(u + p1) + 1/(v − p2), where u, v, p1 and p2 are all measured relative to the First and Second Principal Planes of lens f1, as appropriate. Multiplying through by (v − p2) and re-arranging, we can write the image magnification as m = −(v − p2)/(u + p1) = 1 − (v − p2)/fc. Substituting for fc and p2 using Eqs. (3) and (5) leads to

m = -\frac{(f_1 - s)(v - s)}{f_1 f_2} + \frac{f_1 - v}{f_1}. \tag{6}

The second term on the right hand side in (6) is the usual magnification, as expressed in (2) for a simple, thin lens. The first term on the rhs in (6) is quadratic in s and vanishes when s = f1, in which case the image magnification becomes independent of f2.

Starting again with Eq. (1) applied to a compound lens, we find that the in-focus object distance is given by

u = \frac{f_c (v - p_2)}{v - p_2 - f_c} - p_1. \tag{7}

Substitution for fc, p1 and p2 in (7) does not lead to a useful expression in terms of the dependence of u on f1, f2 and s, so the equation is left in the above form.
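
As a consistency check on the algebra, the short sketch below evaluates Eq. (6) against the principal-plane form m = 1 − (v − p2)/fc and evaluates the in-focus object distance of Eq. (7); the numerical values are assumptions chosen only for illustration.

```python
# Numeric check of Eqs. (6) and (7) against the principal-plane formulation;
# f1, f2, s and the image distance v are illustrative values in mm.
f1, f2, s, v = 50.0, 100.0, 30.0, 200.0

d = f1 + f2 - s
fc, p1, p2 = f1 * f2 / d, s * f1 / d, s * (f1 - s) / d

m_eq6 = -(f1 - s) * (v - s) / (f1 * f2) + (f1 - v) / f1  # Eq. (6)
m_ppl = 1.0 - (v - p2) / fc                              # m = 1 - (v - p2)/fc
assert abs(m_eq6 - m_ppl) < 1e-9                         # the two forms agree

u = fc * (v - p2) / (v - p2 - fc) - p1                   # Eq. (7): in-focus object distance
print(f"m = {m_eq6:.3f}, u = {u:.2f} mm")                # m = -3.680, u = 40.49 mm
```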

3. Telecentricity

If, now, we allow s = f1 we find that Eqs. (3), (4) and (5) reduce to

f_c = f_1, \qquad p_1 = \frac{f_1^2}{f_2}, \qquad p_2 = 0. \tag{8}

The image magnification, expressed through Eq. (6), depends only on f1, and from Eq. (7) the object distance is

u = \frac{v f_1}{v - f_1} - \frac{f_1^2}{f_2}. \tag{9}

Thus for a fixed objective focal length f1 and a fixed image distance v, the object plane brought to focus may be changed at constant magnification by varying f2. In consequence the system is telecentric, but its focus may be changed by altering the focal length f2 using a tuneable lens such as a liquid crystal lens [12], a tuneable water/oil interface [13], a deformable mirror [14] or any other variable focal-length system, including off-axis Fresnel lenses [5].

Clearly if f2, the focal length of the second lens, is electronically programmable, Eqs. (8) and (9) show that when s = f1 the in-focus object plane may be scanned electronically through the object depth, whilst maintaining the image magnification and the position of the in-focus image plane. An interesting application of this would be to include a programmable lens of long focal length, positioned to achieve telecentric operation, in the construction of a family of microscope objectives. Such objectives would all deliver electronically-variable focus at an image magnification determined by the objective focal length, although the flux-collection efficiency would vary somewhat with the object plane brought to focus and the variable-focus lens may need to subsume the correction of other aberrations, such as spherical aberration, as a function of the object depth brought to focus.
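
The sketch below illustrates this electronic re-focusing under the telecentric condition s = f1, using Eq. (9) for the in-focus object plane and Eq. (6) with s = f1 for the magnification; the objective focal length, image distance and programmable focal lengths are assumed, illustrative values.

```python
# Electronic re-focusing at constant magnification (Eqs. (8) and (9), s = f1).
# f1, v and the programmable focal lengths f2 are illustrative values in mm.
f1, v = 50.0, 250.0

for f2 in (2000.0, 1000.0, 500.0, -500.0):   # positive and negative programmable lens
    u = v * f1 / (v - f1) - f1**2 / f2       # Eq. (9): in-focus object plane
    m = (f1 - v) / f1                        # Eq. (6) with s = f1: independent of f2
    print(f"f2 = {f2:7.1f} mm -> u = {u:5.2f} mm, m = {m:.2f}")
```

Varying f2 shifts the in-focus object plane by f1²/f2 while the magnification stays fixed at (f1 − v)/f1.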

In the case of off-axis Fresnel lenses the focal length f2 is different in each diffraction order and, if the diffraction order considered is denoted by q, the focal length for each diffraction order can be designated qf2. Thus, from Eq. (9), the layers brought simultaneously to focus on a single focal plane are separated along the z-axis by

\Delta z_q = \frac{f_1^2}{{}_qf_2} \tag{10}

and all have magnification given by Eq. (6) with s = f1. Consequently the system acts as a telecentric system simultaneously delivering spatially-separated images of different object layers in the different diffraction orders. As has been shown previously [5], the focal length of an off-axis Fresnel lens varies inversely with the diffraction order considered. Thus Eq. (8) shows that the higher-order diffracted beams can be used to obtain a sequence of equally-separated, in-focus object layers, all with the same magnification.
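
The following sketch evaluates Eq. (10), assuming (per [5]) that the focal length in diffraction order q is f2/q, where f2 denotes the first-order value; the f1 = 50 mm objective and 1 m first-order focal length are illustrative numbers only.

```python
# Separation of the object layers imaged in each diffraction order, Eq. (10).
# Assumes the order-q Fresnel-lens focal length is f2/q; values in mm.
f1, f2 = 50.0, 1000.0

for q in (-2, -1, 1, 2):          # q = 0 (the undiffracted beam) gives no shift
    f2_q = f2 / q                 # focal length in diffraction order q
    dz = f1**2 / f2_q             # Eq. (10), equal to q * f1**2 / f2
    print(f"order {q:+d}: object plane displaced by {dz:+.2f} mm")
```

Because the order-q focal length scales as 1/q, the displacements come out equally spaced (±2.5 mm, ±5.0 mm here), as stated above.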

To increase the flexibility of this optical system we recall that f2 can also be the focal length of a compound lens system. Suppose, therefore, that the compound system consists of an off-axis Fresnel lens with diffraction-order dependent focal length qf2 mounted co-planar with a programmable lens of liquid crystal or other type. If the focal length of the programmable lens is fp we see from Eq. (3) (with s = 0 for co-planar elements) that the focal length of the compound second lens becomes pfq = (qf2 fp)/(qf2 + fp). If this compound second lens is located one focal length from the first lens, Eqs. (8) will apply and we then find that

f_c = f_1, \qquad p_1 = \frac{f_1^2}{{}_pf_q} = \frac{f_1^2}{f_p} + \frac{f_1^2}{{}_qf_2}, \qquad p_2 = 0. \tag{11}

The expression for p1 in Eqs. (11) shows that the separation of the planes brought simultaneously to focus depends only on f1 and qf2, the diffraction-order dependent focal length of the off-axis Fresnel lens, but these displacements are measured from a mean position determined only by f1 and fp. For fp = ∞ Eqs. (11) become identical to Eqs. (8); thus by changing the focal length of the programmable lens the fixed-separation planes can be scanned electronically through the specimen depth. A further significance of Eqs. (11) is that fp could represent the order-dependent focal length of a second off-axis Fresnel lens and that this second Fresnel lens may be used [5] to image 9 equi-separated in-focus object planes on a single detector plane. Such considerations can be taken to higher diffractive orders, although the spreading of the available flux between such multiple images will prove a practical limitation. The Fresnel lens designs may be modified to include corrections for depth-dependent spherical aberration [5].
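
A short sketch of Eqs. (11) follows, again assuming an order-q Fresnel focal length of f2/q; the objective, Fresnel and programmable-lens focal lengths are illustrative assumptions.

```python
# Sketch of Eqs. (11): an off-axis Fresnel lens (order-q focal length f2/q)
# co-planar with a programmable lens of focal length fp, the pair placed a
# distance f1 from the objective. All values (mm) are illustrative.
f1, f2 = 50.0, 1000.0            # objective and first-order Fresnel focal lengths

def p1_shift(q, fp):
    """Displacement p1 of the First Principal Plane for order q, Eq. (11)."""
    fresnel_term = f1**2 * q / f2            # f1^2 / (f2/q); zero for q = 0
    programmable_term = f1**2 / fp           # common offset set by the tuneable lens
    return programmable_term + fresnel_term

for fp in (float("inf"), 2000.0, 1000.0):    # fp = inf reproduces Eqs. (8)
    shifts = [p1_shift(q, fp) for q in (-1, 0, 1)]
    print(f"fp = {fp:>6}: p1 for orders (-1, 0, +1) = {shifts} mm")
```

The inter-plane separation (set by f1 and the Fresnel-lens focal lengths) is unchanged as fp is varied; only the common offset of the whole stack of in-focus planes moves, which is the scanning behaviour described above.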

4. Experimental tests

In order to test the tolerances and to confirm the above conclusions using real lenses, a series of experimental tests was conducted to assess both the positions of the in-focus planes and the magnifications of the images in those planes.

To validate the separation of the in-focus planes we used a 50mm focal-length achromat and a monomode, He-Ne laser-energised fibre source arranged to give a magnification of approximately 4x (Fig. 1). The source position and the position of an off-axis Fresnel lens could both be changed using a micrometer translation stage. For a series of positions (s) of the off-axis Fresnel lens the source was translated in order to determine the positions of the in-focus object planes (u) in each diffraction order. These measurements are compared to the quadratic curve calculated from Eq. (7) in Fig. 2.

Fig. 1. Schematic of the optical system.

Fig. 2. The distance to the in-focus object plane, for each diffraction order, as a function of the separation between the lens and the off-axis Fresnel lens.

A second set of experiments was conducted using a USAF bar chart as an object. The energised fibre was moved further back and combined with a lens to create a collimated illumination beam for the bar chart, which was placed in the position of the point source in Fig. 1. As in the first set of experiments, for varying positions (s) of the off-axis Fresnel lens the target was translated to an in-focus position. The separation between features in the chart was measured in the image plane in order to assess the image magnification. The in-focus object-distance data from this experiment are shown in Fig. 3, plotted against a theoretical curve obtained from Eq. (7).

Fig. 3. The distance to the in-focus object plane, for each diffraction order, as a function of the separation between the lens and the off-axis Fresnel lens using a USAF bar chart as the object.

Systematic errors in the measurements might be expected from several sources. For the experiments conducted, and at the wavelength used, the lens manufacturer gives a focal length for the achromat of 50.1±0.5mm, a 1% tolerance. The objective lens and the off-axis Fresnel lens were mounted in lens tubes and our estimate of the discrepancy between the external fiducial marks and the planes of these elements is ±1mm and ±0.5mm respectively. The positions of the principal planes of the achromat were quoted for λ=588nm and not for the 633nm radiation used. The off-axis Fresnel lens was plotted on a high-accuracy system, but no assessment to verify the effective focal length of this element was possible, nor has the effect of its substrate on the converging beam been taken into account. Finally, the position of the photo-sensitive surface of the CCD was estimated using a fiducial mark on the camera case that we estimate has an error of ±0.5mm relative to the CCD chip, and we were unable to confirm that all optical components were accurately normal to the optic axis. It was found that a good fit of the measured data to the theoretical curves is obtained by assuming a cumulative systematic error of 2mm, plus a further error of 1.5mm introduced by repositioning of the equipment between the first and second experiments. The results shown in Figs. 2 and 3 are based on these estimates of the systematic errors; the fits to these curves give 20–40μm standard deviation between measurement and theory. It is clear that these fits (especially the zero-order focus in Fig. 3) contain residual systematic errors that could be reduced further through careful optimization of the experimental parameters.

Random errors arise from visual estimation of ‘best focus’, pixellation of the images when determining image size, micrometer reading and ruler readings (especially parallax problems, although since these readings were made infrequently at the start of data runs they can almost be regarded as systematic in nature).

Finally, the measured image magnification as a function of the lens to off-axis Fresnel lens separation is plotted in Fig. 4 against the theoretical curves calculated using Eq. (6) with the same parameters as used in Fig. 3. The spacing in the bar chart was measured experimentally and we have assumed a 3% error in this measurement; this gives the best fit between theory and experiment. Here the granularity in the results due to the assessment of magnification in the pixellated images is clearly visible in the data.

Fig. 4. Magnification plot for experiment 2 showing image magnification in the diffraction orders as a function of lens to off-axis Fresnel lens separation.

5. Discussion

Physically, all telecentric versions of the imaging system described above preserve the effective image distance and the effective object distance and thus preserve the image magnification, which depends on the ratio of these two. However, the reference plane from which the object distance is to be measured, the First Principal Plane p1, can be displaced without altering any other properties of the imaging system. This displacement of the First Principal Plane displaces the in-focus object plane by an identical distance.

This system can therefore be exploited for 3-dimensional imaging in applications such as metrology or bio-medical research, where it is important that images formed throughout the depth of a specimen have the same magnification, in order to avoid the labour and errors associated with a need to re-scale the images.

Three different approaches to 3-dimensional imaging can be taken using the system discussed here.

Firstly, the inclusion within a microscope objective of focal length fobj of an electronically-tuneable lens, located at a plane a distance fobj from the objective Secondary Principal Plane, makes it possible to construct a microscope system with constant magnification that can be re-focused electronically. Such electronic re-focusing facilitates the rapid recording of through-focal series, the electronic tracking of single or multiple objects in vitro, or fast scanning in depth to locate an interesting section of the object.

Secondly, through the use of an off-axis Fresnel zone plate, planes at multiple depths within an object may be imaged simultaneously and at constant magnification.

Thirdly, by combining the first two approaches one may scan in-focus layers with constant plane separation through the object.

The technique described is reasonably efficient from a photometric viewpoint. The use of a single programmable lens to obtain fast electronic scanning is highly efficient and the binary (2-level) diffraction grating can deliver approximately 80% of the available flux in the first three diffraction orders (i.e. about 20% loss).

In the case of tracking fluorescent particles in fluid-flow or in vivo/in vitro imaging, the process of imaging different object planes yields de-focused images of the point-source emitters, from which the three-dimensional positions of those emitters may be determined [9]. A time sequence of such images provides (x,y,z,t) information on the dynamics of the processes observed. We have demonstrated [15] that the data thus obtained can yield wavefront sag to ±0.7nm rms which, for a 2mm diameter objective with a 10mm focal length, translates to an uncertainty of ±35nm rms in depth.

Whichever of the three approaches described above is used to achieve 3-dimensional imaging, steps will need to be taken to compensate for the effects of spherical aberration when the planes imaged are far from the normal focus condition for the optical system. As noted earlier, the effects of spherical aberration on multi-plane images may be compensated by modifying the off-axis ‘Fresnel lens’ design (through the inclusion of 4th-order curvature in the design of this diffractive element [5]). For LC-operated or other programmable lens elements, the inclusion of spherical aberration correction could, at least in principle, be subsumed within the operation of the device.

Typical focal lengths for the off-axis Fresnel lenses considered here are ~1–2m. For a 2mm diameter lens this corresponds to an optical path length shift of about 500nm on axis. Modally-addressed LC-operated programmable lenses with 5mm diameter have already been demonstrated with focal lengths as short as 0.5m [12]. Existing liquid-crystal lenses appear suitable for video-rate operation, although developments of dual-frequency liquid crystals appear to offer increased switching speed [16]. At the cost of more-complicated optics, membrane mirrors capable of much higher speeds could be used [14]. These technologies have been demonstrated in combination for tracking in confocal microscopy [17].
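
As a rough check of that figure (a sketch assuming the paraxial on-axis optical path difference of a thin lens of semi-aperture r and focal length f, and taking f = 1 m from the quoted range):

\Delta W \approx \frac{r^2}{2f} = \frac{(1\,\mathrm{mm})^2}{2 \times 1\,\mathrm{m}} = 5\times10^{-7}\,\mathrm{m} \approx 500\,\mathrm{nm}.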

The 3-dimensional imaging technique has been presented here in the context of bright-field imaging, but the approach should be equally valid when applied to dark-field imaging and to phase-contrast imaging. Phase-contrast imaging applications may be particularly interesting because the absence of bleachable dyes means that specimen damage is reduced.

Acknowledgements

Funding from the EPSRC and from PPARC under the Smart Optics Faraday Partnership, with support from DSTL, is acknowledged. HIC acknowledges support from the UK ATC.

References and links

1. S. Bradburn, W. T. Cathey, and E. R. Dowski, "Realizations of focus invariance in optical digital systems with wave-front coding," Appl. Opt. 36, 9157–9166 (1997).

2. G. Muyo and A. R. Harvey, "Wavefront coding for athermalization of infrared imaging systems," in Electro-optical and Infrared Systems: Technology and Applications, R. G. Driggers and D. A. Huckridge, eds., Proc. SPIE 5612, 227–235 (2004).

3. P. Marquet et al., "Digital holographic microscopy: a noninvasive contrast imaging technique allowing quantitative visualization of living cells with subwavelength axial accuracy," Opt. Lett. 30, 468–470 (2005).

4. D. J. Stephens and V. J. Allan, "Light microscopy techniques for live cell imaging," Science 300, 82–86 (2003).

5. P. M. Blanchard and A. H. Greenaway, "Simultaneous multiplane imaging with a distorted diffraction grating," Appl. Opt. 38, 6692–6699 (1999).

6. P. M. Blanchard and A. H. Greenaway, "Broadband simultaneous multiplane imaging," Opt. Commun. 183, 29–36 (2000).

7. G. Seisenberger et al., "Real-time single-molecule imaging of the infection pathway of an adeno-associated virus," Science 294, 1929–1932 (2001).

8. A. K. Warner, J. H. Keen, and Y. L. Wang, "Dynamics of membrane clathrin-coated structures during cytokinesis," Traffic 7, 205–215 (2006).

9. C. E. Towers et al., "Three dimensional particle imaging by wave-front sensing," Opt. Lett. 31, 1220–1222 (2006).

10. E. Hecht, Optics (Addison Wesley Publishing Co., 1997).

11. Melles Griot, "Optics Guide," http://www.mellesgriot.com/products/optics/toc.htm (accessed March 2006).

12. A. F. Naumov, G. D. Love, M. Yu. Loktev, and F. L. Vladimirov, "Control optimization of spherical modal liquid crystal lenses," Opt. Express 4, 344–352 (1999).

13. L. Saurei et al., "Tunable liquid lens based on electrowetting technology: principle, properties and applications," presented at the 10th Annual Micro-optics Conference, Jena, Germany, 1–3 Sept. 2004.

14. P. Kurczynski, H. M. Dyson, and B. Sadoulet, "Large amplitude wavefront generation and correction with membrane mirrors," Opt. Express 14, 509–517 (2006).

15. S. Djidel and A. H. Greenaway, "Nanometric wavefront sensing," in 3rd International Workshop on Adaptive Optics in Industry and Medicine, S. R. Restaino and S. Teare, eds. (Starline Printing Inc., 2002).

16. A. K. Kirby and G. D. Love, "Fast, large and controllable phase modulation using dual frequency liquid crystals," Opt. Express 12, 1470–1475 (2004).

17. A. J. Wright et al., "Dynamic closed-loop system for focus tracking using a spatial light modulator and a deformable membrane mirror," Opt. Express 14, 222–228 (2005).
