
Accurate single image depth detection using multiple rotating point spread functions

Open Access

Abstract

In this article we present the simulation and experimental implementation of a camera-based sensor with low object-space numerical aperture that is capable of measuring the distance of multiple object points with an accuracy of 8.51 µm over a range of 20 mm. The overall measurement volume is 70 mm × 50 mm × 20 mm. The lens of the camera is upgraded with a diffractive optical element (DOE) that fulfills two tasks: replicating the single object point to a predefined pattern of K spots in the image plane and adding a vortex point spread function (PSF), whose shape and rotation are sensitive to defocus. We analyze the parameters of the spiral phase mask and discuss the depth reconstruction approach. By applying the depth reconstruction to each of the K replications and averaging the results, we experimentally show that the accuracy of the reconstructed depth signal can be improved by a factor of up to 3 with the replication approach. This replication method (also called the multipoint method) improves the accuracy not only of depth reconstruction but also of lateral position measurement. Therefore, the presented concept can be used as a single-camera 3D position sensor for multiple points with high lateral as well as depth resolution.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Image based position measurement provides a valuable tool for a wide range of industrial applications, such as deformation measurement of large structures [1], pose estimation of machine tools [2] or position measurement of coordinate measuring machines [3]. Those systems are based on frame-by-frame localization of features, such as markers, emitters or patterns attached to the object of interest. They typically use two or more cameras to reconstruct three-dimensional (3D) information based on triangulation.

In case of single-camera applications only two-dimensional (2D) position information can be acquired, since the axial dimension ($z$) is lost when 3D world coordinates are imaged onto a 2D image sensor. However, several techniques known from microscopy allow precise depth reconstruction from a 2D image. A simple approach to extract 3D position information of an object point from a single 2D image is to analyze the point spread function (PSF) of the imaging system. The PSF represents the detected light distribution on the image plane when observing a point object. When the object moves out of the depth of field, the imaged spot becomes defocused and, therefore, grows in diameter. This change of diameter can be attributed to a change of depth. The sensitivity of such techniques depends strongly on the object-space numerical aperture (NA) of the imaging system: large numerical apertures correspond to high sensitivity but a small measurement range.

Especially in the field of biomedical microscopy and particle tracking, modified PSFs are frequently applied to enhance the precision and measurement range of depth measurement or to achieve imaging resolution beyond the diffraction limit [4]. The PSF is modified such that a $z$ position change of an object can be reconstructed from the changing shape of the PSF. This is achieved by phase modulation using a computer generated hologram (CGH) encoded in a diffractive optical element (DOE) or loaded onto a spatial light modulator (SLM). Popular examples of PSF modifications are depth detection methods based on astigmatism [5,6], the corkscrew PSF (CS-PSF) [7], the Tetrapod PSF (TP-PSF) [8,9], the self-bending PSF (SB-PSF) [10] and the double-helix PSF (DH-PSF) [11–13].

The astigmatic approach is characterized by differing focal points of sagittal and meridional rays, leading to an expanding elliptic PSF whose orientation rotates by 90$^{\circ }$ when the object passes the object-side plane of focus. In case of the CS- and DH-PSF, the observed PSF consists of one or two points rotating around each other depending on the defocus shift. The TP-PSF consists of an information-optimized distribution of two points whose distance increases with defocus and which form a complex pattern near focus.

For the sake of comparison, Fig. 1 shows an overview of the measurement range to accuracy ratio for the different methods. Since Time-of-Flight (ToF) cameras also offer the possibility to measure the 3D position of an object with a single image, they are included in this chart as well. It can be seen that the rotating PSFs (double-helix, corkscrew and self-bending) together with the Tetrapod PSF reach ratios of up to 561, astigmatism up to 333 and ToF cameras up to 1000. The ratio of the proposed method is 2350. However, one has to keep in mind that most microscope applications work with fluorescent or scattering particles and, thus, the number of collected photons is smaller than for the proposed method, which uses an active light source and therefore benefits from an improved signal-to-noise ratio (SNR). In [14] the influence of the signal level on the achievable localization precision is discussed for different point emitters.

Fig. 1. Overview of the achievable measurement range - accuracy ratios for the different depth reconstruction methods: Corkscrew PSF (CS-PSF) [7], double-helix PSF (DH-PSF) [12,13,15], Tetrapod-PSF (TP-PSF) [8], SB-PSF [10], proposed Multipoint double-helix PSF (MP-DH-PSF), Astigmatism (Astig.) [5,6] and Time-of-Flight (ToF) [16–18]. Note: In the references no distinction between accuracy and precision was made.

For most industrial applications, the measurement ranges of the previously mentioned microscopy approaches are not large enough. Our goal is to develop a low-cost single-camera position sensor that is able to measure the 3D positions of multiple point emitters with an axial measurement range of millimeters to centimeters. The point emitters can, for example, be attached to the tool-center-point (TCP) and the work piece (WP) of a drilling machine or a 3D printer to measure their relative position, as described in [3,19]. Therefore, in this paper we investigate the application of a single-image depth measurement technique known from microscopy to a low-NA camera system and combine it with a holographic replication technique in order to achieve both high precision and a large measurement range. In contrast to other publications that apply a multi-spot pattern for illumination of the specimen [20–22], our replication technique is used in the image plane to improve the measurement accuracy. Other related publications for single-image mesoscopic 3D imaging are [23,24], where the PSF of an imaging system is modified in order to reconstruct depth information for a whole scene.

Astigmatism and the TP-PSF are not suited for our purpose, because their PSFs consist of an expanding elliptical pattern or two diverging points. For large defocus, those growing patterns interfere with the replications made by the multipoint method. In case of the CS-PSF, two sequential images with different phase masks need to be acquired to localize the axis of rotation. This time-sequential process leads to errors for moving objects. The SB-PSF uses a more complicated setup with multiple lenses and two beams of different polarization, making it unsuitable for our application. In case of the DH-PSF, two spots rotate around each other depending on defocus, and the centroids of both spots define the angle of rotation. No further components are required and the phase mask can be applied either by an SLM or a transmission DOE, making it perfectly suited for our purpose.

2. Principle of depth detection

2.1 Double-helix depth detection

Rotating optical beams are distinguished by an intensity profile whose transversal component rotates around the propagation axis [25,26]. In case of the DH-PSF, those rotating components are two helixes winding around each other, forming two rotating spots in the image plane. The degree and direction of rotation of those spots depend on the defocus of the observed object point. By calculating the centroids of both spots, the rotation angle can be derived easily. To create the DH-PSF, an efficient and easily realizable way of PSF modification is described in [11]. By applying a discrete spiral phase modulation, the generated spiral phase mask (SPM) offers full control of the shape and the rotation rate of the DH-PSF.

The complex field of the SPM can be calculated by [11]:

$$S(r, \phi) = \begin{cases} \mathrm{exp}(i \, l_n \, \phi) & \text{if } R\sqrt{\frac{n-1}{N}} < r \le R\sqrt{\frac{n}{N}}, \quad n = 1, \dots, N\\ 0 & \text{if } r > R \end{cases}$$
where $r$ is the radial and $\phi$ the azimuthal sampling coordinate. $R$ denotes the radius of the phase mask, $N$ is the total number of radial zones and $n$ is the index of a single zone. $l_n$ is the topological charge of the zones, given by $l_n= l_1+(n-1)\cdot \Delta l$. $\Delta l$ determines the number of spots of the helical PSF; for $\Delta l=2$, two spots are formed. The application of a SPM with $N$ = 19 radial zones encoded in a transmission DOE is illustrated in Fig. 2. The distance change $\Delta z$ of the object is converted by the phase mask into a rotation of the two spots.
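
The piecewise definition above translates directly into a sampled mask. The following sketch (a non-authoritative illustration; the grid size and mask radius are arbitrary choices, not values prescribed by the text) generates the complex field of the SPM on a square grid:

```python
import numpy as np

def spiral_phase_mask(size, R, N, l1=1, dl=2):
    """Sample the spiral phase mask of Eq. (1) on a size x size grid.

    Each of the N equal-area annular zones carries a vortex phase
    exp(i * l_n * phi) with topological charge l_n = l1 + (n - 1) * dl.
    Outside the mask radius R the field is zero.
    """
    x = np.linspace(-R, R, size)
    X, Y = np.meshgrid(x, x)
    r = np.hypot(X, Y)
    phi = np.arctan2(Y, X)
    S = np.zeros((size, size), dtype=complex)
    for n in range(1, N + 1):
        l_n = l1 + (n - 1) * dl
        inner = R * np.sqrt((n - 1) / N)
        outer = R * np.sqrt(n / N)
        zone = (r >= inner) & (r < outer)
        S[zone] = np.exp(1j * l_n * phi[zone])
    return S

# Example: N = 19 zones as in Fig. 2, 14 mm mask radius (assumed).
S = spiral_phase_mask(size=256, R=14e-3, N=19)
```

Encoding the phase angle of `S` as a phase-only relief then yields the DOE profile that produces the DH-PSF.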

Fig. 2. Generation of a DH-PSF with a spiral phase mask (SPM) consisting of $N$ = 19 radial zones. The SPM is encoded in a DOE, whose phase function is described by Eq. (1). A point light source at distance $z_0$$\pm$$\Delta z$ is imaged to the image plane at constant distance $z'_0$ by a lens, which has a DOE mounted in front of it. The distance change of the light source is converted to a rotation of the two spots that are visible in the image plane, as illustrated in the series of 7 images. For sake of visualization, the images are separated in $z'$ but belong to the $z'_0$ plane.

The number of radial zones $N$ and the NA of the system affect the distance between the two spots and therefore the rotation sensitivity with respect to defocus. These are the main design parameters for a DH-PSF depth measurement system (cf. Section 3).

2.2 Multipoint method

The multipoint method is a technique to enhance the accuracy of lateral position measurement in imaging applications [27]. The fundamental limitation of this accuracy is the wave nature of light, namely diffraction. The location where a single photon emitted by a perfect mathematical point arrives on the camera sensor can only be described statistically. The area characterizing this location is the PSF. Owing to the uncertainty of where a photon will impinge on the sensor (photon noise), increasing the number of photons improves the accuracy of position measurement. The total number of photons can be increased using temporal averaging, thereby reducing quantization, photon and thermal noise. However, fixed pattern noise and discretization cannot be reduced by temporal averaging. The principle behind the multipoint method is spatial averaging. In conventional spot position measurement, the position information of the emitted photons is detected by only a few pixels. By making the object point brighter and using a DOE for spot replication, the number of photons carrying useful position information is increased. The process of spot replication is shown in Fig. 3(a).

Fig. 3. Multipoint method. (a) Scheme of the multipoint method; (b) Image section showing one cluster with $K$ = 21 spots.

The lithographically fabricated DOE replicates each single object point to a predefined cluster of spots in the image plane. If the object point moves, all replicated spots are shifted by the same amount in the image plane. The multipoint replication can be described with a phase-only transmission hologram $\xi (\vec {r}_\perp )$ manipulating the phase of the light field according to

$$H(\vec{r}_\perp) = \mathrm{exp}(i \xi(\vec{r}_\perp))$$
The hologram $\xi (\vec {r}_\perp )$ is optimized using the direct binary search algorithm [28] and generates a convolution between the point spread function and $K$ separated delta-functions. In Cartesian coordinates it can be expressed as
$$h(x',y') = \sum_{k=1}^{K} \delta(x'-x_k,\, y'-y_k)$$
In Fig. 3(b) an image section showing one cluster with $K$ = 21 spots is illustrated. By averaging the centroids of all spots per cluster, the measurement accuracy of the lateral object position can ideally be improved by a factor of $\sqrt {K}$. This was shown in various publications [29–31] and accuracies of up to 0.0017 pixels have been reported [32]. Besides lateral position measurement, this spatial replication technique can also be applied to depth reconstruction, as shown in the following subsection.
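
The expected $\sqrt{K}$ gain from spatial averaging can be illustrated with a small Monte Carlo sketch. It assumes each replicated spot is localized with independent, zero-mean Gaussian noise; the single-spot noise level of 0.05 pixels is an arbitrary value chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
K, trials, sigma = 21, 20000, 0.05  # K = 21 spots; 0.05 px noise is assumed

# Single-spot estimate: one noisy centroid measurement per frame.
single = rng.normal(0.0, sigma, trials)
# Multipoint estimate: mean of K independently noisy centroids per frame.
multi = rng.normal(0.0, sigma, (trials, K)).mean(axis=1)

improvement = single.std() / multi.std()
print(improvement)  # close to sqrt(21) ~ 4.58
```

The ideal $\sqrt{K}$ factor holds only for independent noise per spot; correlated contributors such as fixed pattern noise do not average out this way.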

2.3 Combination of both methods

In this section, the concept of spatial averaging achieved by the multipoint method is combined with depth reconstruction based on the DH-PSF, as illustrated in Fig. 4. The phase mask encoded in the DOE is a combination of a multipoint hologram and a double-helix phase mask. In this example $K$ = 4 copies of the rotating PSF are generated. Each copy consists of the two rotating spots. If the light source is positioned at distance $z_0$, the rotation angle of each replication is equal to zero. With growing defocus, all replicated DH-PSFs start to rotate by the same amount. By averaging not only the lateral positions of each replication but also the rotation angles, both lateral and axial object position can be measured with improved accuracy.

Fig. 4. Combination of the DH-PSF and the multipoint method. The phase function encoded to the DOE is the sum of the SPM and the multipoint hologram. The point light source is replicated to four copies by the multipoint hologram and each copy consists of two rotating spots formed by the SPM.

3. Simulation of the rotating PSF

As already described, the distance of the two spots that rotate around each other is defined by the number of radial zones of the phase mask. The goal of the simulation is to find a compromise between rotation sensitivity and the distance of the two spots. Following [33], the rotation sensitivity to an axial depth change is given by

$$\frac{d \Theta}{d \Delta z}=\frac{\pi \rm{NA}^{2}}{\lambda N \Delta l}$$
where $\Theta$ is the rotation angle between the two spots, NA is the object-space numerical aperture, $\lambda$ is the wavelength and $N$ is the number of radial zones of the SPM. The rotation sensitivity to defocus increases with the square of the NA. So, for a system with small NA the only design parameter that can be used to manipulate the sensitivity is the number of radial zones $N$ ($\Delta l\,=\,2$ for two spots). On the other hand, the distance between both spots is also affected by $N$. For a system with small focal length, $N$ must be large so that the two spots can still be separated with a pixelated camera sensor. To further investigate the influence of $N$ in a low-NA imaging system, a numerical simulation based on Fourier optics has been performed.
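
Evaluating Eq. (4) with the parameters of the system described later (NA = 0.0595, $\lambda$ = 633 nm, $\Delta l$ = 2) makes the trade-off over $N$ concrete; a minimal sketch:

```python
import numpy as np

def rotation_sensitivity(NA, wavelength, N, dl=2):
    """Rotation sensitivity dTheta/d(Delta z) of Eq. (4), in rad per metre."""
    return np.pi * NA**2 / (wavelength * N * dl)

# Sensitivity in deg/mm for the candidate zone numbers of the simulation.
for N in (4, 10, 16, 22, 28):
    s_deg_per_mm = np.degrees(rotation_sensitivity(0.0595, 633e-9, N)) * 1e-3
    print(N, round(s_deg_per_mm, 2))
```

For $N$ = 22 this yields roughly 22.9 deg/mm, i.e. about 457.6$^{\circ }$ of total rotation over the 20 mm measurement range, consistent with the value quoted in Section 4.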

The emitted monochromatic light of a point source (wavelength $\lambda$) is transformed by the phase mask encoded in the DOE. The SPM is specified by $S$, the multipoint replication by $H$, the aperture function by $G$ and the focusing properties of the lens by a quadratic phase term. In accordance with [11,34], the complex amplitude $U$ in the image plane is given by the Fourier transform as

$$U(\vec{r}\,'_\perp) \sim \iint _{-\infty}^{+\infty} S(\vec{r}_\perp)\, H(\vec{r}_\perp)\, G(\vec{r}_\perp)\, \mathrm{exp}\left[ i k\left(\frac{1}{2z_0}-\frac{1}{2z}\right) |\vec{r}_\perp|^{2} \right] \, \mathrm{exp}\left[ 2 \pi i \frac{\vec{r}_\perp{\cdot}\vec{r}\,'_\perp}{\lambda z\,'}\right] \, d\vec{r}_\perp$$
where $\vec {r}_\perp \,=\,(\vec {x}_0\cos {\phi }+\vec {y}_0\sin {\phi })r_\perp$ and $\vec {r}\,'_{\perp }\,=\,(\vec {x}\,'_0\cos {\phi }+\vec {y}\,'_0\sin {\phi })r\,'_\perp$, the wavenumber $k\,=\,2\pi /\lambda$, $z\,=\,z_0+\Delta z$ and $z'\,=\,z'_0+\Delta z'$. $z_0$ and $z'_0$ denote the object and detection plane, respectively, $\Delta z, \Delta z'$ are the axial shifts around those planes and $(\vec {x}_0, \vec {y}_0)$ and $(\vec {x}\,'_0, \vec {y}\,'_0)$ are the unit vectors. If the point source and the detection plane are at the positions $z_0$ and $z'_0$, the image is sharp. By keeping the distance of the detection plane $z'_0$ constant and changing the distance of the point source to $z_0+\Delta z$, the image is defocused and, therefore, the PSF is subject to rotation. In our simulation and in the experiments the distance $z'_0$ between sensor and lens is kept constant.
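
A minimal FFT-based sketch of Eq. (5) is given below. Grid size and the example parameters are illustrative assumptions, and the sampling/scaling details of a rigorous Fourier-optics simulation (image-plane pixel pitch, diffraction efficiency) are deliberately omitted:

```python
import numpy as np

def psf_image(S, R, wavelength, z0, dz):
    """Sketch of Eq. (5): image-plane intensity from the pupil-plane field.

    S is the complex modulation of the DOE (the SPM, optionally multiplied
    by the multipoint hologram H) sampled on a square grid of half-width R.
    The defocus dz of the point source enters through the quadratic phase
    term; the 2D Fourier transform maps the pupil to the image plane.
    """
    size = S.shape[0]
    x = np.linspace(-R, R, size)
    X, Y = np.meshgrid(x, x)
    r2 = X**2 + Y**2
    G = (r2 <= R**2).astype(float)              # circular aperture function G
    k = 2 * np.pi / wavelength
    z = z0 + dz
    defocus = np.exp(1j * k * (1 / (2 * z0) - 1 / (2 * z)) * r2)
    U = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(S * G * defocus)))
    I = np.abs(U)**2
    return I / I.max()
```

Calling `psf_image` with the SPM of Eq. (1) and a nonzero `dz` qualitatively reproduces the rotated double spot; with `S` set to ones and `dz = 0` it reduces to the diffraction pattern of the bare aperture.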

In Fig. 5 the simulation results for different $N$ and defocusing shifts $\Delta z$ are shown. The simulation parameters are $D_{Aperture}$ = 28 mm, $\Delta l$ = 2, $l_1$ = 1, $z_0$ = 232 mm, $\Delta z$ = {0 mm, 2 mm, 4 mm} and $N$ = {4, 10, 16, 22, 28}. In (a) $N$ equals 4 and no defocus shift is present. The two spots are barely visible and cannot be separated with a conventional pixelated sensor. With increasing $N$ (images (b) to (e)) the distance between the two spots grows and they become elongated, forming two thin rhombuses. In (c) to (e) the distance between both spots is more than 5 µm. The two spots might be separated in the experiment and the centroids could be estimated. In (f) to (j) a defocus shift of $\Delta z$ = 2 mm is simulated for the different $N$. As expected from Eq. (4) the rotation angle becomes smaller with increasing $N$. In addition, the two spots form a tail which is growing with increasing rotation angle. For large rotation angles like in (k) and (l) the tails form a spiral vortex. Based on the simulation results two DOEs with $N$ = 16 and $N$ = 22 radial zones have been manufactured.

Fig. 5. Simulation of the rotating PSF for different number of radial zones $N$ = {4, 10, 16, 22, 28} and defocus shifts $\Delta z$ = {0 mm, 2 mm, 4 mm}.

In Fig. 6 a comparison between simulation and experiment for $N$ = 22 is shown. In (a) to (d) the simulated and in (e) to (h) the corresponding experimental images are shown. For better visualization only one spot of the multipoint cluster is illustrated. The defocus shifts are $\Delta z$ = {0 mm, 5 mm, 9 mm, 13 mm}. For small defocus shifts (e) and (f), simulation and experiment deviate. Especially in case of no defocus, the two spots from (a) are squashed to one blob. This difference can be caused by manufacturing tolerances of the DOE. For large defocus shifts the intensity is almost equally distributed over the spiral vortex formed by the two tails.

Fig. 6. Comparison of simulation and experiment for $N$ = 22 at different defocus shifts $\Delta z$ = {0 mm, 5 mm, 9 mm, 13 mm}. Images (a) to (d) show the simulation and (e) to (h) the corresponding experimental result. For better visualization only one spot of the multipoint cluster is shown. The whole process of rotation can be seen in Visualization 2.

4. Experimental setup and measurement results

The experimental setup is shown in Fig. 7. It consists of a linear stage (Walter Uhl GT6-BO01), that is used to position a point light source (fiber coupled laser, $\lambda$ = 633 nm) inside the measurement range. The imaging system consists of an entocentric lens (Edmund Optics HP Series, $f'$ = 50 mm, NA = 0.0595, distortion < 1%), which is upgraded with the DOE performing the replication of the DH-PSF. The DOE is glued to a retaining ring (Thorlabs SM37RR) and directly attached to the filter mount of the entocentric lens. The DOEs used in this work were fabricated with a laser lithography process in a clean room. The substrate is a BK-7 glass plate coated with a positive photoresist. The photoresist layer was exposed with a laser direct writing system and then developed using a wet-chemical process. Known artifacts of this fabrication process occur at sharp edges due to the high NA of the laser writing system. Those sharp edges can be slightly rounded after the chemical development process. The image sensor (Ximea MC124MG-SY) has a pixel pitch of $d_{pix}$ = 3.45 µm and a resolution of 4112 pixels $\times$ 3008 pixels. The distance between the point source and the DOE is $z_0$ = 232 mm.

Fig. 7. Experimental setup. (1) linear stage; (2) light source (fiber coupled laser); (3) DOE with SPM and multipoint hologram; (4) objective lens; (5) camera. $(x_w, y_w,z_w)$ is the world coordinate system.

In [11,13,33] the images have been evaluated by centroid calculation of both spots of the DH-PSF. From these spot positions the rotation angle can be derived. The relationship between rotation angle and point source distance is fitted using a linear [13] or polynomial curve fit [7].

In case of our measurement system, the simulation shows that, due to the small NA and $f'$, it is not possible to detect two separated spots. With growing defocus, the two spots form a tail that makes it difficult to detect the outlines of the actual spots. Furthermore, this tail affects the centroid position of the spots and, therefore, the angular measurement. One approach to quantify the degree of rotation is the four-quadrant energy distribution known from measurement systems based on astigmatism [6]. However, this approach does not lead to satisfying results, because with growing defocus and, therefore, tail length, the sensitivity decreases drastically.

A more promising method, and the one used in this publication, is cross correlation with a reference image. With this method, every pixel containing valuable information is used. We use the normalized cross correlation between a measured image $I$ and a reference image $T$, which is defined as [35]

$$R(x, y) = \frac{\sum_{x',y'}I(x + x', y + y')T(x', y')}{\sqrt{\sum_{x',y'}I(x + x', y + y')^{2}\sum_{x',y'}T(x', y')^{2}}}$$
where $x, y$ and $x', y'$ are the measured and reference image coordinates, respectively. To create a reference image stack, the light source (2) is axially positioned at $C_{ref}$ points within the measurement range using the linear stage (1). At each position an image is acquired and stored. Once this reference image stack is created, the actual measurement is performed by positioning the light source at an arbitrary point inside the measurement range, acquiring an image and calculating the cross correlation (Eq. (6)) with each image in the reference image stack. For each of the $C_{ref}$ results the maximum correlation energy is stored.
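
A simplified sketch of this lookup is shown below. For brevity it assumes the measurement crop and the reference templates have the same size, so the correlation map of Eq. (6) collapses to a single coefficient per reference image; the actual evaluation correlates over all shifts $x, y$ and stores the maximum:

```python
import numpy as np

def ncc(I, T):
    """Normalized cross correlation of Eq. (6) for a same-size template,
    i.e. a single correlation coefficient instead of a full (x, y) map."""
    return np.sum(I * T) / np.sqrt(np.sum(I**2) * np.sum(T**2))

def depth_index(I, ref_stack):
    """Index of the best-matching reference image, plus all scores."""
    scores = np.array([ncc(I, T) for T in ref_stack])
    return int(np.argmax(scores)), scores
```

The index of the maximum over the $C_{ref}$ references then encodes the depth of the light source via the known calibration positions.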

The image correlation is carried out between a measurement image $I$ containing the whole cluster of DH-PSFs and a reference image $T$ consisting of an image section containing the whole cluster (as illustrated in Fig. 8(b)). This ensures that all pixel information of all spots, including defocusing tails and central parts, is used in the correlation. The central, zeroth diffraction order is also part of the image. Only for the single-spot evaluation presented later in this section is the zeroth diffraction order not considered.

Fig. 8. (a) Correlation energy over distance change $z$ for a measurement image at stage position $z_{stage}$ = 14.871 mm. The inset shows the 30 points around the peak value, that are used to fit a parabolic function. The peak of the parabola corresponds to the measurement value, which is $z_{meas}$ = 14.874 mm. (b) An image section showing the MP-DH-PSF cluster. The cluster consists of $K$ = 24 replications plus the central, zeroth diffraction order.

In Fig. 8 an example measurement at stage position $z_{stage}$ = 14.871 mm is shown. The acquired image at this position is cross correlated with all reference images and the maximum value of the correlation energy in $R$ is plotted for each reference image. The peak of this curve marks the highest correspondence between the reference and the measurement image. In order to measure the accurate peak value, the 30 points around the highest value are used to fit a parabolic function, as shown in the enlarged plot in Fig. 8. The peak value of the parabola is $z_{meas}$ = 14.874 mm, so in this example the difference between reference and measurement is $z_{stage}\,-\,z_{meas}$ = $-$3 µm.
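
The sub-step peak refinement can be sketched as follows, assuming `z` holds the reference stage positions and `c` the corresponding maximum correlation energies:

```python
import numpy as np

def parabolic_peak(z, c, half_window=15):
    """Refine the discrete correlation maximum as in Fig. 8(a).

    z: reference positions, c: correlation energy per reference image.
    The 2 * half_window samples around the discrete maximum are fitted
    with a second-order polynomial; its vertex is the measured depth.
    """
    i = int(np.argmax(c))
    lo, hi = max(i - half_window, 0), min(i + half_window, len(z))
    a, b, _ = np.polyfit(z[lo:hi], c[lo:hi], 2)
    return -b / (2.0 * a)
```

This interpolates the depth between the $C_{ref}$ calibration positions, so the measurement resolution is not limited to the reference grid spacing.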

All measurements have been performed with two different DOEs with $N$ = 16 and $N$ = 22 radial zones, referred to in the following as N16 and N22. For calibration, the light source is positioned at $C_{ref}\,=\,2001$ equidistant points within a measurement range of 20 mm. At each position multiple reference images are acquired and averaged to reduce noise contributions such as thermal, photon and quantization noise. Inside this calibration range of 20 mm, three measurements with $C_{meas}\,=\,\{660, 520, 380\}$ equidistant measurement points are carried out. For the actual measurement only one image is acquired. The rotation of the multiple DH-PSFs depending on the axial shift $z$ is shown for both DOEs N16 and N22 in Visualization 1 and Visualization 2. The videos start by showing the rotation of a single spot, then of four spots and, at the end, of the whole cluster.

In Fig. 9(a) the measurement results for N16 and in Fig. 9(b) for N22 are shown. For each of the series of three measurements, $C_{meas}\,=\,660$ in the upper, $C_{meas}\,=\,520$ in the middle and $C_{meas}\,=\,380$ in the lower chart, the difference between reference and measurement is plotted over the distance change. It can be seen that N16 shows larger deviations than N22. In case of N16 the curves of all three measurements look similar, but the axis scaling is different. This is also reflected in the high standard deviations for N16, which are $\sigma _{660}$ = 58.45 µm, $\sigma _{520}$ = 26.94 µm and $\sigma _{380}$ = 35.66 µm. In case of N22, the reproducibility of the measurements is very good. The curves show the same course and the axis scaling is almost the same. The standard deviations for N22 are $\sigma _{660}$ = 10.20 µm, $\sigma _{520}$ = 7.86 µm and $\sigma _{380}$ = 7.49 µm. On average the standard deviation for N22 is $\overline {\sigma }_{N22}$ = 8.51 µm. Since N22 reaches much better results than N16, the subsequent per-spot accuracy analysis is performed for the measurement results of N22.

Fig. 9. Measurement results for N16 and N22 for a series of three different measurements consisting of 660 (upper), 520 (middle) and 380 (lower plot) equidistant points within the measurement range of 20 mm. (a) Error plotted over distance change for N16; (b) Error plotted over distance change for N22.

To analyze and quantify the accuracy improvement achieved by the multipoint method, each of the 24 spots, each consisting of a DH-PSF, is evaluated separately. For this purpose, both the reference images and the measurement images are separated into 24 image sections, each containing one spot. These image sections are used for the cross correlation based evaluation. An image with the enumeration of the $K$ = 24 spots is shown in Fig. 8(b). The central, zeroth diffraction order is not considered, because it is subject to interference effects and does not show the defocus-based rotation as the other 24 spots do. In Fig. 10(a) the error per spot is plotted over the distance change $z$ for the N22 measurement with 520 points. To show the error signal of all 24 spots in one chart, each signal is shifted by an offset of 100 µm times the spot number. In Fig. 10(b) to Fig. 10(f) five image sections of spot number 7 at different distance changes $z$ = {1 mm, 6.1 mm, 10 mm, 15 mm, 19 mm} are illustrated.

Fig. 10. Error evaluation of each spot for the N22 measurement with 520 measurement points. (a) Error signals for each spot over distance change $z$. To show all spot signals in one chart, each signal is shifted by an offset of 100 µm times spot number, so that for example the signal of spot number 9 is shifted by 900 µm with respect to the left axis. The assignment between signal and spot number is done using the right axis. (b) to (f) Images of spot number 7 at positions $z$ = {1 mm, 6.1 mm, 10 mm, 15 mm, 19 mm} with the peak signal-to-noise ratios (PSNR). The defocus based rotation for N22 is shown in Visualization 2.

It can be seen that around position $z$ = 6.1 mm most spots show an increasing error. The spot image at this position is shown in Fig. 10(c). At this position the image is sharp and no defocus shift $\Delta z$ is present. Therefore, all information is stored in a few bright pixels. This, together with the presence of image noise, makes it difficult to reach high precision with the method of image correlation. Depending on which spot is evaluated, the standard deviation of the axial position reconstruction of a single spot is improved by a factor of up to 3 when the averaged multipoint signal is used. The exposure time of the camera was set to a constant value of 5 µs. Therefore, the peak signal-to-noise ratio (PSNR) becomes small with large defocus (see Fig. 10(b) to Fig. 10(f)). In this manuscript the PSNR is calculated as the quotient of the highest signal value and the highest noise value within an area outside the cluster. The localization accuracy might be improved by a defocus-dependent adaptation of the exposure time. Because the used exposure time is very small, the background noise level barely changes relative to the signal level when, for example, the exposure time is doubled, leading to an improved SNR.
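
For reference, the PSNR definition used in this manuscript (highest signal value divided by the highest noise value outside the cluster) can be sketched as follows, where `cluster_mask` is an assumed boolean mask marking the cluster region:

```python
import numpy as np

def psnr(image, cluster_mask):
    """PSNR as defined in this manuscript: ratio of the highest signal
    value inside the cluster to the highest value found in the noise
    region outside it (cluster_mask marks the cluster pixels)."""
    signal_peak = image[cluster_mask].max()
    noise_peak = image[~cluster_mask].max()
    return signal_peak / noise_peak
```

Note that this peak-based quotient differs from the mean-square-error PSNR common in image processing; it directly reflects how far the brightest spot pixels stand out of the background.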

The angular precision is an important parameter to evaluate the performance of DH-PSF applications. For classical DH-PSF applications it is defined by the precision of the centroid calculation of the two rotating lobes, and its value is typically around 1$^{\circ }$. The proposed method achieves an angular precision of 0.2$^{\circ }$. It can be derived by dividing the theoretical rotation angle of 457.6$^{\circ }$ (calculated with Eq. (4)) by the number of steps that can be distinguished within the measurement range, namely the measurement range to accuracy ratio of 2350. Furthermore, with the proposed system absolute measurements can be performed within the whole measurement range, whereas classical microscopy DH-PSF applications face the challenge of ambiguity for rotation angles larger than 180$^{\circ }$.
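
The quoted angular precision follows from Eq. (4) and the measured range-to-accuracy ratio; a quick numerical check:

```python
import numpy as np

NA, wavelength, N, dl = 0.0595, 633e-9, 22, 2   # parameters of the N22 DOE
z_range = 20e-3                                  # 20 mm measurement range
ratio = 2350                                     # range-to-accuracy ratio

# Total rotation over the range from Eq. (4), then precision per step.
total_rotation = np.degrees(np.pi * NA**2 / (wavelength * N * dl) * z_range)
angular_precision = total_rotation / ratio
print(round(total_rotation, 1), round(angular_precision, 2))  # 457.6, 0.19
```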

5. Discussion

The achieved results of the presented depth measurement method are very promising. It offers the possibility to create an accurate and cost-effective single image 3D position sensor with mesoscopic measurement range. The measurement principle allows one to measure multiple points simultaneously in one image. However, there are still topics that need to be studied in more detail:

  • All presented measurements were acquired at constant linear stage coordinates in $X$ and $Y$, so that the image position did not change much while the light source was moved in the $Z$-direction. Regarding field dependency, it has to be analyzed to what extent the reference stack can be used if the light source is laterally shifted in $X$ or $Y$ and in which way this influences the measurement result.
  • Lateral shift introduces field-dependent aberrations such as coma or astigmatism, locally deforming the shape of the PSFs. Field curvature also has to be considered, as it can intrinsically cause $Z$-position offsets. However, the authors expect that the template matching approach can handle those error contributors, because not only one but multiple reference stacks on an $XY$-grid in the measurement field can be acquired. Therefore, the reference images are subject to almost the same aberrations as the measurement images. The assignment between axial position and deformed PSF is preserved and positions between the reference grid points can be interpolated.
  • A 3D calibration is necessary to use the proposed principle as an absolute sensor.
  • The multipoint method offers the means to measure the lateral image position of a light source very accurately by averaging the centers of all replicated spots per cluster [3,27,31,32]. Although we do not expect it to have a strong influence, it still has to be analyzed how accurately the lateral position measurement works for the multipoint DH-PSFs.
Just like any other single image depth measurement sensor, the proposed system has advantages and disadvantages. The two spots formed by the double-helix are deformed and can no longer be detected as single spots; with increasing defocus, each spot forms a growing tail. Compared to microscopy applications with two cleanly rotating spots, the proposed system therefore requires other methods to process the image data. We have chosen cross correlation, because the information of each pixel is used and complex shapes can be analyzed rather easily. One disadvantage of cross correlation is that it is computationally intensive. However, it is not necessary to correlate the measurement image with the whole reference stack, as was done in this contribution. More sophisticated search algorithms can find the maximum correlation energy with significantly fewer correlations. Furthermore, the correlation operation can be accelerated considerably by computing it on the graphics processing unit (GPU).
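As a minimal sketch of this reconstruction step (with synthetic Gaussian spots standing in for the real DH-PSF reference stack, and a 5-point fit window instead of the 30 points used in Fig. 8), the normalized cross correlation of Eq. (6) can be evaluated against each reference plane, here at zero lateral shift only, and the peak refined with a parabolic fit:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of Eq. (6), evaluated at zero shift
    (one sample of OpenCV's TM_CCORR_NORMED map)."""
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def depth_from_stack(meas, ref_stack, z_values, half_window=2):
    """Correlate a measurement image with every reference plane, then fit
    a parabola through the points around the peak correlation energy."""
    energy = np.array([ncc(meas, ref) for ref in ref_stack])
    k = int(np.argmax(energy))
    lo, hi = max(k - half_window, 0), min(k + half_window + 1, len(energy))
    a, b, _ = np.polyfit(z_values[lo:hi], energy[lo:hi], 2)
    return -b / (2.0 * a)   # apex of the parabola = measured z

# Toy data: Gaussian "PSFs" whose width grows with z (hypothetical stand-in
# for a recorded reference stack).
z = np.linspace(0.0, 20.0, 41)
yy, xx = np.mgrid[0:64, 0:64]
stack = np.array([np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2)
                         / (2.0 * (2.0 + 0.3 * zi) ** 2)) for zi in z])
z_meas = depth_from_stack(stack[20], stack, z)  # true z = 10.0 mm
```

The fit window size and the search strategy over the stack are free parameters; a coarse-to-fine search would avoid correlating against every reference plane.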

The precision of the proposed method depends on the amount of light that can be captured from the object points to be measured. In this contribution we applied a fiber coupled laser as an active point light source to achieve a high SNR. Another possibility would be a light emitting diode (LED) with a pinhole attached in front of it, combined with a narrow band pass filter on the camera lens. However, such light sources can be bulky or difficult to attach to the objects whose position is to be measured. One solution to this problem could be the use of fluorescent particles combined with external illumination. In this case it has to be examined carefully whether the available amount of light is sufficient to split it into multiple spots in the image plane. If the amount of captured light must be increased by considerably extending the exposure time, other error contributors grow as well and the improvement of the multipoint method can be diminished. The application of the proposed method in other fields of vortex imaging, such as biomedical research, is likewise limited mainly by the available amount of light. A possible field of application is vortex topography, as described in [36]. In contrast to biological tissue, there the illumination intensity can be increased without the risk of drying out the specimen.

The spatial replication (multipoint) method used in this work enables a very high SNR at the pixel level, given enough light. The general limitation imposed by photon noise is then related to the quantum well capacity (QWC) of the camera sensor: the more photons carrying relevant information can be collected, the more accurately the position of the PSF can be estimated. By spatially replicating the object point into $K$ spots, $K$ times more photons can be collected and the QWC is artificially increased by a factor of $K$. Together with the uncorrelated discretization error between the replications, this leads to the improved accuracy shown in the various experiments.
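The statistical effect behind the replication can be illustrated with a short Monte-Carlo sketch (purely illustrative, with an assumed unit standard deviation for a single-spot estimate): averaging $K$ independent estimates reduces the random error by roughly $\sqrt{K}$. The experimentally observed improvement factor is smaller, since systematic contributions are not reduced by averaging.

```python
import numpy as np

# Illustrative Monte-Carlo sketch (not from the paper): averaging K
# independent, identically distributed spot estimates reduces the random
# localization error by about sqrt(K).
rng = np.random.default_rng(0)
sigma_single = 1.0        # assumed std of a single-spot estimate (a.u.)
K = 21                    # replications per cluster (cf. Fig. 3)
trials = rng.normal(0.0, sigma_single, size=(100_000, K))
sigma_avg = trials.mean(axis=1).std()
print(sigma_avg)          # close to sigma_single / sqrt(21), i.e. about 0.22
```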

The advantages of the proposed method include the cost-effective single camera setup and the good accuracy that can be reached. The measurement range to accuracy ratio of 20 mm / 8.51 µm = 2350 is very high compared to other single camera depth measurement techniques (see Fig. 1). Furthermore, the three measurements in Fig. 9(b) show good reproducibility. This indicates that the residual error is of a systematic nature and that calibration would improve the accuracy even further. In addition, the proposed system can be retrofitted to existing applications by attaching point light sources to the objects to be measured. Possible applications include 3D printers, milling machines and turning machines.

6. Summary and conclusion

The proposed single camera depth measurement system is based on the detection of point emitters that can be attached to one or several objects whose position is to be measured. The 3D positions of all emitters can be measured simultaneously from a single image. The imaging system consists of a low-NA lens with a DOE mounted in front of it that replicates the light source into a predefined pattern of spots in the image plane (multipoint method). Each replicated spot consists of a DH-PSF whose rotation depends on the distance of the light source.

The image processing to reconstruct the distance of the light source is done by cross correlating the measured image with a previously acquired reference image stack. A series of three measurements with different numbers of measurement points is presented. The achieved accuracy is on average $\overline {\sigma }_{N22}$ = 8.51 µm within an axial measurement range of 20 mm (for a comparison to other methods see Fig. 1). The measurement volume of the setup is around 70 mm $\times$ 50 mm $\times$ 20 mm, where the $X$ and $Y$ values are calculated from the magnification of the imaging system and the sensor size of the camera. Obviously, the measurement range in $X$ and $Y$ can be enlarged by simply changing the working distance and therefore the magnification, but one has to keep in mind that this also affects the NA and hence the accuracy in $Z$.

The multipoint method improves the accuracy of lateral position measurement by a factor of up to 4 [3]. In this contribution we demonstrated that the axial depth reconstruction can be improved by a factor of 1.5 to 3, depending on which spot is evaluated. The presented single image depth measurement principle therefore offers the possibility to create a cost-effective 3D position measurement sensor with good lateral and axial accuracy.

Funding

Deutsche Forschungsgemeinschaft (279064222, SFB 1244).

Acknowledgments

We thank Christof Pruß for the insightful discussions and Kevin Treptow for manufacturing the DOEs used in this work.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. S. Kim, H. K. Kim, C. Lee, and S. Kim, “A vision system for identifying structural vibration in civil engineering constructions,” in 2006 SICE-ICASE International Joint Conference, (2006), pp. 5813–5818.

2. M. Riedel, “Methodik zur Modellierung photogrammetrischer Messungen zur Charakterisierung der Genauigkeit von Werkzeugmaschinen,” Ph.D. thesis (2020).

3. S. Hartlieb, M. Tscherpel, F. Guerra, T. Haist, W. Osten, M. Ringkowski, and O. Sawodny, “Highly accurate imaging based position measurement using holographic point replication,” Measurement 172, 108852 (2021). [CrossRef]  

4. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, “Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread function,” Proc. Natl. Acad. Sci. 106(9), 2995–2999 (2009). [CrossRef]  

5. L. Li, C. Kuang, D. Luo, and X. Liu, “Axial nanodisplacement measurement based on astigmatism effect of crossed cylindrical lenses,” Appl. Opt. 51(13), 2379–2387 (2012). [CrossRef]  

6. W.-Y. Hsu, Z.-R. Yu, P.-J. Chen, C.-H. Kuo, and C.-H. Hwang, “Development of the micro displacement measurement system based on astigmatic method,” in 2011 IEEE International Instrumentation and Measurement Technology Conference, (2011), pp. 1–4.

7. M. D. Lew, S. F. Lee, M. Badieirostami, and W. E. Moerner, “Corkscrew point spread function for far-field three-dimensional nanoscale localization of pointlike objects,” Opt. Lett. 36(2), 202–204 (2011). [CrossRef]  

8. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. E. Moerner, “Optimal point spread function design for 3d imaging,” Phys. Rev. Lett. 113(13), 133902 (2014). [CrossRef]  

9. Y. Shechtman, L. Weiss, A. Backer, S. Sahl, and W. Moerner, “Precise three-dimensional scan-free multiple-particle tracking over large axial ranges with tetrapod point spread functions,” Nano Lett. 15(6), 4194–4199 (2015). [CrossRef]  

10. S. Jia, J. Vaughan, and X. Zhuang, “Isotropic 3d super resolution imaging with self-bending point spread function,” Biophys. J. 104(2), 668a (2013). [CrossRef]  

11. M. Baranek and Z. Bouchal, “Rotating vortex imaging implemented by a quantized spiral phase modulation,” J. Eur. Opt. Soc. 8, 13017 (2013). [CrossRef]  

12. C. Roider, A. Jesacher, S. Bernet, and M. Ritsch-Marte, “Axial super-localisation using rotating point spread functions shaped by polarisation-dependent phase modulation,” Opt. Express 22(4), 4029–4037 (2014). [CrossRef]  

13. M. Teich, M. Mattern, J. Sturm, L. Büttner, and J. W. Czarske, “Spiral phase mask shadow-imaging for 3d-measurement of flow fields,” Opt. Express 24(24), 27371–27381 (2016). [CrossRef]  

14. P. Bouchal and Z. Bouchal, “Flexible non-diffractive vortex microscope for three-dimensional depth-enhanced super-localization of dielectric, metal and fluorescent nanoparticles,” J. Opt. 19(10), 105606 (2017). [CrossRef]  

15. Z. Wang, Y. Cai, Y. Liang, X. Zhou, S. Yan, D. Dan, P. R. Bianco, M. Lei, and B. Yao, “Single shot, three-dimensional fluorescence microscopy with a spatially rotating point spread function,” Biomed. Opt. Express 8(12), 5493–5506 (2017). [CrossRef]  

16. Basler, “Blaze-101,” https://docs.baslerweb.com/blaze-101. [Online; accessed 15-Dec-2021].

17. Odos, “Swift-E,” https://www.odos-imaging.com/swift-e/. [Online; accessed 15-Dec-2021].

18. Y. He, B. Liang, Y. Zou, J. He, and J. Yang, “Depth errors analysis and correction for time-of-flight (tof) cameras,” Sensors 17(1), 92 (2017). [CrossRef]  

19. S. Hartlieb, M. Tscherpel, F. Guerra, T. Haist, W. Osten, M. Ringkowski, and O. Sawodny, “Hochgenaue kalibrierung eines holografischen multi-punkt positionsmesssystems,” tm - Technisches Messen (2020).

20. A. Jesacher, M. Ritsch-Marte, and R. Piestun, “Three-dimensional information from two-dimensional scans: a scanning microscope with postacquisition refocusing capability,” Optica 2(3), 210–213 (2015). [CrossRef]  

21. S. Li, J. Wu, H. Li, D. Lin, B. Yu, and J. Qu, “Rapid 3d image scanning microscopy with multi-spot excitation and double-helix point spread function detection,” Opt. Express 26(18), 23585–23593 (2018). [CrossRef]  

22. Z. Wang, Y. Cai, J. Qian, T. Zhao, Y. Liang, D. Dan, M. Lei, and B. Yao, “Hybrid multifocal structured illumination microscopy with enhanced lateral resolution and axial localization capability,” Biomed. Opt. Express 11(6), 3058–3070 (2020). [CrossRef]  

23. R. Berlich, A. Bräuer, and S. Stallinga, “Single shot three-dimensional imaging using an engineered point spread function,” Opt. Express 24(6), 5946–5960 (2016). [CrossRef]  

24. R. Berlich and S. Stallinga, “High-order-helix point spread functions for monocular three-dimensional imaging with superior aberration robustness,” Opt. Express 26(4), 4873–4891 (2018). [CrossRef]  

25. S. N. Khonina, V. V. Kotlyar, V. A. Soifer, M. Honkanen, J. Lautanen, and J. Turunen, “Generation of rotating gauss-laguerre modes with binary-phase diffractive optics,” J. Mod. Opt. 46, 227–238 (1999). [CrossRef]  

26. Y. Y. Schechner, R. Piestun, and J. Shamir, “Wave propagation with rotating intensity distributions,” Phys. Rev. E 54(1), R50–R53 (1996). [CrossRef]  

27. T. Haist, S. Dong, T. Arnold, M. Gronle, and W. Osten, “Multi-image position detection,” Opt. Express 22(12), 14450–14463 (2014). [CrossRef]  

28. M. A. Seldowitz, J. P. Allebach, and D. W. Sweeney, “Synthesis of digital holograms by direct binary search,” Appl. Opt. 26(14), 2788–2798 (1987). [CrossRef]  

29. T. Haist, M. Gronle, T. Arnold, D. A. Bui, and W. Osten, “Verbesserung von Positionsbestimmungen mittels holografischer Mehrpunktgenerierung,” in Forum Bildverarbeitung 2014, F. Puente León and M. Heizmann, eds. (KIT Scientific Publishing, Karlsruhe, 2014), pp. 239–247.

30. T. Haist, M. Gronle, B. Duc Anh, and W. Osten, “Holografische mehrpunktgenerierung zur positionsanalyse,” Tech. Mess. 82(5), 273–279 (2015). [CrossRef]  

31. T. Haist, M. Gronle, B. Duc Anh, B. Jiang, C. Pruss, F. Schaal, and W. Osten, “Towards one trillion positions,” Proc. SPIE 9530 (2015).

32. S. Hartlieb, M. Ringkowski, T. Haist, O. Sawodny, and W. Osten, “Multi-positional image-based vibration measurement by holographic image replication,” Light. Adv. Manuf. 2, 1 (2021). [CrossRef]  

33. M. Baránek and Z. Bouchal, “Optimizing the rotating point spread function by SLM aided spiral phase modulation,” in 19th Polish-Slovak-Czech Optical Conference on Wave and Quantum Aspects of Contemporary Optics, vol. 9441 (SPIE, 2014), p. 161.

34. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, Englewood, CO, 2005).

35. G. Bradski, “The OpenCV Library,” Dr. Dobb’s Journal of Software Tools (2000).

36. P. Bouchal, L. Štrbková, Z. Dostál, and Z. Bouchal, “Vortex topographic microscopy for full-field reference-free imaging and testing,” Opt. Express 25(18), 21428–21443 (2017). [CrossRef]  

Supplementary Material (2)

Visualization 1: The video shows the multi-image double-helix rotation depending on axial shift, generated with a DOE with N = 16 radial zones. At first only one spot is visible, then four, and at the end of the video the whole multipoint cluster is shown.
Visualization 2: The video shows the multi-image double-helix rotation depending on axial shift, generated with a DOE with N = 22 radial zones. At first only one spot is visible, then four, and at the end of the video the whole cluster of 25 DH-PSFs is shown.




Figures (10)

Fig. 1. Overview of the achievable measurement range to accuracy ratios for the different depth reconstruction methods: Corkscrew PSF (CS-PSF) [7], double-helix PSF (DH-PSF) [12,13,15], Tetrapod PSF (TP-PSF) [8], SB-PSF [10], the proposed multipoint double-helix PSF (MP-DH-PSF), astigmatism (Astig.) [5,6] and time-of-flight (ToF) [16–18]. Note: in the references no distinction between accuracy and precision was made.
Fig. 2. Generation of a DH-PSF with a spiral phase mask (SPM) consisting of $N$ = 19 radial zones. The SPM is encoded in a DOE, whose phase function is described by Eq. (1). A point light source at distance $z_0$ $\pm$ $\Delta z$ is imaged to the image plane at constant distance $z'_0$ by a lens with the DOE mounted in front of it. The distance change of the light source is converted to a rotation of the two spots visible in the image plane, as illustrated in the series of 7 images. For the sake of visualization, the images are separated in $z'$ but belong to the $z'_0$ plane.
Fig. 3. Multipoint method. (a) Scheme of the multipoint method; (b) image section showing one cluster with $K$ = 21 spots.
Fig. 4. Combination of the DH-PSF and the multipoint method. The phase function encoded in the DOE is the sum of the SPM and the multipoint hologram. The point light source is replicated into four copies by the multipoint hologram, and each copy consists of two rotating spots formed by the SPM.
Fig. 5. Simulation of the rotating PSF for different numbers of radial zones $N$ = {4, 10, 16, 22, 28} and defocus shifts $\Delta z$ = {0 mm, 2 mm, 4 mm}.
Fig. 6. Comparison of simulation and experiment for $N$ = 22 at different defocus shifts $\Delta z$ = {0 mm, 5 mm, 9 mm, 13 mm}. Images (a) to (d) show the simulation and (e) to (h) the corresponding experimental results. For better visualization only one spot of the multipoint cluster is shown. The whole rotation process can be seen in Visualization 2.
Fig. 7. Experimental setup. (1) Linear stage; (2) light source (fiber coupled laser); (3) DOE with SPM and multipoint hologram; (4) objective lens; (5) camera. $(x_w, y_w, z_w)$ is the world coordinate system.
Fig. 8. (a) Correlation energy over distance change $z$ for a measurement image at stage position $z_{stage}$ = 14.871 mm. The inset shows the 30 points around the peak value that are used to fit a parabolic function. The peak of the parabola corresponds to the measurement value, $z_{meas}$ = 14.874 mm. (b) Image section showing the MP-DH-PSF cluster, consisting of $K$ = 24 replications plus the central, zeroth diffraction order.
Fig. 9. Measurement results for N16 and N22 for a series of three different measurements consisting of 660 (upper), 520 (middle) and 380 (lower plot) equidistant points within the measurement range of 20 mm. (a) Error plotted over distance change for N16; (b) error plotted over distance change for N22.
Fig. 10. Error evaluation of each spot for the N22 measurement with 520 measurement points. (a) Error signals for each spot over distance change $z$. To show all spot signals in one chart, each signal is shifted by an offset of 100 µm times the spot number, so that, for example, the signal of spot number 9 is shifted by 900 µm with respect to the left axis. The assignment between signal and spot number is done using the right axis. (b) to (f) Images of spot number 7 at positions $z$ = {1 mm, 6.1 mm, 10 mm, 15 mm, 19 mm} with the peak signal-to-noise ratios (PSNR). The defocus-based rotation for N22 is shown in Visualization 2.

Equations (6)


$$S(r) = \begin{cases} \displaystyle\sum_{n=1}^{N} S_n, \quad S_n = \exp(i\, l_n \phi) & \text{if } R\sqrt{\tfrac{n-1}{N}} < r < R\sqrt{\tfrac{n}{N}} \\ 0 & \text{if } r > R \end{cases} \tag{1}$$
$$H(r) = \exp(i\, \xi(r)) \tag{2}$$
$$h(x, y) = \sum_{K} \delta(x - x_K,\, y - y_K) \tag{3}$$
$$\frac{d\Theta}{d\Delta z} = \frac{\pi\, \mathrm{NA}^2}{\lambda\, N\, \Delta l} \tag{4}$$
$$U(r') \propto \int S(r)\, H(r)\, G(r)\, \exp\!\left[i k \left(\frac{1}{2 z_0} - \frac{1}{2 z}\right) |r|^2\right] \exp\!\left[-\frac{2\pi i\, r \cdot r'}{\lambda z'}\right] \mathrm{d}r \tag{5}$$
$$R(x, y) = \frac{\sum_{x', y'} I(x + x', y + y')\, T(x', y')}{\sqrt{\sum_{x', y'} I(x + x', y + y')^2 \cdot \sum_{x', y'} T(x', y')^2}} \tag{6}$$