Optica Publishing Group

Accurate detection of spherical objects in a complex background

Open Access

Abstract

The automated detection of particles in microscopy images has become a routinely used method for quantitative image analysis in biology, physics, and other research fields. While the majority of particle detection algorithms have been developed for bulk materials, the detection of particles in a heterogeneous environment, near surfaces or other objects in the studied material, is of great interest. In such cases, particle detection is hindered by a complex background caused by the diffraction of light, which reduces contrast and adds image noise. We present a new heuristic method for the reliable detection of spherical particles that suppresses false detections due to a heterogeneous background without requiring additional background measurements. Further, we discuss methods to obtain particle coordinates with improved accuracy and compare them with other methods, in particular that of Crocker and Grier.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The extraction of quantitative results from microscopy images often involves the detection and accurate localization of spherical objects. Especially in colloidal suspensions and materials containing nano- or micron-sized particles, the particle positions are the primary information contained in the image. The development of algorithms for reliable and accurate particle detection has had a tremendous impact on the study of soft materials and in particular colloidal particles [1-4] that are of interest for applications and as model systems to study the interactions between colloidal particles [5-7], the phase behaviour of 2d systems [8-14], crystallization [15-22] and melting [23], the glass transition [24], gelation [25], or ordering under shear [26].

Various methods for the detection of particles have been applied. A simple method is the binarization of the image using an appropriate cut-off intensity. This can be applied when all particles appear with a characteristic brightness and on an approximately constant background, which is often the case in 2d images; but this method is usually not used for 3d data. A more versatile method for the detection of bright features that can appear on a slowly varying background was introduced by Crocker and Grier [27]. This method has been applied by many researchers and has also been improved for reliability [28] and accuracy [29], as discussed in section 2. A significant step towards highly accurate particle coordinates and particle sizes extracted from microscopy images was made with the full reconstruction of the image using a large number of fit parameters for the particle positions, particle properties, and characteristic properties of the imaging system [30]. This is computationally much more demanding than heuristic approaches such as that of Crocker and Grier, and it is difficult to apply in the presence of unknown background objects.

In recent years, machine learning has become increasingly popular for the extraction of various kinds of information, such as particle coordinates, from images. Machine learning certainly has an advantage in detecting features in images with a complex background that may be beyond the flexibility of other methods. The disadvantage of machine learning, however, is that it is often not transparent for the user and may be tedious to train before it gives reliable results. For the detection of spherical objects in a homogeneous environment, machine learning is usually not necessary; heuristic methods using a limited number of input parameters are sufficient, give reproducible results, and are transparent for the user.

In many research fields, e.g. biology or colloid science, the detection of particles close to interfaces separating different materials or compartments is of great interest. However, the development of particle-detection algorithms has mainly focused on bulk materials such as colloidal suspensions. In section 3, we present a heuristic method for the detection of spherical particles in 2d and 3d images that is designed for surroundings where the background can appear as bright as the particles of interest and can show edges similar to those of the wanted particles. We build on the method of Crocker and Grier for finding the centroids of spherical particles and extend this fast, heuristic particle-detection approach to images with a rather complex background that is not easily removed or excluded but is part of the sample under study. Particle tracking in the presence of a complicated background has been studied in earlier work focusing on obtaining accurate coordinates for particles of various sizes observed in two-dimensional (2d) images [31]. The goal of the work presented here is not only to obtain accurate coordinates but also to reliably identify spherical particles with an expected diameter and to avoid false positives, which can be difficult to exclude after particle detection and can reduce the quality of the obtained data.

A varying background or an apparent overlap of neighboring particles can hinder their detection and reduce the accuracy of coordinates extracted from the image [32]. This problem often occurs in 3d confocal images, as the resolution of the microscope is usually reduced along its axial direction. A new method to overcome this complication without prior knowledge about the resolution and point-spread function of the microscope is presented in section 4. The performance of our new method is discussed and compared with other methods in section 5.

2. Centroid method of Crocker and Grier

The method of Crocker and Grier (CG) [27] detects features in microscopy images based on their brightness compared to the average brightness of the object and its close surroundings. This approach is quite general and can be applied to spherical particles in 2d or 3d images and also to other objects with more complicated convex shapes. A tutorial for identifying and tracking particles with the CG method has been put online by J. C. Crocker and E. R. Weeks [33].

As colloidal particles imaged with a confocal microscope typically have a radius $\sim 1\; \mu$m, the resolution of the microscope is sufficient to clearly resolve individual particles, but their images are blurred, as the particles are not much larger than the resolution limit. This blurring can be understood as a convolution of the particle with the point-spread function of the microscope, which is the image obtained from a point source. The center-to-center distance of neighboring particles should be 10 pixels or more to obtain a clear image of each particle and also of its local surroundings. While 5 pixels are in principle sufficient to capture the intensity peaks of neighboring particles and the intensity minimum between them, this is not sufficient when significant noise is present. For colloidal particles with a diameter of 1.5 $\mu$m and a microscope with a lateral resolution of 0.22 $\mu$m, a pixel size of about 0.13 $\mu$m would be ideal. This also results in some oversampling as required by the Nyquist-Shannon theorem [34], as discussed in more detail in Ref. [32].
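
As a quick back-of-the-envelope check of these numbers (a simple illustration, not part of any detection algorithm), the quoted pixel size can be related to the particle diameter and the resolution:

```python
# Values quoted in the text above.
diameter_um = 1.5      # particle diameter
resolution_um = 0.22   # lateral resolution of the microscope
pixel_um = 0.13        # suggested pixel size

# A particle then spans ~11.5 pixels, comfortably above the
# 10-pixel center-to-center guideline for neighboring particles.
pixels_per_diameter = diameter_um / pixel_um

# Each lateral resolution length is sampled by ~1.7 pixels.
pixels_per_resolution = resolution_um / pixel_um
```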

The raw image $I$, see the example in Fig. 1, is first smoothed to average out noise before the background is removed to obtain separate intensity peaks for all the features of interest. The smoothed image $I_s = I \otimes G$ is obtained by convolving the raw image with the Gaussian kernel

$$\begin{aligned} G(i,j,k) & = \frac{1}{N} \exp \left(\frac{-(i^2+j^2+k^2)}{2\sigma^2} \right) \\ \textrm{where}\;\;\; N &= \sum_{i,j,k} \exp \left(\frac{-(i^2+j^2+k^2)}{2\sigma^2} \right), \end{aligned}$$
where $i$, $j$, and $k$ refer to the pixel number in the Cartesian reference frame. An anisotropic kernel is discussed in section 1 of the Supplement 1. This convolution removes noise with a correlation length on the order of or shorter than the standard deviation $\sigma$ of the Gaussian kernel. The noise in confocal microscopy images is usually due to fluctuations in the number of photons counted per pixel and may also contain contributions from fast intensity fluctuations of the illuminating light and from electronic noise. This noise is expected to vary significantly from pixel to pixel and, therefore, $\sigma$ is usually close to the size of one pixel. We use the size of a pixel or voxel as the unit of length: $\sigma \approx 1$ pixel (p). An example of a smoothed image $I_s$ obtained from a raw image $I$ is shown in Fig. 1.
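
As an illustration, the smoothing step can be sketched in Python with `scipy.ndimage.gaussian_filter`, which convolves the image with a normalized Gaussian kernel equivalent to the one above; the Poisson-distributed array is only a stand-in for a measured raw image:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# stand-in for a raw 3d confocal image I: noisy photon counts
I = rng.poisson(20.0, size=(64, 64, 64)).astype(float)

sigma = 1.0  # standard deviation of the Gaussian kernel, in pixels
I_s = gaussian_filter(I, sigma=sigma)  # smoothed image I_s = I convolved with G
```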


Fig. 1. Left to right: raw, smoothed, background, and clipped sectional views of a measured 3d confocal-microscopy image. The $z$ direction (upper row) is along the axial direction of the microscope. Scale bars of 5 $\mu$m are shown in the upper-left panel; the scale is the same in the $x$- and $y$-directions. The expected resolution is $(220\pm 10)$ nm in the $x$- and $y$-directions and $(400\pm 10)$ nm in the $z$ direction.


To obtain an estimate of the background, the raw image is convolved with a spherical boxcar kernel

$$\begin{aligned} k_b(i,j,k) & = \begin{cases} \frac{1}{N_b} & \textrm{for}\;\; \sqrt{i^2+j^2+k^2} < R_\textrm{o} \\ 0 & \textrm{otherwise} \end{cases} \\ \textrm{with}\;\;\; N_b &= \sum_{\underset{\sqrt{i^2+j^2+k^2} < R_\textrm{o}}{i,j,k}} 1, \end{aligned}$$
where $R_{\textrm {o}}$ is somewhat larger than the radius of the largest particles of interest. An example of the convolved image $I_b = I \otimes k_b$ is shown in Fig. 1. The difference image $I_{d} = I_{s} - I_{b}$ is clipped at zero to have $I_{dc} \ge 0$ everywhere:
$$I_{dc}(\boldsymbol r) = \begin{cases} I_d(\boldsymbol r) & I_d(\boldsymbol r) > 0\\ 0 & I_d(\boldsymbol r) \le 0 \end{cases}.$$

The clipped image $I_{dc}$ contains quite sharp and isolated peaks corresponding to the wanted features, as shown in Fig. 1 on the right. Here, we have assumed that the pixel size is the same for all three Cartesian axes. The convolution kernels for anisotropic pixels are given in section 1 of the Supplement 1.
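
Continuing the sketch from above (illustrative code, with a random array standing in for the raw image), the background estimate and the clipped difference image can be computed as:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import fftconvolve

def spherical_boxcar(R_o):
    """Normalized spherical boxcar kernel k_b: constant inside a sphere of
    radius R_o, zero outside."""
    r = np.arange(-R_o, R_o + 1)
    i, j, k = np.meshgrid(r, r, r, indexing='ij')
    inside = np.sqrt(i**2 + j**2 + k**2) < R_o
    return inside / inside.sum()

rng = np.random.default_rng(1)
I = rng.poisson(20.0, size=(48, 48, 48)).astype(float)  # stand-in raw image

I_s = gaussian_filter(I, sigma=1.0)                     # smoothed image
I_b = fftconvolve(I, spherical_boxcar(7), mode='same')  # background estimate
I_dc = np.clip(I_s - I_b, 0.0, None)                    # clipped difference
```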

The pixels in the image $I_{dc}$ are then sorted according to their brightness and are considered as approximate particle centers in the order of decreasing brightness. When a new particle is found, the pixels in a spherical surrounding with the radius of the smallest expected particle are excluded from the further search for more particles, as particles are not allowed to overlap. In a final step, the centroid of each particle is calculated to obtain more accurate positions. This is done by calculating the ‘center of mass’ in terms of the intensity in $I_{dc}$ in the surrounding given by the expected particle size.
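
A minimal sketch of this search (our own simplified implementation; production codes mask the excluded spheres more efficiently) could look as follows, with `r_min` the exclusion radius of the smallest expected particle and `r_f` the radius used for the centroid:

```python
import numpy as np

def centroid(I, c, r_f):
    """Intensity-weighted 'center of mass' of I in a sphere of radius r_f
    around the integer voxel position c."""
    lo = np.maximum(c - r_f, 0)
    hi = np.minimum(c + r_f + 1, I.shape)
    ax = [np.arange(l, h) for l, h in zip(lo, hi)]
    i, j, k = np.meshgrid(*ax, indexing='ij')
    w = I[i, j, k] * ((i - c[0])**2 + (j - c[1])**2 + (k - c[2])**2 < r_f**2)
    return np.array([(i * w).sum(), (j * w).sum(), (k * w).sum()]) / w.sum()

def find_particles(I_dc, r_min, r_f, threshold):
    """Visit voxels of I_dc in order of decreasing brightness; accept a voxel
    as a particle center unless it lies within r_min of an already accepted
    center, then refine each center with an intensity-weighted centroid."""
    cand = np.argwhere(I_dc > threshold)
    order = np.argsort(I_dc[tuple(cand.T)])[::-1]
    centers = []
    for c in cand[order]:
        if all(((c - p)**2).sum() >= r_min**2 for p in centers):
            centers.append(c)
    return [centroid(I_dc, c, r_f) for c in centers]
```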

In addition to the position, a few further characteristic values are calculated for each feature from the pixel intensities in $I_{dc}$ within a spherical surrounding with radius $R_f$ given by the expected feature size. All pixels or voxels with an intensity above a threshold value are taken into account within that spherical surrounding. The brightness of the most intense voxel, the fraction of voxels above the threshold, the total brightness $B$ as well as the intensity-weighted radius of gyration $R_g$ are calculated:

$$\begin{aligned} B &= \sum_{\underset{|(i,j,k) -\boldsymbol R| < R_f}{i,j,k}} I_{i,j,k} \\ R_g &= \left[ \frac{1}{B} \sum_{\underset{|(i,j,k) -\boldsymbol R| < R_f}{i,j,k}} I_{i,j,k}\, \left[(i,j,k) - \boldsymbol R \right]^2 \right]^{1/2}, \end{aligned}$$
where $(i,j,k)$ is a voxel position and $\boldsymbol R$ is the position of a feature found in the image. Both values are given in pixel or voxel units. $R_f$ is the expected feature radius.
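
These characteristic values can be sketched as follows (our own illustrative helper; the intensity threshold mentioned above is omitted for brevity). For a uniformly bright ball filling the sphere of radius $R_f$, the radius of gyration is close to $\sqrt{3/5}\,R_f$:

```python
import numpy as np

def feature_moments(I, R, r_f):
    """Total brightness B and intensity-weighted radius of gyration R_g of
    the voxels within a sphere of radius r_f around the feature position R."""
    lo = np.maximum(np.floor(R - r_f).astype(int), 0)
    hi = np.minimum(np.ceil(R + r_f).astype(int) + 1, I.shape)
    ax = [np.arange(l, h) for l, h in zip(lo, hi)]
    i, j, k = np.meshgrid(*ax, indexing='ij')
    d2 = (i - R[0])**2 + (j - R[1])**2 + (k - R[2])**2
    m = d2 < r_f**2
    B = I[i, j, k][m].sum()
    R_g = np.sqrt((I[i, j, k][m] * d2[m]).sum() / B)
    return B, R_g
```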

The CG method is quite general in the sense that it detects bright and compact features irrespective of their shape; it does not explicitly require spherical objects. This implies that the method easily detects unwanted bright features that need to be identified and excluded after all features have been detected. Such unwanted features may appear due to image noise in the space between wanted features or due to the presence of voids and regions that do not contain any of the wanted features. The exclusion of unwanted features can be cumbersome. Additional information such as the total intensity, the peak height of the feature intensity, or the radius of gyration of the feature can be used to detect unwanted features, but this is often not straightforward. Especially in 3d data, the integrated intensity and the peak height can vary significantly with the depth along the axial direction of the 3d image and also with the noise present in the 3d image. A simple intensity cut-off for the exclusion of unwanted features is therefore not applicable. Also, the radius of gyration often does not help to identify unwanted features, as it is correlated with the integrated intensity $B$.

A problem that often occurs is pixel biasing, i.e. the identified feature positions are more likely to lie close to an integer pixel position than in the space between integer pixel positions (see section 2 of the Supplement 1 for details) [33]. This bias is caused by the simple centroid calculation used in the CG method to determine the final feature position. A significant improvement was introduced by the Kilfoil group [29] with repeated centroid fitting, in which the image is shifted by a fraction of a pixel to center it at the estimated position of a feature. About 20 iterations of this ‘fracshift’ fitting procedure have been found to remove all pixel biasing and to result in final positions that are at least five times more accurate than those obtained with the original CG method. Further, the repeated centroid fitting can cause unwanted features to converge to the position of a real feature, which makes it possible to remove part of the false positives of the CG method.
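
The idea can be sketched as follows (a simplified single-feature version under our own naming; real implementations shift only a local crop of the image and handle the borders):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def local_centroid(I, c, r):
    """Intensity-weighted centroid in a cube of half-width r around the
    integer voxel c (assumed to be far from the image border)."""
    ax = [np.arange(ci - r, ci + r + 1) for ci in c]
    i, j, k = np.meshgrid(*ax, indexing='ij')
    w = I[i, j, k]
    return np.array([(i * w).sum(), (j * w).sum(), (k * w).sum()]) / w.sum()

def fracshift(I, pos, r, n_iter=20):
    """Iterated-centroid refinement: interpolate the image so that the
    current position estimate falls on an integer voxel, recompute the
    centroid there, and repeat; this suppresses pixel biasing."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(n_iter):
        c = np.round(pos).astype(int)
        frac = pos - c                     # sub-pixel part of the estimate
        I_c = nd_shift(I, -frac, order=1)  # feature now centered near c
        pos = local_centroid(I_c, c, r) + frac
    return pos
```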

Other groups have followed similar strategies to improve the accuracy of the feature positions without changing the basic treatment of the original image with a Gaussian and a boxcar filter. Lu et al. [35] have published code for feature finding optimized for speed and with a small improvement of the position accuracy. The improved accuracy is mainly due to a parabolic intensity fit to determine the $z$ coordinate along the optical axis of the microscope with higher precision. Jenkins et al. have used the expected image of a spherical particle, the sphere-spread function, to obtain more reliable coordinates by fitting the local image intensity with the sphere-spread function [32]. The accuracy of the coordinates was measured with the pair-correlation function, $g(r)$, of dense suspensions of hard spheres. The precision of the local least-squares fit performed for each particle can be used to further characterize the quality of the determined coordinates. While the CG method was intended for monodisperse particles and modest polydispersities, it was found by Kurita et al. that precise measurements of the polydispersity can be obtained without a modification of the procedure for feature-finding [36].

We use a calculated 3d image containing 4769 spherical particles, densely packed at volume fraction 0.43, at known positions to measure the error of the particle positions determined with the CG method and also the effect of the fracshift correction. The spherical particles are located at positions obtained from a Monte Carlo simulation of hard spheres, have a size polydispersity of 4%, and a 10% variability in brightness. The average particle diameter corresponds to 10 voxels, and the voxel size is the same in the $x$-, $y$-, and $z$ directions of the $200\times 200\times 200$ voxel 3d image. The image is convolved with an anisotropic 3d Gaussian with standard deviations $\sigma _x = \sigma _y = 1$ and $\sigma _z =1.8$ pixels to approximate the resolution obtained with a confocal microscope. Noise is added to the image to obtain a brightness distribution that is close to that of a measured microscopy image, see Fig. 2. The noisy image is obtained by multiplying the original image with a normally distributed factor, $f_I$, varying randomly from voxel to voxel. The average factor is $\langle f_I \rangle =1$, and the standard deviation of the factor is $\sigma _{f_I} = 0.12$. Subsequently, random noise with a Poisson distribution and a mean intensity of 15 is added to the image. The maximum brightness in the raw image is 255 to cover the full dynamic range of an 8-bit image. The signal-to-noise ratio of the resulting image is 4.8 (see section 5 of the Supplement 1).
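
The noise model described here can be sketched as follows; the single-blob image is only a placeholder for the calculated particle image, and clipping to the 8-bit range is our assumption:

```python
import numpy as np

rng = np.random.default_rng(2)

# placeholder noiseless image: a single bright blob, peak value ~230
z, y, x = np.mgrid[0:32, 0:32, 0:32]
I0 = 230.0 * np.exp(-((z - 16)**2 + (y - 16)**2 + (x - 16)**2) / (2 * 2.0**2))

f_I = rng.normal(1.0, 0.12, size=I0.shape)  # multiplicative factor, <f_I> = 1
bg = rng.poisson(15.0, size=I0.shape)       # additive Poisson noise, mean 15
I_noisy = np.clip(I0 * f_I + bg, 0, 255).astype(np.uint8)
```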


Fig. 2. (a) Calculated image (top) and the resulting noisy image (bottom) of a suspension with volume fraction 0.43. (b) Distribution of position errors $\Delta r = \sqrt {\Delta x^2+ \Delta y^2+ \Delta z^2}$ as obtained from the CG method without further correction ($*$), including the fracshift correction ($\color{red}{\bigtriangleup}$), including the enhancement of brightness maxima ($\diamond$), and including both the fracshift correction and the enhancement of brightness maxima ($\color{red}{\bullet}$).


Using the CG method, we obtain feature coordinates that we assign to the closest sphere. Any remaining features are counted as false positives. Eleven spheres could not be assigned to a feature position obtained with the CG method, which implies an error $\Delta r > 5$ p for these spheres. We use the mean and the median error of the positions to measure the quality of the coordinates extracted from the image. As shown in Table 1, the CG method gives particle positions with an error of about 0.5 p. The fracshift correction improves the $x$-, $y$-, and $z$ coordinates by about the same amount, as the pixel size is the same in all three directions in this example. In Fig. 2(b), the distribution of $\Delta r$ is shown for the CG method ($*$) and the CG method with fracshift ($\color{red}{\bigtriangleup}$). The largest errors occur for particles located so close to each other that their images overlap. The space between such particles appears rather bright, and the brightness centroids of the particles are shifted towards each other. As the CG method is fully based on the local brightness, particles with overlapping images are most problematic. This problem can be addressed by including information about the shape of the particle instead of exclusively searching for the brightness centroid.


Table 1. Mean ($\langle \cdots \rangle$) and median ($\tilde {\cdots }$) values of the $x$-, $y$-, and $z$-coordinate errors and of the distance $\Delta r$ from the real position for the densely packed particles of the example shown in Fig. 2. The fracshift and Hessian-matrix corrections are indicated by ‘F’ and ‘H’, respectively. All values are given in pixels and are obtained from the image $I_{dc}$ for the CG method and from the image $I_2$ for the B&$\nabla$ method. Run times are given in the last column.

3. Detection of spherical particles

Information about the specific shape of the objects of interest is not used in the CG method but can be included to specifically detect wanted features and to exclude unwanted ones. This is of particular importance when many false positives are present, i.e. when many detected features do not represent a particle. The separation of wanted and unwanted features is usually not straightforward with the CG method. This is the case for the example shown in Fig. 3(a), where unwanted features are detected inside a bright region that is impenetrable for the colloidal suspension. In such a situation, the CG method finds many unwanted features inside the bright but impenetrable region, where the bright pixels mimic a high peak brightness and nearby darker pixels ensure a background image $I_b$ with reduced brightness.


Fig. 3. (a) Section of a measured 3d confocal-microscopy image with colloidal particles next to an impenetrable region. The scale bar corresponds to 10 $\mu$m. (b) 2d sketch of the convolution kernel $\boldsymbol k_c$ containing unit vectors pointing towards the center of the kernel inside a spherical region and zero vectors outside the spherical region. (c) Sections through the kernel $\boldsymbol k_c$: $(x,y)$ plane (upper left) and $(x,z)$ plane (upper right) of $k_{c,z}$, $(x,y)$ plane of $k_{c,x}$ (lower left), and the $(x,y)$ plane of $k_{c,y}$ (lower right). (d) Image $I_{dc}$ as obtained with the CG method. (e) Image $I_{gc}$, see Eq. (4). (f) Combined image $I_2= I_{dc}\cdot I_{gc}$.


3.1 Detection using brightness and brightness gradient

Here, we show how the brightness gradient of the image can be used to detect spherical particles more reliably and to exclude unwanted features detected by the CG method. A different approach to this problem is given in Ref. [37]. The smoothed image $I_s$, the background $I_b$, and the difference image $I_d$ are calculated as explained in the previous section. In addition, we use the gradient image

$$\nabla I_{s} = \left( \partial_{x}, \partial_{y}, \partial_{z} \right)^T I_{s}, $$
which, for large images, is best calculated using the Fourier transform of the image, see section 3 of the Supplement 1. The gradient is expected to have a significant magnitude at the periphery of a spherical particle, where it is directed towards the center of the particle. Therefore, we convolve $\nabla I_s$ with a kernel $\boldsymbol k_c$ to detect gradients directed towards the center of a spherical region with the expected particle radius, see the sketch shown in Fig. 3(b). The kernel is calculated as
$$\boldsymbol k_c = \begin{cases} -\hat{\boldsymbol r} & \textrm{for}\; |\boldsymbol r| \le R \\ 0 & \textrm{for}\; |\boldsymbol r| > R \end{cases}$$
with $\hat {\boldsymbol r} = \boldsymbol r / |\boldsymbol r|$ and $R$ the expected particle radius. Figure 3(c) shows sections through the center of the kernel $\boldsymbol k_c$. The resulting image
$$I_{g} = \sum_{i} \partial_{i} I_{s} \otimes \boldsymbol k_{c\, i}$$
is displayed in Fig. 3(e); it shows bright dots at the centers of spherical particles resulting from the brightness gradient directed towards the center. These bright spots represent features with a shape that is close to spherical, and the brightness of the spots is proportional to the brightness gradient in the smoothed image $I_s$. Features with a pronounced increase in brightness from the outside towards the center therefore give intense spots in $I_g$. In analogy to the CG method, this image is clipped to have $I_{gc} \ge 0$, which results in well-separated peaks for spherical features with a radius close to $R$, the radius used to calculate the kernel $\boldsymbol k_c$. An efficient calculation of the image $I_g$ is given in section 3 of the Supplement 1.

For spherical particles, the wanted features are expected to be both bright and spherical. While the image $I_{gc}$ highlights objects that are close to spherical, it is not suited to separate bright from dark features. The CG method, in contrast, is designed to find bright features; therefore, both methods can be combined to find features that are both bright and close to spherical (B&$\nabla$ method). To use both criteria for the detection of the wanted features, we multiply the images: $I_2= I_{dc} \cdot I_{gc}$; an example is shown in Fig. 3(d-f). The features are then detected based on the intensity in $I_2$ using the same procedure as for the CG method. The fracshift correction can be applied to improve the accuracy of their coordinates.
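
A sketch of this construction (our own naming; the paper computes these convolutions in Fourier space, see its Supplement). Because $\boldsymbol k_c$ is odd under spatial inversion, we build the inversion-flipped kernel directly, so that `fftconvolve` effectively correlates the gradient field with the centripetal kernel and a gradient pointing towards a sphere center produces a positive peak:

```python
import numpy as np
from scipy.signal import fftconvolve

def centripetal_kernel(R):
    """Vector kernel of unit vectors pointing at the kernel center inside a
    sphere of radius R, stored spatially inverted (i.e. +r_hat)."""
    r = np.arange(-R, R + 1)
    i, j, k = np.meshgrid(r, r, r, indexing='ij')
    d = np.sqrt(i**2 + j**2 + k**2)
    inside = (d <= R) & (d > 0)
    kc = np.zeros((3,) + d.shape)
    for a, comp in enumerate((i, j, k)):
        kc[a][inside] = comp[inside] / d[inside]
    return kc

def gradient_score(I_s, R):
    """Clipped gradient image I_gc: large where the brightness gradient of
    I_s points towards the center of a sphere of radius ~R."""
    kc = centripetal_kernel(R)
    I_g = sum(fftconvolve(g, k, mode='same')
              for g, k in zip(np.gradient(I_s), kc))
    return np.clip(I_g, 0.0, None)
```

The combined image of the B&$\nabla$ method is then simply `I2 = I_dc * gradient_score(I_s, R)`.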

The combination of the information contained in the image brightness and its gradient yields more accurate particle positions. We again use the calculated image shown in Fig. 2(a) to test the accuracy of the particle positions obtained with the combined image $I_2$. The resulting position errors are listed in Table 1 in the rows for the B&$\nabla$ method. On average, the positions are significantly more accurate than the ones obtained with the CG method alone. Without fracshift correction, $\langle \Delta r \rangle$ is reduced from ($0.51\pm 0.01$) p to ($0.34\pm 0.01$) p. This is the consequence of including the brightness-gradient information in the detection algorithm. The distribution of the position errors $\Delta r$ is shown in Fig. 4. Remarkably, the B&$\nabla$ method finds all particles; each sphere can be identified with a feature found in $I_2$.

The use of brightness and gradient information also enables a clear separation of wanted and unwanted features using the total brightness, radius of gyration, peak height, and fraction of pixels above threshold calculated for all features. We demonstrate this with feature positions obtained from the measurement shown in Fig. 3, for which we have separated good and bad features by direct inspection. As shown in Fig. 5(a), the CG method does not allow a clear separation of wanted and unwanted features. The best separation is obtained with the radius of gyration and the peak height of the features, but the separation is not complete. However, the same characteristic values obtained with the combined image $I_2$ do allow the point clouds of wanted and unwanted features to be separated, as shown in Fig. 5(b). In this example, the point clouds can be separated with a cut-off value of the peak height. When both the peak height and the total brightness of the features are considered, an even clearer separation can be obtained. This enhanced separation of the point clouds is expected, because the intensity in $I_2$ depends on both the brightness and the spherical shape of the features. Thus, any non-spherical bright features or dark and approximately spherical brightness fluctuations in the raw image have a low intensity in the combined image $I_2$.


Fig. 4. Distribution of position errors $\Delta r = \sqrt {\Delta x^2+ \Delta y^2+ \Delta z^2}$ as obtained using the B&$\nabla$ method without further correction ($*$), including the fracshift correction ($\color{red}{\bigtriangleup}$), including the enhancement of brightness maxima ($\diamond$), and including both the fracshift correction and the enhancement of brightness maxima ($\color{red}{\bullet}$).



Fig. 5. Point-clouds representing features found in a 3d microscopy image of the sample shown in Fig. 3(a). Both, features corresponding to a particle (${\bullet}$) and unwanted features ($\color{red}{\blacktriangle}$) found in a bright portion of the image or in a void between particles are shown. (a) CG method with features determined with image $I_{dc}$. (b) Features obtained with the combined image $I_2$.


3.2 Gradient encoded with spherical harmonics

As presented in the previous section, the gradient information in $I_s$ can be used to improve the reliability and accuracy of the determined particle positions. But a further improvement is possible with a more selective detection of antiparallel gradients on opposite sides of the spherical objects that are to be detected. This enhanced selectivity can be obtained with a representation of the gradient direction based on the spherical harmonic functions $Y_{2m}(\theta, \phi )$, where $\theta$ and $\phi$ are defined by the gradient vector $\nabla I_s = |\nabla I_s(\boldsymbol r)| [\sin \theta (\boldsymbol r) \cos \phi (\boldsymbol r), \sin \theta (\boldsymbol r) \sin \phi (\boldsymbol r), \cos \theta (\boldsymbol r)]$. When the direction of $\nabla I_s$ is continuously changed by the angle $\pi$, the phase of $Y_{2m}(\theta, \phi )$ makes a full turn of $2\pi$, and this faster change results in an enhanced selectivity for the gradient direction. Further, the symmetry $Y_{2m}(\theta, \phi ) = Y_{2m}(\pi -\theta, 2\pi -\phi )$ is used to ensure that contributions from opposite sides of a spherical particle add up constructively. The gradient information in the image $I_s$ is then encoded as

$$I_{2m}(\boldsymbol r) = |\nabla I_{s}(\boldsymbol r)|\, Y_{2m}[\hat{\nabla I_{s}}(\boldsymbol r)], $$
where $m = -2, -1, 0, 1, 2$. Only the images for $m = 0, 1$, and 2 need to be calculated due to the symmetry $Y_{2-m}(\theta, \phi ) = (-1)^m\, Y_{2m}^*(\theta, \phi )$, where $^*$ indicates complex conjugation. The kernel used to detect gradients pointing towards the center of a spherical particle is then given by
$$ k_{2m}(\boldsymbol r) = \begin{cases} Y_{2m}(\hat{\boldsymbol r}) & \textrm{for}\; |\boldsymbol r| \le R \\ 0 & \textrm{for}\; |\boldsymbol r| > R \end{cases}$$
and the filtered gradient image is obtained as
$$I_Y = \sum_{m={-}2}^2 ({-}1)^m k_{2-m} \otimes I_{2m}. $$

In the convolution, the kernel is chosen to have the opposite phase of the image $I_{2m}$ carrying the gradient information. This ensures that the gradient of a spherical object gives a real contribution to $I_Y$, while for other arrangements the complex phase does not cancel and contributes a significant imaginary part to $I_Y$. Considering only the real part $\Re I_Y$ therefore excludes a large part of the configurations that do not fit the kernel. In addition, the image is clipped as for the CG and B&$\nabla$ methods:

$$I_{Yc} = \begin{cases} \Re I_{Y} & \textrm{for}\; \Re I_{Y} > 0 \\ 0 & \textrm{otherwise.} \end{cases}$$

Finally, the particles are again detected in a combined image $I_{Y2} = I_{dc}\cdot I_{Yc}$ to include both the brightness and gradient information. As for the B&$\nabla$ method, the images are most efficiently calculated in Fourier space. This is explained in section 3 of the Supplement 1.
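
A compact sketch of this scheme (our own implementation; we write the $Y_{2m}$ as explicit polynomials in the Cartesian components of the unit vector to avoid angle-convention pitfalls, and use plain FFT convolutions rather than the optimized formulation of the Supplement). Since the $Y_{2m}$ have even parity, no kernel inversion is needed here:

```python
import numpy as np
from scipy.signal import fftconvolve

def Y2(m, nx, ny, nz):
    """Spherical harmonics Y_2m evaluated at unit vectors (nx, ny, nz)."""
    if m == 0:
        return np.sqrt(5 / (16 * np.pi)) * (3 * nz**2 - 1)
    if m == 1:
        return -np.sqrt(15 / (8 * np.pi)) * nz * (nx + 1j * ny)
    if m == 2:
        return np.sqrt(15 / (32 * np.pi)) * (nx + 1j * ny)**2
    return (-1)**m * np.conj(Y2(-m, nx, ny, nz))  # Y_{2,-m} = (-1)^m Y_2m^*

def by_score(I_s, R):
    """Clipped image I_Yc: gradient directions of I_s encoded with Y_2m and
    matched against a spherical Y_2m kernel of radius R."""
    grads = np.gradient(I_s)
    mag = np.sqrt(sum(g**2 for g in grads))
    n = [np.divide(g, mag, out=np.zeros_like(g), where=mag > 0) for g in grads]
    r = np.arange(-R, R + 1)
    i, j, k = np.meshgrid(r, r, r, indexing='ij')
    d = np.sqrt(i**2 + j**2 + k**2)
    ins = (d <= R) & (d > 0)
    I_Y = 0.0
    for m in (-2, -1, 0, 1, 2):
        I_2m = mag * Y2(m, n[0], n[1], n[2])  # encoded gradient image
        k_2m = np.zeros(d.shape, complex)     # kernel carrying Y_{2,-m}
        k_2m[ins] = Y2(-m, i[ins] / d[ins], j[ins] / d[ins], k[ins] / d[ins])
        I_Y = I_Y + (-1)**m * fftconvolve(I_2m, k_2m, mode='same')
    return np.clip(I_Y.real, 0.0, None)
```

At the center of a spherical feature the five terms add coherently (by the spherical-harmonic addition theorem), giving the sharp positive peaks described above.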

As can be seen in Fig. 6(b) and (c), the particles are represented by significantly sharper peaks in $I_{Y2}$ compared to $I_2$. The accuracy of the obtained particle coordinates is further enhanced, as shown in Table 1 in the row labeled ‘B&Y’ and by the histogram of deviations from the true particle positions shown in Fig. 6(d). Interestingly, the row with label ‘B&Y + F’ shows that the accuracy of the coordinates is not significantly improved by the fracshift correction. This indicates that the $I_{Y2}$ image used by the B&Y method represents the particle positions as accurately as the image $I_s$, which is used as input for the fracshift correction.


Fig. 6. (a) Calculated image of a suspension with volume fraction 0.45 containing a sinusoidal wall. (b) shows the image $I_2$ of the B&$\nabla$ method, and (c) is the image $I_{Y2}$ of the B&Y method. (d) Distribution of position errors $\Delta r = \sqrt {\Delta x^2+ \Delta y^2+ \Delta z^2}$ as obtained using the B&$\nabla$ method with enhancement of brightness maxima and fracshift ($*$), using the B&Y method without ($\color{red}{\diamond}$) and with ($\color{red}{\bullet}$) fracshift correction, and using the B&Y method with enhancement of brightness maxima without (blue$\bigtriangleup$) and with (blue$\Box$) fracshift correction.


The B&Y method is similar to a particle detection method for holographic microscopy, where a large number of circular interference fringes is analyzed using the intensity gradient [41]. The orientation of the gradient $\phi$ is encoded with the phase factor $e^{2i\phi }$, which gives a better sensitivity for gradients pointing in opposite directions.

To compare the exclusion of false features obtained with the B&$\nabla$ and B&Y methods, we use a calculated image containing a sinusoidal ‘wall’ that the particles cannot enter, see Fig. 6(a). The curvature of the ‘wall’ is close to that of the particle surface, and the diameter of the wall is close to the largest particle diameters in the image. The polydispersity is 0.07, and the noise added to the calculated image is the same as for the calculated example presented in the previous section. In the treated image of the B&$\nabla$ method, $I_2$, the sinusoidal wall is still visible (Fig. 6(b)), while it is suppressed to a large extent in the image $I_{Y2}$ of the B&Y method (Fig. 6(c)). The false features that are detected within the wall are, however, easily removed, as they are clearly separated from the main cloud of particles. In a plot of the peak height versus the radius of gyration, $R_g$, of the detected features, the bad features appear as a cloud with higher $R_g$, shown as the red point cloud in Fig. 7. Some very dark bad features with peak height $<1$ are also easily excluded. The B&$\nabla$ method misses 6 particles in this image, while the B&Y method finds all particles, and no false particles lie within the main cloud of good features.


Fig. 7. Point-clouds representing features found in the 3d calculated image shown in Fig. 6(a). Both features corresponding to a particle (${\bullet}$) and unwanted features ($\color{red}{\blacktriangle}$) found in the sinusoidal wall close to the edge of the image are shown. (a) B&$\nabla$ method with features determined with image $I_{2}$. (b) B&Y method with features obtained with the combined image $I_{Y2}$.


4. Enhanced brightness maxima for accurate particle detection

Images of dense colloidal suspensions obtained with confocal microscopy often show quite bright ‘bridges’ between particles, as can be seen in the image $I_{dc}$ but also in the images $I$ and $I_s$ in the upper row of Fig. 1. These are most pronounced along the direction with the lowest resolution, which usually is the direction along the optical axis of the microscope, the $z$ direction in Fig. 1. The point-spread function of an optical microscope is generally quite narrow in the lateral direction, the plane perpendicular to the optical axis, and significantly wider along the optical axis. The pinhole used for confocal scanning improves the resolution along the optical axis [38], but an equal resolution in all three directions is not reached. The resulting ‘bridges’ between nearest-neighbor features hinder the clear separation of objects in terms of brightness. In the centroid calculation of the CG method, the ‘bridges’ cause a small shift of the feature positions towards the ‘bridge’, as also discussed in [32].

A reduction of the ‘bridges’ can in principle be obtained with a deconvolution, if the point-spread function is known. However, the point-spread function is often not known with good precision and the noise in the image complicates the deconvolution [32]. If the resolution is sufficient to separate the features of interest such that an intensity maximum is present for each feature in the smoothed image $I_s$, the ‘bridges’ can be suppressed by reducing the intensity of all pixels that are not close to a maximum. These pixels can be found using the Hessian matrix given by the second derivatives of $I_s$,

$$H_{ij}(\boldsymbol r) = \left[ \frac{\partial^2 I_s(\boldsymbol r)}{\partial r_i \partial r_j} \right], $$
where $i$ and $j$ refer to the cartesian components of the position $\boldsymbol r$ in the image. The calculation of $H(\boldsymbol r)$ is done in Fourier space, as explained in section 4 of the Supplement 1. The Hessian matrix contains the information about the curvature of the intensity. It is symmetric and real, which implies that it can be diagonalized at every point $\boldsymbol r$ of the image $I_s$. The principal curvatures at $\boldsymbol r$ are given by the eigenvalues of $H(\boldsymbol r)$, which are the second derivatives along the principal axes: $\lambda _i = \frac {\partial ^2 {I'}_s}{\partial {r'_i}^2}$, where the prime ($'$) refers to the representation with diagonal matrix. Close to a brightness maximum (a particle in the image), the intensity drops in all directions and, therefore, the curvature is negative along all three principal axes; i.e., the three eigenvalues $\lambda _i$ of $H(\boldsymbol r)$ are negative. Close to an intensity minimum, all principal curvatures are positive ($\lambda _i >0$), while only one or two curvatures are negative close to a saddle point of the intensity. Thus the eigenvalues of $H(\boldsymbol r)$ allow us to identify locations (voxels) that are close to a brightness maximum or close to a minimum or a saddle point. To suppress the ‘bridges’ in the image, we reduce the intensity close to minima and saddle points using the curvatures (eigenvalues) calculated for each voxel:
$$I_s(\boldsymbol r) \rightarrow \begin{cases} I_s(\boldsymbol r) & \lambda_i(\boldsymbol r) < 0\; \forall\, i \\ I_s(\boldsymbol r) \left(1 - \frac{|\textrm{det}\, H(\boldsymbol r)|}{\max |\textrm{det}\, H|} \right) & \textrm{otherwise} \end{cases},$$
where $\max |\textrm {det}\, H|$ is the maximum absolute value of the determinant of $H$ found in the whole image. As shown in Fig. 8, the ‘bridges’ between neighboring particles are suppressed and the intensity peaks of the features of interest are more clearly separated. Note that $H(\boldsymbol r)$ is calculated with the smoothed image $I_s$ to avoid any rapid changes in $H(\boldsymbol r)$ caused by pixel noise. This method can be applied without prior knowledge of the point-spread function of the microscope.
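The voxel classification by Hessian eigenvalues and the intensity reduction can be sketched in a few lines of numpy. This is a hedged sketch, not the authors' IDL implementation: the paper computes $H$ in Fourier space, whereas here repeated finite differences (`np.gradient`) stand in for the second derivatives.

```python
import numpy as np

def suppress_bridges(I_s):
    """Reduce intensity at voxels that are not close to a brightness maximum,
    using the eigenvalues of the Hessian of the smoothed image I_s (3d)."""
    grads = np.gradient(I_s)                    # first derivatives
    H = np.empty(I_s.shape + (3, 3))
    for i, gi in enumerate(grads):
        for j, gij in enumerate(np.gradient(gi)):
            H[..., i, j] = gij                  # second derivatives
    H = 0.5 * (H + np.swapaxes(H, -1, -2))      # enforce symmetry numerically
    lam = np.linalg.eigvalsh(H)                 # principal curvatures per voxel
    det_h = np.abs(np.linalg.det(H))
    factor = 1.0 - det_h / det_h.max()          # in [0, 1]
    near_max = np.all(lam < 0, axis=-1)         # all curvatures negative
    return np.where(near_max, I_s, I_s * factor)

# demo: two Gaussian 'particles' with a saddle point ('bridge') between them
x = np.arange(16.0)
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
blob = lambda c: np.exp(-((X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2) / 8.0)
I_s = blob((5, 8, 8)) + blob((11, 8, 8))
I_out = suppress_bridges(I_s)                   # peaks kept, saddle dimmed
```

The intensity at the two maxima is left untouched, while the voxels along the ‘bridge’ are scaled down according to the local value of $|\det H|$.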


Fig. 8. Comparison of images without (a-d) and with (e-h) suppression of minima and saddle points. The smoothed image $I_s$ (a, c, e, g) and the resulting, clipped difference image $I_{dc}$ (b, d, f, h) are shown. All images are obtained from a measured 3d confocal-microscopy image. Scale bars of 5 $\mu$m are shown in panel g. The scale is the same for $x$- and $y$-direction.


To test the effect of this suppression of brightness minima and saddle points, we use it with the CG-, the B&$\nabla$-, and the B&Y method and determine the feature positions in the calculated image shown in Fig. 2. The resulting position errors are listed in Table 1 in the rows containing the label ‘H’. The suppression of ‘bridges’ improves the accuracy of the coordinates for the CG- and the B&$\nabla$ method. As expected, the largest improvement is obtained for the $z$ coordinates, the axial direction with the lowest resolution in the image. Overall, the improvement of the coordinates is about as important as the fracshift correction. For both the CG- and the B&$\nabla$ method, the most accurate particle coordinates are obtained when the suppression of ‘bridges’ is combined with the fracshift correction. The B&Y method combined with the suppression of ‘bridges’ gives less accurate coordinates than the B&Y method alone. The change in the gradient due to the suppression of pixels close to minima and saddle points apparently has a negative effect. As ‘bridges’ between particles are strongly suppressed with the B&Y method compared to the B&$\nabla$ method, it is not surprising that the manipulation of minima and saddle points does not give a better result for B&Y.

5. Discussion

The primary goal of the feature detection presented in section 3 is to find all spherical particles while minimizing the detection of false features, which is especially important in crowded suspensions such as the examples shown in the previous sections. We find that the combination of brightness and gradient information does indeed result in a clear improvement over the CG method. In the example with known particle positions discussed in sections 2 and 3 and shown in Fig. 2, our B&$\nabla$ and B&Y methods find all particles. This holds both with and without the suppression of ‘bridges’ presented in section 4 and is a consequence of the clearer separation of the particles in the combined images $I_2$ and $I_{Y2}$, which significantly improves the reliability of the feature detection. The CG method misses about 0.23% of the spheres (11 out of 4769); this improves to zero misses when the CG method is combined with the suppression of ‘bridges’. Therefore, our B&$\nabla$ method, alone or in combination with the suppression of ‘bridges’, significantly improves the reliability of detecting all particles. The best reliability is, however, obtained when the orientation of the intensity gradient is represented in terms of the $Y_{2m}$ spherical harmonics with the B&Y method.

As apparent from Fig. 3(d-f), the combination of brightness and gradient information in the images $I_2$ and $I_{Y2}$ allows many unwanted features obtained with the CG method to be excluded. The image $I_{dc}$ (panel d), used as input for the CG method, is noisy in the area that is not accessible to the particles, and this noise results in the detection of a large number of false features. The combined image $I_2$ (panel f), on the other hand, is virtually black in the inaccessible area: the noise is suppressed by multiplying $I_{dc}$ (panel d) with $I_{gc}$ (panel e), and the number of false features is reduced. This effect is even more pronounced in $I_{Y2}$, where a background not fitting the convolution kernel results in an imaginary part in $I_{Y2}$ and is therefore suppressed.
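Why the product image kills background noise can be seen in a toy example (the arrays below are synthetic stand-ins for $I_{dc}$ and $I_{gc}$, not the paper's images, which come from the convolutions described in section 3):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the clipped brightness image I_dc:
# background noise everywhere plus one genuine bright feature
I_dc = 0.2 * rng.random((32, 32))
I_dc[8:12, 8:12] += 1.0

# toy stand-in for the clipped gradient image I_gc:
# the spherical gradient kernel responds only at the feature
I_gc = np.zeros((32, 32))
I_gc[8:12, 8:12] = 1.0

# combined image: noise falling where I_gc is dark is multiplied away
I2 = I_dc * I_gc
```

Every pixel of `I2` outside the feature is exactly zero, so a peak finder applied to `I2` cannot pick up the background noise that would produce false features in `I_dc` alone.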

Our heuristic feature detection methods presented in section 3 are based on the concurrence of brightness maxima and spherical features detected using the gradient image. This information is combined in the images $I_2 = I_{dc} I_{gc}$ or $I_{Y2} = I_{dc} I_{Yc}$, and the involved convolution kernels are adapted for the reliable detection of spherical particles and the suppression of false features. Further, these combined images may also be modified by the suppression of ‘bridges’. All these manipulations contained in $I_2$ or $I_{Y2}$ improve the reliability of particle detection, but they do not necessarily result in an image that yields the most accurate particle positions. The maxima in $I_2$ and $I_{Y2}$ are displaced relative to the brightness maxima in the original image due to the background subtraction in $I_{dc}$ and due to the sensitivity of the gradient image $\nabla I_s$ to variations in the background. These effects cause the position errors $\langle \Delta r\rangle = 0.51$ p, 0.34 p, and 0.20 p for the CG, B&$\nabla$, and B&Y method, respectively, as listed in Table 1. The combined images $I_2$ and $I_{Y2}$ appear to give more accurate feature positions than the image $I_{dc}$ used for the CG method, but neither $I_{dc}$ nor $I_2$ is optimized to obtain the most accurate coordinates. Coordinates with higher accuracy are obtained with a final correction using the fracshift procedure, see the rows with label ‘F’ in Table 1; for B&Y, however, fracshift does not give a significant improvement. The inputs for the fracshift correction are a set of initial particle positions that are assumed to be close to the true positions and the smoothed image $I_s$, which represents the true positions in terms of brightness more accurately than the images $I_2$ or $I_{dc}$ and with about the same accuracy as the image $I_{Y2}$.

The best position accuracy obtained in the example presented in section 4 and Fig. 2 is $\langle \Delta r\rangle \approx 0.21$ p with the B&$\nabla$ method combined with the suppression of ‘bridges’ and the fracshift correction, and 0.20 p with the B&Y method with or without fracshift (see Table 1). The size polydispersity in this example is 4%, and the average particle diameter is 10 p. The most important contribution to the average position error $\langle \Delta r \rangle$ is the limited resolution of the microscope given by the point-spread function. When we calculate the image including the effect of the point-spread function but without any noise, $\langle \Delta r \rangle$ is reduced to about 0.19 p for B&$\nabla$ and 0.17 p for B&Y, an improvement of about 10%, while $\langle \Delta r \rangle$ obtained without the smearing due to the point-spread function but including noise in the image is about 0.13 p, an improvement of 38%. Therefore, the noise in our images has a relatively small effect on the accuracy of the particle coordinates compared to the point-spread function of the microscope. When we use the as-calculated image without noise and without applying the point-spread function, $\langle \Delta r \rangle$ is about 0.05 p. This remaining error is mainly due to nearest-neighbor particles: the convolution kernels $k_b$, $\boldsymbol k_c$, and $k_{2m}$ used to obtain the images $I_d$, $I_g$, and $I_{Y}$ are sensitive to the intensity tails of neighboring particles.
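The quoted errors $\Delta r$ come from comparing detected coordinates with the known input positions of the calculated image. A minimal sketch of such a comparison (brute-force nearest-neighbor matching in numpy; the paper's exact matching procedure is not spelled out here, so this is an illustrative stand-in):

```python
import numpy as np

def position_errors(true_pos, found_pos):
    """Distance Delta r from each true particle position to the nearest
    detected feature. Brute-force all-pairs matching, adequate for a few
    thousand particles; positions are (N, 3) arrays in pixel units."""
    true_pos = np.asarray(true_pos, float)
    found_pos = np.asarray(found_pos, float)
    d = np.linalg.norm(true_pos[:, None, :] - found_pos[None, :, :], axis=-1)
    return d.min(axis=1)

# toy usage: two particles detected with small offsets (pixels)
true_xyz  = np.array([[10.0, 10.0, 10.0], [20.0, 10.0, 10.0]])
found_xyz = np.array([[10.1, 10.0, 10.0], [20.0, 10.2, 10.0]])
dr = position_errors(true_xyz, found_xyz)
mean_dr = dr.mean()
```

For a full comparison one would also flag unmatched true positions (missed particles) and detected features far from any true position (false features).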

Comparing the bare B&Y method and the B&$\nabla$ method including the suppression of ‘bridges’ and fracshift with the CG method without further correction, the fraction of particles that cannot be identified is reduced by at least an order of magnitude, and the position errors are reduced to roughly 40% of the CG value (a factor of about 0.42). The best accuracy of $\langle \Delta r\rangle \approx 0.20$ p is good for a heuristic method but, as expected, does not reach the accuracy of a full image reconstruction. Further, the position errors presented here are for particles that are fully visible, not for those at the edge of the image, where quite accurate coordinates can be obtained with a full image reconstruction [30]. Our methods including the brightness gradient are designed to detect spherical particles and, therefore, are not expected to give improved results for particles that are not fully visible in the image; they tend to suppress particles cropped by the edge of the image, as these do not match the spherical convolution kernels. Nevertheless, heuristic methods for image analysis remain of interest, as they are much faster than a full image reconstruction with thousands of fit parameters and because they give a good estimate of the particle positions that is needed as input for an image reconstruction code such as PERI [30].

In addition to image noise and the point-spread function of the microscope, an additional complication for the detection of colloidal particles is size polydispersity. When the average particle diameter is about 10 pixels, the convolution kernels used for the B&$\nabla$-, B&Y-, and the CG method allow for a size variation of about 1 pixel. A reduced reliability of the particle detection should be expected as soon as polydispersity and the suspension structure become correlated, i.e. the polydispersity has a measurable effect on the suspension structure [39]. We find the B&$\nabla$ method without suppression of ‘bridges’ to give reliable particle positions up to a polydispersity $\sigma _p = 6\pm 1$%, while the fraction of particles that is not found increases to about $5\cdot 10^{-4}$ at $\sigma _p= 0.07$ and about $3\cdot 10^{-3}$ at $\sigma _p = 0.1$. In the examples discussed in sections 2 and 3, the polydispersity is $\sigma _p = 0.04$. With the suppression of bridges we find the fraction of unrecognized particles to be about an order of magnitude lower.

Comparing the run times of the presented particle-detection methods listed in the last column of Table 1, the B&Y method with or without the fracshift correction appears to be the most efficient one. Its run time is shorter than that for B&$\nabla$ with the suppression of ‘bridges’ and fracshift and it gives the most accurate particle positions. The listed run times were obtained on a MacBook Pro laptop (16-inch, 2019, with 2.6 GHz 6-Core Intel Core i7 CPU and 16 GB 2667 MHz DDR4 random-access memory) running IDL 8.7.3 under macOS Catalina. The IDL code is available on GitHub [42].

The use of the Hessian matrix to improve the detection of features is reminiscent of neural networks used for the detection of objects in images, which use Gaussian smoothing with various standard deviations, first and second derivatives of the brightness, as well as the Hessian matrix and its invariants (eigenvalues, trace, determinant) as input. An example is the shallow neural network used by ilastik [40], which can be trained with a quite small number of examples. As spheres are quite simple objects, a heuristic method as presented here gives accurate positions without the use of a neural network, even with a complex background. Further, a heuristic particle detection algorithm always gives reproducible results, while the output of a neural network depends on the training, which may vary from case to case.

6. Conclusions

Building on the method of Crocker and Grier (CG) [27] to detect bright features in a microscopy image, our particle detection algorithms use both the brightness and the brightness gradient of the image to obtain isolated brightness peaks for the spherical particles to be detected. The information included with the brightness gradient makes it possible to specifically detect spherical objects and improves the reliability of finding all wanted particles by one to two orders of magnitude compared to the CG method, while a large number of unwanted features that are detected with the CG method can be excluded. Further, the accuracy of the particle coordinates is improved compared to the CG method.

More accurate particle coordinates can be obtained using the fracshift correction introduced by the Kilfoil group [29]. However, the B&Y method appears to produce a filtered image $I_{Y2}$ that is about as accurate as the smoothed image $I_s$ used for the fracshift correction; therefore, fracshift does not give a significant improvement for this method. The enhancement of brightness maxima (section 4) detected with the Hessian matrix given by the second derivatives of the brightness improves the particle coordinates for the B&$\nabla$ method, while it deteriorates the coordinates in combination with the B&Y method. This must be due to the B&Y method's high sensitivity to changes in the brightness gradient. The suppression of brightness minima and saddle points yields a clearer separation of the peaks representing the particles, but the B&Y method appears to be more effective for this separation, as it suppresses the intensity in locations not fitting spherical particles more strongly than the B&$\nabla$ method.

The error of the determined particle positions is mostly due to the point-spread function of the microscope. The noise in the image has a modest effect ($\sim 10\%$) for the examples presented here with a particle diameter of $\approx 10$ pixels. More accurate particle positions can be obtained with a smaller pixel size (more pixels per particle), provided the noise in the image can be kept constant. With our B&$\nabla$ and B&Y methods, a $\sim 5\%$ size polydispersity of the particles has only a small effect on the particle coordinates, while a polydispersity of $10\%$ is found to have an appreciable effect on the reliability of particle detection.

Our fast, heuristic B&Y method is found to be the most efficient algorithm. It improves both the reliability and the accuracy of the detection of spherical particles compared to the widely used CG method. Moreover, the exclusion of unwanted features is improved. This is of particular interest for particle detection in heterogeneous environments close to surfaces or near other structures present in the sample.

Funding

Swiss National Science Foundation (200020_153050, 200020_184839).

Disclosures

The authors declare no conflicts of interest.

Data availability

The IDL code for the presented algorithms and part of the data underlying the results presented in this paper are available on GitHub [42]. Other data may be obtained from the authors upon reasonable request.

Supplemental document

See Supplement 1 for supporting content.

References

1. A. D. Dinsmore, E. R. Weeks, V. Prasad, A. C. Levitt, and D. A. Weitz, “Three-dimensional confocal microscopy of colloids,” Appl. Opt. 40(24), 4152–4159 (2001).

2. D. Semwogerere and E. R. Weeks, “Confocal microscopy,” in Encyclopedia of Biomaterials and Biomedical Engineering (Taylor and Francis, 2005).

3. P. Lu, “Confocal scanning optical microscopy and nanotechnology,” in Handbook of Microscopy for Nanotechnology, N. Yao and Z. Wang, eds. (Kluwer, 2005), pp. 3–24.

4. V. Prasad, D. Semwogerere, and E. R. Weeks, “Confocal microscopy of colloids,” J. Phys.: Condens. Matter 19(11), 113102 (2007).

5. P. Keim, G. Maret, U. Herz, and H. v. Grünberg, “Harmonic lattice behavior of two-dimensional colloid crystals,” Phys. Rev. Lett. 92(21), 215504 (2004).

6. D. Reinke, H. Stark, H.-H. v. Grünberg, A. B. Schofield, G. Maret, and U. Gasser, “Noncentral forces in crystals of charged colloids,” Phys. Rev. Lett. 98(3), 038301 (2007).

7. S. K. Sainis, V. Germain, and E. R. Dufresne, “Statistics of particle trajectories at short time intervals reveal fN-scale colloidal forces,” Phys. Rev. Lett. 99(1), 018303 (2007).

8. H. v. Grünberg, P. Keim, K. Zahn, and G. Maret, “Elastic behavior of a two-dimensional crystal near melting,” Phys. Rev. Lett. 93(25), 255703 (2004).

9. C. Eisenmann, U. Gasser, P. Keim, and G. Maret, “Anisotropic defect-mediated melting of two-dimensional colloidal crystals,” Phys. Rev. Lett. 93(10), 105702 (2004).

10. C. Eisenmann, P. Keim, U. Gasser, and G. Maret, “Melting of anisotropic colloidal crystals in two dimensions,” J. Phys.: Condens. Matter 16(38), S4095–S4102 (2004).

11. H. Löwen, R. Messina, N. Hoffmann, C. N. Likos, C. Eisenmann, P. Keim, U. Gasser, G. Maret, R. Goldberg, and T. Palberg, “Colloidal layers in magnetic fields and under shear flow,” J. Phys.: Condens. Matter 17(45), S3379–S3386 (2005).

12. Y. Han, N. Ha, A. Alsayed, and A. Yodh, “Melting of two-dimensional tunable-diameter colloidal crystals,” Phys. Rev. E 77(4), 041406 (2008).

13. Y. L. Han, Y. Shokef, A. M. Alsayed, P. Yunker, T. C. Lubensky, and A. G. Yodh, “Geometric frustration in buckled colloidal monolayers,” Nature 456(7224), 898–903 (2008).

14. U. Gasser, C. Eisenmann, G. Maret, and P. Keim, “Melting of crystals in two dimensions,” ChemPhysChem 11(5), 963–970 (2010).

15. A. D. Dinsmore, A. G. Yodh, and D. J. Pine, “Phase diagrams of nearly hard-sphere binary colloids,” Phys. Rev. E 52(4), 4045–4057 (1995).

16. A. D. Dinsmore, J. C. Crocker, and A. G. Yodh, “Self-assembly of colloidal crystals,” Curr. Opin. Colloid Interface Sci. 3(1), 5–11 (1998).

17. K.-h. Lin, J. C. Crocker, V. Prasad, A. Schofield, D. A. Weitz, T. C. Lubensky, and A. G. Yodh, “Entropically driven colloidal crystallization on patterned surfaces,” Phys. Rev. Lett. 85(8), 1770–1773 (2000).

18. U. Gasser, E. R. Weeks, A. Schofield, P. N. Pusey, and D. A. Weitz, “Real-space imaging of nucleation and growth in colloidal crystallization,” Science 292(5515), 258–262 (2001).

19. R. Blaak, S. Auer, D. Frenkel, and H. Löwen, “Crystal nucleation of colloidal suspensions under shear,” Phys. Rev. Lett. 93(6), 068303 (2004).

20. U. Gasser, “Crystallization in three- and two-dimensional colloidal suspensions,” J. Phys.: Condens. Matter 21(20), 203101 (2009).

21. K. Sandomirski, E. Allahyarov, H. Löwen, and S. U. Egelhaaf, “Heterogeneous crystallization of hard-sphere colloids near a wall,” Soft Matter 7(18), 8050–8055 (2011).

22. F. Ziese, G. Maret, and U. Gasser, “Heterogeneous nucleation and crystal growth on curved surfaces observed by real-space imaging,” J. Phys.: Condens. Matter 25(37), 375105 (2013).

23. A. M. Alsayed, M. F. Islam, J. Zhang, P. J. Collings, and A. G. Yodh, “Premelting at defects within bulk colloidal crystals,” Science 309(5738), 1207–1210 (2005).

24. E. R. Weeks, J. C. Crocker, A. C. Levitt, A. Schofield, and D. A. Weitz, “Three-dimensional direct imaging of structural relaxation near the colloidal glass transition,” Science 287(5453), 627–631 (2000).

25. P. J. Lu, E. Zaccarelli, F. Ciulla, A. B. Schofield, F. Sciortino, and D. A. Weitz, “Gelation of particles with short-range attraction,” Nature 453(7194), 499–503 (2008).

26. X. Cheng, X. Xu, S. A. Rice, A. R. Dinner, and I. Cohen, “Assembly of vorticity-aligned hard-sphere colloidal strings in a simple shear flow,” Proc. Natl. Acad. Sci. U. S. A. 109(1), 63–67 (2012).

27. J. C. Crocker and D. G. Grier, “Methods of digital video microscopy for colloidal studies,” J. Colloid Interface Sci. 179(1), 298–310 (1996).

28. K. E. Jensen and N. Nakamura, “Note: An iterative algorithm to improve colloidal particle locating,” Rev. Sci. Instrum. 87(6), 066103 (2016).

29. Y. Gao and M. L. Kilfoil, “Accurate detection and complete tracking of large populations of features in three dimensions,” Opt. Express 17(6), 4685–4704 (2009).

30. M. Bierbaum, B. D. Leahy, A. A. Alemi, I. Cohen, and J. Sethna, “Light microscopy at maximal precision,” Phys. Rev. X 7(4), 041007 (2017).

31. S. S. Rogers, T. A. Waigh, X. Zhao, and J. R. Lu, “Precise particle tracking against a complicated background: polynomial fitting with gaussian weight,” Phys. Biol. 4(3), 220–227 (2007).

32. M. C. Jenkins and S. U. Egelhaaf, “Confocal microscopy of colloidal particles: towards reliable, optimum coordinates,” Adv. Colloid Interface Sci. 136(1-2), 65–92 (2008).

33. J. C. Crocker and E. R. Weeks, “Particle tracking tutorial,” http://www.physics.emory.edu/faculty/weeks//idl/tracking.html.

34. F. M. Reza, An Introduction to Information Theory (Dover, 1994).

35. P. J. Lu, P. Sims, H. Oki, J. Macarthur, and D. A. Weitz, “Target-locking acquisition with real-time confocal (TARC) microscopy,” Opt. Express 15(14), 8702–8712 (2007).

36. R. Kurita, D. B. Ruffner, and E. R. Weeks, “Measuring the size of individual particles from three-dimensional imaging experiments,” Nat. Commun. 3(1), 1127 (2012).

37. R. Parthasarathy, “Rapid, accurate particle tracking by calculation of radial symmetry center,” Nat. Methods 9(7), 724–726 (2012).

38. J. Pawley, ed., Handbook of Biological Confocal Microscopy, 3rd ed. (Springer, 2006).

39. M. Kotlarchyk and S.-H. Chen, “Analysis of small angle neutron scattering spectra from polydisperse interacting colloids,” J. Chem. Phys. 79(5), 2461–2469 (1983).

40. S. Berg, D. Kutra, T. Kroeger, C. N. Straehle, B. X. Kausler, C. Haubold, M. Schiegg, J. Ales, T. Beier, M. Rudy, K. Eren, J. I. Cervantes, B. Xu, F. Beuttenmueller, A. Wolny, C. Zhang, U. Koethe, F. A. Hamprecht, and A. Kreshuk, “ilastik: interactive machine learning for (bio)image analysis,” Nat. Methods 16(12), 1226–1232 (2019).

41. B. J. Krishnatreya and D. G. Grier, “Fast feature identification for holographic tracking: the orientation alignment transform,” Opt. Express 22(11), 12773 (2014).

42. U. Gasser, “IDL code for accurate detection of spherical particles in a complex background,” GitHub, 2021, https://github.com/ursgasser/Accurate-detection-of-spherical-objects-in-a-complex-background.


Figures (8)

Fig. 1. Left to right: raw, smoothed, background, and clipped sectional views in a measured 3d confocal-microscopy image. The $z$ direction (upper row) is along the axial direction of the microscope. Scale bars of 5 $\mu$m are shown in the upper-left panel. The scale is the same for $x$- and $y$-direction. The expected resolution is $(220\pm 10)$ nm in $x$- and $y$ direction and $(400\pm 10)$ nm in $z$ direction.

Fig. 2. (a) Calculated image (top) and the resulting noisy image (bottom) of a suspension with volume fraction 0.43. (b) Distribution of position errors $\Delta r = \sqrt {\Delta x^2+ \Delta y^2+ \Delta z^2}$ as obtained from the CG method without further correction ($*$), including the fracshift correction ($\color{red}{\bigtriangleup}$), including the enhancement of brightness maxima ($\diamond$), and including both the fracshift correction and the enhancement of brightness maxima ($\color{red}{\bullet}$).

Fig. 3. (a) Section of a measured 3d confocal-microscopy image with colloidal particles next to an impenetrable region. The scale bar corresponds to 10 $\mu$m. (b) 2d sketch of the convolution kernel $\boldsymbol k_c$ containing unit vectors pointing towards the center of the kernel inside a spherical region and zero vectors outside the spherical region. (c) Sections through the kernel $\boldsymbol k_c$: $(x,y)$ plane (upper left) and $(x,z)$ plane (upper right) of $k_{c,z}$, $(x,y)$ plane of $k_{c,x}$ (lower left), and the $(x,y)$ plane of $k_{c,y}$ (lower right). (d) Image $I_{dc}$ as obtained with the CG method. (e) Image $I_{gc}$, see Eq. (4). (f) Combined image $I_2= I_{dc}\cdot I_{gc}$.

Fig. 4. Distribution of position errors $\Delta r = \sqrt {\Delta x^2+ \Delta y^2+ \Delta z^2}$ as obtained using the B&$\nabla$ method without further correction ($*$), including the fracshift correction ($\color{red}{\bigtriangleup}$), including the enhancement of brightness maxima ($\diamond$), and including both the fracshift correction and the enhancement of brightness maxima ($\color{red}{\bullet}$).

Fig. 5. Point-clouds representing features found in a 3d microscopy image of the sample shown in Fig. 3(a). Both features corresponding to a particle (${\bullet}$) and unwanted features ($\color{red}{\blacktriangle}$) found in a bright portion of the image or in a void between particles are shown. (a) CG method with features determined with image $I_{dc}$. (b) Features obtained with the combined image $I_2$.

Tables (1)

Table 1. Mean ($\langle\,\cdot\,\rangle$) and median ($\tilde{\cdot}$) values of the $x$-, $y$-, and $z$-coordinate errors and of the distance $\Delta r$ from the real position for the densely packed particles of the example shown in Fig. 2. The fracshift and Hessian-matrix corrections are indicated by ‘F’ and ‘H’, respectively. All values are given in pixels and are obtained from the image $I_{dc}$ for the CG method and from the image $I_2$ for the B&$\nabla$ method. Run times are given in the last column.

Equations (13)

$$G(i,j,k) = \frac{1}{N} \exp\!\left(-\frac{i^2+j^2+k^2}{2\sigma^2}\right) \quad \textrm{where} \quad N = \sum_{i,j,k} \exp\!\left(-\frac{i^2+j^2+k^2}{2\sigma^2}\right),$$
$$k_b(i,j,k) = \begin{cases} \frac{1}{N_b} & \textrm{for } \sqrt{i^2+j^2+k^2} < R_o \\ 0 & \textrm{otherwise} \end{cases} \quad \textrm{with} \quad N_b = \sum_{\substack{i,j,k \\ \sqrt{i^2+j^2+k^2} < R_o}} 1,$$
$$I_{dc}(\boldsymbol r) = \begin{cases} I_d(\boldsymbol r) & I_d(\boldsymbol r) > 0 \\ 0 & I_d(\boldsymbol r) \le 0 \end{cases}.$$
$$B = \sum_{\substack{i,j,k \\ |(i,j,k)-\boldsymbol R| < R_f}} I_{i,j,k}, \qquad R_g = \left[ \frac{1}{B} \sum_{\substack{i,j,k \\ |(i,j,k)-\boldsymbol R| < R_f}} I_{i,j,k}\, \left[(i,j,k)-\boldsymbol R\right]^2 \right]^{1/2},$$
$$\nabla I_s = \left( \partial_x, \partial_y, \partial_z \right)^T I_s,$$
$$\boldsymbol k_c = \begin{cases} -\hat{\boldsymbol r} & \textrm{for } |\boldsymbol r| \le R \\ 0 & \textrm{for } |\boldsymbol r| > R \end{cases}$$
$$I_g = \sum_i \partial_i I_s \otimes k_{c,i}$$
$$I_{2m}(\boldsymbol r) = |\nabla I_s(\boldsymbol r)|\, Y_{2m}\!\left[ \widehat{\nabla I_s}(\boldsymbol r) \right],$$
$$k_{2m}(\boldsymbol r) = \begin{cases} Y_{2m}(\hat{\boldsymbol r}) & \textrm{for } |\boldsymbol r| \le R \\ 0 & \textrm{for } |\boldsymbol r| > R \end{cases}$$
$$I_Y = \sum_{m=-2}^{2} (-1)^m\, k_{2m} \otimes I_{2,-m}.$$
$$I_{Yc} = \begin{cases} I_Y & \textrm{for } I_Y > 0 \\ 0 & \textrm{otherwise.} \end{cases}$$
$$H_{ij}(\boldsymbol r) = \left[ \frac{\partial^2 I_s(\boldsymbol r)}{\partial r_i \partial r_j} \right],$$
$$I_s(\boldsymbol r) \rightarrow \begin{cases} I_s(\boldsymbol r) & \lambda_i(\boldsymbol r) < 0\; \forall\, i \\ I_s(\boldsymbol r) \left(1 - \frac{|\textrm{det}\, H(\boldsymbol r)|}{\max |\textrm{det}\, H|} \right) & \textrm{otherwise} \end{cases},$$