
Deep learning for precise axial localization of trapped microspheres in reflective optical systems

Open Access

Abstract

High-precision axial localization is an important part of micro/nanoscale optical measurement, but existing methods suffer from low calibration efficiency, poor accuracy, and cumbersome measurement procedures. This is especially true in reflected-light illumination systems, where the lack of imaging detail leads to low accuracy for commonly used methods. Herein, we develop a trained residual neural network, coupled with a convenient data acquisition strategy, to address this challenge. Our method improves the axial localization precision of microspheres in both reflective and transmissive illumination systems. Using this new localization method, a reference position of the trapped microsphere, namely the “positioning point”, can be extracted from the identification results of each experimental group. This point relies on the unique signal characteristics of each sample measurement, eliminates systematic repeatability errors when performing identification across samples, and improves the localization precision across different samples. The method has been verified on both transmission and reflection illumination optical tweezers platforms. It brings greater convenience to measurements in solution environments and provides stronger guarantees for force spectroscopy measurements in scenarios such as microsphere-based super-resolution microscopy and the measurement of the surface mechanical properties of adherent flexible materials and cells.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Precision micro/nanoscale measurement systems have been used extensively in micromanipulation and super-resolution imaging in biology, and an increasing number of applications demand high precision in their measurement and control. For both biomechanical measurements and super-resolution imaging, localization tracking of microspheres during micro/nano manipulation is a vital feedback tool, and much work has been done on this aspect, such as single-molecule force spectroscopy [1,2] and micro- and nanorobots [3,4]. In molecular dynamics studies, force spectroscopy measurement systems apply mechanical forces to samples by manipulating the radial motion of spheres with microscale diameters, achieving sufficient sensitivity and resolution. In recent years, the development of microsphere super-resolution techniques [5,6] and the controllable rotation of microspheres using optical tweezers [7,8] have placed higher demands on microsphere axial localization tracking techniques [9–16]. The axial localization of microspheres is also a valuable parameter in controllable single-molecule studies such as DNA supercoiling [17,18] and protein tracking using optically trapped microspheres [19–21]. Furthermore, due to systematic errors such as instrument setup errors and the repeatability errors of precision localization platforms, the actual position of a microsphere trapped by the optical tweezers inevitably differs from the expected value. These deviations can bias the data in force spectroscopy analysis and super-resolution imaging [22,23].

Current optical measurement methods for the axial localization of microspheres in optical traps fall into two categories: light intensity analysis and image processing. Light intensity analysis methods include the critical angle [24], forward-scattered light [25], back-scattered light [26], stereoscopy [27], digital holography [28], information entropy [29], and chromatic confocal [30] methods. These methods detect localization at the nm scale in transmission illumination optical systems but are limited by their complex system structure, set-up, and maintenance. Image processing methods primarily use out-of-focus images as look-up tables (LUTs) to obtain localization information. While these methods have a simple system structure, they typically require pre-calibration, which is time-consuming, and they are limited by imaging conditions and poor universality: small changes in lighting conditions, objects, or the environment will affect the measurement accuracy. The algorithms used in image processing methods can be divided into image similarity comparison algorithms that use out-of-focus images as LUTs, such as the image difference and correlation coefficient methods, and algorithms that characterize microsphere images, such as identifying diffraction ring radii and image feature vectors (e.g., calculating local gradients). In some methods, force and axial localization cannot be measured simultaneously [25,26].

This study investigates the axial localization precision in reflection systems and compares it with transmission systems. Figure 1 shows that, for the same system (the parameters of the system we built are detailed in Section 3.1) and the same sample, different illumination methods (transmission and reflection) produce different imaging effects and characteristics. The diffraction rings of microspheres in transmission illumination systems are mainly formed by the illumination light, whereas those in reflection systems are mainly formed by reflected light. In addition, the illumination intensity differs significantly between the two systems. Considering both the intensity of the illumination and the source of the light that forms the diffraction rings, the illumination conditions of transmission systems are better than those of reflection systems. As shown in Fig. 1(b), diffraction rings up to order 8–9, with abundant features, can be observed in microsphere images collected by the transmission system, but only rings of order 3–4 can be observed in reflection-system images of the same sample, a direct result of the low signal-to-noise ratio of reflection imaging. Reflective systems therefore require algorithms with stronger feature extraction capabilities to achieve the same localization precision as transmission systems. In our tests, none of the conventional methods applicable to microsphere imaging in transmitted-light optical systems achieved good results for microsphere image localization in reflective optical systems. The surface of an opaque sample, however, cannot be imaged by a transmission illumination system and can be observed only under reflected illumination. There is thus a need for microsphere localization algorithms suited to reflection systems, which can observe a wider variety of samples; unfortunately, such algorithms have rarely been investigated.


Fig. 1. (a) Schematic diagram of sampling at different heights. (b) Comparison of transmission and reflected illumination sampling. (c) Reflected illumination configuration diagram. (d) Transmitted illumination configuration diagram.


A novel axial localization tracking algorithm and a cross-sample localization method, the “positioning point” method, are proposed to eliminate systematic errors. First, we demonstrate the applicability of the new localization algorithm and the conventional localization algorithms to systems under different kinds of illumination. Second, “positioning points” are obtained from the experimental results based on the new ResCNN localization algorithm and data acquisition method, and system setup and repeatability errors are then eliminated by matching the “positioning points” in cross-sample measurements. Third, the localization algorithm is verified experimentally. Finally, we compare the localization precision of the new algorithm in different illumination systems, with and without the “positioning point” method, and estimate the limit of the microsphere axial localization precision of convolutional neural network (CNN) models in reflected illumination systems.

2. Data processing methods

2.1 Microsphere ROI identification method in full field-of-view

In the full field-of-view, more than one microsphere may be suitable for testing, and multiple microspheres may adhere to or be in close contact with one another. Once a microsphere has been selected, it is difficult to ensure that it remains at the center of the field-of-view and occupies most of the frame. Therefore, after obtaining the approximate position of an individual microsphere, a region-of-interest (ROI) tracking algorithm is used to track its exact location. Under transmission illumination, the Hough transform is commonly used to identify diffraction rings and determine the center of microsphere images. This method is not applicable to reflective systems, because the number of diffraction rings and the image contrast are significantly lower than in transmission illumination systems.

After obtaining microsphere images in the reflective optical system, we first performed histogram equalization on 360 × 360 pixel microsphere images containing the microsphere center, enhancing the image details. The images were then binarized, with the threshold set at 80% of the gray-value range, and eroded and then dilated, which completely extracts the central disc of the microsphere image. We then found the center of mass of this central portion to obtain the exact ROI position of the microsphere, reacquired the images with the microsphere centered, and constructed the training and test datasets. Compared with the Hough circle detection method, this approach has better anti-interference detection ability.
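For concreteness, the ROI pipeline above can be sketched in a few lines of Python with OpenCV; the 80% threshold follows the text, while the morphological kernel size and the error handling are illustrative assumptions.

```python
# Minimal sketch of the ROI-identification pipeline described above.
import cv2
import numpy as np

def locate_microsphere_roi(image_8bit):
    """Return the (x, y) center of a microsphere in an 8-bit grayscale image."""
    # 1. Histogram equalization to enhance low-contrast reflective details.
    equalized = cv2.equalizeHist(image_8bit)

    # 2. Binarize at 80% of the gray-value range (the paper's threshold choice).
    lo, hi = int(equalized.min()), int(equalized.max())
    threshold = lo + 0.8 * (hi - lo)
    _, binary = cv2.threshold(equalized, threshold, 255, cv2.THRESH_BINARY)

    # 3. Erode then dilate (morphological opening) to isolate the central disc.
    kernel = np.ones((5, 5), np.uint8)  # kernel size is an assumption
    opened = cv2.dilate(cv2.erode(binary, kernel), kernel)

    # 4. Center of mass of the surviving pixels gives the ROI center.
    moments = cv2.moments(opened, binaryImage=True)
    if moments["m00"] == 0:
        raise ValueError("no microsphere found after thresholding")
    return moments["m10"] / moments["m00"], moments["m01"] / moments["m00"]
```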

2.2 Application of the ResNet model to microsphere axial localization

The microsphere localization algorithm was developed based on the out-of-focus plane method of vision-based localization, so it can be widely and easily applied to various reflective optical systems. Common out-of-focus plane methods include localization based on the diffraction ring radii of microspheres, local gradient feature localization, and correlation coefficient localization. This study compares these three methods with the new algorithm.

Deep learning, which has evolved rapidly in recent years, extracts features through supervised or unsupervised learning. Deep neural networks consisting of linear and nonlinear transformations decompose and abstract features into layer weights in a data-driven manner for downstream tasks. Empirically, network depth is critical to model performance: as the number of layers increases, the network can extract more complex features, so a deeper network should theoretically produce better results. However, experiments have shown that deep networks suffer from degradation [35]: identification accuracy saturates or even decreases as layers are added. The deep residual network (ResNet) is a very effective CNN image feature extractor [35]. The ResNet model improves the performance of deep networks by adding residual learning blocks to the architecture. Learning the residual is easier than directly learning the original mapping: when the residual is zero, the new layers simply perform an identity mapping and network performance does not degrade. In practice the residual is not zero, so the stacked layers learn new features on top of the input features, enhancing network performance.
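As a minimal illustration of residual learning, the following PyTorch sketch implements a residual block with an identity shortcut; it mirrors the basic block of He et al. [35] rather than the bottleneck block actually used inside ResNet50.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block: output = ReLU(x + F(x))."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        # The stacked layers learn the residual F(x); the shortcut adds x back,
        # so when F(x) is near zero the block reduces to an identity mapping.
        residual = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(x + residual)
```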

In this study, we used the classical ResNet architecture and adopted the ResNet50 model because it achieves good test results with modest computation time. We replaced the final fully connected (fc) output layer with a multilayer fully connected head connected by dropout and rectified linear unit (ReLU) activations, and used LogSoftmax for classification, treating the different axial positions of the microsphere as categories. The detailed network structure and the hyperparameters selected after optimization are shown in Table 1. We selected NLLLoss as the loss function, which served as the feedback signal during training. Experimental data were collected on the optical tweezers platform, and the axial position of the microsphere was obtained using the precision localization platform. The precise axial position data and out-of-focus images of many groups of microspheres obtained by the acquisition script were divided into axial-position groups according to the set step values, and after pre-processing the out-of-focus images were randomly split into training, validation, and test sets in the ratio 3:1:1. We then computed the loss between the position predicted by the model and the actual position and trained the model iteratively.


Table 1. Model Network Structure
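A sketch of this head modification in PyTorch is shown below, assuming torchvision's stock ResNet50 as the backbone; the hidden width, dropout rate, and use of pretrained weights are illustrative assumptions, and the exact layer settings are those listed in Table 1.

```python
import torch.nn as nn
from torchvision import models

def build_localization_model(num_positions):
    """num_positions: number of discrete axial-position classes."""
    model = models.resnet50(pretrained=True)  # pretrained weights are an assumption
    model.fc = nn.Sequential(
        nn.Linear(model.fc.in_features, 512),  # hidden width is an assumption
        nn.ReLU(),
        nn.Dropout(p=0.5),                     # dropout rate is an assumption
        nn.Linear(512, num_positions),
        nn.LogSoftmax(dim=1),                  # pairs with nn.NLLLoss
    )
    return model
```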

The Adam optimizer was used to train the model, and dropout and data augmentation techniques such as translation, rotation, vertical flipping, and random cropping were applied during training to improve model performance. For hyperparameter selection, we first ran a grid search on the dataset without augmentation, analogous to “coarse tuning” in a microscope system, found the best-performing combination, and then used that combination for experimental verification on the augmented dataset as “fine tuning”. The most suitable batch size, 16, was selected through iterations and grid searches during training; the most suitable initial learning rate was 1e-4, with the learning rate gradually decreasing as the number of iterations increased. Unlike non-differentiable empirical methods, neural network models can be trained with well-established deep-learning backpropagation. The proposed model can therefore serve as a feature recognition module in more complex deep-learning tasks that analyze image data for microsphere axial localization, including joint training with other deep-learning components.
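The training setup described above might look as follows in PyTorch; the augmentation magnitudes, crop size, and learning-rate schedule are illustrative assumptions, with only the optimizer, batch size, and initial learning rate taken from the text.

```python
from torch import nn, optim
from torchvision import transforms

# Augmentations named in the text; all magnitudes below are assumptions.
train_transform = transforms.Compose([
    transforms.RandomVerticalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.RandomAffine(degrees=0, translate=(0.05, 0.05)),  # translation
    transforms.RandomCrop(224),  # crop size is an assumption
    transforms.ToTensor(),
])

def train(model, train_loader, epochs, device="cuda"):
    criterion = nn.NLLLoss()  # matches the LogSoftmax output layer
    optimizer = optim.Adam(model.parameters(), lr=1e-4)
    # Decay the learning rate as iterations increase (schedule is an assumption).
    scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)
    model.to(device)
    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:  # labels: axial-position classes
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        scheduler.step()
```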

The algorithm was implemented in Python, and the PyTorch [36] library was used to build, train, and evaluate the deep-learning model. The program requires a CUDA-compatible GPU and a fast CPU to accelerate training and inference. We ran the tests on a workstation with an NVIDIA GeForce RTX 3060 Ti GPU, an Intel Xeon W-2155 CPU, and 16 GB RAM; other similar platforms can also be used.

3. Experiment

3.1 Experimental set-up

A reflection illumination system was constructed; the experimental setup, a reflective optical tweezers system, is shown in Fig. 1. The trapping laser and the illumination light are coupled into the same objective, and the image is formed by the visible light reflected from the surface of the microsphere, as shown in Fig. 1(c). The trapping laser was a 1064 nm laser (Spectra-Physics, J20I-BL-106C, 5 W), and a Zeiss objective (C-Apochromat, 63×/1.2) was used to generate the optical trap. For precise positioning of the microspheres and data acquisition, a piezoelectric stage (Physik Instrumente, P-517; 20 µm travel along the Z-axis; 2 nm precision in closed-loop mode) was used to move the cuvette. Köhler illumination provided the bright-field illumination for the microscope. Images of trapped microspheres were captured with a CMOS camera (Basler acA1300-60gm, 10 bits/pixel, 60 Hz). Because bio-affinity must be considered in the experiments, biotin-coated microspheres with a diameter of 1.58 µm were chosen; such microspheres can also be used for flexible-material measurements.

The cuvette, with an internal depth of 0.22 mm, was assembled from a slide and a coverslip using double-sided adhesive tape (Parafilm). A solution containing microspheres (diameter 1.58 µm, silicon dioxide) was injected into the cuvette, which was then sealed with vacuum grease and fixed on the piezoelectric stage under the objective for data acquisition.

3.2 Data acquisition methods

For optical systems in which the objective lens is used only for imaging, such as magnetic tweezers [37], the focal plane of the imaging system can be adjusted simply by moving the objective back and forth, yielding a dataset of out-of-focus images in the form of LUTs together with the out-of-focus axial positions of the real images. For optical systems in which the objective is responsible for both imaging and control of the microspheres, such as reflected illumination optical tweezers, the imaging subsystem and the laser subsystem are coupled in the same objective, and out-of-focus images of the microspheres cannot be obtained by moving the objective. Such images are nevertheless still required. During calibration, adhesives are often used to bond microspheres to the cuvette slide, so that images of microspheres at different axial positions, and hence LUTs, can be obtained directly by displacing the slide. More generally, if the microsphere used for calibration is not the one used for measurement, consistency between calibration and testing is lost, and the spatial position of the trapped microsphere is measured inaccurately after sample replacement. When calibration and experiments are not performed on the same sample, measurement errors result [31–34]. In practice, nonadherent microspheres and microspheres adhered to the slide do not image identically, and the localization of nonadherent microspheres has significant implications for measurement accuracy and precision.

The adherent and non-adherent microsphere techniques used in the experiment constitute separate experimental groups. There are two general methods for adhering microspheres: biological encapsulation and physical fixation. In biological encapsulation, the microsphere surface is coated with streptavidin or a similar protein, and the glass substrate is modified with amino or hydroxyl groups, so that the microspheres are immobilized on the substrate by chemical bonding. In physical fixation, the microspheres are immobilized by heating, for example on slides melted onto a heated magnetic stirrer such as the IKA C-MAG HS 7. Non-adherent microspheres, in contrast, are simply suspended in ultrapure water inside the cuvette.

Here, we dynamically immobilized the beads on the slide using optical trapping forces [12], rather than relying on the previously used adhesives. Therefore, the same microsphere can be used for calibration and testing.

As shown in Fig. 2, a microsphere is trapped by the optical trap in the middle of the cuvette. As the cuvette is moved along the Z-axis by the piezoelectric displacement platform, one side of the cuvette (the slide in the figure) touches the microsphere. As the cuvette continues to move in this direction, the microsphere gradually leaves its original optically bound position, goes out of focus, and reaches a new equilibrium under the restoring force $F_Z$ of the optical trap and the supporting force $F_N$ of the slide. At this point, the microsphere is in equilibrium under the two forces $F_Z$ and $F_N$, and its precise axial position is read out through the LabVIEW program on the host computer of the precision piezoelectric stage. Owing to the precision of the piezoelectric platform, the microsphere can be moved precisely out of the optical trap center to any axial position within the range $\Delta z$ with nanoscale resolution.


Fig. 2. Displacement of a microsphere in an optical trap by a displacement platform. The sampling process moves from (a) to (b). (c) Microsphere images of the reflected illumination system acquired in steps of 5 nm.


A LabVIEW program on the host computer provides integrated control of the precision piezoelectric stage and the CMOS camera. A script drives the experiment: the displacement stage moves in specified steps, and the CMOS camera rapidly captures multiple frames at each step, which are stored locally for the next data processing step.

Pre-calibration was performed as follows. First, the platform-driven slide moved a step distance $Z_0$, deflecting the bead from the trap center by $z_1$. Then, the precision displacement platform moved gradually in the direction opposite to the calibration direction, and the microsphere returned toward the center. Meanwhile, the camera recorded images of the bead at each axial position as calibration data. As shown in Fig. 2(c), images acquired in steps of 5 nm are very similar; even images up to five steps apart, at a 30 nm pitch, are difficult to distinguish with the naked eye. When the cuvette driven by the precision displacement platform had fully returned to its original position, the microsphere was still trapped by the optical trap.
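The acquisition loop itself is implemented in LabVIEW in our setup; the following Python pseudocode is only a hypothetical rendering of its logic, with `stage` and `camera` as placeholder interfaces standing in for the PI stage and Basler camera.

```python
# Hypothetical sketch of the calibration loop; `stage` and `camera` are
# placeholder objects, not a real driver API.
def acquire_calibration_stack(stage, camera, z0_nm, step_nm=5, frames_per_step=10):
    """Step the cuvette back toward the trap center, imaging the bead at each step."""
    dataset = []
    stage.move_z(z0_nm)  # push the bead a distance Z0 out of the trap center
    position = z0_nm
    while position > 0:
        stage.move_z(-step_nm)           # step back toward the calibration origin
        position -= step_nm
        z_true = stage.read_z()          # closed-loop readout, ~2 nm precision
        for _ in range(frames_per_step): # multiple frames per axial position
            dataset.append((z_true, camera.grab_frame()))
    return dataset                       # (true position, image) pairs for the LUT
```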

Since the acquisition process is controlled very quickly, the environmental conditions, and the corresponding errors, can be taken as constant over one set of acquisition experiments.

3.3 “Positioning point” method in cross-sample measurement

Differences in illumination environment, axial position, and individual microspheres between experimental groups lead to differences in the diffraction-ring image characteristics between groups, which makes position measurement across experimental groups difficult and inaccurate. By training the model with different batches of data from a single sample group, we could determine the precise focusing position of the microsphere when it was trapped by the optical trap, regardless of whether the experimental groups used adherent or nonadherent spheres. The “positioning point” is obtained from the experimental results using the new ResCNN localization algorithm and the data acquisition method described above; the “positioning point” of the sample data corresponds to the focal plane.

In the case of nonadherent spheres, the “positioning point” appears because the slide has not yet touched the microsphere when data acquisition starts. In this pre-sampling stage, although the true value changes, the microsphere position and the imaging characteristics of the diffraction ring do not, so the spread of the measured values increases markedly. In the region of small true values before the red circle in Fig. 3(a), the predicted values are therefore randomly distributed in front of the focal plane. In the region of larger true values after the red circle, i.e., behind the focal plane, the predicted values are positively correlated with the true values and are no longer randomly distributed, because the imaging characteristics of the diffraction ring now change significantly with axial position, as shown in Fig. 3(a).


Fig. 3. (a) Identification result of the nonadherent sphere in the reflective system based on the ResNet model. (b) Identification result of the adherent sphere in the reflective system based on the ResNet model. The red circle is the “positioning point”, i.e., the moment when the trapped microsphere leaves the trap center.


In the case of adherent spheres, from the beginning of data acquisition the microsphere always moves with the slide, approaching the focal plane from one side and then gradually moving away on the other. In the region of small true values before the red circle in Fig. 3(b), the predicted values are positively correlated with the true values because the imaging characteristics of the diffraction ring change with axial position; in the region of large true values after the red circle, they are likewise positively correlated, because the ring characteristics change significantly with axial position. The regions before and after the point have two distinct characteristics and can therefore be clearly distinguished, as shown in Fig. 3(b). This corresponds to the situation depicted in Fig. 4, where the imaging characteristics of the diffraction rings differ on the two sides of the focal plane in both the transmission and reflection configurations, owing to the opposite polarity of the diffraction centers on the two sides of the focal plane, i.e., the interchange of bright and dark areas.


Fig. 4. Imaging characteristics of diffraction rings when the microsphere is (a) below the focal plane, (b) at the focal plane, and (c) above the focal plane in the transmission configuration system. Imaging characteristics of diffraction rings when the microsphere is (d) below the focal plane, (e) at the focal plane, and (f) above the focal plane in the reflection configuration system. The “positioning point” of the sample data corresponds to the focal plane.


In summary, this point can be used as the precise focusing position of the microsphere when it is trapped by the optical trap, namely the “positioning point” of the sample data. It allows the algorithm to align the reference position for microsphere imaging across samples, enhancing the robustness of the algorithm; this alignment capability has not been explicitly described in other literature or algorithms. If this method is not used to establish the axial reference position of the microspheres, matching multiple groups of sample data incurs systematic errors caused by the repeatability error of the mechanical displacement device, increasing the localization error of all algorithms.
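One way to locate this point automatically from the identification results is sketched below: slide a window over the (true, predicted) sequence and report the first index at which the two become strongly correlated. The window length and correlation threshold are illustrative assumptions; the paper identifies the point from the same qualitative transition visible in Fig. 3.

```python
# Sketch of "positioning point" detection via a sliding correlation window;
# window size and threshold are assumptions, not values from the paper.
import numpy as np

def find_positioning_point(z_true, z_pred, window=50, threshold=0.8):
    z_true, z_pred = np.asarray(z_true), np.asarray(z_pred)
    for start in range(len(z_true) - window):
        seg = slice(start, start + window)
        # Before the point, predictions are random, so correlation stays low;
        # after it, they track the true motion and correlation rises sharply.
        r = np.corrcoef(z_true[seg], z_pred[seg])[0, 1]
        if r > threshold:
            return start  # index of the "positioning point"
    return None
```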

4. Results and discussion

4.1 Localization precision of the algorithm

Here, ResNet was used to track the axial position of 1.58 µm diameter microspheres trapped by optical tweezers under transmitted and reflected illumination. We used the same optical tweezers system and the same sample for ten sets of tests, acquired the images with the vision sensor, removed coarse errors, and compared the mean absolute error (MAE) with that of the traditional out-of-focus plane algorithms, i.e., the diffraction ring radius identification, local gradient feature, and correlation coefficient matching methods, as shown in Table 2. Figure 5 shows the prediction error values (a) and prediction results (b) for nonadherent spheres at a 5 nm step under reflected illumination, and the prediction error values (c) and prediction results (d) for nonadherent spheres at a 5 nm step under transmitted illumination, as predicted by the ResNet-based model. The measurement error is largest near the extremes of the true values in Fig. 5, mainly because the farther the bead is from the focal plane, the larger the proportion of the image occupied by diffraction-ring features, so the most distant images on the two sides of the focal plane are easily confused. In the experiments, we found that different step sizes should be used for data captured under different illumination modes. As noted above, the diffraction-ring signals differ in strength between illumination modes: the transmission illumination image has a higher signal-to-noise ratio, so a finer sampling step is applicable, whereas the reflection illumination image has a lower signal-to-noise ratio, so a coarser sampling step is applicable, which can effectively prevent over-fitting. Our method achieves nm-level measurements in reflective illumination mode, a significant improvement over other schemes, and comparable measurements in transmissive illumination mode.
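As a minimal sketch of this evaluation, the MAE after coarse-error removal can be computed as follows; the 3-sigma rejection rule is an assumption, since the text states only that coarse errors were removed.

```python
# MAE over test predictions after discarding coarse (gross) errors;
# the 3-sigma cut is an illustrative assumption.
import numpy as np

def mae_without_coarse_errors(z_true, z_pred, sigma_cut=3.0):
    errors = np.abs(np.asarray(z_pred) - np.asarray(z_true))
    keep = errors < errors.mean() + sigma_cut * errors.std()  # reject outliers
    return errors[keep].mean()
```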


Fig. 5. (a) Prediction error values and (b) prediction results for a nonadherent sphere in the reflective system based on the ResNet model; (c) prediction error values and (d) prediction results for a nonadherent sphere in the transmitted system based on the ResNet model.



Table 2. Localization precision of different algorithms in reflected and transmitted illumination systems

4.2 Localization precision of the “positioning point” method

Based on the ResNet algorithm and the nonadherent-sphere data acquisition method, this study clearly identified the position in the image sequence at which the microsphere, confined by the optical trap, begins to go out of focus, namely the “positioning point”, and collected images for different sample groups in the reflective system. We used the ResNet model to perform axial localization on cross-sample data that were not aligned by the “positioning point” method and on data that were aligned by it; the MAE results are shown in Table 3. Figure 6 shows the prediction error values (a) and prediction results (b) for cross-sample data at a 30 nm step aligned by the “positioning point” method under reflected illumination, as predicted by the ResNet-based model. In reflective illumination mode, our method can measure across sample groups down to the 50 nm level, a significant improvement over other schemes.


Fig. 6. (a) Prediction error values and (b) prediction results for cross-sample set data aligned by the “positioning point” method based on the ResNet model.



Table 3. Influence of the “positioning point” method on the localization precision of the ResNet model

5. Conclusions

In this study, a modified ResNet classification model was used to address microsphere axial localization in reflective illumination systems, where conventional approaches are time-consuming, generalize poorly, and are sensitive to slight changes in illumination conditions, objects, or the environment. The study also explored whether CNN feature extraction algorithms are suitable for localizing microspheres in reflective systems. This type of localization problem requires detecting small changes in circular features, and prior knowledge is needed before good judgments can be made. Conventional methods achieve nanometer-scale microsphere localization in transmission illumination systems but are not applicable to microsphere localization in reflective systems, where, because the image characteristics change only slightly, they achieve only micrometer-scale localization. Furthermore, based on the ResNet algorithm and the data acquisition method, a reference position can be extracted from the identification results of the reflective microsphere images as the microsphere moves out of focus. This position is the inter-group “positioning point”. The method relies on the unique signal characteristics obtained from each sample measurement, eliminates setup and repeatability errors when performing identification between sample groups, and aligns the height reference position among groups, improving the localization precision across sample groups. The recognition results show that the CNN model performs far better than the traditional localization methods. A probable explanation is that when the signal-to-noise ratio is very low, the illumination features in the image are not well regularized: hand-constructed feature extractors are difficult to adapt, the features may not be consistent at different heights, and they do not even follow a typical law of change, which is an important reason why the positioning accuracy of traditional schemes is limited. Faced with these problems, the CNN model does not presuppose features but relies on its many neurons to find unique, highly correlated features, which is why it is better suited to such recognition problems. Compared with conventional algorithms, the ResNet model therefore achieves higher microsphere axial localization resolution. Using standard force spectroscopy samples, this method can measure the axial force of optical tweezers, as well as the mechanical properties of flexible adherent materials and cell surfaces, to an accuracy better than 10 nm. In addition, the ResNet-based method can support theoretical studies that require precise positional analysis in microsphere-based super-resolution microscopy.

Funding

National Natural Science Foundation of China (52075383, 61927808).

Acknowledgments

This work was supported by the National Natural Science Foundation of China [grant numbers 52075383, 61927808].

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. A. Ashkin, “History of optical trapping and manipulation of small-neutral particle, atoms, and molecules,” IEEE J. Sel. Top. Quantum Electron. 6(6), 841–856 (2000). [CrossRef]  

2. D. G. Grier, “A revolution in optical manipulation,” Nature 424(6950), 810–816 (2003). [CrossRef]  

3. H. Xie, M. Sun, X. Fan, Z. Lin, W. Chen, L. Wang, L. Dong, and Q. He, “Reconfigurable magnetic microrobot swarm: Multimode transformation, locomotion, and manipulation,” Sci. Robot. 4(28), eaav8006 (2019). [CrossRef]  

4. X. Liu, Q. Gao, S. Wu, H. Qin, T. Zhang, X. Zheng, and B. Li, “Optically Manipulated Neutrophils as Native Microcrafts In Vivo,” ACS Cent. Sci. 8, 1017 (2022). [CrossRef]  

5. M. Pan, Y. Fu, M. Zheng, H. Chen, Y. Zang, H. Duan, Q. Li, M. Qiu, and Y. Hu, “Dielectric metalens for miniaturized imaging systems: progress and challenges,” Light: Sci. Appl. 11(1), 195 (2022). [CrossRef]  

6. Z. Wang, W. Guo, L. Li, B. Luk’yanchuk, A. Khan, Z. Liu, Z. Chen, and M. Hong, “Optical virtual imaging at 50 nm lateral resolution with a white-light nanoscope,” Nat. Commun. 2(1), 218 (2011). [CrossRef]  

7. X. Gao, Y. Wang, X. He, M. Xu, J. Zhu, X. Hu, X. Hu, H. Li, and C. Hu, “Angular Trapping of Spherical Janus Particles,” Small Methods 4(12), 2000565 (2020). [CrossRef]  

8. X. Gao, C. Zhai, Z. Lin, Y. Chen, H. Li, and C. Hu, “Simulation and Experiment of the Trapping Trajectory for Janus Particles in Linearly Polarized Optical Traps,” Micromachines 13(4), 608 (2022). [CrossRef]  

9. J. R. Moffitt, Y. R. Chemla, D. Izhaky, and C. Bustamante, “Differential detection of dual traps improves the spatial resolution of optical tweezers,” Proc. Natl. Acad. Sci. 103(24), 9006–9011 (2006). [CrossRef]  

10. E. Fällman and O. Axner, “Design for fully steerable dual-trap optical tweezers,” Appl. Opt. 36(10), 2107–2113 (1997). [CrossRef]  

11. K. C. Neuman and A. Nagy, “Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy,” Nat. Methods 5(6), 491–505 (2008). [CrossRef]  

12. G. Ma, C. Hu, S. Li, X. Gao, H. Li, and X. Hu, “Axial displacement calibration and tracking of optically trapped beads,” Opt. Lasers Eng. 134, 106285 (2020). [CrossRef]  

13. B. Midtvedt, S. Helgadottir, A. Argun, J. Pineda, D. Midtvedt, and G. Volpe, “Quantitative digital microscopy with deep learning,” Appl. Phys. Rev. 8(1), 011310 (2021). [CrossRef]  

14. L. A. Carlucci and W. E. Thomas, “Modification to axial tracking for mobile magnetic microspheres,” Biophys. Rep. 1(2), 100031 (2021). [CrossRef]  

15. Y. Wu, A. Ray, Q. Wei, A. Feizi, X. Tong, E. Chen, Y. Luo, and A. Ozcan, “Deep Learning Enables High-Throughput Analysis of Particle-Aggregation-Based Biosensors Imaged Using Holography,” ACS Photonics 6(2), 294–301 (2019). [CrossRef]  

16. A. Kashchuk, O. Perederiy, C. Caldini, L. Gardini, F. Pavone, A. Negriyko, and M. Capitanio, “Particle localization using local gradients and its application to nanometer stabilization of a microscope,” bioRxiv, 2021.11.11.468294 (2021). [CrossRef]  

17. R. Pollari and J. N. Milstein, “Accounting for polarization in the calibration of a donut beam axial optical tweezers,” PLoS One 13(2), e0193402 (2018). [CrossRef]  

18. S. Forth, C. Deufel, M. Y. Sheinin, B. Daniels, J. P. Sethna, and M. D. Wang, “Abrupt Buckling Transition Observed during the Plectoneme Formation of Individual DNA Molecules,” Phys. Rev. Lett. 100(14), 148301 (2008). [CrossRef]  

19. Z. Qi, R. A. Pugh, M. Spies, and Y. R. Chemla, “Sequence-dependent base pair stepping dynamics in XPD helicase unwinding,” eLife 2, e00334 (2013). [CrossRef]  

20. Y. R. Chemla, “High-resolution, hybrid optical trapping methods, and their application to nucleic acid processing proteins,” Biopolymers 105(10), 704–714 (2016). [CrossRef]  

21. Z. Lin, X. Gao, S. Li, and C. Hu, “Learning-based event locating for single-molecule force spectroscopy,” Biochem. Biophys. Res. Commun. 556, 59–64 (2021). [CrossRef]  

22. N. B. Viana, M. S. Rocha, O. N. Mesquita, A. Mazolli, P. A. Maia Neto, and H. M. Nussenzveig, “Towards absolute calibration of optical tweezers,” Phys. Rev. E 75(2), 021914 (2007). [CrossRef]  

23. Z. Gong, Z. Wang, Y. Li, L. Lou, and S. Xu, “Axial deviation of an optically trapped particle in trapping force calibration using the drag force method,” Opt. Commun. 273(1), 37–42 (2007). [CrossRef]  

24. E. Higurashi, R. Sawada, and T. Ito, “Axial and lateral displacement measurements of a microsphere based on the critical-angle method,” Jpn. J. Appl. Phys. 37(7R), 4191–4196 (1998). [CrossRef]  

25. L. Friedrich and A. Rohrbach, “Improved interferometric tracking of trapped particles using two frequency-detuned beams,” Opt. Lett. 35(11), 1920–1922 (2010). [CrossRef]  

26. A. R. Carter, G. M. King, and T. T. Perkins, “Back-scattered detection provides atomic-scale localization precision, stability, and registration in 3D,” Opt. Express 15(20), 13434–13445 (2007). [CrossRef]  

27. A. Sato, Q. D. Pham, S. Hasegawa, and Y. Hayasaki, “Three-dimensional subpixel estimation in holographic position measurement of an optically trapped nanoparticle,” Appl. Opt. 52(1), A216–A222 (2013). [CrossRef]  

28. R. Bowman, G. Gibson, and M. Padgett, “Particle tracking stereomicroscopy in optical tweezers: Control of trap shape,” Opt. Express 18(11), 11785–11790 (2010). [CrossRef]  

29. J. H. Bao, Y. M. Li, L. R. Lou, and Z. Wang, “Measurement of the axial displacement with information entropy,” J. Opt. A: Pure Appl. Opt. 7(1), 76–81 (2005). [CrossRef]  

30. S. Ueda, M. Michihata, T. Hayashi, and Y. Takaya, “Wide-Range Axial Position Measurement for Jumping Behavior of Optically Trapped Microsphere Near Surface Using Chromatic Confocal Sensor,” Int. J. Optomechatronics 9(2), 131–140 (2015). [CrossRef]  

31. A. Rohrbach, H. Kress, and E. H. K. Stelzer, “Three-dimensional tracking of small spheres in focused laser beams: influence of the detection angular aperture,” Opt. Lett. 28(6), 411–413 (2003). [CrossRef]  

32. M. Speidel, A. Jonáš, and E. Florin, “Three-dimensional tracking of fluorescent nanoparticles with subnanometer precision by use of off-focus imaging,” Opt. Lett. 28(2), 69 (2003). [CrossRef]  

33. S. Knust, A. Spiering, H. Vieker, A. Beyer, A. Gölzhäuser, K. Tönsing, A. Sischka, and D. Anselmetti, “Video-based and interference-free axial force detection and analysis for optical tweezers,” Rev. Sci. Instrum. 83(10), 103704 (2012). [CrossRef]  

34. Z. Zhang and C.-H. Menq, “Three-dimensional particle tracking with subnanometer resolution using off-focus images,” Appl. Opt. 47(13), 2361–2370 (2008). [CrossRef]  

35. K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Conference on Computer Vision and Pattern Recognition (IEEE, 2016), pp. 770–778.

36. A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, and A. Lerer, “Automatic differentiation in PyTorch,” in Workshop on Autodiff (Spotlight) (NIPS, 2017).

37. R. Sarkar and V. V. Rybenkov, “A Guide to Magnetic Tweezers and Their Applications,” Front. Phys. 4 (2016). [CrossRef]  
