
SCPNet-based correction of distorted multi-spots for three-dimensional surface measurement of metal cylindrical shaft parts

Open Access

Abstract

Metal cylindrical shaft parts are critical components in industrial manufacturing that require high standards for roundness error and surface roughness. When using the self-developed multi-beam angle sensor (MBAS) to detect metal cylindrical shaft parts, the distorted multi-spots degrade the measurement accuracy due to the nonlinear distortion caused by the metal material’s reflective properties and surface roughness. In this study, we propose a spot coordinate prediction network (SCPNet), which is a deep-learning neural network designed to predict spot coordinates, in combination with Hough circle detection for localization. The singular value decomposition (SVD) model is employed to eliminate the tilt error to achieve high-precision, three-dimensional (3D) surface reconstruction of metal cylindrical shaft parts. The experimental results demonstrate that SCPNet can effectively correct distorted multi-spots, with an average error of the spot center of 0.0612 pixels for ten points. The proposed method was employed to measure metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm, with resulting standard deviation (STD) values of 0.0022 µm, 0.0026 µm, 0.0028 µm, and 0.0036 µm, respectively.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

The three-dimensional surface profile is an important parameter of machined components and has a great influence on the performance and lifespan of mechanical parts. This profile is considered one of the most common parameters in the complex shapes of workpieces [1]. While shape-measuring machines are widely employed in the measurement of optical components with different curvatures [2–4], increasing demands on workpiece measurement accuracy drive improvements in workpiece surface control methods and in the accuracy of measurement algorithms. Previous studies on cylindrical shaft part measurement have mainly focused on contact and noncontact methods, such as coordinate measuring machines (CMMs), laser scanners, and optical methods. However, these methods are limited in terms of accuracy, efficiency, and ease of use. CMMs require direct contact with parts, which can cause wear and affect measurement accuracy. Laser scanners and optical methods are noncontact techniques that can achieve high accuracy; however, their sensitivity to surface reflectivity and environmental factors limits their applicability. To address these limitations, we developed the MBAS sensor for roundness error detection [5,6].

In a previous investigation, we confirmed the feasibility of utilizing an MBAS for measuring the 3D profiles of cylindrical surfaces [7]. The MBAS can measure the angles reflected from multiple points on the cylindrical workpiece, and the workpiece curvature can be calculated from the difference between the two reflected angles. Nevertheless, the centers of the distorted multi-spots cannot be accurately extracted, which affects the accuracy of the angle difference. The detection precision is therefore compromised when measuring metal cylindrical shaft parts due to multi-spot distortion. When noise and diffraction cause inaccurate recognition of spot coordinates, both traditional algorithms and common target-detection methods suffer from poor robustness. Previous studies have utilized different methods for determining the center location of spots, which are mainly divided into two categories: greyscale-based and edge-based methods. Greyscale-based methods, such as the greyscale centroid method (GCM) [8] and the multiorder moment geometry method (MMGM) [9], are mainly applicable to cases of small spot radii and uniform brightness distributions. Edge-based methods, such as the ellipse fitting method (EFM) [10] and the Hough transform method (HTM) [11], utilize edge shape information and are appropriate for spot images with relatively large radii. The GCM enables rapid calculation and accurate localization for uniform spots, but its localization accuracy is very poor for real images with heavy noise or complex conditions [12]. The MMGM is insensitive to noise: under external noise interference, the calculated spot centers exhibit only slight deviation. The MMGM is fast, but its accuracy is very low [13]. The EFM has higher accuracy and lower time complexity, but its resistance to random noise is poor.
When random noise is present, the accuracy of the center calculation is significantly reduced [14]. The HTM has better resistance to noise, but it is computationally intensive, occupies much memory, and requires discretization of the parameter space, which limits the detection accuracy [15]. The Otsu-Kmeans with Hough circle fitting (OTH) algorithm has been proposed as a means of extracting subpixel-level spot center coordinates [16]. This algorithm exploits the energy distribution information of the spot to achieve high-precision localization; however, it is only suitable for processing images with Gaussian-distributed spots. In addition, the MBAS itself is subject to aging, wear of its glass components, and noise interference, all of which introduce errors into the three-dimensional surface reconstruction algorithm and reduce the accuracy of three-dimensional detection. Several studies have demonstrated the impact of spot coordinate errors on high-precision wavefront detection [17–19]. When measuring the surface of high-precision cylindrical shaft parts, the inability to accurately extract the centers of the distorted spots degrades the reconstruction accuracy and prevents accurate reconstruction of three-dimensional surface models. Thus, it is crucial to extract accurate subpixel-level spot center coordinates from unevenly distributed distorted spots.

To solve the problem of distorted multi-spots when an MBAS measures metal cylindrical shaft parts, we propose a spot coordinate prediction neural network (SCPNet) that uses multi-spot images with noise and out-of-focus effects as its training objects. The network architecture includes convolutional layers, max pooling layers, and upsampling layers. SCPNet exploits the learning abilities of neural networks to convert multi-spot images into a representation that facilitates the extraction of generalized spot coordinate information from the image. The proposed method involves three steps. First, the distorted multi-spots are corrected and predicted by SCPNet. Second, the centroids of the spots are extracted with high precision using the Hough circle detection method. Last, the singular value decomposition (SVD) model is utilized to achieve high-precision, three-dimensional surface reconstruction of metal cylindrical shaft parts. The experiments demonstrate that the proposed method achieves subpixel-level center extraction, which improves the measurement accuracy of an MBAS.

2. Method

2.1 System framework and process

Figure 1 depicts the schematic of the MBAS system, which generates a light source using a laser diode (LD) with a wavelength of 658 nm. The laser light is collimated by the condensing lens (CL), pinhole, and collimating lens, and the main light energy is directed through the aperture, reflector, beam splitter (BS), and cylindrical lens. The light reflected from the workpiece is dispersed by the beam splitter into a multi-spot image that is captured by a CMOS camera. The angular difference between two points on the surface of the workpiece can be calculated from the offset distance between the spot points. The microlens array divides the incident wavefront into several sub-wavefronts, and the slope of the wavefront is calculated from the offset of the spot coordinates. Taking the X-direction as an example:

$${C_X} = \frac{{\int_{ - \infty }^\infty {xI(x)dx} }}{{\int_{ - \infty }^\infty {I(x)dx} }}, $$
where x is the X-coordinate of the spot center, and $I(x)$ is the light intensity corresponding to the pixel point. By comparing the offset distance of the same spot in two consecutive images, Eq. (2) can be obtained
$$\tan \theta = \frac{{\Delta x}}{f} = \frac{{\Delta z}}{d}, $$
where $\Delta x$ is the spot coordinate offset in the X-direction on the CMOS, $\Delta z$ is the local difference of the deformed wavefront, $f$ is the microlens focal length, and $d$ is the sub-aperture size.
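As a minimal numerical sketch of Eqs. (1) and (2): the intensity profile below is hypothetical, while the 2.4 µm pixel pitch and 32.80 mm focal length are the system values quoted later in this section.

```python
import numpy as np

def spot_centroid_x(intensity_row, pixel_size_um=2.4):
    """Discrete form of Eq. (1): intensity-weighted X centroid of a spot."""
    x = np.arange(intensity_row.size) * pixel_size_um
    return np.sum(x * intensity_row) / np.sum(intensity_row)

def wavefront_slope(delta_x_um, focal_length_um=32.8e3):
    """Eq. (2): local wavefront slope tan(theta) from the spot offset."""
    return delta_x_um / focal_length_um

# hypothetical 1-D intensity profile of one spot (arbitrary units)
profile = np.array([0.0, 1.0, 4.0, 9.0, 4.0, 1.0, 0.0])
cx = spot_centroid_x(profile)      # centroid position in micrometres
slope = wavefront_slope(2.4)       # slope for a one-pixel spot offset
```

In practice the centroid is computed in two dimensions over each sub-aperture window; the 1-D form above mirrors Eq. (1) directly.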


Fig. 1. Schematic of the measurement system: a High-precision dual-axis mobile platform (HDMP), an MBAS, and a Rotary Stage (RS).


In this study, a microlens array with a 10 × 10 rectangular arrangement, a focal length of 32.80 mm, and a sub-aperture spacing of 500 µm was used. The CMOS camera unit pixel size was 2.4 µm, and the position of the spot coordinate was detected using the CMOS camera. The detection accuracy is affected by various factors, including the spot equivalent Gaussian width ($\sigma_G^2$), CMOS camera readout noise ($\sigma_N^2$), and random noise such as LD fluctuation noise ($\sigma_{Ram}$). The detection error of the spot center in the X-direction can be expressed as [20]

$$\sigma _{DE}^2 = \frac{\sigma _G^2}{V} + \frac{\sigma _N^2}{V}L_S\left(\frac{L^2 - 1}{12} + X_c^2\right)A_p^2 + \sigma _{Ram}, $$
where ${X_C}$ is the X-direction coordinate of the spot center, V is the total number of detected spots, ${A_p}$ is the number of pixels in the CMOS detection window, and ${L_S}$ is the size of the detection-window area occupied by each spot in the focal plane.

It is particularly important to improve the accuracy of the center coordinates to reduce the detection error. In a previous study, we established a convolutional network, DSCNet, to correct and locate distorted spots for large-curvature aspheric optical elements [21]. As shown in Fig. 1, the high-precision three-dimensional shape measurement system for metal cylindrical shaft parts consists of a high-precision dual-axis mobile platform (HDMP), the MBAS, and a PC. The HDMP is responsible for moving the MBAS and rotating the parts with high precision. The MBAS captures and transfers the image data, while the PC coordinates the HDMP and MBAS to achieve multi-threaded operation and processing of the captured image data.

The overall methodology for reconstructing metal cylindrical shaft parts, which mainly consists of four parts, is illustrated in Fig. 2. First, at the data acquisition stage, various levels of noise interference were added to the simulated distorted spots corresponding to each set of Gaussian spots. Second, in the model training process, the distorted spot dataset (DPdataset) with added noise was input into SCPNet for training; the Gaussian spot images serve as labels for backpropagation, and the weights are optimized by the ADAM optimizer to obtain a reliable and stable model. Third, in the 3D reconstruction process, the center coordinates are obtained through the correction provided by SCPNet followed by Hough circle detection. Last, high-precision phase stitching is achieved using the SVD decomposition model, leading to high-precision, 3D shape reconstruction of metal cylindrical shaft parts.
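The four stages can be sketched as a control-flow skeleton. Every function name and stub below is an illustrative placeholder, not the authors' implementation; the trained SCPNet model and the SVD stitcher are passed in as callables.

```python
import numpy as np

def hough_circle_centers(img):
    # stand-in for Hough circle detection on the SCPNet-corrected image
    return [(img.shape[1] / 2.0, img.shape[0] / 2.0)]

def centers_to_slopes(centers_per_frame, f_um=32.8e3, px_um=2.4):
    # Eq. (2): per-spot slope from the inter-frame spot offset
    c = np.asarray(centers_per_frame)        # (frames, spots, 2)
    return np.diff(c, axis=0) * px_um / f_um

def reconstruct_part(raw_images, scpnet, svd_stitcher):
    # 1-2. correct distorted multi-spots, 3. locate centers, 4. stitch phase
    centers = [hough_circle_centers(scpnet(img)) for img in raw_images]
    slopes = centers_to_slopes(centers)
    return svd_stitcher(slopes)

frames = [np.zeros((64, 64)), np.zeros((64, 66))]    # dummy captures
phase = reconstruct_part(frames, scpnet=lambda x: x,
                         svd_stitcher=lambda s: s.sum())
```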


Fig. 2. Flow chart of the proposed method.


2.2 SCPNet for multi-spots correction and coordinate location

In this paper, we designed SCPNet, a neural network aimed at improving the accuracy of extracting distorted multi-spot coordinates from metal cylindrical shaft parts, thereby enhancing the reconstruction accuracy of these parts. The network architecture is shown in Fig. 3. SCPNet is divided into two main parts: feature extraction (encoder) and image reconstruction (decoder). The feature extraction part, which comprises layers 2 to 12, extracts the feature representation of the distorted spot region from metal cylindrical shafts for further analysis and processing. Because feature maps in shallow networks have high resolution while the network's feature extraction ability is still weak, we designed a composite convolution layer for the feature extraction part. This design helps to shape the gradient, deliver more information to the final layer [22], and extract high-dimensional features while reducing the impact of noise around distorted spots. Taking the first few layers as an example: the first layer is the input layer, where the input data are distorted spot images generated from cylindrical shaft parts. The second and third layers are composite convolution layers with a 3 × 3 convolution kernel and a stride of 1, whose composite function consists of a convolution operation, batch normalization, and a LeakyReLU activation function [23]. A max pooling layer with a 2 × 2 kernel is then used in the fourth layer to obtain high-level semantic features of the image, remove redundant information in the spot image, and compress the image features. The reconstruction stage comprises upsampling and composite convolutional layers, spanning the 13th to the 23rd layers. These operations reconstruct the feature maps of the light spots, generating an output image that matches the size of the input image while preserving its intricate details.
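A rough PyTorch sketch of one such composite convolution layer follows; the channel counts are our assumption, as the paper does not list them.

```python
import torch
import torch.nn as nn

class CompositeConv(nn.Module):
    """One composite convolution layer as described in the text: a 3x3
    convolution with stride 1, batch normalization, then LeakyReLU."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=1, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# layers 2-4 of the encoder: two composite convs, then 2x2 max pooling
encoder_head = nn.Sequential(
    CompositeConv(1, 16),
    CompositeConv(16, 16),
    nn.MaxPool2d(kernel_size=2),
)
y = encoder_head(torch.zeros(1, 1, 64, 64))   # halves the spatial resolution
```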


Fig. 3. The network architecture of SCPNet.


SCPNet's encoder gradually reduces the spatial resolution of the input distorted multi-spot image, extracts low-level semantic information from the distorted spot images, compresses them, and eliminates the distorted points around each spot using composite convolutional layers. It then generates a high-dimensional feature vector, which is forwarded to the decoder. SCPNet's decoder is composed of a series of upsampling and composite convolution layers. The upsampling layers increase the resolution of the spot feature maps to enhance their spatial and detailed information, while the composite convolution layers perform feature extraction and conversion of the Gaussian spot features during forward propagation, strengthening the model's ability to extract the abstract characteristics of distorted spots. The feature vector is transformed by the decoder into a Gaussian multi-spot image of the same size as the input distorted spot image. The process iterates until a high-resolution output image is achieved. In this way, the encoder and decoder can extract features, compress the input multi-spot image, rectify its distortions, and produce a high-quality corrected output. The purpose of obtaining an ideal spot image is to match the distorted multi-spots and accurately locate the subpixel-level center coordinates of the spots. A simple 3 × 3 convolution layer is used in the twenty-fourth layer, and the subpixel-level spot center coordinates are then obtained by Hough circle detection. This design is particularly useful for enhancing the accuracy of extracting distorted spot coordinates from metal cylindrical shafts and ultimately improving the reconstruction accuracy of these parts.
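One decoder stage can be sketched in PyTorch as follows; the channel counts and the nearest-neighbor upsampling mode are our assumptions, not values given in the paper.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder stage as described: upsampling to restore spot resolution,
    followed by a composite conv (3x3 conv + batch norm + LeakyReLU)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(self.up(x))

# final plain 3x3 conv (the paper's 24th layer) mapping features to one channel
head = nn.Conv2d(16, 1, kernel_size=3, padding=1)
x = torch.zeros(1, 32, 16, 16)        # bottleneck feature map (sizes assumed)
y = head(UpBlock(32, 16)(x))          # restored to 32x32, single channel
```

The corrected single-channel output would then be handed to Hough circle detection for subpixel center extraction.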

2.3 Three-dimensional reconstruction based on SVD decomposition

We can express the phase of a wavefront $\phi (x,y)$ through a linear combination of Zernike polynomials. As shown in Fig. 4, using the phase $\phi ({x_1},{y_1})$ of the first region collected by the MBAS sensor as the reference, the second phase equation is

$${\phi ^{\prime}}({x_2},{y_2}) = \phi ({x_2} - {x_1},{y_2} - {y_1}) + a{x_2} + b{y_2} + c(x_2^2 + y_2^2) + d, $$
where a and b are the relative tilt error coefficients about the X-axis and Y-axis, c is the defocus coefficient, and d is the translation (piston) error between the measurements of the two subwavefronts.


Fig. 4. Schematic diagram of rotation principle.


The number of splicing regions in this paper is greater than two (generally set to a factor of 360, with the specific value determined by the radius of the cylindrical shaft part). We use the first regional subwavefront as the reference standard, and the slope of each subwavefront is related to the slope of this reference subwavefront. Using the X-direction as an example:

$${G_M}(x,y) = {G_i}({x_i} - {x_1},{y_i} - {y_1}) + {a_i}. $$

Using the least squares method and making the derivative zero gives

$$\sum\limits_{i = 1}^{M - 1} {[G(x,y) - {G_i}({x_i} - {x_1},{y_i} - {y_1})]} = \sum\limits_{i = 1}^{M - 1} {{a_i}}.$$

The least-squares parameters a, $b$, $c$, and d are obtained by transforming the sampling points of the slope and profile using the SVD decomposition model. Unlike the traditional pairwise splicing mode, in which the relative splicing parameters of two adjacent frames are calculated each time, this paper calculates the absolute splicing parameters of each subwavefront relative to the reference subwavefront. The parameters of all subwavefronts are correlated, so the parameters of subsequent subwavefronts are not affected by large errors in individual subwavefronts. The accumulation of errors and the undesirable effect of error transmission are attenuated, which is theoretically beneficial to the improvement in splicing accuracy [24–26]. The parameters a, $b$, $c$, and d are reintroduced into Eq. (4) to obtain the high-precision, 3D shape reconstruction of metal cylindrical shaft parts.
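The SVD-based least-squares fit of the error coefficients can be sketched as follows; the overlap data here are synthetic, with the coefficients chosen so the fit can be checked against ground truth.

```python
import numpy as np

def stitch_params(x, y, delta_phi):
    """Least-squares fit of the error terms a*x + b*y + c*(x^2 + y^2) + d
    in Eq. (4) to the phase difference over the overlap region, solved via
    the SVD pseudoinverse."""
    A = np.column_stack([x, y, x**2 + y**2, np.ones_like(x)])
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((U.T @ delta_phi) / s)

# synthetic overlap with known tilt/defocus/piston coefficients
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200)
y = rng.uniform(-1.0, 1.0, 200)
true = np.array([0.3, -0.2, 0.05, 1.0])            # a, b, c, d
dphi = true[0] * x + true[1] * y + true[2] * (x**2 + y**2) + true[3]
a, b, c, d = stitch_params(x, y, dphi)
```

Solving all subwavefronts against the single reference amounts to stacking one such system per region, which is what keeps errors from accumulating across successive pairwise splices.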

3. Simulation results and analysis

3.1 Generating training data and developing the model

To simulate real-world MBAS imaging conditions, we created a simulation dataset using Zemax software. The dataset incorporates various imaging effects, such as noise, blurring, and distortion, that can occur during real imaging. Additionally, we carefully ensured that the image quality of the dataset is consistent and adequate for our experimental purposes. We argue that this simulation dataset provides a representative and challenging benchmark for evaluating the performance of our proposed method. The system is decomposed into an image acquisition module and an optical path calibration module. The former is used to acquire the image of the cylindrical shaft parts and mainly includes two parts: the microlens array and the rectangular detector. The latter mainly includes the calibration parameters of the glass components, such as the condensing lens, to ensure an ideal Gaussian spot in the input optical path. By combining the component optical models and the optical path structure parameters, we built a system simulation model with image acquisition capability. The multi-spot maps of different cylindrical shaft parts formed through the microlens array were saved as images. Different distances between the camera and the microlens array were set through optical path simulation, and different levels of noise were added to the simulated spots. The Gaussian spots $T({k_i})$ and distorted spots $R({k_i})$ are input to SCPNet for training. The DPdataset contains 5000 images in total, and the ratio of the training set to the testing set is 9:1. The training process adopts the ADAM optimizer, which avoids becoming stuck in local optima while optimizing the objective function. Training runs for 30,000 epochs using a cross-validation training strategy. The formula of the loss function is

$${L_{oss}} = \frac{1}{n}\sum\limits_i^n {|{R({k_i}) - T({k_i})} |} + R({k_i})\ast \log (R({k_i})) - R({k_i}) + \frac{1}{2}\log (2\pi R({k_i})), $$
where $\frac{1}{n}\sum\limits_i^n {|{R({k_i}) - T({k_i})} |}$ measures the absolute differences between the predicted $T({k_i})$ values and the ground truth $R({k_i})$ values and computes the average of these differences. The method is robust to outliers and can prevent the model from being overly influenced by extreme values. $R({k_i})\ast \log (R({k_i})) - R({k_i}) + \frac{1}{2}\log (2\pi R({k_i}))$ is often used in regression problems where the goal is to minimize the distance between the $T({k_i})$ values and the $R({k_i})$ values. The method can be used to predict the number of objects or pixels in an image and penalize large errors in predicting the counts.
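A possible PyTorch reading of Eq. (7) is sketched below. The grouping of the second, Poisson/Stirling-style term is ambiguous in the text; here we average it over pixels as well, and `eps` guards against log(0) — both choices are our assumptions.

```python
import math
import torch

def scpnet_loss(pred, target, eps=1e-8):
    """Sketch of Eq. (7): mean absolute error plus a Poisson/Stirling-style
    term on the ground-truth intensities. pred = T(k_i), target = R(k_i)."""
    l1 = torch.mean(torch.abs(target - pred))
    r = target.clamp_min(eps)                        # guard log(0)
    stirling = r * torch.log(r) - r + 0.5 * torch.log(2 * math.pi * r)
    return l1 + stirling.mean()

# identical prediction and target: L1 term vanishes, Stirling term remains
loss = scpnet_loss(torch.ones(4), torch.ones(4))
```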

3.2 Assessment of stability and accuracy of the proposed method

To evaluate the localization accuracy of the proposed method, the spot center coordinate was obtained by using GCM, MMGM, EFM, and HTM. The theoretical spot center coordinates were obtained from the detector window in Zemax, and the deviation between the theoretical and localization coordinates was defined as the localization coordinate deviation. Specifically, the localization coordinate deviation $Er$ was calculated using Eq. (8).

$$Er = \sqrt {{{({x_r} - {x_0})}^2} + {{({y_r} - {y_0})}^2}}, $$
where $({x_r},{y_r})$ is the spot center coordinate calculated by each method, and $({x_0},{y_0})$ is the exact center coordinate of the simulated spot.
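Eq. (8) is straightforward to compute; a small helper follows (the example coordinates are made up for illustration).

```python
import numpy as np

def localization_deviation(located, truth):
    """Eq. (8): Euclidean deviation Er between the located spot center
    (x_r, y_r) and the exact simulated center (x_0, y_0), in pixels."""
    (xr, yr), (x0, y0) = located, truth
    return float(np.hypot(xr - x0, yr - y0))

er = localization_deviation((10.3, 20.4), (10.0, 20.0))
```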

The image was magnified up to 50 times to obtain accurate spot center coordinates at a subpixel level of 0.02 pixels. The accuracy of the spot center coordinates is specified to three decimal places. Table 1 presents the specific center coordinates of the ten points calculated by GCM, EFM, HTM, OTH, and the proposed SCPNet method. As shown in Table 1, the spot center error extracted by the OTH algorithm is the largest, with an average error of 0.8838 pixels. GCM and EFM locate centers by fitting the spot contour, with average errors of 0.7018 pixels and 0.7053 pixels, respectively. HTM extracts the center of mass from the ellipse fit, and the average deviation can be reduced to 0.6838 pixels. Compared with the above four methods, the proposed SCPNet method achieved an average localization coordinate deviation $\overline {Er}$ of only 0.0612 pixels, demonstrating its high accuracy in extracting distorted spot center coordinates.


Table 1. Comparison of the accuracy of five centroid positioning methods (Unit: pixel)

Figure 5 demonstrates the anti-distortion stability of the proposed method. The detector was set at 20 µm, 50 µm, and 100 µm from the focal plane in simulation; the spots become more distorted as the detector moves farther from the focal plane. The anti-distortion performance of the five spot center localization algorithms was evaluated using the root mean square (RMS) formula.

$${E_{RMS}} = \sqrt {\frac{1}{N}\sum\limits_{i = 1}^N {(X_i^2 + Y_i^2)} }, $$
where N is the number of spot centers in the same image, and ${X_i}$ and ${Y_i}$ are the deviation values along the X-axis and Y-axis.


Fig. 5. Error repeatability experiments for five methods. (a) - (e) are error results in the out-of-focus 50 µm by the GCM, EFM, HTM, OTH, and SCPNet, respectively. (f) is the standard deviation comparison chart.


Figures 5(a)-(e) show the RMS centroid deviations calculated by the five algorithms at 50 µm out of focus. Figure 5(f) illustrates the average RMS centroid deviations calculated by the five algorithms at the three defocus distances. The other algorithms fail to accurately locate the centers of distorted spots; in contrast, SCPNet achieves an RMS of only 0.630 pixels. These results indicate that SCPNet is highly resistant to interference and extracts the spot center with high accuracy, making it a promising method for practical applications.

To evaluate the stability of the proposed algorithm, we added random noise to the simulated images and compared the center extraction accuracy for spots with different degrees of distortion at different locations in the same image. The accuracy of the proposed method under different noise intensities was evaluated using the standard deviation (STD).

$${E_{STD}} = \sqrt {{{(n - 1)}^{ - 1}}{{\sum\limits_{i = 1}^n {({x_i} - \overline x )} }^2}}, $$
where n denotes the number of samples, ${x_i}$ denotes the spot center coordinate measurement in each group of experiments, and $\overline x$ denotes the average of the spot center coordinates in each group of experiments.
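Eqs. (9) and (10) can be sketched directly with NumPy; the deviation values below are illustrative, not measured data.

```python
import numpy as np

def rms_deviation(dx, dy):
    """Eq. (9): RMS of the per-spot X/Y center deviations within one image."""
    dx = np.asarray(dx, dtype=float)
    dy = np.asarray(dy, dtype=float)
    return float(np.sqrt(np.mean(dx**2 + dy**2)))

def std_deviation(x):
    """Eq. (10): sample standard deviation (n - 1 denominator) of repeated
    spot-center measurements."""
    return float(np.std(np.asarray(x, dtype=float), ddof=1))

e_rms = rms_deviation([0.3, -0.3], [0.4, -0.4])
e_std = std_deviation([1.0, 2.0, 3.0])
```

Note the `ddof=1` argument: NumPy's default `np.std` divides by n, whereas Eq. (10) uses the (n − 1) sample form.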

Figure 6 illustrates the error distribution of each method under the interference of random noise, where the error obtained by SCPNet is smaller than that of the other four methods. Table 2 gives the maximum, minimum, and average errors between the measured values and the reference values for the five methods. The error of SCPNet is more stable and smaller than that of the other four methods. The maximum error, minimum error, and average error are 0.7515 pixels, 0.6427 pixels, and 0.6936 pixels, respectively. Overall, these results demonstrate that SCPNet has high stability and can accurately extract spot center coordinates even in the presence of random noise interference.


Fig. 6. The error results of five different methods under random noise.



Table 2. Errors of the five methods under random noise (Unit: pixel)


Table 3. Comparison of the error results with three different methods (Unit: µm)

We compared SCPNet with traditional methods, namely the grayscale center of gravity (GCG) and the triple spline fit interpolation (TSFI), using the same dataset, experimental environment, and evaluation indicators. As shown in Fig. 7, we analyzed the error results of these three methods for both single-spot and multi-spots and found that the trend of the error curves is similar. The accuracy of the reconstruction algorithm mainly hinges on the precision of the spot center extraction. Moreover, the comparative analysis of the three methods revealed that SCPNet exhibited higher location accuracy and lower error evaluation metrics. Mean error (Mean) and mean-square error (MSE) were used to evaluate the recognition accuracy of the algorithm, while standard deviation (STD) and maximum deviation (MD) were utilized to assess its robustness and generalization ability. Table 3 shows that the three algorithms performed better in multi-spot reconstruction errors than in single-spot recognition. This indicates that multi-spot recognition can effectively reduce the random error caused by single-spot recognition. Compared with the other two algorithms, SCPNet achieved an 88.36% decrease in mean error and a 98.66% decrease in mean-square error for multi-spot reconstruction results, demonstrating its superior recognition accuracy. Additionally, SCPNet exhibited a remarkable 77.98% reduction in standard deviation and 88.50% reduction in maximum deviation for multi-spot reconstruction results, indicating its excellent robustness and generalization ability.


Fig. 7. The error results of three different methods under the same experiment environment.


4. Experimental results and analysis

4.1 Configuration of the experiment and implementation details

In this paper, we built the SCPNet network based on the PyTorch framework. Figure 8 illustrates the three-dimensional shape reconstruction system for metal cylindrical shaft parts, which includes the HDMP, MBAS, and RS. A stage controller moves the RS, and the PC receives the output signals from the MBAS at each measuring position. The HDMP consists of an XY moving table, rotating platform, start/stop button, and control lever, among other components, and is responsible for the high-precision movement and calibration of the MBAS and the cylindrical shaft parts. The light source of the MBAS is generated by an LD with a wavelength of 658 nm. The light emitted by the LD passes through the condensing lens, pinhole (Edmund #56-286, 400 µm), and collimating lens to form parallel light. The main light energy then passes through the aperture (Edmund #30-263, 4 mm), reflector, BS, and cylindrical lens (GCL-110115, 50 mm). The light reflected from the workpiece reaches the CMOS camera (MV-CE050-30UM, 2592 × 1944 pixels) through the microlens array (Edmund #64-482, 32.8 mm). Finally, the SCPNet network is run on the PC to process the spots and produce the high-precision 3D reconstruction. The system achieves accurate and efficient 3D reconstruction of cylindrical shaft parts, demonstrating its potential for industrial applications.


Fig. 8. Experimental setup for measuring metal cylindrical shaft parts.


The PC hardware configuration for the experiments includes an Intel Core i7-10750H six-core processor at 2.60 GHz and an NVIDIA GeForce GTX 1660 Ti graphics card (single GPU, 8 GB memory). The software environment is the Windows 10 Professional 64-bit operating system. The detection model was built with the PyTorch framework, using Python 3.9.7 as the programming language, CUDA 11.3 as the GPU computing platform, and cuDNN as the deep learning GPU acceleration library.

4.2 Result and analysis

In this study, we utilized the multi-spot image model based on Zemax simulation to train the SCPNet algorithm to predict the center coordinates of multi-spot images acquired by the MBAS. The system was tested on metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm using MBAS scanning and image capture. The resolution of the original multi-spot images is 2592 × 1944 pixels. After SCPNet distortion correction and segmentation, multiple single-spot images are extracted. Subpixel-level spot center extraction is then performed by magnifying each single-spot image 50 times.
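A rough sketch of the magnify-then-locate step follows. Here the upsampling is plain nearest-neighbor replication for simplicity (the paper's 50× magnification presumably uses a smoother interpolation), and the test spot is synthetic.

```python
import numpy as np

def upsampled_centroid(img, k=50):
    """Magnify a single-spot image by factor k (nearest-neighbor replication
    here), take the intensity centroid on the fine grid, and map the result
    back to original pixel coordinates."""
    fine = np.kron(img, np.ones((k, k)))
    ys, xs = np.indices(fine.shape)
    w = fine.sum()
    cy_fine = (ys * fine).sum() / w
    cx_fine = (xs * fine).sum() / w
    # each coarse pixel i spans fine rows/cols i*k .. i*k + k - 1
    return (cy_fine - (k - 1) / 2) / k, (cx_fine - (k - 1) / 2) / k

spot = np.zeros((9, 9))
spot[3:6, 4:7] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]   # synthetic spot at (4, 5)
cy, cx = upsampled_centroid(spot, k=50)
```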

As shown in Fig. 9, the proposed method can realize the correction of spots with different degrees of aberration and obtain the center coordinates with high accuracy. Figures 10, 11, 12, and 13 show the results of spot center correction for metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm, respectively. The brightness and distortion properties of the spot images are markedly distinct across parts of varying radii. Following the correction based on SCPNet, these distorted spots can be effectively corrected, and the centroid coordinates can be accurately extracted. These findings suggest that the SCPNet algorithm is a promising approach for achieving high-precision, 3D reconstruction of metal cylindrical shaft parts using MBAS imaging.


Fig. 9. The result of spot center extraction by SCPNet.



Fig. 10. Results of spot center correction for a metal cylindrical shaft part with a 10 mm radius. (a) Multi-spot image. (b) Spot sub-image. (c) Corrected spot.



Fig. 11. Results of spot center correction for a metal cylindrical shaft part with a 20 mm radius. (a) Multi-spot image. (b) Spot sub-image. (c) Corrected spot.



Fig. 12. Results of spot center correction for a metal cylindrical shaft part with a 35 mm radius. (a) Multi-spot image. (b) Spot sub-image. (c) Corrected spot.



Fig. 13. Results of spot center correction for a metal cylindrical shaft part with a 50 mm radius. (a) Multi-spot image. (b) Spot sub-image. (c) Corrected spot.


To analyze the reconstruction results of SCPNet, the 3D surface profile inspection of metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm is shown in Fig. 14. Table 4 presents the measured PV, STD, and RMSE values for each part. The cylindrical shaft part with a radius of 10 mm has a PV value of 0.0435 µm, an STD value of 0.0022 µm, and an RMSE value of 0.0023 µm. Similarly, the cylindrical shaft part with a radius of 20 mm has a PV value of 0.0471 µm, an STD value of 0.0026 µm, and an RMSE value of 0.0049 µm. The cylindrical shaft part with a radius of 35 mm has a PV value of 0.0586 µm, an STD value of 0.0028 µm, and an RMSE value of 0.0055 µm. The cylindrical shaft part with a radius of 50 mm has a PV value of 0.7063 µm, an STD value of 0.0036 µm, and an RMSE value of 0.0055 µm. The experimental results demonstrate that the proposed method can accurately measure the surface shape of metal cylindrical shaft parts with high precision and effectively evaluate roundness errors. These findings suggest that the proposed method has great potential for application in precision manufacturing and other related fields.

Fig. 14. The surface shape of cylindrical shaft parts (measuring roundness errors of 20 µm, 100 µm, 150 µm, and 250 µm). (a)–(d) are the measured surface results for cylindrical shaft parts with curvature radii of 10 mm, 20 mm, 35 mm, and 50 mm. (e)–(h) are the measurement deviations of the cylindrical shaft parts corresponding to (a)–(d). (i)–(l) are the results of phase splicing deviation for metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm.

Table 4. Comparison of cylinders with radii of 10 mm, 20 mm, 35 mm, and 50 mm (Unit: µm)

Table 5. Comparison of deviation data for four different radius cylinders of 10 mm, 20 mm, 35 mm, and 50 mm at 3 mm from the focal plane (Unit: µm)

To assess the ability of SCPNet to eliminate the adverse effects of out-of-focus surfaces, we also examined the four cylindrical shaft parts in a plane 3 mm out of focus. Figure 15 illustrates the phase and difference plots in this plane, and Table 5 shows that SCPNet is highly effective in mitigating the defocus-induced degradation. The STD values for metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm were 0.0038 µm, 0.0041 µm, 0.0053 µm, and 0.0056 µm, and the corresponding RMSE values were 0.0041 µm, 0.0481 µm, 0.0835 µm, and 0.1093 µm, respectively. The 3D reconstruction results for the 3 mm defocused plane closely match those for the focused plane, which suggests that the reconstruction method is reliable and robust.

Fig. 15. Surface shape of cylindrical shaft parts (measuring roundness errors of 20 µm, 100 µm, 150 µm, and 250 µm). (a)–(d) are the measured surface results at 3 mm out of focus for cylindrical shaft parts with curvature radii of 10 mm, 20 mm, 35 mm, and 50 mm. (e)–(h) are the measurement deviations at 3 mm out of focus for the cylindrical shaft parts corresponding to (a)–(d). (i)–(l) are the results of phase splicing deviation for metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm.

5. Conclusion

We have demonstrated a new method to measure the 3D surface profile of metal cylindrical shaft parts using the MBAS. We propose SCPNet to correct the nonlinearly distorted multi-spots generated by metal cylindrical shaft parts. The average spot-center extraction errors are 0.7018, 0.7053, 0.6838, 0.8838, and 0.0612 pixels for GCM, EFM, HTM, OTH, and the proposed method, respectively; the error of the proposed method is roughly an order of magnitude smaller than that of the alternatives. Under the interference of random noise, the method keeps the extraction error of the spot coordinates within 0.630 pixels. The singular value decomposition (SVD) model is employed to eliminate the tilt error and achieve high-precision 3D surface reconstruction. The experimental results demonstrate that the standard deviation of the proposed method is only 0.0022 µm, 0.0026 µm, 0.0028 µm, and 0.0036 µm when measuring the 3D surfaces of metal cylindrical shaft parts with radii of 10 mm, 20 mm, 35 mm, and 50 mm, respectively, with the MBAS. The proposed method has the potential to significantly improve the accuracy and efficiency of cylindrical part measurement and may have important applications in manufacturing and quality control.
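The SVD-based tilt removal mentioned above can be sketched as a least-squares plane fit whose normal is the right-singular vector of the centered point cloud with the smallest singular value. The following Python/NumPy snippet is a minimal illustration under that interpretation; the function name and the sample grid are hypothetical, not the paper's implementation:

```python
import numpy as np

def remove_tilt_svd(z):
    """Subtract the best-fit plane from a height map using SVD.

    Stacks (x, y, z) samples, centers them on their centroid, and takes
    the right-singular vector with the smallest singular value as the
    plane normal; the fitted plane is then subtracted from the heights.
    """
    z = np.asarray(z, dtype=float)
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    pts = np.column_stack([x.ravel(), y.ravel(), z.ravel()])
    centroid = pts.mean(axis=0)
    # Last row of Vt corresponds to the smallest singular value
    _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
    a, b, c = vt[-1]
    # Plane through centroid: a(x-x0) + b(y-y0) + c(z-z0) = 0
    z_plane = centroid[2] - (a * (x - centroid[0]) + b * (y - centroid[1])) / c
    return z - z_plane

# A pure tilt plane should be flattened to (numerically) zero
tilted = 0.01 * np.arange(4)[None, :] + 0.02 * np.arange(4)[:, None]
flat = remove_tilt_svd(tilted)
```

For a height map the normal's z-component `c` is safely nonzero; a fully general plane fit would need to guard against near-vertical planes.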

Funding

Natural Science Foundation of Guangdong Province (2021A1515011817, 2022A1515011636, 2022A1515010005); Science and Technology Program of Guangzhou (202201010258); National Natural Science Foundation of China (61727810, 62271157); National Key Research and Development Program of China (2021YFB3600200); Special Project for Research and Development in Key Areas of Guangdong Province (2022B0101090002).

Acknowledgments

The authors would like to thank Mengliang Wu, General Manager of Rational Precision Instrument Co., Ltd., for his full assistance.

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but can be obtained from the authors upon reasonable request.

References

1. R. Kudo, T. Kitayama, Y. Tokuta, H. Shiraji, M. Nakano, K. Yamamura, and K. Endo, “High-accuracy three-dimensional aspheric mirror measurement with nanoprofiler based on normal vector tracing method,” Opt. Laser Eng. 98, 159–162 (2017). [CrossRef]  

2. J. Fan, Y. Feng, J. Mo, S. Wang, and Q. Liang, “3D reconstruction of non-textured surface by combining shape from shading and stereovision,” Measurement 185, 110029 (2021). [CrossRef]  

3. M. Idir, K. Kaznatcheev, G. Dovillaire, J. Legrand, and R. Rungsawang, “A 2D high-accuracy slope measuring system based on a stitching Shack–Hartmann optical head,” Opt. Express 22(3), 2770–2781 (2014). [CrossRef]  

4. J. Yoon, K. Lee, J. Park, and Y. Park, “Measuring optical transmission matrices by wavefront shaping,” Opt. Express 23(8), 10158–10167 (2015). [CrossRef]  

5. M. Chen, S. Takahashi, and K. Takamasu, “Calibration for the sensitivity of multi-beam angle sensor using cylindrical plano-convex lens,” Precision Eng. 46, 254–262 (2016). [CrossRef]  

6. M. Chen, S. Takahashi, and K. Takamasu, “Development of high-precision micro-roundness measuring machine using a high-sensitivity and compact multi-beam angle sensor,” Precision Eng. 42, 276–282 (2015). [CrossRef]  

7. M. Chen, S. Xie, H. Wu, S. Takahashi, and K. Takamasu, “Three-dimensional surface profile measurement of a cylindrical surface using a multi-beam angle sensor,” Precision Eng. 62, 62–70 (2020). [CrossRef]  

8. M. Beier, A. Gebhardt, R. Eberhardt, and A. Tünnermann, “Lens centering of aspheres for high-quality optics,” Adv. Opt. Technol. 1(6), 441–446 (2012). [CrossRef]  

9. H. Dong and L. Wang, “Non-iterative spot center location algorithm based on Gaussian for fish-eye imaging laser warning system,” Optik 123(23), 2148–2153 (2012). [CrossRef]  

10. J. Zhu, Y. Wu, H. Yue, and X. Shao, “Single frame phase estimation based on Hilbert transform and Lissajous ellipse fitting method in fringe projection technology,” Opt. Commun. 488, 126817 (2021). [CrossRef]  

11. C. Zhao, C. Fan, and Z. Zhao, “The Center of the Circle Fitting Optimization Algorithm Based on the Hough Transform for Crane,” Appl. Sci. 12(20), 10341 (2022). [CrossRef]  

12. D. N. H. Thanh, P. V. Surya, and N. Van Son, “An adaptive image inpainting method based on the modified Mumford-Shah model and multiscale parameter estimation,” Appl. Sci. 43(2), 251–257 (2019). [CrossRef]  

13. P. Modregger, M. Endrizzi, and A. Olivo, “Direct access to the moments of scattering distributions in x-ray imaging,” Appl. Phys. Lett. 113(25), 254101 (2018). [CrossRef]  

14. Y. Tian, W. Song, L. Chen, Y. Sung, J. Kwak, and S. Sun, “Fast planar detection system using a GPU-based 3D Hough transform for LiDAR point clouds,” Appl. Sci. 10(5), 1744 (2020). [CrossRef]  

15. J. Sandoval, K. Uenishi, M. Iwakiri, and K. Tanaka, “Robust sphere detection in unorganized 3D point clouds using an efficient Hough voting scheme based on sliding voxels,” IIEEJ Trans. Image Electron. Vis. Comput. 8, 121–135 (2020). [CrossRef]  

16. M. Chen, Z. Zhang, H. Wu, S. Xie, and H. Wang, “Otsu-Kmeans gravity-based multi-spots center extraction method for microlens array imaging system,” Opt. Laser Eng. 152, 106968 (2022). [CrossRef]  

17. X. Zhu, S. Hu, and L. Zhao, “Wafer focusing measurement of optical lithography system based on Hartmann–Shack wavefront testing,” Opt. Laser Eng. 66, 128–131 (2015). [CrossRef]  

18. V. Sorathiya, O. S. Faragallah, H. S. El-Sayed, M. M. Eid, and A. N. Z. Rashed, “Nanofocusing of optical wave using staircase tapered plasmonic waveguide,” Appl. Phys. B 128(6), 104 (2022). [CrossRef]  

19. K. K. Mehta and R. J. Ram, “Precise and diffraction-limited waveguide-to-free-space focusing gratings,” Sci. Rep. 7(1), 2019 (2017). [CrossRef]  

20. M. Pang, X. Gao, and J. Rong, “Technical requirements and uncertainty of far field laser spot centroid measurement using array detection method,” Optik 126(24), 5881–5885 (2015). [CrossRef]  

21. J. Chen, M. Chen, H. Wu, S. Xie, and T. Kiyoshi, “Distortion spot correction and center location base on deep neural network and MBAS in measuring large curvature aspheric optical element,” Opt. Express 30(17), 30466–30479 (2022). [CrossRef]  

22. Y. Chen, B. Song, J. Wu, W. Lin, and W. Huang, “Deep learning for efficiently imaging through the localized speckle field of a multimode fiber,” Appl. Opt. 62(2), 266–274 (2023). [CrossRef]  

23. Z. Zhang, Y. Zheng, T. Xu, A. Upadhya, Y. J. Lim, A. Mathews, L. Xie, and W. M. Lee, “Holo-UNet: hologram-to-hologram neural network restoration for high fidelity low light quantitative phase imaging of live cells,” Biomed. Opt. Express 11(10), 5478–5487 (2020). [CrossRef]  

24. X. Ma and J. Wang, “The research of wavefront sensor based on focal plane and pupil plane,” Optik 127(5), 2688–2693 (2016). [CrossRef]  

25. L. Chun, L. Wenhe, S. Jianxin, and Z. Yu, “An adaptive detecting centroid method for Hartmann-Shack wavefront sensor,” Chin. J. Lasers 36, 430–434 (2009). [CrossRef]  

26. D. Zewei, M. Xiuhua, and S. Xiangchun, “Wavefront sensing technology of high repetition rate heat capacity master oscillator power amplifier system,” Chin. J. Lasers 35, 1055–1058 (2008). [CrossRef]  

Figures (15)

Fig. 1. Schematic of the measurement system: a high-precision dual-axis mobile platform (HDMP), an MBAS, and a rotary stage (RS).
Fig. 2. Flow chart of the proposed method.
Fig. 3. The network architecture of SCPNet.
Fig. 4. Schematic diagram of the rotation principle.
Fig. 5. Error repeatability experiments for five methods. (a)–(e) are the error results at 50 µm out of focus for GCM, EFM, HTM, OTH, and SCPNet, respectively. (f) is the standard deviation comparison chart.
Fig. 6. The error results of five different methods under random noise.
Fig. 7. The error results of three different methods under the same experimental environment.
Fig. 8. Experimental setup for measuring metal cylindrical shaft parts.
Fig. 9. The result of spot center extraction by SCPNet.

Tables (5)

Table 1. Comparison of the accuracy of five centroid positioning methods (Unit: pixel)
Table 2. Errors of the five methods under random noise (Unit: pixel)
Table 3. Comparison of the error results with three different methods (Unit: µm)

Equations (10)

$$C_x = \frac{\int x\,I(x)\,dx}{\int I(x)\,dx},$$
$$\tan\theta = \frac{\Delta x}{f} = \frac{\Delta z}{d},$$
$$\sigma_{DE}^{2} = \frac{\sigma_G^{2}}{V} + \frac{\sigma_N^{2}}{V}\,\frac{LS\left(\frac{L^{2}-1}{12} + X_c^{2}\right)}{A_p^{2}} + \sigma_{Ram},$$
$$\phi(x_2, y_2) = \phi(x_2 - x_1,\; y_2 - y_1) + a x_2 + b y_2 + c\left(x_2^{2} + y_2^{2}\right) + d,$$
$$G_M(x, y) = G_i(x_i - x_1,\; y_i - y_1) + a_i.$$
$$\sum_{i=1}^{M-1}\left[G(x, y) - G_i(x_i - x_1,\; y_i - y_1)\right] = \sum_{i=1}^{M-1} a_i.$$
$$\mathrm{Loss} = \frac{1}{n}\sum_{i}^{n}\left[\left|R(k_i) - T(k_i)\right| + R(k_i)\log R(k_i) - R(k_i) + \frac{1}{2}\log\left(2\pi R(k_i)\right)\right],$$
$$E_r = \sqrt{(x_r - x_0)^{2} + (y_r - y_0)^{2}},$$
$$E_{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(X_i^{2} + Y_i^{2}\right)},$$
$$E_{STD} = \sqrt{(n-1)^{-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}},$$
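The first two relations above, the spot-centroid integral and the slope-to-height conversion $\tan\theta = \Delta x / f = \Delta z / d$, can be sketched in discrete form. The Python snippet below is a minimal illustration; the function names and the values chosen for the focal length `f` and spacing `d` are hypothetical:

```python
import numpy as np

def spot_centroid(intensity):
    """Discrete form of C_x = (sum of x*I(x)) / (sum of I(x))."""
    intensity = np.asarray(intensity, dtype=float)
    x = np.arange(intensity.size)
    return (x * intensity).sum() / intensity.sum()

def height_step(delta_x, f, d):
    """Solve tan(theta) = delta_x / f = delta_z / d for delta_z."""
    return d * delta_x / f

# A symmetric spot profile: the centroid falls on the middle pixel
cx = spot_centroid([0, 1, 4, 1, 0])             # -> 2.0
# Hypothetical values: 0.5 px shift, f = 100, d = 10 (same length units)
dz = height_step(delta_x=0.5, f=100.0, d=10.0)  # -> 0.05
```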