Uniaxial microscale 3D surface shape measurement based on π phase-shifting method

Open Access

Abstract

In this paper, a π phase-shifting method is proposed for uniaxial microscopic 3D profilometry. Specifically, the method uses two fringes with a phase difference of π, whose subtraction effectively eliminates the background information and provides a more accurate modulation distribution. Compared with the ten-step phase-shifting method (PSM) and the Fourier transform method (FTM), the proposed method (PM) achieves almost the same measurement accuracy as the PSM with only one-fifth of its data acquisition, and a higher measurement accuracy than the FTM.

© 2021 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Over the past several decades, much attention has been paid to micro-scale devices, which provide solutions in demanding applications such as precision engineering, security, materials science and medicine.

Advanced micro-scale surface metrology can be divided into two types: contact-type and noncontact-type measurement systems. The former includes the stylus profiling instrument (SPI) [1,2] and scanning probe microscopy (SPM) [3–7]. The SPI has a large measurement range and provides high measurement accuracy. However, as a point-scanning measurement, it is time-consuming and can easily damage the measured surface. SPM can achieve atomic-level resolution, but its measurement range is narrow and its operation is complicated. With the rapid progress of manufacturing technology, numerous non-contact optical approaches have been developed to achieve micro-level 3D metrology. The confocal laser scanning microscope (CLSM) [8–11] uses a laser to scan the specimen in the X, Y and Z directions to achieve profile or roughness measurement. Interferometry is another state-of-the-art technique, which includes white light interferometry (WLI) [12–15], multi-wavelength interferometry (MWI) [16,17] and single-wavelength interferometry (SWI) [18]. These approaches have various measurement ranges to satisfy different practical application requirements. Besides the above techniques, there are other advanced technologies such as the laser trigonometry method (LTM) [18], infinite focus microscopy (IFM) [19] and the structured light method.

As a structured light method, structured illumination microscopy (SIM) [20–23], which applies Moiré patterns, is capable of providing resolution beyond the light diffraction limit. As one of the popular techniques for super-resolution imaging, it has been widely used for living cells [24,25]. Uniaxial microscopic 3D profilometry is a vertical measurement approach that utilizes the modulation [first proposed by Prof. Su [26] for the measurement of macro-objects] to record the depth information of the specimen. For this technique, the modulation can be calculated by applying the phase-shifting method (PSM) [26–28] or the Fourier transform method (FTM) [29,30]. As a multi-frame fringe processing technique, the PSM requires at least three fringes to complete the modulation retrieval for each position. Although the FTM is a single-frame fringe processing technique, it is sensitive to spectrum overlap: when the fundamental frequency component and the zero-frequency component are aliased, the measurement accuracy suffers because of the inaccurate extraction of the fundamental frequency information.

This paper proposes a microscopic 3D shape vertical measurement technique that introduces the $\pi$ phase-shifting method for modulation extraction. For each scanning position, the projector projects two fringe patterns with a phase difference of $\pi$, whose subtraction eliminates the influence of the background information and avoids spectrum aliasing. A Fourier transform is then applied, and an appropriate filter extracts the useful fundamental frequency information to obtain the modulation distribution. Compared with our previous approach (ten-step PSM) [28] and the FTM, the proposed method (PM) can almost reach the accuracy of the PSM with only one-fifth of its data acquisition, and obtains higher measurement accuracy than the FTM. The experimental results show that the PM achieves approximately 5.05 $\mu$m root-mean-square (RMS) error over a depth range of 1100 $\mu$m.

Section 2 introduces the principle of uniaxial micro-system with $\pi$ phase-shifting method. Section 3 explains the system calibration. Section 4 gives the experimental comparison results. Finally, Section 5 concludes this paper.

2. Principle

According to the Gaussian lens law, each point on the object plane forms a clear image on the image plane. If the sensor plane does not coincide with the image plane, the point on the object plane will form a circular patch on the sensor plane, and the diameter of the circular patch will increase as the distance between the sensor plane and the image plane increases. As shown in Fig. 1, when the projector projects a fringe onto the plane, the clearest image can only be captured on the focal plane (P$_3$), and the images at other positions (P$_1$, P$_2$, P$_4$, P$_5$) will be blurred to different degrees. The modulation value can be used to describe the level of blurriness. If one extracts the modulation value for the same pixel in each image, a curve similar to that in Fig. 1 will be generated, which represents the relationship between the modulation value and the depth information in the optical axis direction.

Fig. 1. Principle of uniaxial micro-system with $\pi$ phase-shifting method.

2.1 $\pi$ phase-shifting method

To retrieve the modulation and eliminate the effect of the background light intensity in microscopic 3D profilometry, the $\pi$ phase-shifting method is applied. When two fringes with a phase difference of $\pi$ are projected onto the focal plane, the images captured by the camera can be expressed as [31,32]

$$I(x,y) = {{R(x,y)} \over {{M^2}}}\left\{{B(x,y) + C(x,y)\cos \left[ {2\pi {f_0}x + {\Phi_0}(x,y)} \right]} \right\}$$
$${I_\pi}(x,y) = {{R(x,y)} \over {{M^2}}}\left\{{B(x,y) + C(x,y)\cos \left[ {2\pi {f_0}x + {\Phi_0}(x,y) + \pi } \right]} \right\}$$

where R(x, y) is the reflectivity of the specimen, $M$ is the magnification of the measurement system, B(x, y) and C(x, y) are the background light intensity and fringe contrast, respectively, f$_0$ represents the grating frequency, and $\Phi_{0}(x,y)$ stands for the initial phase.
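To make the fringe model concrete, the following minimal sketch (Python with NumPy, not part of the original work) generates the two $\pi$-shifted fringe patterns of Eqs. (1) and (2); the image size, reflectivity, magnification, background, contrast, frequency and initial phase are all illustrative values.

```python
import numpy as np

H, W = 256, 256                          # image size (illustrative)
x = np.tile(np.arange(W), (H, 1))        # pixel coordinate along the fringe direction
R, M, B, C = 1.0, 1.0, 0.5, 0.4          # reflectivity, magnification, background, contrast
f0, Phi0 = 1.0 / 16, 0.0                 # fringe frequency (cycles/pixel) and initial phase

# Eqs. (1) and (2): two fringes with a phase difference of pi
I    = (R / M**2) * (B + C * np.cos(2 * np.pi * f0 * x + Phi0))
I_pi = (R / M**2) * (B + C * np.cos(2 * np.pi * f0 * x + Phi0 + np.pi))
```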

According to the Gaussian lens law, the images in front of or behind the focal plane shown in Fig. 2 will become blurred, which can be expressed as the convolution of the in-focus image and a blur function.

$$I'(x,y;\delta)= H(x,y;\delta) \otimes I(x,y)$$
$${I}'_{\pi}(x,y;\delta)= H(x,y;\delta) \otimes {I_\pi}(x,y)$$
where $\delta$ denotes the distance from the image plane to the defocused plane, and the symbol $\otimes$ represents the convolution operation. H(x, y; $\delta$) is a Gaussian function, which can be expressed as
$$H(x,y;\delta ) = {1 \over {2\pi \sigma _H^2}}{e^{ - {{{x^2} + {y^2}} \over {2\sigma _H^2}}}}$$
$\sigma _{H}=\alpha {r}$ is the spread parameter, where $r$ is the radius of the blur circle and $\alpha$ depends on the internal parameters of the measurement system, usually taking an approximate value of $\sqrt {2}$. Substituting Eq. (1) and Eq. (5) into Eq. (3), and Eq. (2) and Eq. (5) into Eq. (4), the two out-of-focus images with a phase difference of $\pi$ can be written as
$${I'}(x,y;\delta) = {{R(x,y)} \over {{M^2}}} {e^{ - {1 \over 2}{f_0}^2\sigma _H^2}}\left\{ {B(x,y)} +C(x,y) \cos \left[ {2\pi {f_0}x + {\Phi_0}(x,y)} \right] \right\}$$
$${I}'_{\pi}(x,y;\delta)= {{R(x,y)} \over {{M^2}}}{e^{ - {1 \over 2}{f_0}^2\sigma _H^2}} \left\{{B(x,y)} + C(x,y) \cos \left[ {2\pi {f_0}x + {\Phi_0}(x,y) + \pi } \right] \right\}$$

By subtracting Eq. (7) from Eq. (6), one can obtain

$${I_z}(x,y) = {I'(x,y) - I_\pi '(x,y) \over 2} ={{R(x,y)} C(x,y)\over {{M^2}}}{e^{ - {1 \over 2}{f_0}^2\sigma _H^2}}\cos \left[ {2\pi {f_0}x + {\Phi_0}(x,y)} \right]$$
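As an illustration only (continuing the synthetic fringes above, with SciPy assumed available), the following sketch models defocus with a Gaussian blur per Eqs. (3)–(5) and checks that the half-difference of Eq. (8) removes the background term; the value of $\sigma_H$ is arbitrary.

```python
from scipy.ndimage import gaussian_filter

sigma_H = 3.0                                 # spread parameter for some defocus distance delta
I_blur    = gaussian_filter(I, sigma_H)       # Eq. (3): blurred fringe
I_pi_blur = gaussian_filter(I_pi, sigma_H)    # Eq. (4): blurred pi-shifted fringe

I_z = (I_blur - I_pi_blur) / 2                # Eq. (8): attenuated cosine, background removed
print(abs(I_z.mean()))                        # close to zero: no remaining DC (background) term
```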

Fig. 2. Distribution of imaging on the optical axis.

Equation (8) shows that the subtraction of two fringes with a phase difference of $\pi$ eliminates the background light intensity. Employing the relationship $\cos (x) = {1 \over 2}{e^{ix}} + {1 \over 2}{e^{-ix}}$, Eq. (8) can be rewritten as

$${I_z}(x,y) = {{R(x,y)}C(x,y) \over {2{M^2}}}{e^{ - {1 \over 2}{f_0}^2\sigma _H^2}}\left\{ e^{i \left[ {2\pi {f_0}x + {\Phi_0}(x,y)} \right]} + e^{ {-}i \left[ {2\pi {f_0}x + {\Phi_0}(x,y)} \right]} \right\}$$

After the Fourier transform operation, the zero-frequency component no longer exists, which avoids spectrum aliasing and the influence of the zero-frequency component on the fundamental frequency component. Therefore, the extraction of the fundamental frequency component is more accurate than with only one fringe. The corresponding modulation distribution on the defocused plane can be expressed as

$${M}(x,y;\delta ) = {{R(x,y) C(x,y)} \over {{M^2}}}{e^{ - {1 \over 2}{f_0}^2\sigma _H^2}}$$

where M (x, y;$\delta$ ) represents the modulation of the captured image.
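A minimal sketch of one possible implementation of the modulation retrieval in Eq. (10): Fourier-transform the subtracted fringe row by row, keep a band around the fundamental frequency, and take the magnitude of the inverse transform. The band half-width and the one-sided filter are assumed choices, not prescribed by the paper.

```python
fx = np.fft.fftfreq(W)                            # spatial frequencies along x (cycles/pixel)
spec = np.fft.fft(I_z, axis=1)                    # row-wise spectrum; no zero-frequency peak

band = np.abs(fx - f0) < f0 / 2                   # one-sided band around +f0 (assumed width)
spec_filtered = spec * band[None, :]

# Magnitude of the inverse transform; the factor 2 compensates for keeping only the +f0 lobe
modulation = 2.0 * np.abs(np.fft.ifft(spec_filtered, axis=1))
```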

3. System Calibration

From Fig. 1, one can see that the modulation curve has two segments, a rising edge and a falling edge. Only for the maximum modulation value is the mapping between the modulation and the depth value unique; any other modulation value corresponds to two depth values, that is, it is impossible to obtain unambiguous depth values from a single modulation curve. Therefore, in the actual measurement, modulation distributions for different focal lengths are required to establish a height-modulation look-up table based on the uniqueness of the mapping between the maximum modulation and the height value. The process of establishing this height-modulation look-up table is called system calibration.

Figure 3 shows the schematic diagram of the microscopic 3D shape vertical measurement system, which consists of a projector (Light Crafter PRO6500), an electrically tunable lens (ETL, Model: EL-16-40-TC), a beam splitter, a camera (Point Grey GS3-U3-23S6M) and two telecentric lenses. A controller board is applied to precisely synchronize the projector, ETL and camera. The ETL is utilized to change the focal length by setting different current values. In our system, the calibration depth is 1100 $\mu$m and the distance between any two adjacent planes is 100 $\mu$m. T (T = 12) planes are utilized to record the height values, and N (N = 317) current values are used to change the focal length. The plane P$_1$ farthest from the projector is set to be the reference plane with height Z(1, x, y) = 0 $\mu$m. The images with a phase difference of $\pi$ are continuously projected for the 317 current values and the modulation is calculated for each current. These modulation distributions are numbered, and the serial number S(1, x, y)$_{max}$ (S(1, x, y)$_{max}$ = n, n $\in \; 1, 2, 3,\ldots$, N) records the position of the maximum modulation value. Then the other calibration planes P$_{t}$ (t $\in \; 1, 2, 3,\ldots$, T) and the corresponding serial numbers S(t, x, y)$_{max}$ are recorded. Finally, a height-modulation look-up table can be established as

$$Z(t,x,y) = a(x,y) + b(x,y)S{(n,x,y)_{\max }} + c(x,y){S^2}{(n,x,y)_{\max }}$$

where a(x, y), b(x, y) and c(x, y) are the coefficients of the quadratic curve fitting.
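The per-pixel fit of Eq. (11) could be carried out as in the following sketch, which assumes an array S_max[t, y, x] of peak serial numbers for each calibration plane together with the known plane heights; the array contents shown here are placeholders.

```python
import numpy as np

T, Hc, Wc = 12, 800, 1000                        # calibration planes and cropped image size
Z = np.arange(T) * 100.0                         # plane heights: 0, 100, ..., 1100 um
S_max = np.random.randint(1, 318, (T, Hc, Wc))   # placeholder peak serial numbers per plane

# Per-pixel quadratic fit of Eq. (11): Z = a + b*S + c*S^2
# (loop shown for clarity; in practice it can be vectorized)
a = np.empty((Hc, Wc)); b = np.empty((Hc, Wc)); c = np.empty((Hc, Wc))
for yy in range(Hc):
    for xx in range(Wc):
        c[yy, xx], b[yy, xx], a[yy, xx] = np.polyfit(S_max[:, yy, xx], Z, 2)
```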

Fig. 3. The schematic diagram of microscopic 3D shape vertical measurement.

In the process of calibration and measurement, all of the images are cropped to 800$\times$1000 pixels to reduce the computation. Figure 4 shows the height-modulation look-up table for pixel (400, 500).
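With the look-up table coefficients in hand, the measurement step reduces to locating the peak serial number per pixel and evaluating the calibrated quadratic, as in this sketch; `modulation_stack` (shape N × H × W) is an assumed name for the modulation maps of one scan.

```python
# modulation_stack: assumed array of shape (N, H, W) holding the N modulation maps of a scan
n_peak = np.argmax(modulation_stack, axis=0) + 1   # serial number of the peak (1-based)
height = a + b * n_peak + c * n_peak**2            # Eq. (11): per-pixel height map in um
```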

Fig. 4. The relationship between the height distribution and the serial number corresponding to the maximum modulation value for pixel (400, 500).

4. Experiment

Experiments were carried out to verify the proposed method, with the ten-step PSM and the FTM used for comparison. Firstly, the accuracy of the measurement system is tested on four planes. The measurement accuracy is evaluated by taking the difference between the measured values and their fitted planes. For comparison with the PM, the PSM and the FTM are also applied to reconstruct the surface shape of these planes. Figure 5 and Table 1 respectively show the error distributions and the root-mean-square (RMS) errors for these methods. The average RMS values are 4.38 $\mu$m (PSM), 5.05 $\mu$m (PM), and 7.54 $\mu$m (FTM). With only one-fifth of the data acquisition of the PSM, the PM can almost reach the same accuracy as the PSM, and its accuracy is higher than that of the FTM.
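For reference, the plane-flatness evaluation described above can be reproduced with a least-squares plane fit and the RMS of the residuals, as in this sketch (the routine and names are illustrative, not the authors' code).

```python
import numpy as np

def plane_rms_error(height):
    """RMS deviation of a measured height map from its least-squares ideal plane."""
    h, w = height.shape
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([np.ones(h * w), xx.ravel(), yy.ravel()])
    coeff, *_ = np.linalg.lstsq(A, height.ravel(), rcond=None)
    residual = height.ravel() - A @ coeff
    return np.sqrt(np.mean(residual ** 2))
```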

Fig. 5. Error maps of the four tested flat planes compared with their fitted ideal planes. Rows (top to bottom): PSM, PM, FTM; columns (left to right): planes 1–4. See Dataset 1 for underlying values [33].

Table 1. Root-mean-square error for each tested flat plane ($\mu$m)

Then an object with protrusions forming the letters 'WHILE' was measured to verify the $\pi$ phase-shifting method. All of the images are cropped to 820 $\times$ 1430 pixels. Figures 6(a) and 6(b) show the fringes with a phase difference of $\pi$ when S(j) = 150. To show the variation of the focal plane, Figs. 6(c), 6(d) and 6(f) show the local closeup images of the region marked in Fig. 6(a) with a red dotted rectangle when S(j) = 100, S(j) = 150 and S(j) = 200, respectively. The stripes are clearest at the top of the object in Fig. 6(c), at the middle height of the object in Fig. 6(d), and at the bottom of the object in Fig. 6(f). To clearly show the $\pi$ phase-shift relationship between Figs. 6(a) and 6(b), Figs. 6(d) and 6(e) show their local closeups, and two red dots mark the same position in the two fringes. Obviously, the point in Fig. 6(d) is on a dark stripe, while the point in Fig. 6(e) is on a bright stripe.

Fig. 6. Fringes for the specimen. (a) Fringe when S(j) = 150; (b) Fringe with a phase difference of $\pi$ for (a); (c) A closeup image when S(j) = 100; (d) A closeup image of (a); (e) A closeup image of (b); (f) A closeup image when S(j) = 200.

The spectrum of Fig. 6(a) is shown in Fig. 7(a), while the spectrum of the subtraction of Fig. 6(a) and Fig. 6(b) is shown in Fig. 7(b). Obviously, both the zero-frequency component and the fundamental frequency component exist in Fig. 7(a), while only the fundamental frequency component appears in Fig. 7(b). To clearly see the distribution of the spectrum, Figs. 7(c) and 7(d) respectively show a partial cross section (row: 411, columns: 720–1430) of Fig. 7(a) and Fig. 7(b). Since the measured object is very complex and the background light intensity is uneven, the zero-frequency component and the fundamental frequency component each spread considerably toward the other and are aliased together in Fig. 7(a). Therefore, it is difficult to accurately extract the fundamental frequency component from a single fringe.

Fig. 7. Spectra of the fringes. (a) The spectrum of Fig. 6(a); (b) The spectrum of the subtraction of Fig. 6(a) and Fig. 6(b); (c) Part of a cross section of (a); (d) Part of a cross section of (b).

The PSM, PM and FTM are used to obtain the results, whose surface shapes are shown in Figs. 8(a)–8(c). Owing to the influence of the zero-frequency component, the useful fundamental frequency component containing the depth information cannot be acquired accurately, which results in a coarse surface from the FTM. For the PM, the subtraction of two fringes with a phase difference of $\pi$ eliminates the influence of the zero-frequency component on the fundamental frequency component and yields a good reconstruction. Figure 8(b) shows that the reconstruction of the deep grooves by the PM is clear. Figures 8(d)–8(f) show local closeups of Figs. 8(a)–8(c). For a clearer comparison, the partial cross sections obtained by these methods along the green line marked in Fig. 6(a) are depicted in Fig. 8(g). Due to the influence of the background light intensity, the FTM fails to obtain correct results.

Fig. 8. Experimental results. (a) 3D reconstruction of the specimen by PSM; (b) 3D reconstruction of the specimen by PM; (c) 3D reconstruction of the specimen by FTM; (d) Partial result of (a); (e) Partial result of (b); (f) Partial result of (c); (g) Comparison of one cross section for (d)-(f). See Dataset 1 for underlying values [33].

Similarly, to further prove the feasibility of the PM, another specimen is measured. Figure 9(a) shows the whole image when S(j) = 150. The local closeup images including the letter 'S' for S(j) = 100, S(j) = 150 and S(j) = 200 are respectively shown in Figs. 9(b)–9(d). For comparison, the PSM, PM and FTM are applied to reconstruct the surface shape, and the results are shown in Figs. 10(a)–10(c). There are many errors on the surface reconstructed by the FTM, while both the PM and the PSM obtain good results. Figures 10(d)–10(f) show local closeups of the reconstructions, and Fig. 10(g) compares one partial cross section obtained by these approaches, in which the glitches generated by the FTM can clearly be seen.

Fig. 9. Fringes for the specimen. (a) Fringe when S(j) = 150; (b) A closeup image when S(j) = 100; (c) A closeup image of (a); (d) A closeup image when S(j) = 200.

Fig. 10. Experimental results. (a) 3D reconstruction of the specimen by PSM; (b) 3D reconstruction of the specimen by PM; (c) 3D reconstruction of the specimen by FTM; (d) Partial result of (a); (e) Partial result of (b); (f) Partial result of (c); (g) Comparison of one partial cross section for (d)-(f). See Dataset 1 for underlying values [33].

5. Conclusion

This article has proposed the application of the $\pi$ phase-shifting method in microscopic optical 3D shape vertical measurement. Two fringe patterns with a phase difference of $\pi$ are captured for each current value. By performing subtraction and Fourier transform operations, the background light intensity information can be successfully eliminated, and the spectrum distribution becomes simpler, making the fundamental frequency component easier to extract. The RMS error of the PM is about 5.05 $\mu$m. With only one-fifth of the data acquisition of the PSM, the PM achieves almost the same measurement accuracy as the PSM, and a higher measurement accuracy than the FTM. Beyond the above improvement, measurement speed will be the next focus of research.

Funding

National Natural Science Foundation of China (61801057); Sichuan education department project (18ZB0124); Sichuan Science and Technology Program (2020YJ0431); College Students' innovation and entrepreneurship training programs (202010621108, 202110621229, 202110621232).

Acknowledgments

The authors would like to acknowledge the support of the National Natural Science Foundation of China (NSFC) (61801057), Sichuan education department project (18ZB0124), Sichuan Science and Technology Program (2020YJ0431), College Students’ innovation and entrepreneurship training programs (202010621108, 202110621229, 202110621232).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results for Fig. 5, Fig. 8 and Fig. 10 presented in this paper are available in supplementary Dataset 1 Ref. [33]. Further technical details and documentation are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. C. L. Chu, Y. L. Chen, T. Y. Tai, Y. H. Liu, C. H. Chuang, and C. T. Lu, “Development of a 3d touch trigger probe using micro spherical stylus machining by micro-edm for micro-cmm,” Proc. SPIE 8321, 83213A (2011). [CrossRef]  

2. H. Gao, X. Zhang, and F. Fang, “Axicon profile metrology using contact stylus method,” Int. J. Nanomanufacturing 14(2), 177–191 (2018). [CrossRef]  

3. X. Yin, P. Shi, A. Yang, L. Du, and X. Yuan, “Surface plasmon coupled nano-probe for near field scanning optical microscopy,” Opt. Express 28(10), 14831 (2020). [CrossRef]  

4. A. J. Fisher, “Scanning probe microscopy, theory,” Encycl. Spectrosc. Spectrom. 61, 2060–2066 (1999). [CrossRef]  

5. S. M. Salapaka and M. V. Salapaka, “Scanning probe microscopy,” IEEE Control Syst. 28(2), 65–83 (2008). [CrossRef]  

6. P. Liljeroth, B. Grandidier, C. Delerue, and D. Vanmaekelbergh, “Scanning probe microscopy and spectroscopy,” Nanoparticles pp. 223–255 (2014).

7. H. Fei and M. Lanza, “Scanning probe microscopy for advanced nanoelectronics,” Nat. Electron. 2(6), 221–229 (2019). [CrossRef]  

8. S. M. Choi, W. H. Kim, D. Côté, C.-W. Park, and H. Lee, “Blood cell assisted in vivo particle image velocimetry using the confocal laser scanning microscope,” Opt. Express 19(5), 4357–4368 (2011). [CrossRef]  

9. S. Ohkubo, “Development of birefringence confocal laser scanning microscope and its application to sample measurements,” J. Robotics Mechatronics 31(6), 926–933 (2019). [CrossRef]  

10. E. Merson, V. Danilov, D. Merson, and A. Vinogradov, “Confocal laser scanning microscopy: The technique for quantitative fractographic analysis,” Eng. Fract. Mech. 183, 147–158 (2017). [CrossRef]  

11. B. G. Chae, Y. Ichikawa, G. C. Jeong, Y. S. Seo, and B. C. Kim, “Roughness measurement of rock discontinuities using a confocal laser scanning microscope and the fourier spectral analysis,” Eng. Geol. 72(3-4), 181–199 (2004). [CrossRef]  

12. C. O’Mahony, M. Hill, M. Brunet, R. Duane, and A. Mathewson, “Characterization of micromechanical structures using white-light interferometry,” Meas. Sci. Technol. 14(10), 1807–1814 (2003). [CrossRef]  

13. J. Schmit and A. Olszak, “High-precision shape measurement by white-light interferometry with real-time scanner error correction,” Appl. Opt. 41(28), 5943–5950 (2002). [CrossRef]  

14. A. Hirabayashi, H. Ogawa, and K. Kitagawa, “Fast surface profiler by white-light interferometry by use of a new algorithm based on sampling theory,” Appl. Opt. 41(23), 4876–4883 (2002). [CrossRef]  

15. P. Pavlicek and J. Soubusta, “Measurement of the influence of dispersion on white-light interferometry,” Appl. Opt. 43(4), 766–770 (2004). [CrossRef]  

16. K. Kitagawa, “Single-shot surface profiling by multiwavelength interferometry without carrier fringe introduction,” J. Electron. Imaging 21(2), 021107 (2012). [CrossRef]  

17. A. Agrawal and C. L. Henderson, “Monitoring the dissolution rate of photoresist thin films via multiwavelength interferometry,” Proc. SPIE 5038, 1026–1037 (2003). [CrossRef]  

18. C. L. Griffiths and K. J. Weeks, “Optical monitoring of molecular beam epitaxy growth of aln/gan using single-wavelength laser interferometry: A simple method of tracking real-time changes in growth rate,” J. Vac. Sci. Technol., B: Microelectron. Nanometer Struct.--Process., Meas., Phenom. 25(3), 1066–1071 (2007). [CrossRef]  

19. L. Deininger, S. Francese, M. R. Clench, G. Langenburg, V. Sears, and C. Sammon, “Investigation of infinite focus microscopy for the determination of the association of blood with fingermarks - sciencedirect,” Sci. Justice 58(6), 397–404 (2018). [CrossRef]  

20. Z. Yang, A. Bielke, and G. Häusler, “Better three-dimensional inspection with structured illumination: speed,” Appl. Opt. 55(7), 1713–1719 (2016). [CrossRef]  

21. Z. Yang, A. Kessel, and G. Häusler, “Better 3d inspection with structured illumination part i: signal formation and precision,” Appl. Opt. 54(22), 6652–6660 (2015). [CrossRef]  

22. M. Vogel, Z. Yang, A. Kessel, C. Kranitzky, C. Faber, and G. Häusler, “Structured-illumination microscopy on technical surfaces: 3d metrology with nanometer sensitivity,” Proc. SPIE 8082, 80820S (2011). [CrossRef]  

23. G. Haeusler, M. Vogel, Z. Yang, A. Kessel, and C. Faber, “Sim and deflectometry: New tools to acquire beautiful, sem-like 3d images,” Optical Society of America (2011).

24. L. Turnbull, M. P. Strauss, A. Liew, L. G. Monahan, C. B. Whitchurch, and E. J. Harry, “Super-resolution imaging of the cytokinetic z ring in live bacteria using fast 3d-structured illumination microscopy (f3d-sim),” J. Visualized Exp. p. 51469 (2014).

25. R. Fiolka, L. Shao, E. H. Rego, and M. W. Davidson, “Time-lapse two-color 3d imaging of live cells with doubled resolution using structured illumination,” Proc. Natl. Acad. Sci. 109(14), 5311–5315 (2012). [CrossRef]  

26. X. Su, L. Su, W. Li, and L. Xiang, “A new 3d profilometry based on modulation measurement,” Proc. SPIE 19, 1–7 (1999).

27. L. Su, X. Su, W. Li, and L. Xiang, “Application of modulation measurement profilometry to objects with surface holes,” Appl. Opt. 38(7), 1153–1158 (1999). [CrossRef]  

28. M. Zhong, J. Cui, J. S. Hyun, L. Pan, and S. Zhang, “Uniaxial 3d phase-shifting profilometry using a dual-telecentric structured light system in micro-scale devices,” Meas. Sci. Technol. 31(8), 085003 (2020). [CrossRef]  

29. X. Su, L. Su, and W. Li, “A new fourier transform profilometry based on modulation measurement,” Proc. SPIE 3749, 438–439 (1999). [CrossRef]  

30. Y. Wang, X. Su, and Y. Dou, “A fast three-dimensional object recognition based on modulation analysis,” Opt. Lasers Eng. 30, 720–726 (2010). [CrossRef]  

31. E. Hu and Y. M. He, “Surface profile measurement of moving objects by using an improved π phase-shifting fourier transform profilometry,” Opt. Lasers Eng. 47(1), 57–61 (2009). [CrossRef]  

32. Y. P. Liu, G. L. Du, C. R. Zhang, C. L. Zhou, S. C. Si, and Z. K. Lei, “An improved two-step phase-shifting profilometry,” Optik 127(1), 288–291 (2016). [CrossRef]  

33. X. Zhang, K. Cheng, Y. Lu, X. Zhao, M. Li, Y. Gan, X. Dai, and M. Zhong, “Reconstruction results,” figshare (2021) [retrieved 21 October 2021], https://doi.org/10.6084/m9.figshare.16897171.

Supplementary Material (1)

Dataset 1: data and images

