
Time-slicing high dynamic range 3D imaging

Open Access

Abstract

Fringe projection profilometry (FPP) has been widely used in 3D measurement due to its high precision and non-contact properties. Nevertheless, it still faces great challenges in measuring scenes with complex reflectivity, in which the dynamic range of the reflected light field of the scene is significantly higher than that of the image detector. In this paper, we propose a time-slicing strategy for high dynamic range 3D imaging: a series of sinusoidal fringe patterns is projected with short, equal-length exposure times, and different numbers of short-exposure images are fused according to the local gray-value distribution of the images. Moreover, to further improve the measurement efficiency, we realize phase unwrapping using complementary Gray code patterns, which are binary and insensitive to the image sensor’s nonlinear response to the reflected light from the scene under test. Experiments are conducted to demonstrate the feasibility and efficiency of the proposed method.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Due to its high-speed and non-contact advantages, optical 3D measurement technology has been widely used in industrial measurement, reverse engineering, cultural relics protection, medical pathology detection, and other fields [1–3]. Fringe projection profilometry (FPP) [4,5] is one of the most typical optical 3D measurement approaches because of its full-field detection, high spatial resolution, and high accuracy. By projecting a series of fringe patterns onto the scene under test and analyzing the deformed fringes modulated by the surface contour of the scene, the 3D profile of the scene can be reconstructed accurately. Because phase values are used as feature descriptors, the strategy of projecting and decoding sinusoidal fringes imposes strict requirements on the linearity of the intensity response from the projector to the camera. In an actual measurement environment, however, the reflected light from the scene under test may have a very large dynamic range because of differences in reflectivity or color within the field of view. In this case, if a single exposure time is used, the high-reflectivity areas tend to be overexposed while the low-reflectivity areas are too dark; neither case satisfies the linear-response requirement, and both lead to errors or deviations in the measurement results.

To solve the above problem, researchers have developed various fringe projection techniques for high dynamic range (HDR) 3D imaging in recent years [6]. The most direct method is gamma correction [7–10]. Visual equipment exhibits gamma distortion, which renders dark objects with smaller output intensity so as to match the human visual (power-law) response. The nonlinearity of a measuring system can be quantified and corrected by photometric calibration, which extends the dynamic range of the captured patterns. To avoid such calibration, blind gamma correction, iterative gamma correction, and correction techniques based on look-up table mapping were presented to remove the effect of gamma. However, these methods cannot solve the problem fundamentally because they do not focus on avoiding overexposure or improving the signal-to-noise ratio (SNR) of the dark regions. Therefore, some hardware-based methods, such as setting one or several optimal exposure times [11–13], capturing deformed fringes through different channels [14], designing the optimal projected pattern by a recursive algorithm [15–18], and adding polarizers in front of the projector and camera [19], all aim to avoid image overexposure. On the other hand, some algorithm-based methods, such as increasing the number of phase-shifting steps [20], replacing the camera with a light-field camera [21], and using 180-degree phase-inverted fringe patterns [22], can calculate the correct wrapped phase even if the image is overexposed. These methods can achieve HDR measurement, but the hardware-based methods require additional, cumbersome hardware, and the algorithm-based methods are complex and time-consuming.

In 2D imaging, Google proposed the HDR+ method, "burst photography for low-light imaging on mobile cameras". Unlike traditional HDR techniques, HDR+ captures a series of equal-length exposures rather than differently exposed ones and then fuses them into a high dynamic range image. As a result, overexposure is largely avoided and noise is effectively suppressed.

In this paper, we propose a new HDR 3D imaging method inspired by the Google HDR+ photography method. Our method improves the dynamic measurement range of the system while preserving its practicality and simplicity. We divide the single long exposure time of a phase-shifting image into several equal-length short exposures. After an image fusion process, these images are used to obtain HDR 3D results with the aid of complementary Gray code images [23–26]. This paper is organized as follows. Section 2 explains the principle of the proposed time-slicing method and complementary Gray code assisted phase unwrapping in detail, Section 3 presents experimental results that support our method, and Section 4 summarizes this paper.

2. Principle

The flow of the proposed method is illustrated in Fig. 1. Our FPP system comprises a projector with blue light and a monochrome camera, and the phase-shifting sinusoidal and complementary Gray code patterns are prepared in advance. The specific steps are as follows:


Fig. 1. Overview of the proposed method.


Step I: Adaptive acquisition of time-slicing phase-shifting image sets. Taking the minimum exposure time as the unit time, the camera captures the first set of sinusoidal images; from the gray-value distribution of this set, the total number of sets $m$ that need to be acquired is computed, and the remaining sets are then captured.

Step II: Acquisition of one set of complementary Gray code images.

Step III: Calculation of the 3D measurement result. After the sinusoidal image fusion process, the wrapped phase is calculated by the phase-shifting algorithm, the absolute phase is calculated with the assistance of the complementary Gray code images, and finally the 3D measurement result is obtained.

All images in the above steps are captured in 10-bit format.

2.1 Time-slicing image fusion method

2.1.1 N-step phase-shifting strategy

The N-step phase-shifting strategy has been widely used in FPP systems because it is robust to abrupt changes or discontinuities on the surface of the object to be measured [4,27–29]. The gray-value map captured by the camera can be expressed as:

$$I_{n}(x, y)=A(x, y)+B(x, y) \cos [\phi(x, y)+2 \pi n / N] ,$$
where $n$ is the phase-shifting index, $(x, y)$ denotes the pixel coordinates of the camera, $A$ and $B$ denote the average intensity and intensity modulation, respectively, and $\phi$ is the phase value to be solved. The average intensity $A$, intensity modulation $B$, and phase value $\phi$ can be calculated by the following equations:
$$A(x, y)=\frac{\sum_{n=1}^{N} I_{n}(x, y)}{N} ,$$
$$B(x, y)=\frac{2}{N} \sqrt{\left[\sum_{n=1}^{N} I_{n}(x, y) \sin (2 \pi n / N)\right]^{2}+\left[\sum_{n=1}^{N} I_{n}(x, y) \cos (2 \pi n / N)\right]^{2}} ,$$
$$\phi(x, y)=\tan ^{{-}1} \frac{\sum_{n=1}^{N} I_{n}(x, y) \sin (2 \pi n / N)}{\sum_{n=1}^{N} I_{n}(x, y) \cos (2 \pi n / N)} .$$
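
As a minimal numerical sketch (not the authors' implementation), the per-pixel quantities in Eqs. (2)–(4) can be evaluated directly from the captured image stack; the array name `frames` is an assumed placeholder.

```python
import numpy as np

def phase_shifting_decode(frames):
    """Evaluate Eqs. (2)-(4) for an N-step phase-shifting image stack.

    frames : ndarray of shape (N, H, W), the captured images I_n(x, y)
             with phase shifts 2*pi*n/N, n = 1..N.
    """
    N = frames.shape[0]
    n = np.arange(1, N + 1).reshape(-1, 1, 1)            # phase-shifting index
    s = np.sum(frames * np.sin(2 * np.pi * n / N), axis=0)
    c = np.sum(frames * np.cos(2 * np.pi * n / N), axis=0)

    A = frames.sum(axis=0) / N                           # average intensity, Eq. (2)
    B = 2.0 / N * np.sqrt(s ** 2 + c ** 2)               # intensity modulation, Eq. (3)
    phi = np.arctan2(s, c)                               # wrapped phase, Eq. (4)
    return A, B, phi
```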

2.1.2 Time-slicing method

For objects with multiple reflectivities, a single exposure is problematic. The sinusoidal images captured by the camera in a single exposure time, modulated by the subject, are shown in Fig. 2(a), and the grayscale cross-section at the red line marker is shown in Fig. 2(b). From Fig. 2, we can see that for complex scenes a single exposure time easily causes overexposure in the highly reflective regions and low SNR in the low-reflectivity regions (see the red circles in Fig. 2(b)). Both cases cause phase calculation errors and eventually lead to 3D reconstruction failure. The current mainstream Multiple exposure method divides the measured object into different areas according to reflectivity and selects the best exposure time for each area. This method can achieve HDR measurement, but it is relatively time-consuming.


Fig. 2. (a) Sinusoidal modulated image of a single exposure time taken by the camera, (b) the gray-value at the red line in (a).


Therefore, we propose a time-slicing method, which slices a single long exposure into multiple short exposures of equal length. The Multiple exposure method requires multiple sets of sinusoidal images to obtain the wrapped phase. As shown by the red arrow in Fig. 3, the Fre1 set (Fre1 represents a set of sinusoidal images of frequency 1) with the shortest exposure time corresponds to the plaster head, the longer-exposure Fre1 corresponds to the black baseball cap, and the longest-exposure Fre1 corresponds to the black part of the calibration plate. Our method can be seen as slicing the longest Fre1 exposure into equal-length segments, as shown in the red dashed box in Fig. 3. For parts of the measured object with different reflectivities, selecting different numbers of image sets for fusion achieves the same effect and saves a lot of shooting time.


Fig. 3. Time comparison for image acquisition by Multiple exposure plus triple-frequency phase-shifting method against our method. (Fre1 represents a set of sinusoidal images of frequency 1; Fre2 represents a set of sinusoidal images of frequency 2; Fre3 represents a set of sinusoidal images of frequency 3. S and G represent sinusoidal images and complementary Gray code images of our method, respectively.)


2.1.3 Images fusion

After shooting the first set of phase-shifting sinusoidal images, we stack them pixel by pixel and divide by $N$, as shown in Eq. (5); the result is similar to an image of the measured object under uniform illumination. We then count the gray-value of each pixel into a histogram, as shown in Fig. 4, and the first valley of the histogram gives exactly the gray-value $I_{0}$ we need. Using the ratio of 255 to this value, the optimal number of acquisition sets $m$ can be calculated, as shown in Eq. (6), where INT indicates rounding down:

$$I_{c}(x, y)=\frac{\sum_{n=1}^{N} I_{n}(x, y)}{N},$$
$$m=I N T\left(255 / I_{0}\right) .$$
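
The valley search in Fig. 4 is not specified algorithmically in the text; the following sketch (a hedged illustration, with the smoothing window and bin count as assumptions) estimates $I_{0}$ from the histogram of $I_{c}$ and then applies Eq. (6).

```python
import numpy as np

def optimal_group_count(first_set, full_scale=255):
    """Estimate the number of acquisition sets m from the first set of
    phase-shifting images, following Eqs. (5)-(6).

    first_set  : ndarray (N, H, W), the first set of sinusoidal images.
    full_scale : gray value used in Eq. (6) (255 in the paper).
    """
    I_c = first_set.mean(axis=0)                         # Eq. (5): pseudo uniform-illumination image

    # Histogram of I_c; the first valley of the (lightly smoothed) histogram
    # is taken as I_0. The smoothing and bin count are assumed details.
    hist, edges = np.histogram(I_c.ravel(), bins=256, range=(0, full_scale))
    smooth = np.convolve(hist, np.ones(5) / 5.0, mode="same")
    valleys = np.where((smooth[1:-1] < smooth[:-2]) & (smooth[1:-1] < smooth[2:]))[0] + 1
    I_0 = edges[valleys[0]] if valleys.size else I_c.mean()

    m = int(full_scale // max(I_0, 1.0))                 # Eq. (6): m = INT(255 / I_0)
    return m, I_0
```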


Fig. 4. Pixel gray-value statistics histogram of the first four sinusoidal images averaged after superposition. $I_{c}$ could be seen as the gray-value of the measured object from the uniform illumination.


When all $m$ sets of sinusoidal images have been shot, every pixel undergoes a fusion process, in which Eqs. (3) and (7) are used to determine the best fusion group $i_{\text {opt }}$ for that pixel. The schematic diagram is shown in Fig. 5. Only the pixels whose gray-values lie within the gray-value mask $\left[I_{l}(x, y), I_{h}(x, y)\right]$ (to reduce the nonlinear response of the projector, it is generally taken as $I_{l}(x, y)=30$ and $I_{h}(x, y)=220$) and that have the highest modulation value proceed to the subsequent phase unwrapping:

$$B_{i_{o p t}}(x, y)=\operatorname{MAX}\left[B_{i}(x, y)\right] .$$
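
A per-pixel sketch of this selection rule follows; the fusion operator (summing the first $i$ short-exposure sets to emulate an $i$-unit exposure) is our assumption, since the text specifies only the gray-value mask and the maximum-modulation criterion of Eq. (7).

```python
import numpy as np

def fuse_time_slices(sets, I_l=30, I_h=220):
    """Select the best fusion group i_opt per pixel (Eq. (7)).

    sets : ndarray (m, N, H, W), m short-exposure sets of N phase-shifted images.
    """
    m, N, H, W = sets.shape
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    best_B = np.full((H, W), -np.inf)
    fused = np.zeros((N, H, W))
    valid = np.zeros((H, W), dtype=bool)

    for i in range(1, m + 1):
        cand = sets[:i].sum(axis=0)                      # assumed fusion: sum of the first i sets
        s = np.sum(cand * np.sin(2 * np.pi * n / N), axis=0)
        c = np.sum(cand * np.cos(2 * np.pi * n / N), axis=0)
        B = 2.0 / N * np.sqrt(s ** 2 + c ** 2)           # modulation of candidate i, Eq. (3)
        in_mask = (cand.min(axis=0) >= I_l) & (cand.max(axis=0) <= I_h)
        better = in_mask & (B > best_B)                  # keep masked candidate with max modulation
        best_B[better] = B[better]
        fused[:, better] = cand[:, better]
        valid |= better

    return fused, valid                                  # only valid pixels enter phase unwrapping
```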


Fig. 5. Schematic diagram of image fusion for each pixel.


For the high reflectivity region, the time-slicing method could solve the overexposure problem because the exposure time is short enough. For the low reflectivity region, the random noise is usually considered additive white Gaussian noise in the FPP system [11]. It is known from Eq. (8) that the time-slicing image fusion method can effectively reduce the effect of random noise and improve the SNR in the low reflectivity region, thus expanding the measurement range:

$$S N R_{i}=\sqrt{i} S N R_{\text{single }} .$$
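
Equation (8) follows from this noise model in one line: assuming independent, identically distributed zero-mean noise of standard deviation $\sigma_{\text{single}}$ in each slice, summing $i$ slices adds the signal linearly and the noise in quadrature,
$$S_{i}=i\, S_{\text{single}}, \quad \sigma_{i}=\sqrt{i}\, \sigma_{\text{single}} \quad \Longrightarrow \quad S N R_{i}=\frac{i\, S_{\text{single}}}{\sqrt{i}\, \sigma_{\text{single}}}=\sqrt{i}\, S N R_{\text{single}} .$$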

2.2 Complementary Gray code assisted phase unwrapping

In Eq. (4), the phase value $\phi(x, y)$ has a value domain of $(-\pi,+\pi]$. A single-period sinusoidal pattern cannot yield the full-field unambiguous phase value because of the period ambiguity caused by sharp changes at the edges or in the interior of the measured object, so additional fringe patterns are needed to unwrap the wrapped phase. From the unwrapping Eq. (9), the key is to obtain the fringe order, the integer $k$:

$$\Phi(x, y)=\phi(x, y)+2 \pi k .$$

We use complementary Gray code patterns to assist phase unwrapping. Compared with the multi-frequency (heterodyne) temporal phase unwrapping method, complementary Gray code patterns are binary and therefore insensitive to the nonlinear response of the camera sensor to the reflected light from the scene under test. Complementary Gray code patterns require only one extra Gray code image compared with traditional Gray code patterns.

After the modulated complementary Gray code images are captured, the grayscale threshold of each pixel is calculated by averaging the gray-values of the sinusoidal fringe maps; pixels whose gray-values are smaller than the threshold are binarized to 0, and the others to 1. The grayscale threshold and the $i$th binarized Gray code image are given in Eqs. (10) and (11):

$$I_{th}(x, y)=\frac{\sum_{n=1}^{N} I_{n}(x, y)}{N}, \quad i=0,1,2,3, \cdots,$$
$$G_{i}(x, y)= \begin{cases}0, & I_{G_{i}}(x, y)<I_{t h}(x, y) \\ 1, & I_{G_{i}}(x, y) \geq I_{t h}(x, y)\end{cases} .$$
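
A compact binarization sketch (illustrative only; per the description above, the per-pixel average of the sinusoidal fringe images serves as the threshold $I_{th}$):

```python
import numpy as np

def binarize_gray_codes(gray_imgs, fringe_imgs):
    """Binarize captured complementary Gray code images per Eq. (11).

    gray_imgs   : ndarray (Ng, H, W), captured Gray code images I_Gi.
    fringe_imgs : ndarray (N, H, W), (fused) sinusoidal images whose per-pixel
                  average serves as the threshold I_th described in the text.
    """
    I_th = fringe_imgs.mean(axis=0)                      # per-pixel threshold from the fringes
    return (gray_imgs >= I_th).astype(np.uint8)          # G_i = 1 if I_Gi >= I_th, else 0
```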

The $N_{g}$ complementary Gray code images are decoded to obtain two fringe-order integers, $k_{1}(x, y)$ and $k_{2}(x, y)$, as shown in Eqs. (12)–(15):

$$V_{1}(x, y)=\sum_{i=1}^{N_{g}-1} G_{i}(x, y) * 2^{N_{g}-1-i},$$
$$V_{2}(x, y)=\sum_{i=1}^{N_{g}} G_{i}(x, y) * 2^{N_{g}-i},$$
$$k_{1}(x, y)=i(V_{1}(x, y)),$$
$$k_{2}(x, y)=INT[(i(V_{2}(x, y))+1)/2],$$
where $i(\cdot)$ maps the decoded decimal number $V(x, y)$ to the monotonic fringe order $k(x, y)$ through a known relationship. Then the unwrapped phase value can be obtained from Eqs. (4), (9), (14), and (15), as shown in Eq. (16):
$$\Phi(x, y)=\left\{\begin{array}{lr} \phi(x, y)+2 \pi k_{2}(x, y), & \phi(x, y) \leq \pi / 2 \\ \phi(x, y)+2 \pi k_{1}(x, y), & \quad \pi / 2<\phi(x, y)<3 \pi / 2 \\ \phi(x, y)+2 \pi\left[k_{2}(x, y)-1\right], & \phi(x, y) \geq 3 \pi / 2 \end{array}\right. .$$
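
The decoding of Eqs. (12)–(16) can be sketched as follows; the look-up table `lut` realizing $i(\cdot)$ depends on the projected code sequence and is an assumed input (a single table is used here for brevity), and the wrapped phase is assumed to be remapped to $[0, 2\pi)$ so that the branch conditions of Eq. (16) apply.

```python
import numpy as np

def unwrap_with_complementary_gray(phi, G, lut):
    """Absolute phase from the wrapped phase and complementary Gray codes,
    following Eqs. (12)-(16).

    phi : ndarray (H, W), wrapped phase remapped to [0, 2*pi) (assumption).
    G   : ndarray (Ng, H, W), binarized Gray code bits from Eq. (11);
          G[0] is the most significant bit, G[Ng-1] the complementary bit.
    lut : 1-D integer array realizing i(.), the code-design-dependent mapping
          from the decoded decimal value V to the monotonic fringe order k.
    """
    Ng = G.shape[0]
    w1 = 2 ** np.arange(Ng - 2, -1, -1)                  # weights 2^(Ng-1-i), Eq. (12)
    w2 = 2 ** np.arange(Ng - 1, -1, -1)                  # weights 2^(Ng-i), Eq. (13)
    V1 = np.tensordot(w1, G[:Ng - 1].astype(int), axes=1)
    V2 = np.tensordot(w2, G.astype(int), axes=1)

    k1 = lut[V1]                                         # Eq. (14)
    k2 = (lut[V2] + 1) // 2                              # Eq. (15)

    Phi = phi + 2 * np.pi * k1                           # middle branch of Eq. (16)
    Phi = np.where(phi <= np.pi / 2, phi + 2 * np.pi * k2, Phi)
    Phi = np.where(phi >= 3 * np.pi / 2, phi + 2 * np.pi * (k2 - 1), Phi)
    return Phi
```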

According to the principle of phase unwrapping, it is clear that the accuracy of the 3D reconstructed point cloud depends on the quality of the phase-shifting images, while the complementary Gray code images are used only for phase unwrapping. Therefore, to reach the same 3D reconstruction accuracy, our method needs to project only one set of phase-shifting patterns, whereas the triple-frequency phase-shifting algorithm [30] needs to project two additional sets. Moreover, because of the binary property of the complementary Gray code, combining it with the time-slicing method greatly reduces its projection time. The difference between the two approaches is shown in Fig. 3: the area of the yellow rectangle can be seen as the time required by the Multiple exposure plus triple-frequency (heterodyne) method, and the area of the green rectangle as the time required by our method.

2.3 Shooting images in 10-bit format

Furthermore, our system shoots 10-bit raw format images instead of 8-bit images. Compared to 8-bit images, 10-bit images have four times as many gray levels. The camera’s image sensor records the analog light intensity distribution as discrete gray-values, and the finer the quantization, the smaller the quantization error. Therefore, when projecting a sinusoidal pattern with the same exposure time and setting the same modulation threshold to select high-quality pixel gray-values, 10-bit images extend the reconstruction range of the system in areas with low reflectivity.
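
A small numerical illustration (with made-up intensities, not the paper's data) of why finer quantization helps in dark regions: a fringe occupying only about 1% of full scale is decoded with a noticeably smaller phase error after 10-bit quantization than after 8-bit quantization.

```python
import numpy as np

N = 4
n = np.arange(1, N + 1)
signal = 0.005 + 0.004 * np.cos(1.0 + 2 * np.pi * n / N)  # normalized dim fringe (~1% of full scale)

def decode(I):
    # wrapped-phase estimate following Eq. (4)
    s = np.sum(I * np.sin(2 * np.pi * n / N))
    c = np.sum(I * np.cos(2 * np.pi * n / N))
    return np.arctan2(s, c)

phi_ref = decode(signal)                                  # decode without quantization
for bits in (8, 10):
    levels = 2 ** bits - 1
    phi_q = decode(np.round(signal * levels) / levels)    # decode after b-bit quantization
    print(f"{bits}-bit phase error: {abs(phi_q - phi_ref):.4f} rad")
```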

3. Experiments

3.1 System Setup

A typical FPP system has been built, as shown in Fig. 6. The system includes a CMOS camera with 1920$\times$1080 resolution (model: Daheng Image MER2-230-168U3M) with an 8 mm lens (model: Daheng Image HN-0826-20M-C1/1X), and a DLP projector with 912$\times$1140 resolution (model: TI LightCrafter 4500). We used a JUNTEK signal generator (model: JDS-2900) to trigger the camera and projector simultaneously and to precisely control the trigger signal’s pulse width and duty cycle. In our experiments, we chose the 4-step phase-shifting strategy, and the fringe wavelength of the phase-shifting sinusoidal image is set to 8 pixels. Note that, compared with the whole shooting time, the overhead introduced by each short exposure of the time-slicing strategy is almost negligible; since our projector is limited to a maximum frame rate of 111 Hz, we used 10 ms as the unit time.


Fig. 6. System setup.


3.2 Accuracy verification experiment

To verify the accuracy of our method, we measured high-precision standard objects: a pair of standard ceramic ball rods, shown in Fig. 7(a). The radius of ball A is $25.3999 \mathrm {~mm}$, the radius of ball B is $25.3983 \mathrm {~mm}$, and the center distance of the two balls is $100.1563 \mathrm {~mm}$, as certified by the Shenzhen Academy of Metrology and Quality Inspection. Fig. 7(b) shows our method’s 3D measurement result, and Fig. 7(c)(d) show the reconstructions of spheres A and B, respectively. To verify the measurement accuracy, the 3D point cloud data of sphere A and sphere B were used to perform a spherical fitting. The differences between the measured and fitted data of sphere A and sphere B are shown in Fig. 7(e)(f), and the error distributions of sphere A and sphere B are shown in Fig. 7(g)(h). Based on the fitting results, we calculated the root mean square error (RMSE): the RMSE of sphere A is about 0.063 and that of sphere B is about 0.049. The measured center distance of the two balls is $100.2221 \mathrm {~mm}$, an absolute error of $0.0658 \mathrm {~mm}$. This experiment proves that our method can obtain highly accurate measurement results.
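
The paper does not state which sphere-fitting procedure was used; one standard choice that reproduces this kind of check is an algebraic least-squares fit, sketched below (the array `points` is an assumed input of reconstructed 3D coordinates on one ball).

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit (one standard choice, not
    necessarily the authors' procedure).

    points : ndarray (K, 3), reconstructed 3D points on one ball.
    Returns (center, radius, rmse) with rmse the RMS radial residual.
    """
    # |p - c|^2 = r^2  ->  2 c . p + (r^2 - |c|^2) = |p|^2, linear in the unknowns
    X = np.c_[2 * points, np.ones(len(points))]
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(X, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    residuals = np.linalg.norm(points - center, axis=1) - radius
    rmse = np.sqrt(np.mean(residuals ** 2))
    return center, radius, rmse
```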


Fig. 7. (a) Photograph of a pair of standard ceramic balls. (b) The 3D measurement result of the standard ceramic balls; (c) the 3D measurement result of sphere A; (d) the 3D measurement result of sphere B; (e) the 3D measurement result error of sphere A; (f) the 3D measurement result error of sphere B; (g) error distribution of Sphere A and its RMSE value; (h) error distribution of Sphere B and its RMSE value.


3.3 Highly dynamic verification experiments

We first measured a composite scene. This scene includes a metal frame, a plaster head, a pair of standard balls, and a black cardboard box, as shown in Fig. 8(a). The enlarged view of the black paper box in Fig. 8(a) is shown in Fig. 8(b); it has a low reflectivity. Fig. 8(c) and (d) show the 3D measurement result and absolute phase map calculated from 8-bit images, respectively. Fig. 8(e) and (f) show the 3D measurement result and absolute phase map calculated from 10-bit images, respectively. Both the 3D measurement results and the absolute phase maps show that using 10-bit images yields good results in the low-reflectivity areas and thereby expands the measurement range of the system.


Fig. 8. Difference between 3D reconstruction using 10-bit images and 8-bit images. (a) Actual measured object; (b) the low reflectivity black paper box in the red circle in (a); (c) reconstruction result calculated from 8-bit images; (d) absolute phase map calculated from 8-bit images; (e) reconstruction result calculated from 10-bit images; (f) absolute phase map calculated from 10-bit images.


Then we measured a complex scene in a realistic environment, as shown in Fig. 9. This scene contains objects with different materials, colors, and reflectivities, providing a good test of the measurement system’s dynamic range. Fig. 10(a)(b)(c) show the absolute phase maps obtained by fusing 1, 3, and 5 sets of sinusoidal images, respectively, while Fig. 10(d)(e)(f) show the corresponding 3D measurement results. This experiment demonstrates the high dynamic measurement range of our method.


Fig. 9. Photograph of the complex scene.



Fig. 10. 3D measurement results of a complex scene. (a) Absolute phase map obtained by using 1 set of sinusoidal images; (b) absolute phase map obtained by fusing 3 sets of sinusoidal images; (c) absolute phase map obtained by fusing 5 sets of sinusoidal images; (d) 3D measurement result calculated by using 1 set of sinusoidal images; (e) 3D measurement result calculated by fusing 3 sets of sinusoidal images; (f) 3D measurement result calculated by fusing 5 sets of sinusoidal images.


In addition, it can be noted that the unwrapped phase maps in both Fig. 8(d)(f) and Fig. 10(b)(c) contain erroneous phase points. To prevent these errors from propagating into the 3D measurement results, we removed the erroneous points using the monotonic property of the unwrapped phase.
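
One simple realization of this error-removal step (our interpretation; the exact criterion and scanning direction are not specified in the text) is to flag pixels at which the unwrapped phase stops increasing along the fringe direction:

```python
import numpy as np

def monotonicity_mask(Phi, axis=1, tol=0.0):
    """Keep only unwrapped-phase pixels consistent with monotonicity along `axis`.

    Phi : ndarray (H, W), unwrapped phase; along the fringe direction it is
          expected to increase monotonically on a continuous surface.
    Returns a boolean mask, True where the pixel is kept.
    """
    diff = np.diff(Phi, axis=axis)
    bad = diff < -tol                                    # a decreasing step indicates an error
    keep = np.ones_like(Phi, dtype=bool)
    if axis == 1:
        keep[:, 1:] &= ~bad                              # drop the pixel the bad step enters
        keep[:, :-1] &= ~bad                             # and the pixel it leaves
    else:
        keep[1:, :] &= ~bad
        keep[:-1, :] &= ~bad
    return keep
```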

Finally, we compared our method against the previously mentioned Multiple exposure method [12]. The Multiple exposure method sets an appropriate exposure time for each reflectivity region of the measured objects and can also obtain HDR 3D measurement results. In this contrast experiment, the Multiple exposure method also used the 4-step phase-shifting algorithm to calculate the wrapped phase, then used the triple-frequency (heterodyne) method to get the unwrapped phase, and finally obtained the 3D measurement results. Like our method, the Multiple exposure method also required 12 patterns. The difference is that our method uses 4 sinusoidal patterns plus 8 complementary Gray code patterns, while the Multiple exposure method uses three groups of sinusoidal patterns with wavelengths of 8, 9, and 75 pixels. Table 1 compares the shooting time for this scene in the experiment, where T represents the exposure time of each pattern and F represents the number of patterns in one set. In the phase recovery process, the time-slicing method needed to shoot 5 sets of sinusoidal images at 10 ms per frame, while the Multiple exposure method needed to shoot 3 sets of sinusoidal images at 12 ms, 20 ms, and 46 ms, respectively. In the phase unwrapping process, the time-slicing method only needed to shoot one set of complementary Gray code images at 10 ms per frame, while the Multiple exposure method needed to shoot the remaining two groups of sinusoidal images at all three exposures. In total, time-slicing needed only 280 ms to finish the whole shooting process, while the Multiple exposure method needed 1560 ms. Fig. 11(a) shows the 3D measurement result obtained by our method, Fig. 11(b) shows the 3D measurement result of the calibration plate in the red box marked in (a), and Fig. 11(c) shows the error map of the calibration plate in (a) with respect to its fitting plane; the RMSE of this calibration plate area is 0.2063. Fig. 11(d) shows the 3D measurement result obtained by the Multiple exposure method, Fig. 11(e) shows the 3D measurement result of the calibration plate in the red box marked in (d), and Fig. 11(f) shows the error map of the calibration plate in (d) with respect to its fitting plane; the RMSE of this calibration plate area is 0.2617. In summary, our method can significantly reduce shooting time while obtaining HDR 3D measurement results.


Fig. 11. Reconstruction results in Comparison of time-slicing and Multiple exposure methods. (a) 3D measurement result calculated by time-slicing method; (b) 3D measurement result of the calibration plate in the red box in (a); (c) the error map of the calibration plate 3D measurement result in (a) with its fitting plane; (d) 3D measurement result calculated by Multiple exposure method (e) 3D measurement result of the calibration plate in the red box in (d); (f) the error map of the calibration plate 3D measurement result in (d) with its fitting plane.



Table 1. Comparison of the shooting time for the complex scene. (T represents the exposure time of each image; F represents the number of images in one set.)

4. Conclusion

In this paper, we proposed a new FPP high dynamic range 3D measurement method, the time-slicing strategy, in which the system projects and captures a series of short, equal-length exposures of sinusoidal images and fuses them to obtain high-quality wrapped phase values; complementary Gray code images are then used to assist phase unwrapping and obtain high-precision 3D results. The experiments prove that our method can significantly improve the dynamic measurement range of the FPP system and has great advantages in terms of shooting time. Compared with traditional HDR methods, our system is simple in structure: only the phase-shifting images are projected several times, while the complementary Gray code images are projected once. Hence, it takes less time and effectively suppresses random noise. In addition, using 10-bit images instead of 8-bit images further widens the dynamic measurement range of the system.

Our method has a good effect on measuring diffuse reflective objects, but it is ineffective for objects with specular reflection. Such objects are widely present in daily life. Therefore, in future research, we will propose more universal methods to further improve the dynamic measurement range of the FPP system.

Funding

National Natural Science Foundation of China (61735003); Foundation Enhancement Program (2021-JCJQ-JJ-0823).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recognition 43(8), 2666–2680 (2010). [CrossRef]  

2. J. Geng, “Structured-light 3d surface imaging: a tutorial,” Adv. Opt. Photonics 3(2), 128–160 (2011). [CrossRef]  

3. Z. Zhang, “Review of single-shot 3d shape measurement by phase calculation-based fringe projection techniques,” Optics and Lasers in Engineering 50(8), 1097–1106 (2012). [CrossRef]  

4. C. Zuo, S. Feng, L. Huang, T. Tao, W. Yin, and Q. Chen, “Phase shifting algorithms for fringe projection profilometry: A review,” Optics and Lasers in Engineering 109, 23–59 (2018). [CrossRef]  

5. J. Xu and S. Zhang, “Status, challenges, and future perspectives of fringe projection profilometry,” Optics and Lasers in Engineering 135, 106193 (2020). [CrossRef]  

6. S. Feng, L. Zhang, C. Zuo, T. Tao, Q. Chen, and G. Gu, “High dynamic range 3d measurements with fringe projection profilometry: a review,” Meas. Sci. Technol. 29(12), 122001 (2018). [CrossRef]  

7. P. S. Huang, C. Zhang, and F.-P. Chiang, “High-speed 3-d shape measurement based on digital fringe projection,” Opt. Eng 42(1), 163–168 (2003). [CrossRef]  

8. H. Farid, “Blind inverse gamma correction,” IEEE Trans. on Image Process. 10(10), 1428–1433 (2001). [CrossRef]  

9. H. Guo, H. He, and M. Chen, “Gamma correction for digital fringe projection profilometry,” Appl. Opt. 43(14), 2906–2914 (2004). [CrossRef]  

10. S. Zhang and S.-T. Yau, “Generic nonsinusoidal phase error correction for three-dimensional shape measurement using a digital video projector,” Appl. Opt. 46(1), 36–43 (2007). [CrossRef]  

11. L. Zhang, Q. Chen, C. Zuo, T. Tao, Y. Zhang, and S. Feng, “High-dynamic-range 3d shape measurement based on time domain superposition,” Meas. Sci. Technol. 30(6), 065004 (2019). [CrossRef]  

12. S. Feng, Y. Zhang, Q. Chen, C. Zuo, R. Li, and G. Shen, “General solution for high dynamic range three-dimensional shape measurement using the fringe projection technique,” Optics and Lasers in Engineering 59, 56–71 (2014). [CrossRef]  

13. S. Zhang and S.-T. Yau, “High dynamic range scanning technique,” Opt. Eng 48(3), 033604 (2009). [CrossRef]  

14. Y. Liu, Y. Fu, Y. Zhuan, K. Zhong, and B. Guan, “High dynamic range real-time 3d measurement based on fourier transform profilometry,” Optics & Laser Technology 138, 106833 (2021). [CrossRef]  

15. F. Chen, G. M. Brown, and M. Song, “Overview of three-dimensional shape measurement using optical methods,” Opt. Eng. 39(1), 10–21 (2000). [CrossRef]  

16. S. Feng, C. Zuo, T. Tao, Y. Hu, M. Zhang, Q. Chen, and G. Gu, “Robust dynamic 3-d measurements with motion-compensated phase-shifting profilometry,” Optics and Lasers in Engineering 103, 127–138 (2018). [CrossRef]  

17. D. Huang, E. Swanson, C. Lin, J. Schuman, W. Stinson, and W. Chang, “Optical coherence tomography,” Science 254(5035), 1178 (1991).

18. M. Idesawa, T. Yatagai, and T. Soma, “Scanning moiré method and automatic measurement of 3-d shapes,” Appl. Opt. 16(8), 2152–2162 (1977). [CrossRef]  

19. Z. Zhu, D. You, F. Zhou, S. Wang, and Y. Xie, “Rapid 3d reconstruction method based on the polarization-enhanced fringe pattern of an hdr object,” Opt. Express 29(2), 2162–2171 (2021). [CrossRef]  

20. B. Chen and S. Zhang, “High-quality 3d shape measurement using saturated fringe patterns,” Optics and Lasers in Engineering 87, 83–89 (2016). [CrossRef]  

21. Z. Cai, X. Liu, X. Peng, Y. Yin, A. Li, J. Wu, and B. Z. Gao, “Structured light field 3d imaging,” Opt. Express 24(18), 20324–20334 (2016). [CrossRef]  

22. C. Jiang, T. Bell, and S. Zhang, “High dynamic range real-time 3d shape measurement,” Opt. Express 24(7), 7337–7346 (2016). [CrossRef]  

23. Z. Wu, C. Zuo, W. Guo, T. Tao, and Q. Zhang, “High-speed three-dimensional shape measurement based on cyclic complementary gray-code light,” Opt. Express 27(2), 1283–1297 (2019). [CrossRef]  

24. Q. Zhang, X. Su, L. Xiang, and X. Sun, “3-d shape measurement based on complementary gray-code light,” Optics and Lasers in Engineering 50(4), 574–579 (2012). [CrossRef]  

25. Y. Wang, L. Liu, J. Wu, X. Song, X. Chen, and Y. Wang, “Dynamic three-dimensional shape measurement with a complementary phase-coding method,” Optics and Lasers in Engineering 127, 105982 (2020). [CrossRef]  

26. Z. Wu, W. Guo, Y. Li, Y. Liu, and Q. Zhang, “High-speed and high-efficiency three-dimensional shape measurement based on gray-coded light,” Photon. Res. 8(6), 819–829 (2020). [CrossRef]  

27. P. S. Huang, S. Zhang, and F.-P. Chiang, “Trapezoidal phase-shifting method for three-dimensional shape measurement,” Opt. Eng 44(12), 123601 (2005). [CrossRef]  

28. S. Zhang and S.-T. Yau, “High-speed three-dimensional shape measurement system using a modified two-plus-one phase-shifting algorithm,” Opt. Eng 46(11), 113603 (2007). [CrossRef]  

29. C. Zuo, Q. Chen, G. Gu, J. Ren, X. Sui, and Y. Zhang, “Optimized three-step phase-shifting profilometry using the third harmonic injection,” Optica Applicata 43, 1 (2013). [CrossRef]  

30. C. Zuo, L. Huang, M. Zhang, Q. Chen, and A. Asundi, “Temporal phase unwrapping algorithms for fringe projection profilometry: A comparative review,” Optics and Lasers in Engineering 85, 84–103 (2016). [CrossRef]  
