## Abstract

In this paper we describe a high-resolution, low-noise phase-shifting algorithm applied to 360 degree digitizing of solids with diffuse light-scattering surfaces. A 360 degree profilometer needs to rotate the object a full revolution to digitize a three-dimensional (3D) solid. Although 360 degree profilometry is not new, we propose a new experimental set-up which permits full phase-bandwidth phase-measuring algorithms. The first advantage of our solid profilometer is that it uses base-band, phase-stepping algorithms providing the full data phase-bandwidth. This contrasts with band-pass, spatial-carrier Fourier profilometry, which typically uses 1/3 of the fringe data-bandwidth. In addition, phase measuring is generally more accurate than single line-projection, non-coherent, intensity-based line-detection algorithms. The second advantage is a new fringe-projection set-up which avoids self-occluding fringe shadows for convex solids; previous 360 degree fringe-projection profilometers generate self-occluding shadows because of their elevation illumination angles. The third advantage is trivial line-by-line fringe-data assembly based on a single cylindrical coordinate system shared by all 360-degree perspectives. This contrasts with multi-view overlapping fringe-projection systems, which use iterative closest point (ICP) algorithms to fuse the 3D data cloud into a single coordinate system (e.g. Geomagic). Finally, we used a 400 steps/rotation turntable and a 640x480 pixel CCD camera. Higher 3D digitized surface resolution and less noisy phase measurements are straightforward to obtain by increasing the angular-spatial resolution and the number of phase steps, without any substantial change to our 360 degree profilometer.

© 2014 Optical Society of America

## 1. Introduction

Fringe projection profilometry has been a well-known technique since the classical paper by Takeda et al. in 1982 [1]. Although this profilometry technique effectively demonstrated that 3D digitization was possible using a single carrier fringe pattern, it cannot digitize a full 360 degree 3D object. The primary reason is that one needs to position the 3D object on a turntable to have access to every solid perspective from all (360 degree) directions. As far as we know, the first researchers to implement an automated 360 degree profilometer were Halioua et al. in 1985 [2]. Halioua used a grating projector along with a 3-step phase-shifting set-up and a turntable to obtain the 360 degree profilometry of a human mannequin head [2]. Later on, in 1991, Cheng et al. also positioned the 3D object on a turntable in order to rotate it 360 degrees and obtain the full object information, and projected a carrier-frequency fringe pattern over the object [3]. Asundi published an interesting technique based on striped-light projection for 360 degree profilometry [4]. Using a light stripe from a laser diode, Chang et al. [5] automatically reconstructed a solid over 360 degrees using a neural network. Gomes et al. used a projected linear grating to study the human trunk looking for spinal deformities; they used Fourier profilometry for this purpose [6]. Later on, Song et al. used a fringe grating projector and phase-shifting interferometry of a rotating object for 360 degree profilometry [7]. Asundi et al. used time-delay-integration imaging of a rotating object for 360 degree acquisition [8]. The state of the art in 3D profilometry was reviewed in 2001 by Su and Chen, but they included only a single paper on 360 degree profilometry [9]. Afterwards, Zhang et al. used 360 degree profilometry for flow analysis in mechanical engines [10]. In 2005 Munoz-Rodriguez et al. used triangulation for 3D object reconstruction by projecting a light stripe and using Hu moments [11]. In 2008 Trujillo-Schiaffino et al. used 3D profilometry based on single-line projection and triangulation of a smooth rotationally symmetric object [12]. More recently, Shi et al. applied 360 degree fringe-pattern projection profilometry to fluorescence molecular tomography [13]. Some researchers have used shearing interferometry to project high-quality linear fringes for 360 degree profilometry [14]. In this paper we do not discuss calibration issues, because we have not used any new or non-standard calibration strategy apart from those well known in 3D profilometry [15]. Also, we have not used any new or non-standard phase unwrapping algorithm [14,15]. Given the low noise of the fringes and the noise-rejection capability of the 4-step least-squares phase-shifting demodulation, we have unwrapped our phase using simple line integration of wrapped phase differences.
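To make the last point concrete, the following is a minimal numpy sketch of phase unwrapping by line integration of wrapped phase differences. It assumes low-noise wrapped phase with no residues, as stated above; the function name `unwrap_line` is ours, not from any library.

```python
import numpy as np

def unwrap_line(wrapped):
    """1D phase unwrapping by line integration of wrapped phase differences.

    wrapped : 1D array of phase values wrapped into (-pi, pi].
    Returns the unwrapped (continuous) phase.
    """
    d = np.diff(wrapped)
    # Re-wrap each finite difference into [-pi, pi) so 2*pi jumps are removed.
    d_wrapped = (d + np.pi) % (2 * np.pi) - np.pi
    # Integrate (cumulative sum) the wrapped differences along the line.
    return wrapped[0] + np.concatenate(([0.0], np.cumsum(d_wrapped)))

# Example: a smooth phase ramp wrapped into (-pi, pi] is recovered exactly.
true_phase = np.linspace(0.0, 6 * np.pi, 200)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_line(wrapped)
assert np.allclose(recovered, true_phase, atol=1e-9)
```

This works whenever the true phase changes by less than *π* between neighboring samples, which is why the low fringe noise and the noise rejection of the 4-step demodulation matter.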

From this review we see that 360 degree profilometry is already a mature research field, and its applications in industrial inspection and robotic vision are well known. But previous efforts [2–15] have mainly concentrated on digitizing very smooth, quasi-cylindrical objects. This is because smooth, almost cylindrical 3D objects are easier to digitize with previous 360 degree profilometry techniques [2–15]. Here we have tested our proposed technique by digitizing a Rubik’s cube all around (360 degrees, almost 4*π* steradians). The Rubik’s cube is a continuous 3D-solid with sharp corners, and it is certainly not quasi-cylindrical. By digitizing a Rubik’s cube we demonstrate that our 360-degree digitizing technique is more robust and can digitize more complex 3D-surfaces than previously published approaches [2–15].

## 2. Previous 360-degree profilometers using fringe-grating or stripe-light projections

Here we review the two basic experimental set-ups that have been used for 360 degree profilometry (see Fig. 1) [2–15]. Figure 1(a) shows the standard configuration in which linear fringes are projected over the 3D sphere shown. The fringe projector has a polar phase-sensitivity angle ${\theta}_{0}$ with respect to the (*x,y*) plane. On the other hand, the stripe-light projection set-up (Fig. 1(b)) has an azimuthal phase-sensitivity angle ${\phi}_{0}$. In both cases the 3D object is rotated 360 degrees in the azimuthal direction to obtain 3D data from all possible perspectives.

Assume that the 3D-surfaces enclosing the solids are expressed in 3D cylindrical coordinates $(\rho ,\phi ,z)$ as (see Fig. 1),

$$\rho =\rho (z,\phi ),\qquad z\in [-L,L],\quad \phi \in [0,2\pi ). \tag{1}$$

The surface $\rho (z,\phi )$ is bounded within $[-L,L]$ in the $z$ direction and within $[0,2\pi )$ in the azimuthal $\phi $ direction (see Fig. 1). The projected linear fringes are aimed towards the 3D-surface $\rho (z,\phi )$ having a phase-sensitivity angle ${\theta }_{0}$ (see Fig. 1(a)). Then the object is rotated $N$ times by an incremental azimuthal angle $\Delta \phi =2\pi /N$ (see Fig. 1(a)). For each increment $\Delta \phi =2\pi /N$ one collects the pixels at the center column of the CCD $(x=0,z)$. In this way one generates an image composed of $N$ lines in the $\phi $ direction, each having the CCD's pixel-column size in the $z$ range $[-L,L]$. Therefore the composed *spatial-carrier* fringe pattern is,

$$I(z,\phi )=a(z,\phi )+b(z,\phi )\cos \left[{u}_{0}z+{u}_{0}\tan ({\theta }_{0})\,\rho (z,\phi )\right], \tag{2}$$

where $a(z,\phi )$ is the background, $b(z,\phi )$ is the fringe modulation, and ${u}_{0}$ is the spatial carrier frequency introduced by the elevation angle ${\theta }_{0}$.

On the other hand, a mathematical model for light-stripe profilometry may be (Fig. 1(b)),

$$I(x,z)=\delta \left[x-\tan ({\phi }_{0})\,\rho (z,\phi )\right], \tag{3}$$

i.e. a single bright line imaged at a lateral position proportional to the local radius $\rho (z,\phi )$.
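As a numerical illustration of how the composed spatial-carrier fringe pattern is assembled, here is a short numpy sketch. The carrier model $I = a + b\cos[u_0 z + u_0\tan(\theta_0)\rho]$ and all parameter values (`L`, `N`, `u0`, `theta0`) are illustrative assumptions, not calibrated values from the experiment.

```python
import numpy as np

# Illustrative simulation: for each azimuthal turntable step, the center CCD
# column is kept, and the N columns are stacked into one image I(z, phi).
L = 1.0                     # half-height of the digitized volume (arbitrary)
N = 400                     # azimuthal steps per revolution
n_z = 480                   # pixels along the CCD column
u0 = 40.0                   # assumed spatial carrier frequency (rad per unit z)
theta0 = np.radians(30.0)   # assumed polar phase-sensitivity angle

z = np.linspace(-L, L, n_z)
phi = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
zz, pp = np.meshgrid(z, phi, indexing="ij")

# Test surface: a sphere of radius L in cylindrical coordinates.
rho = np.sqrt(np.maximum(L**2 - zz**2, 0.0))

# Composed spatial-carrier fringe pattern, one column per azimuthal increment.
I = 0.5 + 0.5 * np.cos(u0 * zz + u0 * np.tan(theta0) * rho)
print(I.shape)  # (480, 400): N assembled center columns
```

Note the carrier term `u0 * zz` that remains in the assembled image; this is precisely what the proposed base-band set-up of the next section removes.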

## 3. Proposed improved profilometer for 360 degree solid digitizing

The improved 360-degree solid profilometry set-up is shown in Fig. 2. With this experimental set-up one maintains the higher phase resolution of interferometric phase-measuring methods while eliminating most self-generated shadows cast by the 3D solid (see Fig. 1(a)). The 3D surfaces that can be digitized with our 360 degree profilometer belong to the continuous, single-valued function space ${C}^{1}$ in cylindrical coordinates $(\rho ,\phi ,z)$,

$$\rho =\rho (z,\phi )\in {C}^{1},\qquad z\in [-L,L],\quad \phi \in [0,2\pi ). \tag{4}$$

The linear fringes are now projected with an azimuthal phase-sensitivity angle ${\phi }_{0}$, and the camera's CCD is parallel to the $(x,z)$ plane. Therefore the 3D-surface imaged over the CCD plane has the following mathematical form,

$$I(x,z,n)=a(x,z)+b(x,z)\cos \left[{u}_{0}\tan ({\phi }_{0})\,\rho (z,\phi )+\pi n/2\right],\qquad n\in \{0,1,2,3\}. \tag{5}$$

For each azimuthal increment one keeps only the central ($x=0$) CCD column-pixels $I(0,z)$. In this way, for a full azimuthal rotation $\phi \in [0,2\pi )$, one collects $N$ CCD columns at $I(0,z,\phi )$ as,

$$I(0,z,\phi ,n)=a(z,\phi )+b(z,\phi )\cos \left[{u}_{0}\tan ({\phi }_{0})\,\rho (z,\phi )+\pi n/2\right]. \tag{6}$$

From this equation, one may drop the mathematically “dummy” variable $x=0$ obtaining,

$$I(z,\phi ,n)=a(z,\phi )+b(z,\phi )\cos \left[{u}_{0}\tan ({\phi }_{0})\,\rho (z,\phi )+\pi n/2\right],\qquad n\in \{0,1,2,3\}. \tag{7}$$

Note that Eq. (7) is a base-band fringe pattern: it contains no spatial carrier.

To give an intuitive idea of the phase-shifted fringe patterns generated by our profilometer, assume that a sphere with radius $L$ is being digitized. The sphere in cylindrical coordinates $(\rho ,\phi ,z)$ is,

$$\rho (z,\phi )=\sqrt{{L}^{2}-{z}^{2}},\qquad z\in [-L,L],\quad \phi \in [0,2\pi ). \tag{8}$$

Its projection over the $(x,z)$ plane is shown in Fig. 3(a). In Figs. 3(b), 3(c), and 3(d) the phase-shifted fringe patterns correspond to,

$$I(z,\phi ,n)=a+b\cos \left[{u}_{0}\tan ({\phi }_{0})\sqrt{{L}^{2}-{z}^{2}}+\pi n/2\right],\qquad n\in \{0,1,2\}. \tag{9}$$
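The phase-stepped sphere fringes just described can be sketched in a few lines of numpy. The lumped sensitivity constant `g` (standing in for $u_0\tan(\phi_0)$) and the resolutions are arbitrary illustrative values.

```python
import numpy as np

# Phase-shifted base-band fringe patterns for a sphere of radius L,
# with phase steps of pi/2 (4-step scheme). g is an assumed sensitivity.
L = 1.0
g = 25.0                          # assumed phase-sensitivity constant
z = np.linspace(-L, L, 480)
rho = np.sqrt(L**2 - z**2)        # sphere radius profile (independent of phi)

patterns = [0.5 + 0.5 * np.cos(g * rho + np.pi * n / 2) for n in range(4)]

# Sanity check: patterns n and n+2 are in counter-phase, so their fringe
# terms cancel and the two images add to a constant background.
assert np.allclose(patterns[0] + patterns[2], 1.0)
assert np.allclose(patterns[1] + patterns[3], 1.0)
```

Because no carrier term appears in the cosine argument, the fringes are closed (base-band), as in Figs. 3(b)–3(d).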

## 4. Experimental results

Here we show the experimental results for our proposed 360-degree profilometry technique. As the test 3D-surface $\rho (z,\phi )$ we have chosen a (white-painted) Rubik’s cube mounted on its triangular base, selected for its sharp angles and high-frequency surface structure, which clearly reveal the accuracy, high resolution, and low noise of the proposed 360-degree digitizing technique.

We start by showing a white-light photograph of our Rubik’s cube in Fig. 4(a). In Fig. 4(b) we show the same cube illuminated with the linear grating along the *z* coordinate. The projection and imaging optical systems are almost telecentric, and the camera’s CCD is parallel to the (*x,z*) plane.

Figures 5(a) and 5(b) show two (out of 4) fringe patterns phase-modulated by the Rubik’s cube's cylindrical coordinate representation $\rho (z,\phi )$. Figure 5(a) has a phase-shift of 0 radians, while Fig. 5(b) has a phase-shift of *π* radians. The fact that the camera and the fringe projector both have their optical axes aimed at the middle of the Rubik’s cube permits an entire (shadow-free) digitization of the cube's 3D-surface $\rho (z,\phi )$ (see Fig. 2). In general, if the sensitivity angle $({\phi}_{0})$ is not too high and the 3D-surface belongs to $\rho (z,\phi )\in {C}^{1}$, no self-generated shadows from the object under analysis are cast over the camera view, as happens in previous grating-projection configurations (see Fig. 1(a)).

Figure 5(c) shows the wrapped phase obtained according to the following least-squares 4-step phase-shifting algorithm [16],

$$\hat{\varphi }(z,\phi )=\arg \left[\sum _{n=0}^{3}I(z,\phi ,n)\,{e}^{-i\pi n/2}\right], \tag{10}$$

where the complex sum is the analytic signal of the fringe data and $\hat{\varphi }(z,\phi )$ is the demodulated phase wrapped modulo $2\pi $.
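The 4-step least-squares demodulation can be sketched directly in numpy: the analytic signal is the sum of the phase-stepped images weighted by $e^{-i\pi n/2}$, and its angle is the wrapped phase. The function name and the synthetic test data are ours, for illustration only.

```python
import numpy as np

def demodulate_4step(I0, I1, I2, I3):
    """Least-squares 4-step phase demodulation (steps of pi/2)."""
    analytic = (I0
                + I1 * np.exp(-1j * np.pi / 2)
                + I2 * np.exp(-1j * np.pi)
                + I3 * np.exp(-1j * 3 * np.pi / 2))
    return np.angle(analytic)  # wrapped phase in (-pi, pi]

# Round-trip check with a synthetic modulating phase (no wrapping needed
# because the test phase stays inside (-pi, pi]).
true_phase = np.linspace(-2.0, 2.0, 100)
frames = [0.5 + 0.4 * np.cos(true_phase + np.pi * n / 2) for n in range(4)]
wrapped = demodulate_4step(*frames)
assert np.allclose(wrapped, true_phase, atol=1e-9)
```

The background term cancels because the four weights $e^{-i\pi n/2}$ sum to zero, which is why no separate background estimation is needed.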

Of course, higher spatial-frequency surface reconstruction is possible by increasing the azimuthal ($\Delta \phi =2\pi /N$) and spatial CCD pixel resolutions. Lower phase-noise demodulation is also possible by increasing the number *M* of phase-stepped images. Figure 6 clearly shows interesting high-frequency surface details of the Rubik’s cube.

Because the fringe patterns obtained have no spatial carrier (see Eq. (7)), we may use the full spatial Fourier spectrum to reconstruct fine details of the 3D cube's surface. In spatial-carrier interferometry one has at most half, and in practice about one third, of the available Fourier frequency space to house the 3D surface spectrum. In contrast, having closed-fringes (base-band) fringe patterns (Figs. 5(a) and 5(b)), one has the full spectral bandwidth to recover fine surface details. This high-frequency surface reconstruction (Fig. 6) cannot be seen in previously published 360-degree approaches, which render very smooth digital solid surfaces [2–15]. At the risk of becoming repetitive, we emphasize that using base-band (closed-fringes) fringe patterns allows one to reconstruct the digitized 3D-surfaces with the full theoretical spatial bandwidth that the raw digitized fringe data can hold.

## 5. Discussion of the advantages and limitations of the proposed 360-degree profilometer

During the peer-review process we received many interesting questions from our reviewers, and we feel it is worth answering them within the paper because they elucidate and clarify many interesting points not covered, or probably not fully explained, so far.

- a) Why is this a low-noise 360-degree profilometry technique? This 360-degree profilometer has low phase-noise because it uses base-band, phase-measuring, *M*-step demodulation algorithms. If additive white Gaussian noise corrupts the fringes, and least-squares *M*-step phase-shifting algorithms are used (as in this paper), we obtain an analytic signal with a *noise-power reduction of* (1/*M*) with respect to the noise power of the digitized fringes [16]. In our case the analytic signal in Eq. (10) has a noise-power reduction of (1/4) with respect to the additive noise power of each phase-stepped fringe image $I(z,\phi ,n)$, $n=\{0,1,2,3\}$.
- b) How many digital CCD images are needed to obtain each phase-stepped cylindrical fringe image $I(z,\phi ,n)$ of the solid? We need *N* CCD images to assemble *N* central CCD lines at *x* = 0 into a single fringe image $I(z,\phi ,n)$. In our Rubik’s cube experiment, for each rotation we took 400 CCD lines per phase-stepped fringe image $I(z,\phi ,n)$. The azimuthal resolution increment was Δφ = 2π/400.
- c) Which unwrapping algorithm was used? Given the low noise of the projected fringes and the 4-step least-squares phase-shifting demodulation used, we employed the most basic phase-unwrapping algorithm: line integration of wrapped phase differences.
- d) How long does it take to capture a single phase-stepped fringe pattern in Figs. 5(a) or 5(b)? The turntable was controlled by a stepping motor with an incremental angle Δφ = 2π/400. Each full turntable rotation takes about 2 seconds, during which we grab 400 central CCD pixel-lines at *x* = 0. So capturing the 4 phase-stepped fringe-pattern images takes about 8 seconds.
- e) What is the resolution of each phase-stepped cylindrical fringe image $I(z,\phi ,n)$ of the Rubik’s cube in Fig. 5? The angular and spatial resolution for each image $I(z,\phi ,n)$ in this paper is $z\in \{1,2,3,...,480\}$ and $\phi \in \{0,\Delta \phi ,2\Delta \phi ,...,400\Delta \phi \}$, with Δφ = 2π/400 and a CCD camera of 640x480 pixels.
- f) Why is this 360-degree method high resolution? The spatial 3D-surface resolution is only limited by the CCD's vertical resolution, the azimuthal angle resolution Δφ = 2π/*N* of the stepped turntable motor, and the mechanical stability of the turntable. In our Rubik's cube experiment the spatial resolution was (480,400) surface pixels representing $\rho (z,\phi )$ in cylindrical coordinates.
- g) Self-generated shadow-free digitization is only possible for topologically convex solids, isn’t it? Topologically convex 3D-surfaces form a proper subset of $\rho (z,\phi )\in {C}^{1}$. Any solid enclosed by a single-valued 3D-surface $\rho (z,\phi )\in {C}^{1}$ (see Eq. (4)) may be digitized using the proposed 360-degree technique. The 3D Rubik’s cube in Fig. 4 is not entirely convex because of its triangular mounting base: there are pairs of points on the 3D surface of the cube-base compound that cannot be joined by a straight line lying entirely within the solid (the definition of convexity). The Rubik's cube-plus-base compound is a single-valued $\rho (z,\phi )\in {C}^{1}$ surface except for the bottom of the triangular base, which could not be digitized.
- h) Why are quasi-cylindrical objects easier to digitize? In the extreme case of a 3D cylinder, $\rho (z,\phi )=R,\ z\in [-L,L],\ \phi \in [0,2\pi )$, a single CCD line at *x* = 0 would define the entire cylinder for every azimuthal angle *φ*. Continuous quasi-cylindrical objects $\rho (z,\phi )\in [R,(1+\epsilon )R]$ $(R>0,\ \epsilon \ll 1)$ are always easier to digitize and form a small subset of the broader function space $\rho (z,\phi )\in {C}^{1}$.
- i) What is the most basic mathematical condition on the 3D-surfaces that this method can digitize? As we said, the basic mathematical assumption is that the 3D boundary-surface must be a continuous, single-valued function $\rho (z,\phi )\in {C}^{1}$, $z\in [-L,L]$, $\phi \in [0,2\pi )$. For example, a coffee-mug 3D boundary-surface is not within the space $\rho (z,\phi )\in {C}^{1}$: at the handle, the mug has two-valued cylindrical radii $\rho ({z}_{1},{\phi }_{1})={\rho }_{1}$ and $\rho ({z}_{1},{\phi }_{1})={\rho }_{2}$, and one surface (i.e. $\rho ({z}_{1},{\phi }_{1})={\rho }_{1}$) occludes the other. Digitizing multi-valued cylindrical functions $\rho (z,\phi )$, such as donut-like genus-1 solids, is impossible with the proposed profilometer; one would necessarily need multiple camera perspectives, not a single one.
- j) Why is it claimed that the proposed projection-camera system is new? Isn't it the one used in standard single stripe-line 360-degree profilometry? That is true; however, our system uses more robust, less noisy, coherent phase-stepping demodulation algorithms while maintaining the shadow-free (for $\rho (z,\phi )\in {C}^{1}$) advantages of the projector-camera geometry of single-line-projection 360-degree profilometry.
- k) Is fringe-image fusion from different perspectives needed? Our method uses a straightforward line-by-line raw fringe-data assembly to obtain the digitized 3D-surface $\rho (z,\phi )\in {C}^{1}$ within a single cylindrical coordinate system $(\rho ,z,\phi )$, so no complex image fusion is needed. This is an advantage with respect to some multi-view 360-degree profilometers that must fuse or blend fringe-image perspectives residing in different coordinate systems into a common coordinate system using iterative closest point (ICP) algorithms (e.g. Geomagic). The approach proposed herein measures each object point only once; the fusion problem among different fringe-pattern perspectives is thereby eliminated, and the generation of the final 3D-surface $\rho (z,\phi )\in {C}^{1}$ is very easy.
- l) How should one understand *x* = 0 in Eq. (7)? The pixel-line coordinates given by the line (*x* = 0, *z*) define the central CCD column. We keep this line for each azimuthal increment $\Delta \phi =2\pi /N$. As mentioned, we then assemble, on a line-by-line basis, *N* CCD columns with coordinates $(x=0,z,\phi )$ for $z\in [-L,L]$ and $\phi \in [0,2\pi )$ to obtain each phase-shifted fringe pattern $I(z,\phi ,n)$ in cylindrical coordinates (see Figs. 5(a) and 5(b)).

Finally, as far as we know, this is the first time that a precise mathematical definition is given of the functional space in which the solid bounding-surfaces must reside. This functional space is the continuous, single-valued space $\rho (z,\phi )\in {C}^{1}$. The functional space $\rho (z,\phi )\in {C}^{1}$ includes all topologically convex solids as a proper subset. However, the space $\rho (z,\phi )\in {C}^{1}$ fails to include more complex 3D-surfaces, such as the boundary of a coffee mug. That is because the function space $\rho (z,\phi )\in {C}^{1}$ does not include boundary surfaces that are multi-valued in cylindrical coordinates, such as that of a coffee mug, which is a genus-1 3D-surface.

## 6. Conclusions

Here we have presented a new experimental set-up, along with its theoretical analysis, for 360 degree digitizing of solids. We have seen that whenever the 3D enclosing surface of the digitized solid is within the single-valued function space $\rho (z,\phi )\in {C}^{1}$, it can be digitized accurately and without self-occluding shadows. We have chosen as the experimental test solid a Rubik’s cube positioned over its triangular base, for its sharp corners and high-frequency surface details. As seen from the experimental results (Figs. 4, 5 and 6), we are capable of digitizing most of the non-convex Rubik’s cube-plus-base compound without self-generated shadows, because the CCD camera and the linear grating projector both have their optical axes crossing the center of the cube-base compound (Fig. 2). In contrast, previous grating approaches (see Fig. 1(a)) often generate self-occluding shadows even for convex solids, precluding the analysis of the lowest parts of the object. On the other hand, stripe-light triangulation precludes the use of more accurate phase-demodulation techniques, such as the least-squares phase-stepping algorithm used in this work. Finally, having base-band fringe patterns (Figs. 5(a) and 5(b)), one keeps the full spatial bandwidth of the raw digitized data for 3D-surface digital reconstruction. This allows visualizing higher-frequency 3D-surface details. In other words, closed-fringes pattern images allow one to analyze the digitized solids with the highest possible surface bandwidth available from the raw fringe data.

## Acknowledgments

The authors would like to acknowledge the financial support of project 177044, issued by the Mexican Consejo Nacional de Ciencia y Tecnologia (CONACYT).

## References and links

**1. **M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. **72**(1), 156–160 (1982).

**2. **M. Halioua, R. S. Krishnamurthy, H. C. Liu, and F. P. Chiang, “Automated 360 ° profilometry of 3-D diffuse objects,” Appl. Opt. **24**(14), 2193–2196 (1985). [CrossRef] [PubMed]

**3. **X. X. Cheng, X. Y. Su, and L. R. Guo, “Automated measurement method for 360 ° profilometry of 3-D diffuse objects,” Appl. Opt. **30**(10), 1274–1278 (1991). [CrossRef] [PubMed]

**4. **A. K. Asundi, “360-deg profilometry: new techniques for display and acquisition,” Opt. Eng. **33**(8), 2760–2769 (1994). [CrossRef]

**5. **M. Chang and W. C. Tai, “360-deg profile noncontact measurement using a neural network,” Opt. Eng. **34**(12), 3572–3576 (1995). [CrossRef]

**6. **A. S. Gomes, L. A. Serra, A. S. Lage, and A. Gomes, “Automated 360° degree profilometry of human trunk for spinal deformity analysis,” in *Proceedings of Three Dimensional Analysis of Spinal Deformities*, M. Damico et al. eds., (IOS, Burke, 1995), pp. 423–429.

**7. **Y. Song, H. Zhao, W. Chen, and Y. Tan, “360 degree 3D profilometry,” Proc. SPIE **3204**, 204–208 (1997). [CrossRef]

**8. **A. Asundi and W. Zhou, “Mapping algorithm for 360-deg profilometry with time delayed integration imaging,” Opt. Eng. **38**(2), 339–344 (1999). [CrossRef]

**9. **X. Su and W. Chen, “Fourier transform profilometry,” Opt. Lasers Eng. **35**(5), 263–284 (2001). [CrossRef]

**10. **X. Zhang, P. Sun, and H. Wang, “A new 360 rotation profilometry and its application in engine design,” Proc. SPIE **4537**, 265–268 (2002). [CrossRef]

**11. **J. A. Munoz-Rodriguez, A. Asundi, and R. Rodriguez-Vera, “Recognition of a light line pattern by Hu moments for 3-D reconstruction of a rotated object,” Opt. Laser Technol. **37**(2), 131–138 (2005). [CrossRef]

**12. **G. Trujillo-Schiaffino, N. Portillo-Amavisca, D. P. Salas-Peimbert, L. Molina-de la Rosa, S. Almazan-Cuellar, and L. F. Corral-Martinez, “Three-dimensional profilometry of solid objects in rotation,” AIP Conf. Proc. **992**, 924–928 (2008).

**13. **B. Shi, B. Zhang, F. Liu, J. Luo, and J. Bai, “360° Fourier transform profilometry in surface reconstruction for fluorescence molecular tomography,” IEEE J. Biomed. Health Inform. **17**(3), 681–689 (2013). [CrossRef] [PubMed]

**14. **Y. Zhang and G. Bu, “Automatic 360-deg profilometry of a 3D object using a shearing interferometer and virtual grating,” Proc. SPIE **2899**, 162–169 (1996). [CrossRef]

**15. **Z. Zhang, H. Ma, S. Zhang, T. Guo, C. E. Towers, and D. P. Towers, “Simple calibration of a phase-based 3D imaging system based on uneven fringe projection,” Opt. Lett. **36**(5), 627–629 (2011). [CrossRef] [PubMed]

**16. **M. Servin and J. C. Estrada, “Analysis and synthesis of phase shifting algorithms based on linear systems theory,” Opt. Lasers Eng. **50**(8), 1009–1014 (2012). [CrossRef]