Abstract

Three-dimensional displacement in multi-colored objects is measured by combining fringe projection (FP) and digital image correlation (DIC). Simultaneity of the measurements by both techniques is guaranteed by encoding their signals on the RGB channels of color images. By separating the illuminating sources for each technique and by using ultraviolet light for DIC, the contrast and amplitude of the registered signals are enhanced, enabling displacement measurement of multi-colored objects in dynamic events, even for objects that present a relatively large dynamic range of intensity levels. Proper selection of the light sources is supported by spectral analyses of the components of the system and by evaluation of the contrast of the registered images. Experimental results reveal that displacement measurements with relatively high accuracy can be obtained.

© 2017 Optical Society of America

1. Introduction

In optical metrology, when dynamic events are to be measured, one may resort to the use of composite color images, where up to three different signals can be encoded on the different color channels of the images. Once a color image is registered, the images corresponding to each color channel can be separated by software and used for analysis. The signals may correspond to information needed either by different optical techniques, for example by fringe projection (FP) and digital image correlation (DIC) [1–3], or by a common auxiliary method, as in phase stepping [4–8]. Furthermore, in deflectometry, color encoding has been used to enable the simultaneous registering of cross gratings [9], necessary for recovering the gradient of the refractive index, or for a performance comparison of DIC and fringe deflection [10].

In particular, one approach that uses color encoding for the measurement of three-dimensional deformation comprises two color signals: one related to FP and another to DIC [2] (we call this technique FP-DIC). In this case, FP yields the out-of-plane component of displacement, and DIC gives the in-plane component. The resulting setup is simple, since only one camera and one digital projector are used. The two needed signals can both be generated by the projector, using complementary colors [3], or the signal for DIC can be painted directly on the object as randomly located spots [2, 11].

One drawback of FP-DIC is that, for non-planar objects, the in-plane component of displacement is influenced by the out-of-plane component [12]. For the color-encoding technique, this issue has been shown to be corrected by applying a transformation equation involving the imaging lens coupled to the camera [13]. Another limitation is that color-encoded signals can be optimally retrieved only as long as the color of the object is neutral.

The aim of this work is to extend the analysis of FP-DIC to non-neutral colored objects. We show that, for multi-colored objects, results with relatively high accuracy can be obtained by introducing a variation in the setup. The variation consists in using two separate light sources, one for each technique. The optimal design of the setup is then obtained by considering the coupling effects between the color channels of the camera and the different light sources. In this work, the proper selection of the light sources is done by evaluating the contrast of the resulting images when examining a plane object.

This work is organized as follows. In Sec. 2 we introduce the mathematical model that represents the imaging of a neutral-colored object; the theory of the FP-DIC technique and of the contrast calculation is given in this section as well. In Sec. 3, the spectral characterization of the components of the setup is outlined. An analysis of contrast is presented in Sec. 4, where the optimal components of the light source are found for neutral and multi-colored objects. In Sec. 5, we include a displacement analysis that allows us to evaluate the performance of various combinations of light sources when dealing with multi-colored scenes. Then, in Sec. 6, the performance of the resulting optical system is evaluated by comparing prescribed and corresponding measured displacements for multi-colored objects; additionally, the optimized setup is used for the measurement of three-dimensional displacement in a geological model subjected to compression. Finally, in the last section, our conclusions are given.

2. Background

2.1 Imaging of colored objects

The modeling of the imaging process considers a typical FP-DIC setup, such as that depicted in Fig. 1. The setup consists of an illuminating source (a digital projector or a laser), a camera, an imaging lens, and the object under test. The schematic diagram also includes a second light source (DIC projector), which can be used in case the light sources for FP and DIC are spatially separated.


Fig. 1 Optical layout for FP-DIC technique.


In order to gain insight into the registering process of non-neutral colored objects, we need to consider the spectral characteristics of the setup components: the spectral response of the camera, the spectral content of the light source, I(x,y,λ), and the spectral reflectance of the object surface, O(x,y,λ), where (x,y) designates a particular position and λ is the light wavelength. For a color camera that incorporates a Bayer filter array, the spectral transmittance functions of the red, green, and blue filters can be denoted by r_C(λ), g_C(λ), and b_C(λ), respectively. Notice that these functions do not depend on position. Figure 2(a) depicts these three functions for the camera used in the experimental work of this report. In Figs. 2(b)-2(d), we also present the spectral reflectance of various objects (pigments) and the spectral power distribution of several illumination sources; these figures are fully described in Sec. 3.


Fig. 2 (a) Spectral response –quantum efficiency, QE– of each RGB color channel of camera sensor; (b) spectral reflectance of various pigments (primary RGB colors, yellow Y, black K, white W, gray Gy, and gold Gd); (c) spectral distribution of FP light sources (in legend, words starting with “p” stand for projector, and the second letter indicates primary RGB color or secondary CMY color; also lasB designates the blue laser); (d) spectral distribution of DIC light sources (led denotes a matrix of LEDs; R for red and IR for infrared; UV stands for ultraviolet LEDs matrix; and Na is sodium lamp); (e) Superposition of Figs. 2(a) and 2(d); (f) nonlinear behavior of FP projector (solid lines) and of projector-camera combination (dotted lines with markers); c designates camera. GL and au correspond to gray levels and arbitrary units, respectively.


Considering the components of the setup, the output of the camera for each color channel [R G B] can be described by [14]

$$\begin{bmatrix} R(x,y)\\ G(x,y)\\ B(x,y) \end{bmatrix} = \begin{bmatrix} \int_0^{\infty} r_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \\ \int_0^{\infty} g_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \\ \int_0^{\infty} b_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \end{bmatrix}. \tag{1}$$
When the illumination source corresponds to a digital projector, the spectral power distribution I(x,y,λ) can be set by the actual RGB triplet values fed to the projector, [r_P g_P b_P], which in turn depend nonlinearly on the RGB values instructed to the graphics card [14], [r_ins g_ins b_ins] (the nonlinear behavior of the projector employed in this work is presented in Fig. 2(f)). Therefore,
$$I(x,y,\lambda) = r_P(x,y)\,I_{PR}(\lambda) + g_P(x,y)\,I_{PG}(\lambda) + b_P(x,y)\,I_{PB}(\lambda), \tag{2}$$
where I_{Pn}(λ) is the spectral distribution of each illuminating primary color of the projector, with n = {R, G, B}. By substituting Eq. (2) into Eq. (1), the composite signal from the camera is
$$\begin{bmatrix} R\\ G\\ B \end{bmatrix} = \begin{bmatrix} \int_0^{\infty} r_C O I_{PR}\,d\lambda & \int_0^{\infty} r_C O I_{PG}\,d\lambda & \int_0^{\infty} r_C O I_{PB}\,d\lambda \\ \int_0^{\infty} g_C O I_{PR}\,d\lambda & \int_0^{\infty} g_C O I_{PG}\,d\lambda & \int_0^{\infty} g_C O I_{PB}\,d\lambda \\ \int_0^{\infty} b_C O I_{PR}\,d\lambda & \int_0^{\infty} b_C O I_{PG}\,d\lambda & \int_0^{\infty} b_C O I_{PB}\,d\lambda \end{bmatrix} \begin{bmatrix} r_P\\ g_P\\ b_P \end{bmatrix}, \tag{3}$$
where, for brevity, coordinates (x,y) and (λ) have been omitted.
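For readers who want to experiment with Eqs. (1)-(3), the spectral integrals can be sketched numerically. In the following snippet, all spectral curves are hypothetical Gaussians standing in for the measured responses of Fig. 2 (the centers and widths are assumptions of the example, not the measured data):

```python
import numpy as np

# Wavelength axis in nm; Gaussian curves are assumed stand-ins for Fig. 2 data.
lam = np.linspace(350.0, 750.0, 401)
dlam = lam[1] - lam[0]

def gauss(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

# Camera filter transmittances r_C, g_C, b_C (assumed shapes).
r_C, g_C, b_C = gauss(600, 40), gauss(540, 40), gauss(460, 40)
# Projector primaries I_PR, I_PG, I_PB (assumed shapes).
I_PR, I_PG, I_PB = gauss(610, 25), gauss(545, 25), gauss(465, 25)
O = np.ones_like(lam)                    # white object, O(lambda) = c0 = 1

# Crosstalk matrix of Eq. (3): element (m, n) integrates m_C * O * I_Pn d(lambda).
A = np.array([[np.sum(c * O * p) * dlam for p in (I_PR, I_PG, I_PB)]
              for c in (r_C, g_C, b_C)])
A /= A.max()                             # relative couplings, scaled to 0-1

# Camera output of Eq. (4) for an instructed pure-green triplet:
RGB = A @ np.array([0.0, 1.0, 0.0])
```

With measured curves in place of the Gaussians, the same double loop would yield the theoretical crosstalk matrix quoted in Sec. 3.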

The off-diagonal elements of this matrix correspond to coupling effects between color channels of the camera sensor. Equation (3) can be equivalently indicated by a crosstalk matrix [5, 7, 14],

$$\begin{bmatrix} R\\ G\\ B \end{bmatrix} = \begin{bmatrix} a_{rR} & a_{rG} & a_{rB}\\ a_{gR} & a_{gG} & a_{gB}\\ a_{bR} & a_{bG} & a_{bB} \end{bmatrix} \begin{bmatrix} r_P\\ g_P\\ b_P \end{bmatrix}, \tag{4}$$
where the first subindex of the matrix coefficients a_{mn}, m = {r, g, b}, designates the color channel of the camera.

To disregard the influence of the object in Eq. (4) and consider only the coupling effects of the camera and the projector, a white object can be used (or, even better, direct illumination of the camera sensor by the projector may be assumed), for which O(λ) = c_0, with c_0 a constant. In this case, the matrix coefficients can be measured by illuminating the object consecutively with pure red, green, and blue, and separating the registered images into each color channel [5, 7, 15]. The matrix coefficients can then be computed by evaluating either the modulation amplitude, for the case of fringe-pattern projection, or the average intensity, for the case of uniform illumination.
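The uniform-illumination measurement can be sketched as follows; the function below is a hypothetical helper that averages each camera channel over three captures taken under pure red, green, and blue illumination:

```python
import numpy as np

def crosstalk_from_captures(captures):
    """Build the crosstalk matrix of Eq. (4) from three registered color images.

    `captures` is a sequence of three HxWx3 arrays recorded while the projector
    emits pure red, green, and blue, respectively (uniform-illumination case,
    so the average intensity estimates each coefficient a_mn).
    """
    A = np.zeros((3, 3))
    for n, img in enumerate(captures):       # n indexes the projected primary
        for m in range(3):                   # m indexes the camera channel
            A[m, n] = img[..., m].mean()
    return A / A.max()                       # relative couplings, 0-1

# Synthetic example: an assumed ground-truth matrix plus sensor noise.
rng = np.random.default_rng(0)
A_true = np.array([[0.54, 0.34, 0.03],
                   [0.11, 1.00, 0.18],
                   [0.03, 0.32, 0.54]])
captures = [np.clip(A_true[:, n] + 0.01 * rng.standard_normal((64, 64, 3)),
                    0, None) for n in range(3)]
A_est = crosstalk_from_captures(captures)
```

Averaging over many pixels suppresses the per-pixel noise, so the estimate converges to the underlying coupling values.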

As noticed in [7], one useful feature of Eq. (4) is that it allows us to recover the instructed image from the corresponding registered image, via inversion of the crosstalk matrix. This implies that any coupling effect can be effectively reduced. In this work, the crosstalk matrix helps us select the color channels of the camera that will register the FP and DIC signals. In some tests, the crosstalk effect is to be reduced (for neutral-colored objects, Sec. 4.1), but in others, enhanced (for multi-colored objects, Sec. 4.2).
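Recovery of the instructed image by inversion of the crosstalk matrix reduces, per pixel, to solving a 3×3 linear system. A minimal sketch, using the theoretical matrix values quoted in Sec. 3 and a hypothetical instructed triplet:

```python
import numpy as np

# Theoretical projector-camera crosstalk matrix quoted in Sec. 3.
A = np.array([[0.54, 0.34, 0.03],
              [0.11, 1.00, 0.18],
              [0.03, 0.32, 0.54]])

instructed = np.array([0.8, 0.1, 0.6])   # hypothetical instructed RGB triplet
registered = A @ instructed              # what the camera records, per Eq. (4)

# Inverting the crosstalk matrix recovers the instructed signal.
recovered = np.linalg.solve(A, registered)
```

In practice the same solve is applied pixel-wise to the registered color image, which is how the coupling effect is effectively reduced.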

2.2 FP-DIC technique

Simultaneous acquisition of the FP and DIC signals can be implemented by encoding them on two distinct color channels of a camera. However, when the color separation of the images is carried out, residual interference between them may appear due to crosstalk, giving rise to noise and a consequent reduction of contrast. To partially alleviate this problem, we generally select the pair of channels that presents the weakest coupling effect, typically the blue and red channels.

By disregarding the crosstalk effect and the nonlinear response of the light source, the FP intensity image detected by the camera, corresponding to any one of the channel signals given by Eq. (4), can be expressed as [16]

$$I(x,y) = a(x,y) + b(x,y)\cos[\phi(x,y) + 2\pi f_0 x], \tag{5}$$
where (x,y) are pixel coordinates on the sensor plane, and a(x,y) and b(x,y) refer to the background illumination level and the modulation amplitude of the resulting fringes, respectively. The terms in the argument of the cosine function are the phase ϕ, which is related to the height distribution of the object surface, and the carrier frequency f_0, which allows us to apply the Fourier phase-extraction method [3, 16]. To take care of intrinsic variations of the projected grating, a compensation factor is applied to the period measured at the center of the field of view [17], P = 1/f_0, as $P' = P(1 + x\sin\alpha/l)^2$, where α is the mean angle between the projector and the optical axis of the camera (as depicted in Fig. 1), and l represents the distance between the exit of the light source and the object. The grating can be generated either by a projector or by a combination of a laser beam and a Ronchi grating [18].
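A minimal one-dimensional sketch of the Fourier phase-extraction step applied to Eq. (5) may help fix ideas; the phase map, carrier frequency, and rectangular band-pass filter below are all assumptions of the synthetic example, not the experimental values:

```python
import numpy as np

# Synthetic carrier-fringe trace following Eq. (5).
nx = 256
x = np.arange(nx)
f0 = 8.0 / nx                                    # carrier: 8 fringes across field
phi_true = 1.5 * np.sin(2 * np.pi * x / nx)      # assumed smooth phase, radians
I = 1.0 + 0.5 * np.cos(phi_true + 2 * np.pi * f0 * x)

# Fourier method: isolate the +f0 side lobe, return to space, take the angle.
spec = np.fft.fft(I)
freqs = np.fft.fftfreq(nx)
H = np.abs(freqs - f0) < f0 / 2                  # rectangular band-pass at +f0
analytic = np.fft.ifft(spec * H)
phi = np.unwrap(np.angle(analytic) - 2 * np.pi * f0 * x)   # remove carrier
phi -= phi.mean() - phi_true.mean()              # remove the arbitrary piston
```

The band-pass rejects the DC term a(x) and the conjugate side lobe, so the angle of the filtered signal carries the phase ϕ up to a constant.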

By measuring the phase in two consecutive image captures, their difference Δϕ, which is proportional to the out-of-plane component of displacement, can be computed via double application of the Fourier method. The out-of-plane displacement Δz can then be obtained by [17]

$$\Delta z = \frac{\Delta\phi}{2\pi}\,\frac{P'\cos\alpha}{\sin\alpha + (d - l\cos\alpha)\,x/(ld)}, \tag{6}$$
where d is the camera-to-object distance.
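As a worked example of Eq. (6), the sketch below evaluates Δz for the geometry reported in Sec. 4 (α = 9.3°, l = 83.5 cm, d = 66.9 cm, projected period of 2 mm); these values and the unit choices are assumptions of the example:

```python
import numpy as np

# Geometry taken from Sec. 4 of the text; units are mm and radians.
alpha = np.deg2rad(9.3)        # mean angle between projector and camera axis
l, d = 835.0, 669.0            # source-to-object and camera-to-object distances
P = 2.0                        # projected period at the center of the field

def delta_z(dphi, x):
    """Out-of-plane displacement of Eq. (6); x is the field coordinate in mm."""
    Pp = P * (1.0 + x * np.sin(alpha) / l) ** 2        # compensated period P'
    return (dphi / (2.0 * np.pi)) * Pp * np.cos(alpha) / (
        np.sin(alpha) + (d - l * np.cos(alpha)) * x / (l * d))

# Phase difference of one full fringe (2*pi) at the field center:
dz_center = delta_z(2.0 * np.pi, 0.0)
```

At x = 0 the expression reduces to Δz = Δϕ P cos α/(2π sin α), i.e., one fringe of phase change corresponds to P/tan α of out-of-plane motion.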

On the other hand, for the captured DIC images, we can similarly assume that they are free of residual FP information. Therefore, the correlation method outlined in [3] can be applied to the separated images. To discard any intrusive effect that may result from surface preparation, such as spraying or painting, the object is illuminated uniformly so that the speckle signal of the DIC images is formed by the natural texture of the object surface. In this case, however, for objects with relatively smooth surfaces, the contrast of the images may become too low. To overcome this situation and, in general, to increase the contrast of the images, a separate light source for DIC is proposed. As described in Sec. 4, a separate DIC light source also enables us to analyze objects with both multicolor content and a large dynamic range.

2.3 Contrast calculation

Since the accuracy of displacement measurements depends largely on the contrast of the registered images, we use this parameter to estimate the performance of the employed light sources. Because the nature of the spatial structures in FP and DIC images is different, two expressions for contrast are defined.

For FP, we define the following expression for fringe contrast,

$$C_{FP} = \left\langle \frac{b(x,y)}{a(x,y)} \right\rangle \left(\frac{w_{F,MIN}}{w_F}\right) \left(\frac{K_{MIN}}{K}\right)^{1/3}, \tag{7}$$
where angular brackets denote spatial ensemble averaging. The first factor of Eq. (7), $\langle b(x,y)/a(x,y)\rangle$, refers to the common definition of contrast, which incorporates the parameters appearing in Eq. (5), i.e., the modulation amplitude and the background level [19]. These two parameters can be obtained by filtering the Fourier spectrum of the corresponding image I(x,y) and transforming back to the spatial coordinates: $a(x,y) = |\mathcal{F}^{-1}\{\tilde{I}(f_x,f_y)\,H(f_x,f_y)\}|$ and $b(x,y) = 2\,|\mathcal{F}^{-1}\{\tilde{I}(f_x,f_y)\,H(f_x - f_0,f_y)\}|$, where $\mathcal{F}\{\cdot\}$ is the Fourier-transform operator, $\tilde{I}(f_x,f_y) = \mathcal{F}\{I(x,y)\}$, and $H(f_x,f_y)$ is a band-pass filter applied to the image spectrum; $(f_x, f_y)$ are frequency-domain coordinates.

The contrast of FP images is reduced by the presence of any residual speckle noise (arising mainly from the DIC signal). We found that the larger the noise content, the wider the spectral side lobes. Thus, an additional multiplicative parameter is included in Eq. (7), w_{F,MIN}/w_F, where w_F is the width of the first spectral lobe of any image, at 85% of the maximum level (at this percentage, the shape of the spectral lobe is smooth and identification of the corresponding pixel is straightforward), and w_{F,MIN} is the minimum value of w_F, corresponding to the image with maximum contrast. The third factor in Eq. (7), (K_{MIN}/K)^{1/3}, is included to further enhance FP contrast differences between images; this factor weighs the level of scattering of intensity values with respect to the mean, where K is the number of points that fulfill the condition $|(I(x,y) - \langle I(x,y)\rangle)/\langle I(x,y)\rangle| < 0.1$; then, K_{MIN} corresponds to the image with the highest contrast. In the latter condition, a relatively small threshold value, 0.1, is used since the intensity values of low-contrast images are generally concentrated within a narrow range.
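The filtering operations that produce a(x,y) and b(x,y) can be illustrated with a one-dimensional synthetic fringe trace; the rectangular band-pass and its half-width below are assumptions of this sketch, not the filter used experimentally:

```python
import numpy as np

def fp_modulation(I, f0, halfwidth):
    """Estimate a(x) and b(x) of Eq. (5) by Fourier filtering a 1-D fringe trace.

    A rectangular filter of the given half-width (cycles/sample) is centered on
    DC for a(x) and on the carrier f0 for b(x), following Sec. 2.3.
    """
    spec = np.fft.fft(I)
    freqs = np.fft.fftfreq(I.size)
    a = np.abs(np.fft.ifft(spec * (np.abs(freqs) < halfwidth)))
    b = 2.0 * np.abs(np.fft.ifft(spec * (np.abs(freqs - f0) < halfwidth)))
    return a, b

# Synthetic check with known background and modulation amplitude.
n = 512
x = np.arange(n)
f0 = 16.0 / n
I = 2.0 + 0.6 * np.cos(2 * np.pi * f0 * x)
a, b = fp_modulation(I, f0, halfwidth=f0 / 2)
contrast = np.mean(b / a)            # first factor of Eq. (7)
```

For this noise-free trace the recovered background and modulation match the known values, so the first factor of Eq. (7) evaluates to 0.6/2.0.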

For DIC, we define contrast as

$$C_{DIC} = \left\langle \frac{\sigma_{local}}{\langle I\rangle_{local}} \right\rangle \left(\frac{\Gamma_{MAX}}{\Gamma}\right) \left(\frac{K_{MIN}}{K}\right), \tag{8}$$
where the subindex local refers to a subset of data (subimages of 7x7 pix, where pix denotes pixels) and σ_local is the respective standard deviation. The first factor in Eq. (8) refers to the expression generally used for laser speckle fields [17]. Analogously to the case of FP contrast, two more factors are added to boost contrast differences. One of the additional factors considers the effect of any residual fringes, and the other follows the same reasoning as in FP (notice the different exponent of the last factor, which reflects the fact that the spatial features of fringes and speckle are different). The first additional factor is Γ_MAX/Γ, where Γ is the maximum amplitude of the first side lobe in the frequency domain; hence, Γ_MAX represents Γ for the image with maximum contrast.
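The first factor of Eq. (8) can be sketched as follows; the non-overlapping 7x7 subsets follow the text, while the synthetic exponential-intensity speckle field is an assumption of the example (for fully developed speckle the local std/mean ratio is close to 1):

```python
import numpy as np

def dic_local_contrast(img, subset=7):
    """First factor of Eq. (8): average over subsets of local std / local mean.

    Uses non-overlapping subsets of `subset` x `subset` pix; border rows or
    columns that do not fill a subset are discarded.
    """
    h = img.shape[0] - img.shape[0] % subset
    w = img.shape[1] - img.shape[1] % subset
    blocks = (img[:h, :w]
              .reshape(h // subset, subset, w // subset, subset)
              .transpose(0, 2, 1, 3)
              .reshape(-1, subset * subset))
    return np.mean(blocks.std(axis=1) / blocks.mean(axis=1))

# Synthetic fully developed speckle: exponential intensity statistics.
rng = np.random.default_rng(1)
speckle = rng.exponential(scale=1.0, size=(70, 70))
C = dic_local_contrast(speckle)      # expected near 1 for developed speckle
```

A low-contrast (nearly uniform) image would instead yield local std/mean values close to zero.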

Contrast values for FP and DIC are scaled to the range 0-100; the two scales are unrelated.

3. Spectral characterization

Spectral specifications of the components of the experimental optical setup are presented in Fig. 2. Figure 2(a) shows the spectral response of the camera sensor (Lumenera Lt225c, 1088x2048 pix, 18-55 mm Nikon lens) [20] for each of the RGB primary colors (abbreviations are defined in the figure caption). We can notice the ability of the sensor to detect near-IR and UV light, as well as the overlap between neighboring and non-neighboring channels. Two additional vertical lines are incorporated in this figure to indicate the location of the emission spectrum of a sodium lamp (emission peak at 589 nm, Pasco OS-9287B) and of the UV LEDs (peak at 394 nm, bandwidth of 7 nm).

Figure 2(b) depicts the spectral reflectance curves of the color-saturated pigments used in this work. Reflectance measurements were done with a PerkinElmer Lambda 900 spectrometer. These particular pigments are commonly used in the local leather-footwear industry, and therefore they show highly saturated colors. We selected various representative pigment colors: red, green, blue, yellow, black, white, gray, and gold; gray and gold are slightly darker variations of white and yellow, respectively. From the plots in Fig. 2(b), we can see that all pigments present significant reflectance values close to the UV and around 850 nm. By considering the vertical lines that mark the location of the UV and sodium light sources, we can note that UV is reflected almost equally by all pigments (except the white and gray pigments). The same holds for the sodium light source, but only for the red, green, blue, and black pigments. The fact that, at these wavelengths, the reflectance barely depends on the object color suggests the possibility of analyzing either multi-colored objects or scenes with a high dynamic range.

Also observed in Fig. 2(b) are the high reflectance values in the IR, with the red and yellow pigments standing out.

In Fig. 2(c), spectral distributions of light sources, for FP, are presented. These curves were obtained by an Ocean Optics spectrometer, USB2000. Two FP light sources are used: a blue laser (OxLasers, OX-BX8 Pro, 4 W, emitting at 447 nm), denoted by lasB, and a digital projector –Epson EH-TW-6100, 1080x1920 pix, 2300 lm–. Spectral compositions of the pure colors produced by the projector are included, and are indicated by “p” –all of them show a broad spectral width–. Analogously, Fig. 2(d) contains the spectral distributions of the light sources for DIC. In this case, five distinct sources are analyzed: one matrix of 4x10 5-W IR LEDs –center wavelength at 850 nm–, (ledIR); a matrix of 20x20 1-W red LEDs –at 640 nm– (ledR); a matrix of 4x10 3-W UV LEDs –at 394 nm– (UV); a DIC projector that produces uniform illumination –Epson PowerLite, 800x1280 pix, 3000 lm–; and a sodium lamp (Na). For the DIC projector, only the CMY colors are used. As shown, spectral distributions of the two projectors are different. Besides, their spectral distributions reveal the presence of weak crosstalk.

For visualization purposes, in Fig. 2(e) we present the superposition of the spectral curves of the camera sensor, the DIC light sources, and the blue laser. Some facts are worth highlighting: (1) IR from the LED matrix is detected almost equally by all camera channels; (2) both UV from the LEDs and light from the blue laser are strongly detected by the blue channel but weakly registered by the green channel; in this case, the response of the red channel is larger than that of the green channel; (3) red LED illumination is selectively detected by the red channel; (4) the red spectral distribution is strongly reflected by the W, Gy, Gd, and R pigments, but weakly by the G and B pigments.

In general, response of the camera to the colors of the projectors is defined by the crosstalk matrix, as is derived below.

As shown in Sec. 4, in selecting the optimum color from the FP projector, the values of the individual electronic gains of the camera channels have to be adjusted. This partially compensates for the nonlinear response of both projector and camera. In the case of the projector, the actual generated light intensity and the RGB triplet values instructed to the graphics card show a nonlinear relationship. In Fig. 2(f) we present the nonlinear behavior of the projector, indicated by solid curves. The actual intensity values are measured by a Newport M-815 power meter located at the exit of the projector. In the case of white light, the RGB triplet is set to (255,255,255). As noticed, the white-light intensity is approximately equal to the sum of the pure primary colors. Moreover, note the imbalance between primary colors, with green being clearly favored compared to red and blue.

Now, to understand the effect of the camera on the projector throughput, a white object is illuminated by the FP projector (projecting each primary color one at a time) and the reflected signal is detected by the camera. Similarly to the reinforcement of green by the projector, the camera notably reinforces the signal from the blue channel, as shown by the dotted curves with markers in Fig. 2(f) (the corresponding gray-level values produced by the camera software are scaled to mW in such a way that the maxima for the green curves coincide). From the two sets of curves, solid and dotted, we observe that the curves for red and green tightly coincide, and therefore the camera response to red and green is approximately flat throughout the dynamic range. To reach the latter conclusion, we have to consider that the measured response of the camera encompasses the nonlinearities of both camera and projector. As shown below, the theoretical crosstalk matrix of the camera-projector combination yields (a_rR = 0.54, a_gG = 1.00, a_bB = 0.54). When comparing these values with the normalized RGB values given by the dotted curves of Fig. 2(f) at a gray level of 255, (a_rR = 0.51, a_gG = 1.00, a_bB = 0.85), we clearly note the difference in the blue response, which should stem from the approach adopted by the camera software to process the RGB signal.

The theoretical crosstalk matrix of the projector-camera combination can be obtained by considering the spectral distributions of the RGB outputs of the FP projector, Fig. 2(c), and the spectral response of the camera sensor, Fig. 2(a), via Eq. (3), with the object spectral reflectance O(x,y,λ) set to 1: (0.54, 0.34, 0.03; 0.11, 1.00, 0.18; 0.03, 0.32, 0.54), where semicolons separate rows. From this theoretical matrix, we can notice that the signal from the green channel of the projector is strongly coupled onto both the red and blue channels of the sensor, with values of 0.34 and 0.32, respectively (couplings are scaled within the range 0-1). Smaller couplings correspond to the green channel of the sensor when red and blue are projected (0.11 and 0.18, respectively). Also, the overlap between red and blue is relatively weak (0.03). Additionally, the differences in the diagonal values account for the imbalance between the three primary colors (0.54, 1.00, and 0.54), which are similar to the values given by the normalized nonlinear response of the projector at a gray level of 255 (0.50, 1.00, and 0.55), derived from the solid-line plots in Fig. 2(f).

Values of the latter theoretical coupling matrix are verified experimentally. First, the gains of the camera channels are set to 1 (no-gain condition); then the camera registers a uniform light field coming directly from the FP projector (no object is used), where each primary color is projected sequentially; instructed values are indicated by triplets, e.g., (255,0,0) for pure red. The resulting matrix is (0.49, 0.00, 0.00; 0.04, 1.00, 0.12; 0.00, 0.00, 0.84), which is represented graphically by the three leftmost bars in Fig. 3(a), case 1.0r (the first bar is formed by 0.49, 0.04, and 0, in normalized form; i.e., this bar refers to the recorded RGB signals when red is projected). We can observe that the coupling values are similar to those obtained theoretically, except for the second bar, where the red and blue crosstalk signals become practically null.


Fig. 3 Relative coupling values. For (a) FP projector; (b) UV (U), blue laser (L), IR (I); (c) Dependence of coupling on instructed RGB value to FP projector.


As the gains of the camera channels are varied (the minimum and maximum camera gains are 1 and 4, respectively), the crosstalk matrix of the FP projector is modified. This behavior is presented in Fig. 3(a). In this figure, each bar represents one column of the corresponding coupling matrix, and the length of each RGB color segment designates the coupling value to the corresponding RGB channel; the labels located directly below the bars, {R, G, B}, denote the illuminating primary color. For visualization purposes, the lengths of all bars are set to 1; thus, these plots indicate the relative response of each color channel to the corresponding illuminating color produced by the projector. Further, the bottommost horizontal labels designate both the gain value and the boosted camera channel; e.g., 2.5r implies that the gain of the red channel is 2.5 and those of the others, green and blue, are 1.0.

Considering the first three groups of bars in Fig. 3(a), it is observed that when the gain of the red channel is changed from 1.0 to 2.5 and 4.0 (while the other gains are set to 1.0), the red signals detected by all camera channels increase, while the green and blue signals decrease. Accordingly, the crosstalk matrix changes, and therefore the matrix depends on the camera gains. A similar result is obtained when the gain of either of the other channels is increased; however, when the green gain is augmented, the effect is more marked.

It is worth pointing out that the crosstalk values obtained by direct projection of light onto the camera are almost identical to those measured by using an intermediate white object, as done in [5]. Further, when the uniform illuminating field is replaced by a pattern of fringes, only slight differences are noticed. In all the analyses presented in this report, it is assumed that all channel gains of the projectors are set to 1.0.

In Fig. 3(b) we include an analysis similar to that done for the FP projector, but for the DIC sources (UV and IR) and the blue laser, denoted by U, I, and L, respectively. In this case, however, a complete crosstalk matrix cannot be formed for each light source, but only one column. Thus, a bar corresponds to the RGB signals registered by the camera when a particular light source is used. Considering the case of all gains set to 1.0 (denoted by 1.0r), we note that both UV and the blue laser are mostly detected by the blue channel, and that IR couples almost equally onto all channels. Besides, for the UV case, there is a non-negligible coupling onto both the red and green channels. Additionally, as the gain of only one camera channel is increased, a behavior similar to that found for FP is obtained; that is, the signal of the reinforced color channel increases at the expense of the signals of the other channels.

Apart from the channel gains of the camera, another parameter that influences the coupling matrix is the intensity of the illuminating beam. In particular, for the FP projector, the intensity of the illuminating beam is directly related to the instructed RGB values. Accordingly, the crosstalk matrix is measured as the intensity of each projector channel is varied from 0 to 255; the resulting matrices are exhibited in Fig. 3(c), with all camera channel gains set to 1. Each group of three bars corresponds, from left to right, to red, green, and blue illumination, respectively. Each group is associated with a particular gray level; for example, an abscissa value of 45 implies illumination triplets (45,0,0), (0,45,0), and (0,0,45). Notice that no evaluation is done for gray values smaller than 30, since in these circumstances the noise level is on the order of the signal. The information given by Fig. 3(c) is the same as that shown by the dotted lines (with markers) in Fig. 2(f), but normalized. One advantage of Fig. 3(c) is the straightforward visualization of how the fractional composition of the camera RGB signals varies with intensity; for instance, as the gray level is diminished, the green signal increases, to the detriment of the red and blue signals.

4. Contrast analysis

4.1 Neutral-colored objects

As mentioned in Sec. 2, contrast is taken as the evaluation criterion for the proper selection of light sources for FP and DIC when these techniques are used simultaneously. Therefore, all possible combinations of FP and DIC light sources are tested on representative object colors (pigments); the object under analysis is a diffusely reflecting flat surface formed by a layer of pigment (thickness of 5 mm and rms roughness of 0.7 mm). The DIC light beam presents constant intensity, and the FP beam is formed by a pattern of fringes.

Several configurations for the illumination sources are tested (to name them, the first component of the name corresponds to the device generating the DIC signal and the second one, to the generator of the FP signal): (1) just one projector [denoted by (1p); here, both signals for DIC and FP are produced by only one projector], e.g. W-C(1p), meaning cyan fringes embedded in white background, (2) two projectors, (2p), e.g. C-R(2p), implying projection of a cyan uniform beam generated by DIC projector and red fringes by FP projector (half a fringe period is black), (3) projector-blue laser, p-lasB, e.g. pY-lasB –yellow background produced by DIC projector and fringes by blue laser–, (4) matrix of IR LEDs-blue laser, IR-lasB, (5) matrix of IR LEDs-projector, IR-p, (6) matrix of red LEDs-blue laser, ledR-lasB, (7) matrix of red LEDs-projector, ledR-p, (8) matrix of UV LEDs-blue laser, UV-lasB, (9) matrix of UV LEDs-projector, UV-p, (10) low-pressure sodium lamp-projector, Na-p (this configuration is used only for multi-colored objects, Sec. 4.2).

Previous works using configuration (1) have already been reported [2,3,11,13]. In configurations (2)-(10), the illuminating beams for DIC and FP are produced by separate light sources. Separating the light sources for FP and DIC allows us to widen the possibilities for adapting the illuminating beams to the features of the objects, such as the spectral reflectance distribution and the level of roughness. In this way, the intensities of the two beams can be adjusted independently of each other; consequently, high-power sources can be incorporated into the setup, enabling the imaging of large and even dark objects. Additionally, the illumination angle for DIC can be selected so as to optimize the contrast of the DIC images (shadows can be avoided by using multiple DIC sources positioned so as to achieve both different illumination angles and different directions of illumination).

For FP and DIC, images of size 731x2048 pix are employed, corresponding to 10.6x29.8 cm2. The angles of illumination α and β, for FP and DIC, respectively, are 9.3° and 30° (see Fig. 1). To promote the white-speckle effect of the DIC images, and hence their contrast, β is set to relatively low values.

Considering FP and DIC, the distances from illumination source to object are 83.5 cm and 55 cm, respectively (the object-to-camera distance is 66.9 cm). In FP, a projected period of 2 mm (duty cycle of 50%) is used [3], generated by either the FP projector or the blue laser; when the projected grating is generated by the blue laser, a combination of a Ronchi ruling of 1000 lines/in and a 20x microscope objective is used (the distance between ruling and objective is 21.5 mm).

In Fig. 4, we include exemplary zoomed-in images (58x87 pix) obtained at each stage of the contrast-calculation procedure. Each object color is represented by one row of images, which includes, from left to right: the captured image under white illumination (designated as the real image); the image recorded with the optimal combination of light sources, considering only FP contrast; and the separated images associated with the camera color channels. As verified by the quantitative results in Fig. 5(a), the best combination of light sources for maximum FP contrast comprises a uniform beam produced by the red LEDs (DIC light source) and blue fringes from the FP projector (combination ledR-pB); this is an expected result, since the red and blue channels present the lowest crosstalk between them. Therefore, in all cases presented in Fig. 4 [except case (b)], the light combination corresponds to ledR-pB, and the fringe pattern is extracted from the blue channel and the DIC image from the red channel. In the case represented by Fig. 4(c), for the green pigment, ledR-pG (red background produced by the LED matrix and green fringes by the FP projector) gives a slightly larger contrast than ledR-pB (recall that the green crosstalk to the blue channel is relatively strong).

Fig. 4 Exemplary zoomed-in processed images for each neutral-colored object (we show only cases of optimal illumination for maximum FP contrast). (a) Black pigment. (b) Red pigment. (c) Green pigment. (d) Blue pigment. (e) Yellow pigment.

Fig. 5 Plots of contrast. (a) FP, (b) DIC. The legend is common to both plots, and the notation is as follows: the first part of the name refers to the DIC light source and the last part to the FP light source; “p” stands for projector; (1p) means that only one projector (the FP projector) is used for both FP and DIC, and (2p) denotes that two different projectors are used (one projector for FP and another for DIC). An instance of a light combination is UV-pR: a matrix of UV LEDs is used for DIC, and light from the red channel of the FP projector for FP (red fringes on a black background). More details are given in the text.

The gains of the camera channels for each case in Fig. 4 are as follows: Fig. 4(a), [3.2, 1.0, 1.0]; Fig. 4(b), [1.0, 1.0, 1.7]; Fig. 4(c), [1.2, 1.0, 1.5]; Fig. 4(d), [1.2, 1.0, 1.0]; and Fig. 4(e), [1.0, 1.0, 1.0], where the triplets refer to [R, G, B] (the exposure time is 500 ms in all cases). Notice that the gain of the red channel is generally large, which compensates for the reduction of the red signal by the projector, as pointed out in Sec. 3; this compensation effect can be seen by comparing cases 1.0r and 2.5r in Fig. 3(b). From the recorded images, we can observe that all images are notably red. A consequence of the increased red gain is a reduction of residual fringes in the DIC images; to avoid the emergence of residual speckle in the FP images, the intensity of the DIC source should be moderate.

Complete quantitative results of contrast are presented in Fig. 5, for fringe illumination (FP) in Fig. 5(a) and for spatially uniform illuminating light (DIC) in Fig. 5(b). These data are obtained by using Eqs. (7) and (8), respectively. For DIC, the images are divided into subimages of 15 × 15 pix (with no overlapping), so the reported values correspond to the average of the contrast of the subimages. The numerical values of the contrast in the two plots are unrelated; in each plot, the maximum of the scale is assigned the value 100.
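Eqs. (7) and (8) are not reproduced in this excerpt; as an illustration of the block-averaging step only, the sketch below uses RMS contrast (standard deviation over mean) on non-overlapping 15 × 15 pix subimages, which stands in for the paper's contrast metric.

```python
import numpy as np

def mean_block_contrast(img, block=15):
    """Average contrast over non-overlapping block x block subimages.
    RMS contrast (std/mean) stands in for the metric of Eq. (8)."""
    h, w = img.shape
    vals = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            sub = img[i:i + block, j:j + block].astype(float)
            m = sub.mean()
            if m > 0:
                vals.append(sub.std() / m)
    return float(np.mean(vals))

rng = np.random.default_rng(0)
speckle = rng.random((60, 60)) * 255.0  # synthetic fully developed speckle
print(mean_block_contrast(speckle) > 0.5)  # True
```

Averaging over subimages makes the reported value robust to slow intensity variations across the field, which a single global contrast figure would conflate with the speckle or fringe modulation itself.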

As observed from Fig. 5(a), considering all pigment colors, the optimal light-source combination for FP is ledR-pB (red LEDs for DIC and blue light from the FP projector for FP). Similarly, from Fig. 5(b), the combination that produces the highest DIC contrast is C-R(2p) (cyan from the DIC projector and red from the FP projector). Further, considering both techniques, the combination with the highest performance is ledR-pB. In addition, one-light-source configurations (shown by the black line) present low contrast in DIC; this is mainly because the DIC illumination incidence angle is fixed by the FP angle, which is generally set to small values to avoid the production of shadows.

In general, to achieve the best performance of any combination of light sources, the intensity of the FP source is first maximized (up to a gray level between 150 and 255) to obtain high-contrast fringes and a relatively low content of residual speckle in the FP image. The gain of the FP channel should be small to obtain a low background level. Then, the DIC intensity is added, taking care that no image saturation occurs and that no residual speckle is introduced in the FP image. Besides, to reduce the content of residual fringes in the DIC image, the gain of the DIC channel should be relatively large.

4.2 Multi-colored objects

In previous works [14, 21], measurement of shape in multi-colored objects has been reported; in those works, color-encoded signals are used, and the main reported limitation is related to the identification of the signals. In the FP-DIC method, the same problem arises when the object contains multiple colors, and evidently the previous results obtained for neutral objects do not hold. In multi-colored objects, it is common to deal with relatively large dynamic ranges of the scenes, which complicates measurements.

To gain insight into the behavior of multi-colored objects, we analyze three different scenes, each with a distinct composition of colors. The three cases are presented in Fig. 6, where the first row of images, Fig. 6(a), represents an object formed by 5 stripes of different colors (from each stripe, a zoomed-in region of 58 × 29 pix is taken, and these small regions are arranged as shown). The figures follow the same spatial arrangement used in Fig. 4. In Figs. 6(a) and 6(b), only the combination of light sources with the best overall performance (UV-pB) is shown (to evaluate the distinct light combinations, we proceed as in the case of neutral objects, but measuring the contrast in each stripe for each light combination). Figure 6(b) refers to an object composed of 6 distinct colors (a stripe with close-to-white pigment is added to the 5-color case), and this variation allows us to model a larger dynamic range at UV.

Fig. 6 Exemplary zoomed-in processed images for each multi-colored object (we show only cases using optimal illumination). (a) 5-color object. (b) 6-color object. (c) 4-color object.

The third row of images, Fig. 6(c), displays the case of an object with only 4 colors, but illuminated by Na-pG (in this case, the size of the subimages is 58 × 44 pix). In some subimages, a cross indicates that the subimage should not be considered for analysis, since it is not part of the full image. In contrast, white subimages do correspond to recorded full images, and they are associated with fully saturated cases.

Note that the signals of FP and DIC are extracted from the green and red channels, respectively. The blue channel is not used because it readily saturates when analyzing objects with a high dynamic range.
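The channel separation for this multi-colored case can be sketched minimally as follows (the H × W × 3 array layout is an assumption about how the frame is loaded, not a detail from the paper):

```python
import numpy as np

def split_signals(rgb):
    """Separate a composite color image (H x W x 3 array) into the two
    encoded signals: fringes on the green channel, speckle on the red one."""
    fp_image = rgb[..., 1]   # green channel -> fringe pattern (FP)
    dic_image = rgb[..., 0]  # red channel -> speckle field (DIC)
    return fp_image, dic_image
```

Once separated, the two technique-specific images are processed independently, which is what makes the simultaneous single-shot measurement possible.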

The camera channel gains and exposure time for each part of Fig. 6 are: Fig. 6(a), [2.5, 1.0, 1.0, 60]; Fig. 6(b), [1.7, 1.0, 1.0, 20]; and Fig. 6(c), [2.4, 1.0, 1.0, 500], where the elements of the quadruplets refer to the R, G, and B channel gains and the exposure time in ms, respectively. By increasing the gain of the red channel of the camera, the spatially uniform UV beam is forced to couple into the red channel, and this gives rise to the DIC signal [as depicted in Fig. 3(b), case 2.5r]. The UV intensity is selected so as to avoid a large background in the FP image. For the scene with 6 colors, not all subimages corresponding to the blue channel saturate, since the overall intensity level is set to a lower value than in the 5-color scene (recall that incorporation of the gray pigment causes a larger dynamic range).

In the case of sodium-lamp illumination, a relatively large exposure time is used; this compensates for the relatively low power of the lamp.

With regard to multi-colored objects, the most important requirement for an appropriate light combination is that both the DIC and FP light beams be reflected as uniformly as possible by all stripe colors. Additionally, the camera channel where fringes are to be detected should couple as little as possible to the color used for speckle, and vice versa [Figs. 2(b), 2(c) and 2(e) help us visualize these requirements]. Let us analyze, for example, the suitability of IR-pB. From Fig. 2(b), we see that the reflectance values of the black and yellow pigments at IR (850 nm) are relatively dissimilar; hence the dynamic range of the scene becomes large, and any light combination using IR will not yield acceptable displacement measurements. Besides, as IR is detected by all camera channels, the resulting signal becomes a large background, which in turn decreases the contrast of the fringes.
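The uniform-reflection criterion can be quantified as the ratio of the highest to the lowest stripe reflectance under a given source. The numbers below are hypothetical stand-ins for the measured curves of Fig. 2(b), used only to illustrate the comparison between IR and UV:

```python
def scene_dynamic_range(reflectances):
    """Ratio of highest to lowest stripe reflectance; values near 1 mean
    the source is reflected almost uniformly by all stripe colors."""
    r = list(reflectances)
    return max(r) / min(r)

# hypothetical stripe reflectances (K, R, G, B, Y) at two wavelengths
ir_850 = [0.08, 0.85, 0.80, 0.75, 0.90]   # IR: black reflects far less
uv_385 = [0.25, 0.30, 0.28, 0.35, 0.32]   # UV: much more uniform
print(round(scene_dynamic_range(ir_850), 1))  # 11.2
print(round(scene_dynamic_range(uv_385), 1))  # 1.4
```

A ratio near 1 means a single exposure can capture all stripes without saturating the bright ones or losing the dark ones, which is the practical reason UV outperforms IR here.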

For configurations that use only one projector, difficulties arise from the limitation to independently control the intensities for FP and DIC and from the limitation to adjust the DIC illumination angle. Therefore, one-projector configurations present the lowest performance.

Unlike IR, UV is reflected more uniformly by all pigments, and therefore some combinations incorporating UV, like UV-pB, produce the best results; the blue fringes are registered on the green channel by crosstalk (the blue channel saturates and is useless), and the speckle field is detected on the red channel. This can be seen from Fig. 3(b), case 2.5r, where UV is strongly detected by the red channel (this helps enhance the speckle contrast) and weakly by the green channel (causing a low background in the FP image).

In all three types of tested scenes, the camera gamma value is adjusted to increase the signal from low-reflectance stripes (the gamma value is 4.0 for the 5- and 6-color scenes and 2.0 for sodium illumination; the scale of gamma values is 0.1-10, with 1.0 representing a linear relationship between input intensity and output gray scale).
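The gamma adjustment itself is a camera setting; a sketch of the equivalent intensity transformation, assuming the common power-law form on an 8-bit scale, shows why gamma values above 1 lift the dark stripes:

```python
def apply_gamma(intensity, gamma=4.0, full_scale=255.0):
    """Power-law gamma mapping: gamma > 1 boosts low intensities,
    lifting the signal from low-reflectance stripes."""
    return full_scale * (intensity / full_scale) ** (1.0 / gamma)

# a dark pixel at 10% of full scale rises to about 56% with gamma = 4
print(round(apply_gamma(25.5) / 255.0, 2))  # 0.56
```

The same mapping compresses bright values toward full scale, so it trades some contrast in high-reflectance stripes for usable signal in the dark ones.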

5. Displacement analysis

The performance of light combinations can be complementarily evaluated through the accuracy of the measurement of three-dimensional displacement. For this aim, prescribed displacements are measured. The prescribed displacements are of 1 mm, both in-plane and out-of-plane, and they are produced by a Thorlabs stage with an accuracy of 1.25 µm. The out-of-plane displacement is obtained by Eq. (6), and the in-plane displacement, as described in [3], by the proVision-XS PIV software from IDT, using subimages of 24 × 24 pix. The influence of the out-of-plane component on the in-plane results is compensated as in [13].
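The in-plane processing is done here with commercial PIV software, which is not reproducible in this excerpt. As a minimal stand-in, the sketch below recovers the integer-pixel shift of a 24 × 24 pix subimage by FFT cross-correlation, the core operation underlying DIC/PIV (subpixel refinement and the out-of-plane compensation of [13] are omitted):

```python
import numpy as np

def subimage_shift(ref, cur):
    """Integer-pixel in-plane shift of a subimage via FFT cross-correlation
    (a minimal stand-in for the commercial DIC/PIV processing)."""
    r = ref - ref.mean()
    c = cur - cur.mean()
    corr = np.fft.ifft2(np.fft.fft2(c) * np.conj(np.fft.fft2(r))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window to negative values
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)  # (dy, dx) in pixels

# synthetic 24x24 speckle pattern displaced by (2, 3) pixels
rng = np.random.default_rng(0)
ref = rng.random((24, 24))
cur = np.roll(ref, (2, 3), axis=(0, 1))
print(subimage_shift(ref, cur))  # (2, 3)
```

Because the correlation is computed per subwindow, the reported in-plane displacement is an average over the window, which is also why the in-plane maps are smoother than the point-wise out-of-plane results discussed below.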

In this analysis, images of a scene with different color content are registered, and then each color stripe is separated into its RGB signals. When building up the object under analysis, V-shaped depressions are formed along the boundaries between neighboring stripes. The displacement is calculated for individual color stripes. Figure 7 illustrates the results of one of the measurements; Figs. 7(a) and 7(c) show the out-of-plane (OP) and in-plane (IP) components of displacement, and Figs. 7(b) and 7(d), their respective cross sections (taken at the center of the image, in the horizontal direction). In Fig. 7, the order of the colors is (from left to right) green, blue, yellow, black and red. As observed in the out-of-plane results, high levels of noise appear along the boundaries between neighboring stripes (these regions are not included in the calculations). Since the boundaries present random depths on the order of the period of the fringes, they locally alter the fringe period; these random alterations are slightly different for the reference and displaced images, which in the end is reflected as noise in the phase maps.

Fig. 7 Typical measurement of three-dimensional displacement in multi-colored objects (instructed displacement is 1.0 mm). (a) Out-of-plane component. (b) Horizontal cross-section at center of Fig. 7(a). (c) In-plane component. (d) Horizontal cross-section at center of Fig. 7(c).

Further, the largest errors are related to the red stripe, which is due to the low reflectance that this pigment color presents to blue light. It is also noticed from Fig. 7 that the in-plane results show smaller variations than the out-of-plane calculations; this is because, unlike the point-wise calculations for out-of-plane displacement, the in-plane calculation involves subwindows of finite size.

Accuracy is evaluated by computing the absolute percentage relative error and the standard deviation. Results for these parameters are shown in Figs. 8(a) and 8(b), for FP and DIC, respectively (only the 6 light combinations that yield acceptable results are reported). In these figures, the stripe color corresponds to the x-axis and the relative error to the y-axis. Each curve represents a particular light combination. The UV-pB combination is used in two different scenes: one with 5 color stripes, K, R, G, B, Y [denoted by UV-pB(5)], and one with 6 color stripes [a gray stripe, Gy, is added to the former to model a larger dynamic range; the Gy reflectance can be seen in Fig. 2(b), Gy curve]. For combination Na-pG (background produced by sodium light and green fringes by the FP projector), the object comprises only 4 stripes.
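The two reported accuracy parameters can be computed per stripe as follows; the sample values in the usage line are illustrative, not measured data from the paper:

```python
import numpy as np

def accuracy_metrics(measured, prescribed=1.0):
    """Absolute percentage relative error of the mean, and sample
    standard deviation, for a set of displacement measurements."""
    measured = np.asarray(measured, dtype=float)
    error_pct = abs(measured.mean() - prescribed) / prescribed * 100.0
    return error_pct, measured.std(ddof=1)

# four illustrative measurements of a prescribed 1-mm displacement
err, sd = accuracy_metrics([0.98, 1.03, 1.01, 0.99], prescribed=1.0)
print(round(err, 2))  # 0.25
```

The relative error captures systematic bias, while the standard deviation (drawn as bars in Fig. 8) captures repeatability; the two need not track each other.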

Fig. 8 Accuracy of displacement measurement in multi-colored objects. Percentage absolute relative error, (a) out-of-plane (FP) –bars indicate standard deviation– and (b) in-plane (DIC). (c) Contrast evaluation (arbitrary units). The minimum and maximum standard deviations for (a) and (b) are given in the text.

For Fig. 8(a), the minimum and maximum standard deviations are 0.03 mm [B stripe, UV-pB(5) light combination] and 0.47 mm [R stripe, W-Y(1p) light combination], respectively. For Fig. 8(b), they are 0.01 mm (R pigment, Na-pG) and 0.19 mm (K pigment, IR-pB), respectively.

The difficulties arising from large dynamic ranges can be visualized by comparing the two cases UV-pB(5) and UV-pB(6), where the same light combination is used. As seen in Figs. 8(a) and 8(b), the performance of the UV-pB light combination is superior for the object with 5 colors compared with the object with 6 colors.

Following the afore-mentioned criterion for proper selection of the light source, we can observe from Fig. 2(b) that a light source emitting close to 590 nm may be a candidate for the technique, since light of this wavelength is reflected almost equally by the black, red, green and blue stripes. This circumstance is analyzed, and the corresponding results are displayed by the green curves in Figs. 8(a) and 8(b). As noticed, Na-pG turns out to be one of the best light combinations, as long as the scene does not include white and yellow shades, which would enlarge the dynamic range enough to cause saturation. For this light combination, Na-pG, the DIC and FP signals are detected on the red (via crosstalk) and green channels, respectively. Moreover, if blue fringes are used in this case, i.e. Na-pB, similar results are obtained (the fringe pattern is then detected on the green channel via crosstalk).

Considering all tested light combinations, the performance of FP (error, 3.4%) is higher than that of DIC (error, 10.6%). This result is contrary to what was found for a black-and-white object [3]. The order of the degree of difficulty in measuring three-dimensional displacement, considering the pigment colors, is K (error, 6.7%), G (5.7%), R (4.2%), B (3.1%), Y (2.2%). With regard to light combinations, the order of accuracy is Na-pG (error, 1.2%), UV-pB(5) (1.5%), UV-pB(6) (2.6%), Y-B(2p) (4.2%), ledR-pB (7.2%), IR-pB (7.6%), W-Y(1p) (24.8%). The most appropriate light combination for the analysis of multi-colored objects, overall, is UV-pB. For this latter light combination, a fuller analysis of displacement accuracy is given in Sec. 6.

With reference to the UV-pB light combination, Fig. 8(c) presents additional information about its contrast realization, for both neutral and multi-colored objects. In that figure, black lines designate FP contrast results and red lines DIC contrast (the scales of DIC and FP are unrelated). Two cases of multi-colored objects are shown: an object with 5 colors, case (5), indicated by green markers, and an object with 6 colors (a gray stripe is added to the 5-color case), case (6), indicated by red markers. For this type of object, as described in Sec. 4, one registered image is separated into individual images associated with the colors present in the scene (designated by the abscissa), and then the contrast calculation is done. On the other hand, for neutral objects, images of individual colors are taken and no separation is required; the two plots associated with neutral objects are indicated by blue markers. As can be observed, the largest contrast is obtained for the neutral objects, and the lowest for the case of 6 colors. This result is related to differences in the dynamic range of the three types of scenes.

As pointed out in Sec. 4.2, one-projector configurations show low performance in DIC. For example, even for the best one-projector configuration, W-Y(1p), for some stripe colors the errors are so large that they are not displayed in Fig. 8(b).

6. Experimental evaluation of displacement

In this section, we evaluate the performance of FP-DIC in multi-colored objects, when the setup includes the best light combination analyzed in Sec. 5, UV-pB (UV for DIC, and blue fringes from the FP projector for FP); the multi-colored objects correspond to those described in Sec. 5. Experimental measurements of a series of prescribed three-dimensional displacements are done. The optical setup is that shown in Fig. 1, with separate FP and DIC light sources incorporated. The values of the distinct parameters of the setup correspond to those described in Sec. 4. Three distinct values of displacement are tested: 0.25 mm, 0.5 mm and 1 mm (for both FP and DIC). For each case, the total numbers of displacement steps are 40, 20 and 10, respectively.

Figure 9 includes the resulting absolute percentage relative errors, indicated by the ordinate of the plot, along with the standard deviation, which is indicated by small green bars positioned over the main bars. The abscissa designates the stripe colors that form the multi-colored object; two sets of stripe colors are included: one for a 5-stripe object (K, R, G, B, and Y), and another for a 6-stripe object (K, R, G, B, Y, and Gy); the rightmost color, W, is associated with measurements on a neutral white object, which serves as a reference. For the latter case, W, the FP and DIC light sources correspond to the FP and DIC projectors (which are spatially separated); they respectively generate, in a non-simultaneous way, a pattern of black-and-white fringes and a uniform white light field.

Fig. 9 Absolute percentage relative error of three-dimensional displacement in multi-colored objects. (a) Out-of-plane (OP) displacement from FP; the minimum and maximum standard deviations are 0.012 mm (W, displacement step of 1.0 mm) and 0.28 mm (R, 6 colors, displacement step of 1.0 mm). (b) In-plane (IP) displacement from DIC; the standard deviation is between 0.008 mm (W, displacement step of 1.0 mm) and 0.1 mm (K, 6 colors, displacement step of 0.25 mm).

Some interesting features can be observed from the results. First, the accuracy of the DIC reference measurement (white object, W, illuminated by separated projectors) is higher than that found previously in [3]; this arises from the fact that the DIC source is detached from the FP source. Besides, the DIC performance in this case is higher than that of FP, as found in [3]; compare the rightmost groups of bars of Figs. 9(a) and 9(b). Second, for DIC, the greater the displacement, the smaller the relative error. Third, the overall accuracy for multi-colored objects is on the order of that obtained when only one projector illuminates a neutral object, about 1.5% [3]. Fourth, the performances of DIC and FP are similar. Finally, by comparing the behavior of the results in Figs. 9 and 8(c), accuracy and contrast show a direct relationship.

It is worth pointing out that in obtaining the results related to multi-colored objects, the camera gamma value is adjusted as described at the end of Sec. 4; as noted, this implies that the intensity of low-reflectance pixels is reinforced. Additionally, the channel gains are chosen as indicated in Sec. 4.2.

In Fig. 10, we present an application case of the optimized setup. The experiment concerns the analysis of geological models subjected to external compressive forces [22]. These types of experiments serve to model the morphological spatio-temporal evolution of the Earth’s crust when subjected to particular natural conditions. The model is formed by 5 vertical layers of colored granular media (average grain diameter of 0.5 mm), Fig. 10(a). The shape of the initial state of the object corresponds to a box of 30 cm x 15 cm x 3 cm (length, width, height). In this type of experiment, colored layers are customarily used for easy visualization of the deformation. Compression is applied as in [22], i.e., by an advancing plastic wall driven by a stepper motor (at a speed of 1 mm/s). The object is illuminated by the UV-pB combination (the setup parameters take the values described in Sec. 4.1). The image of the object at its reference state, when illuminated by UV-pB, is shown in Fig. 10(b). From this image, we obtain the separated images (FP image from the green channel and DIC image from the red channel), which are shown in Figs. 10(c) and 10(d). Figures 10(e) and 10(f) show the components of displacement at two different states, length shortenings of 9.7% and 21.6%, respectively; in these images, the out-of-plane and in-plane displacements are represented by the color map and the vectors, respectively [for Fig. 10(e), the maximum vector represents 0.26 mm, and for Fig. 10(f), 0.28 mm; the similarity of these values reflects the constancy of the displacement of the moving wall]. Horizontal cross-sections of these two images, for the central row, are shown in Figs. 10(g) and 10(h); the first cross-section plot depicts the out-of-plane component and the second the horizontal in-plane component; the evolution of the deformation is clearly noticed.
For the out-of-plane displacement, the reference state is fixed and corresponds to the undeformed state; for the in-plane displacement, the reference is continuously updated, i.e., the reference image corresponds to the image preceding the image under analysis (the camera recording speed is 1 fps). As observed, the largest displacements occur near the compressing wall. Further, small rotations of the surface appear due to the interaction between the sand and the side walls of the container (this in turn produces a faster movement of the central part of the foreland). As noticed, the maximum in-plane displacement is on the order of one third of the wall displacement. This difference arises from the fact that the movement of the wall generates both in-plane and out-of-plane components of displacement of the sand. The out-of-plane movement is reflected as an accumulation of sand near the wall. The movement of the sand resembles that of a solid object (a plateau), which only gains height with compression and does not flow horizontally easily; this effect can also be observed in the in-plane maps, where the movement of the sand close to the far edge is relatively small.
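With the continuously updated reference, the total in-plane displacement relative to the undeformed state is recovered by accumulating the frame-to-frame increments. A minimal sketch of that bookkeeping:

```python
import numpy as np

def cumulative_displacement(incremental_fields):
    """Running sum of frame-to-frame (updated-reference) displacement maps;
    the last slice is the total displacement relative to the first frame."""
    return np.cumsum(np.stack(incremental_fields), axis=0)

# three 1-fps steps of 0.1 mm each at a single point (illustrative values)
steps = [np.full((1, 1), 0.1) for _ in range(3)]
total = cumulative_displacement(steps)
print(round(float(total[-1, 0, 0]), 2))  # 0.3
```

Updating the reference keeps the frame-to-frame decorrelation of the speckle small, at the cost of accumulating the per-step error over the sequence.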

Fig. 10 Three-dimensional displacement in a geological model. (a) Real specimen. (b) Specimen illuminated by UV-pB. From Fig. 10(b), (c) FP image and (d) DIC image. (e) Three-dimensional displacement at length shortening of 9.7%. (f) Three-dimensional displacement at length shortening of 21.6%. Cross-sections of Figs. 10(e) and 10(f): (g) Out-of-plane component and (h) horizontal in-plane component.

In summary, by using an appropriate light combination, multi-colored objects may be analyzed with only one registered image, which simplifies the standard procedure of using multi-exposure methods, as in [23–25].

7. Conclusions

We have evaluated the performance of the FP-DIC technique, when applied to colored objects, by measurements of contrast and displacement accuracy. When analyzing a set of different neutral-colored objects, we found that the best performance is obtained when illuminating the object with a light combination comprising a red light background, produced by a matrix of LEDs (DIC light source), and a pattern of blue fringes, generated by a projector (FP light source). In this case, for the selection of the light sources, their crosstalk with respect to the camera sensor should be as weak as possible. Similarly, when dealing with multi-colored objects that present high dynamic ranges, the recommended combination of light sources is UV for DIC and pure blue fringes for FP. In this case, the key point in selecting the light sources is that all colors comprising the object should reflect the light spectra as uniformly as possible.

In general, to optimize the FP-DIC setup, the light sources for each technique should be spatially separated. One advantage of this type of configuration is the possibility of independently selecting both the optimum angle of illumination and the illumination power for each technique, which may increase the flexibility and accuracy of the setup. The optimized setup allowed us to measure three-dimensional displacement in dynamic events by using only one image.

Acknowledgments

We thank Conacyt for the doctoral scholarship granted to one of the authors. We also acknowledge Martín Olmos and Reyna Duarte for their assistance with the use of some optical components. Finally, we wish to thank the reviewers for their useful suggestions, which improved the manuscript.

References and links

1. H. Weber, R. Lichtenberger, and T. Wolf, “The combination of speckle correlation and fringe projection for the measurement of dynamic 3-D deformations of airbags caps,” in Proceedings of IUTAM Symposium on Advanced Optical Methods and Applications in Solid Mechanics, A. Lagarde, ed. (Kluwer Academic Publishers, 2000), pp. 619–626.

2. P. Siegmann, V. Álvarez-Fernández, F. Díaz-Garrido, and E. A. Patterson, “A simultaneous in- and out-of-plane displacement measurement method,” Opt. Lett. 36(1), 10–12 (2011). [CrossRef]   [PubMed]  

3. C. Mares, B. Barrientos, and A. Blanco, “Measurement of transient deformation by color encoding,” Opt. Express 19(25), 25712–25722 (2011). [CrossRef]   [PubMed]  

4. C. Wust and D. W. Capson, “Surface profile measurement using color fringe projection,” Mach. Vis. Appl. 4(3), 193–203 (1991). [CrossRef]  

5. P. S. Huang, Q. Hu, F. Jin, and F. P. Chiang, “Color-encoded digital fringe projection technique for high-speed three-dimensional surface contouring,” Opt. Eng. 38(6), 1065–1071 (1999). [CrossRef]  

6. J. L. Flores, J. A. Ferrari, G. García Torales, R. Legarda-Saenz, and A. Silva, “Color-fringe pattern profilometry using a generalized phase-shifting algorithm,” Appl. Opt. 54(30), 8827–8834 (2015). [CrossRef]   [PubMed]  

7. M. Padilla, M. Servin, and G. Garnica, “Fourier analysis of RGB fringe-projection profilometry and robust phase-demodulation methods against crosstalk distortion,” Opt. Express 24(14), 15417–15428 (2016). [CrossRef]   [PubMed]  

8. I. Trumper, H. Choi, and D. W. Kim, “Instantaneous phase shifting deflectometry,” Opt. Express 24(24), 27993–28007 (2016). [CrossRef]   [PubMed]  

9. M. Ota, K. Hamada, H. Kato, and K. Maeno, “Computed-tomographic density measurement of supersonic flow field by colored-grid background oriented schlieren (CGBOS) technique,” Meas. Sci. Technol. 22(10), 104011 (2011). [CrossRef]  

10. A. Blanco, B. Barrientos, and C. Mares, “Performance comparison of background-oriented schlieren and fringe deflection in temperature measurement, part 2: experimental evaluation,” Opt. Eng. 55(6), 064104 (2016). [CrossRef]  

11. L. F. Sesé, P. Siegmann, and E. A. Patterson, “Integrating fringe projection and digital image correlation for high quality measurements of shape changes,” Opt. Eng. 53(4), 044106 (2014). [CrossRef]  

12. M. A. Sutton, J. H. Yan, V. Tiwari, H. W. Schreier, and J. J. Orteu, “The effect of out-of-plane motion on 2D and 3D digital image correlation measurements,” Opt. Lasers Eng. 46(10), 746–757 (2008). [CrossRef]  

13. L. F. Sesé, P. Siegmann, F. A. Diaz, and E. A. Patterson, “Simultaneous in-and-out-of-plane displacement measurement using fringe projection and digital image correlation,” Opt. Lasers Eng. 52, 66–74 (2014). [CrossRef]  

14. D. Caspi, N. Kiryati, and J. Shamir, “Range imaging with adaptive color structured light,” IEEE Trans. Pattern Anal. Mach. Intell. 20(5), 470–480 (1998). [CrossRef]  

15. Z. Zhang, C. E. Towers, and D. P. Towers, “Time efficient color fringe projection system for 3D shape and color using optimum 3-frequency Selection,” Opt. Express 14(14), 6444–6455 (2006). [CrossRef]   [PubMed]  

16. M. Takeda, H. Ina, and S. Kobayashi, “Fourier-transform method of fringe-pattern analysis for computer-based topography and interferometry,” J. Opt. Soc. Am. 72(1), 156–160 (1982). [CrossRef]  

17. K. J. Gasvik, Optical Metrology (John Wiley and Sons, 2003).

18. E. Stoykova, G. Minchev, and V. Sainov, “Fringe projection with a sinusoidal phase grating,” Appl. Opt. 48(24), 4774–4784 (2009). [CrossRef]   [PubMed]  

19. E. Peli, “Contrast in complex images,” J. Opt. Soc. Am. A 7(10), 2032–2040 (1990). [CrossRef]   [PubMed]  

20. CMOSIS image sensors, “CMV2000 Datasheet v3.2,” 2012.

21. Z. Zhang, C. E. Towers, and D. P. Towers, “Robust color and shape measurement of full color artifacts by RGB fringe projection,” Opt. Eng. 51(2), 021109 (2012). [CrossRef]  

22. B. Barrientos, M. Cerca, J. Garcia-Marquez, and C. Hernandez-Bernal, “Three-dimensional displacement fields measured in a deforming granular-media surface by combined fringe projection and speckle photography,” J. Opt. A, Pure Appl. Opt. 10(10), 104027 (2008). [CrossRef]  

23. A. R. Varkonyi-Koczy, A. R. Rovid, and T. Hashimoto, “Gradient based synthesized multiple exposure time color HDR image,” IEEE Trans. Instrum. Meas. 57(8), 1779–1785 (2008). [CrossRef]  

24. D. Skocaj and A. Leonardis, “Range image acquisition of objects with non-uniform albedo using structured light range sensor,” in Proceedings of IEEE 15th International Conference on Pattern Recognition (IEEE, 2000), pp. 778–781. [CrossRef]  

25. B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Lasers Eng. 87, 83–89 (2016). [CrossRef]  

    [Crossref]
  24. D. Skocaj and A. Leonardis, “Range image acquisition of objects with non-uniform albedo using structured light range sensor,” in Proceedings of IEEE 15th International Conference on Pattern Recognition (IEEE, 2000), pp. 778–781.
    [Crossref]
  25. B. Chen and S. Zhang, “High-quality 3D shape measurement using saturated fringe patterns,” Opt. Lasers Eng. 87, 83–89 (2016).
    [Crossref]




Figures (10)

Fig. 1 Optical layout for the FP-DIC technique.
Fig. 2 (a) Spectral response (quantum efficiency, QE) of each RGB color channel of the camera sensor; (b) spectral reflectance of various pigments (primary RGB colors, yellow Y, black K, white W, gray Gy, and gold Gd); (c) spectral distribution of the FP light sources (in the legend, names starting with “p” refer to the projector, and the second letter indicates a primary RGB or secondary CMY color; lasB designates the blue laser); (d) spectral distribution of the DIC light sources (led denotes a matrix of LEDs, R for red and IR for infrared; UV stands for the ultraviolet LED matrix; Na is the sodium lamp); (e) superposition of Figs. 2(a) and 2(d); (f) nonlinear behavior of the FP projector (solid lines) and of the projector-camera combination (dotted lines with markers); c designates the camera. GL and au correspond to gray levels and arbitrary units, respectively.
Fig. 3 Relative coupling values: (a) FP projector; (b) UV (U), blue laser (L), IR (I); (c) dependence of coupling on the RGB value instructed to the FP projector.
Fig. 4 Exemplary zoomed-in processed images for each neutral-colored object (only cases of optimal illumination for maximum FP contrast are shown). (a) Black pigment. (b) Red pigment. (c) Green pigment. (d) Blue pigment. (e) Yellow pigment.
Fig. 5 Plots of contrast for (a) FP and (b) DIC. The legend is common to both plots, with the following notation: the first part of each name refers to the DIC light source and the last part to the FP light source; “p” stands for projector; (1p) means that a single projector (the FP projector) is used for both FP and DIC, and (2p) that two different projectors are used (one for FP and another for DIC). For instance, UV-pR denotes a matrix of UV LEDs for DIC and the red channel of the FP projector for FP (red fringes on a black background). More details are given in the text.
Fig. 6 Exemplary zoomed-in processed images for each multi-colored object (only cases using optimal illumination are shown). (a) 5-color object. (b) 6-color object. (c) 4-color object.
Fig. 7 Typical measurement of three-dimensional displacement in multi-colored objects (instructed displacement of 1.0 mm). (a) Out-of-plane component. (b) Horizontal cross-section at the center of Fig. 7(a). (c) In-plane component. (d) Horizontal cross-section at the center of Fig. 7(c).
Fig. 8 Accuracy of displacement measurement in multi-colored objects. Percentage absolute relative error for (a) the out-of-plane component (FP), with bars indicating standard deviation, and (b) the in-plane component (DIC). (c) Contrast evaluation (arbitrary units). The minimum and maximum standard deviations for (a) and (b) are given in the text.
Fig. 9 Absolute percentage relative error of three-dimensional displacement in multi-colored objects. (a) Out-of-plane (OP) displacement from FP; the minimum and maximum standard deviations are 0.012 mm (W, displacement step of 1.0 mm) and 0.28 mm (R, 6 colors, displacement step of 1.0 mm). (b) In-plane (IP) displacement from DIC; the standard deviation lies between 0.008 mm (W, displacement step of 1.0 mm) and 0.1 mm (K, 6 colors, displacement step of 0.25 mm).
Fig. 10 Three-dimensional displacement in a geological model. (a) Real specimen. (b) Specimen illuminated by UV-pB. From Fig. 10(b): (c) FP image and (d) DIC image. (e) Three-dimensional displacement at a length shortening of 9.7%. (f) Three-dimensional displacement at a length shortening of 21.6%. Cross-sections of Figs. 10(e) and 10(f): (g) out-of-plane component and (h) horizontal in-plane component.

Equations (8)


$$\begin{bmatrix} R(x,y) \\ G(x,y) \\ B(x,y) \end{bmatrix} = \begin{bmatrix} \int_0^{\infty} r_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \\ \int_0^{\infty} g_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \\ \int_0^{\infty} b_C(\lambda)\,O(x,y,\lambda)\,I(x,y,\lambda)\,d\lambda \end{bmatrix}.$$

$$I(x,y,\lambda) = r_P(x,y)\,I_{PR}(\lambda) + g_P(x,y)\,I_{PG}(\lambda) + b_P(x,y)\,I_{PB}(\lambda),$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} \int_0^{\infty} r_C\,O\,I_{PR}\,d\lambda & \int_0^{\infty} r_C\,O\,I_{PG}\,d\lambda & \int_0^{\infty} r_C\,O\,I_{PB}\,d\lambda \\ \int_0^{\infty} g_C\,O\,I_{PR}\,d\lambda & \int_0^{\infty} g_C\,O\,I_{PG}\,d\lambda & \int_0^{\infty} g_C\,O\,I_{PB}\,d\lambda \\ \int_0^{\infty} b_C\,O\,I_{PR}\,d\lambda & \int_0^{\infty} b_C\,O\,I_{PG}\,d\lambda & \int_0^{\infty} b_C\,O\,I_{PB}\,d\lambda \end{bmatrix} \begin{bmatrix} r_P \\ g_P \\ b_P \end{bmatrix},$$

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} a_{rR} & a_{rG} & a_{rB} \\ a_{gR} & a_{gG} & a_{gB} \\ a_{bR} & a_{bG} & a_{bB} \end{bmatrix} \begin{bmatrix} r_P \\ g_P \\ b_P \end{bmatrix},$$
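The coupling relation above is a linear crosstalk model: once the 3×3 matrix of coefficients a_ij is calibrated, the channel values instructed to the projector can be recovered from each registered pixel by matrix inversion. A minimal NumPy sketch of that decoupling step; the matrix entries below are illustrative placeholders, not calibration data from the paper:

```python
import numpy as np

# Illustrative crosstalk matrix [[a_rR, a_rG, a_rB], ...]:
# diagonal terms dominate, off-diagonal terms model channel leakage.
A = np.array([
    [0.90, 0.08, 0.02],
    [0.05, 0.85, 0.10],
    [0.02, 0.12, 0.86],
])

def decouple(rgb_pixel, coupling=A):
    """Recover the projected channel values (r_P, g_P, b_P) from a
    registered (R, G, B) pixel by inverting the coupling matrix."""
    return np.linalg.solve(coupling, np.asarray(rgb_pixel, dtype=float))

# Round-trip check: apply the forward model, then decouple.
projected = np.array([0.7, 0.0, 0.3])   # e.g., fringes on R, speckle on B
registered = A @ projected              # forward coupling model
recovered = decouple(registered)
print(np.allclose(recovered, projected))  # True
```

In practice the same solve is applied pixel-wise to the whole image, and the quality of the separation depends on how well the calibrated matrix is conditioned.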
$$I(x,y) = a(x,y) + b(x,y)\cos\left[\phi(x,y) + 2\pi f_0 x\right],$$

$$\Delta z = \frac{\Delta\phi}{2\pi}\,\frac{P'\cos\alpha}{\sin\alpha + (d - l\cos\alpha)\,x/(ld)},$$

$$C_{FP} = \frac{b(x,y)}{a(x,y)}\left(\frac{w_{F,\mathrm{MIN}}}{w_F}\right)\left(\frac{K_{\mathrm{MIN}}}{K}\right)^{1/3},$$

$$C_{DIC} = \frac{\sigma_{\mathrm{local}}}{I_{\mathrm{local}}}\,\frac{\Gamma_{\mathrm{MAX}}}{\Gamma}\left(\frac{K_{\mathrm{MIN}}}{K}\right),$$
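The carrier-fringe pattern I(x,y) = a + b·cos[φ + 2π f₀ x] is typically demodulated by the Fourier-transform method of Takeda et al. [16]: isolate the +f₀ sideband in the spectrum, remove the carrier, and take the argument. A one-dimensional NumPy sketch; the carrier frequency, bandwidth, and test phase are illustrative choices, not values from the paper:

```python
import numpy as np

def demodulate_phase(fringe_row, f0):
    """Recover the wrapped phase phi(x) of a 1-D fringe signal
    I(x) = a + b*cos(phi + 2*pi*f0*x) by the Fourier-transform method:
    keep a band around the +f0 lobe, inverse-transform, remove the
    carrier, and take the complex argument."""
    n = fringe_row.size
    spectrum = np.fft.fft(fringe_row)
    freqs = np.fft.fftfreq(n)                      # cycles per pixel
    band = (freqs > f0 / 2) & (freqs < 3 * f0 / 2)  # one-sided filter
    analytic = np.fft.ifft(spectrum * band)         # ~ (b/2) e^{i(phi+2pi f0 x)}
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * f0 * x))

# Synthetic check: slowly varying phase under a carrier of 50 cycles.
x = np.arange(512)
f0 = 50 / 512
phi = 0.8 * np.sin(2 * np.pi * x / 512)
fringes = 10 + 5 * np.cos(phi + 2 * np.pi * f0 * x)
phi_rec = demodulate_phase(fringes, f0)
print(np.max(np.abs(phi_rec - phi)) < 0.05)  # True
```

For FP, the phase difference Δφ between deformed and reference states obtained this way feeds the Δz relation above; in two dimensions, the same filtering is done on the 2-D spectrum.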
