Hand-guided qualitative deflectometry with a mobile device

Abstract

We introduce a system that exploits the screen and front-facing camera of a mobile device to perform three-dimensional deflectometry-based surface measurements. In contrast to current mobile deflectometry systems, our method can capture surfaces with large normal variation and wide field of view (FoV). We achieve this by applying automated multi-view panoramic stitching algorithms to produce a large FoV normal map from a hand-guided capture process without the need for external tracking systems, like robot arms or fiducials. The presented work enables 3D surface measurements of specular objects ’in the wild’ with a system accessible to users with little to no technical imaging experience. We demonstrate high-quality 3D surface measurements without the need for a calibration procedure. We provide experimental results with our prototype Deflectometry system and discuss applications for computer vision tasks such as object detection and recognition.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Three-dimensional (3D) imaging techniques are now omnipresent in a multitude of scientific and commercial disciplines. Industrial 3D inspection, medical 3D imaging, as well as 3D documentation and analysis of art or cultural heritage are only a few examples of the broad range of applications. The work introduced in this paper is motivated by a specific and challenging application of 3D imaging: the 3D measurement and analysis of highly reflective surfaces in the wild, i.e., for objects that cannot be transported to a laboratory for measurement. As a concrete example, we study the 3D measurement and analysis of stained glass paintings, such as larger glass artworks, church windows, or glass reliefs. The shape of the small glass pieces in a stained glass artwork is not necessarily flat! Over the centuries, several glass manufacturers developed distinctive techniques to imprint unique three-dimensional structures onto the glass surface that reflect and refract light in a very distinct way. These unique 3D structures in the glass pieces are a powerful tool to match the small glass pieces in a stained glass painting to the individual manufacturers and to trace the circulation of stained glass around the globe. The latter is of significant interest for the cultural heritage community. We present a comprehensive 3D measurement tool that can perform this task in a hand-guided fashion with unprecedented ease of use, to be adopted by a broad audience of users with little to no technical expertise. In particular, we wish to provide 3D surface measurement capability to untrained personnel like museum conservators and tourists.

Fig. 1. a) Handheld measurement of a stained glass painting with a mobile device. The reflections of the screen are visible on parts of the glass surface and reveal its three-dimensional structure. The measurement result (normal map) is displayed in the zoomed inset. b) Basic principle of ‘Phase Measuring Deflectometry’ (PMD): A screen with a fringe pattern is observed over the reflective surface of an object. The normal map of the object surface can be calculated from the deformation of the fringe pattern in the camera image.

3D image acquisition techniques can be roughly divided into methods for two categories of surfaces: (diffuse) scattering and specular. Diffusely scattering surfaces are commonly measured by projecting a temporally or spatially structured light beam onto the object and evaluating the back-scattered signal. ‘Time-of-Flight’ [1] or Active Triangulation (‘Structured Light’) [2,3] are prominent examples. Another well-known principle is ‘Photometric Stereo’ [4], where the object surface is sequentially flood illuminated with ‘point’ light sources from different angles.

Unfortunately, the application of these principles to specular surfaces yields only limited success. The reason for this is simple: specular reflections from a point light source scarcely find their way back into the camera objective. A straightforward solution to this problem is to extend the angular support of the illumination sources. This is the basic principle behind ‘Deflectometry’ [5–7], where a patterned screen replaces the ‘point-like’ light source (see Fig. 1). This screen can be self-illuminated (TV monitor) or printed. In Deflectometry systems, the screen and camera face the object, which means that the camera observes the specular reflection of the screen over the object surface. The observed pattern in the camera image is a deformed version of the image on the screen, where the deformation depends on the surface normal distribution of the object surface (Fig. 1(b)). From this deformation, the normal vectors of the surface can be calculated. In order to calculate a normal vector for each camera pixel, correspondence between camera pixels and screen pixels must be determined. A common technique to achieve this is the phase-shifting of sinusoidal fringes. The resulting ‘Phase-Measuring Deflectometry’ (PMD) [5,7] has established itself as a powerful technique that is used with great success in industrial applications, e.g., to test the quality of optical components or to detect defects on metallic parts like car bodies. Given a proper calibration, PMD reaches a precision close to interferometric methods [8–11].

The task of digitizing specular 3D surfaces ’in the wild’ leads to several fundamental and technical challenges of great scientific interest. Our goal is to develop a surface measurement method for objects that are large and, therefore, cannot be transported to a controlled lab environment. Besides a large FoV, the desired method should support a large variation in surface normals and also achieve high spatial resolution. In principle, this can be achieved using large-screen PMD systems, but these setups are bulky and cannot be applied ’in the wild’.

Our solution to this problem is to use mobile devices (smartphone, tablet) for PMD measurements, i.e., using the screen to display the patterns and the front-facing camera to image the object surface. Since the screen size of mobile devices is limited, only a small angular range of surface normals can be measured in any single view [12–15]. We overcome this limitation using an automated feature-based registration applied to PMD measurements acquired from different viewing angles. The multi-view measurements can be acquired in a hand-guided fashion. The features are extracted directly from captured images so that external markers or fiducials are not necessary.

For our mobile PMD system, we do not perform photometric and geometric calibration, necessary to recover quantitative surface shape information. This is because accurate calibration severely complicates the acquisition setup and makes it difficult to capture 3D shape for objects ’in the wild’, which is the primary goal of this paper. Without calibration, the accuracy of our method is compromised for low spatial frequencies of 3D surfaces that are reconstructed. This low-frequency bias produces limitations in the quantitative 3D surface information that can be extracted. We sidestep this problem by exploiting a priori knowledge about our objects of interest (e.g., the stained glass examples in Fig. 2, or paintings in Fig. 5). Their overall shape is mostly flat but also contains high-frequency 3D surface shape information. This information is captured with high quality and can be used as features to help recognize an object’s identity, e.g., by applying feature matching techniques (e.g., ’SIFT’ [16,17]) to register normal maps captured from different viewpoints.

In summary, our paper provides the following unique contributions:

  • We demonstrate a hand-held Deflectometry system, able to measure specular 3D surfaces ’in the wild’ over a large FoV. The system consists only of an off-the-shelf mobile device, like a tablet or a smartphone.
  • We introduce the idea of exploiting a priori knowledge about surface shape to avoid the tedious calibration process necessary for multi-view registration and stitching of arbitrary 3D surfaces. Our method works well for objects that contain a small amount of low-frequency 3D surface information but also possess interesting high-frequency 3D surface features.
  • We apply automated feature-based registration to stitch together different ’normal maps’ of an extended object surface into a panoramic, wide-FoV normal map. To our knowledge, our method is the first to enable hand-guided deflectometric measurements without the need for a priori 3D pose information, tracking, or external fiducials.
  • We demonstrate the first registered and stitched normal map of an extended specular object with large angular normal variation that was captured with a hand-guided system ’in the wild’ - a stained glass artwork (see Fig. 4(e)). In addition, we show numerous examples of surface normal maps recovered from a variety of objects captured ’in the wild’ from a single viewpoint.

2. Related work

‘Phase Measuring Deflectometry’ (PMD) is just one of many techniques that have been introduced to measure the 3D surface of specular objects. As discussed, deflectometric methods are widely used in the optical metrology community for the ultra-precise measurement of optical components, such as lenses, astronomical mirrors, or other kinds of free-form surfaces. The power of the related approaches has been impressively demonstrated by many researchers over the last decades [5–7,10,11,13,18–20]. It has been shown that the principle is by no means limited to the phase-shifting of sinusoidal fringes (PMD). Correspondence between the screen and camera can be established in many different ways [21], including the utilization of binary patterns [22], patterns multiplexed in color space [13], or the application of the single-sideband demodulation trick, known from ’Fourier Transform Profilometry’ (FTP) [19,23–25].

Considering the vast potential of the deflectometric principle, it is not surprising that the computer vision community makes extensive use of it as well. However, the names of the proposed methods mostly lack the word ’Deflectometry,’ and the related applications differ from high-precision metrology tasks in many cases. Nevertheless, similar techniques using color fringes [26], lines [27], or even a light field created from two stacked LED screens [28] are known. Passive methods that do not require a self-illuminated screen at all are used as well: In [29], the reflection of color-coded circles observed by multiple cameras is exploited (which also resolves the bas-relief ambiguity). Completely ‘screenless’ methods, such as [30,31], analyze the environment or track prominent features (e.g., straight lines in buildings) to obtain information about the slope of specular surfaces. In general, the deflectometric principle allows for any known pattern or structure to be used as a reference.

Of course, each of the techniques mentioned above comes with benefits and drawbacks. For example, some of the techniques that use a static pattern instead of temporally phase-shifted sinusoids are capable of ‘single-shot’ acquisition [13,19,24,25]. However, this does not come without a price: many related methods deliver restricted lateral resolution or require the object surface to be sufficiently smooth. Shifting the correspondence problem to the color space (by applying a colored pattern) implies certain assumptions about the texture and reflectivity of the object surface. All this might not be a big problem for the measurement of lenses or mirrors, but it presents a significant challenge for cultural heritage applications like the measurement of stained glass surfaces.

It should be noted that even ’Photometric Stereo’ techniques can perform the desired tasks under certain limitations. For example, [32] and [33] use known reflectance maps of object surfaces to measure their 3D structure. Such approaches are especially beneficial for partially specular surfaces, but fail when the surface is too shiny. Other techniques exploit sparse specular reflections produced by photometric stereo measurements for 3D surface reconstruction or refinement [34–36].

It should be noted as well that mobile versions of Deflectometry have also been demonstrated. The authors of [37] built a custom Deflectometry device compact enough to be used inside diamond turning machines to measure milled free-form surfaces in-situ without rechucking. The authors of [12–14] even exploit the LCD screen and front camera of a smartphone or tablet to perform deflectometric measurements. However, these ’mobile device’ systems only demonstrate results with limited FoV and coverage of surface normals. The 3D surface measurement of objects with high-frequency surface information is not addressed in these papers. The authors of the previously mentioned paper [37] circumvent the problem of insufficient coverage of surface normals by rotating the object under the device and fusing normal maps taken at different rotation angles. The respective transformations are obtained from the rotation stages of the diamond turning machine. A similar approach is used in [15]. However, free-hand guidance over the object with subsequent pose calculation of the device is not possible with these methods. In comparison to previous work, we introduce a system capable of free-hand guided 3D surface measurement ’in the wild’ for extended specular surfaces with large normal variations.

3. Hand-guided qualitative deflectometry without calibration

This section describes the image acquisition and processing steps that enable uncalibrated 3D Deflectometry measurements with mobile devices. We demonstrate a set of qualitative surface measurements that can be used to identify and compare characteristic surface structures for highly specular objects.

3.1 Setup and image acquisition process

Our hand-held PMD system implementation consists of a consumer tablet that serves as a measurement device (for the results shown in this paper we used an NVIDIA Shield K1 or an Apple iPad Pro 10.5"). An application runs on the mobile device to perform the image acquisition process and transfer data to a host computer that performs the surface normal calculation and panoramic stitching.

During image acquisition, the tablet displays phase-shifted sinusoidal patterns and observes the object with its front camera (see Fig. 1(a)). The tablet is positioned approximately $200\;mm$ above the object surface. PMD is a multi-shot principle, meaning that a sequence of temporally acquired images has to be used to calculate one 3D image. During the measurement, the display projects four $90^\circ$-phase-shifted versions of a sinusoid in the horizontal and vertical directions. Different frequencies of the sinusoid can optionally be used as well. The position of the tablet relative to the object has to remain fixed during the whole acquisition process. Depending on the speed of projection and image acquisition, this can be a hard task for the inexperienced user if a handheld measurement is desired. For an optimal measurement result, the tablet can be fixed with a respective mount. We discuss possible extensions of our system towards a single-shot principle in section 5.
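
For illustration, a minimal sketch of the pattern generation step is given below. It produces the four $90^\circ$-phase-shifted sinusoidal fringe images for both pattern directions; the screen resolution, the normalized intensity range, and the function name are choices made for this example only, not a description of our App.

```python
import numpy as np

def fringe_patterns(width, height, nu=1):
    """Generate the four 90°-phase-shifted sinusoidal fringe images for the
    horizontal and the vertical pattern direction (intensities in [0, 1])."""
    x = np.linspace(0.0, 1.0, width)              # normalized screen coordinates
    y = np.linspace(0.0, 1.0, height)
    xx, yy = np.meshgrid(x, y)
    shifts = [m * np.pi / 2 for m in range(4)]    # phi_m = (m - 1) * pi / 2, m = 1..4
    horizontal = [0.5 + 0.5 * np.cos(2 * np.pi * nu * xx - s) for s in shifts]
    vertical = [0.5 + 0.5 * np.cos(2 * np.pi * nu * yy - s) for s in shifts]
    return horizontal, vertical

# Example: nu = 1, i.e., one sinusoidal period across a 2048 x 1536 screen
h_patterns, v_patterns = fringe_patterns(2048, 1536, nu=1)
```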

The front-facing camera objectives of mobile devices commonly have a short focal length, which results in a large FoV. Unfortunately, this large FoV cannot be exploited in its entirety by our system. This is because the device cannot be held closer to the object surface than the minimum possible focus distance, and the LCD screen has limited angular coverage. A valid PMD measurement can only be taken at image pixels that observe a display pixel over the specular surface. As a result, the number of pixels that produce valid measurements can be as small as 25% of the imaging FoV.
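
One common way to identify these valid pixels, which we use here only as an illustrative assumption, is to threshold the fringe modulation $B(x', y')$ recovered from the four phase-shift images, since pixels that never observe the display over the surface show almost no modulation. The threshold value in the sketch below is arbitrary.

```python
import numpy as np

def valid_pixel_mask(I1, I2, I3, I4, rel_threshold=0.05):
    """Mask of camera pixels that actually observe the reflected display.
    B is the fringe modulation of the four-phase-shift sequence; pixels that do
    not see a display pixel over the specular surface show almost no modulation."""
    B = 0.5 * np.sqrt((I1 - I3) ** 2 + (I2 - I4) ** 2)
    return B > rel_threshold * B.max()
```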

3.2 Evaluation, results and discussion

In the following, we evaluate the surface normal maps of several stained glass tiles [38], and a large, $300\;mm$ diameter stained glass artwork. A photo of the stained glass objects is shown in Fig. 2. The tiles have an approximately square shape with an edge length of about $50\;mm$ and demonstrate a significant variation in the distribution of surface normals. We first demonstrate the measurement and evaluation of the small stained glass tiles from a single viewpoint, then demonstrate a ‘multi-view measurement’ of the large stained glass painting.

Fig. 2. Photograph of objects to be measured with our system. a-d) Stained glass test tiles from the Kokomo glass factory [38], each with an edge length of $\sim 50\;mm$. Surface structure complexity and angular distribution of surface normals increase from a to d: ’33KDR’ (a), ’33RON’ (b), ’33WAV’ (c), ’33TIP’ (d). e) Large stained glass painting (diameter $300\;mm$), scanned with our multi-view technique by 14 views from different angles and positions.

3.2.1 Single-view measurement

Most of the tiles in our test set have a size and surface normal distribution small enough to be evaluated from one single view. Each tile is placed at a position in the field of view where the reflected display can be observed. The intensity $I(x', y')$ at each image pixel $(x', y')$ can be expressed as

$$I(x', y') = A(x', y') + B(x', y') \cdot \cos(\phi(x', y'))~.$$
Equation (1) contains three unknowns per pixel: the (desired) phase $\phi (x', y')$ of the sinusoidal pattern, which correlates display pixels with image pixels, as well as $A(x', y')$ and $B(x', y')$, which contain information about the unknown bias illumination and object reflectivity. This means that at least three equations are required per pixel to calculate $\phi (x', y')$. For each pattern direction, these equations are taken from the four acquired phase-shift images (the four-phase-shift algorithm is very simple and, in addition, insensitive to second-order nonlinearities), where the intensity in each image pixel for the $m$-th phase shift is
$$I_m(x', y') = A(x', y') + B(x', y') \cdot \cos(\phi(x', y') - \phi_m)~,$$
with
$$\phi_m = (m-1) \frac{\pi}{2} ~.$$
Finally $\phi (x', y')$ can be evaluated by
$$\phi(x', y') = \arctan{\frac{I_2(x', y') - I_4(x', y') }{I_1(x', y') - I_3(x', y')}}~.$$
This has to be done for each pattern direction, leading to phase maps $\phi _x(x', y')$ and $\phi _y(x', y')$ for the horizontal and vertical fringe direction respectively. The acquired phase maps are equivalent to the surface gradient in the horizontal and vertical direction plus a low-frequency phase offset that is dependent on the relative position between device and object, and any distortion present in the camera objective [7,9]. In conventional PMD setups, this offset is removed by employing a calibration process whereby the phase map is first measured for a planar mirror, then subtracted from the measured phase. We avoid this step by exploiting a priori knowledge about our objects, namely that their overall shape is known to be mostly flat so that low spatial frequencies in the surface normal measurements can be ignored. In this case, the unknown phase offset can be removed by simply high pass filtering the unwrapped phase map. The high pass filtered phase maps $\tilde {\phi _x}$ and $\tilde {\phi _y}$ are then equivalent to the surface gradient maps in the x- and y- directions. It should be noted that the filtering operation also compensates for the nonlinear photometric responses of the display and camera, avoiding an additional calibration procedure. Moreover, the assumption of a mostly flat object resolves the depth-normal ambiguity of Deflectometry measurements, which typically requires two cameras to resolve [7].
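
As a minimal illustration of this evaluation step, the sketch below implements Eq. (4) with a four-quadrant arctangent and removes the low-frequency offset with a simple Gaussian high-pass filter; the filter width and the use of SciPy are assumptions made for this example and do not describe our exact processing chain.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def evaluate_phase(I1, I2, I3, I4):
    """Four-phase-shift evaluation, Eq. (4), using the four-quadrant arctangent."""
    return np.arctan2(I2 - I4, I1 - I3)

def remove_offset(phase, sigma=25):
    """High-pass filter the (unwrapped) phase map: subtract a Gaussian-blurred
    copy to remove the unknown low-frequency offset caused by device pose and
    lens distortion."""
    return phase - gaussian_filter(phase, sigma)

# phi_x = remove_offset(np.unwrap(evaluate_phase(Ix1, Ix2, Ix3, Ix4), axis=1))
# phi_y = remove_offset(np.unwrap(evaluate_phase(Iy1, Iy2, Iy3, Iy4), axis=0))
```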

The surface normal can be computed directly from the estimated phase maps via

$$\vec{n} = \frac{1}{\sqrt{\tilde{\phi_x}^2 + \tilde{\phi_y}^2 +1}} \cdot \left(\begin{array}{c}\tilde{\phi_x}\\\tilde{\phi_y}\\-1\end{array}\right)~,$$
where $\tilde {\phi _x}$ and $\tilde {\phi _y}$ denote the gradient for the horizontal and vertical direction, respectively. Figure 3 shows the calculated normal maps of all four tiles. The normal maps are shaded with a specular finish and are slightly tilted for visualization purposes. It can be seen that the characteristic surface structures important for the identification process are well resolved. The black spots in the normal maps correspond to surface points where no signal was measured, i.e., where the surface normal is oriented such that the camera was not able to see the display.
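
For completeness, Eq. (5) translates directly into a few lines of code. The sketch below assumes the high-pass filtered phase (gradient) maps are already available as floating-point arrays.

```python
import numpy as np

def normal_map(phi_x, phi_y):
    """Eq. (5): per-pixel surface normals from the high-pass filtered phase
    (gradient) maps. Returns an (H, W, 3) array of unit normal vectors."""
    norm = np.sqrt(phi_x ** 2 + phi_y ** 2 + 1.0)
    return np.dstack((phi_x, phi_y, -np.ones_like(phi_x))) / norm[..., None]
```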

Fig. 3. Single-view 3D reconstructions (surface normal maps) of Kokomo glass test tiles. ’33KDR’ (a), ’33RON’ (b), ’33WAV’ (c), ’33TIP’ (d). Measurements are performed with mounted tablet and no room lights. e) Reconstructions of ’33RON’ and ’33WAV’ measured under normal office light ($\sim 500\;lx$). f) Reconstructions for a handheld measurement of ’33RON’ and ’33WAV’.

To test the robustness of our qualitative measurement results against different environmental conditions, we additionally acquired measurements for two of the four tiles with ambient room lighting, and by performing a hand-held measurement without mounting the device. The results are shown in Fig. 3(e) and (f).

The measurement captured with ambient room lighting (Fig. 3(e)) shows no significant degradation in performance. This is understandable because the brightness of the room light was moderate (regular office lights, illuminance $\sim 500\;lx$), and the SNR was not reduced significantly. Under these conditions, the four-phase-shift algorithm effectively compensates for bias illumination. For the free-hand guided measurement, motion artifacts in the evaluated phase map are expected. These artifacts can be seen at the slightly blurred edges in Fig. 3(f). The fact that the visible artifacts occur ‘only’ at edges is a consequence of the low frequency $\nu =1$ (corresponding to one sinusoidal period displayed over the entire width of the screen) used to acquire these measurements. Higher frequencies would result in more prominent artifacts, such as those commonly observed in triangulation-based fringe projection.

3.2.2 Multi-view measurement

A single-view measurement is not enough to capture a sizeable specular object with large normal variation in its entirety. This is not only because of the limited effective FoV of mobile devices but also because the large normal variation of some surfaces cannot be captured from a single viewing angle (see e.g., Fig. 3(d)). As discussed, our solution to this problem is to acquire and register multiple phase maps of the object surface, while our system is positioned by hand at different viewpoints. In this section, we show qualitative results that demonstrate our approach. We study a circular glass painting with a diameter of $300\;mm$. From the magnification window in Fig. 2(e), it can be seen that the glass pieces in this painting exhibit high-frequency surface features. Moreover, some glass pieces are milky. For the results shown below, we scanned one half of the glass painting by acquiring 14 single views under different viewing angles and positions.

To assist in registration, we acquired an additional ‘white image’ (image of the glass painting illuminated only by diffuse room light) at each viewing position. The registration transformation for the normal maps acquired at each single view is calculated from these ‘white images’. Performing registration with the ‘white images’ was found to be more robust than registration with the calculated normal maps. For registration, we used the feature-based registration algorithms provided by the Matlab Computer Vision Toolbox. It should be noted that the use of images captured under diffuse illumination is beneficial in this case, since the diffuse illumination makes the object look similar from different viewing angles. No strong specular reflections (which look different from different viewing angles) disturb the feature extraction of the registration algorithm. With this trick, we are able to register subsequent views without applying markers or other fiducials onto the object surface, just by using the texture of the object itself. Figure 4 shows ‘white images’ of two subsequent views, their detected and mapped features, as well as the registration result.
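
Our implementation uses the Matlab Computer Vision Toolbox for this step; the OpenCV sketch below is only an analogous illustration of the same idea (feature detection, matching, and a RANSAC-estimated projective transform between two ‘white images’), with all parameter values chosen arbitrarily.

```python
import cv2
import numpy as np

def register_white_images(img_moving, img_fixed):
    """Estimate a projective transform mapping one 'white image' onto the next
    (ORB features, brute-force matching, RANSAC homography)."""
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img_moving, None)
    k2, d2 = orb.detectAndCompute(img_fixed, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H
```

The estimated transform is then applied to the normal map of the corresponding view before stitching.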

It can be seen that the feature extraction and the subsequent registration transformation are applied to the whole FoV of the camera (not only to the limited effective FoV in the middle) in order to detect a large number of features with high quality. In this case, it can be beneficial to perform a simple internal calibration of the front camera (e.g., with a checkerboard) to compensate for distortion. This can reduce the registration error significantly. It should also be noted that such a distortion correction was avoided for the previous single-view measurements since most of the signal was measured in the middle of the FoV, where the distortions are small. In the future, we plan to develop methods that estimate the distortion parameters of the camera during registration without the need for an explicit calibration procedure. Figure 4(e) shows all 14 views after registration and stitching. Most parts of the object’s surface are densely reconstructed, and the high-frequency structures of the individual glass pieces are visible. However, some normals are still missing, mostly from the blue glass pieces in the painting. The structure of these pieces displays extraordinarily high hills and deep craters, producing a wide distribution of normals that would require more than 14 views to be measured effectively.
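
Such an internal calibration can be performed with standard tools. The sketch below outlines one possible checkerboard-based procedure using OpenCV; the board dimensions and input images are placeholders for this example, not the routine used in the paper.

```python
import cv2
import numpy as np

def calibrate_front_camera(checkerboard_images, board=(9, 6)):
    """Estimate intrinsics and distortion coefficients of the front camera from
    a set of checkerboard photographs; images can then be undistorted with
    cv2.undistort(img, K, dist) before feature extraction and registration."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)
    obj_pts, img_pts, size = [], [], None
    for gray in checkerboard_images:               # grayscale images as arrays
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```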

Fig. 4. Multi-view normal map 3D reconstruction of large stained glass painting using image-based registration. a) and b) ’White images’ (images captured with black screen and diffuse room light illumination) before distortion correction. c) Detected and mapped features in the two subsequent ’white images’ (color-coded by green and magenta). d) Registered ’white images’. e) Visualization of stitched multi-view normal map result, consisting of 14 registered single-views.

4. Additional experimental results

Although the presented method was motivated by the 3D measurement of stained glass artworks, the system is by no means restricted to this specific object type. A 3D surface acquisition with our uncalibrated method is possible as long as the overall shape of the object is flat and the surface under test is relatively shiny.

Figure 5(a-c) displays the surface measurement of an oil painting. The three-dimensional analysis of painting surfaces is also of great interest to the cultural heritage community. The ability to separate surface texture from its shape or slope data is a valuable tool to understand different painting techniques (e.g., by looking at the directions of brush strokes). Monitoring of pigment degradation in paintings [39–41] is another application that does not work reliably by only looking at captured 2D images. Our mobile 3D imaging method is well suited for the analysis of paintings ’in the wild’, i.e., directly on the museum wall. Figure 5(a) shows an image of a measured oil painting. The surface normals of the black region in the red box (approximately $70\;mm \times 80\;mm$) are acquired with our method. For better visualization of the hills and valleys of the brushstrokes, the acquired normal map is integrated to obtain a depth map, using the Frankot-Chellappa surface integration algorithm [42]. Figures 5(b) and (c) show the calculated depth map from two different perspectives (z-component exaggerated for display purposes). The brush strokes, and even the underlying canvas, can nicely be resolved.
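
For reference, the Frankot-Chellappa algorithm [42] performs a least-squares integration of the gradient field in the Fourier domain. The sketch below is a straightforward, non-optimized implementation of that idea, applied to the (appropriately scaled) gradient maps.

```python
import numpy as np

def frankot_chellappa(gx, gy):
    """Least-squares integration of a gradient field (gx, gy) into a depth map
    in the Fourier domain (Frankot-Chellappa [42])."""
    H, W = gx.shape
    wx = 2 * np.pi * np.fft.fftfreq(W)
    wy = 2 * np.pi * np.fft.fftfreq(H)
    WX, WY = np.meshgrid(wx, wy)
    GX, GY = np.fft.fft2(gx), np.fft.fft2(gy)
    denom = WX ** 2 + WY ** 2
    denom[0, 0] = 1.0                      # avoid division by zero at the DC term
    Z = (-1j * WX * GX - 1j * WY * GY) / denom
    Z[0, 0] = 0.0                          # the absolute depth offset stays unknown
    return np.real(np.fft.ifft2(Z))
```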

Fig. 5. Deflectometric measurements of different surfaces: Paintings, technical, metallic, enameled ceramic, and fluid surfaces. a) Image of measured painting with marked $70\;mm \times 80\;mm$ measurement region. b) and c) Surface shape of the marked region, calculated by integration of the acquired normal map. Brushstrokes and canvas can nicely be resolved. d) Image of measured key ($70\;mm$ length). e) Measured normal map of the key. f) Water drops ($20\;mm \times 15\;mm$) on an enameled ceramic surface (coffee mug). g) Evaluated normal map. h) Normal maps of a 5 cent and a 10 cent coin. i) Circuit board with marked $22.5\;mm \times 15\;mm$ measurement region and measured normal map. Each metallic circle has a diameter of $\sim 2\;mm$.

Another potential field of application is the 3D acquisition of technical metallic surfaces. Figure 5(e) displays the acquired normal map of a metallic key ($70\;mm$ height, Fig. 5(d)), shaded with a specular finish. The normal maps of a 5 cent and a 10 cent US coin ($21\;mm$ / $18\;mm$ diameter) are shown in Fig. 5(h). Imprinted letters or symbols can be resolved, both for the key as well as for the coins. Figure 5(i) displays the normal map of a circuit board. The diameter of one single metallic ring is only about $2\;mm$. In the last example, we demonstrate the capability of our method to measure fluid surfaces, e.g., for the analysis of surface tension. Figure 5(g) shows the normal map that was acquired from water drops on an enameled ceramic surface (coffee mug). The water drops are arranged to form the letters ‘N U’ (Fig. 5(f)). The shape of each drop is clearly visible from the normal map. In the future, we plan to use our system to measure dynamic fluid surfaces with a single-shot PMD technique, such as [13,19,24,25]. In addition, we are developing algorithms capable of recovering surface normals from objects with much more complicated reflectivity.

5. Conclusion and outlook

In this paper, we presented a mobile Deflectometry system that is able to measure, in a hand-guided fashion, specular surfaces with a high normal variation that are much larger than the system’s initial FoV. In order to sample the entire object surface densely with high resolution over a large FoV, we applied a feature-based registration to stitch normal maps from different viewing angles and positions. The system can be moved freehand from one viewpoint to the next. No external guidance or fiducials affixed to the object are necessary.

We demonstrated the 3D surface measurement of stained glass surfaces using both single view and registered multi-view measurements. As a proof of principle, we scanned one half of a circular stained glass artwork with $300\;mm$ diameter by stitching together 14 single views. In a second experiment not shown in the paper, we tried registration of the whole artwork with 28 views. However, global registration errors were significant so that the first and last views did not fit together after one pass. This is a well-known problem for surface measurements with registration [43]. Reducing the global registration error is one of our main goals for future work.

Our evaluation process exploits a priori knowledge about the object to avoid extensive fringe and display calibration, which also solves the depth-normal ambiguity problem without the use of a second camera [7]. In the future, we seek to develop self-calibrating algorithms for multi-view measurements. Our plan is to apply a non-rigid registration to our data and obtain the information about the distortion from the calculated deformation fields. Moreover, we will work towards obtaining quantitative measurements without calibration. This work will build upon previously demonstrated self-calibrating PMD setups, e.g., shown in [9].

Although we have shown that hand-held measurements are possible with our system, PMD is commonly a multi-shot principle and can, therefore, introduce motion blur. Single-shot PMD techniques that rely on single-sideband demodulation, e.g., as introduced in [19,24,25], will not work on objects like stained glass paintings because of the severe bandwidth restrictions. In the future, we want to explore other single-shot and/or motion-robust Deflectometry techniques that exploit additional modalities to solve the ambiguity problem. Examples of how such problems are solved in the field of line triangulation can be found in [2,21,43–45]. Our future goal is to develop similar methods for Deflectometry. Ideally, the user only needs to continuously wave the device in front of the object to obtain a dense 3D reconstruction after a few seconds.

Lastly, to foster the adoption of our technique by a broad audience, we plan to make our measurement App publicly available so that anyone with a mobile device can make 3D surface measurements of specular objects. Each user will be able to transform their phone or tablet into a 3D measurement instrument. We envision this framework will serve as a platform for crowd-sourced aggregation of surface shape acquisition/fingerprinting of unattributed artworks around the globe.

Funding

National Endowment for the Humanities (PR-258900-18); Andrew W. Mellon Foundation (41200637); National Science Foundation (NSF CAREER) (IIS-1453192).

Acknowledgments

The authors thank G. Häusler and C. Faber for the fruitful discussions.

Disclosures

The authors declare no conflict of interest.

References

1. R. Schwarte, Z. Xu, H.-G. Heinol, J. Olk, R. Klein, B. Buxbaum, H. Fischer, and J. Schulte, “New electro-optical mixing and correlating sensor: facilities and applications of the photonic mixer device (pmd),” Proc. SPIE 3100, 245–253 (1997). [CrossRef]  

2. F. Willomitzer and G. Häusler, “Single-shot 3d motion picture camera with a dense point cloud,” Opt. Express 25(19), 23451–23464 (2017). [CrossRef]  

3. M. Schaffer, M. Grosse, and R. Kowarschik, “High-speed pattern projection for three-dimensional shape measurement using laser speckles,” Appl. Opt. 49(18), 3622 (2010). [CrossRef]  

4. R. J. Woodham, “Photometric method for determining surface orientation from multiple images,” Opt. Eng. 19(1), 191139 (1980). [CrossRef]  

5. G. Häusler, “Verfahren und vorrichtung zur ermittlung der form oder der abbildungseigenschaften von spiegelnden oder transparenten objekten,” (Patent DE19944354A1, (1999)).

6. L. Huang, M. Idir, C. Zuo, and A. Asundi, “Review of phase measuring deflectometry,” Opt. Lasers Eng. 107, 247–257 (2018). [CrossRef]  

7. M. C. Knauer, J. Kaminski, and G. Häusler, “Phase measuring deflectometry: a new approach to measure specular free-form surfaces,” Proc. SPIE 5457, 366 (2004). [CrossRef]  

8. C. Faber, E. Olesch, R. Krobot, and G. Häusler, “Deflectometry challenges interferometry: the competition gets tougher!” Proc. SPIE 8493, 84930R (2012). [CrossRef]  

9. E. Olesch, C. Faber, and G. Häusler, “Deflectometric self-calibration for arbitrary specular surfaces,” in Proceedings of DGaO, (2011).

10. G. Häusler, C. Faber, E. Olesch, and S. Ettl, “Deflectometry vs. interferometry,” Proc. SPIE 8788, 87881C (2013). [CrossRef]  

11. R. B. Bergmann, J. Burke, and C. Falldorf, “Precision optical metrology without lasers,” in International Conference on Optical and Photonic Engineering (icOPEN 2015), vol. 9524, A. K. Asundi and Y. Fu, eds. (SPIE, 2015), pp. 23–30.

12. G. P. Butel, G. A. Smith, and J. H. H. Burge, “Deflectometry using portable devices,” Opt. Eng. 54(2), 025111 (2015). [CrossRef]  

13. I. Trumper, H. Choi, and D. W. Kim, “Instantaneous phase shifting deflectometry,” Opt. Express 24(24), 27993–28007 (2016). [CrossRef]  

14. J. Riviere, P. Peers, and A. Ghosh, “Mobile Surface Reflectometry,” Comput. Graph. Forum 35(1), 191–202 (2016). [CrossRef]  

15. L. R. Graves, H. Quach, H. Choi, and D. W. Kim, “Infinite deflectometry enabling 2 pi -steradian measurement range,” Opt. Express 27(5), 7602–7615 (2019). [CrossRef]  

16. D. Lowe, “Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image,” (Patent US6711293B1, (2000)).

17. D. Lowe, “Object recognition from local scale-invariant features,” in Proceedings of the International Conference on Computer Vision, vol. 2, pp. 1150–1157 (1999).

18. P. Su, R. E. Parks, L. Wang, R. P. Angel, and J. H. Burge, “Software configurable optical test system: a computerized reverse hartmann test,” Appl. Opt. 49(23), 4404–4412 (2010). [CrossRef]  

19. L. Huang, C. S. Ng, and A. K. Asundi, “Dynamic three-dimensional sensing for specular surface with monoscopic fringe reflectometry,” Opt. Express 19(13), 12809–12814 (2011). [CrossRef]  

20. M. Fischer, M. Petz, and R. Tutsch, “Model-Based Deflectometric Measurement of Transparent Objects,” (Fringe 2013 – 7th International Workshop on Advanced Optical Imaging and Metrology, Springer (2013)).

21. F. Willomitzer, “Single-Shot 3D Sensing Close to Physical Limits and Information Limits,” (Dissertation, Springer Theses (2019)).

22. G. P. Butel, G. A. Smith, and J. H. Burge, “Binary pattern deflectometry,” Appl. Opt. 53(5), 923–930 (2014). [CrossRef]  

23. M. Takeda and K. Mutoh, “Fourier transform profilometry for the automatic measurement of 3-d object shapes,” Appl. Opt. 22(24), 3977–3982 (1983). [CrossRef]  

24. Y. Liu, E. Olesch, Z. Yang, and G. Häusler, “Fast and accurate deflectometry with crossed fringes,” Adv. Opt. Technol. 3(4), 441–445 (2014). [CrossRef]  

25. M. Nguyen, Y. Ghim, and H. Rhee, “Single-shot deflectometry for dynamic 3d surface profile measurement by modified spatial-carrier frequency phase-shifting method,” Sci. Rep. 9(1), 3157 (2019). [CrossRef]  

26. M. Tarini, H. P. Lensch, M. Goesele, and H.-P. Seidel, “3d acquisition of mirroring objects using striped patterns,” Graph. Models 67(4), 233–259 (2005). [CrossRef]  

27. Y. Ding, J. Yu, and P. Sturm, “Recovering specular surfaces using curved line images,” in 2009 IEEE Conference on Computer Vision and Pattern Recognition, (2009), pp. 2326–2333.

28. S. Tin, J. Ye, M. Nezamabadi, and C. Chen, “3d reconstruction of mirror-type objects using efficient ray coding,” in 2016 IEEE International Conference on Computational Photography (ICCP), (2016), pp. 1–11.

29. T. Bonfort and P. Sturm, “Voxel carving for specular surfaces,” in Proceedings Ninth IEEE International Conference on Computer Vision, (2003), pp. 591–596 vol.1.

30. C. Godard, P. Hedman, W. Li, and G. J. Brostow, “Multi-view reconstruction of highly specular surfaces in uncontrolled environments,” in 2015 International Conference on 3D Vision, (2015), pp. 19–27.

31. B. Jacquet, C. Häne, K. Köser, and M. Pollefeys, “Real-world normal map capture for nearly flat reflective surfaces,” in 2013 IEEE International Conference on Computer Vision, (2013), pp. 713–720.

32. K. Ikeuchi, “Determining surface orientations of specular surfaces by using the photometric stereo method,” in Shape Recovery, L. B. Wolff, S. A. Shafer, and G. E. Healey, eds. (Jones and Bartlett Publishers, Inc., USA, 1992), pp. 268–276.

33. B. Tunwattanapong, G. Fyffe, P. Graham, J. Busch, X. Yu, A. Ghosh, and P. Debevec, “Acquiring reflectance and shape from continuous spherical harmonic illumination,” ACM Trans. Graph. 32(4), 1–12 (2013). [CrossRef]  

34. T. Chen, M. Goesele, and H.-P. Seidel, “Mesostructure from specularity,” in Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2, (IEEE Computer Society, Washington, DC, USA, 2006), CVPR ’06, pp. 1825–1832.

35. A. C. Sanderson, L. E. Weiss, and S. K. Nayar, “Structured highlight inspection of specular surfaces,” IEEE Trans. Pattern Anal. Machine Intell. 10(1), 44–55 (1988). [CrossRef]  

36. S. K. Nayar, A. C. Sanderson, L. E. Weiss, and D. A. Simon, “Specular surface inspection using structured highlight and gaussian images,” IEEE Trans. Robot. Automat. 6(2), 208–218 (1990). [CrossRef]  

37. C. Röttinger, C. Faber, M. Kurz, E. Olesch, G. Häusler, and E. Uhlmann, “Deflectometry for ultra-precision machining - measuring without rechucking,” in Proceedings of DGaO, (2011).

38. “Kokomo opalescent glass co,” https://www.kog.com.

39. S. Perkins, “New app reveals the hidden landscapes within georgia o’keeffe’s paintings,” Sci. Mag. (2019).

40. L. Strelich, “Why are georgia o’keeffe’s paintings breaking out in pimples?” Smithsonian Mag. (2019).

41. J. Salvant, M. Walton, D. Kronkright, C.-K. Yeh, F. Li, O. Cossairt, and A. K. Katsaggelos, “Photometric stereo by uv-induced fluorescence to detect protrusions on georgia o’keeffe’s paintings,” Met. Soaps Art (2019).

42. R. Frankot and R. Chellappa, “A method for enforcing integrability in shape from shading algorithms,” IEEE Trans. Pattern Anal. Machine Intell. 10(4), 439–451 (1988). [CrossRef]  

43. O. Arold, S. Ettl, F. Willomitzer, and G. Häusler, “Hand-guided 3D surface acquisition by combining simple light sectioning with real-time algorithms,” arXiv e-prints arXiv:1401.1946 (2014).

44. F. Willomitzer, S. Ettl, O. Arold, and G. Häusler, “Flying triangulation - a motion-robust optical 3d sensor for the real-time shape acquisition of complex objects,” AIP Conf. Proc. 1537, 19–26 (2013). [CrossRef]  

45. J. Qian, S. Feng, T. Tao, Y. Hu, K. Liu, S. Wu, Q. Chen, and C. Zuo, “High-resolution real-time 360° 3d model reconstruction of a handheld object with fringe projection profilometry,” Opt. Lett. 44(23), 5751–5754 (2019). [CrossRef]
