
Calibration of position and orientation for point light source synchronously with single image in photometric stereo


Abstract

In this paper, a calibration method for a non-isotropic point light source is developed, which is capable of calibrating the position and orientation of the light source simultaneously from a single image. Based on the bidirectional reflectance distribution function (BRDF), the relationship between the shape of the observed surface and the recorded grayscale image is first investigated in a more accurate way. Based on this model, a cost function is proposed to evaluate the estimation error, and a new iteration method, free of cumulative error, is designed to recursively estimate the point light source parameters. In simulations, the average reconstruction root mean square error (RMSE) is below 0.9. Experiments are performed by calibrating a point light source in a photometric stereo measurement system, and the corresponding RMSE is below 1.2 gray levels. Further analysis shows that one image is enough to achieve precise calibration with the proposed method.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Photometric stereo is an important shape reconstruction technique in medical research [1], archaeology [2], and on-line industrial measurement [3,4]; it can acquire 3D surface information from only a few CCD images by analyzing the relationship between image grayscale and scene shape [5–7]. The measurement accuracy of the photometric technique is determined by how accurately the light source information is acquired [8].

In photometric stereo, calibration of the point light source is very important [9–11], especially for high-precision measurements such as fingerprint detection [12] and dental modeling [13]. Many efforts have been devoted to point light source calibration, most of which employ calibration targets of well-known shape. Jens Ackermann et al. [14] used a group of billiard balls to calibrate the position of an isotropic point light source. Xie et al. [6] used a partially bright and partially Lambertian surface to calibrate a set of LEDs. However, the above methods determine the orientation from the previously estimated position, which puts the orientation calibration at risk of "cumulative error". There are also reports of calibrating the orientation and position simultaneously [15–19]. Katsumi Mori et al. [15] achieved point light source calibration using a cuboid, which simplified the calibration procedure and successfully retrieved the light source information. Jaesik Park et al. [16] calibrated the point light source by analyzing a series of images of a weakly textured planar scene. Both methods estimate the light source parameters by identifying the coordinates of several highlight points, which imposes stricter requirements on the system setup and on the number of images needed for the subsequent optimal estimation.

In this manuscript, an image-based calibration method for a non-isotropic point light source is proposed, which can obtain the orientation and position of the light source simultaneously. By analyzing the relationship between the illumination and the image gray levels in a more accurate way, the calibration principle based on a specially designed Lambertian plane is introduced. During the estimation, the light source parameters are divided into two groups according to the coupling relationship between them, and a specific iterative approach is designed to determine them. The RMSE is below 0.9 in the simulations and below 1.2 in the experiments, with grayscale in the range [0, 255]. Further analysis shows that, for the presented approach, one image is enough to achieve satisfactory calibration precision.

2. Light source model and properties

2.1 Point light source modeling

Consider the case of a uniform non-textured plane illuminated by a point light source and observed by a calibrated camera, as shown in Fig. 1, where o_c is the optical center of the camera, o is the center of the image plane, and f is the focal length. p is a point on surface Π whose coordinates are [X, Y, Z] in the camera frame o_c x_c y_c z_c. p' is the projection of p, with coordinates [u, v] in the image frame ouv. n(p) is the surface normal vector at point p, s(p) is the lighting vector representing the light beam transmitted from p_s to p, and p_s is the position of the point light source. Here, in order to simplify the mathematical expression, s(p) is oriented along the opposite direction of the incident light.

Fig. 1 Geometry of a camera observing an object illuminated by a point light source.

The gray level of pixel p', conjugate to p, can be expressed by Eq. (1):

$$I(p') = \rho(p)\, s(p) \cdot n(p) \tag{1}$$
where ρ(p) is the albedo of point p on the plane Π; for a Lambertian surface, ρ(p) = 1. Here the lighting vector s(p) is effectively the light source model: it describes the variation of light intensity by a vector pointing from p to p_s whose norm equals the intensity of the light ray along this direction. The expression of s(p) is given by Eq. (2):
$$s(p) = \frac{\Phi(\theta,\phi)}{r^2}\,\frac{p_s - p}{\|p - p_s\|} \tag{2}$$
where r is the distance between the light source and point p, and 1/r^2 is the attenuation factor due to the conservation of luminous energy in a non-absorbing medium [20]. θ and ϕ are the vertical and horizontal angular deviations of the light beam, and Φ(θ,ϕ) > 0 is the intensity of the emitted light ray. In general, the intensity distribution of the light emitted by the point light source is uniform around its light axis, which can be indicated by a unit-length vector n_s; this means Φ(θ,ϕ) is independent of ϕ. The lighting vector at p illuminated by p_s is then given by Eq. (3):

$$s(p) = \frac{\Phi(\theta)}{\|p - p_s\|^2}\,\frac{p_s - p}{\|p - p_s\|} \tag{3}$$

The dependence of the intensity on θ characterizes the non-isotropic property of the light source. As shown in Fig. 2, the varying radiant intensities of a light source are represented by its radiant intensity distribution (RID), which is often visualized by a polar graph.

Fig. 2 Polar graphs of Φ(θ) radiant intensity distribution curves.

The intensity decreases as the emitting direction deviates from the light axis, and Φ(θ) can be calculated through Eq. (4):

$$\Phi(\theta) = \Phi_0 \cos^{\mu}\theta \tag{4}$$
where Φ_0 = Φ(0) corresponds to the light intensity in the direction of the light axis and is determined solely by the physical properties of the light source. μ is the parameter expressing the intensity distribution of the light source; for a Lambertian source, μ = 1. Substituting Eq. (4) into Eq. (3), a more accurate lighting vector can be written as Eq. (5):

$$s(p) = \Phi_0 \cos^{\mu}\theta\,\frac{p_s - p}{\|p - p_s\|^3} = \Phi_0 \left( n_s \cdot \frac{p - p_s}{\|p - p_s\|} \right)^{\mu} \frac{p_s - p}{\|p - p_s\|^3} \tag{5}$$
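To make the model concrete, the following is a minimal numerical sketch of the lighting vector of Eq. (5); it assumes NumPy, and the function name and default parameters are illustrative rather than taken from the paper.

    import numpy as np

    def lighting_vector(p, p_s, n_s, phi0=1.0, mu=1.0):
        """Non-isotropic point light vector s(p) of Eq. (5).
        p: surface point, p_s: light position, n_s: unit light axis
        (pointing toward the scene); all (3,) arrays in the camera frame."""
        d = p - p_s                                 # ray from source to point
        r = np.linalg.norm(d)
        cos_theta = max(np.dot(n_s, d) / r, 0.0)    # deviation from light axis
        return phi0 * cos_theta**mu * (p_s - p) / r**3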

2.2 Gray level calculation

The illumination E(p) expresses the light incident on a surface point p, and the luminance L(p) describes the light reflected from this point. The relationship between L(p) and E(p) can be formulated by Eq. (6):

$$L(p) = \frac{\rho(p)}{\pi} E(p) \tag{6}$$

Based on Lambert's cosine law, the illumination E(p) at surface point p can be calculated by Eq. (7):

$$E(p) = s(p) \cdot n(p) \tag{7}$$

On the image plane, the illumination ϵ(p') of the pixel p' is related to the luminance L(p) as described by Eq. (8):

$$\epsilon(p') = \beta \cos^4 \alpha(p)\, L(p) \tag{8}$$
where β is a coefficient describing the clarity of the image, α(p) is the angle between the light beam and the optical axis of the camera, and the cos^4 α(p) term accounts for the darkening of the image toward its edges. Combining Eqs. (6)–(8), the gray level J(p') at pixel p' is proportional to its illumination ϵ(p'), as indicated by Eq. (9):
$$J(p') = \gamma\,\epsilon(p') = \cos^4\alpha(p)\,\frac{\gamma\beta\rho(p)}{\pi}\,\Phi_0 \left( n_s \cdot \frac{p - p_s}{\|p - p_s\|} \right)^{\mu} \frac{(p_s - p) \cdot n(p)}{\|p - p_s\|^3} \tag{9}$$
where $\cos\alpha(p) = f\big/\sqrt{\|p'\|^2 + f^2}$ and γ is an optical coefficient. The "corrected gray level" I(p') is defined by Eq. (10):

$$I(p') = \frac{J(p')}{\cos^4\alpha(p)} \tag{10}$$

Regrouping all the multiplicative coefficients into one coefficient $\Psi = \gamma\beta\rho_0\Phi_0/\pi$, the corrected gray level of pixel p' can then be developed into Eq. (11):

$$I(p') = \Psi \left( n_s \cdot \frac{p - p_s}{\|p - p_s\|} \right)^{\mu} \frac{(p_s - p) \cdot n(p)}{\|p - p_s\|^3} \tag{11}$$
where p is related to the shape of the object, and (Ψ, p_s, n_s) are the parameters describing the light source, which will be calibrated in the next section.
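Equation (11) is the forward model used throughout the calibration. Reusing the sketch above, a hedged implementation could read as follows (with Φ_0 absorbed into Ψ, as in the text):

    def corrected_gray_level(p, n, p_s, n_s, psi, mu=1.0):
        """Corrected gray level I(p') of Eq. (11) at surface point p with
        unit normal n; psi absorbs gamma, beta, rho_0 and Phi_0."""
        return psi * np.dot(lighting_vector(p, p_s, n_s, 1.0, mu), n)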

3. Calibration methods

3.1 Calibration principle

According to Eq. (11), with the coordinates p, the normal vector n, and the grayscale I of the observed object, the parameters (Ψ, p_s, n_s) of the point light source can be estimated. The calibration principle is illustrated in Fig. 3, which shows the geometric relationship between the calibration target and its CCD image.

Fig. 3 Geometric relationship between the calibration target and the CCD image.

A Lambertian plate with a rigid and smooth surface is designed as a customized calibration target for calibrating the point light source. Based on the imaging principle and the geometric properties of the circle marks, the mathematical expression of the calibration target can be calculated, and the coordinates of p and the normal vector n can then be determined.

To facilitate subsequent calculations, an "estimated corrected gray level" Ĩ(p') is defined by Eq. (12):

$$\tilde I(p') = \tilde\Psi \left( \tilde n_s \cdot \frac{p - \tilde p_s}{\|p - \tilde p_s\|} \right)^{\mu} \frac{(\tilde p_s - p) \cdot n(p)}{\|p - \tilde p_s\|^3} \tag{12}$$
where "~" denotes an estimate. (Ψ, p_s, n_s) can be estimated by minimizing the sum of squared residuals between I(p') and Ĩ(p'), as shown in Eq. (13):
$$T(\Psi, p_s, n_s) = \sum_{j=1}^{q} \sum_{p \in \Pi_j} \left( I(p') - \tilde\Psi \left( \tilde n_s \cdot \frac{p - \tilde p_s}{\|p - \tilde p_s\|} \right)^{\mu} \frac{(\tilde p_s - p) \cdot n(p)}{\|p - \tilde p_s\|^3} \right)^2 \tag{13}$$
where T(Ψ, p_s, n_s) is the cost function, q is the number of positions at which the calibration target has been placed, and j indexes those positions. Note that, from the robustness point of view, this method does not require all points within the field of view to complete the estimation.
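As a sketch of Eq. (13), the cost can be evaluated over the sampled target points as below; target points from all q poses are assumed to be pooled into flat arrays, the spherical parameterization (θ, ϕ) of n_s anticipates Sec. 3.2, and all argument names are illustrative.

    def spherical_axis(theta, phi):
        """Unit light axis n_s from the spherical angles of Sec. 3.2."""
        return np.array([np.sin(theta) * np.cos(phi),
                         np.sin(theta) * np.sin(phi),
                         np.cos(theta)])

    def cost(params, points, normals, gray, mu=1.0):
        """Cost T of Eq. (13); params = (Psi, Xs, Ys, Zs, theta, phi)."""
        psi, xs, ys, zs, theta, phi = params
        p_s, n_s = np.array([xs, ys, zs]), spherical_axis(theta, phi)
        model = [corrected_gray_level(p, n, p_s, n_s, psi, mu)
                 for p, n in zip(points, normals)]
        return float(np.sum((np.asarray(gray) - model) ** 2))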

By using the circle Hough transform (CHT) [21,22], the centers of all circles on the calibration board can be identified. From the known distance between the centers of adjacent circles, the center coordinates c of the circles and the orientation n of the calibration target can both be calculated. Every point p on the target satisfies Eq. (14):

$$\begin{cases} (p - c) \cdot n = 0 \\ p - o_c = k\, l \end{cases} \tag{14}$$
where l is the unit vector of the viewing ray from o_c to p and k is the distance between them. With the intrinsic matrix M of the camera, the relationship between p = [X, Y, Z] and p' = [u, v] is given by Eq. (15):

$$Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = Mp = \begin{bmatrix} f & 0 & 0 \\ 0 & f & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} \tag{15}$$

By solving Eqs. (14) and (15) simultaneously, the coordinates of each point p can be calculated, and the cost function T(Ψ, p_s, n_s) can then be established.
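A possible realization of this step is sketched below, building on the previous sketches: circle centers are found with OpenCV's Hough transform, and each pixel is back-projected onto the fitted target plane by intersecting its viewing ray with the plane, per Eqs. (14)–(15). The file name and Hough parameters are placeholders, not values from the paper.

    import cv2

    # Detect circle centers on the calibration board; circles[0, :, :2]
    # holds the center pixels used to recover c and n from the known spacing.
    img = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
    circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=30, minRadius=5, maxRadius=50)

    def backproject(u, v, f, c, n):
        """Intersect the viewing ray of centered pixel (u, v) with the plane
        (p - c) . n = 0; o_c is the camera-frame origin (Eqs. (14)-(15))."""
        l = np.array([u, v, f], dtype=float)
        l /= np.linalg.norm(l)              # unit vector of the viewing ray
        k = np.dot(c, n) / np.dot(l, n)     # distance to the plane along the ray
        return k * l                        # surface point p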

3.2 Optimal parameter estimation

In this part, a dedicated optimization-based approach is proposed to estimate the parameters (Ψ, p_s, n_s). Ψ is a scalar; the position p_s has coordinates [X_ps, Y_ps, Z_ps]; and n_s can be expressed as [θ_ns, ϕ_ns, 1] in a spherical coordinate system. The parameters can therefore be arranged into six variables (Ψ, X_ps, Y_ps, Z_ps, θ_ns, ϕ_ns). Considering the coupling between p_s and n_s in Eq. (11), these six variables are divided into two groups: Ψ and (X_ps, Y_ps, Z_ps, θ_ns, ϕ_ns). During the iteration, all variables are updated along the gradient descent direction of Eq. (13). The partial derivatives of T(Ψ, p_s, n_s) are calculated by Eq. (16):

$$\mathrm{grad}(T) = \frac{\partial T}{\partial \Psi} + \frac{\partial T}{\partial p_s} + \frac{\partial T}{\partial n_s} = 0 + \frac{\partial T}{\partial X_{p_s}} + \frac{\partial T}{\partial Y_{p_s}} + \frac{\partial T}{\partial Z_{p_s}} + \frac{\partial T}{\partial \theta_{n_s}} + \frac{\partial T}{\partial \phi_{n_s}} \tag{16}$$

Based on the grouping of the variables, Eq. (16) can be divided into two parts: a constant zero and the partial derivatives with respect to p_s and n_s. Consequently, the iteration process is split into the update of (X_ps, Y_ps, Z_ps, θ_ns, ϕ_ns) and the update of Ψ, as illustrated in Fig. 4, where e is the accuracy threshold: when T is less than e, the calibration is considered complete. p_s and n_s are updated by Eq. (17):

$$\begin{bmatrix} X_{p_s}^{(k)} \\ Y_{p_s}^{(k)} \\ Z_{p_s}^{(k)} \end{bmatrix} = \begin{bmatrix} X_{p_s}^{(k-1)} \\ Y_{p_s}^{(k-1)} \\ Z_{p_s}^{(k-1)} \end{bmatrix} - \begin{bmatrix} \partial T/\partial X_{p_s} \\ \partial T/\partial Y_{p_s} \\ \partial T/\partial Z_{p_s} \end{bmatrix} \Omega(T), \qquad \begin{bmatrix} \theta_{n_s}^{(k)} \\ \phi_{n_s}^{(k)} \end{bmatrix} = \begin{bmatrix} \theta_{n_s}^{(k-1)} \\ \phi_{n_s}^{(k-1)} \end{bmatrix} - \begin{bmatrix} \partial T/\partial \theta_{n_s} \\ \partial T/\partial \phi_{n_s} \end{bmatrix} \omega(T) \tag{17}$$
where X_ps^(k), Y_ps^(k), Z_ps^(k), θ_ns^(k), ϕ_ns^(k) are the estimates at the k-th iteration, and Ω(T), ω(T) are step-size functions; here Ω(T) = ln(T+1) and ω(T) = 0.08 ln(T+1). Since p_s and n_s are calculated simultaneously, the effect of cumulative error is avoided.
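One update of Eq. (17) can be sketched with forward-difference gradients (the paper does not give analytic derivatives); Ψ is skipped here because its group is updated separately by Eq. (19).

    def gradient_step(params, points, normals, gray, eps=1e-6):
        """One Eq. (17) update with Omega(T) = ln(T+1), omega(T) = 0.08 ln(T+1)."""
        T = cost(params, points, normals, gray)
        grad = np.zeros(6)
        for i in range(1, 6):                # indices 1..5: Xs, Ys, Zs, theta, phi
            shifted = params.copy()
            shifted[i] += eps
            grad[i] = (cost(shifted, points, normals, gray) - T) / eps
        step = np.log(T + 1.0)
        new = params.copy()
        new[1:4] -= grad[1:4] * step         # position update with Omega(T)
        new[4:6] -= grad[4:6] * step * 0.08  # orientation update with omega(T)
        return new, T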

Fig. 4 The iteration process.

Using Eq. (17), the estimate of the corrected gray level I(p')^(k) at the k-th iteration step is obtained from Eq. (18):

$$I(p')^{(k)} = \Psi^{(k)} \left( n_s^{(k+1)} \cdot \frac{p - p_s^{(k+1)}}{\|p - p_s^{(k+1)}\|} \right)^{\mu} \frac{\left( p_s^{(k+1)} - p \right) \cdot n(p)}{\|p - p_s^{(k+1)}\|^3} \tag{18}$$
and Ψ is updated by Eq. (19):
$$\tilde\Psi = \frac{\sum_{p' \in \text{image}} I(p')}{\sum_{p' \in \text{image}} I(p')^{(k)}}\,\Psi^{(k)}, \qquad \Psi^{(k+1)} = a\,\Psi^{(k)} + b\,\tilde\Psi \tag{19}$$
where a and b are weight coefficients that moderate the update of Ψ. A self-test process is used to prevent the iteration from falling into a local optimum: by comparing with previous results, the algorithm determines whether the relative decline of T(Ψ, p_s, n_s) is less than 1%; if so, the current estimate is considered a local optimum, and a small perturbation of 0.02 T(Ψ, p_s, n_s) is added to the current result to help it escape and let the iteration continue.
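The full loop might then look as follows. The weights a and b, the initial guess, and the exact form of the escape perturbation (here a random jitter scaled by 0.02 T) are assumptions, since the paper does not specify them.

    def predicted(params, points, normals, mu=1.0):
        """Model gray levels I(p')^(k) for all sample points (Eq. (18))."""
        psi, xs, ys, zs, theta, phi = params
        p_s, n_s = np.array([xs, ys, zs]), spherical_axis(theta, phi)
        return np.array([corrected_gray_level(p, n, p_s, n_s, psi, mu)
                         for p, n in zip(points, normals)])

    def calibrate(points, normals, gray, init, e=1e-3, a=0.5, b=0.5,
                  max_iter=5000):
        """Iterate Eqs. (17)-(19) until T < e, following Sec. 3.2."""
        rng = np.random.default_rng(0)
        params, T_prev = np.asarray(init, dtype=float), np.inf
        for _ in range(max_iter):
            params, T = gradient_step(params, points, normals, gray)
            if T < e:                        # accuracy threshold e reached
                break
            # Eq. (19): rescale Psi so model and image gray levels agree in sum.
            psi_tilde = params[0] * np.sum(gray) / np.sum(
                predicted(params, points, normals))
            params[0] = a * params[0] + b * psi_tilde
            # Self-test: a decline below 1% flags a local optimum; perturb
            # the estimate (assumed random, scale 0.02*T) to let it escape.
            if np.isfinite(T_prev) and (T_prev - T) / T_prev < 0.01:
                params[1:] += rng.normal(scale=0.02 * T, size=5)
            T_prev = T
        return params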

3.3 Simulation and analysis

Since the true position and orientation of the light source cannot be obtained directly, the accuracy of the estimation is evaluated by comparing the grayscale of the CCD image with that of an image synthesized from the estimated light source parameters.

Synthetic images were simulated according to the light model of Eq. (11), with the light position set to [123, 23, −12] and the light orientation to [0.5236, 0.2618] in the camera coordinate system o_c x_c y_c z_c. We generated the reflection maps of the calibration target at three different poses; the simulation results are displayed in Fig. 5.

Fig. 5 Simulation experiment results.

The first column shows the three synthetic images, and the second column shows the images recovered with the estimated light source parameters (Ψ, p_s, n_s). The last column shows the difference between the synthetic and estimated images. The deviation statistics are given in Table 1.

Table 1. The statistics of deviation in the simulation

Table 1 shows that the average RMSE of the proposed method is 0.8713 and the average maximum difference is 1.0032, which demonstrates the feasibility of the proposed method.
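The deviation statistics reported in Tables 1–3 can be reproduced with standard definitions; the paper does not spell out the metric formulas, so the following assumes the usual mean absolute error, standard deviation, and RMSE of the gray-level difference map.

    def deviation_stats(observed, synthesized):
        """Gray-level deviation statistics between two images."""
        diff = observed.astype(float) - synthesized.astype(float)
        return {"average_error": float(np.mean(np.abs(diff))),
                "std": float(np.std(diff)),
                "rmse": float(np.sqrt(np.mean(diff ** 2))),
                "max": float(np.max(np.abs(diff)))}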

4. Experiment and analysis

The experimental setup is shown in Fig. 6, where a Basler CCD camera with a resolution of 960×1280 pixels and a 3 W point light source are used. The point light source and camera are mounted on a specially designed holder, and the LED is kept illuminating at constant power. To demonstrate the feasibility of the proposed algorithm, two images of the calibration target at different positions are captured: one is used to calibrate the light source, and the other to verify the accuracy of the calibration result.

Fig. 6 Experiment system setup.

The experimental results are illustrated in Fig. 7, where Fig. 7(a) is the image recorded by the CCD camera; Fig. 7(b) shows the grayscale distributions of the CCD image and of the synthetic image generated with the point light source position and orientation estimated from Fig. 7(a) by the proposed method; and Fig. 7(c) is the difference between the two grayscale maps in Fig. 7(b), which lies within [−2, 2] with the full-scale gray value ranging from 0 to 255. Note that a horizontal shift is added in Fig. 7(b) for display clarity.

Fig. 7 Experiment results.

Another image of the calibration target at a different position is recorded to verify the calibration accuracy; the result is shown in Fig. 8. Figure 8(a) is the CCD snapshot; Fig. 8(b) shows the grayscale distribution of Fig. 8(a) together with the grayscale distribution simulated from the point light source position and orientation calibrated in Fig. 7 (again with a horizontal shift added for display clarity); and Fig. 8(c) is the difference between the two intensity maps in Fig. 8(b), which also lies within [−2, 2].

Fig. 8 Verification experiment results.

The deviation statistics for Fig. 7 and Fig. 8 can be found in Table 2. From Fig. 7, Fig. 8, and Table 2, it can be seen that the average gray-level error is less than 1.0 on the 0–255 scale, the standard deviation is below 1.1, and the RMSE is below 1.2. Considering camera noise and the influence of environmental light, the calibration result is satisfactory.

Table 2. Deviation statistics

Furthermore, we performed calibrations with multiple images; the results are shown in Table 3. With the proposed iterative approach, increasing the number of calibration images does not effectively improve the calibration accuracy: the average error, standard deviation, and RMSE remain at the same level when one, three, or five images are used. This means one image is sufficient for this new method to calibrate a point light source; moreover, when image quality is poor, noise in additional input images may even reduce the calibration accuracy.

Table 3. Deviation statistics in calibration with multiple images

5. Conclusion

This paper presented a new non-isotropic point light source calibration method, which is capable of calibrating the position and orientation of a point light source simultaneously with only one image. Through a more detailed analysis, the relationship between the image grayscale and the position and orientation of the point light source is determined. A cost function is designed to convert the point light source calibration into an optimization problem, and by analyzing the coupling relationship between the point light source parameters, a new iteration process is proposed to perform the estimation. A customized Lambertian calibration target is designed to assist the calibration process. The simulations and experiments verify the accuracy and effectiveness of the proposed method.

Funding

National Natural Science Foundation of China and the Civil Aviation Administration of China (CAAC) (U1633101); The Fundamental Research Funds for the Central Universities of Civil Aviation University of China special funds (3122014H004).

References

1. N. Tsumura, Y. Miyake, and F. H. Imai, “Medical vision: measurement of skin absolute spectral-reflectance image and the application to component analysis,” in Proceedings of the 3rd International Conference on Multispectral Color Science (MCS’01), (2001), pp. 25–28.

2. F. Bernardini, H. Rushmeier, I. M. Martin, J. Mittleman, and G. Taubin, “Building a digital model of Michelangelo’s Florentine Pietà,” IEEE Comput. Graph. Appl. 22(1), 59–67 (2002). [CrossRef]

3. X. K. Z. P. Y. Chaolin, “On-line detection technique of tiny surface defects for metal plates and strips based on photometric stereo,” J. Mech. Eng. 4, 006 (2013).

4. J. Wu, G. Ding, X. Chen, T. Han, X. Cai, L. Lei, and J. Wei, “Nano step height measurement using an optical method,” Sens. Actuators A Phys. 257, 92–97 (2017). [CrossRef]  

5. Y. Quéau, B. Durix, T. Wu, D. Cremers, F. Lauze, and J.-D. Durou, “LED-based photometric stereo: modeling, calibration and numerical solution,” J. Math. Imaging Vis. 60(3), 313–340 (2018). [CrossRef]

6. L. Xie, Z. Song, G. Jiao, X. Huang, and K. Jia, “A practical means for calibrating an LED-based photometric stereo system,” Opt. Lasers Eng. 64, 42–50 (2015). [CrossRef]

7. S. Sengupta, H. Zhou, W. Forkel, R. Basri, T. Goldstein, and D. Jacobs, “Solving uncalibrated photometric stereo using fewer images by jointly optimizing low-rank matrix completion and integrability,” J. Math. Imaging Vis. 60(4), 563–575 (2018). [CrossRef]  

8. R. Kozera and A. N. Prokopenya, “Application of computer algebra to photometric stereo with two light sources,” Program. Comput. Softw. 44(2), 112–119 (2018). [CrossRef]  

9. S. Karaoglu, Y. Liu, T. Gevers, and A. W. M. Smeulders, “Point light source position estimation from RGB-D images by learning surface attributes,” IEEE Trans. Image Process. 26(11), 5149–5159 (2017). [CrossRef] [PubMed]

10. B. J. Boom, S. Orts-Escolano, X. X. Ning, S. McDonagh, P. Sandilands, and R. B. Fisher, “Interactive light source position estimation for augmented reality with an RGB-D camera,” Comput. Animat. Virtual Worlds 28(1), e1686 (2017). [CrossRef]

11. X. Cao and H. Foroosh, “Camera calibration and light source orientation from solar shadows,” Comput. Vis. Image Underst. 105(1), 60–72 (2007). [CrossRef]  

12. G. McGunnigle and M. Chantler, “Recovery of fingerprints using photometric stereo,” in Irish Machine Vision and Image Processing Conference, (2001), pp. 192–199.

13. D. Laurendeau, L. Guimond, and D. Poussart, “A computer-vision technique for the acquisition and processing of 3-D profiles of dental imprints: an application in orthodontics,” IEEE Trans. Med. Imaging 10(3), 453–461 (1991). [CrossRef]   [PubMed]  

14. J. Ackermann, S. Fuhrmann, and M. Goesele, “Geometric point light source calibration,” in VMV, (2013), pp. 161–168.

15. K. Mori, E. Watanabe, K. Watanabe, and S. Katagiri, “Estimation of object color, light source color, and direction by using a cuboid,” Syst. Comput. Jpn. 36(12), 1–10 (2005). [CrossRef]  

16. J. Park, S. N. Sinha, Y. Matsushita, Y.-W. Tai, and I. S. Kweon, “Calibrating a non-isotropic near point light source using a plane,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2014), pp. 2259–2266. [CrossRef]

17. D. Schnieders and K.-Y. K. Wong, “Camera and light calibration from reflections on a sphere,” Comput. Vis. Image Underst. 117(10), 1536–1547 (2013). [CrossRef]  

18. T. Aoto, T. Taketomi, T. Sato, Y. Mukaigawa, and N. Yokoya, “Position estimation of near point light sources using a clear hollow sphere,” in Proceedings of the 21st International Conference on Pattern Recognition (ICPR) (IEEE, 2012), pp. 3721–3724.

19. A. Giachetti, C. Daffara, C. Reghelin, E. Gobbetti, and R. Pintus, “Light calibration and quality assessment methods for reflectance transformation imaging applied to artworks’ analysis,” in Optics for Arts, Architecture, and Archaeology V, vol. 9527 (International Society for Optics and Photonics, 2015), p. 95270B.

20. L. Ma, Y. Lyu, X. Pei, Y. M. Hu, and F. M. Sun, “Scaled SFS method for Lambertian surface 3D measurement under point source lighting,” Opt. Express 26(11), 14251–14258 (2018). [CrossRef]   [PubMed]  

21. T. D’Orazio, C. Guaragnella, M. Leo, and A. Distante, “A new algorithm for ball recognition using circle Hough transform and neural classifier,” Pattern Recognit. 37(3), 393–408 (2004). [CrossRef]

22. D. Koc-San, S. Selim, N. Aslan, and B. T. San, “Automatic citrus tree extraction from UAV images and digital surface models using circular Hough transform,” Comput. Electron. Agric. 150, 289–301 (2018). [CrossRef]
