Optica Publishing Group

Optical filter highlighting spectral features Part II: quantitative measurements of cosmetic foundation and assessment of their spatial distributions under realistic facial conditions

Open Access

Abstract

We previously proposed a filter that could detect cosmetic foundations with high discrimination accuracy [Opt. Express 19, 6020 (2011)]. This study extends the filter’s functionality to the quantification of the amount of foundation and applies the filter to the assessment of spatial distributions of foundation under realistic facial conditions. Human faces to which quantitatively controlled amounts of cosmetic foundation had been applied were measured using the filter, and a calibration curve between the pixel values of the image and the amount of foundation was created. The optical filter was then applied to visualize spatial foundation distributions under realistic facial conditions, which clearly indicated areas on the face where foundation remained even after cleansing. The results confirm that the proposed filter can visualize and nondestructively inspect foundation distributions.

©2011 Optical Society of America

1. Introduction

Spectral imaging has recently become an attractive research topic because it can visualize the spatial distribution of an object’s properties that appear in its spectral features. Human skin is an attractive target of spectral imaging because it contains rich information about health and mental conditions, but a strictly noninvasive method is required for measuring them. Skin color is mainly due to human skin pigments such as melanin, carotene, and hemoglobin [2]. These pigments have characteristic spectral absorption properties in the visible wavelength region. Several methods for measuring and visualizing human skin pigmentation have been reported and applied to evaluating human skin diseases such as inflammation and melanoma [3–9]. In the field of cosmetics, in addition to such skin pigmentation measurements, a noninvasive method for measuring applied cosmetic products is strongly desired. Because cosmetic products are used to hide skin flaws and present beautiful skin, the measurement of applied cosmetic products under realistic conditions is important for their development. Doi et al. reported a method for estimating the spectral reflectance of made-up skin from the spectral reflectances of bare skin and cosmetic foundations, and the results showed quite good estimation accuracy [10]. This technique can also be used for estimating the thickness of the foundation layer.

However, noninvasive inspection using spectral imaging remains problematic because of its cost and the measuring skill it requires. In our previous study, we described a design method for an optical filter that highlights an object’s spectral features as an alternative to the spectral imaging technique [1]. Preece et al. described a similar approach for estimating the saturation of skin pigmentation that yielded better estimation accuracy than the conventional trichromatic signals of an RGB camera [11]. We designed our filter to highlight the spectral difference between two target colors by using only one filter mounted in front of the lens of an RGB digital camera. The experimental results showed that the color signals of human skin with and without an application of cosmetic foundation were distinctly separated by the filter. In addition, the presence of cosmetic foundation on the skin was perfectly detected from the obtained facial image. This discrimination was achieved by applying linear discriminant analysis (LDA) to the filtered color signals.

This study focuses on the fact that the discriminant score obtained by the LDA varies with the difference between the color signals of skin with and without cosmetic foundation, implying that it might reflect the amount of applied foundation. Therefore, we extended the filter’s application to the quantification of the amount of foundation and finally applied it to the assessment of spatial foundation distributions under realistic facial conditions.

2. Materials and methods

2.1. The optical filter for discrimination of cosmetic foundations

The details of the optical filter used for the discrimination of cosmetic foundations were described in the accompanying paper [1]. The spectral transmittance of the filter was theoretically designed to minimize the misclassification of LDA performed on the filtered spectral data sets of human skin in the presence and absence of cosmetic foundation. LDA was performed in the color signal space, for which the r–g chromaticity signal transformed from the RGB trichromatic signal was used. The RGB signal was defined using the spectral sensitivity of a commercial RGB digital camera (Nikon D70). The filter was realized as a multilayer thin-film filter composed of 31 layers of SiO2 and TiO2 by vacuum deposition (Fig. 1).


Fig. 1 (a) Spectral transmittance of the theoretically designed and the optically realized filter. Theoretical transmittance was designed by optimization. The optical filter was realized by the vacuum deposition technology. (b) Developed optical filter. The optical filter was made of a multilayer thin film that was composed of 31 layers of SiO2 and TiO2.


The discrimination of human skin in the presence and absence of cosmetic foundation was obtained from the RGB image taken with the digital camera equipped with the filter. Also, in this case, RGB pixel values were transformed to chromaticity signals and the discriminant analysis was performed on the chromaticity coordinates. The computations are as follows:

C = (r, g) = ( C_R/(C_R + C_G + C_B), C_G/(C_R + C_G + C_B) ),        (1)

f_d(C) = ( Σ^{-1}(μ_1 − μ_2) )^t ( C − (μ_1 + μ_2)/2 ) − log(p_2/p_1),        (2)
where C is a color signal transformed from the trichromatic signal (output of the RGB camera) C_k (k ∈ {R, G, B}). A discriminant score f_d(C) is computed from several predefined parameters and the observed color signal C. In Eq. (2), Σ is the pooled variance–covariance matrix of the predefined color signal sets C_1 and C_2, and μ_1 and μ_2 are the means of C_1 and C_2, respectively. The measurement of the predefined data sets C_1 and C_2 is described in Part I [1]. When f_d(C) > 0, an observed color signal is classified into category 1; otherwise, it is classified into category 2. The prior probabilities p_1 and p_2 for categories 1 and 2 were assumed to be p_1 = p_2 = 0.5.
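As a concrete illustration, the chromaticity transform of Eq. (1) and the discriminant score of Eq. (2) can be sketched in Python; the function names and the NumPy-based implementation are ours, not from the original paper:

```python
import numpy as np

def chromaticity(rgb):
    """r-g chromaticity of Eq. (1): rgb has shape (..., 3) holding
    (C_R, C_G, C_B); returns shape (..., 2) holding (r, g)."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb[..., :2] / s

def lda_params(C1, C2, p1=0.5, p2=0.5):
    """Pooled-covariance LDA parameters from two predefined signal sets."""
    mu1, mu2 = C1.mean(axis=0), C2.mean(axis=0)
    n1, n2 = len(C1), len(C2)
    pooled = ((n1 - 1) * np.cov(C1, rowvar=False) +
              (n2 - 1) * np.cov(C2, rowvar=False)) / (n1 + n2 - 2)
    w = np.linalg.solve(pooled, mu1 - mu2)   # Sigma^{-1} (mu1 - mu2)
    return w, mu1, mu2, np.log(p2 / p1)

def discriminant_score(C, w, mu1, mu2, log_prior):
    """Discriminant score f_d(C) of Eq. (2); > 0 means category 1."""
    return (C - (mu1 + mu2) / 2) @ w - log_prior
```

In practice C1 and C2 would be the filtered chromaticity signals of skin with and without foundation, as measured in Part I [1].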

As described in the accompanying paper [1], the discriminant score computation for each pixel of the image yielded satisfactory detection results. Meanwhile, the current study aims to create the calibration curve for estimating the amount of cosmetic foundation from the discriminant score.

2.2. Quantitative measurement

To obtain the calibration curve, quantitatively controlled amounts of liquid cosmetic foundation were applied to the face and facial images were measured. Each application area was 3 × 2 cm2, and there were 14 such areas over the face. The amounts of foundation applied were 0.5, 1, 1.5, 2, 3, 4, 5, 6, 8, and 10 μL, strictly controlled by repeated coating with a micropipette. The observation angles were −45, 0, and 45°. All the devices used for measurement were fixed throughout the measurement, and the subjects turned sideways to capture the profile images (Fig. 2). We manually extracted the color signals from the numbered areas in the figure to compute the discriminant scores. As shown in Fig. 2, several application areas were observed at multiple observation angles; the estimation error arising from the observation angle is discussed using this overlap. The measurement device was a commercially available camera (Nikon D70), and a fluorescent light (Diva-Lite, 6300 K) was selected as the illumination source. Polarizing films were installed on both the camera and the illumination source to eliminate specular reflection. This fluorescent light had a broad emission spectrum over the entire visible wavelength region and generated little heat. The measurement was carried out on four Japanese females aged 18–40 years. The subjects chose their personal favorite color of cosmetic foundation.


Fig. 2 Areas of application of cosmetic foundation for quantitative measurement. Cosmetic foundation was applied to 14 areas. Observation angles were −45, 0, and 45°. The application areas indicated in each facial image were used for the following analysis.


2.3. Evaluation of facial conditions after cleansing

Next, we attempted to apply the filter to visualize the distribution of foundation across the human face under various conditions. This experiment aimed to enable a statistical analysis of foundation distribution under realistic facial conditions measured using the proposed filter. The visualization target was the difference in foundation distributions depending on the method of cleansing. It is empirically known that foundation tends to remain around the eyes, nose, and hairline even after cleansing. We tested whether the developed filter could visualize and nondestructively inspect such foundation distributions under realistic conditions.

In this experiment, the following facial conditions were measured:

  • 1. Applying foundation
  • 2. Cleansed by the subject (CS)
  • 3. Applying foundation
  • 4. Cleansed in accordance with professional instruction (CP)

Here, we focus on the comparison of condition 2 (CS) and condition 4 (CP) to evaluate the differences in the finishing based on the cleansing methods adopted.

The camera and illumination setup was the same as in the previous experiment (Section 2.2). Twenty Japanese women aged 30–40 years participated in the measurement. The measurement was performed at constant room temperature and relative humidity (24 °C, 50%). In this experiment, two types of liquid foundation were used: eleven subjects used the normal foundation and nine subjects used the highly water-resistant foundation. Their comparison is also discussed.

3. Results

3.1. Results of the quantitative measurement

Figure 3(a) shows the discriminant scores for various amounts of applied cosmetic foundation, manually extracted from the measured images. Error bars show the standard deviations. There were strong positive correlations, logarithmic in character, between the amounts and the discriminant scores. Figure 3(b) shows the estimation error arising from the observation angle. Solid lines show the discriminant scores of the forehead and jaw observed at a 0° angle; scores at the same positions observed at −45° and +45° are shown as broken lines. The estimation errors among the observation angles were small; thus, the estimation of the amount of foundation was largely insensitive to the observation angle of the skin surface.


Fig. 3 (a) Discriminant scores that were manually extracted from the measured images. The error bars show the standard deviations. (b) Estimation errors due to the observation angle. Solid lines show the discriminant scores of the forehead and jaw (Nos. 1–4, 13, and 14 shown in Fig. 2) observed at a 0° angle. Scores at the same positions observed at −45° and +45° are shown as broken lines.


3.2. Calibration curve fitting

A logarithmic relationship between the discriminant score and the applied amount of foundation was determined from the quantitative measurements. Thus, the amount of foundation may be estimated with high accuracy from the discriminant score. In this study, we propose two types of estimation formulae:

y = a ln(x + c) + b,        (3)

y = a{ ln(x + b_{y_0} + c) − ln(b_{y_0} + c) } + y_0.        (4)

Here, the output variable y is the discriminant score, the input variable x is the amount of foundation applied, and y_0 is the discriminant score of bare skin. The other parameters a, b (with b_{y_0} in Eq. (4) denoting the offset determined from the bare-skin score y_0), and c are coefficients and constants. The inverse functions can thus be used as estimation formulae. Equation (3) is a simple logarithmic function that should transform a discriminant score to an amount of foundation with high accuracy, owing to the relationship indicated in the previous section (see 3.1). However, the amount map computed using Eq. (3) would not be zero over the whole face even if no foundation were applied, because the discriminant scores of bare skin are not constant. This is undesirable when a slight amount of cosmetic foundation needs to be detected and quantified. Therefore, we propose another formulation (Eq. (4)) that includes image processing to meet this requirement. In this formula, the estimated amount of foundation (variable x in Eq. (4)) is constantly zero when bare skin is observed. An image of bare skin measured in the same measurement environment is required as the baseline image. An image transformation should also be applied to align the facial features of the two images in order to visualize the distribution of the cosmetic foundation.

We used the steepest descent and least squares methods to determine the unknown parameters a, b, and c. Table 1 lists the estimated parameters, coefficients of determination, and standard errors of prediction (SEP). Finally, the calibration equations for estimating the amount of foundation from the discriminant score are obtained:


Table 1. Parameters and Evaluated Values of the Calibration Curve

x = exp{ (y − b)/a } − c,        (5)

x = (b_{y_0} + c) exp{ (y − y_0)/a } − (b_{y_0} + c).        (6)
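A minimal sketch of fitting Eq. (3) and applying the inverse calibration of Eq. (5) might look as follows. The data here are synthetic stand-ins for the measured scores, and in place of the paper's steepest-descent fit we scan the nonlinear constant c over a grid, since for each fixed c the model is linear in a and b:

```python
import numpy as np

def fit_calibration(x, y, c_grid):
    """Fit y = a*ln(x + c) + b (Eq. (3)): for each candidate c, solve a and b
    by linear least squares and keep the c with the smallest residual."""
    best = None
    for c in c_grid:
        X = np.column_stack([np.log(x + c), np.ones_like(x)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ coef - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], c)
    _, a, b, c = best
    return a, b, c

def estimate_amount(y, a, b, c):
    """Inverse calibration of Eq. (5): amount of foundation from the score."""
    return np.exp((y - b) / a) - c

# Applied amounts (uL) from Section 2.2 and synthetic scores for the demo.
x = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 8.0, 10.0])
y = 2.0 * np.log(x + 0.3) - 1.0
a, b, c = fit_calibration(x, y, np.linspace(0.05, 1.0, 96))
```

The same grid-plus-linear-least-squares pattern extends to Eq. (4), which only adds the bare-skin score y_0 as a known per-pixel input.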

Figure 4 shows the relationship between the applied and estimated amounts of foundation. Error bars denote the standard deviations. As described in Table 1, both estimation formulae achieved high estimation accuracy for the applied cosmetic foundation. The coefficient of determination and SEP indicated slightly higher accuracy for the simpler formula. However, the estimation using the baseline correction achieved a marginally better result when only a small amount had been applied (Fig. 4); its SEP was advantageous for smaller amounts (<0.1 μL/cm2, Table 1).


Fig. 4 Relationship between the applied and estimated amounts of foundation. Error bars denote the standard deviations. (a) Without baseline correction: coefficient of determination is 0.9152 and SEP is 0.1557. (b) With baseline correction: coefficient of determination is 0.8978 and SEP is 0.1645. Estimation accuracy is higher for (a) than for (b); however, (b) has no error when the applied foundation is zero.


3.3. Visualization results

Both proposed calibration curves showed sufficient estimation accuracy, as described in Table 1 and Figs. 4(a) and 4(b). To compare the two formulae, we computed the foundation maps of the images that were measured in the quantitative measurement (Section 2.2). Figure 5 shows the maps of the computed amount of foundation for one subject. The computed maps were projected onto a make-up doll image by using the “local weighted mean” method of image transformation [12].


Fig. 5 Comparison of foundation maps computed using different calibration curves. (a) Without baseline correction. (b) With baseline correction. These are the results of one subject. Computed images were projected onto a make-up doll image by using the “local weighted mean” method of image transformation [12].


The maps of the amount of foundation were visualized by both methods with high estimation accuracy. Where cosmetic foundation was not applied, slight errors appeared on the foundation map computed by the calibration curve without baseline correction. Thus, the estimation formula with baseline correction is more suitable when the target amount of foundation is <0.167 μL/cm2. On the other hand, the calibration curve without baseline correction showed better estimation accuracy for most amounts of cosmetic foundation. Thus, the estimation method should be selected depending on the visualization target.

Next, we applied this method to visualize realistic made-up skin. In this experiment, cosmetic foundation was applied uniformly over the face so that the facial skin color appeared uniform. In total, 120 μL of liquid foundation was applied. The instruments used for measurement were the same as in the previous measurements (Section 2.2). We used the calibration curve without baseline correction (Eq. (5)) for the estimation.

Figure 6 shows the estimated foundation maps. The cosmetic foundation was distributed unevenly even though the skin color looked uniform to the human eye.


Fig. 6 Foundation maps of test data showing the foundation distribution of realistic made-up skin. Cosmetic foundation was uniformly applied over the face so that the facial skin color looks uniform.


3.4. Evaluation of the facial condition after cleansing

The results of the above examinations showed that the proposed filter had high sensitivity for detecting applied cosmetic foundation and could be a powerful visualization tool for cosmetic research. Thus, finally, the abovementioned techniques were applied to visualize the distribution of foundation across the human face after cleansing.

For all the measured images, we computed the discriminant score fd(C) for each pixel. Then, images for each control condition were averaged over all of the subjects on a “canonical face template” with adjustable positions for several facial features (eye, brow, mouth, nose, and face line) to evaluate the typical foundation distributions. A make-up doll was used as the canonical face template. We manually defined the control points and the mask of skin areas for each image and applied the image transformation to adjust the positions of the facial features for all the subjects. Finally, all pixel values of the averaged maps were transformed to the amounts of cosmetic foundations by using the calibration curve defined in the previous section. The calibration curve with baseline correction (Eq. (6)) was used to compute the average foundation amount map. The average map for condition CP was used as the baseline image.
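The per-subject averaging step can be sketched as follows; the local-weighted-mean registration itself is not shown, and the score maps are assumed to have already been warped onto the canonical face template (the function and argument names are ours):

```python
import numpy as np

def average_score_maps(score_maps, skin_mask):
    """Average registered per-pixel discriminant-score maps over subjects.

    score_maps: list of (H, W) arrays, one per subject, already warped onto
                the canonical face template.
    skin_mask:  (H, W) boolean mask of valid skin pixels on the template.
    Returns the mean map and the per-pixel standard deviation across
    subjects; the latter can drive the transparency weighting of Fig. 7.
    """
    stack = np.stack(score_maps)                # (subjects, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    mean = np.where(skin_mask, mean, np.nan)    # blank out non-skin pixels
    return mean, std
```

The averaged score map is then converted pixel-wise to an amount map through the calibration curve of Eq. (6).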

Figure 7 shows the cosmetic foundation distributions of CS and CP. Foundation remained in some areas even after cleansing, especially along the hairline and around the eyes (Fig. 7(a)). The amounts of foundation computed from these images showed a significant difference between CS and CP (p < 0.05). The results indicate that the cleansing performed by the subjects themselves (CS) was less thorough than the cleansing performed according to professional instruction (CP). In addition, the comparison of the normal and highly water-resistant foundations showed significant differences (see Fig. 8). Thus, the obtained results confirmed the practical performance of our filter.


Fig. 7 Average cosmetic foundation maps. (a) Foundation distribution of CS and (b) foundation distribution of CP. Average cosmetic foundation maps were computed using image transformation. Standard deviations among subjects were also computed for each condition and were used to show the reliability by changing the transparency rate depending on the standard deviation. The calibration curve with baseline correction (Eq. (6)) was used to compute the foundation map. All pixel values of (b) were zero because the average map of CP was used as the baseline image.



Fig. 8 Comparison of the normal and high-water-resistant foundation. Figures show the average foundation distribution maps of CS. (a) The average map of subjects who used normal foundation and (b) the average map for high-water-resistant foundation.


4. Discussion

In [10], the reflectance spectra of made-up skin were estimated by the Kubelka–Munk theory [13], and good estimation accuracy was obtained. Meanwhile, the output of our filter has an obvious logarithmic relationship with the amount of cosmetic foundation. In this section, we compare these two estimation methods.

In [10], estimation results were achieved using the following equations:

R(λ) = (1 − R_S) ( R_m(λ) + T_m(λ)^2 R_skin(λ) / (1 − R_m(λ) R_skin(λ)) ) + R_S,
R_m(λ) = 1 / ( a_m(λ) + b_m(λ) coth( D_m b_m(λ) S_m(λ) ) ),        (7)
T_m(λ) = b_m(λ) / ( a_m(λ) sinh( D_m b_m(λ) S_m(λ) ) + b_m(λ) cosh( D_m b_m(λ) S_m(λ) ) ),
where R(λ) is the reflectance spectrum of made-up skin. R(λ) is determined by four parameters: the specular reflectance between air and the skin surface R_S; the reflectance spectrum of the human skin surface R_skin; and the reflectance and transmittance spectra of the cosmetic foundation R_m and T_m, respectively. In turn, T_m and R_m are determined by the thickness D_m and the optical characteristics of the cosmetic foundation a_m, b_m, and S_m. According to [10], these optical characteristic parameters can also be estimated using the Kubelka–Munk theory as follows:
S_m(λ) = 1/(b_m(λ) D_0) [ coth^{-1}( (a_m(λ) − R_0(λ)) / b_m(λ) ) − coth^{-1}( (a_m(λ) − R_g(λ)) / b_m(λ) ) ],
a_m = (1/2)( 1/R_∞ + R_∞ ),  b_m = (a_m^2 − 1)^{1/2},        (8)
where D_0 and R_0 are the thickness of the cosmetic foundation layer and the spectral reflectance of the surface on which a thin layer of cosmetic foundation is formed, respectively. The parameter R_g is the reflectance spectrum of the background material, and R_∞ is the spectral reflectance of a surface with an optically thick cosmetic foundation layer. Therefore, the optical characteristic parameters of cosmetic foundations can be estimated from the experimentally obtained parameters R_0, R_g, R_∞, and D_0. The reflectance spectra of made-up skin R(λ) are then determined by the thickness D_m and the bare-skin reflectance R_skin.
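To reproduce the qualitative behavior of Fig. 9(a), the Kubelka–Munk relations of Eqs. (7) and (8) can be evaluated numerically. The sketch below is a simplification we introduce for illustration: the specular term R_S is omitted and S_m is folded into the relative layer thickness:

```python
import numpy as np

def made_up_reflectance(Dm, Rskin, Rinf=0.6):
    """Reflectance of skin under a foundation layer of relative thickness Dm,
    via the Kubelka-Munk relations (Eqs. (7)-(8)); R_S is omitted and S_m is
    absorbed into Dm for this illustration."""
    a = 0.5 * (1.0 / Rinf + Rinf)      # a_m from R_inf (Eq. (8))
    b = np.sqrt(a * a - 1.0)           # b_m = (a_m^2 - 1)^(1/2)
    arg = Dm * b
    Rm = 1.0 / (a + b / np.tanh(arg))                  # layer reflectance
    Tm = b / (a * np.sinh(arg) + b * np.cosh(arg))     # layer transmittance
    # Layer over the skin substrate (Eq. (7) without the specular term).
    return Rm + Tm**2 * Rskin / (1.0 - Rm * Rskin)
```

As the layer thickens, the computed reflectance converges from the bare-skin value toward R_∞, matching the saturating curves of Fig. 9(a).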

Figure 9 compares this theory and the calibration curve. Figure 9(a) shows the relationship between the relative thickness of the cosmetic foundation layer and the estimated spectral reflectance based on the Kubelka–Munk theory. This computation was performed under assumptions based on actual measured values: R_∞ = 0.6, R_g = 0, R_0 = R_∞/100, and D_0 = 1. The spectral reflectance of bare skin R_skin was varied from 0.2 to 0.35 in steps of 0.01. Figure 9(b) shows the relationship between the estimated amount of liquid foundation and the discriminant score, computed from the calibration curve with baseline correction. The discriminant scores of bare skin were varied from −7 to 2 in steps of 0.5. The estimated reflectance and the discriminant score showed quite similar dependence on the amount of cosmetic foundation (Fig. 9), indicating that the developed optical filter was optimized so that the change in spectral reflectance caused by the application of cosmetic foundation is reflected in the filtered RGB outputs. According to Eq. (8) and Fig. 9(a), the spectral reflectance of made-up skin is affected by both the spectral reflectance of bare skin and the optical characteristics of the cosmetic materials. Therefore, the output of our filter should also depend on the type of cosmetic foundation, even though we have already confirmed detection for 30 products and estimation for three products. In future work, the calibration curve will be improved to include the optical characteristics of cosmetic materials.


Fig. 9 Comparison of the Kubelka–Munk theory and the calibration curve. (a) Relationship between the relative thickness of the cosmetic foundation layer and the estimated spectral reflectance based on the Kubelka–Munk theory. This computation was performed under assumptions based on actual measured values: R_∞ = 0.6, R_g = 0, R_0 = R_∞/100, and D_0 = 1. The spectral reflectance of bare skin R_skin was varied from 0.2 to 0.35 in steps of 0.01. (b) Relationship between the estimated amount of liquid foundation and the discriminant score, computed from the calibration curve with baseline correction. Discriminant scores of bare skin were varied from −7 to 2 in steps of 0.5.


5. Conclusion

This study established a calibration curve for estimating the amount of applied cosmetic foundation and applied it to visualize the foundation distribution under realistic facial conditions.

We designed the spectral transmittance of the filter to enhance the spectral difference between two predefined spectral data sets. The designed theoretical spectral transmittance was optically realized as a multilayer thin-film filter. The color distributions of the RGB images taken with a digital camera equipped with the filter showed a distinct enhancement of the spectral differences between the two sets of spectra, which were invisible to the human eye.

In addition, there were strong positive correlations between the amount of applied foundation and the discriminant score. Therefore, we constructed the two calibration curves described by Eqs. (5) and (6); Eq. (6) includes a baseline correction obtained from a bare-skin image. Both equations showed high estimation accuracy, as described in Table 1, and the visualization results in Fig. 5 showed high estimation accuracy for both methods. The calibration curve with baseline correction (Eq. (6)) achieved better estimation for smaller amounts, while the coefficient of determination of the calibration curve without baseline correction (Eq. (5)) was higher.

The visualization of the maps of the amounts of cosmetic foundation on realistically made-up skin displayed uneven foundation distributions (Fig. 6), even though the foundation had been applied to make the skin color uniform over the whole face. This result suggests that the optical filter and the calibration curves are useful for evaluating various make-up conditions, such as the deterioration of foundation over time.

Finally, we applied the optical filter to visualize the differences in the foundation distributions between the two cleansing methods. Visualized distributions of foundations clearly indicated the areas on the face where foundation remained even after cleansing.

Our method is not restricted to the field of cosmetics but could be applied to other applications such as food inspection, medical imaging, and other targets that require nondestructive inspection. The proposed system could be implemented as part of an online measuring system in a compact and inexpensive way.

Acknowledgments

The authors appreciate the assistance of Itoh Optical Industrial Co. Ltd. for providing the optical filter and thereby helping in conducting the experiments. This work was also supported in part by the Global COE Program “Frontiers of Intelligent Sensing” from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan.

References and links

1. K. Nishino, M. Nakamura, M. Matsumoto, O. Tanno, and S. Nakauchi, “Optical filter for highlighting spectral features Part I: design and development of the filter for discrimination of human skin with and without an application of cosmetic foundation,” Opt. Express 19(7), 6020–6030 (2011). [CrossRef]   [PubMed]  

2. E. Angelopoulou, The Reflectance Spectrum of Human Skin, Technical Report MS-CIS-99-29 (GRASP Laboratory, Department of Computer and Information Science, University of Pennsylvania, 1999).

3. N. Tsumura, M. Kawabuchi, H. Haneishi, and Y. Miyake, “Mapping pigmentation in human skin by multi-visible-spectral imaging by inverse optical scattering technique,” J. Imag. Sci. Tech. 45(5), 444–450 (2001).

4. I. V. Meglinski and S. J. Matcher, “Quantitative assessment of skin layers absorption and skin reflectance spectra simulation in the visible and near-infrared spectral regions,” Physiol. Meas. 23(4), 741–753 (2002). [CrossRef]   [PubMed]  

5. G. N. Stamatas, B. Z. Zmudzka, N. Kollias, and J. Z. Beer, “Non-invasive measurements of skin pigmentation in situ,” Pigment Cell Res. 17(6), 619–626 (2004). [CrossRef]  

6. M. Moncrieff, S. Cotton, E. Claridge, and P. Hall, “Spectrophotometric intracutaneous analysis: a new technique for imaging pigmented skin lesions,” Br. J. Dermatol. 146(3), 448–457 (2002). [CrossRef]   [PubMed]  

7. J. K. Wagner, C. Jovel, H. L. Norton, E. J. Parra, and M. D. Shriver, “Comparing quantitative measures of erythema, pigmentation and skin response using reflectometry,” Pigment Cell Res. 15(5), 379–384 (2002). [CrossRef]   [PubMed]  

8. G. N. Stamatas, M. Southall, and N. Kollias, “In vivo monitoring of cutaneous edema using spectral imaging in the visible and near infrared,” J. Invest. Dermatol. 126(8), 1753–1760 (2006). [CrossRef]   [PubMed]  

9. G. N. Stamatas and N. Kollias, “In vivo documentation of cutaneous inflammation using spectral imaging,” J. Biomed. Opt. 12(5), 051603 (2007). [CrossRef]   [PubMed]  

10. M. Doi, R. Ohtsuki, and S. Tominaga, “Spectral estimation of made-up skin color under various conditions,” Proc. SPIE 6062, 606204 (2006).

11. S. J. Preece and E. Claridge, “Spectral filter optimization for the recovery of parameters which describe human skin,” IEEE Trans. Pattern Anal. Mach. Intell. 26(7), 913–922 (2004). [CrossRef]  

12. A. Goshtasby, “Image registration by approximation methods,” Image Vis. Comput. 6(4), 255–261 (1988). [CrossRef]

13. P. Kubelka, “New contributions to the optics of intensely light-scattering materials. Part I,” J. Opt. Soc. Am. 38(5), 448–457 (1948). [CrossRef]   [PubMed]  
