Thin-plate spline (TPS) interpolation is used to interpolate the chromaticity of the incident scene illumination across a training set of images. Given an image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method estimates the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation required.
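As a rough illustration of the interpolation machinery (a minimal sketch, not the authors' implementation), the following fits a 2D thin-plate spline with the usual kernel U(r) = r^2 log r by solving the standard TPS linear system. In the paper the interpolation runs over thumbnail-derived inputs and outputs illumination chromaticities; here the inputs are generic 2D points and the output a single scalar, purely for illustration.

```python
import numpy as np

def tps_kernel(r):
    # Thin-plate spline radial kernel U(r) = r^2 log r, with U(0) defined as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r ** 2 * np.log(r)
    return np.nan_to_num(u)

def tps_fit(points, values, reg=0.0):
    # Solve [K + reg*I, P; P^T, 0] [w; a] = [values; 0] for the kernel
    # weights w and the affine coefficients a, where P = [1, x, y].
    points = np.asarray(points, float)
    values = np.asarray(values, float)
    n = len(points)
    r = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    K = tps_kernel(r) + reg * np.eye(n)
    P = np.hstack([np.ones((n, 1)), points])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.concatenate([values, np.zeros(3)])
    coef = np.linalg.solve(A, b)
    return coef[:n], coef[n:]

def tps_eval(points, w, a, query):
    # Evaluate the fitted spline at the query points.
    points = np.asarray(points, float)
    query = np.asarray(query, float)
    r = np.linalg.norm(query[:, None, :] - points[None, :, :], axis=-1)
    return tps_kernel(r) @ w + a[0] + query @ a[1:]
```

With `reg = 0` the spline interpolates the training values exactly; a small positive `reg` trades exactness for smoothness, which can help when the training chromaticities are noisy.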
Table 1
Performance Comparison of MaxRGB [18] and MaxRGB with Preprocessing of the Images () by Bicubic Resizing to [19], GrayWorld (GW) [20], 3D Support Vector Regression (SVR) [4], Shades of Gray (SoG) [21], Edge-Based [22], Gray Surface Identification (GSI) [23], Color by Correlation (CbyC) [24], Gamut Mapping [25], N-Jet [26], and TPS

Method | Ang. Median | Ang. RMS | Ang. Max | Dist. () Median | Dist. B75 | Dist. RMS | Dist. W25 | Dist. Max
CbyC I (bright pixels only), Hordley and Finlayson (Table 7 in [24]) (310 out of 321) | 3.2 | 10 | - | - | - | - | - | -
CbyC, Gijsenij et al. (Table 3 in [26]) (290 out of 321) | 6.8 | - | - | - | - | - | - | -
Gamut Mapping, Gijsenij et al. (Table 3 in [26]) (290 out of 321) | 3.1 | - | - | - | - | - | - | -
N-jet (complete 1-jet) (Table 3 in [26]) (290 out of 321) | 2.1 | - | - | - | - | - | - | -
SVR (3D) | 2.2 | 8.0 | 25 | 3.1 | - | 3.5 | - | 11
TPS (leave-one-out) | 0.6 | 2.1 | 10 | 0.6 | 0.5 | 1.6 | 2.7 | 7.2
TPS (threefold cross-validation) | 1.2 | 3.6 | 23 | 1.0 | 0.8 | 2.9 | 4.9 | 14
Results involve real-data training and testing using the 321 SONY images. For TPS the images are converted to thumbnails; the other algorithms use the original images. Errors for Color by Correlation and Support Vector Regression are based on leave-one-out cross-validation. Errors for TPS are based on both leave-one-out cross-validation and threefold cross-validation and are reported in terms of both the angular and distance error measures. See Subsection 4B for error labels.
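For reference, the angular and distance error measures used in these tables are the ones standard in the color-constancy literature; the exact labels are defined in Subsection 4B. The following is a hedged sketch of the usual definitions (assumed here, not quoted from the paper):

```python
import numpy as np

def angular_error_deg(est_rgb, true_rgb):
    # Angle, in degrees, between the estimated and true illumination RGB vectors.
    est = np.asarray(est_rgb, float)
    true = np.asarray(true_rgb, float)
    cos = est @ true / (np.linalg.norm(est) * np.linalg.norm(true))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def chromaticity_distance(est_rgb, true_rgb):
    # Euclidean distance between (r, g) chromaticities,
    # where r = R/(R+G+B) and g = G/(R+G+B).
    est = np.asarray(est_rgb, float)
    true = np.asarray(true_rgb, float)
    e = est[:2] / est.sum()
    t = true[:2] / true.sum()
    return float(np.hypot(*(e - t)))
```

The median, RMS, best-75% (B75), worst-25% (W25), and maximum statistics in the tables are then taken over the per-image errors.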
Table 2
Comparison of Several Different Algorithms in Table 1 via the Wilcoxon Signed-Rank Test with Rejection of the Null Hypothesis at the 5% Significance Levela
MaxRGB
GW
SoG
Edge1
Edge2
CbyC
SVR
GSI
TPS
MaxRGB
=
−
=
−
−
−
−
−
−
−
+
=
+
+
+
+
−
=
+
−
GW
=
−
=
−
−
−
−
−
−
−
SoG
+
−
+
=
=
+
=
=
=
−
Edge1
+
−
+
=
=
+
=
=
=
−
Edge2
+
−
+
−
−
=
−
=
=
−
CbyC
+
+
+
=
=
+
=
=
=
−
SVR
+
=
+
=
=
=
=
=
=
−
GSI
+
−
+
=
=
=
=
=
=
−
TPS
+
+
+
+
+
+
+
+
+
=
Here CbyC represents “Color by Correlation using bright pixels only” by Hordley and Finlayson [24]. A “+” means the algorithm listed in the corresponding row is better than the one in the corresponding column, a “−” indicates the opposite, and an “=” indicates that the performance of the respective algorithms is statistically equivalent. (Since TPS by leave-one-out and threefold cross-validation ranked the same in the Wilcoxon test, they are not listed separately in the table.)
Table 3
Performance Comparison of MaxRGB [18] and MaxRGB with Preprocessing (Labeled ) of the Images by Bicubic Resizing to [19], GrayWorld [20], Shades of Gray [21], Edge-Based [22], Color by Correlation [24], Gamut Mapping [25], N-Jet [26], Bayes-GT (with Threefold Cross-Validation) [17], and TPS

Method | Ang. Median | Ang. RMS | Ang. Max | Dist. () Median | Dist. B75 | Dist. RMS | Dist. W25 | Dist. Max
Do-nothing | 4.8 | 13 | 37 | 3.1 | 3.0 | 9.3 | 30 | 30
GW | 3.7 | 6.2 | 25 | 2.6 | 2.1 | 4.5 | 7.5 | 20
MaxRGB | 9.1 | 13 | 51 | 7.8 | 5.9 | 12 | 19 | 55
MaxRGB (resized) | 3.4 | 8.0 | 33 | 2.5 | 2.0 | 6.5 | 12 | 30
SoG () | 4.5 | 8.7 | 36 | 3.5 | 2.9 | 7.5 | 13 | 37
Edge-based (first order) | 3.8 | 9.4 | 38 | 3.0 | 2.7 | 8.0 | 14 | 40
Edge-based (second order) | 4.4 | 10 | 47 | 3.5 | 3.2 | 8.7 | 15 | 50
Gamut Mapping (full data set training) | 4.3 | 8.4 | 32 | 3.2 | 2.6 | 6.8 | 12 | 24
N-jet (complete one-jet) | 4.2 | 8.2 | 32 | 3.2 | 2.6 | 6.5 | 11 | 24
N-jet (complete two-jet) | 4.1 | 8.0 | 32 | 3.1 | 2.5 | 6.3 | 11 | 24
Bayes-GT | 5.8 | 8.9 | 34 | 5.0 | 3.8 | 7.3 | 12 | 28
TPS (threefold) | 2.8 | 4.6 | 17 | 2.1 | 1.6 | 3.4 | 5.6 | 16
TPS (leave-one-out) | 2.4 | 4.1 | 19 | 1.7 | 1.4 | 3.1 | 5.0 | 13

Here MaxRGB (resized) denotes MaxRGB applied after the bicubic-resizing preprocessing [19].
Results involve real-data training and testing using the 568 Canon images in the ColorChecker data set [17]. For TPS the images are converted to thumbnails; the other algorithms use the original images. The Gamut Mapping and one-jet methods occasionally fail to provide an illumination estimate (four times for Gamut Mapping, once for one-jet); in such cases, the illumination estimate is assigned as white with chromaticity . The TPS errors are based on leave-one-out cross-validation and threefold cross-validation. (Gamut Mapping was trained using the entire data set.)
Table 4
Comparison of Several Different Algorithms in Table 3 via the Wilcoxon Signed-Rank Test with Rejection of the Null Hypothesis at the 5% Significance Level

 | MaxRGB | MaxRGB (resized) | GW | SoG | Edge1 | Edge2 | Gamut | One-Jet | Two-Jet | Bayes-GT | TPS
MaxRGB | =
MaxRGB (resized) | + | =
GW | + | − | =
SoG | + | − | − | =
Edge1 | + | − | − | + | =
Edge2 | + | − | − | + | − | =
Gamut Mapping | + | − | − | + | − | + | =
One-jet | + | − | − | + | − | + | + | =
Two-jet | + | − | − | + | − | + | + | + | =
Bayes-GT | + | − | − | − | − | = | − | − | − | =
TPS | + | + | + | + | + | + | + | + | + | + | =

Only the lower triangle is shown; entries above the diagonal follow by antisymmetry. Here MaxRGB (resized) denotes MaxRGB applied after the bicubic-resizing preprocessing [19].
A “+” means the algorithm listed in the corresponding row is better than the one in the corresponding column, a “−” indicates the opposite, and an “=” indicates that the performance of the respective algorithms is statistically equivalent. (Since TPS evaluated by either leave-one-out or threefold cross-validation ranks the same in the Wilcoxon test, they are not listed separately in the table.)
Table 5
Performance Comparison of MaxRGB [18], MaxRGB with Preprocessing (Labeled ) of the Images by Bicubic Resizing to [19], GrayWorld [20], Shades of Gray [21], Edge-Based [22], and TPS

Method | Ang. Median | Ang. RMS | Ang. Max | Dist. () Median | Dist. B75 | Dist. RMS | Dist. W25 | Dist. Max
Do-nothing | 17 | 19 | 37 | 16 | 13 | 15 | 16 | 17
MaxRGB | 7.4 | 13 | 51 | 5.6 | 4.9 | 11 | 18 | 55
MaxRGB (resized) | 3.3 | 8.2 | 33 | 2.4 | 1.9 | 6.2 | 11 | 30
GW | 4.2 | 9.5 | 36 | 3.3 | 2.6 | 7.4 | 12 | 33
SoG | 4.3 | 8.8 | 36 | 3.1 | 2.6 | 7.1 | 12 | 37
Edge1 | 3.8 | 9.1 | 38 | 2.8 | 2.5 | 7.3 | 13 | 40
Edge2 | 4.4 | 9.7 | 47 | 3.3 | 3.0 | 7.9 | 14 | 50
TPS (threefold) | 2.4 | 4.7 | 33 | 1.7 | 1.4 | 3.3 | 5.6 | 20
TPS (leave-one-out) | 1.8 | 4.0 | 25 | 1.4 | 1.1 | 2.9 | 4.8 | 15

Here MaxRGB (resized) denotes MaxRGB applied after the bicubic-resizing preprocessing [19].
Results involve real-data training and testing using the 321 SFU Sony images [16] combined with the 568 Canon images of the ColorChecker data set [17]. For TPS the images are converted to thumbnails; the other algorithms use the original images. The TPS errors are based on leave-one-out cross-validation and threefold cross-validation.
Table 6
Comparison of Several Different Algorithms in Table 5 via the Wilcoxon Signed-Rank Test with Rejection of the Null Hypothesis at the 5% Significance Level

 | MaxRGB | MaxRGB (resized) | GW | SoG | Edge1 | Edge2 | TPS
MaxRGB | =
MaxRGB (resized) | + | =
GW | + | − | =
SoG | + | − | − | =
Edge1 | + | − | + | = | =
Edge2 | + | − | − | − | − | =
TPS | + | + | + | + | + | + | =

Only the lower triangle is shown; entries above the diagonal follow by antisymmetry. Here MaxRGB (resized) denotes MaxRGB applied after the bicubic-resizing preprocessing [19].
A “+” means the algorithm listed in the corresponding row is better than the one in the corresponding column, a “−” indicates the opposite, and an “=” indicates that the performance of the respective algorithms is statistically equivalent. (Since TPS by leave-one-out and threefold cross-validation ranked the same, they are not listed separately in the table.)
Table 7
Performance Comparison of MaxRGB [18] and MaxRGB with Preprocessing of the Images by Bicubic Resizing to [19], GrayWorld [20], 3D Support Vector Regression [4], Shades of Gray [21], Edge-Based [22], Gray Surface Identification [23], Color by Correlation [24], N-Jet [26], and TPS

Table 8
Median Angular Error of TPS Illumination Estimates Taken over 4080 Images along with Training and Test Times as a Function of the Size of the Reduced Training Set
Number of Clusters k | Median Angular Error | Training Time (s) | Average Test Time per Image (ms)
21 | 40 | 0.06 | 1.60
45 | 40 | 0.09 | 2.27
61 | 39 | 0.11 | 2.34
82 | 43 | 0.15 | 3.06
111 | 52 | 0.29 | 3.62
131 | 45 | 0.37 | 4.52
157 | 10 | 0.51 | 5.12
192 | 8.0 | 0.76 | 5.99
208 | 7.5 | 0.92 | 6.03
242 | 6.4 | 1.24 | 6.94
255 | 6.4 | 1.33 | 6.77
286 | 6.1 | 1.67 | 7.26
318 | 6.1 | 2.07 | 7.04
345 | 6.0 | 2.43 | 7.42
364 | 5.7 | 2.73 | 7.69
379 | 5.7 | 2.94 | 7.84
() |  | 283.18 | 67.24
The last row () means that all the images in subset A were used for training. The angular error is plotted in Fig. 3.
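The reduced training sets indexed by k above come from clustering the full training data. As a rough sketch of the clustering objective only (a plain batch k-medians with L1 assignments; the paper uses an incremental k-medians variant, which is not reproduced here):

```python
import numpy as np

def k_medians(X, k, iters=50, seed=0):
    # Plain batch k-medians: assign points to the nearest center under the
    # L1 distance, then move each center to the coordinate-wise median of
    # its assigned points. Repeat until the centers stop moving.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=-1)
        labels = d.argmin(axis=1)
        new = centers.copy()
        for j in range(k):
            members = X[labels == j]
            if len(members):
                new[j] = np.median(members, axis=0)
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

A pruned spline is then built from the cluster representatives instead of the full training set, which is what drives the training- and test-time reductions shown in the table.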