Deep neural networks (DNNs) have been widely used for illuminant estimation, which commonly requires great effort to collect sensor-specific data. In this paper, we propose a dual-mapping strategy, the DMCC method. It requires only the white points captured by the training and testing sensors under a D65 condition: these are used to reconstruct the image and illuminant data, and the reconstructed images are then mapped into sparse features. These features, together with the reconstructed illuminants, are used to train a lightweight multi-layer perceptron (MLP) model, which can directly estimate the illuminant for the testing sensor. The proposed model was found to have performance comparable to other state-of-the-art methods on the three available datasets. Moreover, its smaller number of parameters, faster speed, and freedom from data collection using the testing sensor make it ready for practical deployment. This paper is an extension of Yue and Wei [Color and Imaging Conference (2023)], with more detailed results, analyses, and discussions.
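To make the pipeline described above concrete, the following Python sketch illustrates the dual-mapping idea under stated assumptions; it is not the authors' implementation. The diagonal (von Kries-style) mapping built from the two sensors' D65 white points, the log-chromaticity histogram used as the sparse feature, the MLP size, and all function names are illustrative assumptions.

```python
# Minimal sketch of the dual-mapping idea described in the abstract.
# ASSUMPTIONS: the diagonal mapping is a von Kries-style per-channel
# scaling derived from the two sensors' D65 white points; the sparse
# feature is a simple log-chromaticity histogram; the MLP layout is
# hypothetical, not the authors' design.
import numpy as np
from sklearn.neural_network import MLPRegressor

def diagonal_map(rgb, w_train, w_test):
    """Map RGB values from the training sensor to the testing sensor
    using the ratio of the two sensors' D65 white points."""
    gains = np.asarray(w_test) / np.asarray(w_train)  # per-channel gains
    return rgb * gains

def sparse_features(image, bins=8):
    """Stand-in feature mapping: a normalized 2D log-chromaticity
    histogram over the image pixels (image has shape (N_pixels, 3))."""
    eps = 1e-6
    chroma = np.log((image[:, [0, 2]] + eps) / (image[:, [1]] + eps))
    hist, _, _ = np.histogram2d(chroma[:, 0], chroma[:, 1], bins=bins)
    return (hist / hist.sum()).ravel()

def train_dmcc(images, illuminants, w_train, w_test):
    """Hypothetical training loop: map training images and illuminants
    into the testing sensor's space, then fit a lightweight MLP."""
    X = np.asarray([sparse_features(diagonal_map(img, w_train, w_test))
                    for img in images])
    y = np.asarray([diagonal_map(ill, w_train, w_test)
                    for ill in illuminants])
    y /= np.linalg.norm(y, axis=1, keepdims=True)  # unit RGB vectors
    mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000)
    mlp.fit(X, y)
    return mlp
```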
Table 1.
Summary of the Performance of Various Methods, in Terms of Angular Errors, on the INTEL-TAU Dataset, Together with the Processing Time and Parameter Sizeᵃ
ᵃThe results of the Gray-World, White-Patch, Shades-of-Gray, and Cheng-PCA were extracted from [22], and those of the Quasi-Unsupervised, SIIE, FFCC, C5, and MDLCC were extracted from [9,11]. The proposed method is highlighted in yellow.
Table 2.
Summary of the Performance of Various Methods, in Terms of Angular Errors, on the Cube [25] and NUS-8 [24] Datasetsᵃ
ᵃFor the NUS-8 dataset, the mean values of the eight sensors are reported here, with the detailed information shown in Supplement 1. The proposed method is highlighted in yellow.
Table 3.
Comparisons of Various Methods with and without the Diagonal Mapping and the Feature Extraction, in Terms of the Angular Errorᵇ
ᵃRoughly equivalent to PCC [6].
ᵇThe model was trained on Canon 5DSR but tested on Sony IMX135 to evaluate sensor-invariant performance.
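Since all three tables report angular errors, it may help to recall the metric: the angular error between an estimated illuminant and the ground truth is the angle between the two RGB vectors, a standard measure in color constancy benchmarks. A minimal sketch (the helper name is illustrative):

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angle in degrees between the estimated and ground-truth
    illuminant RGB vectors."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```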