Transmission filters forming orthogonal basis for spectral imaging purposes


Abstract

Hyperspectral imaging has become a common technique in many different applications, enabling accurate identification of materials based on their optical properties; however, it requires complex and expensive technical implementation. A less expensive way to produce spectral data, spectral estimation, suffers from complex mathematics and limited accuracy. We introduce a novel, to the best of our knowledge, method where spectral reflectance curves can be reconstructed from the measured camera responses without complex mathematics. We have simulated the method with seven non-negative broadband transmission filters extracted from Munsell color data through principal component analysis and used sensitivity and noise levels characteristic of the Retiga 4000DC 12-bit monochrome camera. The method is sensitive to noise but produces sufficient reproduction accuracy even with six filters.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Spectral imaging is a rapidly evolving field of imaging science, with a variety of techniques and equipment prototypes, and it has a vast number of applications, including remote sensing, industrial quality inspection, forensics, and medical imaging [1–6].

Spectral imaging is a method for gathering information on the optical properties of an object as a function of the wavelength of light. Whereas grayscale and RGB color images contain only one and three spectral bands, respectively, spectral images typically contain tens, hundreds, or even thousands of spectral bands. There are basically three types of methods for acquiring spectral images. The first is line-scan (pushbroom) spectral imaging, where the camera records all spectra from a single line on the object and the spectral image is formed by scanning the measurement line over the target of interest. This method is complex and rather slow, but it provides a wide spectral range and high resolution [6–9]. The second is wavelength-scan (staring) spectral imaging, which uses a monochrome camera and optical narrow bandpass filters (e.g., interference filters in a filter wheel, a liquid crystal tunable filter, or an acousto-optic tunable filter) or narrow-band-emitting light sources (e.g., light-emitting diodes) [10–12]. This method offers average speed, wavelength range, and spectral resolution [6–8]. The third is single-shot (snapshot) spectral imaging, which is typically based on multi-sensor camera systems or on coded apertures placed in front of a monochrome camera. This method has low spectral resolution and no wavelength selectability, but it offers high throughput, a wide spectral range, and high speed, while being moderately complex and expensive [6,8,13,14].

Spectral images can also be estimated from trichromatic measurements (e.g., RGB values or tristimulus values) [15], but such methods are usually mathematically rather challenging and require a predetermined training dataset; thus, their accuracy is limited to spectral data with statistical properties similar to those of the training dataset.

A filter-wheel-based spectral imaging system consists of a digital grayscale camera and, usually, a set of narrow spectral bandpass filters; the number of filters corresponds to the number of spectral channels. Spectral reflectance data are reconstructed from the camera responses by using pseudo-inverse estimation, Wiener estimation, or principal component analysis. In this Letter, we introduce a method that has rather similar hardware requirements to the filter-wheel system described above, but in which spectral reflectance data can be estimated from the camera responses with a simple mathematical procedure. It suffices that the discrete representations of the broadband filter transmittances become mutually orthogonal when their mean values are subtracted. Next, we briefly introduce the required mathematical notation.

Generally, discrete representations of a spectral power distribution ${\textbf{s}}(\lambda)$ can be reconstructed by projecting a spectrum onto a set of orthonormal basis vectors ${{\textbf{v}}_i}$ and summing the vector projections together:

$$\hat{\textbf s} = \sum\limits_{i = 1}^M {{\textbf{v}}_i}{\textbf{v}}_i^T{\textbf{s}},$$
where $\hat{\textbf s}(\lambda)$ denotes an estimate for ${\textbf{s}}(\lambda)$. The accuracy of the reconstruction depends on the number of vectors, $M$, used for the reconstruction. All vectors in the equations are column vectors. If we use a normal monochrome CCD or CMOS camera sensor to take a picture of an object, we can measure one color component or channel by using a colored transmission filter in front of the camera lens. Transmission filter ${\textbf{t}}(\lambda)$ modulates the spectral power distribution of light and affects the camera response $r$. In the case of a set of several transmission filters ${{\textbf{t}}_i}$, corresponding camera responses would be
$${r_i} = ({\textbf{s}} \otimes {{\textbf{t}}_i}{)^T}{\textbf{c}},$$
where $\otimes$ denotes the element-wise product between two column vectors of the same size, and ${\textbf{c}}$ is spectral sensitivity of the camera sensor. In this method, the first filter should have 100% transmittance, i.e., the first picture is taken without a filter, so the corresponding unit vector is
$${{\textbf{v}}_1} = \frac{{{{\textbf{t}}_1}}}{{|{{\textbf{t}}_1}|}},$$
where $| \cdot |$ denotes the Euclidean norm of a vector. All other vectors ${{\textbf{v}}_i}$ have to be orthogonal to the first vector; therefore, the mean ${m_i}$ (the mean of the elements of the vector ${{\textbf{t}}_i}$) is subtracted from each filter transmittance ${{\textbf{t}}_i}$, and the results are normalized to unit vectors
$${{\textbf{v}}_i} = \frac{{{{\textbf{t}}_i} - {m_i}{{\textbf{t}}_1}}}{{|{{\textbf{t}}_i} - {m_i}{{\textbf{t}}_1}|}},\quad i \ge 2.$$
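As a concrete illustration of Eqs. (3) and (4), a minimal NumPy sketch is given below. This is not the authors' code; the function and variable names are ours, and the matrix T is assumed to hold one discretely sampled filter transmittance per row, with the all-pass filter first.

```python
import numpy as np

def basis_from_filters(T):
    """Orthonormal basis vectors v_i from filter transmittances per Eqs. (3)-(4).
    T holds one transmittance per row, sampled on a common wavelength grid;
    the first row must be the all-pass filter (100% transmittance)."""
    t1 = T[0]
    V = np.empty_like(T, dtype=float)
    V[0] = t1 / np.linalg.norm(t1)            # Eq. (3)
    for i in range(1, T.shape[0]):
        m_i = T[i].mean()                     # mean of the elements of t_i
        d = T[i] - m_i * t1                   # subtract the mean level
        V[i] = d / np.linalg.norm(d)          # Eq. (4)
    # If the filter set meets the orthogonality requirement stated above,
    # V @ V.T is approximately the identity matrix.
    return V
```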

Next we can define modified camera responses ${\tilde r_i}$ using a normalization similar to that in Eqs. (3) and (4):

$${\tilde r_1} = \frac{{{r_1}}}{{|{{\textbf{t}}_1}|}}$$
and
$${\tilde r_i} = \frac{{{r_i} - {m_i}{r_1}}}{{|{{\textbf{t}}_i} - {m_i}{{\textbf{t}}_1}|}},\quad i \ge 2.$$

Now, by substituting Eq. (2) into Eqs. (5) and (6), we can write the modified camera responses in a general form:

$${\tilde r_i} = ({\textbf{s}} \otimes {{\textbf{v}}_i}{)^T}{\textbf{c}} = ({\textbf{s}} \otimes {\textbf{c}}{)^T}{{\textbf{v}}_i}.$$

If we assume that the camera’s spectral sensitivity is unity at all wavelengths, we see that

$$\hat{\textbf s} = \sum\limits_{i = 1}^M {{\textbf{v}}_i}{\tilde r_i} = \sum\limits_{i = 1}^M {{\textbf{v}}_i}{\textbf{v}}_i^T{\textbf{s}}.$$

However, in practice, the quantum efficiency of the sensor is always below unity, and the sensitivity is typically highest in the middle of the wavelength range and decreases toward both ends; thus, we cannot reconstruct or estimate the spectral power distribution ${\textbf{s}}(\lambda)$ directly, but only its product with the sensor sensitivity, ${\textbf{s}} \otimes {\textbf{c}}$.
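A hedged end-to-end sketch of Eqs. (2) and (5)–(8) is shown below, using the basis matrix V produced by the earlier sketch; all spectra are treated as vectors sampled on the same wavelength grid, and the names are again our own.

```python
def camera_responses(s, T, c):
    """Eq. (2): r_i = (s ⊗ t_i)^T c for every filter (rows of T)."""
    return (T * s) @ c                        # element-wise product, then dot with c

def modified_responses(r, T):
    """Eqs. (5) and (6): normalize the responses like the basis vectors."""
    t1 = T[0]
    r = np.asarray(r, dtype=float)
    r_mod = np.empty_like(r)
    r_mod[0] = r[0] / np.linalg.norm(t1)
    for i in range(1, len(r)):
        m_i = T[i].mean()
        r_mod[i] = (r[i] - m_i * r[0]) / np.linalg.norm(T[i] - m_i * t1)
    return r_mod

def reconstruct(r_mod, V):
    """Eq. (8): estimate of s ⊗ c (equal to s only for a flat sensitivity c)."""
    return V.T @ r_mod
```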

Spectral imaging systems are usually built to measure spectral reflectance, because it contains all the essential color information of the object. The spectral reflectance factor [16] ${\boldsymbol \rho}(\lambda)$ can be estimated by dividing, element by element, the measurement taken from the sample surface, ${{\textbf{s}}_{\textbf{s}}}$, by the measurement taken from a white reference sample, ${{\textbf{s}}_{\textbf{w}}}$, which diffusely reflects all the light falling on it:

$${\boldsymbol \rho} = \frac{{{{\textbf{s}}_{\textbf{s}}}}}{{{{\textbf{s}}_{\textbf{w}}}}} \approx \frac{{\widehat {{{\textbf{s}}_{\textbf{s}}} \otimes {\textbf{c}}}}}{{\widehat {{{\textbf{s}}_{\textbf{w}}} \otimes {\textbf{c}}}}}.$$

The quality of the spectral reconstruction can be assessed by the length of the difference vector or by the cosine of the angle between the original spectrum and its reconstruction. The latter measure was suggested by Romero et al. [17] and is called the goodness-of-fit coefficient (GFC); its values range from zero to one, and 0.99 and 0.999 are the limits for acceptable and good spectral matches, respectively.
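A minimal sketch of Eq. (9) and of the GFC metric, reusing reconstruct() from the sketch above; the small eps guard against division by zero is our addition.

```python
def reflectance_estimate(r_mod_sample, r_mod_white, V, eps=1e-12):
    """Eq. (9): element-wise ratio of the reconstructed sample and white-reference
    measurements; the unknown sensor sensitivity c cancels in the division."""
    sc_sample = reconstruct(r_mod_sample, V)
    sc_white = reconstruct(r_mod_white, V)
    return sc_sample / np.maximum(sc_white, eps)

def gfc(rho_true, rho_est):
    """Goodness-of-fit coefficient [17]: cosine of the angle between the spectra."""
    return abs(rho_true @ rho_est) / (np.linalg.norm(rho_true) * np.linalg.norm(rho_est))
```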

Accurate filter fabrication would require considerable time and effort. To avoid this constraint, the experimental part of this study was carried out with numerical simulations. Since the spatial non-uniformity and nonlinear response of the camera detector can easily be determined and corrected, these factors are not considered in the simulations. The effect of noise can also be reduced by taking several pictures through the same filter and averaging the measurements.

To make the simulation more realistic, we used the sensitivity and noise levels determined experimentally for the Retiga 4000DC monochrome camera [18]. The noise behavior was determined experimentally by using an integrating sphere equipped with a halogen light source and taking 100 pictures at exposure times of 7, 14, and 20 ms. We found that both the dark current and the multiplicative noise can be modeled accurately with Gaussian noise. The illumination used in the simulations was the CIE standard illuminant D65.
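A noise model of this kind can be imitated with a sketch such as the one below, applied to the simulated responses before they are fed to modified_responses(); the standard deviations, full-scale level, and default frame count are illustrative placeholders, not the measured Retiga 4000DC values.

```python
rng = np.random.default_rng(0)

def noisy_response(r, n_frames=200, bit_depth=12, full_scale=1.0,
                   sigma_dark=1e-3, sigma_mult=5e-3):
    """Average of n_frames simulated exposures with additive (dark) and
    multiplicative Gaussian noise followed by quantization. The sigma values
    are placeholders, not the camera's measured noise figures."""
    levels = 2 ** bit_depth - 1
    shape = (n_frames,) + np.shape(r)
    frames = r * (1 + sigma_mult * rng.standard_normal(shape)) \
             + sigma_dark * full_scale * rng.standard_normal(shape)
    frames = np.round(np.clip(frames / full_scale, 0, 1) * levels) / levels * full_scale
    return frames.mean(axis=0)
```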

It is obvious that a system based on a few broadband filters cannot accurately measure or reproduce spectra with sharp, narrow peaks (e.g., fluorescent lights); however, it is well known that natural and most man-made objects have smooth reflectance curves [19]. Since Cohen’s first study [20], the characteristic vectors of the Munsell spectral data set have been used in various studies to test the spectral reconstruction accuracy of reflectance spectra of color samples. We therefore based our set of transmittance filters on the first six eigenvectors (EVs) solved from the Munsell data set [21], which consists of 1269 reflectance spectra of the color chips of the Munsell Book of Color Matte Finish Collection. The method allows the level and scale of each filter's transmission to be chosen freely; thus, the negative transmission values typical of orthogonal EVs can be avoided entirely. The transmittance of each filter, except the first one, was allowed to vary between 15% and 90%.
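One possible way to obtain such ideal filter shapes is sketched below: take the leading principal components of the Munsell reflectance matrix and rescale each into the 15–90% transmittance window, with the all-pass filter prepended. Mean-centering the data before the decomposition is our assumption, and the resulting set should still be checked against the orthogonality requirement (e.g., with the Gram-matrix test in the first sketch), since an affine rescaling alone does not guarantee it exactly.

```python
def ideal_filters_from_pca(R, n_ev=6, t_lo=0.15, t_hi=0.90):
    """Candidate ideal filter transmittances from the first n_ev principal
    components of a reflectance data set R (one spectrum per row, e.g., the
    1269 x N Munsell matrix). Mean-centering before the SVD is an assumption."""
    R0 = R - R.mean(axis=0)
    _, _, Vt = np.linalg.svd(R0, full_matrices=False)   # rows of Vt = eigenvectors
    filters = [np.ones(R.shape[1])]                      # first "filter": 100% transmittance
    for ev in Vt[:n_ev]:
        lo, hi = ev.min(), ev.max()
        filters.append(t_lo + (ev - lo) * (t_hi - t_lo) / (hi - lo))
    return np.vstack(filters)
```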

These filters comprise a multilayer stack of alternating layers of titanium dioxide (${{\rm TiO}_2}$) and aluminium oxide (${{\rm Al}_2}{{\rm O}_3}$) deposited on a quartz (${{\rm SiO}_2}$) substrate and illuminated from the substrate side. The refractive indices of the atomic-layer-deposited ${{\rm TiO}_2}$ and ${{\rm Al}_2}{{\rm O}_3}$ (thermally grown at ${{120}^ \circ}{\rm C}$) were based on ellipsometry measurements (VASE variable angle spectroscopic ellipsometer, J.A. Woollam) using the Tauc–Lorentz and Sellmeier models. The filter transmittance was simulated with the Fourier modal method for a thin-film stack at $s$ (TE) and $p$ (TM) polarizations. Multiple reflections in the thick ${{\rm SiO}_2}$ substrate were taken into account, and the mean of the transmittances at $s$ and $p$ polarizations was used as the filter transmission. The layer thicknesses were optimized with the Nelder–Mead simplex algorithm [22] in the MATLAB (The MathWorks, Inc., USA) environment by minimizing the difference between the ideal spectra and the transmittance spectra of the multilayer stacks. The number of layers was kept as low as possible for fabrication reasons, although the performance could be better with a larger number of layers. The optimized layer thicknesses of the stacks are listed in Table 1, and the simulated transmittance spectra of the optimized as well as the ideal filters are shown in Figs. 1–3.
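The authors simulate the stacks with the Fourier modal method, including both polarizations and the multiple reflections of the thick substrate; the sketch below is a much simpler stand-in that uses the standard characteristic-matrix (transfer-matrix) calculation at normal incidence, ignores the substrate back surface, and fits the layer thicknesses to a target curve with SciPy's Nelder–Mead implementation. All names and tolerances are illustrative.

```python
from scipy.optimize import minimize

def stack_transmittance(d, n_layers, n_sub, wl, n_inc=1.0):
    """Normal-incidence transmittance of a thin-film stack via the
    characteristic-matrix method (substrate back surface ignored).
    d: layer thicknesses [nm]; n_layers: index of each layer at each wavelength
    (len(d) x len(wl)); n_sub: substrate index per wavelength; wl: wavelengths [nm]."""
    T = np.empty(len(wl))
    for k, lam in enumerate(wl):
        M = np.eye(2, dtype=complex)
        for dj, nj in zip(d, n_layers[:, k]):
            delta = 2 * np.pi * nj * dj / lam
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / nj],
                              [1j * nj * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_sub[k]])
        T[k] = 4 * n_inc * np.real(n_sub[k]) / abs(n_inc * B + C) ** 2
    return T

def fit_thicknesses(d0, target, n_layers, n_sub, wl):
    """Fit layer thicknesses to the ideal (target) transmittance with
    the Nelder-Mead simplex algorithm [22]."""
    cost = lambda d: np.sum((stack_transmittance(np.abs(d), n_layers, n_sub, wl) - target) ** 2)
    res = minimize(cost, d0, method='Nelder-Mead',
                   options={'maxiter': 20000, 'xatol': 0.01, 'fatol': 1e-10})
    return np.abs(res.x)
```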

Table 1. Parameters of the Filters

Fig. 1. Transmittance spectra of the first, second, and third filters. The dashed curves represent the ideal filter shapes and the solid curves the optimized filter stacks.

Fig. 2. Transmittance spectra of the fourth and fifth filters. The dashed curves represent the ideal filter shapes and the solid curves the optimized filter stacks.

Fig. 3. Transmittance spectra of the sixth and seventh filters. The dashed curves represent the ideal filter shapes and the solid curves the optimized filter stacks.

Table 2. CIEDE2000 Color Differences Between Original and Reconstructed Munsell Colors for Eigenvectors (EVs) and Different Numbers of Filters and Angles of Incidence

The accuracy of the method was evaluated by simulating the measurement and reconstruction of the reflectance spectra of the 1269 Munsell color chips. Table 2 shows the average CIEDE2000 color differences [23], which describe the average accuracy of the measurements. Six filters are sufficient to produce an average color difference clearly below one unit, the limit that roughly corresponds to the threshold of human color vision. Table 3 shows the average GFC values. According to these results, four filters are, on average, enough to produce acceptable spectral reconstruction accuracy. Figure 4 shows an example spectrum representing the average reconstruction accuracy.

Table 3. GFC for the Reconstructed Munsell Spectra with Eigenvectors (EVs) and Different Numbers of Filters and Angles of Incidence

Fig. 4. Example of the Munsell reflectance spectrum No. 889 reconstructed with seven filters at normal incidence. The corresponding color difference is 0.56, and the GFC is 0.9952.

The method appears to be sensitive to noise, because even the quantization errors of the camera response affect the results. If the bit depth of the camera sensor is lower than 10 bits, the differences increase significantly; the bit depth should preferably be 12 bits or higher. The same applies to the photon shot noise of the camera sensor; thus, the results shown in Tables 2 and 3 are based on the averages of 200 exposures through each filter. The amount of light transmitted through each broadband filter is relatively high, so rather short exposure times can be used. In the simulation, we used a noise level corresponding to an exposure time of 20 ms; thus, taking 200 pictures for each filter would require only a few seconds. The effect of the incidence angle of the light was tested at 5° and 10° to the normal of the substrate, and in all tested cases it caused only insignificant changes to the values shown in Tables 2 and 3.

Atomic layer deposition provides a highly accurate method for fabricating thin-film structures. In theory, it should be accurate down to a single atomic layer, but in practice, deviations from the target thickness can be expected to be within a ${\pm}1\%$ tolerance. In the uncertainty estimation, we varied the refractive index parameters and film thicknesses by adding normally distributed random values whose standard deviations were equal to the estimated standard measurement uncertainty (refractive indices) or the estimated fabrication accuracy (1% standard error in film thickness). For each filter, 1000 variations were simulated, and the standard deviation of the transmittance at each wavelength was obtained. The tolerances corresponding to these standard deviations are given in parentheses in Tables 2 and 3.
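A hedged sketch of such a Monte Carlo tolerance run, reusing stack_transmittance() from the sketch above; here the refractive indices are perturbed directly with a placeholder standard deviation instead of varying the Tauc–Lorentz and Sellmeier model parameters as in the paper.

```python
def tolerance_std(d, n_layers, n_sub, wl, sigma_d=0.01, sigma_n=0.002, n_runs=1000):
    """Per-wavelength standard deviation of the transmittance when the layer
    thicknesses receive 1% relative Gaussian errors and the indices a small
    additive Gaussian error (sigma_n is a placeholder, not a measured value)."""
    rng = np.random.default_rng(1)
    d = np.asarray(d, dtype=float)
    runs = np.empty((n_runs, len(wl)))
    for j in range(n_runs):
        d_j = d * (1 + sigma_d * rng.standard_normal(len(d)))
        n_j = n_layers + sigma_n * rng.standard_normal(n_layers.shape)
        runs[j] = stack_transmittance(d_j, n_j, n_sub, wl)
    return runs.std(axis=0)
```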

In conclusion, we have introduced a method for taking spectral images by using specially designed non-negative broadband transmission filters. The results show that both the colors and the spectral curves can be reproduced from the camera responses at an acceptable level with just six filters. The method is applicable up to a 20° viewing angle. In this study, we have emphasized color measurements, but the introduced method could be used in any other application where the measurement of spectral features is important, including applications beyond the visible range of the electromagnetic spectrum. In addition to the multilayer interference filters used in this simulation, the method could also be used with tunable-filter spectral imaging systems.

Funding

Academy of Finland (320166).

Acknowledgment

The authors thank Pasi Vahimaa for his comments on the Letter and Laure Fauch for measuring the sensitivity curve of the Retiga camera.

Disclosures

The authors declare no conflicts of interest.

REFERENCES

1. Y. Garini, I. T. Young, and G. McNamara, Cytometry Part A 69, 735 (2006).

2. G. A. Shaw and H. K. Burke, Lincoln Lab. J. 14, 3 (2003).

3. F. Zapata, M. López-López, J. M. Amigo, and C. Garcia-Ruiz, Forensic Sci. Int. 282, 80 (2018).

4. G. Lu and B. Fei, J. Biomed. Opt. 19, 010901 (2014).

5. P. Fält, J. Hiltunen, M. Hauta-Kasari, I. Sorri, V. Kalesnykiene, J. Pietilä, and H. Uusitalo, J. Imaging Sci. Technol. 55, 30509 (2011).

6. Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, J. Biomed. Opt. 18, 100901 (2013).

7. B. Boldrini, W. Kessler, K. Rebner, and R. W. Kessler, J. Near Infrared Spectrosc. 20, 438 (2012).

8. D. H. Foster and K. Amano, J. Opt. Soc. Am. A 36, 606 (2019).

9. J. Behmann, K. Acebron, D. Emin, S. Bennertz, S. Matsubara, S. Thomas, D. Bohnenkamp, M. T. Kuska, J. Jussila, H. Salo, A.-K. Mahlein, and U. Rascher, Sensors 18, 441 (2018).

10. H. Morris, C. Hoyt, and P. Treado, Appl. Spectrosc. 48, 857 (1994).

11. N. L. Everdell, I. B. Styles, A. Calcagni, J. Gibson, J. Hebden, and E. Claridge, Rev. Sci. Instrum. 81, 093706 (2010).

12. A. Spreinat, G. Selvaggio, L. Erpenbeck, and S. Kruss, J. Biophoton. 13, e201960080 (2020).

13. N. Hagen and M. W. Kudenov, Opt. Eng. 52, 090901 (2013).

14. Q. Cui, J. Park, R. T. Smith, and L. Gao, Opt. Lett. 45, 772 (2020).

15. V. Heikkinen, IEEE Trans. Image Process. 27, 3358 (2018).

16. G. Wyszecki and W. S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae (Wiley, 1982).

17. J. Romero, A. García-Beltrán, and J. Hernández-Andrés, J. Opt. Soc. Am. A 14, 1007 (1997).

18. "QImaging Retiga 4000DC datasheet," https://www.biovis.com/resources/ccd/Retiga4000dc.pdf.

19. M. Flinkman, H. Laamanen, P. Vahimaa, and M. Hauta-Kasari, J. Opt. Soc. Am. A 29, 2566 (2012).

20. J. Cohen, Psychon. Sci. 1, 369 (1964).

21. University of Eastern Finland, "Computational spectral imaging spectral database," https://www.uef.fi/web/spectral/-spectral-database.

22. J. A. Nelder and R. Mead, Comput. J. 7, 308 (1965).

23. "Improvement to Industrial Colour-Difference Evaluation," CIE Technical Report CIE 142:2001 (CIE, 2001).
