Abstract

In optical imaging, optical filters can be used to enhance the visibility of features-of-interest and thus aid in visualization. Optical filter design based on hyperspectral imaging employs various statistical methods to find an optimal design. Some methods, like principal component analysis, produce vectors that can be interpreted as filters with a partially negative transmission spectrum. These filters, however, are not directly implementable optically. Earlier implementations of partially negative filters have concentrated on spectral reconstruction. Here we show a novel method for implementing partially negative optical filters for contrast-enhancement purposes in imaging applications. We describe the method and its requirements, and show its feasibility with color chart and dental imaging examples. The results are promising: visual comparison of a computational color chart render and an optical measurement shows matching images, and visual inspection of the dental images shows increased contrast.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

As is well known, digital color photography relies on cameras that record the intensity of three distinct color bands — red (R), green (G), and blue (B) — and save this information in files generally described as RGB-images. Multispectral imaging extends this system by recording several different wavelength bands that need not be only in the visible range, but can also extend, e.g., to ultra-violet and infrared regions [1,2]. When the imaging system is further extended to image contiguous narrow wavelength bands over a wavelength range of interest, each pixel in the spectral image contains a contiguous spectrum and the system is known as a hyperspectral imaging system [3].

Hyperspectral imaging records the full spectral data (e.g., spectral reflectance) of the imaged sample, which can then be used for various tasks, such as improving the performance of image segmentation and classification tasks compared to the usual RGB and grayscale images (e.g., cancerous cell detection [4] and remote sensing [5]). Individual spectral band images of a hyperspectral image may also show improved visibility of features-of-interest, which allows optimal wavelength band selection for designing feature-specific imaging systems (e.g., narrow-band imaging [6,7] and burn depth assessment [8]). Furthermore, numerical optimization and data analysis methods can be used on spectral images to find combinations of different spectral bands that improve the visibility of certain features, allowing the design of optimal spectral transmittances for optical filters and optimal spectral power distributions for light sources [9–11].

While hyperspectral imaging can record the full spectral data of a sample, it is not always the best imaging approach for practical imaging applications. Typical downsides of spectral imaging are relatively long image acquisition times, challenges in producing proper illumination over a broad wavelength range, and the shuffling between reference samples required for image correction and spectral reflectance calculations. Furthermore, the acquired spectral image cubes contain a large amount of spectral data that only a specialist can make sense of. Data analysis performed on hyperspectral images often aims to produce simple, cost-effective imaging setups or computationally effective spectral data processing methods, for example, for segmenting or classifying features-of-interest from the image scene. Some data analysis methods, such as principal component analysis (PCA) [12], can be used to create useful visualizations of the spectral image data.

PCA finds the directions of maximum variance of the spectral data (principal component vectors). The individual spectra in a spectral image – which are, in practice, discrete $N$-dimensional vectors – can be projected onto a principal component vector. These projections (inner products) are scalar values that can then be reorganized to form a grayscale inner product image of the image scene. These inner product images can show strong contrast between various features in the image. Therefore, the ability to produce these inner product images using a simple imaging system would be very beneficial in many imaging applications.

The principal component vectors can also be interpreted as transmittance spectra of optical filters (or as emission spectra of light sources). However, the challenge in implementing these filters/illuminations in practice is that all principal components (except the first one, which is roughly the mean spectrum of the data) contain both positive and negative parts in the spectrum. Naturally, the spectral transmittances of real-world optical filters (and the emission spectra of light sources) can only have positive values. Therefore, any filter or illumination that contains both positive and negative parts in the spectrum (i.e., “partially negative filters”) cannot be implemented optically using a single optical element, such as an interference filter or a fixed-state light source.

In earlier works, Hayasaka et al. [13] solved the partial negativity of illumination spectra by applying multiplicative constants and bias vectors to partially negative principal component vectors, forcing them to have all-positive values. Another solution to the challenge of implementing partially negative optical filters is splitting the partially negative filter spectrum into its positive and negative parts, which can then be implemented as two separate optical filters or light sources [14]. These two separate filters (or illuminations) are used to acquire “positive” and “negative” grayscale images, which are then subtracted from each other to produce a final inner product image. These methods have been used almost exclusively for reflectance spectrum reconstruction by implementing a few of the most significant principal component vectors as illuminations [15–19].

In this paper, we propose to use the method of implementing the positive and negative parts of the filter separately for applying the effect of partially negative optical filters for imaging purposes. Partially negative optical filters, produced by PCA, for example, have the potential to offer stronger contrast enhancement for features-of-interest than fully positive filters (e.g. in [9]) due to their larger intensity range (positive and negative ranges vs. positive only). While the simple computational approach in [10] could offer useful partially negative filters, the proposed weights would require very narrow spikes for the positive and negative parts, rendering optical implementation infeasible. Filters derived from lower-order principal components have relatively simple spectral shapes and are easily implemented. To the best of our knowledge, this is the first time this method has been used for image contrast enhancement and for improving the visualization of specific features-of-interest in an imaging application. We used a commercially available spectrally tunable light source, which uses 10 different light emitting diode (LED) types in the visible range of light (400–700 nm), to produce the positive and negative parts of the filter spectrum. We present proof-of-concept imaging with partially negative filters and a color chart, and a practical application of the method to oral and dental imaging.

2. Methods

In sections 2.1 and 2.2, we present the theory behind the method for implementing partially negative filter spectra in an imaging system. In section 2.3, an imaging setup and proof-of-concept imaging examples are presented. Details of the proof-of-concept imaging are presented in section 2.4. In section 2.5, the background of the oral and dental spectral images used in the practical case example is presented.

2.1 Partially Negative Filters – Theory

A reflectance spectral image $R(x,y,\lambda )$ is a data cube of size $X \times Y \times N$, which can be calculated from image acquisition data using a standard flat-field correction method [3]:

$$R(x, y, \lambda_i) = \frac{s_{\mathrm{sample}}(x, y, \lambda_i) - s_{\mathrm{dark}}(x, y, \lambda_i)} {s_{\mathrm{ref}}(x, y, \lambda_i) - s_{\mathrm{dark}}(x, y, \lambda_i)} \times R_{\mathrm{ref}}(\lambda_i),$$
where $x=1,\dots ,X$ and $y=1,\dots ,Y$ are spatial coordinates, $\lambda _i$ is the wavelength band ($i=1,\dots ,N$), $s_{\mathrm {sample}}(x,y,\lambda _i)$ is the spectral band image measured from the sample, $s_{\mathrm {dark}}(x,y,\lambda _i)$ is the dark-current spectral band image, $s_{\mathrm {ref}}(x,y,\lambda _i)$ is the spectral band image measured from the reference sample, and $R_{\mathrm {ref}}(\lambda _i)$ is the reflectance spectrum of the reference sample.
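As a minimal sketch of the flat-field correction of Eq. (1) (NumPy, with hypothetical array names), the operation is applied per pixel and per band, with the reference reflectance spectrum broadcast along the band axis:

```python
import numpy as np

def flat_field_correct(s_sample, s_dark, s_ref, r_ref):
    """Flat-field correction of Eq. (1): reflectance cube from raw band images.

    s_sample, s_dark, s_ref: arrays of shape (X, Y, N)
    r_ref: reference reflectance spectrum, shape (N,)
    """
    num = s_sample - s_dark
    den = s_ref - s_dark
    # Broadcasting multiplies r_ref along the last (band) axis.
    return num / den * r_ref

# Tiny synthetic example: a 1x1-pixel, 3-band "image".
s_sample = np.array([[[40.0, 60.0, 80.0]]])
s_dark = np.array([[[10.0, 10.0, 10.0]]])
s_ref = np.array([[[110.0, 110.0, 110.0]]])
r_ref = np.array([0.5, 0.5, 0.5])
R = flat_field_correct(s_sample, s_dark, s_ref, r_ref)
```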

An inner product image $I_{\mathrm {ip}}(x,y)$ is computed from a spectral image $R(x,y,\lambda )$ and an $N$-vector $\hat {e}(\lambda )$ as an inner product

$$I_{\mathrm{ip}}(x, y) = \sum_{i=1}^N R(x,y,\lambda_i) \hat{e}(\lambda_i) = \langle R(x, y, \lambda), \hat{e}(\lambda) \rangle.$$
Let us assume that the vector $\hat {e}(\lambda )$ contains both positive and negative parts (an example vector is shown in Fig. 1(a)). The positive and negative parts can be separated into their own vectors, $\hat {e}^+(\lambda )$ and $\hat {e}^-(\lambda )$, so that $\hat {e}(\lambda ) = \hat {e}^+(\lambda ) + \hat {e}^-(\lambda )$. Now, Eq. (2) can be written as
$$I_{\mathrm{ip}}(x,y) = \langle R(x,y,\lambda), \hat{e}^+(\lambda) \rangle + \langle R(x,y,\lambda), \hat{e}^-(\lambda) \rangle$$
$$= \langle R(x,y,\lambda), \hat{e}^+(\lambda) \rangle - \langle R(x,y,\lambda), | \hat{e}^-(\lambda) | \rangle$$
$$= I_{\mathrm{ip}}^+(x,y)-I_{\mathrm{ip}}^-(x,y),$$
where $I_{\mathrm {ip}}^+(x,y)$ and $I_{\mathrm {ip}}^-(x,y)$ are inner product images for all-positive vectors $\hat {e}^+(\lambda )$ and $|\hat {e}^-(\lambda )|$ (see Figs. 1(b) and (c)).
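The decomposition in Eqs. (3)-(5) can be verified numerically; the following sketch (NumPy, with synthetic data) splits a partially negative vector into its positive and negative parts and checks that the difference of the two all-positive inner product images reproduces the direct inner product image:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.random((4, 4, 8))       # synthetic reflectance cube, N = 8 bands
e = rng.standard_normal(8)      # partially negative filter vector

e_pos = np.clip(e, 0, None)     # e^+: positive part
e_neg = np.clip(e, None, 0)     # e^-: negative part, so e = e^+ + e^-

I_ip = R @ e                    # direct inner product image
I_pos = R @ e_pos               # image through the all-positive e^+
I_neg = R @ np.abs(e_neg)       # image through the all-positive |e^-|
# As in Eqs. (3)-(5), I_ip equals I_pos - I_neg.
```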

Fig. 1. (a) An example spectral transmission spectrum of a partially negative filter vector $\hat {e}(\lambda )$ in arbitrary units (a.u.), (b) its positive part $\hat {e}^+(\lambda )$, and (c) the absolute values $|\hat {e}^-(\lambda )|$ of the negative part.


Let us assume an ideal imaging setup (Fig. 2(a)) with a (non-spectrally-tunable) light source that has an emission spectrum $L(\lambda )$, and an optical filter with a spectral transmittance $F(\lambda )$ in front of a monochrome camera with a spectral sensitivity $S(\lambda )$. Here, $S(\lambda ) = T_{\mathrm {OBJ}}(\lambda ) S_{\mathrm {CAM}}(\lambda )$, where $T_{\mathrm {OBJ}}(\lambda )$ is the spectral transmittance of the objective lens used, and $S_{\mathrm {CAM}}(\lambda )$ is the spectral sensitivity of the camera sensor. A grayscale image $I_{\mathrm {cam}}(x,y)$ captured by the monochrome camera of a sample that has a spectral reflectance $R(x,y,\lambda )$ has a mathematical presentation

$$I_{\mathrm{cam}}(x,y) = \int_\lambda L(\lambda) R(x,y,\lambda) F(\lambda) S(\lambda) \mathrm{d}\lambda + \eta(x,y)$$
$$= \langle R(x,y,\lambda), L(\lambda) F(\lambda) S(\lambda) \rangle + \eta(x,y),$$
where $\eta (x,y)$ is noise.

Fig. 2. Imaging setup schematics: (a) an illuminant with spectrum $L(\lambda )$ illuminates a sample with reflectance spectrum $R(x,y,\lambda )$. The sample reflects light to the optical filter with a transmittance spectrum $F(\lambda )$. A monochrome camera with a sensitivity spectrum $S(\lambda )$ captures an image of the filtered light reflecting from the sample. (b) The illuminant and the filter are combined into a spectrally tunable light source with an illumination spectrum $X(\lambda )$.


Since we want to use an optical imaging system to capture an image that reproduces the effect a partially negative vector has on the spectral image of the sample-of-interest, let us choose $I_{\mathrm {ip}} = I_{\mathrm {cam}} - \eta (x,y)$. From Eqs. (2) and (7) we get $\langle R(x,y,\lambda ), \hat {e}(\lambda ) \rangle = \langle R(x,y,\lambda ), L(\lambda ) F(\lambda ) S(\lambda ) \rangle$. If we further assume that the light source used is spectrally tunable, the optical filter $F(\lambda )$ can be combined with the fixed-state light source $L(\lambda )$ (Fig. 2(b)), and we can denote $X(\lambda )=L(\lambda )F(\lambda )$. Eqs. (2) and (7) give a clear connection between the partially negative filter $\hat {e}(\lambda )$ and the optical properties of the imaging system:

$$\hat{e}(\lambda) = L(\lambda) F(\lambda) S(\lambda) = X(\lambda) S(\lambda) ,$$
which can again be separated into positive and negative parts:
$$\hat{e}^+(\lambda) = X^+(\lambda) S(\lambda)$$
$$| \hat{e}^-(\lambda) | = X^-(\lambda) S(\lambda) .$$
For an arbitrary partially negative filter $\hat {e}(\lambda )$, a matching optical filter $X(\lambda )$ consists of positive and negative parts
$$X^+(\lambda) = \frac{\hat{e}^+(\lambda)}{S(\lambda)}$$
$$X^-(\lambda) = \frac{|\hat{e}^-(\lambda)|}{S(\lambda)}.$$
The spectra of both $X^+(\lambda )$ and $X^-(\lambda )$ are fully positive and can be implemented optically, e.g., by using a spectrally tunable light source. Thus, explicitly, the inner product image $I_{\mathrm {cam}}(x,y)$ in a real imaging system follows from Eqs. (4), (9) and (10):
$$I_{\mathrm{cam}}(x,y) = \langle R(x,y,\lambda), X^+(\lambda) S(\lambda) \rangle - \langle R(x,y,\lambda), X^-(\lambda) S(\lambda) \rangle$$
$$= I_{\mathrm{cam}}^+(x,y) - I_{\mathrm{cam}}^-(x,y) ,$$
where $I_{\mathrm {cam}}^+(x,y)$ and $I_{\mathrm {cam}}^-(x,y)$ are grayscale images captured using the optical filters (or illuminants) $X^+(\lambda )$ and $X^-(\lambda )$. The final inner product image (the effect of the partially negative filter) is then
$$I_{\mathrm{ip}}(x,y) = I_{\mathrm{cam}}(x,y)-\eta(x,y),$$
where noise image $\eta (x,y)$ can be acquired by taking a dark image with the same imaging setup as images $I^+_{\mathrm {cam}}(x,y)$ and $I^-_{\mathrm {cam}}(x,y)$.

Implementing a partially negative filter spectrum in practice with this method requires a spectrally tunable light source in order to produce the filter parts $X^+(\lambda )$ and $X^-(\lambda )$ as illuminations. Also, a priori knowledge of the spectral sensitivity of the monochrome camera, $S(\lambda )$, is needed for Eqs. (11) and (12).
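The conversion of Eqs. (11)-(12) from a partially negative filter to a pair of all-positive illumination spectra can be sketched as follows (NumPy; the function name and the epsilon guard for near-zero sensitivity bands are our own assumptions, not from the paper):

```python
import numpy as np

def filter_to_illuminations(e, S, eps=1e-12):
    """Split a partially negative filter e(lambda) into all-positive
    illumination spectra X^+(lambda), X^-(lambda) per Eqs. (11)-(12),
    dividing by the effective camera sensitivity S(lambda)."""
    S = np.maximum(S, eps)                    # guard bands where S ~ 0
    X_pos = np.clip(e, 0, None) / S           # Eq. (11)
    X_neg = np.abs(np.clip(e, None, 0)) / S   # Eq. (12)
    return X_pos, X_neg

e = np.array([0.2, -0.5, 0.3, -0.1])   # toy 4-band filter vector
S = np.array([0.5, 0.5, 1.0, 0.25])    # toy camera sensitivity
X_pos, X_neg = filter_to_illuminations(e, S)
```

Multiplying the difference of the two illuminations by $S(\lambda)$ recovers the original filter, mirroring Eq. (8).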

2.2 Partially Negative Filters – Visualization Considerations

When a partially negative filter $\hat {e}(\lambda )$ is applied to a reflectance spectral image $R(x,y,\lambda )$, as per Eq. (2), the resulting inner product image $I_{\mathrm {ip}}(x,y)$ may also contain negative values. This is also true for the imaged inner product image $I_{\mathrm {cam}}(x,y)$ in Eq. (14). In the computational case, the filter $\hat {e}(\lambda )$ may lead to a completely arbitrary value range and precision for the inner product image $I_{\mathrm {ip}}(x,y)$, while the imaged $I_{\mathrm {cam}}(x,y)$ is limited by the camera's bit depth. For example, an 8-bit camera limits the range to $-255 \dots 255$, and a 16-bit camera to $-65535 \dots 65535$.

Contemporary display technologies and software tend to utilize RGB color spaces, where the red, green, and blue intensities are each described by an 8-bit unsigned integer, giving the range $0 \dots 255$. Grayscale pixel values are limited to the same range. A partially negative inner product image must therefore be scaled into the display's range before it can be presented on a display.

Inner product images produced by principal component vectors may have low overall contrast, even if the features-of-interest are highly contrasted. In these cases, various contrast enhancement methods can be used to scale the values so that the features-of-interest can be visualized. In our experiments, we scaled the images into the float range $0 \dots 1$, and, for the dental example (in Sec. 2.5), applied contrast-limited adaptive histogram equalization to both the inner product and reference images. Finally, the enhanced images were scaled into the $0 \dots 255$ range and saved as 8-bit grayscale portable network graphics (PNG) files.
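A minimal sketch of this display scaling (NumPy only; the function name is our own, and the optional CLAHE step is indicated as a comment since it assumes scikit-image is available):

```python
import numpy as np

def to_uint8(img):
    """Scale an arbitrary-range (possibly negative) inner product image to
    the float range 0..1, then quantize to the 8-bit 0..255 display range."""
    img = img.astype(float)
    lo, hi = img.min(), img.max()
    img01 = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    # Optional CLAHE step used for the dental images (assumes scikit-image):
    #   img01 = skimage.exposure.equalize_adapthist(img01, clip_limit=0.10)
    return np.round(img01 * 255).astype(np.uint8)

I_ip = np.array([[-2.0, 0.0], [1.0, 2.0]])   # partially negative toy image
I_disp = to_uint8(I_ip)
```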

2.3 Partially Negative Filters – Optical Implementation

In order to test the proposed method for implementing partially negative optical filters in practice, we constructed an optical imaging setup (Fig. 4(a)). The setup consists of a spectrally tunable light source [20] (Tunable Spectral Light Engine, Edmund Optics, Inc., USA), which uses 10 different types of LEDs ($L_{\mathrm {LED},i}(\lambda )$, $i = 1 \dots 10$) in the visible range of light (400–700 nm; see Fig. 3(a)). By adjusting the relative intensities (weights $w_i$) of the LEDs, the shape of the emission spectrum ($L(\lambda )=\sum _{i=1}^{10} w_i L_{\mathrm {LED},i}(\lambda )$) can be changed. A liquid light guide (LLG; 5mm $\times$ 6’, Edmund Optics, Inc., USA) was connected to the spectrally tunable light source, and a ground glass diffuser (DG10-120, Thorlabs, Inc., USA) was placed at the output of the LLG. As a camera, a Photometrics Prime BSI (Teledyne Photometrics, Inc., USA) monochrome camera with spatial resolution of 2048$\times$2048 pixels was used. The camera objective was an Electrophysics 25mm f/1.3 C-mount objective lens (Sofradir EC, Inc., USA).

Fig. 3. Spectral properties of the devices: (a) emission spectra of the LEDs of the spectrally tunable light source (Edmund Optics Tunable Spectral Light Engine), (b) effective spectral transmission of the liquid light guide (Edmund Optics ø5mm liquid light guide) when connected to the light source, (c) spectral transmittance of the ground glass diffuser (Thorlabs DG10-120), (d) spectral transmission of the camera objective (Electrophysics 25mm f/1.3), (e) the spectral sensitivity of the monochrome camera (Photometrics Prime BSI), and (f) the effective sensitivity of the camera.


Fig. 4. Photographs and schematics of the imaging setups: (a) a proof-of-concept imaging setup, and its (b) schematic view, (c) oral and dental imaging setup, and its (d) schematic view. In the schematics, the abbreviations are as follows: STLS: spectrally tunable light source, LLG: liquid light guide, GGD: ground glass diffuser, CCM: ColorChecker Mini, OBJ: camera objective, and CAM: monochrome camera.


As can be seen from Eqs. (11) and (12), a priori knowledge of the spectral sensitivity of the monochrome camera $S(\lambda ) = T_{\mathrm {OBJ}}(\lambda ) S_{\mathrm {CAM}}(\lambda )$ is required for the implementation of positive and negative parts of the filter. In addition, for the proof-of-concept imaging example, it was important to use partially negative filter spectra that the spectrally tunable light source – combined with the LLG – could produce as precisely as possible. For these reasons, we carefully characterized the devices and optical components of the imaging setup. The following properties were measured: the spectral emission of the LEDs of the spectrally tunable light source, the effective light transmission of the LLG (when connected to the spectrally tunable LED light source), the spectral transmittance of the ground glass diffuser, the spectral transmission of the objective lens of the camera, and the spectral sensitivity of the monochrome camera. As noted in Sec. 2.2, we perform scaling operations to enhance image contrast. Consequently, in the context of this paper, the absolute values of the measured spectral quantities do not matter; the spectral shapes of the quantities, however, are essential.

The emission spectra of the LEDs were acquired by illuminating a $30 \times 30$ cm diffuse white reference plate (Edmund Optics, Inc., USA) with a single type of LED at a time at 100% brightness without the LLG, and by measuring the light scattered from the white plate with a Hamamatsu PMA-11 spectrometer (Hamamatsu Photonics, K.K., Japan). The Hamamatsu spectrometer removes the dark noise from the measured spectra automatically. The relative spectral power distributions of the LEDs, $L_{\mathrm {LED},i}(\lambda )$, of the light source are presented in Fig. 3(a). The peak emission wavelengths and full-width at half-maximum values (FWHM) of the LEDs are listed in Table 1.

Table 1. LED numbers [20], peak emission wavelengths, and full-width at half-maximum values of the Edmund Optics Tunable Spectral Light Engine LED spectra.

The effective spectral transmission $T_{\mathrm {LLG}}(\lambda )$ of the LLG was measured by illuminating a 13.5 cm diameter integrating sphere (819D-IS-5.3, Newport Corporation, USA) with the spectrally tunable light source (all LEDs on, 100% brightness) first with and then without the LLG. Light spectrum was measured for both cases by the Hamamatsu spectrometer and the effective transmission of the LLG was calculated as the ratio of the measured spectra. The effective spectral transmission of the light source’s liquid light guide is presented in Fig. 3(b).

The spectral transmission $T_{\mathrm {GGD}}(\lambda )$ of the ground glass diffuser was measured with a PerkinElmer Lambda 1050 spectrophotometer (PerkinElmer, Inc., USA), and is presented in Fig. 3(c). The spectral transmission $T_{\mathrm {OBJ}}(\lambda )$ of the objective lens was measured with the same device. The measurement light beam, however, spreads inside the device after going through the lens, and part of the transmitted light neither enters the spectrophotometer's integrating sphere nor reaches the detector. Therefore, the measured values are not the true spectral transmittance. In the scope of this paper, the measurement is sufficient, since only the spectral shape of the transmittance is needed, and the absolute values can be ignored. The spectral transmission of the camera's objective is presented in Fig. 3(d).

The spectral sensitivity $S_{\mathrm {CAM}}(\lambda )$ was determined by illuminating the integrating sphere with monochromatic light produced by a monochromator, and imaging the inside of the sphere with the monochrome camera at 10 nm wavelength steps. The camera did not have an objective lens attached. The exposure time was set to the longest exposure time without saturation at the brightest wavelength band, and was kept constant throughout the measurements. The relative spectral power distribution inside the integrating sphere was determined with the Hamamatsu spectrometer. The spectral sensitivity of the camera is presented in Fig. 3(e).

Combined with the spectral transmissions of the optical components in the imaging setup of Fig. 4(b), the effective relative sensitivity of the monochrome camera becomes

$$S_{\mathrm{eff}}(\lambda) = \frac{T_{\mathrm{LLG}}(\lambda)}{\max \left\{ T_{\mathrm{LLG}}(\lambda) \right\}} \times \frac{T_{\mathrm{GGD}}(\lambda)}{\max \left\{ T_{\mathrm{GGD}}(\lambda) \right\}} \times \frac{T_{\mathrm{OBJ}}(\lambda)}{\max \left\{ T_{\mathrm{OBJ}}(\lambda) \right\}} \times \frac{S_{\mathrm{CAM}}(\lambda)}{\max \left\{ S_{\mathrm{CAM}}(\lambda) \right\}}.$$
This effective sensitivity curve for the Photometrics Prime BSI camera is shown in Fig. 3(f).
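The peak-normalized product of Eq. (15) can be sketched as follows (NumPy, with synthetic 4-band spectra; only the spectral shapes matter, as noted above):

```python
import numpy as np

def normalized(v):
    """Peak-normalize a spectrum to a maximum of 1."""
    return v / np.max(v)

def effective_sensitivity(T_llg, T_ggd, T_obj, S_cam):
    """Eq. (15): band-wise product of the peak-normalized component spectra."""
    return (normalized(T_llg) * normalized(T_ggd)
            * normalized(T_obj) * normalized(S_cam))

# Synthetic 4-band example spectra (arbitrary units).
T_llg = np.array([2.0, 4.0, 4.0, 2.0])
T_ggd = np.array([1.0, 1.0, 1.0, 1.0])
T_obj = np.array([0.5, 0.5, 0.25, 0.25])
S_cam = np.array([10.0, 20.0, 20.0, 10.0])
S_eff = effective_sensitivity(T_llg, T_ggd, T_obj, S_cam)
```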

A partially negative filter is converted into two optical filters $X^+(\lambda )$ and $X^-(\lambda )$ as per Eqs. (11) and (12). These optical filters are then loaded into the spectrally tunable light source’s control software (provided by the manufacturer). An inner product image $I_{\mathrm {cam}}(x,y)$ is then calculated from two separate image captures, $I_{\mathrm {cam}}^+(x,y)$ and $I_{\mathrm {cam}}^-(x,y)$.

2.4 Proof of Concept Imaging

Knowing the spectral properties of the Edmund Optics Tunable Spectral Light Engine, we created partially negative filters artificially. To ensure that the filters are precisely implementable on the light source, we began by designing the positive and negative parts $L^+(\lambda )$ and $L^-(\lambda )$ of the illumination, henceforth $L^\pm (\lambda )$. This was done by choosing suitable weights $w_i$ for the LEDs in the light source. By designing the partially negative filter $\hat {e}(\lambda )$ backwards, we can be sure that the positive and negative parts do not overlap and that the parts are implementable exactly as intended with the LED illumination spectra $L_{\mathrm {LED},i}(\lambda )$ shown in Fig. 3(a).

Using the designed illuminants $L_n^\pm (\lambda ) = \sum _{i=1}^{10} w_{i,n}^\pm L_{\mathrm {LED},i}(\lambda )$, $n = 1,2,3$, we imaged a sample, a ColorChecker Mini (X-Rite, Inc., USA) color chart, with the Photometrics Prime BSI monochrome camera once for each illuminant pair. The camera was set to its 16-bit HDR mode. Each image pair was captured with optimal exposure times $t_n^\pm$ for the positive and negative illumination images. The variable exposure times must be compensated for in both the images and the spectra:

$$I_n^\pm(x,y) = I_{n,t}^\pm(x,y) / t_n^\pm$$
$$X_n^\pm(\lambda) = X_{n,t}^\pm(\lambda) / t_n^\pm$$
where $I_{n,t}^\pm (x,y)$ are the raw images and $X_{n,t}^\pm (\lambda )$ the illumination spectra since $X_{n,t}^\pm (\lambda ) = L_n^\pm (\lambda )$ as per design. The spectra were further scaled to $0 \dots 1$ range:
$$X_n^\pm(\lambda) = \frac{X_n^\pm(\lambda)} {\max \left\{ \max \left[ X_n^+(\lambda) \right], \max \left[ X_n^-(\lambda) \right] \right\}}.$$
We then combined the positive and negative parts into a partially negative illumination spectrum
$$X_n(\lambda) = X_n^+(\lambda) - X_n^-(\lambda),$$
and the positive and negative images $I_n^\pm (x,y)$ were combined into inner product images $I_n(x,y)$ as per Eq. (14). The partially negative filter $\hat {e}(\lambda )$ is then obtained by removing the effects of the imaging system, Eq. (16), by applying Eq. (8):
$$\hat{e}_n(\lambda) = X_n(\lambda) S_{\mathrm{eff}}(\lambda).$$
The partially negative filters $\hat {e}_n(\lambda )$ can then be applied to a spectral image of the color chart $R_{\mathrm {CCM}}(x,y,\lambda )$ to compute a comparison inner product image
$$I_{\mathrm{CCM,n}} = \langle R_{\mathrm{CCM}}(x,y,\lambda), \hat{e}_n(\lambda) \rangle.$$
As noted in Sec. 2.2, pixel values of the comparison and imaged inner product images differ in range and scale. In order to compare the images, a common area (the vertical separating area between the center tiles) was selected in both comparison and imaged inner product images. The mean pixel values of these areas were used to scale the comparison image so that the mean values were equal.
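The common-area scaling can be sketched as follows (NumPy; the function name and the boolean-mask interface are our own assumptions — the paper specifies only that the mean values over the common area were made equal):

```python
import numpy as np

def scale_to_common_area(comparison, imaged, mask):
    """Scale the computational comparison image so that its mean over a
    common area (boolean mask) equals the imaged image's mean there."""
    factor = imaged[mask].mean() / comparison[mask].mean()
    return comparison * factor

comparison = np.array([[2.0, 4.0], [6.0, 8.0]])   # toy comparison image
imaged = np.array([[1.0, 2.0], [3.0, 4.0]])       # toy imaged result
mask = np.array([[True, True], [False, False]])   # toy common area
scaled = scale_to_common_area(comparison, imaged, mask)
```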

2.5 Oral and Dental Spectral Images

We acquired oral and dental spectral images with a Specim IQ spectral camera (Specim, Spectral Imaging Ltd., Finland). The spectral range of the camera is 400–1000 nm with a spectral resolution of 2.9 nm (204 bands), and the spatial resolution of the captured images is 512$\times$512. The optical setup is similar to the one used in the dental application example, Fig. 4(b), except that for spectral imaging the camera was a Specim IQ instead of the Photometrics Prime BSI, and the light source was a non-spectrally-tunable Thorlabs OSL2 (Thorlabs, Inc., USA) halogen lamp connected to a ring illuminator (FRI61F50, Thorlabs, Inc., USA) instead of a diffused point light.

The oral and dental spectral images were gathered at the Dental School Clinic of the University of Eastern Finland (Kuopio, Finland). Research ethics approval was issued by the Research Ethics Committee of the Hospital District of Northern Savo (Kuopio, Finland). Fully informed written consent was obtained from each participant prior to the imaging.

A reference sample with a known reflectance spectrum ($R_{\mathrm {ref}}(\lambda )$), a ceramic matt diffuse gray tile (“Matt Diff Grey”, Ceram Research, Ltd., UK), was also imaged. The captured reference spectral image cannot be used for reflectance calculation in Eq. (1) directly, because the physical shapes of the reference sample (flat tile) and the targets-of-interest (oral cavity, teeth) are different. Because of this, a mean spectrum $m_{\mathrm {ref, mean}}(\lambda )$ was calculated from a small area taken from the middle of the reference sample in the captured reference spectral image. A new reference spectral image was then created by assigning the calculated mean spectrum $m_{\mathrm {ref,mean}}(\lambda )$ to every pixel $(x,y)$ in the reference spectral image. This new reference spectral image $s_{\mathrm {ref}}(x,y,\lambda )$ was then used in Eq. (1). The Specim IQ camera measures the dark-current spectral image ($s_{\mathrm {dark}}(x,y,\lambda )$) automatically for each spectral image.
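The construction of the uniform reference spectral image can be sketched as follows (NumPy; the function name and the `(slice, slice)` region interface are our own illustrative choices):

```python
import numpy as np

def uniform_reference(s_ref_cube, region):
    """Replace every pixel of a captured reference spectral image with the
    mean spectrum of a small central region (a (slice, slice) tuple),
    producing the spatially uniform reference used in Eq. (1)."""
    m = s_ref_cube[region].mean(axis=(0, 1))   # mean spectrum over x, y
    return np.broadcast_to(m, s_ref_cube.shape).copy()

# Toy 2x2-pixel, 3-band reference capture.
cube = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
s_ref = uniform_reference(cube, (slice(0, 2), slice(0, 2)))
```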

Naturally, the significantly different shapes and illumination conditions of the oral cavity and the reference tile unavoidably cause the calculated reflectance values to be unrealistic: instead of lying in the 0–100% reflectance range, the flat-field corrected values are in an unknown range. However, the flat-field correction does not affect the shape of the reflectance spectra. In the scope of this paper, only the shapes of the reflectance spectra are needed and the absolute reflectance values are irrelevant. The reflectance spectral images were thus scaled so that the minimum and maximum values of the image are 0 and 1. Considering the method proposed in this paper, this scaling has no effect on the results.

Dental experts at the Institute of Dentistry (University of Eastern Finland, Kuopio, Finland) annotated the oral and dental reflectance spectral images with software specifically designed for the task. Currently the image database consists of 54 professionally annotated oral and dental spectral images. We extracted the spectra by class using the annotation masks. The spectra in the classes were then combined into sensible pairs, such as Attrition/Erosion–Enamel, Calculus–Enamel, or Ulcer–Oral mucosa. Then we performed principal component analysis on the class pairs and found the principal component vectors of the spectra.
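The PCA step on a class pair can be sketched as follows (NumPy SVD on mean-centered spectra; the synthetic data and class labels are illustrative only, not from the annotated database):

```python
import numpy as np

def principal_components(spectra):
    """Principal component vectors of a set of spectra (one per row),
    via SVD of the mean-centered data; rows of the result are unit-norm
    principal component vectors, most significant first."""
    X = spectra - spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt

rng = np.random.default_rng(1)
class_a = rng.random((50, 8)) + np.linspace(0, 1, 8)  # e.g. "Calculus"-like
class_b = rng.random((50, 8))                         # e.g. "Enamel"-like
pcs = principal_components(np.vstack([class_a, class_b]))
# Except for the first, such PC vectors generally contain negative entries,
# i.e., they are partially negative filters.
```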

Hyperspectral images exhibiting features-of-interest, such as blood vessels, calculus (Fig. 5(a)), or dental prosthetics (Fig. 5(b)), served as the reference for principal component vector selection: these images were projected onto the basis defined by the principal component vectors, and each inner product image was subjectively evaluated for its visualization performance. The best-performing eigenvectors were collected and implemented as optical filters in an imaging system based on a spectrally tunable light source, presented in Figs. 4(c) and (d).

Fig. 5. Examples of spectral images used: (a) lower teeth show calculus and the oral mucosa blood vessels, and (b) the two upper front center teeth have prosthetic tips.


2.6 Dental Applications - Imaging

We performed principal component analysis on a randomly chosen set of spectra from classes “Initial caries” and “Enamel”, and on another set from classes “Calculus” and “Enamel”. We present three different eigenvectors $\hat {e}_n (\lambda )$, $n=4, 5, 6$, found in these sets that enhance the contrast of 1) blood vessels $\hat {e}_4(\lambda )$, 2) calculus $\hat {e}_5(\lambda )$, and 3) dental prosthetics $\hat {e}_6(\lambda )$. The eigenvectors were converted into optical partially negative filter spectra by applying Eq. (8) and Eq. (16):

$$X_n(\lambda) = \frac{\hat{e}_n(\lambda)}{S_{\mathrm{eff}}(\lambda)}.$$
Particle swarm optimization [21] was then used to find the optimal weights $w_{i,n}^\pm$ for the LED types in the spectrally tunable light source, so that the root-mean-square error between the illumination spectrum $L_n(\lambda ) =\sum _{i=1}^{10} w_{i,n}^\pm L_{\mathrm {LED},i}(\lambda )$ and the targeted partially negative filter spectrum $X_n(\lambda )$ was minimized. Further manual fitting was necessary, however, as the control software of the Tunable Spectral Light Engine automatically scales the loaded input spectrum to a 0–max range, so that at least one LED type is at full power. Consequently, the positive and negative illuminations $L_n^\pm (\lambda )$ were not proportional to each other. We exported the illumination spectra from the control software, and fixed the disproportionality by hand in postprocessing by finding factors $f_n^\pm$ such that the illumination spectra $L_n^\pm (\lambda )$ matched the intended positive and negative parts $X_n^\pm (\lambda )$.
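The weight-fitting step can be sketched as follows. Note the substitution: the paper uses particle swarm optimization to enforce $w_i \geq 0$; this sketch uses plain least squares on a synthetic target that lies exactly in the span of hypothetical LED spectra, in which case the known weights are recovered directly:

```python
import numpy as np

rng = np.random.default_rng(2)
leds = rng.random((10, 31))              # hypothetical (10 LEDs, 31 bands) spectra
w_true = np.array([0.0, 0.5, 0.0, 1.0, 0.0, 0.2, 0.0, 0.0, 0.8, 0.0])
target = w_true @ leds                   # an exactly implementable target spectrum

# Least-squares fit of the LED weights: minimize ||leds.T @ w - target||.
# (The paper minimizes RMSE with PSO; for a target in the LED span the
# two objectives share the same minimizer.)
w_fit, *_ = np.linalg.lstsq(leds.T, target, rcond=None)
rmse = np.sqrt(np.mean((w_fit @ leds - target) ** 2))
```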

We imaged the front teeth of two volunteers using all three partially negative filters (six illuminations $L_n^\pm (\lambda )$, $n = 4,5,6$). In addition, grayscale reference images were captured with a white light produced by the spectrally tunable light source (all LEDs on, 100% brightness). As with the proof-of-concept imaging, the positive and negative images $I_{\mathrm {n,cam}}^\pm (x,y)$ were captured with optimal exposure times $t_n^\pm$. The captured images were thus scaled

$$I_n^\pm(x,y) = I_{\mathrm{n,cam}}^\pm(x,y) \times \frac{f_n^\pm}{t_n^\pm}$$
to compensate for the disproportional illumination and the variable exposure times. Next, the inner product images $I_n(x,y)$ were calculated as per Eq. (14). While the inner product images have improved contrast between features, the overall contrast can be very low. Contrast-limited adaptive histogram equalization, as provided in the scikit-image [22] Python module, was used to improve the overall contrast. To exaggerate the contrast, the clip_limit parameter of the equalize_adapthist method was increased from its default value of 0.01 to 0.10. The same contrast enhancement was also applied to the grayscale comparison images.
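The scaling and subtraction can be sketched as below. This assumes Eq. (14) forms the inner product image as the difference of the scaled positive and negative captures; the frames and parameter values are toy data, not measurements:

```python
import numpy as np

def inner_product_image(img_pos, img_neg, f_pos, f_neg, t_pos, t_neg):
    """Scale the raw positive/negative captures by f/t (per the scaling
    equation above) and subtract to obtain the inner product image."""
    I_pos = np.asarray(img_pos, dtype=float) * (f_pos / t_pos)
    I_neg = np.asarray(img_neg, dtype=float) * (f_neg / t_neg)
    return I_pos - I_neg

# Toy frames: hypothetical raw captures with different exposure times.
rng = np.random.default_rng(1)
raw_pos = rng.uniform(0, 1, (4, 4))
raw_neg = rng.uniform(0, 1, (4, 4))
I = inner_product_image(raw_pos, raw_neg,
                        f_pos=1.2, f_neg=0.9,   # disproportionality factors
                        t_pos=0.02, t_neg=0.03)  # exposure times, s
```

In the study, the result was additionally passed through scikit-image's `exposure.equalize_adapthist` with `clip_limit=0.10`; that call is omitted here to keep the sketch dependency-free.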

3. Results and Discussion

We divided the imaging examples of the implementation of partially negative optical filters into two parts: a) a proof-of-concept for which an artificial partially negative filter was implemented, and b) dental applications that use specific principal component analysis-based filters to improve the contrast of oral and dental features.

3.1 Proof of Concept

We chose three illumination pairs $L_n^\pm (\lambda )$, $n=1,2,3$, so that the positive and negative parts of $L_1^\pm (\lambda )$ have no overlap in their combined spectra, $L_2^\pm (\lambda )$ contains slight overlap, and $L_3^\pm (\lambda )$ has deliberately strong overlap. The weights $w_i$, $i=1,\dots ,10$, used to implement these illuminations fulfill the set criteria, but are otherwise arbitrary. The artificially designed partially negative spectra $X_n(\lambda )$ and the resulting $\hat {e}_n(\lambda )$ are presented in Figs. 6(a)-(c), and the positive and negative parts $X_n^\pm (\lambda )$ are presented in Figs. 6(d)-(f), along with the positive and negative parts $\hat {e}_n^\pm (\lambda )$. In this case, the illumination spectra $L_n^\pm (\lambda )$ are not shown, as they were designed to match the target spectra $X_n^\pm (\lambda )$ precisely.
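The role of overlap in these illumination pairs can be made concrete with a small sketch (toy spectra, not the actual $L_n^\pm(\lambda)$): the exact positive/negative split of a partially negative spectrum never overlaps pointwise, but realizing each part with broadband emitters smears it into the other's support:

```python
import numpy as np

wl = np.linspace(400, 700, 301)
X = np.sin((wl - 400) / 300 * 2 * np.pi)  # toy partially negative spectrum

# Exact split into nonnegative parts: X = X+ - X-, with no pointwise overlap.
X_pos, X_neg = np.maximum(X, 0.0), np.maximum(-X, 0.0)

# Broadband emitters smear each part when it is realized as an illumination;
# a Gaussian blur stands in for that here.
k = np.exp(-0.5 * (np.arange(-30, 31) / 10.0) ** 2)
k /= k.sum()
L_pos = np.convolve(X_pos, k, mode="same")
L_neg = np.convolve(X_neg, k, mode="same")

# Fraction of wavelength bands where both realized parts are nonzero.
overlap = np.mean((L_pos > 1e-3) & (L_neg > 1e-3))
```

The exact split reconstructs $X(\lambda)$ perfectly and has zero overlap; the blurred realization does not, which mirrors the difference between the designed $X_3^\pm(\lambda)$ and an overlapping LED implementation.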

Fig. 6. Illumination emission and filter transmission spectra, in arbitrary units (a.u.): (a) $X_1(\lambda )$ and $\hat {e}_1(\lambda )$, (b) $X_2(\lambda )$ and $\hat {e}_2(\lambda )$, and (c) $X_3(\lambda )$ and $\hat {e}_3(\lambda )$, where illumination spectra are the blue continuous lines and filter spectra the orange dashed lines. Positive and negative illumination spectra (d) $X_1^+(\lambda )$ and $X_1^-(\lambda )$, (e) $X_2^+(\lambda )$ and $X_2^-(\lambda )$, and (f) $X_3^+(\lambda )$ and $X_3^-(\lambda )$, where the positive part spectra are the green continuous lines and negative part spectra the red dashed lines.

The reference inner product image $I_{\mathrm {CCM,1}}(x,y)$ for filter $\hat {e}_1 (\lambda )$ is presented in Fig. 7(a), and the imaged inner product image $I_1(x,y)$ in Fig. 7(b). In subjective visual evaluation, we consider this a near-ideal example of the method, as the inner product images match closely and every color tile is reproduced sufficiently precisely.

Fig. 7. Proof-of-concept inner product images: (a) computational and (b) imaged inner product images for filters $\hat {e}_1(\lambda )$ and $X_1(\lambda )$ in the ideal case, (c) computational and (d) imaged inner product images for filters $\hat {e}_2(\lambda )$ and $X_2(\lambda )$ when the spectra have a slight overlap, and (e) computational and (f) imaged inner product images for filters $\hat {e}_3(\lambda )$ and $X_3(\lambda )$ when the spectra overlap significantly. The Specim IQ images in the left column were cropped from a larger image, and scaling blurs them slightly. The Photometrics Prime BSI images on the right look distorted because the color checker is slightly bent.

With filter 2, the computational reference (Fig. 7(c)) differs slightly from the imaged inner product image (Fig. 7(d)). This may be caused by the slight overlap in the $\hat {e}_2^+(\lambda )$ and $\hat {e}_2^- (\lambda )$ spectra.

Filter 3 was intentionally designed so that the positive and negative parts $\hat {e}_3^+(\lambda )$ and $\hat {e}_3^-(\lambda )$ overlap severely in their spectra. We believe this leads to the filter’s poor performance, evidenced in Figs. 7(e) and (f), which differ noticeably in multiple tiles.

3.2 Dental applications

The principal component vector spectra for 1) blood vessels $\hat {e}_4(\lambda )$, 2) calculus $\hat {e}_5(\lambda )$, and 3) dental prosthetics $\hat {e}_6(\lambda )$, are presented in Figs. 8(a)-(c), along with their optically compensated partially negative spectra $X_n(\lambda )$ and the fitted illumination spectra $L_n(\lambda )$, where $n = 4, 5, 6$. The root-mean-square error (RMSE) between the intended spectrum $X_4(\lambda )$ and the fitted illumination $L_4(\lambda )$ is 0.09; for $X_5(\lambda )$ and $L_5(\lambda )$ it is 0.10, and for $X_6(\lambda )$ and $L_6(\lambda )$ 0.09. The RMSEs of $L_4(\lambda )$ and $L_5(\lambda )$ are 19% and 18% of the min-max range of their targets $X_4(\lambda )$ and $X_5(\lambda )$. These two values seem high, and their graphs in Figs. 8(a,b) show a poor fit at the ends of the spectra. The RMSE of $L_6(\lambda )$ is only 9% of the min-max range of $X_6(\lambda )$, suggesting a better fit than $L_4(\lambda )$ and $L_5(\lambda )$, but this is caused by the spike at 400 nm expanding the min-max range. In all cases, the fit suffers from the lack of LEDs closer to the 400 nm and 700 nm bands. The positive parts $\hat{e}_n^+(\lambda )$, $X_n^+(\lambda )$ and $L_n^+(\lambda )$ are presented in Figs. 8(d)-(f), and the negative parts $\hat{e}_n^-(\lambda )$, $X_n^-(\lambda )$, and $L_n^-(\lambda )$ in Figs. 8(g)-(i).
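The figures quoted above can be computed with small helpers like the following; the sample values are illustrative, not the paper's spectra:

```python
import numpy as np

def rmse(target, fit):
    """Root-mean-square error between two sampled spectra."""
    t, f = np.asarray(target, float), np.asarray(fit, float)
    return np.sqrt(np.mean((t - f) ** 2))

def nrmse_minmax(target, fit):
    """RMSE normalized by the target's min-max range. Note that a narrow
    spike in the target inflates the range and deflates this figure, so a
    low percentage does not always indicate a better fit."""
    t = np.asarray(target, float)
    return rmse(t, fit) / (t.max() - t.min())

# Illustrative use on toy spectra.
target = np.array([0.0, 1.0, 0.5, 0.2])
fit = np.array([0.1, 0.9, 0.5, 0.2])
e = rmse(target, fit)
e_norm = nrmse_minmax(target, fit)
```

This normalization is why the 9% figure for $L_6(\lambda)$ can coexist with a visibly imperfect fit: the 400 nm spike widens the denominator.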

Fig. 8. Oral and dental contrast enhancement filters: (a) blood vessels, (b) calculus, and (c) prosthetics. The positive parts of the filters: (d) blood vessels (e) calculus, and (f) prosthetics. The negative parts of the filters: (g) blood vessels (h) calculus, and (i) prosthetics. The blue dashed lines are eigenvectors $\hat {e}_n(\lambda )$, orange dotted lines present optical filters $X_n(\lambda )$, and the green continuous lines present the fitted LED illumination spectra $L_n(\lambda )$ implementing the optical filters.

The contrast-enhanced blood vessel inner product image and the grayscale reference appear very similar in Figs. 9(a) and (b). The inner product image (Fig. 9(b)), however, gives a more detailed view of the blood vessels in some areas than the reference. The positive and negative illumination spectra (Figs. 8(d) and (g)) have overlapping regions, which may lower the partially negative illumination’s performance. The positive illumination spectrum extends beyond 600 nm, while the target stops there. Conversely, the negative illumination spectrum starts before the targeted 600 nm point. Furthermore, the start and end regions of the implemented spectrum differ drastically from the target spectrum.

Fig. 9. Contrast-enhanced (a) reference image and (b) inner product image for the blood vessel filter, (c) reference image and (d) inner product image for the calculus filter, and (e) reference image and (f) inner product image for the prosthetics filter. Additionally, (g) an inner product image of the dental prosthetics filter applied to a spectral image.

The blood vessel filter found in the “Initial caries”–“Enamel” set might have performed better had a class containing only blood vessels existed in our data sets, making a pair like “Blood vessel”–“Oral mucosa” possible. Even more useful results might have been obtained had the hypothetical “Blood vessel” class been split into “Artery” and “Vein” classes, possibly allowing the creation of filters that separate the two blood vessel types.

The calculus illumination spectrum differs from its target at the beginning and the end of the spectrum (Fig. 8(b)), as in the blood vessel case. Additionally, the center peak of the negative part (Fig. 8(h)) extends beyond the intended region. Still, the contrast-enhanced inner product image (Fig. 9(d)) shows bright white calculus areas on the sides of the front teeth, on an upper left back tooth, and on the upper right back gingival line. The grayscale reference (Fig. 9(c)) shows subtle white shades on some calculus areas, while others are barely noticeable.

Our third example, the dental prosthetics filter, fails completely. The spectrum of this filter (Fig. 8(c)) is too complex for the spectrally tunable light source, and the positive and negative illuminations (Figs. 8(f) and (i)) fail to reproduce some of the peaks in the target spectrum. In the positive illumination, a 50 nm wide peak at 580 nm spreads over a 200 nm band, also covering a small peak at 680 nm. The negative illumination spectrum, on the other hand, is missing a peak at 410 nm. Neither the inner product image (Fig. 9(f)) nor the reference (Fig. 9(e)) shows the dental prosthetics (which, in the reference, is likely a desired property of dental prosthetics).

When the partially negative filter $\hat {e}_6(\lambda )$ is applied computationally to a spectral image of the same volunteer (Fig. 9(g)), the dental prosthetic is clearly visible in the contrast-enhanced image. This filter was the 7th principal component vector of the “Calculus”–“Enamel” spectrum set, which makes the computational inner product image fairly noisy.

3.3 Discussion

The results from the imaging tests are promising. As expected, when the spectrally tunable light source can replicate the given spectral shapes of the positive and negative parts of the filter with high fidelity, the inner product image computed from the spectral image and the inner product image obtained by imaging with a monochrome camera are practically identical. When the spectrally tunable light source fails to reproduce the given spectral shapes, however, the computational and imaged inner product images differ.

During our experiments, we identified a few shortcomings in our approach. As the partially negative vector is implemented as two optical filters, two measurements are required. This proved challenging in the context of oral and dental imaging: involuntary movements between image captures lead to motion artifacts and necessitate image registration. Image registration, however, cannot fully compensate for the movements, and misalignments in the image pair impair the visibility of the features-of-interest. Thus, the method is better suited to cases where the imaged scene is static.

Our intraoral imaging application places three essential requirements on the light source: it must accurately reproduce the planned spectra, the intensity of the emitted light must be relatively high, and the light must stabilize quickly. In theory, the positive and negative parts of the planned spectrum do not overlap. In practice, and depending on the properties of the spectrally tunable light source, this might not hold, and the parts may exhibit significant overlap, which can lead to poor performance in reproducing the effect of the partially negative filter. In our study, we found that the spectrally tunable light source used, the Edmund Optics Tunable Spectral Light Engine, is not suitable for reproducing elaborate, high-order principal component vectors. This is due to the spectral sparsity of the LED types, the relatively broad spectral emission bands of the LEDs, and the non-uniform spectral power distributions among the LEDs (Fig. 3(a)). The LEDs in the blue and red regions have relatively narrow spectral power distributions and small spectral separation between adjacent types, but the 500–650 nm region is covered by only five LED types, some of which have relatively broad spectral distributions. Despite these design choices, the device can reproduce the first few principal components with sufficient accuracy. The brightness of the light source is sufficient for intraoral imaging, allowing short exposure times. The emitted light also stabilizes quickly, which is an important feature in intraoral imaging due to the aforementioned involuntary movements.

4. Conclusions

We propose an imaging method for improved visualization of specific features-of-interest and contrast enhancement, based on partially negative optical filters. The partially negative filters are split into positive and negative parts that can then be implemented optically. We have shown the feasibility and weaknesses of the method with proof-of-concept and practical dental imaging examples. The results are promising and the proposed method can be used in various imaging applications.

Funding

Business Finland (4465/31/2017), European Regional Development Fund

Acknowledgments

The work is part of the Academy of Finland Flagship Programme, Photonics Research and Innovation (PREIN), decision 320166.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: Principles and applications,” Cytometry, Part A 69A(8), 735–747 (2006). [CrossRef]  

2. N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013). [CrossRef]  

3. Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013). [CrossRef]  

4. S. Zhu, K. Su, Y. Liu, H. Yin, Z. Li, F. Huang, Z. Chen, W. Chen, G. Zhang, and Y. Chen, “Identification of cancerous gastric cells based on common features extracted from hyperspectral microscopic images,” Biomed. Opt. Express 6(4), 1135–1145 (2015). [CrossRef]  

5. Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018). [CrossRef]  

6. M. Muto, T. Horimatsu, Y. Ezoe, S. Morita, and S. Miyamoto, “Improving visualization techniques by narrow band imaging and magnification endoscopy,” J. Gastroenterol. Hepatol. 24(8), 1333–1346 (2009). [CrossRef]  

7. M. Muto, T. Horimatsu, Y. Ezoe, K. Hori, Y. Yukawa, S. Morita, S. Miyamoto, and T. Chiba, “Narrow-band imaging of the gastrointestinal tract,” J. Gastroenterol. 44(1), 13–25 (2009). [CrossRef]  

8. S. V. Parasca, M. A. Calin, D. Manea, S. Miclos, and R. Savastru, “Hyperspectral index-based metric for burn depth assessment,” Biomed. Opt. Express 9(11), 5778–5791 (2018). [CrossRef]  

9. J. Hyttinen, P. Fält, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Contrast enhancement of dental lesions by light source optimisation,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 499–507.

10. P. Fält, J. Hyttinen, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Spectral image enhancement for the visualization of dental lesions,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 490–498.

11. Y. Kurabuchi, K. Murai, K. Nakano, T. Ohnishi, T. Nakaguchi, M. Hauta-Kasari, and H. Haneishi, “Optimal design of illuminant for improving intraoperative color appearance of organs,” Artif. Life Robotics 24(1), 52–58 (2019). [CrossRef]  

12. D.-Y. Tzeng and R. S. Berns, “A review of principal component analysis and its applications to color technology,” Color Res. Appl. 30(2), 84–98 (2005). [CrossRef]  

13. N. Hayasaka, S. Toyooka, and T. Jaaskelainen, “Iterative feedback method to make a spatial filter on a liquid crystal spatial light modulator for 2d spectroscopic pattern recognition,” Opt. Commun. 119(5-6), 643–651 (1995). [CrossRef]  

14. A. A. Kamshilin and E. Nippolainen, “Chromatic discrimination by use of computer controlled set of light-emitting diodes,” Opt. Express 15(23), 15093–15100 (2007). [CrossRef]  

15. J. P. S. Parkkinen, J. Hallikainen, and T. Jaaskelainen, “Characteristic spectra of munsell colors,” J. Opt. Soc. Am. A 6(2), 318–322 (1989). [CrossRef]  

16. T. Jaaskelainen, J. Parkkinen, and S. Toyooka, “Vector-subspace model for color representation,” J. Opt. Soc. Am. A 7(4), 725–730 (1990). [CrossRef]  

17. R. Piché, “Nonnegative color spectrum analysis filters from principal component analysis characteristic spectra,” J. Opt. Soc. Am. A 19(10), 1946–1950 (2002). [CrossRef]  

18. L. Fauch, E. Nippolainen, V. Teplov, and A. A. Kamshilin, “Recovery of reflection spectra in a multispectral imaging system with light emitting diodes,” Opt. Express 18(22), 23394–23405 (2010). [CrossRef]  

19. M. Flinkman, H. Laamanen, J. Tuomela, P. Vahimaa, and M. Hauta-Kasari, “Eigenvectors of optimal color spectra,” J. Opt. Soc. Am. A 30(9), 1806–1813 (2013). [CrossRef]  

20. Edmund Optics, “Tunable spectral light engine 11-175,” https://www.edmundoptics.com/document/download/461627. Accessed: 2019-09-30.

21. D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,” in 2007 IEEE Swarm Intelligence Symposium, (2007), pp. 120–127.

22. S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, T. Yu, and the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014). [CrossRef]  

[Crossref]

M. Muto, T. Horimatsu, Y. Ezoe, K. Hori, Y. Yukawa, S. Morita, S. Miyamoto, and T. Chiba, “Narrow-band imaging of the gastrointestinal tract,” J. Gastroenterol. 44(1), 13–25 (2009).
[Crossref]

Nakaguchi, T.

Y. Kurabuchi, K. Murai, K. Nakano, T. Ohnishi, T. Nakaguchi, M. Hauta-Kasari, and H. Haneishi, “Optimal design of illuminant for improving intraoperative color appearance of organs,” Artif. Life Robotics 24(1), 52–58 (2019).
[Crossref]

Nakano, K.

Y. Kurabuchi, K. Murai, K. Nakano, T. Ohnishi, T. Nakaguchi, M. Hauta-Kasari, and H. Haneishi, “Optimal design of illuminant for improving intraoperative color appearance of organs,” Artif. Life Robotics 24(1), 52–58 (2019).
[Crossref]

Nippolainen, E.

Nunez-Iglesias, J.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Ohnishi, T.

Y. Kurabuchi, K. Murai, K. Nakano, T. Ohnishi, T. Nakaguchi, M. Hauta-Kasari, and H. Haneishi, “Optimal design of illuminant for improving intraoperative color appearance of organs,” Artif. Life Robotics 24(1), 52–58 (2019).
[Crossref]

Parasca, S. V.

Parkkinen, J.

Parkkinen, J. P. S.

Piché, R.

Riepponen, A.

P. Fält, J. Hyttinen, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Spectral image enhancement for the visualization of dental lesions,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 490–498.

J. Hyttinen, P. Fält, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Contrast enhancement of dental lesions by light source optimisation,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 499–507.

Savastru, R.

Schönberger, J. L.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

soon Ong, Y.

Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018).
[Crossref]

Su, K.

Teplov, V.

Toyooka, S.

N. Hayasaka, S. Toyooka, and T. Jaaskelainen, “Iterative feedback method to make a spatial filter on a liquid crystal spatial light modulator for 2d spectroscopic pattern recognition,” Opt. Commun. 119(5-6), 643–651 (1995).
[Crossref]

T. Jaaskelainen, J. Parkkinen, and S. Toyooka, “Vector-subspace model for color representation,” J. Opt. Soc. Am. A 7(4), 725–730 (1990).
[Crossref]

Tuomela, J.

Tzeng, D.-Y.

D.-Y. Tzeng and R. S. Berns, “A review of principal component analysis and its applications to color technology,” Color Res. Appl. 30(2), 84–98 (2005).
[Crossref]

Vahimaa, P.

van der Walt, S.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Wang, Y.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013).
[Crossref]

Warner, J. D.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Xu, D.

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013).
[Crossref]

Yager, N.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Yin, H.

Young, I. T.

Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: Principles and applications,” Cytometry, Part A 69A(8), 735–747 (2006).
[Crossref]

Yu, T.

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Yukawa, Y.

M. Muto, T. Horimatsu, Y. Ezoe, K. Hori, Y. Yukawa, S. Morita, S. Miyamoto, and T. Chiba, “Narrow-band imaging of the gastrointestinal tract,” J. Gastroenterol. 44(1), 13–25 (2009).
[Crossref]

Zhang, G.

Zhang, L.

Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018).
[Crossref]

Zhong, Y.

Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018).
[Crossref]

Zhu, S.

Zhu, Z.

Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018).
[Crossref]

Appl. Soft Comput. (1)

Y. Zhong, A. Ma, Y. soon Ong, Z. Zhu, and L. Zhang, “Computational intelligence in optical remote sensing image processing,” Appl. Soft Comput. 64, 75–93 (2018).
[Crossref]

Artif. Life Robotics (1)

Y. Kurabuchi, K. Murai, K. Nakano, T. Ohnishi, T. Nakaguchi, M. Hauta-Kasari, and H. Haneishi, “Optimal design of illuminant for improving intraoperative color appearance of organs,” Artif. Life Robotics 24(1), 52–58 (2019).
[Crossref]

Biomed. Opt. Express (2)

Color Res. Appl. (1)

D.-Y. Tzeng and R. S. Berns, “A review of principal component analysis and its applications to color technology,” Color Res. Appl. 30(2), 84–98 (2005).
[Crossref]

Cytometry, Part A (1)

Y. Garini, I. T. Young, and G. McNamara, “Spectral imaging: Principles and applications,” Cytometry, Part A 69A(8), 735–747 (2006).
[Crossref]

J. Biomed. Opt. (1)

Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt. 18(10), 100901 (2013).
[Crossref]

J. Gastroenterol. (1)

M. Muto, T. Horimatsu, Y. Ezoe, K. Hori, Y. Yukawa, S. Morita, S. Miyamoto, and T. Chiba, “Narrow-band imaging of the gastrointestinal tract,” J. Gastroenterol. 44(1), 13–25 (2009).
[Crossref]

J. Gastroenterol. Hepatol. (1)

M. Muto, T. Horimatsu, Y. Ezoe, S. Morita, and S. Miyamoto, “Improving visualization techniques by narrow band imaging and magnification endoscopy,” J. Gastroenterol. Hepatol. 24(8), 1333–1346 (2009).
[Crossref]

J. Opt. Soc. Am. A (4)

Opt. Commun. (1)

N. Hayasaka, S. Toyooka, and T. Jaaskelainen, “Iterative feedback method to make a spatial filter on a liquid crystal spatial light modulator for 2d spectroscopic pattern recognition,” Opt. Commun. 119(5-6), 643–651 (1995).
[Crossref]

Opt. Eng. (1)

N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng. 52(9), 090901 (2013).
[Crossref]

Opt. Express (2)

PeerJ (1)

S. van der Walt, J. L. Schönberger, J. Nunez-Iglesias, F. Boulogne, J. D. Warner, N. Yager, E. Gouillart, and T. Yu, the scikit-image contributors, “scikit-image: image processing in Python,” PeerJ 2, e453 (2014).
[Crossref]

Other (4)

Edmund Optics, “Tunable spectral light engine 11-175,” https://www.edmundoptics.com/document/download/461627 . Accessed: 2019-09-30.

D. Bratton and J. Kennedy, “Defining a standard for particle swarm optimization,” in 2007 IEEE Swarm Intelligence Symposium, (2007), pp. 120–127.

J. Hyttinen, P. Fält, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Contrast enhancement of dental lesions by light source optimisation,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 499–507.

P. Fält, J. Hyttinen, L. Fauch, A. Riepponen, A. Kullaa, and M. Hauta-Kasari, “Spectral image enhancement for the visualization of dental lesions,” in International Conference on Image and Signal Processing, (Springer, 2018), pp. 490–498.

Figures (9)

Fig. 1. (a) An example transmission spectrum of a partially negative filter vector $\hat {e}(\lambda )$ in arbitrary units (a.u.), (b) its positive part $\hat {e}^+(\lambda )$, and (c) the absolute values $|\hat {e}^-(\lambda )|$ of its negative part.

Fig. 2. Imaging setup schematics: (a) an illuminant with spectrum $L(\lambda )$ illuminates a sample with reflectance spectrum $R(x,y,\lambda )$. The sample reflects light to the optical filter with a transmittance spectrum $F(\lambda )$. A monochrome camera with a sensitivity spectrum $S(\lambda )$ captures an image of the filtered light reflecting from the sample. (b) The illuminant and the filter are combined into a spectrally tunable light source with an illumination spectrum $X(\lambda )$.

Fig. 3. Spectral properties of the devices: (a) emission spectra of the LEDs of the spectrally tunable light source (Edmund Optics Tunable Spectral Light Engine), (b) effective spectral transmission of the liquid light guide (Edmund Optics ø5 mm liquid light guide) when connected to the light source, (c) spectral transmittance of the ground glass diffuser (Thorlabs DG10-120), (d) spectral transmission of the camera objective (Electrophysics 25 mm f/1.3), (e) spectral sensitivity of the monochrome camera (Photometrics Prime BSI), and (f) the effective sensitivity of the camera.

Fig. 4. Photographs and schematics of the imaging setups: (a) the proof-of-concept imaging setup and its (b) schematic view, and (c) the oral and dental imaging setup and its (d) schematic view. In the schematics, the abbreviations are as follows: STLS: spectrally tunable light source, LLG: liquid light guide, GGD: ground glass diffuser, CCM: ColorChecker Mini, OBJ: camera objective, and CAM: monochrome camera.

Fig. 5. Examples of the spectral images used: (a) the lower teeth show calculus and the oral mucosa blood vessels, and (b) the two upper front center teeth have prosthetic tips.

Fig. 6. Illumination emission and filter transmission spectra, in arbitrary units (a.u.): (a) $X_1(\lambda )$ and $\hat {e}_1(\lambda )$, (b) $X_2(\lambda )$ and $\hat {e}_2(\lambda )$, and (c) $X_3(\lambda )$ and $\hat {e}_3(\lambda )$, where the illumination spectra are the blue continuous lines and the filter spectra the orange dashed lines. Positive and negative illumination spectra (d) $X_1^+(\lambda )$ and $X_1^-(\lambda )$, (e) $X_2^+(\lambda )$ and $X_2^-(\lambda )$, and (f) $X_3^+(\lambda )$ and $X_3^-(\lambda )$, where the positive part spectra are the green continuous lines and the negative part spectra the red dashed lines.

Fig. 7. Proof-of-concept inner product images: (a) computational and (b) imaged inner product images for filters $\hat {e}_1(\lambda )$ and $X_1(\lambda )$ in the ideal case, (c) computational and (d) imaged inner product images for filters $\hat {e}_2(\lambda )$ and $X_2(\lambda )$ when the spectra overlap slightly, and (e) computational and (f) imaged inner product images for filters $\hat {e}_3(\lambda )$ and $X_3(\lambda )$ when the spectra overlap significantly. The Specim IQ images in the left column were cropped from a larger image, and scaling blurs them slightly. The Photometrics Prime BSI images on the right look distorted because the color checker is slightly bent.

Fig. 8. Oral and dental contrast enhancement filters: (a) blood vessels, (b) calculus, and (c) prosthetics. The positive parts of the filters: (d) blood vessels, (e) calculus, and (f) prosthetics. The negative parts of the filters: (g) blood vessels, (h) calculus, and (i) prosthetics. The blue dashed lines are the eigenvectors $\hat {e}_n(\lambda )$, the orange dotted lines the optical filters $X_n(\lambda )$, and the green continuous lines the fitted LED illumination spectra $L_n(\lambda )$ implementing the optical filters.

Fig. 9. Contrast-enhanced images: (a) reference image and (b) inner product image for the blood vessel filter, (c) reference image and (d) inner product image for the calculus filter, and (e) reference image and (f) inner product image for the prosthetics filter. Additionally, (g) an inner product image of the dental prosthetics filter applied to a spectral image.

Tables (1)

Table 1. The numbers of LEDs, peak emission wavelengths, and full-width at half-maximum values of the Edmund Optics Tunable Spectral Light Engine LED spectra [20].

Equations (24)

$$R(x,y,\lambda_i) = \frac{s_{\mathrm{sample}}(x,y,\lambda_i) - s_{\mathrm{dark}}(x,y,\lambda_i)}{s_{\mathrm{ref}}(x,y,\lambda_i) - s_{\mathrm{dark}}(x,y,\lambda_i)} \times R_{\mathrm{ref}}(\lambda_i),$$

$$I_{ip}(x,y) = \sum_{i=1}^{N} R(x,y,\lambda_i)\,\hat{e}(\lambda_i) = \langle R(x,y,\lambda), \hat{e}(\lambda) \rangle.$$

$$I_{ip}(x,y) = \langle R(x,y,\lambda), \hat{e}^{+}(\lambda) \rangle + \langle R(x,y,\lambda), \hat{e}^{-}(\lambda) \rangle$$
$$= \langle R(x,y,\lambda), \hat{e}^{+}(\lambda) \rangle - \langle R(x,y,\lambda), |\hat{e}^{-}(\lambda)| \rangle$$
$$= I_{ip}^{+}(x,y) - I_{ip}^{-}(x,y),$$

$$I_{\mathrm{cam}}(x,y) = \int_{\lambda} L(\lambda)\, R(x,y,\lambda)\, F(\lambda)\, S(\lambda)\, d\lambda + \eta(x,y)$$
$$= \langle R(x,y,\lambda), L(\lambda) F(\lambda) S(\lambda) \rangle + \eta(x,y),$$

$$\hat{e}(\lambda) = L(\lambda)\, F(\lambda)\, S(\lambda) = X(\lambda)\, S(\lambda),$$

$$\hat{e}^{+}(\lambda) = X^{+}(\lambda)\, S(\lambda)$$
$$|\hat{e}^{-}(\lambda)| = X^{-}(\lambda)\, S(\lambda).$$

$$X^{+}(\lambda) = \frac{\hat{e}^{+}(\lambda)}{S(\lambda)}$$
$$X^{-}(\lambda) = \frac{|\hat{e}^{-}(\lambda)|}{S(\lambda)}.$$

$$I_{\mathrm{cam}}(x,y) = \langle R(x,y,\lambda), X^{+}(\lambda) S(\lambda) \rangle - \langle R(x,y,\lambda), X^{-}(\lambda) S(\lambda) \rangle$$
$$= I_{\mathrm{cam}}^{+}(x,y) - I_{\mathrm{cam}}^{-}(x,y),$$

$$I_{ip}(x,y) = I_{\mathrm{cam}}(x,y) - \eta(x,y),$$
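The positive/negative decomposition of the inner-product image described above can be sketched numerically. The following minimal NumPy illustration (with a synthetic reflectance cube and filter vector; all variable names are hypothetical, not from the paper) verifies that the inner-product image with a partially negative filter equals the difference of the two non-negative channel images, each of which is optically implementable:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic hyperspectral reflectance cube R(x, y, lambda_i): H x W x N bands
R = rng.uniform(0.0, 1.0, size=(4, 5, 31))

# A partially negative filter vector e_hat(lambda_i), e.g. a PCA eigenvector
e_hat = np.sin(np.linspace(0.0, 2.0 * np.pi, 31))

# Split into the non-negative positive part and the absolute value
# of the negative part
e_pos = np.clip(e_hat, 0.0, None)          # e^+(lambda)
e_neg_abs = np.clip(-e_hat, 0.0, None)     # |e^-(lambda)|

# Inner-product images: direct, and as a difference of two "measurements"
I_ip = R @ e_hat                           # <R, e_hat>, not directly measurable
I_pos = R @ e_pos                          # I_ip^+, optically implementable
I_neg = R @ e_neg_abs                      # I_ip^-, optically implementable

assert np.allclose(I_ip, I_pos - I_neg)
```

The identity holds because $\hat{e}(\lambda) = \hat{e}^{+}(\lambda) - |\hat{e}^{-}(\lambda)|$ elementwise, so the two non-negative images can be captured separately and subtracted in post-processing.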
$$S_{\mathrm{eff}}(\lambda) = \frac{T_{\mathrm{LLG}}(\lambda)}{\max\{T_{\mathrm{LLG}}(\lambda)\}} \times \frac{T_{\mathrm{GGD}}(\lambda)}{\max\{T_{\mathrm{GGD}}(\lambda)\}} \times \frac{T_{\mathrm{OBJ}}(\lambda)}{\max\{T_{\mathrm{OBJ}}(\lambda)\}} \times \frac{S_{\mathrm{CAM}}(\lambda)}{\max\{S_{\mathrm{CAM}}(\lambda)\}}.$$

$$I_{n}^{\pm}(x,y) = I_{n,t}^{\pm}(x,y) / t_{n}^{\pm}$$
$$X_{n}^{\pm}(\lambda) = X_{n,t}^{\pm}(\lambda) / t_{n}^{\pm}$$
$$X_{n}^{\pm}(\lambda) = \frac{X_{n}^{\pm}(\lambda)}{\max\{\max[X_{n}^{+}(\lambda)], \max[X_{n}^{-}(\lambda)]\}}.$$
$$X_{n}(\lambda) = X_{n}^{+}(\lambda) - X_{n}^{-}(\lambda),$$
$$\hat{e}_{n}(\lambda) = X_{n}(\lambda)\, S_{\mathrm{eff}}(\lambda).$$
$$I_{\mathrm{CCM},n} = \langle R_{\mathrm{CCM}}(x,y,\lambda), \hat{e}_{n}(\lambda) \rangle.$$
$$X_{n}(\lambda) = \frac{\hat{e}_{n}(\lambda)}{S_{\mathrm{eff}}(\lambda)}.$$
$$I_{n}^{\pm}(x,y) = I_{n,\mathrm{cam}}^{\pm}(x,y) \times \frac{f_{n}^{\pm}}{t_{n}^{\pm}}$$
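The effective-sensitivity and filter-to-illumination steps above can also be sketched. This NumPy example uses smooth synthetic curves as stand-ins for the measured component spectra (the Gaussian shapes and all variable names are illustrative assumptions, not measured data): $S_{\mathrm{eff}}$ is the product of the max-normalized component spectra, the filter parts are converted to target illumination spectra via $X^{\pm} = \hat{e}^{\pm}/S_{\mathrm{eff}}$, and the pair is jointly normalized so the larger part peaks at 1:

```python
import numpy as np

wl = np.linspace(400.0, 700.0, 31)  # wavelength grid in nm (illustrative)

# Synthetic component spectra (stand-ins for the measured curves)
T_llg = np.exp(-((wl - 550.0) / 200.0) ** 2)  # liquid light guide
T_ggd = np.full_like(wl, 0.7)                 # ground glass diffuser
T_obj = np.exp(-((wl - 520.0) / 250.0) ** 2)  # camera objective
S_cam = np.exp(-((wl - 530.0) / 150.0) ** 2)  # camera sensitivity

# Effective sensitivity: product of max-normalized component spectra
S_eff = (T_llg / T_llg.max()) * (T_ggd / T_ggd.max()) \
      * (T_obj / T_obj.max()) * (S_cam / S_cam.max())

# A partially negative filter vector and its positive/negative parts
e_hat = np.sin((wl - 400.0) / 300.0 * 2.0 * np.pi)
e_pos = np.clip(e_hat, 0.0, None)
e_neg_abs = np.clip(-e_hat, 0.0, None)

# Target illumination spectra implementing the filter parts:
# X^+/- = e^+/- / S_eff (guarded against near-zero sensitivity)
X_pos = np.where(S_eff > 1e-6, e_pos / S_eff, 0.0)
X_neg = np.where(S_eff > 1e-6, e_neg_abs / S_eff, 0.0)

# Joint normalization so that max{max[X^+], max[X^-]} = 1
scale = max(X_pos.max(), X_neg.max())
X_pos, X_neg = X_pos / scale, X_neg / scale
```

Dividing by $S_{\mathrm{eff}}$ pre-compensates the imaging chain so that the camera's effective response to the tuned illumination reproduces the desired filter parts, up to the common scale factor.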