
Simulations of fluorescence imaging in the oral cavity

Open Access

Abstract

We describe an end-to-end image systems simulation that models a device capable of measuring fluorescence in the oral cavity. Our software includes a 3D model of the oral cavity and excitation-emission matrices of endogenous fluorophores that predict the spectral radiance of oral mucosal tissue. The predicted radiance is transformed by a model of the optics and image sensor to generate expected sensor image values. We compare simulated and real camera data from tongues in healthy individuals and show that the camera sensor chromaticity values can be used to quantify the fluorescence from porphyrins relative to the bulk fluorescence from multiple fluorophores (elastin, NADH, FAD, and collagen). Validation of the simulations supports the use of soft-prototyping in guiding system design for fluorescence imaging.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Advances in imaging sensors and algorithms offer many opportunities to improve medical screening and diagnosis. While the vast majority of modern imaging devices were designed to support consumer photography, it is possible to design systems that are customized for specific applications. Implementing an imaging system for a domain-specific application requires selecting and integrating many different hardware and software components, including lights, filters, optics, sensors and image processing algorithms. The conventional design process is to build a physical prototype in order to test how it performs. An end-to-end simulation that models each component can replace this time-consuming and expensive design-build-test loop, thereby accelerating innovation. We are developing and validating end-to-end simulation tools for soft-prototyping imaging systems for several domains, including consumer photography [1–3], underwater imaging [4], AR/VR displays [5] and autonomous driving [6–10].

In this paper we implement simulations to prototype a system for imaging and quantifying fluorescence in the oral cavity. The motivation for designing this system is based on observations that tissue autofluorescence can be used to discriminate between normal and cancerous tissue [11–13]. This finding has led to the development of several different types of imaging systems designed for non-invasive in-vivo measurements of tissue autofluorescence [14]. Because the autofluorescence signal is weak compared to reflected light, one must select special purpose components that can separate reflected and fluorescent photons.

The image systems simulations enable us to evaluate different combinations of illuminants, filters and sensors that excite biological tissue fluorophores and quantify the photons emitted by the fluorophores. We validate simulations by comparing the prototype predictions with measurements from a real system that is designed to evoke fluorescence from the oral cavity. The soft-prototyping tools are sufficiently accurate to form the foundation for future work that explores alternative image system designs. Our goal for simulating fluorescence imaging in the oral cavity is to support the development of imaging systems that can provide quantitative and diagnostic information to clinicians who are screening patients for oral cancer.

2. Background

Worldwide, there are more than half a million cases of oral cancer each year, and these have a five-year survival rate of less than 50% [15–17]. Early detection of oral cancer lesions can save lives through successful resection of the lesion before it has metastasized [15]. Today, lesions are typically detected by a clinician using visual inspection and palpation and diagnosed by a pathologist analyzing histological samples from surgically removed lesions. Unfortunately, oral cancer lesions are difficult to detect and to discriminate from benign oral mucosal conditions. It would be much more effective if one could create a non-invasive imaging system that can provide quantitative and diagnostic information to clinicians who are screening patients for oral cancer.

Auto-fluorescent emissions from the oral cavity are one possible source of information. Several types of auto-fluorescent emissions in the oral cavity have been measured during precancerous and cancerous stages. These emissions have been measured using fiber optic illumination and spot spectroradiometric sensors [12,18,19]. The measurements reveal a complex set of changes in tissue auto-fluorescence in the presence of precancerous and cancerous tissue.

  • Certain types of oral cavity tissue auto-fluorescence are reduced in the presence of precancerous and cancerous tissue. The reduced fluorescence has been attributed to a reduction in FAD (flavin adenine dinucleotide), a molecule that plays an important role in cell respiration and metabolism, and to changes in collagen and elastin that occur with cellular damage [20–24].
  • Other types of tissue fluorescence in the oral cavity are higher than normal in the presence of cancerous tissue. Several investigators have hypothesized that NADH fluorescence (the reduced form of nicotinamide adenine dinucleotide) increases in cancerous tissue [22,25].
  • Investigators also report observing the distinctive spectral signature of porphyrins fluorescence in cancerous lesions [12,20,26–29]. Porphyrins fluorescence is present in the mouths of many healthy individuals as well, and it can be measured in plaque [30], caries [31], and on the dorsal side of healthy tongues [23]. The porphyrins fluorescence is large, but not necessarily diagnostic of oral cancer [26].

The complex set of findings led to the development of several illumination systems designed to help dentists visualize oral mucosal abnormalities (e.g., Velscope, OralID, Identafi). These products use short-wavelength LEDs to excite endogenous fluorophores in the oral mucosal tissue. The clinician is provided with glasses or a viewer that block the reflected short-wavelength light from the illuminant and enhance the visibility of the fluorescent emissions in the middle- and long-wavelengths [13,32,33]. The clinician is tasked with judging whether there are abnormally dark areas on the tongue and in other parts of the oral cavity.

In practice, the size of the measured fluorescent signal depends markedly on the choice of illuminant. Many empirical reports use a single, narrowband illuminant to excite fluorescence with peak wavelengths ranging between 350 and 450 nm. One of the objectives of soft prototyping is to explore the consequences of selecting different combinations of illuminants and sensors. We are restricting our analysis to imaging endogenous fluorescence, although our methodology can be applied to guide the design of imaging systems for detecting fluorescence induced by exogenous agents [34].

Further, it is desirable to build an oral cancer screening system based on a quantitative lab test, rather than the clinician’s visual judgment. A system that reliably measures fluorescence may provide the basis of a lab test that meaningfully assesses the health status of the oral cavity. A second objective is to establish how well the image system can quantify fluorescence from the oral cavity.

3. Methods: image systems simulation

The image systems soft-prototyping tools are based on a quantitative model of the scene and image acquisition device. Implementing the simulation requires defining: (a) a three-dimensional graphics model of the key elements (oral cavity, light and camera positions), (b) the lights and materials, including their spectral reflectance and fluorescence, and (c) a model of the camera, including its optics and sensor.

3.1 Geometry of the oral cavity, light, and camera

We use computer graphics packages, including Cinema 4D and Blender, to represent the geometry of the scene (Fig. 1(a)). This application requires geometric modeling of the size and shape of the oral cavity as well as the positions of the imaging system’s lights, filters, lens and sensor. A 3D model of the oral cavity represents the surfaces (tongue, lips, teeth, palate, floor of the mouth, etc.) as meshes and the relative positions of the image system components as points or regions in 3D coordinates. The geometric data are exported to a set of text files that are read and interpreted by the physically-based ray-tracing software (PBRT, [35]). The physical properties of the lights, materials, and lens are specified using the parameters of the PBRT software (Fig. 1(b)). The PBRT software accounts for scattering from multiple surfaces. Others have tested the accuracy of ray-tracing models of inter-reflections [36,37]. We have also validated end-to-end simulations that include PBRT by comparing real and simulated digital camera images of a precisely constructed, three-dimensional, high dynamic range test scene with surface inter-reflections [38]. The accuracy with which we predict the real camera system in this paper further shows that we adequately account for inter-reflections. Support for some additional critical properties, such as tissue fluorescence and absolute spectral power distributions of the illumination, was added through modifications we made to the open-source PBRT code for this project. These new features are included in the freely available Docker container used to create the simulations in this paper.

Fig. 1. Diagram of the imaging system simulation pipeline. a) The 3D mesh model of the oral cavity, as well as the positions of the light and camera, are defined in graphics software. b) The simulations incorporate models of the tongue texture map, surface reflectance, and tissue fluorescence. The ray tracing also models diffuse and glossy reflections. c) The camera model specifies the multi-element optics as well as the spectral quantum efficiency, geometric and electrical properties of the image sensor.

Controlling the physical properties of the materials (e.g., fluorophore concentration, diffuse reflectance, spectral power distribution, light intensities) is an essential part of the simulation environment. We used the Matlab toolbox (ISET3d) to simplify programmatic control of the assets, materials, textures, and illumination [7,39]. The toolbox includes functions that read the PBRT text files, represent them as internal Matlab objects, and enable the user to set and get properties of the entire scene. The toolbox also includes functions that save out the modified parameters in PBRT format and then invoke the Docker container with the PBRT ray-tracer to render the scene spectral radiance or the sensor image irradiance. The ability to programmatically control the scene properties is essential as we test different systems and a range of measurement conditions, including different positions of the lights and camera with respect to the oral cavity. Figure 2 illustrates examples of different tongue and jaw poses viewed at different orientations.
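
The toolbox functions described above are part of a Matlab package (ISET3d). Purely to illustrate the read/set/render loop described in this paragraph, the Python sketch below uses hypothetical stand-in functions (read_scene, render); it is a schematic of the workflow, not the actual ISET3d API.

# Schematic of the programmatic read/set/render loop described above.
# read_scene and render are hypothetical stand-ins, NOT the ISET3d API.

def read_scene(pbrt_path: str) -> dict:
    """Parse exported PBRT text files into an in-memory scene object."""
    return {"path": pbrt_path, "camera": {"position": [0.0, 0.0, 0.10]}}

def render(scene: dict) -> None:
    """Write the modified parameters back to PBRT format and invoke the
    ray tracer (in the paper's workflow, PBRT inside a Docker container)."""
    pass

# Sweep the camera distance to test a range of measurement conditions.
scene = read_scene("oral_cavity.pbrt")
for z in (0.05, 0.10, 0.15):                     # camera distance (meters)
    scene["camera"]["position"] = [0.0, 0.0, z]  # set a scene property
    render(scene)                                # render the scene radiance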

Fig. 2. Mouth model rendered in various poses.

3.2 Material scattering and fluorescence

The spectral radiance $L(\lambda )$ from the tissue surface can be partitioned into two additive components, the radiance due to the standard diffuse-glossy reflection $L_{r}$ and the radiance due to fluorescent emission $L_{f}$:

$$L(\lambda, w_o) = L_{r}(\lambda, w_o) + L_{f}(\lambda, w_o)$$

In the ray-tracing protocol of PBRT, the diffuse-glossy reflectance and fluorescent emission of the scene radiance are parameterized by the angles of the incident ray ($w_e$) and the outgoing ray ($w_o$). The radiance from the diffuse-glossy reflectance is the wavelength-by-wavelength product of the irradiance, $E(\lambda ,w_e)$, and the surface reflectance, $R(\lambda , w_e, w_o)$, integrated over the incident directions:

$$L_{r}(\lambda, w_o) = \int E(\lambda, w_e) R(\lambda, w_e, w_o) dw_e$$

The calculations of these angle-dependent ray intensities rely on standard material models, in this case a combination of a Lambertian term and a small glossy term that gives the tongue and teeth their shiny appearance.

The fluorescence properties of the material are characterized by an excitation-emission function $F(\lambda , \lambda _i)$, where $\lambda _i$ is the incident light wavelength and $\lambda$ is the fluorescence emission wavelength. The fluorescence is the product of the spectral irradiance with the excitation-emission function, integrated over the excitation wavelengths and incident directions. Specifically, we calculate the fluorescent emission at $\lambda$, given an incident ray at $\lambda _i$ and angle $w_e$, using:

$$L_{f}(\lambda) = \iint F(\lambda, \lambda_i)E(\lambda_i, w_e)d\lambda_i dw_e$$

We model the fluorescent emissions as Lambertian, so that no $w_o$ term is needed (the emission spectrum is independent of direction). The excitation-emission function for each fluorophore is also referred to as the Donaldson matrix. Stokes [40] observed that fluorescent emissions typically arise only at wavelengths that are longer (lower energy) than the excitation wavelength; consequently, the excitation-emission matrix (EEM) is triangular. We cannot specify absolute levels for the entries of the EEM; in the simulations, we normalize the EEM so that the maximum value is one. Hence, fluorophore concentrations are estimated in relative units.
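
To make the EEM construction concrete, the numpy sketch below builds a peak-normalized, triangular EEM for a single hypothetical fluorophore (the Gaussian excitation and emission spectra are our illustrative choice, not measured data) and evaluates the fluorescence expression above with the angular integral collapsed to a single incident direction.

import numpy as np

wave = np.arange(350, 705, 5)                    # wavelength samples (nm)

# Hypothetical Gaussian excitation/emission spectra for one fluorophore.
excitation = np.exp(-0.5 * ((wave - 390) / 30) ** 2)
emission = np.exp(-0.5 * ((wave - 520) / 40) ** 2)

# Outer product gives the EEM; zeroing the upper triangle enforces Stokes'
# observation that emission occurs only at wavelengths longer than excitation.
eem = np.outer(emission, excitation)   # rows: emission, columns: excitation
eem = np.tril(eem, k=-1)
eem /= eem.max()                       # peak-normalize so the maximum is one

# Fluorescent radiance for a single incident direction:
# L_f(lambda) = sum_i F(lambda, lambda_i) E(lambda_i) * d_lambda
illuminant = np.exp(-0.5 * ((wave - 385) / 10) ** 2)   # 385 nm LED-like SPD
L_f = eem @ illuminant * 5.0                           # 5 nm bin width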

3.2.1 Fluorophore mixture model

We modeled five fluorophores that are commonly found in human oral cavity mucosal cells: NADH, FAD, elastin, collagen and porphyrins. The emission from a mixture is the sum of the individual fluorophore terms, weighted by $\alpha _{j}$ [41]:

$$L_{f}(\lambda) = \iint \sum_j \alpha_j F_j(\lambda,\lambda_i) E(\lambda_i,w_e) d\lambda_i dw_e$$

The final expression for the radiant intensity in the output direction, as a function of the fluorophores and the standard reflectance, is:

$$L(\lambda, w_o) = \iint \sum_j \alpha_j F_j(\lambda,\lambda_i) E(\lambda_i,w_e) d\lambda_i dw_e + \int R(\lambda,w_e,w_o)E(\lambda,w_e) dw_e$$
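
Continuing the sketch above, the mixture expression can be evaluated numerically as a weighted sum over fluorophore EEMs plus the reflected term; the weights, reflectance, and EEMs below are illustrative stand-ins, not measured values.

import numpy as np

def total_radiance(eems, weights, reflectance, illuminant, d_lambda=5.0):
    """The mixture expression above, for a single incident direction: the
    weighted fluorescent emissions plus the wavelength-by-wavelength
    reflected light."""
    fluorescence = sum(a * (F @ illuminant) * d_lambda
                       for a, F in zip(weights, eems))
    return fluorescence + reflectance * illuminant

# Toy usage: two random triangular EEMs on a 350-700 nm grid.
wave = np.arange(350, 705, 5)
eems = [np.tril(np.random.rand(wave.size, wave.size), k=-1) for _ in range(2)]
weights = [0.8, 0.2]                    # relative concentrations (alpha_j)
reflectance = np.full(wave.size, 0.05)  # weak diffuse reflectance
illuminant = np.exp(-0.5 * ((wave - 385) / 10) ** 2)
L = total_radiance(eems, weights, reflectance, illuminant)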

The EEMs of the five fluorophores are shown in the top row of Fig. 3. The bottom row of Fig. 3 plots the product of the EEMs with the spectral power distribution of the short-wavelength illuminant used in the OralEye camera described later. Although some ability to separate the fluorophores should be possible, the large overlap in the emission spectra of collagen, elastin, FAD and NADH makes it unlikely that one can precisely quantify the concentrations of these four fluorophores with this system. In contrast, the porphyrins have an emission spectrum that does not overlap with the other four fluorophores, making it possible to separate porphyrins from these fluorophores (Fig. 3).

Fig. 3. Top row: Excitation-emission matrices for fluorophores commonly found in the oral cavity. The EEMs are triangular because in general the emission wavelength exceeds the excitation wavelength [42]. The emissions are calculated on the assumption that the peak value in the EEM is 1. Bottom row: Spectral radiance produced by multiplying the EEM for each fluorophore with the spectral power distribution of the single short-wavelength illuminant used in the OralEye camera (see Fig. 4). The gray-shaded region shows wavelengths that are filtered out before reaching the OralEye sensor.

3.3 End-to-end simulation of the image system components

3.3.1 Camera lights

The experimental camera (built by FengYun Vision Technologies and referred to as the "OralEye") is shown in Fig. 4. The camera is designed to acquire images for previewing and measuring tissue fluorescence. To meet this goal, the camera has two light sources: a ring of LEDs that provides broadband illumination (“white”), and a second array of short-wavelength sources (“blue”) that are used to evoke the tissue fluorescence. The camera acquires images in rapid sequence using different sources with a programmable range of sensor gains and exposure durations. Figure 4 shows the spatial configuration of the two sets of LEDs. In addition, the figure shows measurements of the relative spectral energy distribution of the white and blue sources.

Fig. 4. Photos of the OralEye camera, illustrating the two camera light sources: broadband white light (top row) and blue light (bottom row). The spectral energy of the white is broadband; the blue LED peaks at 385 nm. The white light is used for previewing the oral cavity and the blue light is used for exciting fluorescence. The sensor is in the center of the camera behind a longpass and NIR filter (see Fig. 5(c)).

In any realistic setting, it is impossible to precisely control the spatial distribution of the illumination. The illumination will be nonuniformly distributed over space, and this nonuniformity will vary with the distance from the camera to the oral cavity. Further nonuniformities arise from secondary bounces of the light from the materials. This spatial illumination nonuniformity is modeled in the simulations and should be handled by the algorithms that aim to quantify tissue fluorescence from the camera image.

3.3.2 Lens and filter selection

An essential aspect of the image system design is to select lights and filters that enable accurate detection of the fluorescent signal from the background of unwanted (diffuse and glossy) reflectance. The fluorescence signal intensity is more than three orders of magnitude lower than the signal level from typical reflectance in the oral cavity. The selection of the filters and lights is a significant factor in this design.

In order to separate the fluorescence and reflectance signals, the image system includes dichroic filters that limit the wavelengths (a) entering the scene from the blue LEDs, and (b) entering the camera from the scene (see Fig. 5). Specifically, the blue LEDs, centered at 385 nm, emit spectral energy at wavelengths up to 440 nm that is 2.5 orders of magnitude lower than the peak energy at 385 nm. Even this light level, when scattered by the tissue in the oral cavity, will reduce sensitivity to fluorescence. We placed a dichroic filter (Hoya Y44) in front of the blue LEDs to further reduce the intensity of illuminant energy above 475 nm.

Fig. 5. Spectral characterization of the OralEye image system. a) The blue LED spectral energy (plotted on a logarithmic axis) peaks at 385 nm but drops by only 3 orders of magnitude at 450 nm. b) The LED emission band is further narrowed by a shortpass filter that blocks wavelengths > 425 nm. c) A filter in front of the sensor further limits light below 425 nm and above 700 nm. This filter is nearly transparent to wavelengths between 450 and 700 nm. d) Effective sensor spectral responsivity of the three color channels.

The blue LED sources also emit light in the NIR. We placed a second filter in front of the lens to block wavelengths below 475 nm and above 700 nm from reaching the imaging sensor. Hence, under the blue LED illumination conditions, the sensor responds only to irradiance in the range from 475 to 700 nm.
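
The combined effect of the LED filter and the lens filter reduces to wavelength-by-wavelength products. The numpy sketch below uses idealized stand-ins for the measured curves in Fig. 5 (a Gaussian LED, binary filter passbands, flat quantum efficiency) to show how the effective illuminant and effective sensor responsivity are computed.

import numpy as np

wave = np.arange(350, 805, 5)   # wavelength samples (nm)

# Idealized stand-ins for the measured curves in Fig. 5.
led = np.exp(-0.5 * ((wave - 385) / 12) ** 2) + 1e-3  # LED with weak leakage
shortpass = (wave <= 425).astype(float)               # filter over the LEDs
lens_filter = ((wave >= 475) & (wave <= 700)).astype(float)  # filter at lens
qe = np.full(wave.size, 0.5)                          # flat QE, illustrative

effective_illuminant = led * shortpass   # light that reaches the tissue
effective_qe = qe * lens_filter          # sensor response after the filter

# With these passbands the sensor never sees reflected 385 nm light, while
# fluorescence emitted between 475 and 700 nm is transmitted.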

3.3.3 Sensor

The geometric and electrical sensor parameters, including pixel size, resolution (number of pixels), and noise properties are listed in Table 1. We implemented a sensor simulation using ISETCam [43].

Table 1. Sensor geometric and electronic properties.

The sensor pixel response can be expressed as:

$$p = g \int q_e(\lambda) L(\lambda) d\lambda + \widetilde{N}.$$
where $p$ is the pixel response ($e^-$), $L$ is the irradiance ($q/s/m^2/nm$), $q_e$ is the sensor spectral quantum efficiency ($e^-/q$), $\widetilde{N}$ is noise ($e^-$), and $g$ is a scale factor that combines pixel area and exposure time. The values of $g$ and $\widetilde{N}$ are calculated from the irradiance level (quantal noise) and the sensor electrical and geometric properties in the ISETCam simulation. ($q$ is quanta, $e^-$ is electrons, $s$ is seconds.)
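
A minimal numpy sketch of the pixel response equation above, modeling the quantal noise as Poisson-distributed and adding Gaussian read noise; the gain and noise parameters are illustrative choices, not the Table 1 values.

import numpy as np

rng = np.random.default_rng(0)

def pixel_response(irradiance, qe, d_lambda=5.0, g=1.0, read_noise_sd=2.0):
    """Pixel response p = g * integral(q_e * L d_lambda) + noise (electrons).

    irradiance    : photons/s/m^2/nm at each wavelength sample
    qe            : quantum efficiency (electrons/photon) at each sample
    g             : scale factor combining pixel area and exposure time
    read_noise_sd : electrical read-noise standard deviation (electrons)
    """
    mean_electrons = g * np.sum(qe * irradiance) * d_lambda
    shot = rng.poisson(mean_electrons)     # quantal (photon) noise
    read = rng.normal(0.0, read_noise_sd)  # electrical noise
    return shot + read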

The ISETCam software has been validated in a number of independent experiments [44–46]. For the OralEye device, we validated the sensor model by capturing images of a calibrated color target that included painted surfaces that emit fluorescence when illuminated with the short-wavelength “blue” light [47]. The sensor delivers the raw (linear) RGB data in the Bayer mosaic, which is proportional to the number of electrons. We used bilinear interpolation to demosaic the real and simulated sensor images shown in this paper.

3.4 Subjects

Ten participants (seven males, three females; median age 24 years, range 20–67 years) took part in the study, which was approved by the Institutional Review Board at Stanford University. All subjects gave informed consent. The OralEye measurements for each subject took less than one minute, and the exposure to the blue LED was approximately 30 ms. Spectrophotometric measurements of tongue radiance were obtained from two of the participants. These measurements, including setup time, took approximately ten minutes, and the exposure to the blue LED was approximately 10 s. All of the exposures were well within the safety limits for exposure to short-wavelength light.

4. Results

Renderings from the end-to-end simulation of the oral cavity through the OralEye camera are compared with real OralEye images in Fig. 6(a-f). The images compare the measured and expected results when using the blue LED. The simulation models fluorescence of both the oral cavity and teeth [48–51].

Fig. 6. Comparison between end-to-end simulations and OralEye images of the oral cavity in three participants. The images reflect the typical variation in the porphyrins concentration among healthy participants. Panels (a, c, e) show OralEye images and panels (b, d, f) show the corresponding simulations. (g) Comparison between measured (dotted) and simulated (solid) radiance of two points on the dorsal tongue from high and low porphyrins regions in two participants.

Tongue fluorescence was modeled by a weighted combination of NADH, FAD, collagen, elastin and porphyrins emissions. Figure 6(g) compares the spectral radiance of simulated and measured fluorescence from two points on the dorsal surface of the tongues in two participants. The measurements were obtained using a PR670 spectroradiometer equipped with a longpass filter (Hoya Y44) that blocked light below 425 nm. Without this filter, the fluorescence would have been too weak to detect given the dynamic range of the spectroradiometer. The simulated radiance in Fig. 6(g) was also filtered by the same longpass filter.

We refer to the combined signal from NADH, FAD, collagen, and elastin as the “bulk fluorescence”. There are many possible combinations of these four fluorophores that predict the measured radiance. For this reason, using the 385 nm LED alone does not enable separating (spectrally unmixing) the fluorophores in the bulk fluorescence signal. As a summary of the multiple solutions, we can say that the combinations that predicted the bulk fluorescence measurements in two participants (see Fig. 6(g)) had zero concentration of NADH, low levels of FAD and collagen, and high levels of elastin.

The porphyrins concentrations were chosen to approximate the measured spectral radiance, which differed for the two participants. The region measured for the participant in panel (a-b) was predicted mainly from the bulk fluorescence with a small amount of porphyrins. The region measured for the participant in panel (c-d) was predicted from the bulk fluorescence with a high concentration of porphyrins.

To generate realistic spatial distributions of porphyrins, we utilized the additivity property of sensor responses from different fluorophores. We simulated two sensor images: one simulates the bulk fluorescence with no porphyrins, the second simulates porphyrins emissions with no bulk fluorescence. For each participant we created a spatial mask that indicates the locations with a significant porphyrins concentration. The final rendered image is the sum of the porphyrins sensor image multiplied by the mask and the bulk fluorescence image.
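
Because the raw sensor values are linear in the scene radiance, the sensor images from separate fluorophore groups add. A minimal sketch of this compositing step, assuming the two simulated sensor images and the spatial mask are numpy arrays (the pixel values are toy numbers):

import numpy as np

def composite(bulk_img, porphyrin_img, mask):
    """Sum the bulk-fluorescence sensor image with the masked porphyrins
    sensor image; additivity holds because the raw sensor values are
    linear in the scene radiance."""
    return bulk_img + mask * porphyrin_img

# Toy usage: 2x2 RGB images, with porphyrins present in one region only.
bulk = np.full((2, 2, 3), 0.2)                          # bulk fluorescence
porph = np.ones((2, 2, 3)) * np.array([0.6, 0.1, 0.0])  # reddish emission
mask = np.array([[1.0, 0.0], [0.0, 0.0]])[..., None]    # broadcast over RGB
final = composite(bulk, porph, mask)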

The specific geometry (overall size and shape, distinctiveness of the teeth, tongue position) differs between the real and simulated images, but the general color in the images and the properties of the nonuniform illumination are similar. A property shared by the simulations and the real images is that the absolute level of the digital values depends significantly on spatial variations in the illumination. This suggests that fluorophore estimates should be based on relative, not absolute, RGB values.

4.1 Sensor chromaticity

The significant illumination variation in the scene contributes to the varying fluorescent emissions at different locations, eliminating the opportunity to use absolute RGB levels to measure fluorescence. Because we rely on the ratio of the RGB values, the chromatic information about the fluorophores is two-dimensional. Furthermore, the literature informs us that there are a large number of different fluorophores that can appear in the mucosal tissue in different combinations (Fig. 3). The simulations show that different combinations of tissue fluorophores may produce the same spectral radiance and, consequently, the same sensor responses.

For this system, there is one possible source of meaningful chromatic information. The porphyrins EEM differs substantially from the other principal fluorophores. Consequently, a strong porphyrins signal has an impact on the RGB values that can be distinguished from the other fluorophores.

We illustrate the expected impact of porphyrins, as measured in sensor chromaticity space, in Fig. 7. Figure 7(a) plots the simulated R and G values for two different groups of fluorophores. The green points represent a noisy signal from a bulk mixture of the FAD, collagen and elastin. The red points represent the signal expected from porphyrins, again assuming a particular concentration and illumination level. In both cases, the measurements might fall anywhere along the two dashed arrows depending on the fluorophore concentration and local illumination level. In this simulation we plot the data as if there is no diffuse or glossy light reflected from the tissue, though in reality both of the lines would start from a small non-zero position in the graph in the presence of the weak blue light reflected from the tissue. The contribution of the reflected light is relatively small because the filters remove most of the diffusely reflected light which is below 425 nm. In the complete simulation and when comparing with the measurements, we account for the light reflected from the tongue.

Fig. 7. Illustration of fluorescence sensor responses and sensor chromaticities. a) R and G sensor response representation. The points plotted in green and red are expected R,G signals, including noise, from the bulk fluorophore and porphyrins respectively. The signal expected from the mixture of these fluorophores is the vector sum of these two points, plotted in blue. For different relative amounts of the two fluorophores, the mixture will fall within the grey-shaded parallelogram. b) Sensor chromaticity representation. The sensor responses are normalized across three channels to eliminate the impact of non-uniform illumination and fluorophore concentration. The sensor chromaticities of the combined fluorophores will fall along the line connecting the chromaticities of the bulk fluorophore and porphyrins. The position along the line will depend on the relative strength of the two signals. c) Sensor chromaticities of the individual fluorophores. Each fluorophore is represented at a different location on the sensor chromaticity graph. For the excitation light of 385 nm and the OralEye spectral sensitivity, the porphyrins are widely separated from the cluster of the other four fluorophores.

Depending on the fluorophore concentrations and illumination level, the combined signal might fall anywhere in the gray-shaded region. For example, when the fluorophores are present at the concentrations indicated by the red and green line endpoints, the combined signal will be located at the blue point, the vector sum of the two signals. The position within the gray-shaded region provides information about the relative amounts of the bulk and porphyrins fluorophores.

Figure 7(b) represents the same information but plotted with respect to sensor chromaticity values (r,g). The sensor chromaticities are the R (G) values divided by the sum of the R, G and B values:

$$r = R/(R+G+B), g = G/(R+G+B)$$

From the formula, we can see that all RGB values that fall along a line $\alpha (R,G,B)$ share the same sensor chromaticity. The reason for representing the data with respect to sensor chromaticity is that the value is invariant with respect to the absolute fluorophore concentration and absolute illuminant intensity, two factors that we cannot control. Mixtures of two signals with chromaticities $(r_1, g_1)$ and $(r_2, g_2)$ will fall along a line between the two chromaticities (dashed line, Fig. 7(b)). The position on the line will depend on the relative intensity of the two emissions.
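
A short numpy sketch of the chromaticity formula illustrates both properties: scaling an RGB vector leaves its chromaticity unchanged, and mixtures of two signals have chromaticities on the segment between the endpoint chromaticities. The RGB values here are hypothetical.

import numpy as np

def chromaticity(rgb):
    """(r, g) = (R, G) / (R + G + B), as defined above."""
    rgb = np.asarray(rgb, dtype=float)
    return rgb[..., :2] / rgb.sum(axis=-1, keepdims=True)

bulk = np.array([40.0, 120.0, 30.0])   # hypothetical bulk-fluorescence RGB
porph = np.array([90.0, 20.0, 10.0])   # hypothetical porphyrins RGB

# Scaling (illumination level, concentration) leaves chromaticity unchanged.
assert np.allclose(chromaticity(bulk), chromaticity(3.7 * bulk))

# Mixture chromaticities fall on the segment between the two endpoints.
for alpha in np.linspace(0.0, 1.0, 5):
    print(alpha, chromaticity(alpha * porph + (1.0 - alpha) * bulk))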

The expected positions in sensor chromaticity space of five different oral cavity fluorophores are shown in Fig. 7(c). For this camera the sensor chromaticities of most of the fluorophores fall in a small region of the sensor chromaticity plane. The proximity of these values, coupled with the metamerism described earlier, makes it difficult to discriminate the relative contributions of these fluorophores. The porphyrins contribution, however, is relatively distant and has the possibility of drawing the total signal away from the cluster. This is a feature of the simulation that we can confirm with respect to the measured images.

4.2 Validation with empirical measurements

Figure 8(a-c) shows the OralEye images captured from the dorsal tongues of the same three healthy individuals shown in Fig. 6. These images were captured in a dark room using only the blue LED illuminant. Because of the filters, the captured light is almost entirely fluorescence; the reflected light is mainly confined to short wavelengths below the acceptance region of the camera. The teeth are very fluorescent and emit light over a wide range of wavelengths.

Fig. 8. Sensor chromaticity analysis of OralEye images of three participants. The sensor chromaticity values of pixels in the regions of interest denoted by the white rectangles are plotted as black points in the bottom row. The data fall along a line extending from the position of elastin to the position of the porphyrins. The length of the line and its endpoints differ between the subjects. The sensor chromaticity data in the middle column extend further to the left than the data in the left and right columns.

We analyzed the sensor chromaticities in the OralEye images and compared them with the values expected from the simulation. Specifically, we selected a large region within the dorsal tongue (white box) and plot the sensor chromaticity coordinates for all the pixels in this region (Fig. 8(d-f)). The sensor chromaticity values are shown along with the simulated values for the different fluorophores (Fig. 7(c)). The sensor chromaticity data align well with the expected chromaticity values, falling along a line that extends from the central position of the bulk fluorophores (NADH, FAD, collagen and elastin) in the direction of the porphyrins fluorophore. The fact that the sensor chromaticity values follow a similar pattern for all participants confirms (a) the accuracy of the simulations, and (b) the expectation that the primary difference we observe is explained by the porphyrins concentration.
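
One way to summarize such data numerically (our illustration; the paper reports the comparison graphically) is to project each pixel's chromaticity onto the line connecting the bulk and porphyrins chromaticities; the projection coordinate then serves as a relative porphyrins index. The endpoint chromaticities below are hypothetical.

import numpy as np

def porphyrin_index(chroma, bulk_c, porph_c):
    """Project (r, g) points onto the bulk-to-porphyrins line; 0 indicates
    pure bulk fluorescence, 1 pure porphyrins (values clipped to [0, 1])."""
    d = porph_c - bulk_c
    t = (chroma - bulk_c) @ d / (d @ d)
    return np.clip(t, 0.0, 1.0)

# Hypothetical endpoint chromaticities and a few pixel chromaticities.
bulk_c = np.array([0.21, 0.63])
porph_c = np.array([0.75, 0.17])
pixels = np.array([[0.25, 0.60], [0.48, 0.40], [0.70, 0.21]])
print(porphyrin_index(pixels, bulk_c, porph_c))  # increasing porphyrins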

Porphyrins fluorescence has been measured on the dorsal side of healthy tongues illuminated with short-wavelength light [23]. We observed no porphyrins emissions from the sides of the tongue, the ventral surface, or the upper and lower palates. Differences in the amount of porphyrins signal on the dorsal tongue of different participants may be due to diet and the time of day. For example, the OralEye camera image shown in Fig. 8(b) was captured from the tongue of an individual who had recently eaten lunch. The OralEye image in Fig. 8(c) was captured from the tongue of a vegetarian just before lunch. In a separate measurement, we observed porphyrins fluorescence on the tongue of this same individual just after drinking mango juice.

We collected data from ten subjects, and the general agreement in the sensor chromaticity properties from this admittedly small number of participants is encouraging. It suggests that we might be able to define a narrow, quantitative expectation for the chromaticity range in the healthy dorsal tongue. On the other hand, the data also suggest that with this camera design, it will be impossible to estimate the relative proportions of elastin, FAD, collagen, and NADH fluorescence from measurements of the bulk fluorescence.

5. Discussion

For the last twenty years, digital camera design has been driven by consumer photography applications. Hardware and software components have been optimized to capture radiance signals that humans can perceive, and the camera image processing pipeline is designed to produce images that consumers find pleasing [52–54].

The first implementation of the OralEye image system uses hardware components that were developed for consumer photography; but the system has a different purpose. The system is intended to quantify the amount and type of tissue fluorescence in a large field of view within the oral cavity that is invisible to humans under normal viewing conditions. Consequently, the system design integrates special purpose illuminants, filters, and sensors that are outside of the usual scope of consumer photography. The images the system produces are not intended for consumers or clinicians to view, but rather for clinical laboratory tests that quantify the fluorophore concentrations in the oral cavity.

The simulations show that the current system design can estimate the combined emissions from relatively high concentrations of certain fluorophores (collagen, elastin and FAD), which we refer to as “bulk fluorescence”. Higher concentrations of these tissue fluorophores produce higher G sensor values. The system has only one excitation light, however, and the simulations reveal that several different combinations of concentrations of these fluorophores produce the same bulk fluorescence. Hence, it is not possible to determine the relative concentrations of these tissue fluorophores by analyzing the sensor data from the current system. The porphyrins, in contrast, stand out because their fluorescence signal dominates the R sensor values. Hence, the system can measure the relative balance between the bulk fluorescence and the porphyrins. We confirm this ability using both simulations and experimental measurements.

The image systems simulations have helped us both understand and quantify the interaction between fluorophores, illuminant spectra and measured fluorescence. The validation of the simulations encourages us to use the software to explore new image system designs.

5.1 Design considerations

The first challenge we confronted in this design was to eliminate the impact of the reflected light, which was as much as four orders of magnitude more intense than the fluorescence emitted by oral mucosal tissue. As the measurements show, the system cannot measure the reflected and fluorescent components simultaneously. We excluded the reflected light by placing a shortpass filter in front of the 385 nm LEDs, blocking longer-wavelength light energy from reaching the oral cavity. Selecting the light and filter was an essential part of designing the system.

A second challenge arises from the inability to illuminate the oral cavity uniformly. The complexity of the illuminant shading is due in part to the geometry of the lights, but it is also due to the fact that the oral cavity is a three-dimensional structure with surfaces at different depths that can occlude and cast shadows on other surfaces. The sensor RGB values from nearby regions on the same surface may differ because of the nonuniform distribution of light over the surface, the orientation of the surface, shadows cast by the teeth, or the amount of indirect lighting. We suspect that this issue will persist through all system designs, and for this reason an approach based on sensor chromaticity may continue to prove helpful.

A third challenge we will confront is how to separate the signals within the bulk fluorescence. Through the validated simulation methods, we are exploring designs that include multiple excitation wavelengths and commercial multispectral sensors.

5.2 Applications

The ability to quantify the relative amounts of porphyrins and bulk fluorescence may benefit several applications in dentistry. For example, porphyrins fluorescence is generated by bacteria that accumulate on teeth and dentures [30], in crevices [31], and along the gum lines [55,56]. This observation led to the development of adjunct dental devices that use fluorescence imaging to help dentists visualize the location of bacteria associated with caries and gingivitis [57,58]. For these applications, there may be value in using an imaging system that can document the location and quantify the relative concentrations of porphyrins in different parts of the oral cavity.

The porphyrins fluorescence from the dorsal surface of tongues in healthy individuals [23] is attributed to a complex community of bacteria referred to as the oral microbiome [59]. The tongue dorsum microbiome is understudied, particularly when compared to the amount of research devoted to the gut microbiome [60]. Monitoring and manipulating the oral microbiome will lead to a better understanding of the functional role that oral bacteria play on the dorsal surface of the tongue.

Dentists also use adjunct devices to visualize bulk fluorescence in oral mucosal tissue. Clinicians are trained to look for dark areas where bulk fluorescence is not visible as an indicator of the degradation of the structural integrity of tissue or changes in tissue metabolism. The assessment is visual and subjective, and thus the efficacy of these devices depends on clinician experience. These devices may help clinicians find areas they might otherwise overlook, but they do not help them differentiate between dysplasia and benign inflammatory conditions [18]. Consequently, the sensitivity of these devices is high, but the specificity is low [61,62].

A main goal of our work is to design an image system that augments the subjective judgments of a clinician with a lab test that meaningfully assesses the health status of the oral cavity. We have shown that the OralEye camera can quantify the relative amounts of porphyrins and bulk fluorescence in healthy subjects. Assuming that NADH emissions are negligible, decreases in bulk fluorescence may indicate degradation of the structural integrity of tissue (associated with decreases in elastin and collagen) or changes in tissue metabolism (associated with a decrease in FAD). To evaluate the sensitivity and specificity of these measurements, it will be necessary to collect additional data from patients who have dysplasia and cancerous lesions.

5.3 Future work

We are extending the work we describe in this paper in three ways. First, we are using the current OralEye camera to collect additional data in both healthy individuals and patient populations. To pursue these measurements, we automated the data storage and analysis using a cloud-based data management system (Flywheel.io). This system anonymizes the data while storing important demographic information. The normative data that we are collecting in healthy individuals will define a distribution against which we can compare the data captured from the patient population. Aggregating these data and monitoring patient outcomes should enable us to improve oral health predictions. Ultimately, through the acquisition of quantitative data about the fluorescent signals from clinical cases, we may be able to implement meaningful diagnostic tools.

Second, we are using image systems simulation software to create soft-prototypes of multispectral imaging systems that combine multiple illuminants with multiple imaging sensors. Our simulations show that it will be necessary to use more than one excitation wavelength to quantify the relative concentrations of NADH and FAD. By calculating sensor chromaticity values, we will be able to quantify the relative concentrations of different tissue fluorophores. The simulations will enable us to determine whether it is possible to design multispectral imaging systems that can provide information about the relative concentrations of NADH, FAD, collagen and elastin and to predict the efficacy of the soft-prototypes before building a real physical device.

Third, we plan to implement a more complex tissue model. The light from the illuminant penetrates deeper into the tissue at longer wavelengths. We are planning to combine a penetration model with a model of fluorophore tissue depths [63]. This extension may be important to simulate imaging systems that use illuminants with longer wavelengths.

6. Conclusion

Image systems simulations enable us to create software prototypes of digital cameras and to predict the data we would capture for different combinations of tissue fluorophores, illumination and imaging sensors. We describe and provide open-source, freely available software prototyping tools that can be used to design and evaluate new imaging systems based on multiple lights and novel imaging sensors [43].

We used image systems simulations to design an imaging system capable of exciting and measuring fluorescence in the oral cavity. We created a hardware prototype of the imaging system (OralEye) and compared the data we collected from the real device with the data we predicted from the software prototype. The simulations and data suggest that sensor chromaticity values, derived from real and simulated OralEye RGB camera data, are useful for estimating quantities that are invariant to changes in the spatial distribution of lighting. Specifically, the sensor chromaticity values can quantify the fluorescence due to porphyrins relative to the combined emissions from other fluorophores in the oral cavity, referred to as the bulk fluorescence. Additional data from patient populations and from different regions of the oral cavity should prove informative as to the diagnostic value of the porphyrins and bulk fluorescence estimates.

Acknowledgments

We thank Rangtao Huang, Tanglong Wang, and Xixi Li at FengYun Vision Technologies for software and hardware support of the experimental camera (OralEye). We thank Henryk Blasinski, Zhenyi Liu, Kaijun Feng and Krithin Kripakaren for their contributions to the simulation software and camera assembly. We thank Adam Wandell, Chris Holsinger, Tulio Valdez and Thomas Goossens for many helpful discussions and feedback about this project. The software tools described in this paper are open source and freely available at GitHub (see https://github.com/ISET/isetcam and https://github.com/ISET/iset3d) under the terms of the MIT license.

Disclosures

Zheng Lyu, Brian Wandell and Joyce Farrell declare no conflicts of interest. Haomiao Jiang has F, I and E commercial relationships with Facebook. Feng Xiao, Jian Rong and Tingcheng Zhang have F, I and E commercial relationships with FengYun Vision Technologies.

Data availability

The image data described in this work is freely available through the Stanford Data Repository [64].

References

1. J. E. Farrell, F. Xiao, P. B. Catrysse, and B. A. Wandell, “A simulation tool for evaluating digital camera image quality,” in Image Quality and System Performance, vol. 5294 (International Society for Optics and Photonics, 2003), pp. 124–131.

2. J. E. Farrell, P. B. Catrysse, and B. A. Wandell, “Digital camera simulation,” Appl. Opt. 51(4), A80–90 (2012). [CrossRef]  

3. J. E. Farrell and B. A. Wandell, Image Systems Simulation (John Wiley & Sons, Ltd, Chichester, UK, 2015), pp. 1–28.

4. H. Blasinski, T. Lian, and J. Farrell, “Underwater image systems simulation,” in Imaging and Applied Optics 2017 (3D, AIO, COSI, IS, MATH, pcAOP) (Optical Society of America, 2017), p. ITh3E.3.

5. T. Lian, J. Farrell, and B. Wandell, “Image systems simulation for 360° camera rigs,” IS&T Int. Symp. Electron. Imaging, pp. 353-1–353-5 (2018).

6. H. Blasinski, J. Farrell, T. Lian, Z. Liu, and B. Wandell, “Optimizing image acquisition systems for autonomous driving,” Electronic Imaging pp. 161-1–161-7 (2018).

7. Z. Liu, M. Shen, J. Zhang, S. Liu, H. Blasinski, T. Lian, and B. Wandell, “A system for generating complex physically accurate sensor images for automotive applications,” Electronic Imaging pp. 53-1–53-6 (2019).

8. Z. Liu, T. Lian, J. Farrell, and B. Wandell, “Soft prototyping camera designs for car detection based on a convolutional neural network,” in Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops, (2019), pp. 0–0.

9. Z. Liu, T. Lian, J. Farrell, and B. A. Wandell, “Neural network generalization: the impact of camera parameters,” IEEE Access 8, 10443–10454 (2020). [CrossRef]  

10. Z. Liu, J. Farrell, and B. Wandell, “Isetauto: detecting vehicles with depth and radiance information,” IEEE Access 9, 41799–41808 (2021). [CrossRef]  

11. R. Richards-Kortum and E. Sevick-Muraca, “Quantitative optical spectroscopy for tissue diagnosis,” Annu. Rev. Phys. Chem. 47, 555 (1996). [CrossRef]  

12. A. Gillenwater, R. Jacob, R. Ganeshappa, B. Kemp, A. K. El-Naggar, J. L. Palmer, G. Clayman, M. F. Mitchell, and R. Richards-Kortum, “Noninvasive diagnosis of oral neoplasia based on fluorescence spectroscopy and native tissue autofluorescence,” Arch. Otolaryngol., Head Neck Surg. 124(11), 1251–1258 (1998). [CrossRef]  

13. M. Monici, “Cell and tissue autofluorescence research and diagnostic applications,” Biotechnol. Annu. Rev. 11, 227–256 (2005). [CrossRef]  

14. N. Ramanujam, “Fluorescence spectroscopy of neoplastic and non-neoplastic tissues,” Neoplasia 2(1-2), 89–117 (2000). [CrossRef]  

15. C. Llewellyn, N. Johnson, and K. Warnakulasuriya, “Risk factors for squamous cell carcinoma of the oral cavity in young people – a comprehensive literature review,” Oral Oncol. 37(5), 401–418 (2001). [CrossRef]  

16. T. van der Ploeg, F. Datema, R. Baatenburg de Jong, and E. W. Steyerberg, “Prediction of survival with alternative modeling techniques using pseudo values,” PLoS One 9(6), e100234 (2014). [CrossRef]  

17. S. Warnakulasuriya, “Global epidemiology of oral and oropharyngeal cancer,” Oral Oncol. 45(4-5), 309–316 (2009). [CrossRef]  

18. D. C. G. de Veld, M. Skurichina, M. J. H. Witjes, R. P. W. Duin, H. J. C. M. Sterenborg, and J. L. N. Roodenburg, “Clinical study for classification of benign, dysplastic, and malignant oral lesions using autofluorescence spectroscopy,” J. Biomed. Opt. 9(5), 940 (2004). [CrossRef]  

19. D. Roblyer, R. Richards-Kortum, K. Sokolov, A. K. El-Naggar, M. D. Williams, C. Kurachi, and A. M. Gillenwater, “Multispectral optical imaging device for in vivo detection of oral neoplasia,” J. Biomed. Opt. 13(2), 024019 (2008). [CrossRef]  

20. R. Alfano, D. Tata, J. Cordero, P. Tomashefsky, F. Longo, and M. Alfano, “Laser induced fluorescence spectroscopy from native cancerous and normal tissue,” IEEE J. Quantum Electron. 20(12), 1507–1511 (1984). [CrossRef]  

21. R. Drezek, C. Brookner, I. Pavlova, I. Boiko, A. Malpica, R. Lotan, M. Follen, and R. Richards-Kortum, “Autofluorescence microscopy of fresh cervical-tissue sections reveals alterations in tissue biochemistry with dysplasia,” (2001).

22. M. G. Müller, T. A. Valdez, I. Georgakoudi, V. Backman, C. Fuentes, S. Kabani, N. Laver, Z. Wang, C. W. Boone, R. R. Dasari, S. M. Shapshay, and M. S. Feld, “Spectroscopic detection and evaluation of morphologic and biochemical changes in early human oral carcinoma,” Cancer 97(7), 1681–1692 (2003). [CrossRef]  

23. D. C. G. De Veld, M. J. H. Witjes, H. J. C. M. Sterenborg, and J. L. N. Roodenburg, “The status of in vivo autofluorescence spectroscopy and imaging for oral oncology,” Oral Oncol. 41(2), 117–131 (2005). [CrossRef]  

24. I. Pavlova, M. Williams, A. El-Naggar, R. Richards-Kortum, and A. Gillenwater, “Understanding the biological basis of autofluorescence imaging for oral cancer detection: high-resolution fluorescence microscopy in viable tissue,” Clin. Cancer Res. 14(8), 2396–2404 (2008). [CrossRef]  

25. I. Georgakoudi, B. C. Jacobson, J. Van Dam, V. Backman, M. B. Wallace, M. G. Müller, Q. Zhang, K. Badizadegan, D. Sun, G. A. Thomas, L. T. Perelman, and M. S. Feld, “Fluorescence, reflectance, and light-scattering spectroscopy for evaluating dysplasia in patients with barrett’s esophagus,” Gastroenterology 120(7), 1620–1629 (2001). [CrossRef]  

26. D. M. Harris and J. Werkhaven, “Endogenous porphyrin fluorescence in tumors,” Lasers Surg. Med. 7(6), 467–472 (1987). [CrossRef]  

27. F. H. J. Figge, G. S. Weiland, and L. O. J. Manganiello, “Cancer detection and therapy; affinity of neoplastic, embryonic, and traumatized tissues for porphyrins and metalloporphyrins,” Proc. Soc. Exp. Biol. Med. 68(3), 640–641 (1948). [CrossRef]  

28. A. M. d. C. Batlle, “Porphyrins, porphyrias, cancer and photodynamic therapy—a model for carcinogenesis,” J. Photochem. Photobiol., B 20(1), 5–22 (1993). [CrossRef]  

29. Y. Yuanlong, Y. Yanming, L. Fuming, L. Yufen, and M. Paozhong, “Characteristic autofluorescence for cancer diagnosis and its origin,” Lasers Surg. Med. 7(6), 528–532 (1987). [CrossRef]  

30. L. Coulthwaite, I. A. Pretty, P. W. Smith, S. M. Higham, and J. Verran, “The microbiological origin of fluorescence observed in plaque on dentures during QLF analysis,” Caries Res. 40(2), 112–116 (2006). [CrossRef]  

31. K. König, G. Flemming, and R. Hibst, “Laser-induced autofluorescence spectroscopy of dental caries,” Cell. Mol. Biol. 44, 1293–1300 (1998).

32. P. M. Lane, T. Gilhuly, P. Whitehead, H. Zeng, C. F. Poh, S. Ng, P. Michele Williams, L. Zhang, M. P. Rosin, and C. E. MacAulay, “Simple device for the direct visualization of oral-cavity tissue fluorescence,” (2006).

33. M. W. Lingen, J. R. Kalmar, T. Karrison, and P. M. Speight, “Critical evaluation of diagnostic aids for the detection of oral cancer,” Oral Oncol. 44(1), 10–22 (2008). [CrossRef]  

34. J. Vonk, J. G. de Wit, F. J. Voskuil, and M. J. H. Witjes, “Improving oral cavity cancer diagnosis and treatment with fluorescence molecular imaging,” Oral Dis. 27(1), 21–26 (2021). [CrossRef]  

35. M. Pharr and G. Humphreys, Physically Based Rendering, Second Edition: From Theory To Implementation, 2nd ed (Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2010).

36. C. M. Goral, K. E. Torrance, D. P. Greenberg, and B. Battaile, “Modeling the interaction of light between diffuse surfaces,” SIGGRAPH Comput. Graph. 18(3), 213–222 (1984). [CrossRef]  

37. G. W. Meyer, H. E. Rushmeier, M. F. Cohen, D. P. Greenberg, and K. E. Torrance, “An experimental evaluation of computer graphics imagery,” ACM Trans. Graph. 5(1), 30–50 (1986). [CrossRef]  

38. Z. Lyu, K. Kripakaran, M. Furth, E. Tang, B. Wandell, and J. Farrell, “Validation of image systems simulation technology using a Cornell box,” (2021).

39. ISET3d: https://github.com/ISET/iset3d/wiki.

40. G. G. Stokes, “XXX. on the change of refrangibility of light,” Philos. Trans. R. Soc. London 142, 463–562 (1852). [CrossRef]  

41. H. Blasinski, J. Farrell, and B. Wandell, “Simultaneous surface reflectance and fluorescence spectra estimation,” IEEE Trans. Image Process. 29, 8791–8804 (2020). [CrossRef]  

42. J. Lakowicz, Principles of Fluorescence Spectroscopy (Springer US, 2007).

43. ISETcam: https://github.com/ISET/isetcam/wiki.

44. J. Farrell, M. Okincha, and M. Parmar, “Sensor calibration and simulation,” in Digital Photography IV, vol. 6817 (International Society for Optics and Photonics, 2008), p. 68170R.

45. J. Chen, K. Venkataraman, D. Bakin, B. Rodricks, R. Gravelle, P. Rao, and Y. Ni, “Digital camera imaging system simulation,” IEEE Trans. Electron Devices 56(11), 2496–2505 (2009). [CrossRef]  

46. Z. Lyu, K. Kripakaran, M. Furth, E. Tang, B. Wandell, and J. Farrell, “Validation of image systems simulation technology using a Cornell box,” Electronic Imaging (in preparation).

47. J. Farrell, Z. Lyu, Z. Liu, H. Blasinski, Z. Xu, J. Rong, F. Xiao, and B. Wandell, “Soft-prototyping imaging systems for oral cancer screening,” Electron. Imaging 2020(7), 212-1–212-7 (2020). [CrossRef]  

48. W. T. Wozniak and B. K. Moore, “Luminescence spectra of dental porcelains,” J. Dent. Res. 57(11-12), 971–974 (1978). [CrossRef]  

49. P. C. Foreman, “The excitation and emission spectra of fluorescent components of human dentine,” Arch. Oral Biol. 25(10), 641–647 (1980). [CrossRef]  

50. J. J. ten Bosch and J. C. Coops, “Tooth color and reflectance as related to light scattering and enamel hardness,” J. Dent. Res. 74(1), 374–380 (1995). [CrossRef]  

51. W. Luo, S. Westland, R. Ellwood, and I. Pretty, “Assessing gloss of tooth using digital imaging,” Conference on Colour in Graphics, Imaging, and Vision 2008, 307–311 (2008).

52. J. Adams, K. Parulski, and K. Spaulding, “Color processing in digital cameras,” IEEE micro 18(6), 20–30 (1998). [CrossRef]  

53. R. Ramanath, W. E. Snyder, Y. Yoo, and M. S. Drew, “Color image processing pipeline,” IEEE Signal Process. Mag. 22(1), 34–43 (2005). [CrossRef]  

54. R. Lukac and K. N. Plataniotis, Digital Camera Image Processing (Springer US, Boston, MA, 2006), pp. 171–179.

55. M. H. van der Veen, C. M. C. Volgenant, B. Keijser, J. M. ten Cate, and W. Crielaard, “Dynamics of red fluorescent dental plaque during experimental gingivitis—a cohort study,” J. Dent. 48, 71–76 (2016). [CrossRef]  

56. S.-Y. Han, B.-R. Kim, H.-Y. Ko, H.-K. Kwon, and B.-I. Kim, “Assessing the use of quantitative light-induced Fluorescence-Digital as a clinical plaque assessment,” Photodiagnosis Photodyn. Ther. 13, 34–39 (2016). [CrossRef]  

57. T. Gimenez, M. M. Braga, D. P. Raggio, C. Deery, D. N. Ricketts, and F. M. Mendes, “Fluorescence-based methods for detecting caries lesions: systematic review, meta-analysis and sources of heterogeneity,” PLoS One 8(4), e60421 (2013). [CrossRef]  

58. J. A. Rodrigues, K. W. Neuhaus, I. Hug, H. Stich, R. Seemann, and A. Lussi, “In vitro detection of secondary caries associated with composite restorations on approximal surfaces using laser fluorescence,” Oper. Dent. 35(5), 564–571 (2010). [CrossRef]  

59. S. A. Wilbert, J. L. Mark Welch, and G. G. Borisy, “Spatial ecology of the human tongue dorsum microbiome,” Cell Rep. 30(12), 4003–4015.e3 (2020). [CrossRef]  

60. J. R. Willis and T. Gabaldón, “The human oral microbiome in health and disease: From sequences to ecosystems,” Microorganisms 8(2), 308 (2020). [CrossRef]  

61. M. Mascitti, G. Orsini, V. Tosco, R. Monterubbianesi, A. Balercia, A. Putignano, M. Procaccini, and A. Santarelli, “An overview on current non-invasive diagnostic devices in oral oncology,” Front. Physiol. 9, 1510 (2018). [CrossRef]  

62. R. Nagi, Y.-B. Reddy-Kantharaj, N. Rakesh, S. Janardhan-Reddy, and S. Sahu, “Efficacy of light based detection systems for early detection of oral cancer and oral potentially malignant disorders: Systematic review,” Med. Oral Patol. Oral Cir. Bucal 21, e447–55 (2016). [CrossRef]  

63. I. Pavlova, C. R. Weber, R. A. Schwarz, M. D. Williams, A. M. Gillenwater, and R. Richards-Kortum, “Fluorescence spectroscopy of oral tissue: Monte carlo modeling with site-specific tissue properties,” J. Biomed. Opt. 14(1), 014009 (2009). [CrossRef]  

64. Stanford Digital Repository for OralEye Camera Image Data: https://purl.stanford.edu/mc747zz6607.
