
Hyperspectral environmental illumination maps: characterizing directional spectral variation in natural environments


Abstract

Objects placed in real-world scenes receive incident light from every direction, and the spectral content of this light may vary from one direction to another. In computer graphics, environmental illumination is approximated using maps that specify illumination at a point as a function of incident angle. However, to date, existing public databases of environmental illumination specify only three colour channels (RGB). We have captured a new set of 12 environmental illumination maps (eight outdoor scenes; four indoor scenes) using a hyperspectral imaging system with 33 spectral channels. The data reveal a striking directional variation of the spectral distribution of lighting in natural environments. We discuss limitations of using daylight models to describe natural environmental illumination.

Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.

1. Introduction

The vast majority of objects do not themselves emit light and are therefore visible only by virtue of the incident light they reflect. Characterizing the properties of natural lighting is therefore important in understanding the physical inputs that visual sensors receive in everyday life. The variation in manufactured indoor lighting is relatively accessible (e.g. [1]), but variation in natural daylight requires systematic measurement. Past studies have made vigorous efforts to characterize the spectral variation of illumination in natural outdoor environments. The most widely known result is the CIE daylight model [2], which accounts for the major variations in the spectral content of natural daylight using only three basis functions. Other studies show similar statistical trends, with the chromaticities of natural illuminants clustering closely around the daylight locus [3]. It is also well understood how the spectrum of daylight changes over time and with atmospheric conditions [4], and recent work has extended these observations to twilight [5]. The focus of these studies has been to characterize the dominant illumination in a scene. Such measurements have significantly advanced our understanding of the human visual system, especially of mechanisms underlying the perception of objects’ surface properties, such as colour and shape. For example, to extract information about surface colour, the visual system might exploit the constraint that the illumination in a natural environment is likely to be daylight [6].

However, the fact that the spectral content of local illumination varies as a function of position in a complex scene suggests a significant influence of nearby objects, which reflect light and thereby modify its spectral content away from the daylight locus [7]. Indeed, several studies have modelled the effect of inter-reflection on the resultant signals from surfaces and illuminants [8–10]. Work has also been undertaken to characterize the directional variation in lighting, with particular emphasis on the directional distribution of intensity [11,12]. The so-called “light from above” heuristic has received considerable attention as a candidate Bayesian prior for interpreting the shape of objects from the variation in shading across their surfaces [13,14]. More recent work suggests that visual behaviours are sensitive to the complex directional structure of natural lighting beyond “light from above” [11,15].

The present study measured the directional variation in the spectral content of environmental illumination. Natural illumination mostly originates from two sources: (i) direct emission from a light source and (ii) secondary (or higher-order) reflection from other surfaces in the scene. The spectral shape of secondarily reflected light is modified by the surface spectral reflectance of the reflecting objects. For an object placed at a specific point in a scene, the spectral content of the illumination reaching the object may therefore vary significantly as a function of incident angle. In computer graphics, this directional dependency is approximated using an environmental illumination map [16], which stores, at each pixel, the intensity of light arriving from a particular direction at a single point in the scene (e.g. an object or an eye). Environmental illumination maps have proved important in achieving realistic appearance when rendering computer-generated images of objects. However, existing databases of environmental illumination maps store only three-channel (RGB) measurements of light, and their use in perceptual science has been limited to characterizing intensity variation [17]. Consequently, the spectral variation in directional lighting is largely unexplored.

In this paper, we report new hyperspectral measurements of environmental illumination, which specify the spectral content of incident light from every direction at a point in a scene. Analysis of the collected datasets revealed that the colours of illumination are not confined to the daylight locus, and instead can deviate significantly from it. The data also show a striking directional variability of illumination in natural scenes, mainly as a function of elevation angle rather than azimuth angle. For outdoor scenes, light from above tends to be intense and dominated by short wavelengths (typically evoking a bluish percept), whereas light from below is likely to have roughly one log unit lower intensity and a lower colour temperature. Indoor scenes, in contrast, have relatively uniform illumination across angle.

2. Method

2.1 Hyperspectral imaging system

The hyperspectral imaging system consists of a low-noise Peltier-cooled digital camera with a spatial resolution of 1344 $\times$ 1024 pixels (C4742-95-12ER, Hamamatsu Photonics K.K., Hamamatsu, Japan) equipped with a tunable liquid-crystal filter (VariSpec, model VS-VIS2-10-HC-35-SQ, Cambridge Research & Instrumentation, Inc., MA). Focal length was adjusted before each measurement and the aperture was set to F16 to achieve a large depth of focus. The intensity resolution was 12 bits per pixel, and the response was confirmed to be linear with light intensity by in-house laboratory measurements. Spectral measurements ranged from 400 nm to 720 nm in 10 nm steps. Each wavelength acquisition required a different exposure time to integrate a sufficient amount of light. Thus, before each image acquisition, we determined the exposure time using an automatic procedure so that the maximum pixel value in the acquired image fell within 85$\pm$5 % of the upper limit of the dynamic range, per individual wavelength. Wide-field images were obtained by imaging a 3-inch diameter mirror sphere (AISI 52100 chrome steel ball, Simply Bearings Ltd, Lancashire, UK) with the hyperspectral camera, and four such images were stitched together to produce the full 4$\pi$ steradian panorama. Immediately after an image was acquired, we measured the reflected light from a flat gray reflectance surface corresponding to Munsell N7, oriented perpendicular to the imaging axis and placed immediately adjacent to the mirror sphere, using a spectrophotometer with traceable calibration (PR-650, Photo Research Inc., Chatsworth, CA) for the purpose of spectral radiance calibration. A more detailed specification of the system can be found elsewhere [18,19].
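For concreteness, the sketch below illustrates the kind of per-wavelength auto-exposure loop described above, exploiting the sensor's linear response. The camera interface (`camera.set_wavelength`, `camera.acquire`) is hypothetical; the paper does not document its acquisition API.

```python
import numpy as np

FULL_SCALE = 2**12 - 1          # 12-bit sensor
TARGET = 0.85 * FULL_SCALE      # aim for 85% of the dynamic range
TOLERANCE = 0.05 * FULL_SCALE   # accept 85 +/- 5 %

def find_exposure(camera, wavelength, t_init=0.1, max_iter=20):
    """Adjust exposure so the max pixel sits near 85% of full scale.

    `camera.set_wavelength` and `camera.acquire` are hypothetical
    placeholders for the (undocumented) camera API.
    """
    camera.set_wavelength(wavelength)
    t = t_init
    for _ in range(max_iter):
        frame = camera.acquire(exposure=t)
        peak = frame.max()
        if peak < FULL_SCALE and abs(peak - TARGET) <= TOLERANCE:
            return t
        # the sensor response is linear in exposure time (per the paper),
        # so a single multiplicative update should land near the target
        t *= TARGET / max(peak, 1)
    return t
```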

2.2 Measurement

All measurements were performed in Braga, Portugal. We collected data in eight outdoor scenes and four indoor scenes at Museu Nogueira da Silva and at the University of Minho, Gualtar Campus. Since the present imaging system is of the wavelength-scanning type, sampling all wavelengths took relatively long compared with other imaging methods. Although changes in the lighting environment were inevitable during the measurement, we minimised the impact by choosing a time when the location of the sun was relatively stable (rather than when the sun was rising or setting). Before starting a measurement, we set up the imaging system as shown in Fig. 1(a). The mirror sphere was placed on top of an optical support attached to a tripod. The height of the centre of the sphere above the ground was fixed at 158 cm. The zoom of the lens was maximized to allow the greatest physical distance between the sphere and the imaging apparatus, thereby minimizing the size of the image of the apparatus reflected in the mirror sphere. The hyperspectral camera was positioned 89 cm from the sphere so that the image of the mirror sphere fitted just within the upper and lower limits of the overall image, as shown in Fig. 1(b). The spatial resolution of each sphere region was 1024 $\times$ 1024 pixels. In the optical conditions of acquisition, the system has an acceptance angle of about 6 deg of visual angle and a line-spread function close to a Gaussian with standard deviation of approximately 1.3 pixels at 550 nm, thus producing a spatial resolution at least as good as that of the human eye. Before each acquisition, the camera alignment was adjusted to horizontal using a spirit level. To avoid direct sunlight, all outside measurements were performed in shadow. The mirror sphere reflects light from a wide range of incident angles towards the camera, but there are two issues if we try to construct an environmental illumination map from only a single image of the sphere. Firstly, the scene behind the sphere is a “blind spot” for the sphere. Secondly, the centre of the sphere image contains the reflection of the imaging apparatus itself. To overcome these issues, four images of the sphere were taken, separated by 90 degrees, as shown in Fig. 1(c). The exposure time of each wavelength acquisition was determined by the aforementioned automatic procedure before the measurement and was kept constant for all angles. Finally, a dark-field image was acquired with the same exposure time for calibration purposes using the hyperspectral camera with the lens cap on. Images were processed, transformed to an equirectangular image, like a world map, and stitched together to create a full panorama image as described in the next subsection.

Fig. 1. How to measure environmental illumination. (a) Set-up of the hyperspectral imaging system. The control PC was placed behind the camera so that it could be erased later along with the tripod. (b) Mirror sphere with a reference plate for calibration purposes. The spatial resolution of the sphere region was 1024 $\times$ 1024. (c) Schematic illustration of the measurement geometry. To cover the “blind spot” of the sphere and to erase the reflection of the imaging apparatus, measurements were taken from four different angles.

2.3 Image processing

Figure 2 shows the image-processing pipeline used to produce an environmental illumination map from a set of mirror-sphere images. First, (i) each raw image I$_{Raw}$ (x, y, $\lambda$) of the sphere was corrected for dark noise in the imaging system, using spectral images obtained with the camera occluded, and for spatial inhomogeneity due to off-axis vignetting, using spectral images obtained with a uniform-transmittance diffuser in front of the lens illuminated with spatially uniform light. Let the dark-field image be I$_{DF}$ (x, y, $\lambda$) and the flat-field image be I$_{FF}$ (x, y, $\lambda$). The corrected image I$_{1}$ (x, y, $\lambda$) is given by Eq. (1).

$$I_1(x,\;y,\;\lambda) = \frac{I_{Raw}(x,\;y,\;\lambda)-I_{DF}(x,\;y,\;\lambda)}{I_{FF}(x,\;y,\;\lambda)-I_{DF}(x,\;y,\;\lambda)}$$
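A minimal implementation of Eq. (1), assuming the raw, dark-field and flat-field images are stored as (y, x, wavelength) NumPy arrays acquired at matching exposure settings:

```python
import numpy as np

def correct_image(I_raw, I_dark, I_flat):
    """Dark-field subtraction and flat-field division, Eq. (1)."""
    num = I_raw.astype(float) - I_dark
    den = I_flat.astype(float) - I_dark
    # guard against division by zero in dead or fully dark pixels
    return num / np.clip(den, 1e-9, None)
```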

Fig. 2. The image-processing pipeline. (i) A dark field image was measured with the imaging system with the lens cap on, and was subtracted from a raw image. Also, spatial inhomogeneities of the imaging system were corrected. (ii) Spectral radiance calibration was performed by equating the measurement from the hyperspectral imaging system to the reference measurement from the PR-650 spectrophotometer. (iii) Any saturated pixels were removed, and filled in using linear interpolation of horizontal nearest pixels. (iv) Specular reflection of the mirror sphere was measured by spectrophotometer and corrected. (v) The edge of the sphere receives and integrates light from a wider region of the surrounding scene. This intensity compression was corrected using a cosine function of radius. (vi) Smoothing based on locally weighted quadratic regression was performed for each pixel independently. (vii) The image coordinates were transformed to equirectangular projection. (viii) Image registration was performed to minimize the effects of chromatic aberration. (ix) The tripod was erased from each sphere image using an image from another angle. (x) Finally, four images taken from different angles were stitched together based on an automatic feature-detection algorithm to create a full-panorama hyperspectral environmental illumination map.

At this stage, the image has arbitrary units. Thus, secondly, (ii) we further calibrated the image so that each pixel has units of spectral radiance (W$\cdot$sr$^{-1}\cdot$m$^{-2}\cdot$nm$^{-1}$). Our approach is to find a spatially global, wavelength-dependent scaling function that equates the spectrum at the gray reference plate recorded in the hyperspectral image to the spectrum independently measured in spectral radiance by the PR-650 spectrophotometer. This procedure is independent of any assumptions about the geometry of the light from the scene. Let (u, v) be the region on the gray reference plate, S(u, v, $\lambda$) the spectrum reflected from this surface as measured by the PR-650, and I$_1$ (u, v, $\lambda$) the spectrum at the corresponding region in the hyperspectral image. The desired scaling function w$_{SRC}$ ($\lambda$) is given by Eq. (2).

$$w_{SRC}(\lambda) = \frac{S(u,\;v,\;\lambda)}{I_1(u,\;v,\;\lambda)}$$
Then, we scaled the whole hyperspectral image by this factor, as shown in Eq. (3).
$$I_2(x,\;y,\;\lambda) = I_1(x,\;y,\;\lambda) \cdot w_{SRC}(\lambda)$$
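Eqs. (2) and (3) amount to a per-wavelength rescaling. A sketch, assuming the reference-plate region is given as a boolean mask and the PR-650 spectrum has been resampled to the camera's 33 wavelengths:

```python
def calibrate_radiance(I1, pr650_spectrum, ref_region):
    """Scale the image to spectral radiance units, Eqs. (2)-(3).

    I1: (y, x, wavelength) image in arbitrary units.
    pr650_spectrum: PR-650 radiance at the same wavelengths.
    ref_region: boolean (y, x) mask over the gray reference plate.
    """
    # mean hyperspectral signal over the reference plate, per wavelength
    plate = I1[ref_region].mean(axis=0)      # shape: (n_wavelengths,)
    w_src = pr650_spectrum / plate           # Eq. (2)
    return I1 * w_src                        # Eq. (3), broadcast over (y, x)
```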
Thirdly, as shown in Eq. (4), (iii) any saturated pixels were removed from the image using a binary mask M(x, y), which takes the value zero at saturated pixels and one elsewhere, at every wavelength. The resultant blank pixels were then filled in by linear interpolation from the horizontally nearest pixels, because our data suggest that spectral shapes are more similar in the horizontal direction (see Results and discussion).
$$I_3(x,\;y,\;\lambda) = fillin\left\{I_2(x,\;y,\;\lambda) \cdot M(x,\;y)\right\}$$
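A sketch of the masking and horizontal fill-in of Eq. (4); the mask `M` is assumed to have been computed from the raw (pre-calibration) data:

```python
import numpy as np

def fill_saturated(I2, M):
    """Fill saturated pixels by horizontal linear interpolation, Eq. (4).

    M: binary (y, x) mask, 0 where any wavelength saturated, 1 elsewhere.
    """
    I3 = I2.copy()
    cols = np.arange(I3.shape[1])
    for row in range(I3.shape[0]):
        bad = M[row] == 0
        if bad.any() and (~bad).any():
            for k in range(I3.shape[2]):
                # interpolate each wavelength from the nearest valid
                # pixels in the same image row
                I3[row, bad, k] = np.interp(cols[bad], cols[~bad],
                                            I3[row, ~bad, k])
    return I3
```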
Fourth, (iv) the image was corrected for the wavelength-dependent reflectance function of the mirror sphere, SR($\lambda$), measured with a spectrophotometer (UV-3600 Plus UV-VIS-NIR, Shimadzu, Kyoto, Japan), as in Eq. (5).
$$I_4(x,\;y,\;\lambda) = I_3(x,\;y,\;\lambda) / SR(\lambda)$$
Fifth, (v) since regions close to the edge of the mirror sphere receive (and integrate) light from a wider range of angles in the scene, this intensity compression was corrected by a weighting w$_{I}$ (x, y), a cosine function of the angle $\theta$ between the incoming light at that point on the sphere and the camera axis, as in Eq. (6).
$$I_5(x,\;y,\;\lambda) = I_4(x,\;y,\;\lambda) \cdot w_I(x,\;y)$$
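The exact form of w$_I$ is not spelled out beyond "a cosine function of radius"; the sketch below takes w$_I$ = cos $\theta$ with $\theta$ = arcsin r under an orthographic-view approximation, matching the multiplicative form of Eq. (6). Treat the geometry as illustrative, not the authors' exact weighting.

```python
import numpy as np

def cosine_correction(I4, radius):
    """Edge-compression correction, Eq. (6) (illustrative geometry).

    radius: radius of the sphere image in pixels; the sphere is assumed
    centred in the frame. Under an orthographic view, a pixel at
    normalized radius r sees the sphere surface at angle arcsin(r).
    """
    h, w = I4.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2) / radius
    r = np.clip(r, 0.0, 0.999)
    w_I = np.cos(np.arcsin(r))        # assumed form of the cosine weight
    return I4 * w_I[..., None]        # broadcast over wavelength
```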
Sixth, (vi) for each pixel independently, we smoothed the spectrum using a peak-preserving method based on locally weighted quadratic regression with a span of 25%, optimised to minimise information loss using leave-one-out cross-validation [20], as in Eq. (7). We chose a method that can be applied to individual spectra, as opposed to one based on spatial averaging, to preserve spatial resolution.
$$I_6(x,\;y,\;\lambda) = smooth\left\{I_5(x,\;y,\;\lambda)\right\}$$
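Standard library lowess routines are typically locally *linear*, so the sketch below fits the locally weighted quadratic directly, with tricube weights and a 25% span as described for Eq. (7):

```python
import numpy as np

def loess_quadratic(wavelengths, spectrum, span=0.25):
    """Locally weighted quadratic regression smoothing, Eq. (7).

    wavelengths, spectrum: 1D arrays (33 samples, 400-720 nm here).
    span: fraction of samples in each local window (25% per the paper).
    """
    wavelengths = np.asarray(wavelengths, float)
    spectrum = np.asarray(spectrum, float)
    n = len(wavelengths)
    k = max(int(np.ceil(span * n)), 4)           # points per local window
    smoothed = np.empty(n)
    for i, x0 in enumerate(wavelengths):
        d = np.abs(wavelengths - x0)
        idx = np.argsort(d)[:k]                  # k nearest wavelengths
        w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube weights
        sw = np.sqrt(w)
        X = np.vander(wavelengths[idx] - x0, 3)  # columns: x^2, x, 1
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * spectrum[idx],
                                   rcond=None)
        smoothed[i] = beta[-1]                   # local fit evaluated at x0
    return smoothed
```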
Seventh, (vii) each sphere image was converted to equirectangular coordinates [21], as in Eq. (8). In this transformation, the horizontal coordinate represents azimuth and the vertical coordinate elevation; zero elevation corresponds to the height of the camera.
$$I_6(x,\;y,\;\lambda) \xrightarrow{} I_7(\phi,\;\theta,\;\lambda)$$
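A sketch of the sphere-to-equirectangular resampling of Eq. (8), under an orthographic-camera approximation (the real camera sat 89 cm from the sphere, so the true mapping differs slightly near the rim). For each output direction, the sphere normal is the half-vector between that direction and the direction back to the camera:

```python
import numpy as np

def sphere_to_equirect(I6, out_h=512, out_w=1024):
    """Resample a mirror-sphere image to equirectangular form, Eq. (8)."""
    phi = np.linspace(-np.pi, np.pi, out_w)            # azimuth
    theta = np.linspace(np.pi / 2, -np.pi / 2, out_h)  # elevation, top down
    P, T = np.meshgrid(phi, theta)
    # unit vector of the incoming light (z axis points at the camera)
    D = np.stack([np.cos(T) * np.sin(P), np.sin(T), np.cos(T) * np.cos(P)],
                 axis=-1)
    V = np.array([0.0, 0.0, 1.0])                      # sphere-to-camera
    n = D + V                                          # half-vector
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    n = n / np.clip(norm, 1e-9, None)   # the blind spot (D = -V) is undefined
    h, w = I6.shape[:2]
    # orthographic projection: sphere normal (nx, ny) maps to pixel offset
    px = np.round((n[..., 0] + 1) / 2 * (w - 1)).astype(int)
    py = np.round((1 - n[..., 1]) / 2 * (h - 1)).astype(int)
    return I6[py, px]                                  # nearest-neighbour
```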
Eighth, (viii) image registration was performed using the 560 nm channel as a reference to correct for chromatic aberration, as in Eq. (9). An affine transformation was determined to optimise the alignment, quantified by cross-correlation [22].
$$I_8(\phi,\;\theta,\;\lambda) = registration\left\{I_7(\phi,\;\theta,\;\lambda)\right\}$$
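The paper optimizes an affine transform against a cross-correlation criterion; as a lighter-weight illustration, the sketch below estimates a pure translation per channel by phase correlation (scikit-image), which captures the dominant lateral shift of chromatic aberration:

```python
import numpy as np
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift as nd_shift

def register_channels(I7, ref_index):
    """Align every spectral channel to a reference channel, cf. Eq. (9).

    ref_index: index of the 560 nm channel. Translation-only sketch;
    the paper's affine fit is more general.
    """
    ref = I7[..., ref_index]
    I8 = np.empty_like(I7)
    for k in range(I7.shape[-1]):
        shift, _, _ = phase_cross_correlation(ref, I7[..., k],
                                              upsample_factor=10)
        I8[..., k] = nd_shift(I7[..., k], shift, order=1, mode='nearest')
    return I8
```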
Ninth, (ix) we removed the reflection of the imaging system using an image from another angle, I$_{8\_2}$ ($\phi$, $\theta$, $\lambda$), and a Gaussian window Gaussian($\phi$, $\theta$), as in Eq. (10). The centre coordinate and the standard deviation (width) of the Gaussian function were selected by hand for each image so that the imaging system was eliminated from the final image.
$$I_9(\phi,\;\theta,\;\lambda) = I_8(\phi,\;\theta,\;\lambda)\left\{ 1-Gaussian(\phi,\;\theta)\right\}+I_{8\_2}(\phi,\;\theta,\;\lambda)\, Gaussian(\phi,\;\theta)$$
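A direct transcription of Eq. (10), with the window centre and width supplied by hand as described:

```python
import numpy as np

def erase_apparatus(I8, I8_2, centre, sigma):
    """Blend in a second view to hide the imaging rig, Eq. (10).

    centre: (column, row) of the apparatus reflection in pixels;
    sigma: Gaussian width in pixels. Both were hand-tuned per image.
    """
    h, w = I8.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-(((xx - centre[0]) ** 2 + (yy - centre[1]) ** 2)
                 / (2.0 * sigma ** 2)))
    g = g[..., None]                     # broadcast over wavelength
    return I8 * (1 - g) + I8_2 * g
```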
Finally, tenth, (x) the four sphere images were stitched together using the SIFT automatic feature-detection algorithm [23] to produce the 4$\pi$ steradian full-panorama environmental illumination map. Image features were found in the V($\lambda$)-weighted intensity image, and the same stitching transformation was applied independently to each wavelength.
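A sketch of the feature-matching step using OpenCV's SIFT implementation; estimating a single homography between two overlapping views is a simplification of full panorama stitching, but shows the structure of step (x):

```python
import cv2
import numpy as np

def stitch_transform(lum_a, lum_b):
    """Estimate the transform aligning two V(lambda)-weighted intensity
    images with SIFT features; the same transform is then applied to
    every wavelength channel of image A.
    """
    sift = cv2.SIFT_create()
    a8 = cv2.normalize(lum_a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    b8 = cv2.normalize(lum_b, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    kp_a, des_a = sift.detectAndCompute(a8, None)
    kp_b, des_b = sift.detectAndCompute(b8, None)
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    # Lowe's ratio test to keep distinctive matches only
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```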

Parts of the methods described here are analogous to those described in a recent tutorial paper [24].

3. Results and discussion

Figure 3 shows the sRGB representation of the collected hyperspectral environmental illumination maps for eight outdoor scenes and four indoor scenes (Fig. 3(a) and 3(b), respectively). Figure 3(c) and 3(d) show the corresponding chromatic distributions of these maps expressed in the MacLeod-Boynton chromaticity diagram [25]. The horizontal axis is L/(L+M), which expresses the ratio of signals in the long- and middle-wavelength-sensitive cones, while the vertical axis indicates S/(L+M), which expresses the signal in the short-wavelength cones in relation to a combination of long- and middle-wavelength-sensitive cone signals. We used the Stockman and Sharpe cone fundamentals [26] to calculate LMS cone excitations. It is clear that the chromaticities of the illumination spectra distribute around the daylight locus, but that they also deviate substantially from it. This held for both outdoor and indoor scenes. Thus, natural lighting is not limited to the daylight locus, but includes a wider range of chromaticities. The white cross indicates the chromaticity of the mean spectrum across all pixels. For outdoor scenes, it has generally lower L/(L+M) and higher S/(L+M) in relation to the chromaticity of equal-energy white, shown by the intersection of the gray dotted lines. For indoor scenes, the mean chromaticity is slightly shifted toward lower values of S/(L+M).
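For reference, the projection from measured spectra to MacLeod-Boynton coordinates can be sketched as follows, assuming Stockman-Sharpe cone fundamentals (available from www.cvrl.org) resampled to the measurement wavelengths, with L and M scaled so that L + M equals luminance as the diagram requires:

```python
import numpy as np

def macleod_boynton(spectra, cone_fundamentals, wavelengths):
    """MacLeod-Boynton coordinates l = L/(L+M), s = S/(L+M).

    spectra: (n_pixels, n_wavelengths) radiance spectra.
    cone_fundamentals: (n_wavelengths, 3) Stockman-Sharpe L, M, S
    sensitivities at the same wavelengths (an external data file).
    """
    dlam = np.gradient(wavelengths)                      # 10 nm steps here
    lms = spectra @ (cone_fundamentals * dlam[:, None])  # (n_pixels, 3)
    L, M, S = lms.T
    lum = L + M                                          # luminance
    return L / lum, S / lum
```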

Fig. 3. sRGB images of hyperspectral environmental illumination at (a) 8 outdoor scenes and (b) 4 indoor scenes. (c) and (d) show the corresponding chromatic distributions in the MacLeod-Boynton chromaticity diagram for each scene. The cyan line indicates the daylight locus. The gray dotted lines show the chromaticity of equal energy white.

More importantly, Fig. 4 shows that there is a variation in spectral shape and intensity level depending on incident angle. Each plot on the right-hand panel indicates the mean spectrum across all pixels within the corresponding rectangle depicted in the left-hand panel. The background colour of each plot indicates the log luminance level. For the campus scene (Fig. 4(a)), lights from above ($\theta$ > 0) tend to have high intensities, on the order of thousands of cd/m$^{2}$, and high energy in the short-wavelength region. In contrast, lights reflected back from the ground and other objects, which are dominated by secondary (or higher-order) reflections, tend to have about one log unit lower luminance and lower correlated colour temperature. In this scene, lights from below have high energy at long wavelengths, presumably because vegetation tends to have high reflectance in the near infrared [27]. The variation across azimuth angles ($\phi$) is much smaller. For the canteen scene (Fig. 4(b)), there is substantially less variation in intensity as a function of direction. Its spectral shape shows signatures of the daylight spectrum and of the fluorescent lighting, additionally modulated by inter-reflections. Collectively, the measured datasets show that directional spectral variation is highly systematic and differs between outdoor and indoor scenes. These data also clearly show that the “light from above” heuristic does not hold universally in natural environments, because of the presence of secondary reflections. In summary, the obtained datasets reveal large directional spectral variation in real-world visual environments, though the amount of directional variation depends on the scene, as shown in the Appendix.

Fig. 4. Directional spectral variation as a function of azimuth $\phi$ and elevation $\theta$. (a) Example of one outdoor scene (campus). Each plot on the right-hand panel represents the mean spectrum within the corresponding rectangle drawn on the left-hand image. The colour of each curve is the sRGB colour of the spectrum. The background colour represents the log luminance level. The black dotted line indicates the spectrum derived from CIE daylight basis functions. (b) The case for one indoor scene (canteen). See Appendix for 10 other scenes.

The black dotted curve in each plot represents the spectrum reconstructed from the tristimulus values of the measured spectrum under the daylight-basis assumption (see Eqs. (2a), (4a), and (5a) of [2]). Strictly speaking, these equations apply only to chromaticities on the daylight locus, but here we test the extent to which a linear weighted sum of daylight basis functions accounts for the observed spectral variation. Each reconstructed spectrum was normalized to have the same area under the curve as the corresponding measured spectrum. It is clear for the campus scene (Fig. 4(a)) that the reconstructed spectra do not match the real spectra well in the lower regions of the image, corresponding to low elevation angles. As expected, this shows that not all natural illuminant spectra can be recovered from daylight basis functions in a scene where secondary reflection is present. For the canteen scene (Fig. 4(b)), spectral shapes are more uniform across incident angle and rather resemble the CIE daylight function, presumably because the artificial overhead lighting from fluorescent tubes was mixed with daylight entering the room through the large window. It is notable that the regions oriented towards the large window are one to two log units higher in luminance.

It should be noted that this trend depended on the scene, as shown in the Appendix. To test whether the similarity between observed spectra and spectra reconstructed from CIE daylight basis functions differs between the upper hemisphere (where directly emitted light dominates) and the lower hemisphere (where secondary reflection dominates), we calculated the root-mean-square error (RMSE) between the measured and reconstructed spectra for all panels in Fig. 4. Note that in Fig. 4 all spectra were scaled so that the maximum value within each panel is one, for the sake of visibility; for the RMSE calculation, however, all spectra were scaled to have the same area under the curve. This normalization allows spectral shape to be compared whilst ignoring intensity differences, and it makes the units of the RMSE values arbitrary. The RMSE values were averaged separately across the upper 18 panels ($\theta$ >= 0) and the lower 18 panels ($\theta$ < 0) for each scene. Figure 5 shows the mean RMSE values across upper panels and lower panels for (a) 8 outdoor scenes and (b) 4 indoor scenes.
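The shape-comparison metric can be written compactly. A sketch of the equal-area normalization and RMSE used for Figs. 5 and 6:

```python
import numpy as np

def shape_rmse(measured, reconstructed, wavelengths):
    """RMSE between two spectra after equal-area normalization, so that
    only the spectral shape (not intensity) is compared."""
    dlam = np.gradient(wavelengths)
    def equal_area(s):
        return s / np.sum(s * dlam)      # unit area under the curve
    a = equal_area(np.asarray(measured, float))
    b = equal_area(np.asarray(reconstructed, float))
    return np.sqrt(np.mean((a - b) ** 2))
```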

Fig. 5. Average RMSE between measured spectra and spectra reconstructed under the CIE daylight assumption, across the upper 18 panels (red circle symbols) and the lower 18 panels (gray diamond symbols). Panels (a) and (b) show the 8 outdoor scenes and 4 indoor scenes, respectively. Signs above each symbol pair indicate the p-values testing whether the upper and lower parts have significantly different RMSE values. Error bars indicate $\pm$ S.E., but they are mostly smaller than the symbols.

A two-way repeated-measures analysis of variance (ANOVA) was performed on the RMSE values, with scene (scene 1 to scene 8) and hemisphere (upper, lower) as within-subject factors. For (a) outdoor scenes, the main effect of hemisphere, the main effect of scene, and the interaction between the two factors were all significant (F(1,17) = 512.5, $\eta ^{2}$ = 0.968, p < .00001; F(7,119) = 40.5, $\eta ^{2}$ = 0.704, p < .00001; F(7,119) = 52.6, $\eta ^{2}$ = 0.756, p < .00001, respectively). Analysis of the interaction revealed that for scenes 1, 5, 6, 7 and 8, the lower parts showed significantly higher RMSE values (F(1,17) = 26.5, p = .000081; F(1,17) = 43.4, p < .00001; F(1,17) = 9.34, p = .00715; F(1,17) = 214.3, p < .00001; F(1,17) = 52.8, p < .00001, respectively). In contrast, scenes 2, 3 and 4 did not show significant differences between the upper and lower hemispheres (F(1,17) = 3.68, p = .072; F(1,17) = 2.16, p = .160; F(1,17) = 3.38, p = .084, respectively). For (b) indoor scenes, the main effect of hemisphere was not significant (F(1,17) = 1.35, $\eta ^{2}$ = 0.282, p = .261), the main effect of scene was significant (F(3,51) = 73.4, $\eta ^{2}$ = 2.08, p < .00001), and the interaction between the two factors was not significant (F(3,51) = 0.59, $\eta ^{2}$ = 0.186, p = .624). Thus, for indoor scenes, the dissimilarity of the measured spectra to those recovered from the daylight basis functions is similar between upper and lower hemispheres.

Overall, for outdoor scenes, secondary reflection alters the spectral shape substantially, and thus the daylight assumption does not adequately account for the observed spectral variation. In contrast, for indoor scenes, the dissimilarity (RMSE) was small overall compared with outdoor scenes, and the difference in applicability of the daylight model between upper and lower hemispheres was essentially absent.

Fig. 6. Heatmaps showing the spectral variation for (a) outdoor scenes and (b) indoor scenes. Each pixel indicates the RMSE between the measured spectrum and the mean spectrum across all pixels; the units are arbitrary. Note that the spectra at all pixels and the mean spectra were normalized to have equal area, to allow comparison of spectral shape. This calculation was performed independently for each scene, so that overall luminance differences did not affect the calculation. The number at the top left of each image indicates the mean RMSE value across all pixels.

To compare the uniformity of spectral shape across direction, Fig. 6 shows heat maps in which each pixel indicates the RMSE between the measured spectrum and the mean spectrum across all pixels in that scene. Values at the top left of each image indicate the RMSE averaged across all pixels. If a scene had the same spectrum at every pixel, its heatmap would be uniformly black. Note that the mean spectrum across all pixels was calculated from the raw data, and was thus dominated by intense spectra; again, however, when calculating the RMSE, all spectra were normalized to have equal area so that the dissimilarity of spectral shape could be compared whilst ignoring intensity differences. Overall, the mean RMSE is smaller for indoor scenes than for outdoor scenes: the mean of the averaged RMSE values (shown at the top left of each panel) across the 8 outdoor scenes was 7.95 and that for the indoor scenes was 5.49, a significant difference (p = 0.0121, t(10) = 3.06, t-test). Therefore, the indoor scenes have a more uniform directional structure of spectral shape across all directions than the outdoor scenes. We also note that the RMSE between two area-normalized spectra is not the only way to quantify shape difference; however, we confirmed that a different metric (the Pearson correlation coefficient) leads to similar conclusions.

Additionally, we characterized the spatial structure of the measured environmental illumination maps, to understand how spatial scale varies from one scene to another and from one spectral channel to another. For 2D image planes, the power spectrum of the 2D Fourier transform would answer this question, but since environmental illumination maps are specified in spherical coordinates, we used a spherical-harmonic decomposition. Figure 7(a) shows a schematic illustration: the right-hand part of the panel shows the series of spectral images that comprise the original hyperspectral image (left-hand side). Spherical harmonics are indexed by two parameters, degree and order, which control the frequency and orientation of the basis functions; a detailed explanation is given elsewhere (e.g. [28]). In Fig. 7(b) and 7(c) we show the power spectra (i.e. the absolute values of the spherical-harmonic coefficients) as a function of degree, l. Coefficients were summed across orders, and each curve was normalized independently so that its highest point is zero. Points were connected by spline interpolation. For all scenes, the power is highest at zero or small degrees and decreases rapidly as the degree increases. Scenes marked by a plus symbol at the bottom left of the panel show a particularly strong wavelength dependency, in that the power at short wavelengths decreases more rapidly than at longer wavelengths. For these scenes, longer wavelengths therefore seem to have finer spatial structure, although this wavelength dependency was not observed in all scenes.
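A sketch of the per-wavelength power spectrum, assuming the pyshtools package (its `SHExpandDH` and `spectrum` routines) and an equirectangular grid of shape n × 2n with n even, as the Driscoll-Healy transform requires:

```python
import numpy as np
import pyshtools as pysh   # assumed third-party dependency

def sh_power_per_degree(equirect_channel):
    """Spherical-harmonic power spectrum of one wavelength channel.

    equirect_channel: (n, 2n) float array, one spectral slice of the
    illumination map. Power is summed over orders m for each degree l,
    then log-normalized so the curve peaks at zero, as in Fig. 7.
    """
    coeffs = pysh.expand.SHExpandDH(equirect_channel, sampling=2)
    power = pysh.spectralanalysis.spectrum(coeffs)   # one value per degree l
    log_power = np.log10(power + 1e-30)              # avoid log of zero
    return log_power - log_power.max()               # peak at zero
```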

Fig. 7. Application of spherical-harmonic expansion to environmental illuminations. (a) Spherical-harmonic expansion was performed independently for each wavelength. The variable l represents the degree and the right-hand panel shows the reconstructed images up to a specified degree level. (b) Power spectra of spherical-harmonic expansion for wavelengths from 400 nm to 720 nm every 40 nm as a function of degree l for outdoor scenes. (c) Power spectra of indoor scenes.

One important implication of the present findings relates to human perceptual colour constancy: the ability of the human colour vision system to support stable perception of an object’s surface colour regardless of illumination. Most previous studies of human colour constancy have used a simplified experimental setting in which an object is illuminated by a single-spectrum illumination [29]. Under such lighting environments, the computation to achieve colour constancy can be conceptualized as first estimating the illumination colour and then discounting its influence from the whole visual field. This approach may be difficult in real-world situations because an object receives different lights from every direction; moreover, a slight change in head or object position causes the object to sample a different set of incident lights. The measured data also raise an interesting question about which white point our visual system adapts to when there is more than one candidate white point in a scene. For example, adaptation to light from above and to light from below would result in substantially different retinal adaptation states. Nevertheless, various studies suggest that our perception of surface properties of objects such as shape, colour or gloss is not critically impaired under such complex lighting environments [30–39]. One might argue that the visual system does not have access to the full illumination map at a glance; however, it is possible that we accumulate information over time through eye or head movements [40]. A recent behavioural study suggests that the visual system might indeed summarize the influence of illumination in a complex lighting environment via a relatively simple strategy such as spatio-temporal averaging of the illumination map [41]. Although further evidence is needed, this line of research will potentially provide interesting and novel insights toward understanding human vision in more realistic environments. Another notable finding is the failure of the daylight assumption for light from below in several scenes. This is consistent with one study on colour constancy suggesting that the visual system may not actually rely on the daylight assumption [42].

One limitation of the present study relates to the spatial resolution of the image, due to the use of a mirror sphere and the limited resolution of the hyperspectral camera itself (the image of the sphere was sampled at 1024$\times$1024 pixels). At the edge of the sphere, the reflected image of the scene is highly compressed, so spatial resolution is low in these regions. However, by stitching four images we were able to eliminate the left and right edges of each image, where compression is most severe. Specifically, we used only the central 82.0% of each sphere image and, since the compression follows a cosine function (as described in panel (v) of Fig. 2), the azimuthal compression factor was always less than 27.3%. Since the elevation angle of the camera was not changed, we could not eliminate the compressed regions at the top and bottom of the sphere. Light from directly above and directly below was therefore more sparsely sampled but, in general, there was less spatial variation in the scenes in these directions. For future measurements, we plan to introduce a fish-eye lens, which can sample illumination more uniformly across incident angles than a mirror sphere. Spatial resolution might additionally be lowered by some of the post-processing of the image, such as image registration (correction of lateral chromatic aberration) or image stitching, since these manipulations can introduce misalignment. A second limitation is that the present measurements avoided direct sunlight, and the dynamic range was therefore limited. Hyperspectral measurement of a single sphere image can take over 10 minutes to capture sufficient light at all wavelengths. We prioritized data-acquisition time over dynamic range, which would have required changing the exposure time, because otherwise the lighting environment would be likely to change during the measurement. Methods for measuring multispectral environmental illumination have been improving in recent years [43,44], and it will be important to measure additional hyperspectral datasets to further generalize the findings. Nevertheless, the present study provides an important first step toward understanding the complex nature of environmental illumination, especially in the spectral domain.

4. Conclusion

Based on the novel datasets collected in this study, we found that the chromaticities of natural illuminants are not restricted to the daylight locus, but instead show a much wider chromatic variation than previously thought. More importantly, the data show that natural lighting environments possess significant directional spectral variation, and that the degree of variation is much larger in outdoor scenes than in indoor scenes.

Appendix

Fig. 8. Directional spectral variation for the remaining 7 outdoor scenes. The way the data is plotted follows Fig. 4. The colour of each curve is the sRGB colour of the spectrum. The background colour represents the log luminance level. The black dotted line indicates the spectrum derived from CIE daylight basis functions.

Figures 8 and 9 show directional spectral variation for the other outdoor and indoor scenes, respectively. The degree of spectral variation depends somewhat on the scene, but we observe the same general trend described in the main text. That is, for outdoor scenes (Fig. 8), light from above tends to be bluish and of high luminance, whilst light from below has less energy in the short-wavelength region and lower intensity. Indoor scenes have relatively uniform spectral shape regardless of direction, but regions that include large windows or that are open to the sky have particularly high luminance.

Fig. 9. Directional spectral variation for the remaining 3 indoor scenes. The colour of each curve is the sRGB colour of the spectrum. The background colour represents the log luminance level. The black dotted line indicates the spectrum derived from CIE daylight basis functions.

Funding

Experimental Psychology Society; Fundação para a Ciência e a Tecnologia (UID/FIS/04650/2019); Aso Group; Kikawada Foundation; Japan Student Services Organization; Great Britain Sasakawa Foundation; Wellcome Trust (094595/Z/10/Z).

Acknowledgments

The authors acknowledge Museu Nogueira da Silva for the use of their garden to collect datasets, and Andreia E. Gomes, Catarina F. M. Herdeiro, Eduardo G. Vicente and Joshua Harvey for help with measurement. The authors also thank David H. Foster and Tom C. B. McLeish for advice on image processing. This study was supported by a Study Visit grant from the Experimental Psychology Society, UK, awarded to T.M. T.M.'s DPhil studentship is funded by awards from the Aso Group, the Kikawada Foundation, the Japan Student Services Organization and the Sasakawa Fund.

References

1. K. W. Houser, M. Wei, A. David, M. R. Krames, and X. S. Shen, “Review of measures for light-source color rendition and considerations for a two-measure system for characterizing color rendition,” Opt. Express 21(8), 10393–10411 (2013).

2. D. B. Judd, D. L. MacAdam, G. Wyszecki, H. W. Budde, H. R. Condit, S. T. Henderson, and J. L. Simonds, “Spectral distribution of typical daylight as a function of correlated color temperature,” J. Opt. Soc. Am. 54(8), 1031–1040 (1964).

3. J. Hernández-Andrés, J. Romero, J. L. Nieves, and R. L. Lee, “Color and spectral analysis of daylight in southern Europe,” J. Opt. Soc. Am. A 18(6), 1325–1335 (2001).

4. J. Tian, Z. Duan, W. Ren, Z. Han, and Y. Tang, “Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision,” Opt. Express 24(7), 7266–7286 (2016).

5. M. Spitschan, G. K. Aguirre, D. H. Brainard, and A. M. Sweeney, “Variation of outdoor illumination as a function of solar elevation and light pollution,” Sci. Rep. 6(1), 26756 (2016).

6. D. H. Brainard and W. T. Freeman, “Bayesian color constancy,” J. Opt. Soc. Am. A 14(7), 1393–1411 (1997).

7. S. M. Nascimento, K. Amano, and D. H. Foster, “Spatial distributions of local illumination color in natural scenes,” Vision Res. 120, 39–44 (2016).

8. R. Deeb, J. V. de Weijer, D. Muselet, M. Hebert, and A. Tremeau, “Deep spectral reflectance and illuminant estimation from self-interreflections,” J. Opt. Soc. Am. A 36(1), 105–114 (2019).

9. T. Machida, N. Yokoya, and H. Takemura, “Surface reflectance modeling of real objects with interreflections,” in Proceedings of IEEE International Conference on Computer Vision (IEEE, 2003), pp. 170–177.

10. R. Deeb, D. Muselet, M. Hebert, and A. Tremeau, “Interreflections in computer vision: A survey and an introduction to spectral infinite-bounce model,” J. Math. Imaging Vis. 60(5), 661–680 (2018).

11. Y. Morgenstern, W. S. Geisler, and R. F. Murray, “Human vision is attuned to the diffuseness of natural light,” J. Vis. 14(9), 15 (2014).

12. A. A. Mury, S. C. Pont, and J. J. Koenderink, “Structure of light fields in natural scenes,” Appl. Opt. 48(28), 5386–5395 (2009).

13. V. Ramachandran, “Perception of shape from shading,” Nature 331(6152), 163–166 (1988).

14. P. Mamassian and R. Goutcher, “Prior knowledge on the illumination position,” Cognition 81(1), B1–B9 (2001).

15. Y. Morgenstern, R. F. Murray, and L. R. Harris, “The human visual system’s assumption that light comes from above is weak,” Proc. Natl. Acad. Sci. 108(30), 12551–12553 (2011).

16. P. Debevec, “Rendering synthetic objects into real scenes,” in Proceedings of ACM SIGGRAPH (ACM, 1998), pp. 19–24.

17. R. O. Dror, A. S. Willsky, and E. H. Adelson, “Statistical characterization of real-world illumination,” J. Vis. 4(9), 11 (2004).

18. S. M. C. Nascimento, F. P. Ferreira, and D. H. Foster, “Statistics of spatial cone-excitation ratios in natural scenes,” J. Opt. Soc. Am. A 19(8), 1484–1490 (2002).

19. D. H. Foster, K. Amano, S. M. Nascimento, and M. J. Foster, “Frequency of metamerism in natural scenes,” J. Opt. Soc. Am. A 23(10), 2359–2372 (2006).

20. A. M. Molinaro, R. Simon, and R. M. Pfeiffer, “Prediction error estimation: A comparison of resampling methods,” Bioinformatics 21(15), 3301–3307 (2005).

21. E. Reinhard, G. Ward, S. Pattanaik, and P. Debevec, High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting (Morgan Kaufmann, 2005).

22. B. Zitová and J. Flusser, “Image registration methods: A survey,” Image Vis. Comput. 21(11), 977–1000 (2003).

23. Y. Li, Y. Wang, W. Huang, and Z. Zhang, “Automatic image stitching using SIFT,” in Proceedings of IEEE Conference on Audio, Language and Image Processing (IEEE, 2008), pp. 568–571.

24. D. H. Foster and K. Amano, “Hyperspectral imaging in color vision research: tutorial,” J. Opt. Soc. Am. A 36(4), 606–627 (2019).

25. D. I. A. MacLeod and R. M. Boynton, “Chromaticity diagram showing cone excitation by stimuli of equal luminance,” J. Opt. Soc. Am. 69(8), 1183–1186 (1979).

26. A. Stockman and L. T. Sharpe, “The spectral sensitivities of the middle- and long-wavelength-sensitive cones derived from measurements in observers of known genotype,” Vision Res. 40(13), 1711–1737 (2000).

27. R. Navalgund and M. B. Rajani, “The science behind archaeological signatures from space,” Curr. Sci. 113(10), 1859–1872 (2017).

28. R. Ramamoorthi and P. Hanrahan, “An efficient representation for irradiance environment maps,” in Proceedings of SIGGRAPH ’01 on Computer Graphics and Interactive Techniques (ACM, 2001), pp. 497–500.

29. L. Arend and A. Reeves, “Simultaneous color constancy,” J. Opt. Soc. Am. A 3(10), 1743–1751 (1986).

30. R. W. Fleming, R. O. Dror, and E. H. Adelson, “Real-world illumination and the perception of surface reflectance properties,” J. Vis. 3(5), 3 (2003).

31. K. Doerschner, H. Boyaci, and L. T. Maloney, “Testing limits on matte surface color perception in three-dimensional scenes with complex light fields,” Vision Res. 47(28), 3409–3423 (2007).

32. L. K. Doerschner and H. Boyaci, “Estimating the glossiness transfer function induced by illumination change and testing its transitivity,” J. Vis. 10(4), 8 (2010).

33. M. Olkkonen and D. H. Brainard, “Perceived glossiness and lightness under real-world illumination,” J. Vis. 10(9), 5 (2010).

34. I. Motoyoshi and H. Matoba, “Variability in constancy of the perceived surface reflectance across different illumination statistics,” Vision Res. 53(1), 30–39 (2012).

35. T. Morimoto and H. E. Smithson, “Discrimination of spectral reflectance under environmental illumination,” J. Opt. Soc. Am. A 35(4), B244–B255 (2018).

36. K. Doerschner, H. Boyaci, and L. T. Maloney, “Human observers compensate for secondary illumination originating in nearby chromatic surfaces,” J. Vis. 4(2), 3–105 (2004).

37. J. N. Yang and S. K. Shevell, “Surface color perception under two illuminants: The second illuminant reduces color constancy,” J. Vis. 3(5), 4–379 (2003).

38. H. Boyaci, K. Doerschner, and L. T. Maloney, “Perceived surface color in binocularly viewed scenes with two light sources differing in chromaticity,” J. Vis. 4(9), 1–679 (2004).

39. R. J. Lee and H. E. Smithson, “Context-dependent judgments of color that might allow color constancy in scenes with multiple regions of illumination,” J. Opt. Soc. Am. A 29(2), A247–A257 (2012).

40. H. E. Smithson and Q. Zaidi, “Colour constancy in context: Roles for local adaptation and levels of reference,” J. Vis. 4(9), 3 (2004).

41. T. Morimoto and H. E. Smithson, “Identifying surface colours across different environmental illuminations,” Perception 48, 47 (2019).

42. P. B. Delahunt and D. H. Brainard, “Does human color constancy incorporate the statistical regularity of natural daylight?” J. Vis. 4(2), 1–81 (2004).

43. S. Tominaga, A. Matsuura, and T. Horiuchi, “Spectral analysis of omnidirectional illumination in a natural scene,” J. Imaging Sci. Technol. 54(4), 040502 (2010).

44. K. Hirai, N. Osawa, M. Hori, T. Horiuchi, and S. Tominaga, “High-dynamic-range spectral imaging system for omnidirectional scene capture,” J. Imaging 4(4), 53 (2018).
