## Abstract

Underwater spectral imaging is a promising method for the mapping, classification and health monitoring of coral reefs and seafloor inhabitants. However, the spectrum of light is distorted during the underwater imaging process due to wavelength-dependent attenuation by the water. This paper presents a model-based method that accurately restores the brightness of underwater spectral images captured with narrowband filters. A model is built for narrowband underwater spectral imaging. The model structure is derived from physical principles, representing the absorption, scattering and refraction by water and the optical properties of the narrowband filters, lenses and image sensors. The model coefficients are calibrated from spectral images captured underwater and in air. With the imaging model available, the energy loss due to water attenuation is restored for images captured at different underwater distances. An experimental setup is built and experiments are carried out to verify the proposed method. Underwater images captured within an underwater distance of 260 cm are restored and compared with those taken in air. The results show that the relative restoration error is 3.58% on average for the test images, demonstrating the accuracy of the proposed method.

© 2016 Optical Society of America

## 1. Introduction

Spectral imaging is an important method for high-fidelity color reproduction [1], object recognition and identification [2], health monitoring of plants or organisms [3], etc. Although the applications of spectral imaging are mostly found on land, the technique has also proven valuable in subaqueous surroundings, e.g., in the detection of red tides [4], the classification of algae and corals [5–7], the assessment of coral health [8, 9], and the identification of underwater minerals [10]. However, it is challenging to obtain high-fidelity spectral images of underwater objects, mainly due to the wavelength-dependent absorption and scattering of the permeating light by water, which distorts the spectral energy distribution in underwater images and causes the images to be dark and hazy [11–13].

Intensive research has been done to remove the effects of the water column on images: shortening the underwater imaging distance [6–8], comparing measurements with references [10, 14–16], or correcting the spectral distortion based on attenuation models [17–22].

For instance, in [6, 8], the light path in water is set so short that the influence of water can be neglected. In some studies, a standard Spectralon panel with known reflectance is placed beside the underwater object of interest to estimate the attenuation of the water, from which the reflectance of the object can be determined [5, 10, 14, 15]. In the work by Mobley [16], a library is built beforehand, containing the spectral radiance reflected by various objects at different water depths in different water environments. A spectral measurement in the field is then compared with the spectra in the library and the one with the highest similarity is selected; hence the depth and optical properties of the water are determined.

Others are devoted to modeling the underwater imaging process, calibrating the optical properties of water (e.g., the attenuation and scattering coefficients) and correcting the underwater images. The water attenuation is modeled by the Beer-Lambert law in most works [17–24], but different models consider different affecting factors, e.g., the scattering of water [18, 23], vignetting in the image [20, 21, 23], or the response of the camera [19, 23]. For the calibration of the water attenuation coefficient, spectrometers are commonly used to measure the downwelling radiance from the sunlight or atmosphere, or the spectrum reflected by a gray reference, at different water depths [17–19, 24]. To simplify the calibration process, underwater images taken at different distances or water depths are also used [20, 21]. With knowledge of the attenuation coefficient, the brightness (or intensity) of underwater images is then corrected according to the Beer-Lambert law or a model of the imaging system [17–22].

It is worth noting that the work in [17–22] mainly focuses on color correction of underwater images captured with 3-channel color cameras, where broadband color filters (e.g., red, green and blue filters) with bandwidths of more than 100 nm are commonly used. However, to highlight the spectral features that distinguish an object from the background, narrowband filters (typically with a full-width-half-maximum (FWHM) of no more than 10 nm) are desired, so that images can be acquired at characteristic wavelengths or specific wavelengths of interest [25, 26, 28].

Therefore, in this paper, the feasibility of using narrowband filters for underwater spectral imaging is preliminarily studied, and calibration and restoration methods for narrowband imaging are investigated. The modeling of underwater spectral imaging with narrowband filters is discussed with reference to the optical properties of water, the narrowband filter, the lens and the image sensor. Model coefficients (e.g., the attenuation coefficient of the water and the transmittance of the optical window) are calibrated from narrowband spectral images captured both in air and underwater for the same object. The energy loss in underwater images is then restored based on the imaging model and the underwater distance.

The main contribution of this paper lies in the exploration of narrowband filters for underwater spectral imaging and the corresponding calibration and restoration methods. Benefiting from the reduced filter bandwidth and the parameterization of the model, a relative restoration error of less than 5% is achieved.

The paper is structured as follows. Modeling of the underwater spectral imaging process is investigated in Section 2. In Section 3, the methods for model coefficient calibration and image restoration are presented with detailed algorithms. The experimental setup is described in Section 4, followed by experiments and results in Section 5. Discussion is given in Section 6 and the paper is concluded in Section 7.

## 2. Modeling of narrowband underwater spectral imaging

#### 2.1. Spectral imaging in air

The schematics of the narrowband spectral imaging systems under investigation are depicted in Fig. 1(a) and Fig. 1(b) for imaging in air and underwater, respectively. The imaging system mainly consists of a lens (or lens system), a tunable narrowband filter, and an image sensor (e.g., a CCD or CMOS sensor in a camera). The light emitted or reflected by the object is focused by the lens onto the image sensor, which is placed in the image plane of the lens. The passband of the filter can be tuned, e.g., using a filter wheel, a liquid crystal tunable filter, or an acousto-optic tunable filter, so that only light within a narrow wavelength range is transmitted to the image sensor.

Consider a point *P* on an object in air that is imaged by the imaging system. Denote the coordinates of point *P* as *P*(*x,y*), the object distance as *z*, and the focal length of the lens as *f*; then, the coordinates of the image point *P*′ (denoted as *P*′(*x*′, *y*′)) and the image distance (denoted as *z*′) are given by [29]

Since achromatic lenses are commonly used in spectral imaging systems to reduce chromatic aberration, the focal length of the lens is considered wavelength-independent over the operational wavelength range.

Let *L*(*x,y,λ*) be the radiance emitted or reflected from point *P*, where *λ* is the wavelength of light in air; then, the irradiance at *P*′ (denoted as *E*(*x*′,*y*′,*λ,z*)) can be represented as [29]

where *D* is the diameter of the aperture of the imaging system (e.g., the diameter of the lens), and *τ _{l}*(*λ*) and *τ _{f}*(*λ*) are the transmissivity of the lens and the filter, respectively. For simplicity of expression, the terms *C* and *G*(*z*) are defined as in Eq. (2), where *C* is related to the aperture and focal length of the lens and *G*(*z*) shows how the irradiance changes with the object distance *z* and the focal length *f*.

The angle *θ* is the view angle and cos^{2} *θ* is given by

The term cos^{4} *θ* represents the natural vignetting of the imaging system, which leads to an inhomogeneous irradiance distribution on the image sensor. In this paper, only the paraxial case is considered, i.e., the size of the object is much smaller than the object distance *z*, so the angle *θ* is close to 0, cos *θ ≈* 1, and the natural vignetting is neglected. Therefore Eq. (2) is simplified as

The image sensors in digital cameras usually consist of a matrix of pixels, where the irradiance in each pixel is converted into brightness of the image. Suppose light at the image point *P*′ falls on a pixel in the *i*th row and *j*th column of the image sensor, and denote the exposure area of the pixel as *S _{i,j}*. The brightness of this pixel (denoted as *I*(*i, j,λ _{c},z*)) can then be represented as

where *λ _{c}* is the central wavelength of the filter, and the passband of the filter is represented as [*λ _{c}* − Δ*λ*_{1}, *λ _{c}* + Δ*λ*_{2}], where Δ*λ*_{1} and Δ*λ*_{2} are wavelength intervals. *q*(*λ*) is the spectral response of the image sensor, *t* is the exposure time of the image sensor, and *a* represents the conversion from optical power to brightness of the pixel. Because the passband of the filter is within the response range of the image sensor, the integration is over the passband of the filter.

As a pixel is on the scale of microns, the irradiance is considered uniform within the exposure area *S _{i,j}*. The integration over space coordinates and wavelength can then be separated, and Eq. (5) can be written in the compact form

The terms *A* and *C* depend on the configuration of the camera and the parameters of the lens, respectively. The influence of the object distance is embraced in *G*(*z*). The newly defined term *H*(*x,y,λ _{c}*) shows how the brightness of the image can be changed by tuning the central wavelength of the filter.

#### 2.2. Underwater spectral imaging

When taking spectral images underwater, the imaging system must be sealed watertight, and the underwater object is imaged through an optical window made of glass or sapphire (see Fig. 1(b)). Not only is the size of the image changed due to the refraction of water, but the spectral radiance is also attenuated by the water.

Referring to Fig. 1(b), for the point *P*(*x,y*) on the object, the coordinates of the image point
${P}_{w}^{\prime}$ are given by [29]

where *z _{w}* is the equivalent underwater distance (see Fig. 2), defined as

where *l*′, *l* and *d* are the distances in air, water and glass, respectively, and *n _{w}* and *n _{g}* are the refractive indices of water and glass, respectively. In this study, as the underwater spectral imaging system operates in the wavelength range of [400 nm, 700 nm], the variation in *n _{w}* and *n _{g}* with respect to the wavelength is neglected. Since the refractive indices of water and glass are more than 1, it is as if the object were brought closer to the lens. As a consequence, the image is enlarged.
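This geometric effect can be sketched numerically. The apparent-distance form *z _{w}* = *l*′ + *l*/*n _{w}* + *d*/*n _{g}* used below is an assumption standing in for the exact expression in the text, and the refractive indices are nominal values:

```python
def equivalent_distance(l_air, l_water, d_glass, n_w=1.333, n_g=1.52):
    """Equivalent underwater object distance z_w, assuming the common
    apparent-distance form z_w = l' + l/n_w + d/n_g (a stand-in for the
    exact definition in the text); n_w, n_g are nominal indices."""
    return l_air + l_water / n_w + d_glass / n_g

# Refraction makes the object appear closer than its geometric distance,
# so the image formed by the lens is enlarged.
z_w = equivalent_distance(l_air=0.8, l_water=2.4, d_glass=0.006)
print(z_w < 0.8 + 2.4 + 0.006)  # True: the equivalent distance is shorter
```

Because the water and glass terms are divided by indices greater than 1, the equivalent distance is always shorter than the geometric path, which is why the underwater image is enlarged.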

According to the Beer-Lambert law [31], the radiance emitted or reflected by the underwater object is attenuated exponentially with respect to the underwater distance. In addition, the scattering of light by both the water and the particles in the water should be considered in underwater imaging [13, 23]. In a similar way to Eq. (4), the irradiance at the image point ${P}_{w}^{\prime}$ (denoted as ${E}_{w}({x}_{w}^{\prime},{y}_{w}^{\prime},\lambda ,{z}_{w})$) can be represented as

where *β*(*λ*) represents the transmissivity of the optical window and *α*(*λ*) is the attenuation coefficient of the water. The term *E _{b}*(*λ*)*e*^{−*ν*(*λ*)*l*} represents the influence of scattered light on the background of the image, with parameters *E _{b}*(*λ*) and *ν*(*λ*). As scattering changes the direction of light propagation, not only is the optical power at the image point ${P}_{w}^{\prime}$ reduced (as embraced in the attenuation coefficient *α*(*λ*)), but the background of the image is also hazed, as described by the term *E _{b}*(*λ*)*e*^{−*ν*(*λ*)*l*}. *E _{s}*(*λ*) represents the influence of stray light, which adds to the hazing of the image as well.

By comparing Eq. (4) and Eq. (9), it can be seen that the irradiance in the underwater image is reduced by both attenuation and refraction of water. Attenuation reduces the total light power arriving at the image plane, while refraction enlarges the underwater image compared with the image in air at the same object distance. Therefore, the light power per unit area in the image plane is reduced, and the underwater images appear darker than those taken in air.
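The structure of the underwater irradiance model can be sketched as follows. The function collapses the geometric factors (*C*, *G*, lens and filter transmissivities) into a single in-air signal, and every coefficient value in the demo is an illustrative assumption, not a calibrated one:

```python
import math

def underwater_irradiance(E_air, l, alpha, beta, E_b, nu, E_s):
    """Sketch of the irradiance structure of Eq. (9): the in-air signal
    is attenuated by the optical window (beta) and by the water
    (Beer-Lambert factor exp(-alpha*l)), while scattering adds a hazy
    background E_b*exp(-nu*l) and stray light adds E_s."""
    return beta * math.exp(-alpha * l) * E_air + E_b * math.exp(-nu * l) + E_s

# Illustrative coefficients: the received signal decays with distance l.
near = underwater_irradiance(1.0, 0.2, alpha=0.45, beta=0.92, E_b=0.1, nu=0.8, E_s=0.01)
far = underwater_irradiance(1.0, 2.0, alpha=0.45, beta=0.92, E_b=0.1, nu=0.8, E_s=0.01)
print(near > far)  # True
```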

When imaging with a digital camera, suppose the image point
${P}_{w}^{\prime}$ is within a pixel with coordinates (*i _{w}, j_{w}*); then, similar to Eq. (5) and (6), the brightness of the pixel can be expressed as

where *I _{w}*(*i _{w}, j_{w},λ_{c},z_{w}*) is the pixel brightness. As narrowband filters are used in imaging, the coefficients *α*(*λ*), *β*(*λ*), *E _{b}*(*λ*), *ν*(*λ*) and *E _{s}*(*λ*) are considered constant within the passband of the filter, i.e., for *λ* ∈ [*λ _{c}* − Δ*λ*_{1}, *λ _{c}* + Δ*λ*_{2}]. Therefore, Eq. (10) can be simplified as

where *κ*(*λ _{c}*) and *γ*(*λ _{c}*) represent the influence of scattered light and stray light, respectively. Since the coefficients *α*(*λ*), *β*(*λ*), *E _{b}*(*λ*), *ν*(*λ*) and *E _{s}*(*λ*) in fact vary with wavelength, Eq. (11) only holds if the bandwidth of the filter is infinitely small (i.e., Δ*λ*_{1} + Δ*λ*_{2} approaches 0). The approximation error increases with the bandwidth of the filter in general, but also depends on the variation in the coefficients. Therefore it is important to keep the bandwidth of the filter narrow enough, especially at wavelengths where the coefficients vary rapidly.
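This bandwidth effect can be illustrated numerically. With a hypothetical attenuation spectrum that varies linearly with wavelength (the numbers below are illustrative, not the calibrated coefficients of the paper), the error of freezing the coefficient at the central wavelength grows with the filter bandwidth:

```python
import math

def band_avg_transmission(alpha, lam_c, half_width, l, n=201):
    """Average the Beer-Lambert factor exp(-alpha(lam)*l) over a flat
    passband [lam_c - half_width, lam_c + half_width] by simple
    numerical quadrature."""
    lams = [lam_c - half_width + 2 * half_width * i / (n - 1) for i in range(n)]
    return sum(math.exp(-alpha(lam) * l) for lam in lams) / n

# Hypothetical linear attenuation spectrum (illustrative units).
alpha = lambda lam: 0.05 + 0.004 * (lam - 400.0)

lam_c, l = 620.0, 2.0
const_approx = math.exp(-alpha(lam_c) * l)  # coefficient frozen at lam_c
for fwhm in (10.0, 50.0, 100.0):
    avg = band_avg_transmission(alpha, lam_c, fwhm / 2, l)
    rel_err = abs(avg - const_approx) / const_approx
    print(f"FWHM {fwhm:5.1f} nm -> relative error {rel_err:.4f}")
```

Because the exponential is convex, the band-averaged transmission always exceeds the value at the central wavelength, and the gap widens as the passband widens.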

## 3. Spectral image restoration

By combining Eq. (6) and Eq. (12), a connection can be made between the image brightness in air and underwater as

The term *k*(*λ _{c},l*) represents the attenuation by the water and the optical window, and *b*(*λ _{c},l*) represents the hazing caused by scattering, stray light, etc. The goal of image restoration is to compensate for the energy loss in underwater images due to attenuation by the water and the optical window.

Define a coefficient vector *ϕ*(*λ*) = [*α*(*λ*), *ν*(*λ*), *β*(*λ*), *κ*(*λ*), *γ*(*λ*)] ∈ ℝ^{5}. If all coefficients in *ϕ*(*λ*) are known, the energy loss in underwater images can be rectified based on the relationship in Eq. (13). However, in practice, the coefficients vary in different regions of the oceans and may also change with respect to time. Therefore, it is important to calibrate the coefficients *in situ* to restore the underwater spectral images accurately.

#### 3.1. Calibration of coefficients

Given the relationship established in Eq. (13), the coefficients in *ϕ*(*λ*) can be estimated by data fitting from an air image and underwater images of the same object. However, the brightness values *I*(*i, j,λ _{c},z*) and *I _{w}*(*i _{w}, j_{w},λ_{c},z_{w}*) must correspond to the same point *P* on the object, which is difficult, if not impossible, to guarantee in practice. For this reason, a calibration object with an easily recognizable pattern is necessary to establish correspondence for the same point or region between air and underwater images. For instance, a matrix of squares displayed on a phone screen (see Fig. 3) is used as the calibration object in the experiments later on. The color is uniform within each square for ease of establishing correspondence between images, but differs between squares to improve the accuracy of coefficient estimation.

Suppose we have spectral images of the calibration object in air and underwater, all acquired with the central wavelength of the filter *λ _{c}* = *λ*_{1}. The image in air is captured at an object distance of *z* = *z*_{1}. The underwater images are captured at a series of underwater distances *l* = *l*_{1}, *l*_{2}, …, *l _{M}* (*M* is the number of distances). The coefficients in *ϕ*(*λ*) can then be calibrated based on Eq. (13) as follows.

Take the brightness of *N* regions in the air image and of the corresponding regions in the underwater images (*N* is the number of regions used for calibration); a set of equations can then be built as

Writing Eq. (14) in matrix form, we obtain the compact matrix equation

The unknown terms *k*(*λ*_{1},*l*_{1}) and *b*(*λ*_{1},*l*_{1}) can be estimated by solving Eq. (15) with the linear least squares (LLS) method as

where $\widehat{X}$ is the estimate of *X*. To make sure that the matrix *D*^{T}*D* is invertible, at least two regions of different brightness should be selected, i.e., *D* must have full column rank. By selecting more regions into the equation set, the accuracy of estimation can be improved.
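The LLS step of Eqs. (15)–(16) amounts to fitting the affine relation *I _{w}* = *k·I* + *b* across the calibration regions. A minimal numpy sketch with synthetic brightness values (not measured data):

```python
import numpy as np

def fit_k_b(I_air, I_water):
    """Estimate k and b in I_w = k*I + b by linear least squares.
    The design matrix D stacks the air brightness values with a column
    of ones; at least two regions of different brightness are needed
    for D to have full column rank."""
    D = np.column_stack([I_air, np.ones(len(I_air))])
    x, *_ = np.linalg.lstsq(D, np.asarray(I_water, dtype=float), rcond=None)
    return float(x[0]), float(x[1])

# Synthetic check with known coefficients k = 0.6, b = 0.05.
I_air = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
I_water = 0.6 * I_air + 0.05
k, b = fit_k_b(I_air, I_water)
print(round(k, 6), round(b, 6))  # 0.6 0.05
```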

To estimate the water attenuation coefficient *α*(*λ*), underwater images captured at different distances can be used. According to the definition of *k*(*λ,l*) in Eq. (13), a set of equations can be developed for a series of underwater distances as

Unknowns *α*(*λ*_{1}) and *β*(*λ*_{1}) can be estimated by solving an optimization problem as

where *J* is the cost function to be minimized by the optimization algorithm, and $\widehat{\alpha}({\lambda}_{1})$ and $\widehat{\beta}({\lambda}_{1})$ are the estimates of *α*(*λ*_{1}) and *β*(*λ*_{1}), respectively.
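Given *k*(*λ,l*) at several distances, *α* and *β* can be recovered. Assuming the Beer-Lambert form *k*(*l*) = *β·e*^{−αl} implied by Eq. (13), a log-linear least-squares fit can stand in for the generic optimization of Eq. (18); the data below are synthetic:

```python
import numpy as np

def fit_alpha_beta(l, k):
    """Fit k(l) = beta * exp(-alpha * l): taking logs gives
    ln k = ln(beta) - alpha * l, a straight line in l, so a degree-1
    polynomial fit recovers both coefficients."""
    slope, intercept = np.polyfit(np.asarray(l, float), np.log(np.asarray(k, float)), 1)
    return float(-slope), float(np.exp(intercept))

l = np.array([0.1, 0.4, 0.7, 1.0, 1.6, 2.2])   # underwater distances (m)
k = 0.92 * np.exp(-0.45 * l)                   # synthetic: alpha=0.45, beta=0.92
alpha, beta = fit_alpha_beta(l, k)
print(round(alpha, 6), round(beta, 6))  # 0.45 0.92
```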

Similarly, another set of equations is formed to estimate the hazing coefficients as

Coefficients *ν*(*λ*_{1}), *κ*(*λ*_{1}) and *γ*(*λ*_{1}) are estimated by solving an optimization problem as

where *J*′ is the cost function to be minimized by the optimization algorithm, and $\widehat{\nu}({\lambda}_{1})$, $\widehat{\kappa}({\lambda}_{1})$ and $\widehat{\gamma}({\lambda}_{1})$ are the estimates of *ν*(*λ*_{1}), *κ*(*λ*_{1}) and *γ*(*λ*_{1}), respectively.
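Assuming the hazing form *b*(*l*) = *κ·e*^{−νl} + *γ* implied by Eqs. (19)–(20), the fit is linear in *κ* and *γ* once *ν* is fixed, so a one-dimensional grid search plus linear least squares can stand in for a general optimizer. The data below are synthetic:

```python
import numpy as np

def fit_hazing(l, b, nu_grid=np.linspace(0.01, 3.0, 300)):
    """Estimate nu, kappa, gamma in b(l) = kappa*exp(-nu*l) + gamma.
    For each candidate nu the remaining parameters are linear, so each
    grid point costs one small least-squares solve."""
    l, b = np.asarray(l, float), np.asarray(b, float)
    best = None
    for nu in nu_grid:
        D = np.column_stack([np.exp(-nu * l), np.ones(len(l))])
        x, *_ = np.linalg.lstsq(D, b, rcond=None)
        err = float(np.sum((D @ x - b) ** 2))
        if best is None or err < best[0]:
            best = (err, float(nu), float(x[0]), float(x[1]))
    return best[1], best[2], best[3]

l = np.array([0.1, 0.5, 0.9, 1.3, 1.8, 2.4])
b = 0.30 * np.exp(-0.8 * l) + 0.02            # synthetic: nu=0.8, kappa=0.3, gamma=0.02
nu, kappa, gamma = fit_hazing(l, b)
print(round(nu, 2), round(kappa, 2), round(gamma, 2))  # 0.8 0.3 0.02
```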

By repeating the procedures above for the different wavelengths *λ _{c}* = *λ*_{1}, *λ*_{2}, ⋯, *λ _{p}* (where *p* is the number of wavelengths to be tuned in the imaging system), all the coefficients in *ϕ*(*λ*) can be estimated.

#### 3.2. Image restoration

With coefficients estimated, following Eq. (13), the underwater images can be restored as

To evaluate the accuracy of the restoration method, comparison is made between the restored underwater images and images captured in air. The restoration error is defined as

where *I _{e}*(*i, j,λ _{c},z*) is the restoration error, the term $\frac{G(z)}{G({z}_{w})}$ accounts for the change in brightness of underwater images due to the change in image size, and $\widehat{I}(i,j,{\lambda}_{c},z)$ is the estimated image brightness in air. The relative restoration error is defined as

The restoration method can therefore be summarized as follows.

**Model-based narrowband underwater spectral image restoration method** (general description and *pseudo code* implementation)

- Spectral image acquisition
  - *Capture a spectral image cube of the calibration object in air at a distance of z, at wavelengths λ = λ*_{1}, *λ*_{2}, ⋯, *λ _{p}* (*where p is the number of filters*).
  - *Capture M underwater spectral image cubes of the calibration object at underwater distances l = l*_{1}, *l*_{2}, ⋯, *l _{M}* (*M is the number of underwater distances*), *respectively. Each cube is captured at wavelengths λ = λ*_{1}, *λ*_{2}, ⋯, *λ _{p}*.
- Coefficient calibration
  - **for** *λ _{c}* = *λ*_{1}, *λ*_{2}, ⋯, *λ _{p}*
    - **for** *l* = *l*_{1}, *l*_{2}, ⋯, *l _{M}*
      - *Estimate k*(*λ,l*) *and b*(*λ,l*) *using Eqs. (15)–(16).*
    - **end**
    - *Estimate the coefficients α*(*λ*), *β*(*λ*), *ν*(*λ*), *κ*(*λ*) *and γ*(*λ*) *using Eqs. (17)–(20).*
  - **end**
- Image restoration
  - *Restore the image brightness using Eq. (21).*
  - *Evaluate the error using Eq. (22) and Eq. (23).*
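A toy round trip of the restoration step can be written as follows. The geometric factor *G*(*z*)/*G*(*z _{w}*) is omitted, and all coefficient values are illustrative assumptions rather than the calibrated ones:

```python
import math

def restore_brightness(I_w, l, alpha, beta, kappa, nu, gamma):
    """Invert the simplified forward model: subtract the hazing offset
    b(l) = kappa*exp(-nu*l) + gamma, then divide out the attenuation
    gain k(l) = beta*exp(-alpha*l). The geometric factor G(z)/G(z_w)
    is omitted in this sketch."""
    k = beta * math.exp(-alpha * l)
    b = kappa * math.exp(-nu * l) + gamma
    return (I_w - b) / k

# Degrade a known in-air brightness with the forward model, then restore it.
alpha, beta, kappa, nu, gamma, l = 0.45, 0.92, 0.30, 0.8, 0.02, 1.5
I_air = 0.62
I_w = beta * math.exp(-alpha * l) * I_air + kappa * math.exp(-nu * l) + gamma
print(round(restore_brightness(I_w, l, alpha, beta, kappa, nu, gamma), 6))  # 0.62
```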

## 4. Experimental setup

To verify the restoration method, an experimental setup is implemented as shown in Fig. 3, which mainly consists of a mobile phone, a water tank and a spectral imaging system.

The mobile phone (Xiaomi 3, Xiaomi, China) is placed in a waterproof glass box, with 60 color pieces displayed on its LCD screen, to act as the underwater object to be imaged. The color pieces are squares in 6 rows and 10 columns. Each column presents a hue of the Munsell color system [32], and each row presents different combinations of value and chroma. As discussed in Section 3.1, the screen pattern is designed for calibration of model coefficients, such that the same color piece can be easily recognized among images captured in air and underwater.

The waterproof box (*L* 35 × *W* 25 × *H* 50 cm) is made of 6 mm-thick quartz glass and is placed in a water tank (*L* 300 × *W* 30 × *H* 30 cm) made of 10 mm-thick quartz glass. The water tank is filled with clean tap water.

The spectral imaging system is placed outside the water tank. It mainly consists of an imaging lens, a mirror, a rotating filter wheel with a set of filters and a monochrome CCD camera. The lens is a cemented doublet (GCL-010607, Daheng, China) with a focal length of 150 mm and a diameter of 38.1 mm. The computer-controlled rotating filter wheel is installed with 15 filters, each with a small magnet mounted beside it. The wheel is driven by a stepper motor, with the rotation angle monitored by a Hall sensor. The filters (FB series, Thorlabs, USA) on the wheel have a FWHM of 10 nm (see Fig. 3(c)), with central wavelengths ranging from 420 nm to 700 nm at an interval of 20 nm. The camera (Lm-165M, Lumenera, USA) has a resolution of 1392 × 1040 pixels and a dynamic range of 66 dB. During image acquisition, the filters are rotated onto the optical axis of the camera successively, and each stays on the axis until the camera is fully exposed.

## 5. Experiments and results

#### 5.1. Preliminary tests

Preliminary tests are carried out before spectral image acquisition to evaluate the stability of the phone screen and the linearity of the camera. The mobile phone is switched on for more than 10 minutes before the test. The output light intensity of a certain color piece is measured by a fiber spectrometer (FLA5000, Jingfei, China) for 3500 scans at an integration time of 50 ms. During the test, the phone is kept charged by a voltage-stabilized source.

The response of the spectrometer to the color piece (3,7) (i.e., the piece in the 3rd row and 7th column of the pattern) is shown in Fig. 4, where the response at wavelengths of 447 nm, 540 nm, and 660 nm is evaluated. The intensity variation at a certain wavelength *λ* is defined as

where *I _{max}*(*λ*), *I _{min}*(*λ*) and *I _{mean}*(*λ*) are the maximal, minimal, and average intensity at the given wavelength during the observation, respectively. The intensity variation is 1%, 1.7% and 8.9% for wavelengths of 447 nm, 540 nm, and 660 nm, respectively, indicating a stable light output by the phone screen.
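Assuming the variation is the peak-to-peak spread relative to the mean, consistent with the three quantities defined above, the metric can be computed as:

```python
def intensity_variation(samples):
    """Spread of the observed intensities at one wavelength relative to
    their mean: (I_max - I_min) / I_mean (assumed form of the metric)."""
    return (max(samples) - min(samples)) / (sum(samples) / len(samples))

# A 1% peak-to-peak wobble around a mean of 1.0 gives a 1% variation.
print(round(intensity_variation([0.995, 1.0, 1.005]), 4))  # 0.01
```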

In the test of camera linearity, an integrating sphere with a standard light source inside is used as the imaging object. The camera takes images of the exit port of the integrating sphere, with the exposure time ranging from 5 ms to 500 ms at an interval of 5 ms. From these images, the dependence of image brightness on exposure time is analyzed. A linear correlation coefficient of 0.99996 indicates good linearity between the image brightness and the exposure time of the camera.

#### 5.2. Spectral image acquisition

To avoid overexposed or dark images, a scan of the spectral images of the phone screen is conducted before image acquisition, during which the exposure time ranges from 50 ms to 500 ms, and the appropriate exposure time for each wavelength is selected. Then one spectral image cube is acquired in air at a fixed distance of 340 cm and 26 spectral image cubes are acquired at underwater distances of 10 cm, 20 cm, ⋯, 260 cm, respectively. Each image cube (both in air and underwater) consists of 15 images captured at wavelengths of 420 nm, 440 nm, ⋯, 700 nm, respectively. The image cubes captured at underwater distances of 10 cm, 20 cm, 40 cm, 50 cm, 70 cm, ⋯, 250 cm, 260 cm (i.e., two out of every three distances) are used for coefficient calibration and the rest for testing.

After image collection, the image brightness is normalized to an exposure time of 50 ms. The time-normalized image brightness *I _{n}* is calculated as

where *I*_{0} is the brightness of the raw image before normalization and *t* is the exposure time in ms. The raw brightness *I*_{0} is a unitless integer in the range [0, 255]. Since the exposure time *t* is always more than 50 ms during image acquisition, the normalized brightness *I _{n}* is in the range [0, 1].

The irradiance (in W · m^{−2} · nm^{−1}) at the image sensor can be determined if the imaging system is calibrated with a standard calibration facility and method, e.g., as described in [27]. However, since the focus of this paper is on the restoration method (i.e., restoring the underwater image such that its brightness is close to that in air), the normalized brightness *I _{n}* is used throughout image processing and analysis in the following steps.

#### 5.3. Coefficient calibration

As described in Section 3.1, brightness values in the air image and the underwater images corresponding to the same point or region of the calibration object are required for coefficient calibration. For this purpose, the central 5 × 5 pixels in each color piece image are selected and their brightness is averaged to represent the brightness of that color piece in that image. That is, we have 60 brightness values for each image, each representing the brightness of an individual color piece.

The coefficients *k*(*λ,l*) and *b*(*λ,l*) are estimated based on Eq. (15) and Eq. (16). The fitting results are shown in Fig. 5 for the wavelength of 460 nm and 620 nm at an underwater distance of 10 cm, 40 cm and 240 cm. To evaluate the accuracy of estimation, the relative standard deviation (RSD) is defined as

Here std(*y*) is the standard deviation of *y*. For the fitting of *k*(*λ,l*) and *b*(*λ,l*), the RSD is 5.5%, 3.2% and 6.0% for underwater distances of 20 cm, 160 cm and 260 cm, respectively, at a wavelength of 460 nm, showing accurate estimation of *k*(*λ,l*) and *b*(*λ,l*).
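The RSD metric can be sketched as follows; since the exact expression is given in the equation above (not reproduced here), the common form std(*y*)/mean(*y*) is assumed:

```python
import math

def rsd(y):
    """Relative standard deviation, assuming the common form
    std(y) / mean(y) (population standard deviation)."""
    m = sum(y) / len(y)
    std = math.sqrt(sum((v - m) ** 2 for v in y) / len(y))
    return std / m

print(round(rsd([0.9, 1.0, 1.1]), 4))  # population std ~0.0816 over mean ~1.0
```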

With *k*(*λ,l*) available, the unknown coefficients *α*(*λ*) and *β*(*λ*) are estimated based on Eq. (17) and (18). The fitting results are shown in Fig. 6 for the wavelengths of 460 nm and 620 nm. The maximal RSD is 8.66% for the entire calibration data set and 9.56% for the test set, which is very close to that of the calibration set.

Likewise, the hazing coefficients *ν*(*λ*), *κ*(*λ*) and *γ*(*λ*) are estimated from *b*(*λ,l*) based on Eq. (19) and (20). The fitting results are shown in Fig. 7 for wavelengths of 460 nm and 620 nm as well. From the plots, it can be seen that the fitting curve is very close to the data points, indicating accurate fitting of the coefficients.

#### 5.4. Image restoration

With all necessary coefficients (i.e., *α*(*λ*), *β*(*λ*), *ν*(*λ*), *κ*(*λ*) and *γ*(*λ*)) estimated, image restoration is accomplished according to Eq. (21). In Fig. 8(a), the raw underwater images and the images after restoration are shown for comparison at wavelengths of 460 nm and 620 nm and underwater distances of 60 cm, 150 cm and 240 cm (all in the test image set). As the underwater distance increases, the brightness of the underwater images decreases due to water attenuation, and the size of the image also decreases. It is also clearly visible that the brightness of the images is improved significantly after restoration.

To quantify the accuracy of image restoration, the restoration error *I _{e}* and the relative restoration error *ε* are evaluated as in Eq. (22) and Eq. (23), respectively. For the wavelength of 460 nm, the relative restoration error is 4.90%, 5.73% and 5.66% for distances of 60 cm, 150 cm and 240 cm, respectively. For the wavelength of 620 nm, the relative error is 4.45% at an underwater distance of 240 cm, which indicates that the energy loss at 620 nm has been compensated well.

Figure 9 shows the spectra of four color pieces before and after restoration. The raw spectra are collected from underwater images at an underwater distance of 240 cm. The results are also compared with the spectra in air. It can be seen that the spectra after restoration almost overlap those in air. The restoration errors of the four pieces are all less than 0.02. The relative restoration error is 3.58% on average over all 60 color pieces, all distances and all wavelengths, indicating accurate restoration of the spectral energy.

To show the restoration results intuitively, color images are presented in Fig. 10, where the color images are constructed from the spectral images as

where *S _{c}*(*λ*) is the quantum efficiency of a color camera (MER-030-120UC-L, Daheng, China), *T*(*λ*) is the peak transmittance of the filters, *S _{m}*(*λ*) is the quantum efficiency of the monochrome camera used for acquiring the spectral images (Lm-165M, Lumenera) and *I _{c}*(*x,y*) represents the brightness of each primary color. It can be seen in Fig. 10 that the restored color image is quite close to the one in air. The effect of restoration is clearly visible.

## 6. Discussion

#### 6.1. Influence of the number of filters on restoration

Because calibration and restoration are performed independently for each spectral band in the proposed method, increasing the number of filters without changing the bandwidth of the narrowband filters has no effect on the restoration accuracy of each individual band. In this paper, 15 narrowband filters with central wavelengths between 420 nm and 700 nm are preliminarily selected, simply to verify the effectiveness of the proposed method across the visible range and to show the generality of the method.

In general, if more filters are used, the spectral radiance can be measured at more wavelengths, which is helpful, for instance, for high-fidelity color restoration, where spectral information is required across the whole visible range. Conversely, if only the spectral radiance at certain wavelengths is of interest, the central wavelengths of the filters can be selected to cover only those wavelengths, thus reducing the complexity of the imaging system.

#### 6.2. Influence of the filter bandwidth on restoration

In the modeling of the underwater spectral imaging system, the derivation from Eq. (10) to Eq. (12) is based on the condition that the bandwidth of the filters is so narrow that the coefficients can be considered constant within the passband. The modeling error will increase and the accuracy of underwater image restoration will degrade if broadband filters (e.g., the red, green and blue color filters in standard color cameras) are used or if the attenuation coefficient of water varies rapidly with wavelength. On the other hand, a narrower bandwidth leads to better modeling accuracy but also decreases the light intensity in each band, thus reducing the signal-to-noise ratio of the images. Therefore, it is important to seek a balance between the bandwidth of the filter and the light intensity in each band.

#### 6.3. Dependence of the restoration error on the distance and wavelength

As shown in Fig. 11, the relative restoration error *ε* varies with the underwater distance and wavelength. As the underwater distance increases, the energy loss due to water attenuation becomes more severe. The underwater images then suffer more from stray light in the environment and noise in the imaging system, which leads to an increase in the restoration error.

The restoration error also varies with the central wavelength of the narrowband filter (see Fig. 11(b)). The error decreases with wavelength in the range of [420 nm, 540 nm] and then increases in the range of [540 nm, 680 nm]. This may be due to the fact that the spectral radiance emitted from the mobile phone screen changes significantly within the wavelength range of [420 nm, 480 nm], such that the approximation in Eq. (11) (i.e., the coefficients are constant within the passband of the filter) results in larger error for this wavelength range. As the wavelength increases from 540 nm, the attenuation coefficient of water increases as well. Therefore the signal level in underwater images is reduced and the restoration error increases.

#### 6.4. Sensitivity to realistic environmental conditions

Compared with lab conditions, the optical properties of the water change with time and location under realistic environmental conditions. To investigate how the restoration error changes with errors in the calibrated coefficients, a simulation is conducted with random deviations added to the calibrated coefficients *α* and *ν*. The maximal amplitude of the deviation is varied from 1% to 15% in steps of 1%, with 500 runs for each amplitude. The relative restoration error of the test images is evaluated for each run, with the maximum and mean shown in Fig. 12(a) for a central wavelength of 460 nm. The mean restoration error increases from below 6% to about 8% as the deviation grows to 15%, while the maximal restoration error rises much faster, reaching 14% at a deviation of 10%. Therefore, monitoring the optical properties of the water is important, and re-calibration is necessary if the water properties change significantly.

Measurement error (or noise) in the underwater distance also affects coefficient calibration and image restoration. A simulation is conducted to investigate how the restoration error changes with the error in the underwater distance measurement. Random noise with a maximal amplitude in the range of 1%–15% is introduced into the distance measurement for both coefficient calibration and image restoration, and the restoration error is evaluated over 500 runs for each amplitude. The maximal and mean relative restoration errors are shown in Fig. 12(b). Both the mean and the maximal restoration error increase with the measurement error; the maximal restoration error reaches 9% when the distance measurement error is about 10%.
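A minimal version of such a Monte Carlo sensitivity study can be sketched as follows. The coefficient, distance and noise-free measurement below are hypothetical stand-ins; the actual study uses the calibrated model with 500 runs per deviation amplitude:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true, d_true = 0.15, 2.0   # hypothetical coefficient (1/m) and distance (m)
signal_air = 100.0
# Noise-free brightness the camera would record underwater
measured = signal_air * np.exp(-alpha_true * d_true)

def mc_error(dev, runs=500):
    """Mean relative restoration error when the calibrated coefficient
    deviates uniformly within +/- dev of its true value."""
    devs = rng.uniform(-dev, dev, runs)
    restored = measured * np.exp(alpha_true * (1 + devs) * d_true)
    return np.mean(np.abs(restored - signal_air)) / signal_air

for dev in (0.01, 0.05, 0.10, 0.15):
    print(f"coefficient deviation {dev:.0%} -> mean restoration error {mc_error(dev):.2%}")
```

The same loop applies to distance-measurement noise by perturbing `d_true` instead of `alpha_true`; in both cases the error enters through the exponent α·d, which is why the two simulations show similar growth.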

## 7. Conclusion and future work

In this paper, a model is built for narrowband underwater spectral imaging, considering the optical properties of water, narrowband filters, the lens and the camera. Calibration and restoration methods are then proposed based on the model. An experimental setup is built in the lab to verify the proposed method. The restored images are compared with images captured in air, and the results show that the relative restoration error is 3.58% on average for the test image group, thus verifying the accuracy of the restoration method.

Future work will focus on spectral reflectance restoration for underwater objects and field test of the method.

## Acknowledgments

The authors would like to thank Qiang Yongfa, Na Di, Li Ruiqi and Chen Yao for their support in the experiments. The work is financially supported by the National High-tech R&D Program of China (863 Program) (No. 2014AA093400), the National Natural Science Foundation of China (No. 11304278) and the Open Fund of the State Key Laboratory of Satellite Ocean Environment Dynamics (No. SOED1606).

## References and links

**1. **C. Liu, W. Liu, X. Lu, W. Chen, and J. Yang, “Potential of multispectral imaging for real-time determination of colour change and moisture distribution in carrot slices during hot air dehydration,” Food Chem. **195**, 110–116 (2016). [CrossRef]

**2. **P. Launeau, A. R. Cruden, and J. L. Bouchez, “Mineral recognition in digital images of rocks: a new approach using multichannel classification,” Can. Mineral. **32**, 919 (1994).

**3. **R. Ismail, O. Mutanga, and U. Bob, “Forest health and vitality: the detection and monitoring of pinus patula trees infected by sirex noctilio using digital multispectral imagery,” Southern Hemisphere Forestry Journal **69**, 39–47 (2007). [CrossRef]

**4. **Y. H. Ahn and P. Shanmugam, “Detecting the red tide algal blooms from satellite ocean color observations in optically complex Northeast-Asia Coastal waters,” Remote Sens. Environ. **103**, 419–437 (2006). [CrossRef]

**5. **A. C. R. Gleason, R. P. Reid, and K. J. Voss, “Automated classification of underwater multispectral imagery for coral reef monitoring,” in Proceedings of IEEE/MTS OCEANS’07 (IEEE, 2007), pp. 1–8.

**6. **M. Mehrubeoglu, M. Y. Teng, and P. V. Zimba, “Resolving mixed algal species in hyperspectral images,” Sensors **14**, 1–21 (2013). [CrossRef]

**7. **M. Mehrubeoglu, D. K. Smith, S. W. Smith, K. B. Strychar, and L. McLauchlan, “Investigating coral hyperspectral properties across coral species and coral state using hyperspectral imaging,” Proc. SPIE **8870**, 88700M (2013). [CrossRef]

**8. **D. Zawada, “Image processing of underwater multispectral imagery,” IEEE Journal of Oceanic Engineering **28**, 583–594 (2003). [CrossRef]

**9. **P. J. Mumby, J. D. Hedley, J. Chisholm, C. Clark, H. Ripley, and J. Jaubert, “The cover of living and dead corals from airborne remote sensing,” Coral Reefs **23**, 171–183 (2004). [CrossRef]

**10. **S. Aarrestad, “Use of underwater hyperspectral imagery for geological characterization of the seabed,” Master’s thesis (Norwegian University of Science and Technology, 2014).

**11. **P. J. Mumby, C. D. Clark, E. P. Green, and A. J. Edwards, “Benefits of water column correction and contextual editing for mapping coral reefs,” International Journal of Remote Sensing **19**, 203–210 (1998). [CrossRef]

**12. **H. Holden and E. LeDrew, “Effects of the water column on hyperspectral reflectance of submerged coral reef features,” Bulletin of Marine Science **69**, 685–699 (2001).

**13. **H. Y. Yang, P. Y. Chen, C. C. Huang, Y. Z. Zhuang, and Y. H. Shiau, “Low complexity underwater image enhancement based on dark channel prior,” in Proceedings of International Conference on Innovations in Bio-inspired Computing and Applications, (IEEE, 2011), pp. 17–20.

**14. **R. Pettersen, G. Johnsen, P. Bruheim, and T. Andreassen, “Development of hyperspectral imaging as a bio-optical taxonomic tool for pigmented marine organisms,” Organisms Diversity & Evolution **14**, 237–246 (2014). [CrossRef]

**15. **G. Johnsen, Z. Volent, H. Dierssen, R. Pettersen, M. Ardelan, F. Sreide, P. Fearns, M. Ludvigsen, and M. Moline, “Underwater hyperspectral imagery to create biogeochemical maps of seafloor properties,” in *Subsea Optics and Imaging*, J. Watson and O. Zielinski, eds. (Woodhead Publishing, 2013), pp. 508–535. [CrossRef]

**16. **C. D. Mobley, L. K. Sundman, C. O. Davis, J. H. Bowles, T. V. Downes, R. A. Leathers, M. J. Montes, W. P. Bissett, D. D. R. Kohler, R. P. Reid, E. M. Louchard, and A. C. R. Gleason, “Interpretation of hyperspectral remote-sensing imagery by spectrum matching and look-up tables,” Appl. Opt. **44**, 3576–3592 (2005). [CrossRef] [PubMed]

**17. **J. Åhlén, E. Bengtsson, and D. Sundgren, “Evaluation of underwater spectral data for colour correction applications,” in Proceedings of the 5th WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing (World Scientific and Engineering Academy and Society, 2006), pp. 321–326.

**18. **J. Åhlén, “Colour correction of underwater images using spectral data,” Ph.D. thesis (Acta Universitatis Upsaliensis, 2005).

**19. **D. L. Bongiorno, M. Bryson, and S. B. Williams, “Dynamic spectral-based underwater colour correction,” in Proceedings of IEEE/MTS OCEANS’13 (IEEE, 2013), pp. 1–9.

**20. **J. Kaeli, H. Singh, C. Murphy, and C. Kunz, “Improving color correction for underwater image surveys,” in Proceedings of IEEE/MTS OCEANS’11 (IEEE, 2011), pp. 805–810.

**21. **M. Bryson, M. Johnson-Roberson, O. Pizarro, and S. B. Williams, “Colour-consistent structure-from-motion models using underwater imagery,” in *Robotics: Science and Systems VIII*, N. Roy, P. Newman, and S. Srinivasa, eds. (MIT, 2013).

**22. **A. Yamashita, M. Fujii, and T. Kaneko, “Color registration of underwater images for underwater sensing with consideration of light attenuation,” in Proceedings of 2007 IEEE International Conference on Robotics and Automation (IEEE, 2007), pp. 4570–4575.

**23. **M. Boffety, F. Galland, and A. Allais, “Color image simulation for underwater optics,” Appl. Opt. **51**, 5633–5642 (2012). [CrossRef] [PubMed]

**24. **D. R. Mishra, S. Narumalani, D. Rundquist, and M. Lawson, “Characterizing the vertical diffuse attenuation coefficient for downwelling irradiance in coastal waters: implications for water penetration by high resolution satellite data,” ISPRS Journal of Photogrammetry and Remote Sensing **60**, 48–64 (2005). [CrossRef]

**25. **M. A. Kara, M. Ennahachi, P. Fockens, F. J. W. ten Kate, and J. H. M. Bergman, “Detection and classification of the mucosal and vascular patterns (mucosal morphology) in Barrett’s esophagus by using narrow band imaging,” Gastrointest Endosc. **64**, 155–166 (2006). [CrossRef] [PubMed]

**26. **M. A. Kara, A. Mohammed, F. P. Peters, P. Fockens, F. J. W. ten Kate, and J. H. M. Bergman, “Endoscopic video-autofluorescence imaging followed by narrow band imaging for detecting early neoplasia in Barrett’s esophagus,” Gastrointest Endosc. **64**, 176–185 (2006). [CrossRef] [PubMed]

**27. **S. W. Brown, G. P. Eppeldauer, and K. R. Lykke, “NIST facility for spectral irradiance and radiance responsivity calibrations with uniform sources,” Metrologia **37**, 579–583 (2000). [CrossRef]

**28. **A. Ibrahim, S. Tominaga, and T. Horiuchi, “Invariant representation for spectral reflectance images and its application,” EURASIP J. Image Vide. **2011**, 1–12 (2011). [CrossRef]

**29. **X. Li and Z. Ceng, *Geometrical Optics, Aberrations and Optical Design*, 2nd ed. (Zhejiang University, 2007).

**30. **S. Lin and L. Zhang, “Determining the radiometric response function from a single grayscale image,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 66–73.

**31. **D. F. Swinehart, “The beer-lambert law,” J. Chem. Educ. **39**, 333 (1962). [CrossRef]

**32. **A. H. Munsell, “Munsell color system”, http://munsell.com/

**33. ** Thorlabs, “Optical filters”, http://www.thorlabs.com/