## Abstract

Demand for division-of-focal-plane (DoFP) polarization imaging technology is growing rapidly as nanofabrication technologies mature. For real-time polarization imaging, a DoFP polarimeter trades off spatial resolution, which may cause instantaneous field of view (IFoV) errors. To address this problem, interpolation methods are often used to fill in the missing polarization information. This paper presents an interpolation technique using Newton’s polynomial for DoFP polarimeter demosaicking. The interpolation is performed in the polarization difference domain with the interpolation error taken into consideration. The proposed method uses an edge classifier based on the polarization difference and a fusion scheme to recover boundary features more accurately. Experiments on both synthetic and real DoFP images in the visible and long-wave infrared spectra demonstrate that the proposed interpolation method outperforms state-of-the-art techniques both quantitatively and visually, reducing the artifacts caused by high-frequency energy.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

#### 1.1. Background

As a basic physical property of light beyond intensity and wavelength, polarization provides characteristic information about an object such as surface smoothness, three-dimensional (3D) normals, and material composition [1–3]. Polarization imaging has been widely applied in anti-interference object detection [4], 3D reconstruction [5], image dehazing [6], visual navigation [7], and biomedical imaging [8]. A polarization imaging system collects polarization information in different orientations. Existing polarization imagers include division-of-time, division-of-amplitude, division-of-aperture, and division-of-focal-plane (DoFP) types [9]. The DoFP polarimeter has received much attention due to its compact integrated structure and its ability to capture polarization information in real time. DoFP polarimeters capture images using a micro-polarizer array (MPA) image sensor as shown in Fig. 1. Each pixel receives polarized light at only one of four orientations, e.g., 0, 45, 90, and 135 degrees. Missing polarization components at neighboring pixels need to be estimated, or interpolated; otherwise instantaneous field of view (IFoV) errors [10] become significant. This polarization interpolation process is called polarization demosaicking (PDM).

To enhance spatial resolution and to reduce IFoV errors, several PDM methods have been developed for DoFP imaging. Classical PDM algorithms include bilinear, bicubic, bicubic spline, and gradient-based interpolation methods [11–13]. All of these methods perform interpolation in individual polarization channels, and therefore the correlation between different polarization orientations is ignored. Bilinear, bicubic, and bicubic spline interpolation methods are low-pass filters that distort high-frequency details. Gradient-based methods interpolate missing polarization components across the four basic gradient directions to obtain more accurate edge features; however, the interleaved gradients may introduce nonconformities due to IFoV errors. An edge-preserving interpolation method [14] detects edges directly in the DoFP images using intensity correlation. This method fills missing intensities along a single direction depending on a judgement between the two orthogonal directions, and such a binary decision can discard potentially useful estimates. Ahmed et al. [15] proposed an interpolation method in a “residual” domain using a guided filter for edge preservation. Unfortunately, its applicability to real DoFP images is limited because ideal guided images are difficult to obtain. More recent approaches [16,17] include sparse representation-based and deep learning-based PDM methods. The sparse representation-based methods achieve better PDM results but are more time-consuming than other methods. Deep learning-based PDM methods produce good results using convolutional demosaicking networks. Their training data consist of synthetic images generated by down-sampling images captured with division-of-time polarization imaging systems at four polarization angles into a DoFP pattern; therefore, the performance of such convolutional demosaicking networks on real DoFP data remains unknown.
The training method also may not be directly applicable to long-wave infrared (LWIR) data, since the emitted and reflected radiation of a front-mounted polarizer is received by the infrared focal plane array [18], which makes division-of-time and DoFP polarimeters essentially different in the LWIR spectrum.

The problem of reconstructing missing information also occurs in color imaging sensors with Bayer-pattern pixelated color filter arrays (CFA) [19]. Since the Bayer CFA samples the green band at twice the rate of the other two color bands, the demosaicking process typically begins with the green channel and then interpolates the red and blue channels using the estimated green channel via spectral correlation. Several interpolation algorithms [20,21] based on color difference interpolation have been proposed to recover a high-resolution color image from Bayer-filtered color components because of their simplicity of implementation and high accuracy. However, these algorithms may fail when applied directly in the polarization domain due to the inherent differences between color and polarization imaging mechanisms. Recent research [22] showed that the properties of multispectral images used for spectral filter array (SFA) demosaicking are similar to those of DoFP images, and some SFA demosaicking schemes have been successfully extended to PDM by considering the inter-channel correlation. These extended methods were tested only on a small database of images, however, and the conclusions need further validation.

This paper presents an interpolation technique using Newton’s polynomial for DoFP polarimeter demosaicking. A polarization difference model is proposed for DoFP image demosaicking to reduce the high-frequency energy that leads to deviations in the intensity domain. Newton’s polynomial interpolation is then performed in the polarization difference domain to estimate the interpolation errors that have not been investigated in other polynomial interpolation methods [11,12]. An edge classifier based on the polarization difference is applied, and a fusion strategy, rather than a binary decision, over the interpolation results in orthogonal directions is performed to recover details more accurately. The proposed method is compared with bilinear, bicubic, bicubic spline, gradient-based, and residual interpolation methods on both synthetic and real DoFP images in the visible and LWIR spectra, quantitatively using RMSE and visually. Experimental results demonstrate that the proposed method outperforms the state-of-the-art techniques on both synthetic and real DoFP data.

#### 1.2. Linear polarization calculation

A DoFP polarimeter with the structure in Fig. 1, capturing the intensity and polarization information of a scene, is essentially an intensity-based instrument. The four intensity measurements *I*(*θ*) of the light wave filtered by a linear polarizer oriented at *θ* (*θ* = 0°, 45°, 90°, 135°) from a DoFP sensor super-pixel are necessary and sufficient to compute the polarization parameters as follows [23]

$$S_0 = \frac{1}{2}\left[I(0°) + I(45°) + I(90°) + I(135°)\right], \quad S_1 = I(0°) - I(90°), \quad S_2 = I(45°) - I(135°).$$

*S*_{0}, *S*_{1}, and *S*_{2} are known as the first three Stokes parameters. *S*_{0} is the total intensity of the light, while *S*_{1} and *S*_{2} describe the amount of linear polarization. To observe polarization, two polarization properties are of most interest, the degree of linear polarization (DoLP) and the angle of polarization (AoP), defined as [24]

$$\mathrm{DoLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \quad \mathrm{AoP} = \frac{1}{2}\arctan\left(\frac{S_2}{S_1}\right).$$

The Stokes parameter *S*_{3}, which describes circular polarization, is not captured by the DoFP imager in Fig. 1 for lack of a retarder. In fact, measurements with four linear polarization filters offset by 45° are optimal for maximizing the signal-to-noise ratio and minimizing systematic error [25,26].
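As a concrete illustration of the super-pixel computation above, the following Python sketch evaluates the first three Stokes parameters, DoLP, and AoP from the four intensity measurements. The function names and the small `eps` guard are our own illustrative choices, not code from the paper.

```python
import numpy as np

def stokes_from_superpixel(i0, i45, i90, i135):
    """First three Stokes parameters from the four linear polarization
    measurements of a 0/45/90/135-degree DoFP super-pixel."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity (averaged form)
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 degree component
    return s0, s1, s2

def dolp_aop(s0, s1, s2, eps=1e-12):
    """Degree of linear polarization and angle of polarization."""
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)  # eps avoids division by zero
    aop = 0.5 * np.arctan2(s2, s1)               # radians
    return dolp, aop
```

For instance, fully polarized horizontal light (*I*(0°) = 1, *I*(45°) = *I*(135°) = 0.5, *I*(90°) = 0) yields DoLP ≈ 1 and AoP = 0.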

## 2. Method

This section presents a statistical analysis of the proposed polarization difference model and the Newton’s polynomial interpolation method with interpolation error estimation, followed by an overview of the proposed DoFP image demosaicking method. The proposed interpolation method combines polynomial interpolation error estimation with the polarization difference to obtain high spatial resolution.

#### 2.1. A polarization difference model

In DoFP image demosaicking, false and strong polarization signatures occur at discontinuities in either the polarization or the intensity domain, which contain essentially high-frequency information [13,15]. A major difficulty in DoFP demosaicking is reconstructing high-frequency information, such as the edges and boundaries of an object, which may lead to interpolation deviations. If the high-frequency energy is reduced, the interpolation deviation can also be reduced. This paper assumes that high-frequency energy can be reduced in the polarization difference domain, which is a reasonable assumption since all four polarization channels are correlated and share similar image structures such as texture and edges. A similar assumption is widely used in color/spectral-difference interpolation schemes [20,27,28] for CFA/SFA demosaicking, where compelling performance is achieved based on it.

To verify this assumption, a statistical analysis is carried out on the high-frequency energy of intensity images and polarization difference images for various scenes. Figure 2 shows 12 scenes containing strong edges, 6 in the visible and 6 in the LWIR spectrum. We calculate the mean high-frequency energy of the 4 polarization intensity images and of all 6 polarization difference images at spatial frequencies above 0.15 cycles per pixel, where the signal can hardly be reconstructed [12,15]. Figures 3(a) and 3(b) show the high-frequency energies of the visible and LWIR scenes, respectively. The mean high-frequency energies of the intensity images are shown in red (from dark to light red), while those of the polarization difference images are in blue (from dark to light blue). The statistical results indicate a clear reduction of high-frequency energy in the polarization difference images. Figure 4 plots the intensity images, polarization difference images, and corresponding spectrograms of two scenes in Fig. 2. The high-frequency energies are significantly reduced in the spectrograms of the polarization difference images, as in the yellow dashed rectangle regions; the spectrograms of the other scenes show similar trends. With its ability to reduce high-frequency energy, the polarization difference model can help demosaick DoFP images more accurately.
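The high-frequency energy statistic described above can be sketched as follows. The exact spectral measure used in the analysis is not given in code form, so this is an illustrative implementation: the fraction of non-DC spectral power beyond the 0.15 cycles-per-pixel cutoff.

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.15):
    """Fraction of (non-DC) spectral energy above `cutoff` cycles/pixel."""
    h, w = img.shape
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    fr = np.sqrt(fx[None, :] ** 2 + fy[:, None] ** 2)  # radial spatial frequency
    total = spec[fr > 0].sum()                          # exclude the DC term
    return spec[fr > cutoff].sum() / total
```

Under this measure, a smooth ramp image scores lower than the same ramp with a fine checkerboard superimposed, mirroring the intensity-vs-difference comparison in Fig. 3.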

#### 2.2. Newton’s polynomial interpolation method with interpolation error estimation

Newton’s polynomial interpolation is then applied to the demosaicking process with the polarization difference model. Conventional polynomial interpolation approaches [10–14] often do not take the interpolation error into consideration, although it is unavoidable in PDM. This paper combines Newton’s polynomial interpolation with the polarization difference model to estimate the interpolation error and to achieve better interpolation performance.

Assume that a function *f* ∈ *C*^{n+1}[*a*, *b*] and that *x*_{0}, *x*_{1}, ..., *x*_{n} are distinct nodes in [*a*, *b*]. The corresponding *n*-th Newton’s interpolating polynomial is

$$N_n(x) = f(x_0) + \sum_{k=1}^{n} f[x_0, x_1, \ldots, x_k] \prod_{m=0}^{k-1} (x - x_m),$$

where *f*[*x*_{0}, *x*_{1}, ..., *x*_{k}] is the divided difference [29]. Once the given function has been interpolated, we wish to know how good the interpolating polynomial is. Let *f* be *n* + 1 times continuously differentiable on the closed interval [*a*, *b*]; then for each *x* in the interval there exists a *c* in that interval such that

$$f(x) - N_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!} \prod_{m=0}^{n} (x - x_m),$$

where the error function can be given in the form of the Taylor series [29].

For simplicity and effectiveness, we consider *n* = 1 with two points *x*_{0} and *x*_{1} = *x*_{0} + *h*. It follows from Eqs. (3) and (5) that there is a point *ξ* such that

$$f(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{h}(x - x_0) + \frac{f''(\xi)}{2}(x - x_0)(x - x_1).$$

Letting $x={x}_{0}+\frac{h}{2}$, we have

$$f\left(x_0 + \frac{h}{2}\right) = \frac{f(x_0) + f(x_1)}{2} - \frac{h^2}{8} f''(\xi).$$

In Fig. 5, pixels of the same orientation are separated by one pixel in the DoFP image, that is, *h* = 2. Then the estimate of *I*_{90}(*i*, *j* − 1), which is missing at position (*i*, *j* − 1), can be calculated using Eq. (8)

Unfortunately, we cannot calculate *f″*(*ξ*), because the values of *I*_{90}(*i*, *j* + 1), *I*_{90}(*i*, *j* − 1) and *I*_{90}(*i*, *j* − 3) are unknown. However, we can estimate a good candidate value for *f″*(*ξ*) by considering the polarization difference model. From Eq. (10), let the polarization difference plane Λ between *I*_{135} and *Î*_{90} be defined as

By applying the polarization difference model of Eq. (11), we obtain

The polarization differences are nearly constant over small regions, i.e., Λ(*i*, *j* + 1) ≈ Λ(*i*, *j* − 1) ≈ Λ(*i*, *j* − 3). Therefore, Eq. (12) can be approximated as

Substituting Eq. (13) into Eq. (9)

The estimate of *I*_{90}(*i*, *j* + 1) can be computed similarly.
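The derivation above can be summarized, for one missing sample, by a small sketch. The helper name, argument layout, and spacing *h* = 2 are illustrative assumptions consistent with Fig. 5: the two-point Newton estimate at the midpoint is corrected by a second derivative that is approximated from the co-located channel *g* (e.g., *I*_{135}) under the constant-difference assumption *f* − *g* ≈ const.

```python
def newton_midpoint_estimate(f_left, f_right, g_left, g_mid, g_right):
    """Estimate the unknown channel f at the midpoint of f_left/f_right
    (samples of f spaced h = 2 apart).  f'' is unavailable, so it is
    approximated by the second difference of the co-located channel g
    (sampled at the midpoint and two pixels to either side), using the
    assumption that the difference plane f - g is locally constant."""
    h = 2.0
    second_deriv = (g_left - 2.0 * g_mid + g_right) / h**2  # f'' ~= g''
    return 0.5 * (f_left + f_right) - (h**2 / 8.0) * second_deriv
```

For a quadratic signal *f*(*x*) = *x*² with *g* = *f* + 5, the corrected estimate recovers the true midpoint value exactly, whereas the plain two-point average does not.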

#### 2.3. MPA demosaicking with Newton’s polynomial interpolation method

This section introduces the proposed Newton’s polynomial interpolation-based MPA demosaicking method, taking the interpolation of the three missing pixel values at the 90° sampled position *I*_{90}(*i*, *j*) in Fig. 5 as an example. Some high-frequency energy still remains in the polarization difference domain, as shown in Fig. 4, and the assumption that polarization differences are nearly constant over small regions may fail in this situation. To solve this problem, we propose an edge classifier to refine the interpolation results in edge and boundary regions. Since the polarization difference plane is smoother than the polarization intensity plane, interpolating a pixel on the polarization difference plane is much more effective than using the original values. In the proposed method, we interpolate the missing polarization values with the use of the polarization difference model.

First, we interpolate the missing 0° polarization information, which is orthogonal to the 90° sampled pixel. As shown in Fig. 6(a), we compute two estimates of *Î*_{0}(*i*, *j*) along the two orthogonal diagonal directions with the use of the polarization difference model. We define the interpolation predictor for the 45° diagonal direction in the polarization difference domain *I*_{0} − *I*_{90}. Using the polarization difference model, we have

Note that we can use some intermediate computation results to reduce computation burden.

The second step is to make a final estimation of *Î*_{0}(*i*, *j*) based on the edge classifiers. Let *φ*^{45°} and *φ*^{−45°} be the costs of the 45° and −45° diagonal directions, respectively, defined as [21]

The ratio Φ of the two cost value is used as the edge classifier, which is calculated by

If Φ is larger than a preset threshold *τ* (*τ* = 5.8 in our experiments), the pixel is in a strong edge region. In this case, the missing *Î*_{0}(*i*, *j*) is interpolated as follows:

Pixels that are not classified into a strong edge region (Φ ≤ *τ*) are interpolated using the weighted sum. Here *ω*^{45°} and *ω*^{−45°} are weights for the two orthogonal diagonal directions, respectively, defined as

*ε* is a small number added to the denominators to avoid division by zero. For computational simplicity, we use the directional differences *d*^{45°} and *d*^{−45°} of Eq. (22) in Eq. (21). This guarantees that the direction of strong change has a small weight.
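A minimal sketch of the classify-then-fuse step described above, under simplifying assumptions: the directional differences themselves stand in for the neighborhood costs *φ* of Eq. (18), all quantities are scalars, and the function name is ours. The threshold *τ* = 5.8 is the value reported in the experiments.

```python
def fuse_diagonal_estimates(est_p45, est_m45, d_p45, d_m45,
                            tau=5.8, eps=1e-8):
    """Fuse the two diagonal estimates of a missing pixel.
    est_p45/est_m45: estimates along the +45/-45 diagonal.
    d_p45/d_m45: directional differences (simplified costs)."""
    ratio = max(d_p45, d_m45) / (min(d_p45, d_m45) + eps)
    if ratio > tau:
        # strong edge: keep only the estimate along the low-cost direction
        return est_p45 if d_p45 < d_m45 else est_m45
    # otherwise: inverse-difference weighted average, so the direction
    # of strong change receives a small weight
    w_p45 = 1.0 / (d_p45 + eps)
    w_m45 = 1.0 / (d_m45 + eps)
    return (w_p45 * est_p45 + w_m45 * est_m45) / (w_p45 + w_m45)
```

The binary branch fires only on strong edges; elsewhere both estimates contribute, which is the fusion-instead-of-binary-decision idea of the method.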

After interpolating *Î*_{0}(*i*, *j*), the missing 45° and 135° polarization information is filled in at the 90° sampled position *I*_{90}(*i*, *j*) in Fig. 5. As shown in Fig. 6(b), we can obtain only a vertical estimate ${\widehat{I}}_{45}^{V}(i,j)$ according to Eq. (16), because no 45° polarization samples are available in the horizontal direction. However, we can obtain reliable estimates of the missing 45° polarization information in the horizontal direction (black dashed rectangles in Fig. 6(b)) by the interpolation process above with a classifier in the two orthogonal diagonal directions. Based on this, we obtain a horizontal estimate ${\widehat{I}}_{45}^{H}(i,j)$. Then, with an edge classifier similar to Eqs. (18) and (19) but applied in the horizontal and vertical directions, the missing *Î*_{45}(*i*, *j*) can be interpolated. Likewise, we can recover the missing *Î*_{135}(*i*, *j*) as shown in Fig. 6(c). Repeating the above interpolation procedures, we can fill in the missing polarization information at all the other spatial positions. Figure 7 shows the key procedures of the proposed MPA demosaicking method.

## 3. Experiment results

To test the performance of the proposed MPA demosaicking algorithm comprehensively, we carry out three different experiments. First, we compare and evaluate the modulation transfer function (MTF) response of different interpolation methods. Second, we perform visual and quantitative comparisons on synthetic DoFP images in both the visible and LWIR spectra. Finally, real raw DoFP images are used to demonstrate the performance of the proposed method.

#### 3.1. Modulation transfer function evaluation

The modulation transfer function (MTF) is a function of spatial frequency commonly used to measure the ability of an imaging system to restore the details of a scene. The MTF is computed as the ratio of the contrast of the output sinusoidal pattern to the contrast of the input sinusoidal pattern at different frequencies. Therefore, a spatially varying sinusoidal pattern at various frequencies is used as the object image to evaluate the MTF of an imaging system. For a DoFP polarization imaging sensor, which records the polarization information of the scene in a single shot, the four input object polarization images can be defined as spatially varying sinusoids [12], where *f*_{x} and *f*_{y} are the spatial frequency components in the horizontal and vertical directions in cycles per pixel.

The original images are rearranged into a simulated image with the sampling pattern of the DoFP polarization imaging sensor, and the original and interpolated images are then used to compute the MTF of the system. Next, the Fourier transform is computed on the intensity images, *S*_{0}, from the original and interpolated data. In Eq. (23), the object image is a cosine function with horizontal and vertical frequencies (*f*_{x}, *f*_{y}). Accordingly, the Fourier image has two main frequency peaks, at (*f*_{x}, *f*_{y}) and (−*f*_{x}, −*f*_{y}). Therefore, the MTF is computed as the ratio of the magnitude of the main frequency peak in the interpolated result, *M*_{interpolated}(*f*_{x}, *f*_{y}), to the magnitude of the main frequency peak in the original image, *M*_{original}(*f*_{x}, *f*_{y}).

The MTF results of different interpolation methods are presented in Fig. 8. The residual interpolation method in [15] uses four original high-resolution polarization images as guided images, which is not applicable to DoFP images because ideal high-resolution guided images are hard to obtain in the PDM problem. On the other hand, the input DoFP image cannot be used directly as the reference image because its checkerboard effect may cause artifacts. In this paper, the mean-filtered result of the input DoFP image with a 2 × 2 window is used as the reference image. This process is simple, maintains the major edge information of the scene, and does not affect the overall performance, because the interpolation is carried out in the residual domain, which guarantees reconstruction accuracy. The same scheme is applied in the synthetic and real DoFP data tests below. For an intuitive comparison, the MTF responses of the five interpolation methods are plotted in a single chart at equal vertical and horizontal frequencies, *f*_{x} = *f*_{y}. The spatial frequencies *f*_{x} and *f*_{y} are swept from 0 up to 0.5 cycles per pixel, because by the Nyquist sampling theorem the MTF response drops to zero when the spatial frequency exceeds 0.5 cycles per pixel, and the MTF response is computed at each frequency.

Figure 8 shows that the MTF responses of the first four interpolation methods decrease relatively quickly at low spatial frequencies and all drop to zero at *f*_{x} = *f*_{y} = 0.375. The residual interpolation method is overall superior to the gradient-based interpolation method, while both have similar performance at high frequencies. The bicubic spline interpolation method preserves low-frequency features as well as the residual interpolation does but performs poorly at high frequencies. In contrast, our method sustains higher gain below 0.25 cycles per pixel and decreases gradually to zero at 0.5 cycles per pixel. The proposed interpolation method best preserves both low- and high-frequency features.

One may notice discontinuities and values over one in the MTF result of the proposed method. The object image contains periodic values (including zero) because of the spatially varying sinusoids defined in Eq. (23), which leads to zero values when calculating the directional cost *φ* in Eq. (18) and the directional difference *d* in Eq. (22). Therefore, division by zero or by a very small number can occur in the calculation of the ratio Φ and the weights *ω*. At each frequency, we count the pixels where one of the two directional *d* values is zero and where one of the two directional *φ* values is zero; the two statistics are shown in Figs. 9(a) and 9(b). We further count the number of pixels with Φ ≤ *τ* at each frequency, and the result is drawn in Fig. 9(c). The frequencies where discontinuities appear in Fig. 9 match those in our MTF result (including the values over one at lower frequencies), which explains their presence. This does not affect the overall performance of our method, as demonstrated in the following tests.
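The MTF evaluation procedure can be sketched as follows. Simple bilinear reconstruction of one quarter-sampled channel stands in for the method under test, the lattice layout and helper names are illustrative assumptions, and a diagonal sinusoid serves as the object image.

```python
import numpy as np

def conv_wrap(img, ker):
    """Periodic 2-D convolution via shifted sums (odd-sized kernel)."""
    out = np.zeros_like(img)
    kh, kw = ker.shape
    for dy in range(kh):
        for dx in range(kw):
            out += ker[dy, dx] * np.roll(img, (kh // 2 - dy, kw // 2 - dx),
                                         axis=(0, 1))
    return out

def mtf_at_frequency(f, n=64):
    """Ratio of the main FFT peak of the demosaicked channel to that of
    the original sinusoidal object image (Sec. 3.1 procedure sketch)."""
    y, x = np.mgrid[0:n, 0:n]
    obj = 0.5 + 0.5 * np.cos(2.0 * np.pi * f * (x + y))  # diagonal sinusoid
    sampled = np.zeros_like(obj)
    sampled[0::2, 0::2] = obj[0::2, 0::2]                # one MPA channel
    ker = np.outer([1.0, 2.0, 1.0], [1.0, 2.0, 1.0]) / 4.0  # bilinear kernel
    interp = conv_wrap(sampled, ker)
    k = int(round(f * n))                                # main peak bin
    peak = lambda im: np.abs(np.fft.fft2(im))[k, k]
    return peak(interp) / peak(obj)
```

As expected for a low-pass reconstruction, the measured response is close to one at low frequency and falls off as the frequency grows toward Nyquist.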

#### 3.2. Test on synthetic DoFP images

To validate the proposed DoFP demosaicking algorithm, we used synthetic DoFP mosaic test images of all the scenes in Fig. 2. The first six groups of four polarization images at 0°, 45°, 90°, and 135° orientations in the visible spectrum were captured by a Nikon D90 DSLR camera with a linear polarization filter. The remaining six groups of four polarization images, for scenes 7 to 12 in Fig. 2, were recorded by a FLIR T420 LWIR camera with an infrared linear polarizer (Thorlabs WP50H-B). The synthetic DoFP mosaic images were then generated by down-sampling the four captured polarization images according to the sampling pattern in Fig. 1.
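Generating a synthetic DoFP mosaic from four full-resolution polarization images can be sketched as below. The exact 2 × 2 super-pixel layout (0°/45° over 135°/90°) is an assumption standing in for the pattern in Fig. 1.

```python
import numpy as np

def mosaic_dofp(i0, i45, i90, i135):
    """Down-sample four co-registered polarization images into one
    DoFP mosaic (assumed 2x2 layout: 0/45 on top, 135/90 below)."""
    m = np.empty_like(i0)
    m[0::2, 0::2] = i0[0::2, 0::2]     # 0 degrees
    m[0::2, 1::2] = i45[0::2, 1::2]    # 45 degrees
    m[1::2, 0::2] = i135[1::2, 0::2]   # 135 degrees
    m[1::2, 1::2] = i90[1::2, 1::2]    # 90 degrees
    return m
```

Each pixel of the mosaic keeps exactly one of the four measurements; the demosaicking task is to undo this down-sampling.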

By applying the different interpolation methods to the test images, we obtained four interpolated images for each method. We then evaluated the interpolation performance by comparing the interpolated images with the true images. The interpolation methods used for comparison are bilinear interpolation [12], bicubic interpolation [12], bicubic spline interpolation [12], gradient-based interpolation [13], and residual interpolation [15]. Figure 10 shows the blown-up results on four selected scenes in Fig. 2, including two reconstructed intensity comparisons of the regions in green rectangles in scenes 5 and 10 and two reconstructed DoLP comparisons of the regions in black rectangles in scenes 6 and 7. In Fig. 10, the first two rows show the intensity (*S*_{0}) images and error images in the jet false color model for intuitive comparison, and the DoLP images are listed in the remaining rows in the HSV false color model.

For visual comparison, the ground-truth polarization images are placed in the first column. The interpolation methods in [12] produce highly visible jaggy artifacts in the blown-up DoLP images; these are mitigated by the methods in [13] and [15], but slight serrated artifacts still remain. The last column presents the results of our proposed method, which are essentially free of jaggy artifacts and close to the ground-truth polarization images. Moreover, our method generates intensity results closely matching the ground truth, while the other methods produce notable interpolation errors.

For a quantitative comparison, we employ the root mean square error (RMSE) to evaluate the accuracy of the different interpolation methods:

$$\mathrm{RMSE} = \sqrt{\frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left[ I_{\mathrm{true}}(i, j) - I_{\mathrm{interpolated}}(i, j) \right]^{2}},$$

where *M* and *N* represent the height and width of the image, and *I*_{true}(*i*, *j*) and *I*_{interpolated}(*i*, *j*) represent the true image and the interpolated image, respectively. The average RMSE results over all 12 scenes in Fig. 2 are listed in Table 1, with the best results in bold; they are in agreement with the visual quality evaluations. The proposed polynomial interpolation method has the lowest RMSE. By applying polynomial interpolation with the polarization difference model, we obtain better performance in reconstructing high-frequency details, and the RMSE comparison shows that our method reconstructs polarization images with the highest accuracy.
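The RMSE criterion translates directly to code; a minimal sketch (function name is ours):

```python
import numpy as np

def rmse(true_img, interp_img):
    """Root mean square error between ground truth and interpolation."""
    diff = np.asarray(true_img, dtype=float) - np.asarray(interp_img, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```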

The accuracy of both the degree and the angle of polarization depends strongly on the noise in DoFP images. Noise in the measurements recorded by camera-based systems and spectrometers can lead to significant artifacts and incorrect conclusions about high degrees of polarization where in fact none exist [30]. Several denoising methods [31–34] have been developed to estimate and reduce the noise before performing more accurate PDM interpolation. To analyze how temporal noise affects interpolation performance, the PSNR is used as an objective evaluation criterion. Scene 9 in Fig. 2 is used, and the standard deviation of the noise is increased in steps of 0.5 from 1 to 20. The corresponding PSNR curves are plotted in Fig. 11. The PSNR of all methods decreases quickly as the noise increases. For the intensity image, the proposed method performs best under weak noise, while the bilinear, bicubic, and bicubic spline interpolation methods give better results as the noise increases because they are essentially low-pass filters. Our method outperforms the other methods on the DoLP image. To achieve better interpolation performance under strong noise, a denoising method [32–34] should be applied first.
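The PSNR criterion used in the noise analysis can likewise be sketched; the peak value of 255 is an assumed parameter for 8-bit data.

```python
import numpy as np

def psnr(true_img, test_img, peak=255.0):
    """Peak signal-to-noise ratio in dB; `peak` is the assumed maximum
    pixel value of the image format."""
    diff = np.asarray(true_img, dtype=float) - np.asarray(test_img, dtype=float)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

Identical images give infinite PSNR; a uniform error equal to the full dynamic range gives 0 dB.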

#### 3.3. Experiments on real DoFP images

A similar comparison study was also carried out on two real DoFP images. The first scene is cars in a parking lot, captured using our own DoFP LWIR polarization imager. The imager is built around a wire-grid micro-polarizer array as described in [35]; several MPA-based polarization imagers [36–42] have been developed in different spectra. The spatial resolution of each frame is 640 × 512 and the bit depth is 14 bits. A nonuniformity correction method [43] was used to calibrate the LWIR DoFP images. The second real test scene is an indoor scene recorded with a Sony IMX250-MZR polarized image sensor that works in the visible spectrum. The Genie Nano-M2450-Polarized model features a monochrome quad polarization filter, a resolution of 2448 × 2048 pixels, and image capture at 35 frames per second. For calibration of the Sony IMX250-MZR sensor, the method of [44] is applied in the visible spectrum. The two real DoFP images are denoised with a PCA-based method [32] and shown in Fig. 12.

Figure 13 shows the blown-up DoLP images of the green regions in Fig. 12 for the different interpolation methods. As shown in the first row, most of the smooth background objects such as the road can be reconstructed free of visible artifacts by the interpolation methods in [12,13,15]. However, on the car, where sharp edges accompany abrupt polarization changes, these methods cannot faithfully recover the edge features. For sharp edges with weak polarization correlation, a single polarization channel does not contain sufficient information to reconstruct the polarization signal; this is the situation where the polarization difference comes to the rescue. For the second scene, our method generates the best result, consistent with the first scene. The proposed method has clear advantages over all the others in terms of visual quality: most of the polarization artifacts are eliminated, and many sharp edge structures that are missing or distorted in the other interpolation methods are well reconstructed by the polarization-difference-based polynomial interpolation demosaicking.

## 4. Conclusion

In this paper, a Newton’s polynomial interpolation method with a polarization difference model is proposed for DoFP polarimeter PDM. The main contributions of the proposed method are: (1) the conception of a polarization difference model, (2) interpolation error estimation through the combination of Newton’s polynomial interpolation and the polarization difference, and (3) a fusion strategy with an edge classifier in the polarization difference domain. The proposed method was tested on synthetic and real DoFP images in both the visible and LWIR spectra and compared with bilinear, bicubic, bicubic spline, gradient-based, and residual interpolation methods. The RMSE and visual comparison results show that the proposed scheme outperforms existing PDM methods.

## Funding

National Natural Science Foundation of China (NSFC) (61771391, 61371152); Science, Technology and Innovation Commission of Shenzhen Municipality (JCYJ20170815162956949); Korean National Research Foundation (NRF-2016R1D1A1B01008522).

## References

**1. **P. Terrier, V. Devlaminck, and J. M. Charbois, “Segmentation of rough surfaces using a polarization imaging system,” J. Opt. Soc. Am. A **25**, 423–430 (2008). [CrossRef]

**2. **D. Miyazaki, T. Shigetomi, M. Baba, R. Furukawa, S. Hiura, and N. Asada, “Surface normal estimation of black specular objects from multiview polarization images,” Opt. Eng. **56**, 041303 (2016). [CrossRef]

**3. **M. W. Hyde, S. C. Cain, J. D. Schmidt, and M. J. Havrilla, “Material classification of an unknown object using turbulence-degraded polarimetric imagery,” IEEE Trans. Geosci. Remote Sens. **49**, 264–276 (2011).

**4. **N. Li, Y. Zhao, Q. Pan, and S. G. Kong, “Removal of reflections in LWIR image with polarization characteristics,” Opt. Express **26**, 16488–16504 (2018).

**5. **Z. Chen, X. Wang, and R. Liang, “Snapshot phase shift fringe projection 3D surface measurement,” Opt. Express **23**, 667–673 (2015).

**6. **L. Shen, Y. Zhao, Q. Pan, J. C.-W. Chan, and S. G. Kong, “An iterative image dehazing method with polarization,” IEEE Trans. Multimed. (2018) (in press).

**7. **M. Reda, Y. Zhao, and J. C.-W. Chan, “Polarization guided autoregressive model for depth recovery,” IEEE Photonics J. **9**, 1–16 (2017).

**8. **M. Garcia, C. Edmiston, T. York, R. Marinov, S. Mondal, N. Zhu, G. P. Sudlow, W. J. Akers, J. Margenthaler, S. Achilefu, R. Liang, M. A. Zayed, M. Y. Pepino, and V. Gruev, “Bio-inspired imager improves sensitivity in near-infrared fluorescence image-guided surgery,” Optica **5**, 413–422 (2018).

**9. **J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. **45**, 5453–5469 (2006).

**10. **B. M. Ratliff, C. F. LaCasse, and J. S. Tyo, “Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery,” Opt. Express **17**, 9112–9125 (2009).

**11. **S. Gao and V. Gruev, “Image interpolation methods evaluation for division of focal plane polarimeters,” in *Infrared Technology and Applications XXXVII*, vol. 8012 (International Society for Optics and Photonics, 2011), p. 80120N.

**12. **S. Gao and V. Gruev, “Bilinear and bicubic interpolation methods for division of focal plane polarimeters,” Opt. Express **19**, 26161–26173 (2011).

**13. **S. Gao and V. Gruev, “Gradient-based interpolation method for division-of-focal-plane polarimeters,” Opt. Express **21**, 1137–1151 (2013).

**14. **J. Zhang, H. Luo, B. Hui, and Z. Chang, “Image interpolation for division of focal plane polarimeters with intensity correlation,” Opt. Express **24**, 20799–20807 (2016).

**15. **A. Ahmed, X. Zhao, V. Gruev, J. Zhang, and A. Bermak, “Residual interpolation for division of focal plane polarization image sensors,” Opt. Express **25**, 10651–10662 (2017).

**16. **J. Zhang, H. Luo, R. Liang, A. Ahmed, X. Zhang, B. Hui, and Z. Chang, “Sparse representation-based demosaicing method for microgrid polarimeter imagery,” Opt. Lett. **43**, 3265–3268 (2018).

**17. **J. Zhang, J. Shao, H. Luo, X. Zhang, B. Hui, Z. Chang, and R. Liang, “Learning a convolutional demosaicing network for microgrid polarimeter imagery,” Opt. Lett. **43**, 4534–4537 (2018).

**18. **S. Li, W. Jin, R. Xia, L. Li, and X. Wang, “Radiation correction method for infrared polarization imaging system with front-mounted polarizer,” Opt. Express **24**, 26414–26430 (2016).

**19. **B. E. Bayer, “Color imaging array,” US patent 3,971,065 (1976).

**20. **X. Li, B. Gunturk, and L. Zhang, “Image demosaicing: A systematic survey,” in *Visual Communications and Image Processing 2008*, vol. 6822 (International Society for Optics and Photonics, 2008), p. 68221J.

**21. **J. Wu, M. Anisetti, W. Wu, E. Damiani, and G. Jeon, “Bayer demosaicking with polynomial interpolation,” IEEE Trans. Image Process. **25**, 5369–5382 (2016).

**22. **S. Mihoubi, P. J. Lapray, and L. Bigué, “Survey of demosaicking methods for polarization filter array images,” Sensors **18**, 3688 (2018).

**23. **D. H. Goldstein, *Polarized Light, Revised and Expanded* (Marcel Dekker, Inc., 2003).

**24. **Y. Zhao, C. Yi, S. G. Kong, Q. Pan, and Y. Cheng, *Multi-band Polarization Imaging and Applications* (Springer Berlin Heidelberg, 2016).

**25. **J. S. Tyo, “Design of optimal polarimeters: maximization of signal-to-noise ratio and minimization of systematic error,” Appl. Opt. **41**, 619 (2002).

**26. **J. S. Tyo, “Optimum linear combination strategy for an N-channel polarization-sensitive imaging or vision system,” J. Opt. Soc. Am. A **15**, 359–366 (1998).

**27. **S. P. Jaiswal, L. Fang, V. Jakhetiya, J. Pang, K. Mueller, and O. C. Au, “Adaptive multispectral demosaicking based on frequency-domain analysis of spectral correlation,” IEEE Trans. Image Process. **26**, 953–968 (2017).

**28. **K. H. Chung and Y. H. Chan, “Color demosaicing using variance of color differences,” IEEE Trans. Image Process. **15**, 2944–2955 (2006).

**29. **J. Stoer and R. Bulirsch, *Introduction to Numerical Analysis*, vol. 12 (Springer Science & Business Media, 2013).

**30. **A. B. Tibbs, I. M. Daly, D. R. Bull, and N. W. Roberts, “Noise creates polarization artefacts,” Bioinspir. Biomim. **13**, 015005 (2017).

**31. **E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using Gaussian processes,” Opt. Express **22**, 15277–15291 (2014).

**32. **J. Zhang, H. Luo, R. Liang, W. Zhou, B. Hui, and Z. Chang, “PCA-based denoising method for division of focal plane polarimeters,” Opt. Express **25**, 2391–2400 (2017).

**33. **S. Li, W. Ye, H. Liang, X. Pan, X. Lou, and X. Zhao, “K-SVD based denoising algorithm for DoFP polarization image sensors,” in *2018 IEEE International Symposium on Circuits and Systems (ISCAS)* (IEEE, 2018), pp. 1–5.

**34. **A. Abubakar, X. Zhao, S. Li, M. Takruri, E. Bastaki, and A. Bermak, “A block-matching and 3-D filtering algorithm for Gaussian noise in DoFP polarization images,” IEEE Sens. J. **18**, 7429–7435 (2018).

**35. **D. Vorobiev, Z. Ninkov, and M. Gartley, “Polarization in a snap: imaging polarimetry with micropolarizer arrays,” in *Polarization: Measurement, Analysis, and Remote Sensing XI*, vol. 9099 (International Society for Optics and Photonics, 2014), p. 909904.

**36. **M. Zhang, X. Wu, N. Cui, N. Engheta, and J. Van der Spiegel, “Bioinspired focal-plane polarization image sensor design: from application to implementation,” Proc. IEEE **102**, 1435–1449 (2014).

**37. **X. Zhao, F. Boussaid, A. Bermak, and V. G. Chigrinov, “High-resolution thin “guest-host” micropolarizer arrays for visible imaging polarimetry,” Opt. Express **19**, 5565–5573 (2011).

**38. **M. Garcia, C. Edmiston, R. Marinov, A. Vail, and V. Gruev, “Bio-inspired color-polarization imager for real-time in situ imaging,” Optica **4**, 1263–1271 (2017).

**39. **M. Sarkar, D. S. S. Bello, C. van Hoof, and A. J. Theuwissen, “Biologically inspired CMOS image sensor for fast motion and polarization detection,” IEEE Sens. J. **13**, 1065–1073 (2013).

**40. **K. Sasagawa, S. Shishido, K. Ando, H. Matsuoka, T. Noda, T. Tokuda, K. Kakiuchi, and J. Ohta, “Image sensor pixel with on-chip high extinction ratio polarizer based on 65-nm standard CMOS technology,” Opt. Express **21**, 11132–11140 (2013).

**41. **N. R. Malone, A. Kennedy, R. Graham, Y. Thai, J. Stark, J. Sienicki, and E. Fest, “Staring MWIR, LWIR and 2-color and scanning LWIR polarimetry technology,” in *Infrared Remote Sensing and Instrumentation XIX*, vol. 8154 (International Society for Optics and Photonics, 2011), p. 81540T.

**42. **Y. Maruyama, T. Terada, T. Yamazaki, Y. Uesaka, M. Nakamura, Y. Matoba, K. Komori, Y. Ohba, S. Arakawa, and Y. Hirasawa, “3.2-MP back-illuminated polarization image sensor with four-directional air-gap wire grid and 2.5-μm pixels,” IEEE Trans. Electron Devices **65**, 2544–2551 (2018).

**43. **H. Liu, Z. Shi, B. Feng, B. Hui, and Y. Zhao, “Non-uniformity calibration for MWIR polarization imagery obtained with integrated microgrid polarimeters,” in *Selected Papers of the Chinese Society for Optical Engineering Conferences held October and November 2016*, vol. 10255 (International Society for Optics and Photonics, 2017), p. 102554D.

**44. **S. B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express **21**, 21039–21055 (2013).