## Abstract

Long-wave infrared (LWIR) imaging has been successfully used in surveillance applications under low illumination conditions. However, infrared energy reflected from smooth surfaces such as floors and metallic objects may reduce object detection and tracking accuracy. In this paper, we present a novel reflection removal method that uses the polarization properties of reflection in LWIR imagery. Reflection can be distinguished from the rest of the scene by two characteristics of polarization: the difference of the two orthogonal polarized components (OPC) and the uniformity of the angle of polarization (AoP). The OPC difference helps locate candidate reflection regions, while the uniformity of AoP within a reflection region poses a strong constraint for reflection detection. The proposed joint reflection detection method, which combines the OPC difference and the uniformity of AoP, detects the actual reflection region. The closed-form matting method then improves the robustness of the detection, and the reflection is removed from the scene. Experimental results demonstrate that the proposed scheme effectively removes reflections in challenging situations where many existing techniques may fail.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## 1. Introduction

Long-wave infrared (LWIR) imaging sensors have been successfully deployed in various night vision and all-weather surveillance applications [1–3]. However, LWIR imaging systems often encounter difficulties in indoor scenes with glossy floor surfaces or in outdoor scenes in rainy weather, where reflective surfaces take part in the imaging. In such conditions, removing the infrared energy reflected from the surface may become critical for detecting or tracking objects such as pedestrians in night vision applications. A reflection can easily be mistaken for part of a real object, which makes estimating the size, shape, or position of the object challenging. Reflection-free images help improve the performance of recognition and tracking of objects of interest.

Reflection removal mainly concerns two cases: partially transmitting surfaces such as glass, and optically semi-diffuse surfaces such as floors. For the first case, most reflection removal methods focus on separating the transmitted and reflected layers [4–9]. Two popular reflection removal methods are based on polarization [4, 5]. The MI-based (mutual information based) [4] and ICA-based (independent component analysis based) [5] methods both assume that the reflectance and the transmittance are independent, an assumption that does not hold for all scenes, for example when separating objects that are similar in shape and intensity. Other relevant separation methods include depth-of-field guided reflection removal [7], reflection removal using a smoothing algorithm [8], and a method exploiting reflection change [9]. The case most related to our study is that of optically semi-diffuse surfaces such as floors and rainy roads [10–16]. Chrominance-based methods such as [10–12] use the difference between the foreground and the background in RGB space to remove reflections, and thus cannot be applied directly in the LWIR spectrum. Zhao and Nevatia [13] proposed a geometrical model for human tracking that is insensitive to reflection. Unfortunately, this method may not work when the scene includes large objects other than humans, such as backpacks, suitcases, and umbrellas. In [14], a more sophisticated method that takes into account both geometric and chromatic information is proposed. It is applied as a postprocessing phase to the output of a foreground detection system based on background subtraction, which indicates a potential limitation: foreground detection is a prerequisite for its application.
The recent paper [17] presents a polarization-based direct and indirect photon separation technique for separating reflections in the visible, near infrared, and LWIR. This method applies Stokes algebra and Mueller calculus formulae together with an edge-based correlation technique. The accuracy of image separation is affected primarily by the polarization of the light reflected from the original objects, the accuracy of the refractive index (medium) and incident angle estimates, and the BRDF of the transparent or semi-glossy reflector. Inaccurate refractive index and incident angle estimates can therefore lead to inaccurate separation results.

This paper presents a reflection removal technique based on the polarization properties of reflection, to detect and remove reflections in LWIR imagery for all-weather surveillance applications. Light becomes polarized when it interacts with matter through various mechanisms, such as scattering, reflection, or transmission, and it carries a set of information orthogonal to color information [18]. The reflection is partially polarized [18], and the angle of polarization (AoP) in the reflection region is uniform. These polarization properties make the reflection region distinguishable from the background without any foreground detection processing. In this context, we first locate the reflection regions using the difference of the two orthogonal polarized components (OPC), which are perpendicular and parallel to the plane of incidence. The proposed joint reflection detection method combines the OPC difference with the uniformity of AoP in reflection regions to detect the actual reflection regions. The closed-form matting method is then applied to improve the robustness of the detection, and the reflection is removed using a background reference image. This guarantees that the recovered emission energy is consistent with that of the surrounding environment. The main contributions and advantages of the proposed method are:

- • To demonstrate the uniformity of AoP in the reflection region using Kirchhoff's law, the Fresnel equations, and Mueller matrices for reflection at an air-dielectric interface
- • To develop a joint reflection detection method based on the uniformity of AoP and the OPC difference

## 2. Method

Figure 1 summarizes the procedure of the proposed reflection removal method. For a given LWIR scene, the first step is to acquire four images at polarization angles of 0°, 45°, 90°, and 135° simultaneously, and to extract a pair of orthogonal images according to the polarization difference model. The OPC difference computed from this pair locates potential reflection regions; Fig. 1(b) shows the OPC difference image obtained from the extracted orthogonal images. The second step is to perform the joint reflection detection by combining the OPC difference and the uniformity of AoP in the reflection region, as in Fig. 1(c). The third step is to obtain a more robust detection result using the closed-form matting method; the reflection matte is shown in Fig. 1(d). Figure 1(e) shows the resulting LWIR image with reflections removed using a background reference image, which guarantees that the recovered emission energy is consistent with that of the surrounding environment.
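The acquisition step can be illustrated with a short sketch. This is a minimal NumPy example, not the authors' implementation; the helper names and the standard linear-Stokes combinations of the four polarization-angle images are our assumptions:

```python
import numpy as np

def stokes_from_dofp(i0, i45, i90, i135):
    """Linear Stokes parameters from the four polarization-angle images."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # 0/90 degree difference
    s2 = i45 - i135                     # 45/135 degree difference
    return s0, s1, s2

def angle_of_polarization(s1, s2):
    """AoP in degrees, in (-90, 90]."""
    return 0.5 * np.degrees(np.arctan2(s2, s1))
```

The DoLP-like quantities and the OPC difference used in the later sections can all be derived from these three parameters.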

#### 2.1 Reflection detection with the OPC difference

Thermal radiation of the target object reflected from a smooth surface forms the reflection. Figure 2 shows the geometry of how reflection is formed. The reflection contains the reflected radiation *R* and emitted radiation *E*, and both are expressed as the sum of two orthogonal polarized components, that is, $R={R}_{\perp}+{R}_{\parallel}$ and $E={E}_{\perp}+{E}_{\parallel}$.

For an unpolarized target with unit emissivity at temperature ${T}_{t}$, the total *s*-polarized and *p*-polarized radiances leaving the surface at temperature ${T}_{m}$ in direction *θ* [19] are given as

$${L}_{\perp}(\theta )={\varepsilon}_{\perp}(\theta )P({T}_{m})+{r}_{\perp}(\theta )P({T}_{t}), \qquad (1)$$

$${L}_{\parallel}(\theta )={\varepsilon}_{\parallel}(\theta )P({T}_{m})+{r}_{\parallel}(\theta )P({T}_{t}), \qquad (2)$$

where *P*(*T*) denotes the Planck blackbody radiance curve at temperature *T*, *ε* is the emissivity, and *r* is the reflectivity in the *s*- and *p*-polarization states. The subscripts ⊥ and ∥ on *ε* and *r* denote the polarized components perpendicular and parallel to the plane of incidence. In Eqs. (1) and (2), we assume that the emissivity and reflectivity are spectrally flat over the wavelength range of interest, which is a reasonable assumption when working in the LWIR. In the reflective regime summarized by Rogne [20], where ${T}_{t}\gg {T}_{m}$, the measured polarization signature is primarily due to reflections from the surface, and Eqs. (1) and (2) can be rewritten as

$${L}_{\perp}(\theta )\approx {r}_{\perp}(\theta )P({T}_{t}), \qquad (3)$$

$${L}_{\parallel}(\theta )\approx {r}_{\parallel}(\theta )P({T}_{t}). \qquad (4)$$

The difference between the two orthogonal polarized components mainly depends on the reflectivity difference between the *s*- and *p*-polarization states. For a smooth surface with refractive index *n*, the two polarized reflectance components perpendicular and parallel to the plane of incidence are defined as

$${r}_{\perp}(\theta ,n)={\left[\frac{\mathrm{cos}\theta -n\mathrm{cos}{\theta}_{t}}{\mathrm{cos}\theta +n\mathrm{cos}{\theta}_{t}}\right]}^{2}, \qquad (5)$$

$${r}_{\parallel}(\theta ,n)={\left[\frac{n\mathrm{cos}\theta -\mathrm{cos}{\theta}_{t}}{n\mathrm{cos}\theta +\mathrm{cos}{\theta}_{t}}\right]}^{2}, \qquad (6)$$

where *θ* is the angle of incidence (viewing angle) and ${\theta}_{t}(\theta ,n)=\mathrm{arcsin}((\mathrm{sin}\theta )/n)$ from Snell's law [21]. Figure 3 shows the values of ${r}_{\perp}$ and ${r}_{\parallel}$ for various values of *θ* for a smooth floor whose refractive index is similar to that of glass in the LWIR. There is a large difference between ${r}_{\perp}$ and ${r}_{\parallel}$ at most angles of incidence, which means that the two orthogonal polarized components of the reflection differ according to Eqs. (3) and (4). The difference between the two components is very small only when the angle of incidence is below 10° or close to 90°. However, a viewing angle of 90° is unlikely in surveillance systems, and the reflection is negligible when the angle of incidence is below 10°. Based on this observation, a preliminary detection of reflection depends on extracting the two orthogonal image components.
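The reflectance curves of Fig. 3 can be reproduced numerically. Below is a small sketch under the stated assumptions (light incident from air onto a dielectric; intensity reflectances from the Fresnel amplitude coefficients):

```python
import numpy as np

def fresnel_reflectances(theta_deg, n):
    """Intensity reflectances r_perp (s-pol) and r_par (p-pol) for light
    incident from air onto a dielectric of refractive index n."""
    theta = np.radians(theta_deg)
    theta_t = np.arcsin(np.sin(theta) / n)  # refraction angle from Snell's law
    r_perp = ((np.cos(theta) - n * np.cos(theta_t)) /
              (np.cos(theta) + n * np.cos(theta_t))) ** 2
    r_par = ((n * np.cos(theta) - np.cos(theta_t)) /
             (n * np.cos(theta) + np.cos(theta_t))) ** 2
    return r_perp, r_par
```

At the Brewster angle arctan(*n*), the p-polarized reflectance vanishes, which is where the gap between the two curves is widest.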

Consider three polarized images ${I}_{i}(x)$, *i* = 1,2,3, each captured with a polarization angle separated by 45°: *ϕ*_{1}, *ϕ*_{1} + 45°, and *ϕ*_{1} + 90°. From these three images, we compute ${\varphi}_{1}-{\varphi}_{\perp}(x)$ instead of the explicit values of *ϕ*_{1} and *ϕ*_{⊥}(x), and then the two orthogonal image components ${I}_{\perp}(x)$ and ${I}_{\parallel}(x)$, the sums of intensities of the perpendicular and parallel components, respectively, of the reflected and emitted radiations. From [22],

$${\varphi}_{1}-{\varphi}_{\perp}(x)=\frac{1}{2}{\mathrm{tan}}^{-1}\left[\frac{{I}_{1}(x)+{I}_{3}(x)-2{I}_{2}(x)}{{I}_{1}(x)-{I}_{3}(x)}\right], \qquad (7)$$

$${I}_{\perp ,\parallel}(x)=\frac{{I}_{1}(x)+{I}_{3}(x)}{2}\pm \frac{{I}_{1}(x)-{I}_{3}(x)}{2\mathrm{cos}[2({\varphi}_{1}-{\varphi}_{\perp}(x))]}. \qquad (8)$$

We assume that ${\varphi}_{1}-{\varphi}_{\perp}(x)$ lies within [−45°, 45°] so that ${\scriptscriptstyle \raisebox{1ex}{$1$}\!\left/ \!\raisebox{-1ex}{$2$}\right.}{\mathrm{tan}}^{-1}(\cdot )$ has a unique solution. If ${\varphi}_{1}-{\varphi}_{\perp}(x)$ is smaller than −45° or greater than 45°, the computed value differs from the true value by ±90°. In this case, we simply exchange ${I}_{\perp}$ and ${I}_{\parallel}$ because the sign of $\mathrm{cos}2(\cdot )$ is reversed. With the two orthogonal images extracted, we can detect the reflection region using the difference of ${I}_{\perp}$ and ${I}_{\parallel}$:

$${D}_{o}(x)=|{I}_{\perp}(x)-{I}_{\parallel}(x)|. \qquad (9)$$

Equation (9) expresses the OPC difference, and ${D}_{o}(x)$ is normalized to (0,1) during implementation. Regions where the OPC difference exceeds a preset threshold, 0.55 in our experiments, are selected as potential reflections. Figures 4(a) and 4(b) show the two orthogonal images, and Fig. 4(c) shows the corresponding OPC difference image. The binarization in Fig. 4(d) shows that detection based on the OPC difference alone can locate potential reflection regions.
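As a concrete illustration of this preliminary step, the following sketch recovers the two orthogonal components from three polarizer-angle measurements under the sinusoidal intensity model, and forms a normalized OPC difference map. It is our own minimal rendering of the procedure, not the authors' code:

```python
import numpy as np

def orthogonal_components(i1, i2, i3):
    """Recover I_perp and I_par from images taken at polarizer angles
    phi1, phi1 + 45 deg, phi1 + 90 deg (sinusoidal model of [22])."""
    two_psi = np.arctan2(i1 + i3 - 2.0 * i2, i1 - i3)  # 2*(phi1 - phi_perp)
    mean = 0.5 * (i1 + i3)                             # (I_perp + I_par) / 2
    half_diff = np.where(np.abs(np.cos(two_psi)) > 1e-8,
                         (i1 - i3) / (2.0 * np.cos(two_psi)), 0.0)
    return mean + half_diff, mean - half_diff

def opc_difference(i_perp, i_par):
    """Normalized OPC difference map D_o in (0, 1)."""
    d = np.abs(i_perp - i_par)
    return (d - d.min()) / (d.max() - d.min() + 1e-12)
```

Thresholding the normalized map (at 0.55 in the paper's experiments) yields the potential reflection mask.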

#### 2.2 Demonstrating the uniformity of AoP and performing the joint reflection detection

The potential reflection region contains noise in dark background areas, because detection based only on the OPC difference may mistake background noise for reflection. We therefore need additional constraints to obtain a more accurate detection result. The reflected radiation is polarized, and polarization characteristics usually include the degree of linear polarization (DoLP) and the angle of polarization (AoP). Since the OPC difference plays the same role as DoLP, we further study the properties of AoP in the reflection region. A light vector can be decomposed into two orthogonal components [18], and because the reflected radiation dominates, we consider only the reflected light that propagates along the observation direction from the reflection region. The transverse components, perpendicular and parallel to the plane of incidence, are written in complex notation as

$${E}_{\perp}(t)={R}_{\perp}\mathrm{exp}[i(\omega t+{\delta}_{\perp})], \qquad (10)$$

$${E}_{\parallel}(t)={R}_{\parallel}\mathrm{exp}[i(\omega t+{\delta}_{\parallel})]. \qquad (11)$$

The Stokes parameters for the reflected field are given in [18] as

$${S}_{0}={E}_{\perp}{E}_{\perp}^{\ast}+{E}_{\parallel}{E}_{\parallel}^{\ast}, \qquad (14)$$

$${S}_{1}={E}_{\perp}{E}_{\perp}^{\ast}-{E}_{\parallel}{E}_{\parallel}^{\ast}, \qquad (15)$$

$${S}_{2}={E}_{\perp}{E}_{\parallel}^{\ast}+{E}_{\parallel}{E}_{\perp}^{\ast}, \qquad (16)$$

$${S}_{3}=i({E}_{\perp}{E}_{\parallel}^{\ast}-{E}_{\parallel}{E}_{\perp}^{\ast}). \qquad (17)$$

Substituting Eqs. (10) and (11) into Eqs. (14) to (17) gives

$${S}_{0}={R}_{\perp}^{2}+{R}_{\parallel}^{2}, \qquad (18)$$

$${S}_{1}={R}_{\perp}^{2}-{R}_{\parallel}^{2}, \qquad (19)$$

$${S}_{2}=2{R}_{\perp}{R}_{\parallel}\mathrm{cos}\delta , \qquad (20)$$

$${S}_{3}=2{R}_{\perp}{R}_{\parallel}\mathrm{sin}\delta , \qquad (21)$$

where $\delta ={\delta}_{\parallel}-{\delta}_{\perp}$ is the phase difference.

The angle of polarization is defined as

$$AoP=\frac{1}{2}{\mathrm{tan}}^{-1}\left(\frac{{S}_{2}}{{S}_{1}}\right). \qquad (22)$$

Substituting Eqs. (19) and (20) into Eq. (22),

$$AoP=\frac{1}{2}{\mathrm{tan}}^{-1}\left(\frac{2{R}_{\perp}{R}_{\parallel}\mathrm{cos}\delta}{{R}_{\perp}^{2}-{R}_{\parallel}^{2}}\right). \qquad (23)$$

Dividing the numerator and the denominator by ${R}_{\parallel}^{2}$, we get

$$AoP=\frac{1}{2}{\mathrm{tan}}^{-1}\left(\frac{2({R}_{\perp}/{R}_{\parallel})\mathrm{cos}\delta}{{({R}_{\perp}/{R}_{\parallel})}^{2}-1}\right). \qquad (24)$$

Since Eqs. (3) and (4) give the intensity ratio ${r}_{\perp}(\theta )/{r}_{\parallel}(\theta )={R}_{\perp}^{2}/{R}_{\parallel}^{2}$, substituting Eqs. (3) and (4) into Eq. (24) gives

$$AoP=\frac{1}{2}{\mathrm{tan}}^{-1}\left(\frac{2\sqrt{{r}_{\perp}(\theta )/{r}_{\parallel}(\theta )}\,\mathrm{cos}\delta}{{r}_{\perp}(\theta )/{r}_{\parallel}(\theta )-1}\right). \qquad (25)$$

The AoP of the reflection region is related only to the angle of incidence *θ* and the phase difference *δ*. Figure 5 shows the AoP of reflection with respect to *θ* and *δ*. For a particular viewing angle *θ*, the ratio ${r}_{\perp}(\theta )/{r}_{\parallel}(\theta )$ is constant, and so is *δ* because the reflected radiation is polarized. Figure 5 demonstrates that the AoP of the reflection region is smooth in most situations, except when the phase difference *δ* is $\pm \pi /2+2m\pi $ and the light is circularly polarized, which rarely happens in real scenes. Since *θ* varies only within a small, negligible range across the reflection region, the AoP fluctuates slightly around a constant value; we refer to this as the uniformity of AoP in the reflection region.
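The dependence of the AoP on θ and δ can be explored numerically. The sketch below expresses the AoP through the amplitude ratio of the two Fresnel-reflected components; the glass-like default index and this particular functional form are illustrative assumptions, not the paper's exact expression:

```python
import numpy as np

def reflection_aop(theta_deg, delta, n=1.5):
    """AoP (degrees) of the reflected field versus incidence angle theta and
    phase difference delta, using the Fresnel amplitude ratio (n is assumed)."""
    theta = np.radians(theta_deg)
    theta_t = np.arcsin(np.sin(theta) / n)
    r_perp = ((np.cos(theta) - n * np.cos(theta_t)) /
              (np.cos(theta) + n * np.cos(theta_t))) ** 2
    r_par = ((n * np.cos(theta) - np.cos(theta_t)) /
             (n * np.cos(theta) + np.cos(theta_t))) ** 2
    rho = np.sqrt(r_perp / r_par)  # amplitude ratio R_perp / R_par
    return 0.5 * np.degrees(np.arctan2(2.0 * rho * np.cos(delta), rho ** 2 - 1.0))
```

For a fixed θ the result depends only on cos δ, which is one way to see why the AoP clusters around a single value inside a reflection region.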

Since the previous step showed that the OPC difference is related to the angle of incidence, we first investigate the AoP of reflection under different viewing angles. Figure 6 shows the setup: the target is a planar blackbody, and its thermal radiation is reflected by a marble tile. The blackbody was heated to 80°C, and the viewing angles were 50°, 60°, 70°, and 80°.

Under different viewing angles, we obtained the corresponding AoP images shown in the second row of Fig. 7. The AoP in the reflection regions (circled in red) is uniformly distributed, unlike in the other regions. We then performed a statistical analysis of the AoP distribution in the reflection regions. As shown in the third row of Fig. 7, we counted the number of pixels at each AoP value in the reflection regions and plotted the results in separate charts. The statistics show that, at each viewing angle, the AoP in the reflection regions is distributed close to a particular value.

The AoP of reflection is thus uniform under different viewing angles, and according to Eq. (25) it is also independent of the target temperature. With a fixed viewing angle *θ* = 65°, we further obtained the AoP images in Fig. 8 with the blackbody heated to temperatures from 40°C to 80°C in 10°C intervals. Similarly, we performed a statistical analysis of the AoP distribution in the reflection regions (circled in red), this time plotting the results in a single chart. Figure 8 shows that, for a fixed viewing angle, the AoP in the reflection regions is distributed close to the same value at different blackbody temperatures, which verifies that the AoP in the reflection region is uniform and independent of the target temperature.

Combining the OPC difference and the uniformity of AoP in the reflection region, we propose a novel joint reflection detection method described by

$${D}_{joint}(x)={|{I}_{\perp}(x)-{I}_{\parallel}(x)|}^{2}\sum _{i=1}^{k}\mathrm{exp}[-\eta {(AoP(x)-{P}_{i})}^{2}],$$

where *AoP*(x) is the angle of polarization image, ${P}_{i}, i=1\cdots k$, represents the *k* dominant AoP values of the *k* reflection regions, and *η* is a positive regulation coefficient. ${D}_{joint}(x)$ combines two polarization constraints: ${|{I}_{\perp}(x)-{I}_{\parallel}(x)|}^{2}$ for the OPC difference and ${\sum}_{i=1}^{k}\mathrm{exp}[-\eta {(AoP(x)-{P}_{i})}^{2}]$ for the uniformity of AoP. From the preliminary reflection detection result ${D}_{o}(x)$ obtained in Section 2.1, we compute the *k* dominant AoP values ${P}_{i}$ of the *k* reflection regions from the AoP statistics. With an appropriate *η*, the term ${\sum}_{i=1}^{k}\mathrm{exp}[-\eta {(AoP(x)-{P}_{i})}^{2}]$ extracts the pixels of *AoP*(x) whose values are close to ${P}_{i}$, mapping pixels with dominant AoP values to values close to one and all others to values close to zero. During implementation, we normalize ${|{I}_{\perp}(x)-{I}_{\parallel}(x)|}^{2}$ to (0,1); the joint detection results are shown in Fig. 9. Using the same threshold, we obtain the corresponding binary detection results [Figs. 9(c) and 9(d)] for detection based on the OPC difference alone and for our joint reflection detection method. The results in Fig. 9 show that the joint method yields much more accurate detections, whereas the OPC difference alone may produce false detections caused by background noise.
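A compact sketch of the joint detector described above, combining the normalized OPC-difference term with the AoP-closeness term as a product (the value of η here is illustrative):

```python
import numpy as np

def joint_detection(i_perp, i_par, aop_img, dominant_aops, eta=0.05):
    """D_joint: squared OPC difference gated by the closeness of each pixel's
    AoP to the dominant AoP values P_i of the candidate reflection regions."""
    d_opc = np.abs(i_perp - i_par) ** 2
    d_opc = (d_opc - d_opc.min()) / (d_opc.max() - d_opc.min() + 1e-12)
    aop_term = np.zeros_like(aop_img, dtype=float)
    for p in dominant_aops:
        aop_term += np.exp(-eta * (aop_img - p) ** 2)
    return d_opc * aop_term
```

Pixels whose AoP is far from every dominant value P_i are suppressed even if their OPC difference is large, which is what removes the background-noise false detections.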

#### 2.3 Reflection alpha matting

Although the joint reflection detection result is satisfactory, a few small holes remain in the reflection regions, and we want to further improve the robustness of the detection. We therefore combine the joint reflection detection result with image alpha matting. Table 1 shows that the input image is a false-color fusion of the original infrared image, the OPC difference image $\text{|}{I}_{\perp}(x)-{I}_{\parallel}(x){\text{|}}^{2}$, and the AoP image *AoP*(x). As in interactive image matting, we first specify reflection samples and non-reflection samples automatically from the joint reflection detection result by morphological erosion and dilation, and construct a trimap for the input image. With the trimap, we extract the reflection alpha matte using the specified samples as constraints. More specifically, we set *α* = 1 for the specified reflection samples and *α* = 0 for the specified non-reflection samples. We then apply the closed-form matting method [23, 24] and minimize the following energy function to compute the reflection alpha:

$$E(\alpha )={\alpha}^{T}L\alpha +\lambda {(\alpha -{b}_{S})}^{T}{D}_{S}(\alpha -{b}_{S}),$$

where *L* is the matting Laplacian matrix, ${D}_{S}$ is a diagonal matrix whose diagonal elements are one for constrained pixels and zero for all other pixels, and the vector ${b}_{S}$ contains the specified alpha values for the constrained pixels and zero for all other pixels. Minimizing the above energy function yields the reflection alpha *α*.
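Setting the gradient of this energy to zero gives the linear system (L + λD_S)α = λb_S. The sketch below solves it densely; the real matting Laplacian of [24] is assumed to be precomputed, and the chain-graph L in the test is only a stand-in:

```python
import numpy as np

def solve_reflection_alpha(L, constrained, values, lam=100.0):
    """Minimize a^T L a + lam*(a - b)^T D (a - b) by solving
    (L + lam*D) a = lam*b, with D and b encoding the constrained pixels."""
    n = L.shape[0]
    D = np.zeros((n, n))
    b = np.zeros(n)
    for i, v in zip(constrained, values):
        D[i, i] = 1.0  # pixel i is constrained
        b[i] = v       # to alpha value v
    return np.linalg.solve(L + lam * D, lam * b)
```

With two pixels constrained to α = 1 and α = 0, the solution interpolates smoothly between the constraints, which is how the matte fills the small holes left by the joint detection.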

With the computed reflection alpha matte *α*, we can identify the reflection regions of the input image. Specifically, given thresholds ${\delta}_{1}$ and ${\delta}_{2}$, pixels with $\alpha <{\delta}_{1}$ are considered non-reflection pixels, pixels with $\alpha >{\delta}_{2}$ are considered reflection pixels, and pixels with ${\delta}_{1}\le \alpha \le {\delta}_{2}$ are considered to lie on the reflection boundaries, where the alpha value usually changes rapidly. In our experiments, we set ${\delta}_{1}=0.1$ and ${\delta}_{2}=0.9$. Figure 10 shows the reflection matte result.

Based on the above reflection detection results, the reflection can be removed from the scene using a background reference image (suitably created and updated in video image sequence). As shown in Fig. 11, we realize the reflection removal by replacing the pixels in reflection region with the corresponding pixels in the background reference image.
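The replacement step can be sketched as follows, assuming the matte α and a background reference frame are available; the thresholds 0.1 and 0.9 follow the paper, while feathering the boundary band with α is our own illustrative choice:

```python
import numpy as np

def remove_reflection(frame, background, alpha, d1=0.1, d2=0.9):
    """Replace reflection pixels (alpha > d2) with the background reference
    and blend the boundary band (d1 <= alpha <= d2) using the matte."""
    out = frame.astype(float).copy()
    refl = alpha > d2
    band = (alpha >= d1) & (alpha <= d2)
    out[refl] = background[refl]
    out[band] = alpha[band] * background[band] + (1.0 - alpha[band]) * frame[band]
    return out
```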

## 3. Experiment results

We tested the proposed joint reflection detection method on real division-of-focal-plane (DoFP) [25–27] LWIR polarization images captured with a DoFP LWIR polarization imager. In Fig. 12, we selected eight frames from a video in which a pedestrian walks through the scene; the detection and reflection removal results show that good qualitative results can be achieved with our method.

To quantitatively evaluate the performance of the proposed method, we generated ground truth reflection maps. For all ground truth reflection maps and our detected reflection maps, pixels with value greater than 0.9 are considered to belong to the reflection region, and pixels with value smaller than 0.1 to the non-reflection region. The detection error is then defined as

$$Error=\frac{{N}_{over}+{N}_{under}}{{N}_{truth}},$$

where ${N}_{over}$ denotes the number of pixels that are in the reflection region of the detected reflection map but not of the ground truth reflection map, ${N}_{under}$ is the number of pixels that are in the reflection region of the ground truth reflection map but not of our detected reflection map, and ${N}_{truth}$ is the total number of pixels in the reflection region of the ground truth reflection map. A similar index is defined as in [12]:

$$TH=\frac{{h}_{t}}{{h}_{r}+h},$$

where *TH* is the height of the true object normalized by the total height of the bounding box (see Fig. 13), ${h}_{t}$ is the true height of the object, ${h}_{r}$ is the height of the reflection, *h* is the height of the object after reflection removal, and $h={h}_{t}$ before reflection removal. It is possible that ${h}_{t}>{h}_{r}+h$ if the algorithm completely removes the reflection but also cuts away part of the object, in which case the reciprocal of *TH* is applied. Figure 14 shows the ground truth reflection maps and the detected reflection maps. The corresponding detection errors and *TH* values are listed in Table 2. To show the performance of the proposed method more intuitively, we plot the detection error and *TH* in separate charts in Fig. 15. The detection errors of the selected frames are all below 6%, and *TH* (the proportion of the target) improves significantly after reflection removal, which demonstrates the accuracy and effectiveness of the proposed reflection detection method.
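The detection-error metric is easy to compute from the two maps. A sketch under the paper's thresholding convention (values above 0.9 count as reflection):

```python
import numpy as np

def detection_error(detected, truth, hi=0.9):
    """(N_over + N_under) / N_truth for soft reflection maps in [0, 1]."""
    det = detected > hi
    gt = truth > hi
    n_over = np.count_nonzero(det & ~gt)   # detected but not in ground truth
    n_under = np.count_nonzero(gt & ~det)  # in ground truth but missed
    return (n_over + n_under) / max(np.count_nonzero(gt), 1)
```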

In addition, we compare our method with three other polarization-based reflection removal methods and a blind reflection removal method: the MI-based [4], ICA-based [5], and EO photon separation [17] reflection removal methods, and the SPICA-based (sparse independent component analysis based) [6] blind reflection removal method. The results are shown in Fig. 16. The ICA-based method cannot remove the reflections effectively and also reduces the contrast between the target and the background. The MI-based method obtains better results than the ICA-based method, but it removes some details of the background and generates artifacts in the reflection regions. The EO method is considered the more faithful separation method in [17], yet it cannot remove the reflection completely. The patch-wise separation of [17] may obtain better results, but it is much more time consuming, which limits its application in surveillance systems. The SPICA-based blind reflection removal method retains better contrast between the target and the background, but some reflections remain in the scene. The robustness of blind source separation depends strongly on the correlation of the underlying components, and its performance is therefore limited by the inherent noise in the different components. These methods address the case of a transparent surface, where an image reflected by the surface may be superimposed on the image of the observed object over most or even all of the field of view. In the case of an optically semi-diffuse surface such as the floors considered in this paper, however, the reflection of interest occupies only a small part of the scene, and these separation techniques may produce erroneous separations of the original objects and the background. Moreover, the first two polarization-based methods [4, 5] both assume that the reflected and transmitted layers are independent, which may not hold in all scenes, for example in the separation of objects that are similar in shape and intensity. In contrast, our method detects the reflections accurately and then removes them from the original scene without altering the object or the background.

Current mainstream reflection removal methods in surveillance applications basically focus on estimating the baseline that vertically separates the target from its reflection [14, 15]. However, those methods may fail when the scene contains several pedestrians with occlusion between them, or when the pedestrian and its reflection are not vertically aligned (possibly horizontal, as when a person stands near a glass wall, or in any other direction). First, we captured video data of scenes containing several persons in three situations: three persons standing on different baselines; two persons walking toward each other, with one occluded by the other; and a combination of the two. Figure 17 shows that the proposed method gives accurate reflection detection and removal results in all three situations. We then tested our method when the object and its reflection are not vertically aligned: a person stood near a glass wall, and the DoFP infrared camera was rotated to a certain angle (neither vertical nor horizontal) about its optic axis. The results in Fig. 18 show that our method also produces accurate detection and removal results in these situations. Therefore, our method applies to complex situations where current mainstream reflection removal methods may fail.

## 4. Conclusion

This paper presents a method for removing image reflections generated by objects on a reflecting medium in the LWIR spectrum using the polarization properties of reflection. We first locate the reflection regions using the difference between the two OPC components. We then establish the uniformity of AoP in the reflection region and demonstrate it by theoretical and experimental analysis. Based on the uniformity of AoP and the OPC difference, we develop a joint reflection detection method that produces a detection result nearly identical to the true reflection region. Finally, we apply the closed-form matting method to obtain a more robust detection of the reflection, and remove the reflection while guaranteeing that the recovered emission energy is consistent with that of the surrounding environment. We tested the joint reflection detection method on real DoFP LWIR polarization images obtained with our own DoFP LWIR polarization imager under both conventional and complex situations, and the results are satisfactory.

## Funding

National Natural Science Foundation of China (NSFC) (61771391, 61371152); National Natural Science Foundation of China and Korea National Research Foundation Joint Funded Cooperation Program (61511140292); Korean National Research Foundation (NRF-2016R1D1A1B01008522).

## References and links

**1. **C. Dai, Y. Zheng, and X. Li, “Layered representation for pedestrian detection and tracking in infrared imagery,” Comput. Vis. Image Underst. **106**, 288–299 (2007). [CrossRef]

**2. **Z. Li, Q. Wu, J. Zhang, and G. Geers, “SKRWM based descriptor for pedestrian detection in thermal images,” in Proceedings of IEEE Workshop on Multimedia Signal Processing (IEEE, 2011), pp. 1–6. [CrossRef]

**3. **H. Torresan, B. Turgeon, C. Ibarra-Castanedo, P. Hebert, and X. P. Maldague, “Advanced surveillance systems: combining video and thermal imagery for pedestrian detection,” Proc. SPIE **5405**, 506–516 (2004). [CrossRef]

**4. **Y. Schechner, J. Shamir, and N. Kiryati, “Polarization-based decorrelation of transparent layers: the inclination angle of an invisible surface,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1999), pp. 814–819. [CrossRef]

**5. **H. Farid and E. H. Adelson, “Separating reflections from images by use of independent component analysis,” J. Opt. Soc. Am. A **16**(9), 2136–2145 (1999). [CrossRef] [PubMed]

**6. **A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse ICA for blind separation of transmitted and reflected images,” Int. J. Imaging Syst. Technol. **15**(1), 84–91 (2005). [CrossRef]

**7. **R. Wan, B. Shi, T. A. Hwee, and A. C. Kot, “Depth of field guided reflection removal,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2016), pp. 21–25.

**8. **T. Sirinukulwattana, G. Choe, and I. S. Kweon, “Reflection removal using disparity and gradient-sparsity via smoothing algorithm,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2015), pp. 1940–1944. [CrossRef]

**9. **Y. Li and M. S. Brown, “Exploiting reflection change for automatic reflection removal,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2013), pp. 2432–2439. [CrossRef]

**10. **A. Teschioni and C. S. Regazzoni, “A robust method for reflections analysis in color image sequences,” in Signal Processing Conference (EUSIPCO 1998) (IEEE, 1998), pp. 1–4.

**11. **E. J. Carmona, J. Martínez-Cantos, and J. Mira, “A new video segmentation method of moving objects based on blob-level knowledge,” Pattern Recognit. Lett. **29**(3), 272–285 (2008). [CrossRef]

**12. **D. Conte, P. Foggia, G. Percannella, F. Tufano, and M. Vento, “Reflection removal in colour videos,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2010), pp. 1788–1791.

**13. **T. Zhao and R. Nevatia, “Tracking multiple humans in complex situations,” IEEE Trans. Pattern Anal. Mach. Intell. **26**(9), 1208–1221 (2004). [CrossRef] [PubMed]

**14. **D. Conte, P. Foggia, G. Percannella, and M. Vento, “Removing object reflections in videos by global optimization,” IEEE Trans. Circ. Syst. Video Tech. **22**(11), 1623–1633 (2012). [CrossRef]

**15. **M. Karaman, L. Goldmann, and T. Sikora, “Improving object segmentation by reflection detection and removal,” Proc. SPIE **7257**, 725709 (2009). [CrossRef]

**16. **S. V. U. Ha, N. T. Pham, L. H. Pham, and H. M. Tran, “Robust Reflection Detection and Removal in Rainy Conditions using LAB and HSV Color Spaces,” REV J. Electron. Commun. **6**, 13–19 (2016).

**17. **Y. Ding, A. Ashok, and S. Pau, “Real-time robust direct and indirect photon separation with polarization imaging,” Opt. Express **25**(23), 29432–29453 (2017). [CrossRef]

**18. **D. H. Goldstein, *Polarized Light*, 2nd ed. (Marcel Dekker, Inc, 2003).

**19. **J. S. Tyo, B. M. Ratliff, J. K. Boger, W. T. Black, D. L. Bowers, and M. P. Fetrow, “The effects of thermal equilibrium and contrast in LWIR polarimetric images,” Opt. Express **15**(23), 15161–15167 (2007). [CrossRef] [PubMed]

**20. **T. J. Rogne, F. G. Smith, and J. E. Rice, “Passive target detection using polarized components of infrared signatures,” Proc. SPIE **1317**, 242–252 (1990). [CrossRef]

**21. **E. Hecht, *Optics*, 4th ed. (Pearson Education, 2002).

**22. **N. Kong, Y. W. Tai, and J. S. Shin, “A physically-based approach to reflection separation: from physical modeling to constrained optimization,” IEEE Trans. Pattern Anal. Mach. Intell. **36**(2), 209–221 (2014). [CrossRef] [PubMed]

**23. **L. Zhang, Q. Zhang, and C. Xiao, “Shadow remover: Image shadow removal based on illumination recovering optimization,” IEEE Trans. Image Process. **24**(11), 4623–4636 (2015). [CrossRef] [PubMed]

**24. **A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell. **30**(2), 228–242 (2008). [CrossRef] [PubMed]

**25. **Y. Zhao, *Multi-band Polarization Imaging and Applications*. (Springer, 2016).

**26. **V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express **18**(18), 19087–19094 (2010). [CrossRef] [PubMed]

**27. **C. Xu, J. Ma, C. Ke, Y. Huang, Z. Zeng, and W. Weng, “Numerical study of a DoFP polarimeter based on the self-organized nanograting array,” Opt. Express **26**(3), 2517–2527 (2018). [CrossRef] [PubMed]