Abstract

Long-wave infrared (LWIR) imaging has been successfully used in surveillance applications under low illumination conditions. However, infrared energy reflected from smooth surfaces such as floors and metallic objects may reduce object detection and tracking accuracy. In this paper, we present a novel reflection removal method that uses the polarization properties of reflection in LWIR imagery. Reflection can be distinguished from the scene by two characteristics of polarization: the difference of the two orthogonal polarized components (OPC) and the uniformity of the angle of polarization (AoP). The OPC difference helps locate candidate reflection regions, while the uniformity of AoP within a reflection region poses a strong constraint for reflection detection. The proposed joint reflection detection method combines the OPC difference and the uniformity of AoP to detect the actual reflection regions. The closed-form matting method then improves the robustness of the detection, and the reflection is removed from the scene. Experimental results demonstrate that the proposed scheme effectively removes reflections in challenging situations where many existing techniques may fail.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Long-wave infrared (LWIR) imaging sensors have been successfully deployed in various night vision and all-weather surveillance applications [1–3]. However, LWIR imaging systems often encounter difficulties in indoor scenes with glossy floor surfaces or in outdoor scenes in rainy weather, where reflective surfaces are involved in imaging. In such conditions, removal of the infrared energy reflected from the surface may become critical for detecting or tracking objects such as pedestrians in night vision applications. Reflection can easily be mistaken for part of a real object, which makes the estimation of the size, shape, or position of the object challenging. Reflection-free images help improve the performance of recognition and tracking of objects of interest.

Reflection removal mainly covers two cases: partially transmitting surfaces such as glass, and optically semi-diffuse surfaces such as floors. For the first case, most reflection removal methods focus on the separation of the transmitted and reflected layers [4–9]. Two popular reflection removal methods are based on polarization [4, 5]. The MI-based (mutual information based) [4] and the ICA-based (independent component analysis based) [5] reflection removal methods both assume that the reflectance and the transmittance are independent, which does not hold for all scenes, for example, in the separation of objects that are similar in shape and intensity. Other relevant separation methods include a depth-of-field guided reflection removal method [7], reflection removal using a smoothing algorithm [8], and a method exploiting reflection change [9]. The case most relevant to our study is that of optically semi-diffuse surfaces such as floors and rainy roads [10–16]. Methods based on chrominance, such as [10–12], use the difference between the foreground and the background in the RGB space to remove reflections and thus cannot be applied directly in the LWIR spectrum. Zhao and Nevatia [13] proposed a geometrical model that is insensitive to reflection for human tracking. Unfortunately, this method may not work when the scene includes large objects other than a human, such as backpacks, suitcases, and umbrellas. In [14], a more sophisticated method is proposed that takes into account both geometric and chromatic information to remove reflections. This method is applied as a postprocessing phase to the output of a foreground detection system based on background subtraction, so foreground detection is a prerequisite for its application, which is a potential limitation.
The recent paper [17] presents a polarization-based direct and indirect photon separation technique for separating reflections in the visible, near infrared, and LWIR. This method applies Stokes algebra and Mueller calculus formulae together with an edge-based correlation technique. The accuracy of image separation is affected primarily by the polarization of the light reflected from the original objects, the accuracy of the refractive index (medium) and incident angle estimates, and the BRDF of the transparent or semi-glossy reflector. Inaccurate refractive index and incident angle estimates can therefore lead to inaccurate separation results.

This paper presents a reflection removal technique based on the polarization properties of reflection to detect and remove reflections in LWIR imagery for all-weather surveillance applications. Light becomes polarized when it interacts with matter through various mechanisms, such as scattering, reflection, or transmission, and carries a set of information orthogonal to color information [18]. The reflection is partially polarized [18], and the angle of polarization (AoP) in the reflection region is uniform. These polarization properties make the reflection region distinguishable from the background without any foreground detection processing. In this context, we first locate the reflection regions using the difference of the two orthogonal polarized components (OPC), which are perpendicular and parallel to the plane of incidence. The proposed joint reflection detection method combines the OPC difference and the uniformity of AoP in reflection regions to detect the actual reflection regions. The closed-form matting method is then applied to improve the robustness of the detection, and the reflection is removed using a background reference image. This guarantees that the recovered emission energy is consistent with that of the surrounding environment. The main contributions and advantages of the proposed method are:

  • To demonstrate the uniformity of AoP in the reflection region using Kirchhoff’s law, the Fresnel equations, and Mueller matrices for reflection at an air-dielectric interface
  • To develop a joint reflection detection method based on the uniformity of AoP and the OPC difference

2. Method

Figure 1 summarizes the procedure of the proposed reflection removal method. For a given LWIR image, the OPC difference locates potential reflection regions. The first step is to acquire four images of the same scene at polarization angles of 0°, 45°, 90°, and 135° simultaneously, and to extract a pair of orthogonal images according to the polarization difference model. We locate the reflection region using this information; Fig. 1(b) shows the OPC difference image obtained from the extracted orthogonal images. The second step is to perform joint reflection detection by combining the OPC difference and the uniformity of AoP in the reflection region, as in Fig. 1(c). The third step is to obtain a more robust detection result using the closed-form matting method; the reflection matte result is shown in Fig. 1(d). Figure 1(e) shows the resulting LWIR image with reflections removed using a background reference image, which guarantees that the recovered emission energy is consistent with that of the surrounding environment.

Fig. 1 Procedures of the proposed reflection removal scheme. (a) Original LWIR image. (b) The OPC difference. (c) Joint reflection detection result. (d) Reflection matte result. (e) Reflection removal result.

2.1 Reflection detection with the OPC difference

Thermal radiation of the target object reflected from a smooth surface forms the reflection. Figure 2 shows the geometry of how the reflection is formed. The reflection contains the reflected radiation R and the emitted radiation E, and both are expressed as the sum of two orthogonal polarized components, that is, R = R⊥ + R∥ and E = E⊥ + E∥.

Fig. 2 Model of how reflection is formed in LWIR; we consider the measured polarization signature to be primarily due to reflections from the surface of the medium.

For an unpolarized target with unit emissivity at temperature Tt, the total s-polarized and p-polarized radiances leaving a surface at temperature Tm in direction θ [19] are given as

L⊥(θ) = R⊥ + E⊥ = P(Tt) r⊥(θ) + P(Tm) ε⊥(θ)   (1)
L∥(θ) = R∥ + E∥ = P(Tt) r∥(θ) + P(Tm) ε∥(θ)   (2)
where P(T) denotes the Planck blackbody radiance curve at temperature T, ε is the emissivity, and r is the reflectivity in the s- and p-polarization states. The subscripts ⊥ and ∥ added to ε and r correspond to the polarized components perpendicular and parallel to the plane of incidence. In Eqs. (1) and (2), we assume that the emissivity and reflectivity are spectrally flat over the wavelength range of interest, which is a reasonable assumption when working in the LWIR. For the reflective regime summarized by Rogne [20], the measured polarization signature is primarily due to reflections from the surface, where Tt ≫ Tm. Equations (1) and (2) can then be rewritten as

L⊥(θ) ≈ R⊥ = P(Tt) r⊥(θ)   (3)
L∥(θ) ≈ R∥ = P(Tt) r∥(θ)   (4)

The difference between the two orthogonal polarized components mainly depends on the reflectivity difference between the s- and p-polarization states. For a smooth surface with refractive index n, the two polarized components of reflectance perpendicular and parallel to the plane of incidence are defined as

r⊥(θ, n) = sin²(θ − θt(θ, n)) / sin²(θ + θt(θ, n))   and   r∥(θ, n) = tan²(θ − θt(θ, n)) / tan²(θ + θt(θ, n))   (5)
respectively, according to the Fresnel equations [21]. Here, θ is the angle of incidence (viewing angle) and θt(θ, n) = arcsin((sin θ)/n) from Snell’s law [21]. Figure 3 shows the values of r⊥ and r∥ for various values of θ for a smooth floor whose refractive index is similar to that of glass in the LWIR. There is a large difference between r⊥ and r∥ at most angles of incidence, which means the two orthogonal polarized components of the reflection differ according to Eqs. (3) and (4). The difference between the two components is very small only when the angle of incidence is smaller than 10° or almost at 90°. However, it is unlikely that the viewing angle is 90° in surveillance systems, and the reflection is negligible when the angle of incidence is smaller than 10°. Based on this observation, a preliminary detection of reflection reduces to extracting the two orthogonal image components.
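As an illustration of the Fresnel reflectivities above, they can be evaluated numerically; the following is a minimal Python sketch (the function name and sample angles are ours, not from the paper):

```python
import math

def fresnel_reflectivities(theta_deg, n):
    """Power reflectivities r_perp (s) and r_par (p) at a smooth
    air-dielectric interface, from the Fresnel equations and Snell's law."""
    theta = math.radians(theta_deg)
    theta_t = math.asin(math.sin(theta) / n)  # refraction angle (Snell's law)
    r_perp = (math.sin(theta - theta_t) / math.sin(theta + theta_t)) ** 2
    r_par = (math.tan(theta - theta_t) / math.tan(theta + theta_t)) ** 2
    return r_perp, r_par

# n = 6.74 approximates common marble tiles in the LWIR (cf. Fig. 3)
r_perp, r_par = fresnel_reflectivities(50.0, 6.74)
print(r_perp, r_par)  # r_perp exceeds r_par at mid-range incidence angles
```

Below the Brewster angle (about 81.6° for n = 6.74) r⊥ is noticeably larger than r∥, which is exactly the gap the OPC difference exploits; near 10° the two values almost coincide.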

Fig. 3 The two polarized components of reflectance, perpendicular and parallel to the plane of incidence, vary with respect to the angle of incidence, and the two components differ at most angles of incidence (the refractive index is about 6.74 for common marble tiles in the LWIR spectrum).

Consider three polarized images Ii(x), i = 1, 2, 3, each captured with a polarizer angle separated by 45°: ϕ1, ϕ1 + 45°, and ϕ1 + 90°. From these three images, we compute ϕ1 − ϕ(x) instead of the explicit values of ϕ1 and ϕ(x). We then compute the two orthogonal image components I⊥(x) and I∥(x), the sums of the intensities of the perpendicular and parallel components, respectively, of the reflected and emitted radiations. From [22],

[ϕ1 − ϕ(x)] = (1/2) tan⁻¹( (I1(x) + I3(x) − 2I2(x)) / (I1(x) − I3(x)) )   (6)
I⊥(x) = (I1(x) + I3(x))/2 + (I1(x) − I3(x)) / (2 cos 2[ϕ1 − ϕ(x)])   (7)
I∥(x) = (I1(x) + I3(x))/2 − (I1(x) − I3(x)) / (2 cos 2[ϕ1 − ϕ(x)])   (8)

We assume that ϕ1 − ϕ(x) is within [−45°, 45°] for the unique solution of (1/2)tan⁻¹(·). If ϕ1 − ϕ(x) is smaller than −45° or greater than 45°, the computed value differs from the true value by ±90°. In this case, we simply exchange I⊥ and I∥ because the sign of cos 2(·) is reversed. With the two orthogonal images extracted, we can detect the reflection region using the difference of I⊥ and I∥:

Do(x) = |I⊥(x) − I∥(x)|²   (9)

Equation (9) expresses the OPC difference, and Do(x) is normalized to (0, 1) during implementation. The region where the OPC difference is greater than a preset threshold, which is 0.55 in our experiments, is selected as a potential reflection. Figures 4(a) and 4(b) show the two orthogonal images, and Fig. 4(c) is the corresponding OPC difference image. In Fig. 4(d), the binarization shows that reflection detection based only on the OPC difference can locate potential reflection regions.
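The extraction of the two orthogonal components from three polarizer measurements, followed by the OPC difference, can be sketched in Python as below. We use `np.arctan2` for numerical robustness; it also performs the component exchange mentioned above automatically when ϕ1 − ϕ(x) falls outside [−45°, 45°]. Function and variable names are ours:

```python
import numpy as np

def extract_opc(i1, i2, i3):
    """Recover the orthogonal polarized components and the OPC difference
    from three images taken at polarizer angles phi1, phi1+45, phi1+90 deg.
    Returns (I_perp, I_par, Do), with the larger component first."""
    mean = (i1 + i3) / 2.0                        # (I_perp + I_par) / 2
    two_delta = np.arctan2(i1 + i3 - 2.0 * i2,    # 2 * (phi1 - phi)
                           i1 - i3)
    half_diff = (i1 - i3) / (2.0 * np.cos(two_delta))  # (I_perp - I_par) / 2
    i_perp = mean + half_diff
    i_par = mean - half_diff
    d_o = np.abs(i_perp - i_par) ** 2             # normalize to (0,1) before use
    return i_perp, i_par, d_o
```

Feeding in three images synthesized from known I⊥, I∥, and ϕ1 − ϕ(x) recovers the original components and their squared difference.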

Fig. 4 Reflection detection based on the OPC difference. (a) Perpendicular component I⊥. (b) Parallel component I∥. (c) OPC difference. (d) Binary result of (c).

2.2 Demonstrating the uniformity of AoP and performing the joint reflection detection

Noise exists in dark background areas of the potential reflection region, since reflection detection based on the OPC difference may take background noise as part of the reflection. Therefore, we need additional constraints to obtain a more accurate detection result. The reflected radiation is polarized, and polarization characteristics usually include the degree of linear polarization (DoLP) and the angle of polarization (AoP). Since the OPC difference plays the same role as the DoLP, we further study the properties of the AoP in the reflection region. A light vector can be decomposed into two orthogonal components [18], and we consider only the reflected light that propagates along the observation direction from the reflection region, since the reflected radiation dominates there. The transverse components perpendicular and parallel to the plane of incidence are, in complex notation,

E⊥R(t) = E0⊥ exp[i(ωt + δ⊥)]   (10)
E∥R(t) = E0∥ exp[i(ωt + δ∥)]   (11)
where E0⊥ and E0∥ are the amplitudes, and δ⊥ and δ∥ are the phases, respectively. According to Eqs. (3) and (4), we have

E0⊥ = R⊥ = P(Tt) r⊥(θ)   (12)
E0∥ = R∥ = P(Tt) r∥(θ)   (13)

The Stokes parameters for the reflected field are given in [18]

S0R = cos θ (E⊥R E⊥R* + E∥R E∥R*)   (14)
S1R = cos θ (E⊥R E⊥R* − E∥R E∥R*)   (15)
S2R = cos θ (E⊥R E∥R* + E∥R E⊥R*)   (16)
S3R = i cos θ (E⊥R E∥R* − E∥R E⊥R*)   (17)
where the asterisk denotes complex conjugation.

Substituting Eqs. (10) and (11) into Eqs. (14) to (17) gives

S0R = cos θ (E0⊥² + E0∥²) = cos θ (R⊥² + R∥²)   (18)
S1R = cos θ (E0⊥² − E0∥²) = cos θ (R⊥² − R∥²)   (19)
S2R = cos θ (2E0⊥E0∥ cos δ) = cos θ (2R⊥R∥ cos δ)   (20)
S3R = cos θ (2E0⊥E0∥ sin δ) = cos θ (2R⊥R∥ sin δ)   (21)
where δ = δ⊥ − δ∥, and the angle of polarization of the reflected field can be defined as

AoPR = (1/2) tan⁻¹(S2R / S1R)   (22)

Substituting Eqs. (19) and (20) into Eq. (22),

AoPR = (1/2) tan⁻¹( cos θ (2R⊥R∥ cos δ) / (cos θ (R⊥² − R∥²)) ) = (1/2) tan⁻¹( 2R⊥R∥ cos δ / (R⊥² − R∥²) )   (23)

Dividing the numerator and the denominator by R∥² gives

AoPR = (1/2) tan⁻¹( 2(R⊥/R∥) cos δ / ((R⊥/R∥)² − 1) )   (24)

Substituting Eqs. (3) and (4) into Eq. (24) gives

AoPR = (1/2) tan⁻¹( 2(r⊥(θ)/r∥(θ)) cos δ / ((r⊥(θ)/r∥(θ))² − 1) )   (25)

The AoP of the reflection region is related only to the angle of incidence θ and the phase difference δ. Figure 5 shows the AoP of the reflection with respect to θ and δ. For a particular viewing angle θ, the ratio r⊥(θ)/r∥(θ) is constant, and so is δ because the reflected radiation is polarized. Figure 5 demonstrates that the AoP of the reflection region is smooth in most situations, except when the phase difference δ is ±π/2 + 2mπ and the light is circularly polarized, which rarely happens in real scenes. The AoP of the reflection fluctuates only slightly around a constant value because θ varies within a negligibly small range across the reflection region; this is what we refer to as the uniformity of AoP in the reflection region.
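A numerical check illustrates this uniformity: over the small spread of incidence angles spanned by one reflection region, AoPR is nearly constant. A minimal Python sketch of Eq. (25), assuming an air-dielectric interface and a fixed phase difference δ (names and sample values are ours):

```python
import math

def aop_reflection(theta_deg, n, delta):
    """AoP of the reflected field per Eq. (25); it depends only on the
    incidence angle theta and the phase difference delta."""
    theta = math.radians(theta_deg)
    theta_t = math.asin(math.sin(theta) / n)  # Snell's law
    r_perp = (math.sin(theta - theta_t) / math.sin(theta + theta_t)) ** 2
    r_par = (math.tan(theta - theta_t) / math.tan(theta + theta_t)) ** 2
    rho = r_perp / r_par                      # the ratio in Eq. (25)
    return 0.5 * math.atan2(2.0 * rho * math.cos(delta), rho ** 2 - 1.0)

# AoP over a +/-1 degree spread around a 65 degree viewing angle (n = 6.74)
aops = [aop_reflection(t, 6.74, math.pi) for t in (64.0, 65.0, 66.0)]
print(max(aops) - min(aops))  # small spread: AoP is nearly uniform
```

Over this ±1° spread the AoP changes by only a few hundredths of a radian, consistent with the near-constant AoP observed in the reflection regions of Figs. 7 and 8.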

Fig. 5 The AoP of the reflection region varies with respect to θ and δ.

Since the previous step showed that the OPC difference is related to the angle of incidence, we first study the AoP of the reflection under different viewing angles. Figure 6 shows the target used for this study: a plane blackbody whose thermal radiation is reflected by a marble tile. The blackbody was heated to 80°C and the viewing angles were 50, 60, 70, and 80 degrees.

Fig. 6 The target for the study of AoP in reflection region is a plane blackbody, and we put a marble tile in front of it as a reflective medium.

Under different viewing angles, we obtained the corresponding AoP images shown in the second row of Fig. 7. The AoP in the reflection regions (regions in red circles) is uniformly distributed, unlike in the other regions. We then performed a statistical analysis of the distribution of AoP in the reflection regions. As shown in the third row of Fig. 7, we counted the number of pixels at each AoP value in the reflection regions and plotted the results in separate charts. The statistical results show that the AoP in the reflection regions is distributed close to a particular value at each viewing angle.

Fig. 7 Experiments on the uniformity of AoP under different viewing angles. (a)-(d) correspond to viewing angles of 50, 60, 70, and 80 degrees, respectively. The first row is the original infrared image, the second row is the AoP image, and the third row is the corresponding statistical distribution of AoP in the reflection regions (regions in red circles).

The AoP of the reflection is uniform under different viewing angles, and according to Eq. (25) it is also independent of the target temperature. With a fixed viewing angle θ = 65°, we further obtained the AoP images in Fig. 8 when the blackbody was heated to temperatures from 40°C to 80°C in 10°C intervals. Similarly, we performed a statistical analysis of the distribution of AoP in the reflection regions (regions in red circles), this time plotting the results in a single chart. Figure 8 shows that, with a fixed viewing angle, the AoP in the reflection regions is distributed close to the same value at different blackbody temperatures, which verifies that the AoP in the reflection region is uniform and independent of the target temperature.

Fig. 8 Experiment on the uniformity of AoP under different blackbody temperatures. (a) is the original infrared image, and (b)-(f) represent AoP images at 40, 50, 60, 70, and 80°C, respectively. (g) shows the statistical distributions of AoP in the reflection regions under different blackbody temperatures in a single chart.

Combining the OPC difference and the uniformity of AoP in the reflection region, we propose a novel joint reflection detection method which is described by

Djoint(x) = |I⊥(x) − I∥(x)|² ∑_{i=1}^{k} exp[−η(AoP(x) − Pi)²]   (26)
where AoP(x) is the angle-of-polarization image, Pi, i = 1…k, represents the k dominant AoP values of the k reflection regions, and η is a positive regulation coefficient. Djoint(x) combines two polarization constraints: |I⊥(x) − I∥(x)|² for the OPC difference and ∑_{i=1}^{k} exp[−η(AoP(x) − Pi)²] for the uniformity of AoP. In Section 2.1 we obtained a preliminary reflection detection result Do(x), from which we compute the k dominant AoP values Pi of the k reflection regions using the statistical distribution of AoP. By setting an appropriate η, the term ∑_{i=1}^{k} exp[−η(AoP(x) − Pi)²] extracts the pixels of AoP(x) whose values are close to Pi, mapping pixels with dominant AoP values to values close to one and all others to values close to zero. During implementation, we normalize |I⊥(x) − I∥(x)|² to (0, 1); the joint detection results are shown in Fig. 9. Using the same threshold, we obtain the corresponding binary detection results [Figs. 9(c) and 9(d)] for detection based on the OPC difference alone and for our joint reflection detection method. The results in Fig. 9 show that the joint method yields a much more accurate detection, whereas using only the OPC difference may produce false detections caused by background noise.
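The joint detection map can be sketched as follows in Python; the value of η and all names here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def joint_detection(i_perp, i_par, aop, peaks, eta=50.0):
    """Joint reflection detection map: the normalized OPC difference
    gated by the uniformity-of-AoP term."""
    d_o = np.abs(i_perp - i_par) ** 2
    d_o = d_o / d_o.max()            # normalize the OPC difference to (0, 1]
    gate = np.zeros_like(d_o)
    for p in peaks:                  # k dominant AoP values P_i
        gate += np.exp(-eta * (aop - p) ** 2)
    return d_o * gate
```

Pixels whose AoP sits near a dominant value Pi keep their OPC-difference response, while pixels with a strong OPC difference but an off-peak AoP (typically background noise) are suppressed.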

Fig. 9 Joint reflection detection. (a) Difference of OPC. (b) Joint reflection detection result. (c) Binary result of (a). (d) Binary result of (b).

2.3 Reflection alpha matting

Although the joint reflection detection result is satisfactory, a few small holes remain in the reflection regions, and we want to further improve the robustness of the detection. Therefore, we combine the joint reflection detection result with image alpha matting for better reflection detection. As outlined in Table 1, the input image is a false-color fusion of the original infrared image, the OPC difference image |I⊥(x) − I∥(x)|², and the AoP image AoP(x). Similar to interactive image matting, we first specify reflection samples and non-reflection samples automatically from the joint reflection detection result by morphological erosion and dilation, and construct a trimap for the input image. With the trimap, we can extract the reflection alpha matte using the specified samples as constraints. More specifically, we set α = 1 for the specified reflection samples and α = 0 for the specified non-reflection samples. We then apply the closed-form matting method [23, 24] and minimize the following energy for the reflection alpha:

α = argmin_α αᵀLα + λ(α − bS)ᵀ DS (α − bS)   (27)
where L is the matting Laplacian matrix, DS is a diagonal matrix whose diagonal elements are one for constrained pixels and zero for all other pixels, and the vector bS contains the specified alpha values for the constrained pixels (one for reflection samples, zero for non-reflection samples) and zero for all other pixels. Minimizing the above energy yields the reflection alpha α.
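Setting the gradient of this quadratic energy to zero gives the linear system (L + λDS)α = λDS bS. The following Python sketch solves it densely for a tiny image, using a simple 4-neighbor graph Laplacian as a stand-in for the closed-form matting Laplacian of [23]; the stand-in and all names are our assumptions:

```python
import numpy as np

def grid_laplacian(h, w):
    """Combinatorial Laplacian of a 4-connected h x w pixel grid
    (a stand-in for the matting Laplacian, for illustration only)."""
    n = h * w
    L = np.zeros((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):  # right and down neighbors
                yy, xx = y + dy, x + dx
                if yy < h and xx < w:
                    j = yy * w + xx
                    L[i, i] += 1.0
                    L[j, j] += 1.0
                    L[i, j] -= 1.0
                    L[j, i] -= 1.0
    return L

def solve_reflection_alpha(L, constrained, b_s, lam=100.0):
    """Minimize a^T L a + lam (a - b_s)^T D_S (a - b_s) by solving
    the normal equations (L + lam D_S) a = lam D_S b_s."""
    D = np.diag(constrained.astype(float))   # ones on constrained pixels
    return np.linalg.solve(L + lam * D, lam * D @ b_s)
```

With the ends of a 1×3 strip constrained to α = 1 (reflection) and α = 0 (non-reflection), the unconstrained middle pixel settles near 0.5; the real matting Laplacian instead propagates the constraints according to local image statistics.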

Table 1. Algorithm outline of reflection alpha matting

With the computed reflection alpha matte α, we can identify the reflection regions of the input image. Specifically, given thresholds δ1 and δ2, pixels with α < δ1 are considered non-reflection pixels, pixels with α > δ2 are considered reflection pixels, and pixels with δ1 ≤ α ≤ δ2 are considered to lie on the reflection boundaries, where the alpha value usually changes dramatically. In our experiments, we set δ1 = 0.1 and δ2 = 0.9. Figure 10 shows the reflection matte result.

Fig. 10 Reflection matte result. (a) Joint reflection detection result. (b) Binary result of (a). (c) Reflection matte map, where the white regions represent reflection regions and the black regions are non-reflection regions.

Based on the above reflection detection results, the reflection can be removed from the scene using a background reference image (suitably created and updated from the video sequence). As shown in Fig. 11, we realize reflection removal by replacing the pixels in the reflection region with the corresponding pixels in the background reference image.

Fig. 11 Reflection removal result. (a) Original LWIR image. (b) Reflection matte map. (c) Background reference image. (d) Reflection removal result.

3. Experiment results

We tested the proposed joint reflection detection method on real division-of-focal-plane (DoFP) [25–27] LWIR polarization images captured using a DoFP LWIR polarization imager. In Fig. 12, we selected eight frames from a video in which a pedestrian walked through the scene; the detection results and the reflection removal results show that good qualitative results can be achieved with our method.

Fig. 12 Experiment results on real DoFP infrared polarization data. (a)-(h) represent different selected frames. The first row is the infrared intensity image, the second row is the reflection detection map, and the third row is the reflection removal result.

To quantitatively evaluate the performance of the proposed method, we generated ground truth reflection maps. For all ground truth reflection maps and our detected reflection maps, we consider pixels with values greater than 0.9 to belong to the reflection region and pixels with values smaller than 0.1 to belong to the non-reflection region. The detection error is then defined as

E = (Nover + Nunder) / Ntruth × 100%   (28)
where Nover denotes the number of pixels of the reflection region in the detected reflection map but not in the ground truth reflection map. Nunder is the number of pixels of the reflection region in the ground truth reflection map but not in our detected reflection map. Ntruth is the total number of pixels of the reflection region in the ground truth reflection map.
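This error can be computed directly from binary reflection maps; a small Python sketch (names are ours):

```python
import numpy as np

def detection_error(detected, truth):
    """Detection error E in percent, for boolean reflection maps:
    over-detected plus missed pixels over ground-truth pixels."""
    n_over = np.count_nonzero(detected & ~truth)   # detected but not true
    n_under = np.count_nonzero(~detected & truth)  # true but not detected
    return 100.0 * (n_over + n_under) / np.count_nonzero(truth)
```

For example, a detected map that misses one of four ground-truth reflection pixels and adds no spurious ones scores E = 25%.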

A similar index is defined as [12]

TH = ht / (hr + h) × 100%   (29)
where TH is the height of the true object normalized by the total height of the bounding box (see Fig. 13), ht is the true height of the object, hr is the height of the reflection, and h is the height of the object after reflection removal, with h = ht before reflection removal. It is possible that ht > hr + h if the algorithm completely removes the reflection but also cuts away part of the object, in which case the reciprocal of TH is applied. Figure 14 shows the ground truth reflection maps and the detected reflection maps. The corresponding detection errors and values of TH are listed in Table 2. To show the performance of the proposed method more intuitively, we further plot the detection error and TH in separate charts in Fig. 15. The detection errors of the selected frames are all smaller than 6%, and TH (the proportion of the target) is significantly improved after reflection removal, which demonstrates the accuracy and effectiveness of the proposed reflection detection method.
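The TH index can be sketched as a small helper following the definition above, including the reciprocal rule for over-cut objects (a hypothetical function of ours, not the paper's code):

```python
def th_index(h_t, h_r, h):
    """TH: true object height h_t over the bounding-box height h_r + h,
    in percent. When the algorithm cuts away part of the object
    (h_t > h_r + h), the reciprocal is used so TH stays at most 100%."""
    th = 100.0 * h_t / (h_r + h)
    return 100.0 * (h_r + h) / h_t if th > 100.0 else th

# before removal (h = h_t): the reflection inflates the bounding box
print(th_index(100.0, 40.0, 100.0))  # ~71.4%
# perfect removal: h_r = 0 and h = h_t
print(th_index(100.0, 0.0, 100.0))   # 100.0%
```

A perfect removal scores 100%, while both a remaining reflection and an over-cut object pull the score below 100%, which makes TH a symmetric quality index.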

Fig. 13 Detection before and after reflection removal.

Fig. 14 The ground truth reflection maps and our detected reflection maps. (a)-(h) represent different selected frames. The first row is the infrared intensity image, the second row is the ground truth reflection map, and the third row is the corresponding detected reflection map.

Table 2. Performance of the proposed reflection removal method

Fig. 15 (a) The detection error. (b) The TH before and after reflection removal.

In addition, we compare our method with three other polarization-based reflection removal methods and a blind reflection removal method: the MI-based [4], the ICA-based [5], and the EO photon separation [17] reflection removal methods, and the SPICA-based (sparse independent component analysis based) [6] blind reflection removal method. The results are shown in Fig. 16. The ICA-based method cannot remove the reflections effectively and also reduces the contrast between the target and the background. The MI-based method obtains better results than the ICA-based method, but it also removes some details of the background and generates artifacts in the reflection regions. The EO method is considered a more truthful separation method in [17], yet it cannot remove the reflection completely. The patch-wise separation [17] may obtain better results, but it is much more time consuming, which limits its application in surveillance systems. The SPICA-based blind reflection removal method retains better contrast between the target and the background, but some reflections still remain in the scene. The robustness of blind source separation is highly dependent on the correlation of the underlying components, and its performance is therefore limited by the inherent noise in the different components. These methods focus on the case of a transparent surface, where an image reflected by the surface may be superimposed on the image of the observed object over most or even all of the field of view. In the case of an optically semi-diffuse surface such as the floors in this paper, however, the reflection of interest is a small part of the scene, and these separation techniques may produce erroneous separation results in the original objects and the background.
Moreover, the first two polarization-based methods [4, 5] both assume that the reflected and the transmitted layers are independent, which may not hold in all scenes, for example, in the separation of objects that are similar in shape and intensity. In contrast, our method can detect the reflections accurately and then remove them from the original scene without changing the object or the background.

Fig. 16 Comparison of different methods. (a) Image without reflection removal. (b) The ICA-based reflection removal results. (c) The SPICA-based reflection removal results. (d) The MI-based reflection removal results. (e) EO photon separation results. (f) Our method. Different rows represent different frames.

Current mainstream reflection removal methods in surveillance applications basically focus on the evaluation of the baseline that vertically separates the target and its reflection [14, 15]. However, those methods may fail when the scene contains several pedestrians with occlusion between them, or when the pedestrian and its reflection are not vertically aligned (for instance, horizontally when a person is near a glass wall, or in any other direction). First, we captured video data of scenes containing several persons in three situations: three persons standing on different baselines, two persons walking toward each other with one occluding the other, and the combination of the two. Figure 17 shows that the proposed method gives accurate reflection detection and removal results in all three situations. Then, we tested our method when the object and its reflection are not vertically aligned: a person standing near a glass wall, and the DoFP infrared camera tilted at an angle (neither vertical nor horizontal) about the optical axis. The results of this experiment are shown in Fig. 18, and our method also generates accurate reflection detection and removal results in all these situations. Therefore, our method applies to complex situations in which current mainstream reflection removal methods may fail.

Fig. 17 Experiment on complex situations containing several persons. (a) Three persons standing at different positions. (b) Two persons, one blocked by the other; the reflection region outlined in red belongs to the front person and the reflection region outlined in yellow belongs to the other. (c) The combination of the above two cases. The first row is the infrared intensity image, where the green dotted lines are baselines, the second row is the reflection detection map, and the third row is the reflection removal result.

Fig. 18 Experiments on complex situations in which the camera is not vertical. (a) Infrared intensity image, where the green dotted lines are baselines. (b) Reflection detection map. (c) Reflection removal results. In the first row the camera is slightly tilted, and in the second row a person sits near a glass wall.

4. Conclusion

This paper presents the removal of image reflections generated by objects on a reflecting medium in the LWIR spectrum using the polarization properties of reflection. We first locate the reflection regions using the difference between the two orthogonal polarized components. We then propose the uniformity of AoP in the reflection region and demonstrate it through theoretical and experimental analysis. Based on the uniformity of AoP and the OPC difference, we develop a joint reflection detection method that produces a detection result nearly identical to the true reflection region. Finally, we apply the closed-form matting method to obtain a more robust detection of the reflection, and remove the reflection while guaranteeing that the recovered emission energy is consistent with that of the surrounding environment. We tested our joint reflection detection method on real DoFP LWIR polarization images obtained by our own DoFP LWIR polarization imager under both conventional and complex situations, and the results are satisfactory.

Funding

National Natural Science Foundation of China (NSFC) (61771391, 61371152); National Natural Science Foundation of China and Korea National Research Foundation Joint Funded Cooperation Program (61511140292); Korean National Research Foundation (NRF-2016R1D1A1B01008522).

References and links

1. C. Dai, Y. Zheng, and X. Li, “Layered representation for pedestrian detection and tracking in infrared imagery,” Comput. Vis. Image Underst. 106, 288–299 (2007). [CrossRef]  

2. Z. Li, Q. Wu, J. Zhang, and G. Geers, “SKRWM based descriptor for pedestrian detection in thermal images,” in Proceedings of IEEE Workshop on Multimedia Signal Processing (IEEE, 2011), pp. 1–6. [CrossRef]  

3. H. Torresan, B. Turgeon, C. Ibarra-Castanedo, P. Hebert, and X. P. Maldague, “Advanced surveillance systems: combining video and thermal imagery for pedestrian detection,” Proc. SPIE 5405, 506–516 (2004). [CrossRef]  

4. Y. Schechner, J. Shamir, and N. Kiryuati, “Polarization-based decorrelation of transparent layers: the inclination angle of an invisible surface,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1999), pp. 814–819. [CrossRef]  

5. H. Farid and E. H. Adelson, “Separating reflections from images by use of independent component analysis,” J. Opt. Soc. Am. A 16(9), 2136–2145 (1999). [CrossRef]   [PubMed]  

6. A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse ICA for blind separation of transmitted and reflected images,” Int. J. Imaging Syst. Technol. 15(1), 84–91 (2005). [CrossRef]  

7. R. Wan, B. Shi, T. A. Hwee, and A. C. Kot, “Depth of field guided reflection removal,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2016), pp. 21–25.

8. T. Sirinukulwattana, G. Choe, and I. S. Kweon, “Reflection removal using disparity and gradient-sparsity via smoothing algorithm,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2015), pp. 1940–1944. [CrossRef]  

9. Y. Li and M. S. Brown, “Exploiting reflection change for automatic reflection removal,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2013), pp. 2432–2439. [CrossRef]  

10. A. Teschioni and C. S. Regazzoni, “A robust method for reflections analysis in color image sequences,” in Signal Processing Conference (EUSIPCO 1998) (IEEE, 1998), pp. 1–4.

11. E. J. Carmona, J. Martínez-Cantos, and J. Mira, “A new video segmentation method of moving objects based on blob-level knowledge,” Pattern Recognit. Lett. 29(3), 272–285 (2008). [CrossRef]  

12. D. Conte, P. Foggia, G. Percannella, F. Tufano, and M. Vento, “Reflection removal in colour videos,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2010), pp. 1788–1791.

13. T. Zhao and R. Nevatia, “Tracking multiple humans in complex situations,” IEEE Trans. Pattern Anal. Mach. Intell. 26(9), 1208–1221 (2004). [CrossRef]   [PubMed]  

14. D. Conte, P. Foggia, G. Percannella, and M. Vento, “Removing object reflections in videos by global optimization,” IEEE Trans. Circ. Syst. Video Tech. 22(11), 1623–1633 (2012). [CrossRef]  

15. M. Karaman, L. Goldmann, and T. Sikora, “Improving object segmentation by reflection detection and removal,” Proc. SPIE 7257, 725709 (2009). [CrossRef]  

16. S. V. U. Ha, N. T. Pham, L. H. Pham, and H. M. Tran, “Robust Reflection Detection and Removal in Rainy Conditions using LAB and HSV Color Spaces,” REV J. Electron. Commun. 6, 13–19 (2016).

17. Y. Ding, A. Ashok, and S. Pau, “Real-time robust direct and indirect photon separation with polarization imaging,” Opt. Express 25(23), 29432–29453 (2017). [CrossRef]  

18. D. H. Goldstein, Polarized Light, 2nd ed. (Marcel Dekker, Inc, 2003).

19. J. S. Tyo, B. M. Ratliff, J. K. Boger, W. T. Black, D. L. Bowers, and M. P. Fetrow, “The effects of thermal equilibrium and contrast in LWIR polarimetric images,” Opt. Express 15(23), 15161–15167 (2007). [CrossRef]   [PubMed]  

20. T. J. Rogne, F. G. Smith, and J. E. Rice, “Passive target detection using polarized components of infrared signatures,” Proc. SPIE 1317, 242–252 (1990). [CrossRef]  

21. E. Hecht, Optics, 4th ed. (Pearson Education, 2002).

22. N. Kong, Y. W. Tai, and J. S. Shin, “A physically-based approach to reflection separation: from physical modeling to constrained optimization,” IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 209–221 (2014). [CrossRef]   [PubMed]  

23. L. Zhang, Q. Zhang, and C. Xiao, “Shadow remover: Image shadow removal based on illumination recovering optimization,” IEEE Trans. Image Process. 24(11), 4623–4636 (2015). [CrossRef]   [PubMed]  

24. A. Levin, D. Lischinski, and Y. Weiss, “A closed-form solution to natural image matting,” IEEE Trans. Pattern Anal. Mach. Intell. 30(2), 228–242 (2008). [CrossRef]   [PubMed]  

25. Y. Zhao, Multi-band Polarization Imaging and Applications. (Springer, 2016).

26. V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express 18(18), 19087–19094 (2010). [CrossRef]   [PubMed]  

27. C. Xu, J. Ma, C. Ke, Y. Huang, Z. Zeng, and W. Weng, “Numerical study of a DoFP polarimeter based on the self-organized nanograting array,” Opt. Express 26(3), 2517–2527 (2018). [CrossRef]   [PubMed]  

Cited By

OSA participates in Crossref's Cited-By Linking service. Citing articles from OSA journals and other participating publishers are listed here.

Alert me when this article is cited.


Figures (18)

Fig. 1 Procedure of the proposed reflection removal scheme. (a) Original LWIR image. (b) OPC difference. (c) Joint reflection detection result. (d) Reflection matte result. (e) Reflection removal result.
Fig. 2 Model of how reflection is formed in LWIR; the measured polarization signature is considered to be primarily due to reflections from the surface of the medium.
Fig. 3 The two reflectance components polarized perpendicular and parallel to the plane of incidence vary with the angle of incidence, and the two components differ at most angles of incidence (the refractive index of common marble tiles is about 6.74 in the LWIR spectrum).
Fig. 4 Reflection detection based on the OPC difference. (a) Perpendicular component I⊥. (b) Parallel component I∥. (c) OPC difference. (d) Binary result of (c).
Fig. 5 The AoP of the reflection region varies with respect to θ and δ.
Fig. 6 The target for the study of AoP in the reflection region is a planar blackbody, with a marble tile placed in front of it as the reflective medium.
Fig. 7 Experiments on the uniformity of AoP under different viewing angles. (a)-(d) correspond to viewing angles of 50, 60, 70, and 80 degrees, respectively. The first row shows the original infrared image, the second row the AoP image, and the third row the corresponding statistics of the distribution of AoP in the reflection regions (regions inside the red circles).
Fig. 8 Experiment on the uniformity of AoP under different blackbody temperatures. (a) Original infrared image; (b)-(f) AoP images at 40, 50, 60, 70, and 80 °C, respectively. (g) Statistics of the distribution of AoP in the reflection regions at the different blackbody temperatures, plotted in a single chart.
Fig. 9 Joint reflection detection. (a) OPC difference. (b) Joint reflection detection result. (c) Binary result of (a). (d) Binary result of (b).
Fig. 10 Reflection matte result. (a) Joint reflection detection result. (b) Binary result of (a). (c) Reflection matte map, where white regions represent reflection regions and black regions are non-reflection regions.
Fig. 11 Reflection removal result. (a) Original LWIR image. (b) Reflection matte map. (c) Background reference image. (d) Reflection removal result.
Fig. 12 Experimental results on real DoFP infrared polarization data. (a)-(h) show different selected frames. The first row is the infrared intensity image, the second row the reflection detection map, and the third row the reflection removal result.
Fig. 13 Detection before and after reflection removal.
Fig. 14 Ground-truth reflection maps and our detected reflection maps. (a)-(h) show different selected frames. The first row is the infrared intensity image, the second row the ground-truth reflection map, and the third row the corresponding detected reflection map.
Fig. 15 (a) Detection error. (b) TH before and after reflection removal.
Fig. 16 Comparison of different methods. (a) Image without reflection removal. (b) ICA-based reflection removal results. (c) SPICA-based reflection removal results. (d) MI-based reflection removal results. (e) EO photon separation results. (f) Our method. Different rows show different frames.
Fig. 17 Experiment on complex situations containing several persons. (a) Three persons standing in different positions. (b) Two persons, one blocked by the other; the reflection region outlined in red belongs to the front person and the region outlined in yellow to the other. (c) Combination of the two cases above. The first row is the infrared intensity image, where the green dotted lines are baselines; the second row is the reflection detection map; the third row is the reflection removal result.
Fig. 18 Experiments on complex situations where the camera is not vertical. (a) Infrared intensity image, where the green dotted lines are baselines. (b) Reflection detection map. (c) Reflection removal results. In the first row the camera is slightly tilted; in the second row a person sits near a glass wall.

Tables (2)

Table 1 Algorithm outline of reflection alpha matting

Table 2 Performance of the proposed reflection removal method

Equations (29)


$$L_\perp(\theta) = R_\perp + E_\perp = P(T_t)\, r_\perp(\theta) + P(T_m)\, \varepsilon_\perp(\theta)$$
$$L_\parallel(\theta) = R_\parallel + E_\parallel = P(T_t)\, r_\parallel(\theta) + P(T_m)\, \varepsilon_\parallel(\theta)$$
$$L_\perp(\theta) \approx R_\perp = P(T_t)\, r_\perp(\theta)$$
$$L_\parallel(\theta) \approx R_\parallel = P(T_t)\, r_\parallel(\theta)$$
$$r_\perp(\theta, n) = \frac{\sin^2(\theta - \theta_t(\theta, n))}{\sin^2(\theta + \theta_t(\theta, n))} \quad\mathrm{and}\quad r_\parallel(\theta, n) = \frac{\tan^2(\theta - \theta_t(\theta, n))}{\tan^2(\theta + \theta_t(\theta, n))}$$
$$\phi_1 - \phi(x) = \frac{1}{2}\tan^{-1}\!\left(\frac{I_1(x) + I_3(x) - 2 I_2(x)}{I_1(x) - I_3(x)}\right)$$
$$I_\perp(x) = \frac{I_1(x) + I_3(x)}{2} + \frac{I_1(x) - I_3(x)}{2\cos 2[\phi_1 - \phi(x)]}$$
$$I_\parallel(x) = \frac{I_1(x) + I_3(x)}{2} - \frac{I_1(x) - I_3(x)}{2\cos 2[\phi_1 - \phi(x)]}$$
$$D_o(x) = \left|I_\perp(x) - I_\parallel(x)\right|^2$$
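The OPC equations above can be sketched in code. The following is a minimal NumPy implementation, assuming `I1`, `I2`, `I3` are intensity images taken behind polarizers at orientations φ₁, φ₁+45°, and φ₁+90°; the array names and the small epsilon guard against division by near-zero cosines are our own additions, not from the paper.

```python
import numpy as np

def opc_difference(I1, I2, I3):
    """Sketch of the OPC difference D_o from three polarizer measurements."""
    I1, I2, I3 = (np.asarray(a, dtype=float) for a in (I1, I2, I3))
    # Angle between the first polarizer orientation and the AoP: phi1 - phi(x).
    num = I1 + I3 - 2.0 * I2
    den = I1 - I3
    ang = 0.5 * np.arctan2(num, den)
    c = np.cos(2.0 * ang)
    c = np.where(np.abs(c) < 1e-6, 1e-6, c)  # guard against division by ~0
    mean = 0.5 * (I1 + I3)
    half = 0.5 * (I1 - I3) / c
    I_perp = mean + half   # perpendicular component
    I_par = mean - half    # parallel component
    # OPC difference used to locate candidate reflection regions.
    Do = np.abs(I_perp - I_par) ** 2
    return I_perp, I_par, Do
```

For unpolarized input (I1 = I2 = I3) the two components coincide and D_o vanishes, as expected.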
$$E_{R\perp}(t) = E_{0\perp}\exp[i(\omega t + \delta_\perp)]$$
$$E_{R\parallel}(t) = E_{0\parallel}\exp[i(\omega t + \delta_\parallel)]$$
$$E_{0\perp} = R_\perp = P(T_t)\, r_\perp(\theta)$$
$$E_{0\parallel} = R_\parallel = P(T_t)\, r_\parallel(\theta)$$
$$S_{0R} = \cos\theta\,(E_{R\perp}E_{R\perp}^{*} + E_{R\parallel}E_{R\parallel}^{*})$$
$$S_{1R} = \cos\theta\,(E_{R\perp}E_{R\perp}^{*} - E_{R\parallel}E_{R\parallel}^{*})$$
$$S_{2R} = \cos\theta\,(E_{R\perp}E_{R\parallel}^{*} + E_{R\parallel}E_{R\perp}^{*})$$
$$S_{3R} = i\cos\theta\,(E_{R\perp}E_{R\parallel}^{*} - E_{R\parallel}E_{R\perp}^{*})$$
With the relative phase $\delta = \delta_\parallel - \delta_\perp$, these become
$$S_{0R} = \cos\theta\,(E_{0\perp}^2 + E_{0\parallel}^2) = \cos\theta\,(R_\perp^2 + R_\parallel^2)$$
$$S_{1R} = \cos\theta\,(E_{0\perp}^2 - E_{0\parallel}^2) = \cos\theta\,(R_\perp^2 - R_\parallel^2)$$
$$S_{2R} = \cos\theta\,(2 E_{0\perp} E_{0\parallel}\cos\delta) = \cos\theta\,(2 R_\perp R_\parallel\cos\delta)$$
$$S_{3R} = \cos\theta\,(2 E_{0\perp} E_{0\parallel}\sin\delta) = \cos\theta\,(2 R_\perp R_\parallel\sin\delta)$$
$$AoP_R = \frac{1}{2}\tan^{-1}\!\left(\frac{S_{2R}}{S_{1R}}\right)$$
$$AoP_R = \frac{1}{2}\tan^{-1}\!\left(\frac{\cos\theta\,(2 R_\perp R_\parallel\cos\delta)}{\cos\theta\,(R_\perp^2 - R_\parallel^2)}\right) = \frac{1}{2}\tan^{-1}\!\left(\frac{2 R_\perp R_\parallel\cos\delta}{R_\perp^2 - R_\parallel^2}\right)$$
$$AoP_R = \frac{1}{2}\tan^{-1}\!\left(\frac{2 (R_\perp / R_\parallel)\cos\delta}{(R_\perp / R_\parallel)^2 - 1}\right)$$
$$AoP_R = \frac{1}{2}\tan^{-1}\!\left(\frac{2 (r_\perp(\theta) / r_\parallel(\theta))\cos\delta}{(r_\perp(\theta) / r_\parallel(\theta))^2 - 1}\right)$$
$$D_{joint}(x) = \left|I_\perp(x) - I_\parallel(x)\right|^2 \sum_{i=1}^{k}\exp\!\left[-\eta\,(AoP(x) - P_i)^2\right]$$
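The joint detection score combines the OPC difference with a sum of Gaussian kernels centred on the dominant AoP values. A minimal sketch, assuming `I_perp`/`I_par` are the two orthogonal polarized components, `aop` is the per-pixel AoP map, and `peaks` holds the k dominant AoP values P_i of the reflection region; the default `eta` and the choice of peak-finding are assumptions, not values from the paper.

```python
import numpy as np

def joint_detection(I_perp, I_par, aop, peaks, eta=50.0):
    """Sketch of D_joint: OPC difference weighted by AoP uniformity."""
    I_perp = np.asarray(I_perp, dtype=float)
    I_par = np.asarray(I_par, dtype=float)
    aop = np.asarray(aop, dtype=float)
    # OPC difference term.
    Do = np.abs(I_perp - I_par) ** 2
    # Sum of Gaussian kernels: pixels whose AoP sits near a dominant
    # value P_i keep their score; others are suppressed.
    weight = np.zeros_like(aop)
    for p in peaks:
        weight += np.exp(-eta * (aop - p) ** 2)
    return Do * weight
```

A pixel with a large OPC difference but an AoP far from every peak is thereby down-weighted, which is what lets the joint detector reject non-reflection clutter.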
$$\alpha = \arg\min_{\alpha}\; \alpha^{T} L \alpha + \lambda\,(\alpha^{T} - b_S^{T})\, D_S\, (\alpha - b_S)$$
$$E = \frac{N_{over} + N_{under}}{N_{truth}} \times 100\%$$
$$TH = \frac{h_t}{h_r + h_t} \times 100\%$$
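The detection-error metric is simple pixel arithmetic, and can be computed as below; the function and argument names are our own, where `n_over` and `n_under` are the counts of over- and under-detected reflection pixels and `n_truth` is the ground-truth reflection pixel count.

```python
def detection_error(n_over, n_under, n_truth):
    """Detection error E (in percent): over- plus under-detected
    reflection pixels, relative to the ground-truth pixel count."""
    return (n_over + n_under) / n_truth * 100.0
```

For example, 5 over-detected and 5 under-detected pixels against 100 ground-truth pixels give E = 10%.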
