
Angle of polarization calibration for omnidirectional polarization cameras

Open Access

Abstract

Polarization cameras quantify one of the fundamental properties of light and capture intrinsic properties of the imaged environment that are otherwise omitted by color sensors. Many polarization applications, such as underwater geolocalization and sky-based polarization compasses, require simultaneous imaging of the entire radial optical field with omnidirectional lenses. However, the reconstructed angle of polarization captured with omnidirectional lenses has a radial offset due to the redirection of light rays within these lenses. In this paper, we describe a calibration method for correcting angle of polarization images captured with omnidirectional lenses. Our calibration method reduces the standard deviation of the reconstructed angle of polarization from 76.2$^\circ$ to 4.1$^\circ$. Example images collected both on an optical bench and in nature demonstrate the improved accuracy of the reconstructed angle of polarization with our calibration method. The improved accuracy of the angle of polarization images will aid the development of polarization-based applications with omnidirectional lenses.

© 2023 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Single-chip polarization imaging sensors for the visible spectrum were first reported in 2010 and have been commercially available since 2018 [1–3]. Even though silicon-based imaging sensors have been around since the 1960s, it took over 50 years of research and development in micro- and nano-fabrication to realize polarization imagers [4,5]. These imaging devices require a unique set of fabrication steps to integrate pixelated polarization filters with an array of imaging elements. The pixelated polarization filters are comprised of metallic nanowires with sub-wavelength dimensions. For example, to image polarization in the blue spectrum (i.e., $\sim$450 nm), the width of the aluminum nanowires in the polarization filter should be at most one fifth of the imaged wavelength (i.e., $\sim$90 nm). Furthermore, the nanowires' height-to-width aspect ratio should be at least 2:1, with air gaps between individual wires, to achieve extinction ratios above 100. These fabrication requirements have only become attainable over the last decade [6]. With the rapid dissemination of polarization imaging technology, many new applications have emerged, ranging from early cancer detection to monitoring crop viability and underwater geolocalization [7–14].

Current state-of-the-art polarization imaging sensors utilize a super-pixel configuration [1–3]. Each super-pixel is comprised of four individual pixels that filter the incoming light with polarization filters oriented at 0$^\circ$, 45$^\circ$, 90$^\circ$ and 135$^\circ$, respectively. Since these pixels are adjacent to each other, the angle of polarization (AoP) and degree of linear polarization (DoLP) can be reconstructed via the following equations:

$$\begin{bmatrix} S_0\\S_1\\S_2 \end{bmatrix} = \begin{bmatrix} I_0 + I_{90}\\I_0 - I_{90}\\I_{45}-I_{135} \end{bmatrix},$$
$$AoP = \frac{1}{2} \tan^{-1}\left(\frac{S_2}{S_1}\right),$$
$$DoLP = \frac{\sqrt{S_2^2+S_1^2}}{S_0}.$$

In Eq. (1), the first three Stokes parameters $S_0$, $S_1$ and $S_2$ are computed from the four pixels equipped with individual pixelated polarization filters, denoted as $I_0$ through $I_{135}$. The AoP and DoLP for each super-pixel are computed via Eqs. (2) and (3), respectively. Note that these reconstruction equations assume that the instantaneous field of view is the same across the super-pixel, which might not always be the case. Interpolation algorithms partially mitigate this issue and improve both the spatial resolution and the accuracy of the reconstructed polarization information [15,16].
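For concreteness, Eqs. (1)–(3) translate directly into a few lines of NumPy. The sketch below is illustrative rather than a vendor pipeline: it assumes the four filter-aligned sub-images have already been extracted (or interpolated [15,16]) from the raw mosaic as floating-point arrays, and it uses `arctan2` so that the signs of $S_1$ and $S_2$ resolve the AoP quadrant automatically.

```python
import numpy as np

def reconstruct_polarization(i0, i45, i90, i135):
    """Per-super-pixel Stokes parameters, AoP and DoLP (Eqs. (1)-(3)).

    i0..i135 are float arrays of the pixel responses behind the
    0, 45, 90 and 135 degree polarization filters.
    """
    s0 = i0 + i90                       # total intensity
    s1 = i0 - i90                       # 0/90 degree difference
    s2 = i45 - i135                     # 45/135 degree difference
    aop = 0.5 * np.arctan2(s2, s1)      # radians, in (-pi/2, pi/2]
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # guard S0 = 0
    return s0, s1, s2, aop, dolp
```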

In many polarization-based remote sensing applications, such as underwater geolocalization or sky-based compassing, real-time sampling of the radial field of view is required [9,10]. Hence, polarization sensors must be equipped with omnidirectional lenses, which in turn requires calibration for correct reconstruction of AoP images. Calibration of omnidirectional lenses has been explored for both grayscale and color images [17]. However, the calibration process for omnidirectional polarization sensors is fundamentally different from the one for color and grayscale omnidirectional cameras.

Polarization information is a vector quantity typically represented as a Stokes vector. The angle of polarization, computed from the Stokes vectors, describes the main angle of oscillation as light propagates in time and space. AoP is defined with respect to a reference coordinate system, which is typically the focal plane of the polarization sensor. Thus the interpretation of polarization information may require a lens-dependent calibration method. In contrast, color and grayscale images are scalar quantities, and therefore they do not suffer from reference coordinate system-induced offsets. In this paper, we describe a calibration method for polarization cameras equipped with omnidirectional lenses. Our calibration method is validated on calibrated targets reproduced in lab settings and on images collected in the field.

Although several previous works [18–22] also discuss polarization camera calibration, they mainly focus on improving polarization accuracy for a single sensor equipped with regular lenses. In this paper, we show that when an omnidirectional lens is used, the whole hemispherical field of view suffers from an inherent radial offset, which must be calibrated to recover the true AoP differences across the frame.

2. Singlet vs. omnidirectional lenses

The angle of polarization is affected differently by singlet and omnidirectional lenses. To understand these differences, let’s first consider a smooth object (for example, a cup) being illuminated from all directions and imaged with a singlet lens with a limited field of view. Specifically, we consider a single point $A$ on the object with a surface normal $\mathbf {n}$. The AoP for all light beams reflected or scattered from this point will be parallel to the plane defined by the surface normal, although their degrees of linear polarization will differ. Depending on the lens aperture, a portion of the reflected light beams from this point will be focused at one point on the polarization sensor’s focal plane. Since all reflected light beams from point $A$ have the same AoP, the reconstructed AoP at the focal plane accurately reflects the true AoP at point $A$. Figure 1(a) visualizes the process described above. This high-level and intuitive explanation of the polarization effects in lenses can also be validated via the Jones or Mueller matrix formalism [23].

Fig. 1. (a) When imaging a scene with regular lenses, all reflected light beams from point $A$ have the same AoP and they are focused at one point on the sensor’s imaging plane. Since the field of view is relatively narrow, we can reconstruct the true AoP value from the sensor readings. (b) In contrast, an omnidirectional polarization camera receives light from all directions. As the angular offset between the light beam and the focal plane’s coordinate system varies, the corresponding super-pixel is activated differently for the same incident AoP. (c) As a result, the reconstructed AoP has an inherent radial offset dependent on each pixel’s heading. The radial offset is shown in false color, where red and blue indicate 0$^\circ$ and 90$^\circ$ offset, respectively.

Next, let’s consider a scene comprised of three cups placed 45$^\circ$ apart, illuminated from all directions and imaged by a polarization sensor equipped with an omnidirectional lens. We simplify the analysis by considering only three coplanar reflected waves from three identical points on the individual cups. The reflected light from these three points will be partially linearly polarized, with the dominant axis of oscillation (i.e., angle of polarization) parallel to the plane defined by the surface normal at these points. Hence, the three reflected waves have the same angle of polarization but different propagation directions in the same plane, as shown in Fig. 1(b). These reflected light beams are then redirected to the focal plane by the omnidirectional lens and recorded by three super-pixels, respectively. The three beams will have maximum response at the 0$^\circ$, 45$^\circ$ and 90$^\circ$ polarization pixels, respectively, because their propagation directions are offset by 45$^\circ$ in the reference plane of the image sensor. As a result, the reconstructed AoP for the three beams, with respect to the coordinate system defined by the focal plane of the imager, will have an offset of 45$^\circ$ with respect to each other even though their original AoP states are identical.

To summarize, when using an omnidirectional lens with a polarization sensor, the reconstructed AoP at pixel location $(i,j)$ will have an inherent radial offset with respect to the focal plane of the camera, henceforth denoted $\phi _{i,j}$. Therefore, to correct this inherent offset, the measured Stokes vector must be rotated with respect to the reference frame by the angle $\phi _{i,j}$, as shown in Eqs. (4) through (6):

$$\phi_{i,j} = \tan^{-1}\left(\frac{w/2-j}{h/2-i}\right),$$
$$M_{i,j} = \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & \cos(2\phi_{i,j}) & \sin(2\phi_{i,j}) & 0\\ 0 & -\sin(2\phi_{i,j}) & \cos(2\phi_{i,j}) & 0\\ 0 & 0 & 0 & 1 \end{bmatrix},$$
$$S'_{i,j} = M_{i,j}S_{i,j}.$$

In these equations, $h$ and $w$ are the height and width of the captured frame, $M_{i,j}$ is the Mueller matrix for rotation with respect to the reference frame at pixel coordinates $(i,j)$, and $S_{i,j}$ and $S'_{i,j}$ are the Stokes vectors before and after the calibration, respectively. The calibrated Stokes vector can be expanded based on the equations above into the forms below:

$$S'_0 = S_0,$$
$$S'_1 = \cos(2\phi_{i,j})S_1 + \sin(2\phi_{i,j})S_2,$$
$$S'_2 ={-}\sin(2\phi_{i,j})S_1 + \cos(2\phi_{i,j})S_2,$$
$$S'_3 = S_3.$$

The calibrated AoP can be computed by substituting Eqs. (8) and (9) into Eq. (2); using $S_2/S_1 = \tan(2AoP_{i,j})$, which follows from Eq. (2), the expression simplifies as follows:

$$\begin{aligned}AoP'_{i,j} &= \frac{1}{2}\tan^{-1}\left(\frac{S'_2}{S'_1}\right)\\ &= \frac{1}{2}\tan^{-1}\frac{-\sin(2\phi_{i,j})S_1 + \cos(2\phi_{i,j})S_2}{\cos(2\phi_{i,j})S_1 + \sin(2\phi_{i,j})S_2}\\ &= \frac{1}{2}\tan^{-1}\frac{-\tan(2\phi_{i,j}) + S_2/S_1}{1 + \tan(2\phi_{i,j})S_2/S_1}\\ &= \frac{1}{2}\tan^{-1}\frac{\tan(2AoP_{i,j}) - \tan(2\phi_{i,j})}{1 + \tan(2\phi_{i,j})\tan(2AoP_{i,j})}\\ &= \frac{1}{2}\tan^{-1}(\tan(2AoP_{i,j}-2\phi_{i,j}))\\ &= AoP_{i,j} - \phi_{i,j}. \end{aligned}$$

The calibrated DoLP can also be computed by substituting Eqs. (7) through (9) into Eq. (3) with the final result shown by Eq. (12):

$$\begin{aligned}DoLP'_{i,j} &= \frac{\sqrt{{S'_2}^2+{S'_1}^2}}{S'_0}\\ &= \frac{\sqrt{(-\sin(2\phi_{i,j})S_1 + \cos(2\phi_{i,j})S_2)^2 + (\cos(2\phi_{i,j})S_1 + \sin(2\phi_{i,j})S_2)^2}}{S_0}\\ &= \frac{\sqrt{S_2^2+S_1^2}}{S_0}\\ &= DoLP_{i,j}. \end{aligned}$$

From Eqs. (7) through (12), we can make a few observations. The intensity and degree of linear polarization are not affected by the calibration routine, while the angle of polarization receives a per-pixel correction. Figure 1(c) depicts the inherent angular offset across the image, assuming the center of the omnidirectional lens coincides with the camera’s optical center. The angular offset is presented in false color, where red denotes 0$^\circ$ offset, yellow denotes 45$^\circ$ offset and light blue denotes 90$^\circ$ offset. We conclude that the calibration pattern depends only on pixel coordinates and is independent of the incident light’s Stokes parameters and illumination direction. Hence, our calibration method works for both active and passive (i.e., natural) illumination.
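In practice, the entire correction of Eqs. (4)–(11) reduces to building the offset map $\phi_{i,j}$ once and applying it to every frame. Below is a minimal sketch, assuming (as in Fig. 1(c)) that the lens center coincides with the frame center $(h/2, w/2)$; the quadrant-aware `arctan2` stands in for the $\tan^{-1}$ of Eq. (4).

```python
import numpy as np

def radial_offset_map(h, w):
    """Per-pixel radial offset phi_{i,j} of Eq. (4); assumes the lens
    center coincides with the frame center (h/2, w/2)."""
    i, j = np.mgrid[0:h, 0:w]
    return np.arctan2(w / 2.0 - j, h / 2.0 - i)

def calibrate_stokes(s1, s2, phi):
    """Rotate (S1, S2) by 2*phi per Eqs. (8) and (9); S0 and S3 are
    unchanged (Eqs. (7) and (10))."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return c * s1 + s * s2, -s * s1 + c * s2

def calibrate_aop(aop, phi):
    """Equivalent shortcut of Eq. (11): AoP' = AoP - phi, wrapped back
    into a half-turn range."""
    return (aop - phi + np.pi / 2) % np.pi - np.pi / 2
```

Because the map depends only on pixel coordinates, it can be precomputed once per sensor resolution and reused for every frame, regardless of scene content or illumination.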

3. Validation of omnidirectional lens calibration

To validate our empirically derived offset for the angle of polarization image, we performed a series of experiments on an optical bench (Fig. 2). We used a FLIR polarization camera (model BFS-U3-51S5P-C, using Sony’s IMX250MZR sensor) equipped with an omnidirectional lens (Fujinon FE185C057HA-1). An LED light source (Thorlabs M530L4) was coupled to an integrating sphere (Newport 819D-IS-5.3) to produce uniform and depolarized light. At the output port of the integrating sphere, a linear polarization filter (Thorlabs LPVISA100-MP) was placed on a computer-controlled rotational stage (Thorlabs HDR50). The FLIR camera was placed on a secondary, computer-controlled rotational stage. The two rotational stages allow us to control the incident angle of polarization as well as the camera’s heading angle. Hence, we can illuminate a limited but well-defined spot on the omnidirectional lens with different angles of polarization. This experiment allows us to perform a comprehensive evaluation of the radial offset in the angle of polarization across the lens. We align the integrating sphere, the polarization filter and the camera to the same height so that the captured frame displays a single bright spot near the periphery of the field of view. Figure 2 shows the experimental setup. During actual data collection, the setup is covered with black aluminum foil to block out environmental light.

Fig. 2. Depiction of our imaging setup for evaluating the radial offset in the reconstructed angle of polarization recorded with omnidirectional lenses. A polarization image sensor equipped with an omnidirectional lens is mounted on a rotational stage and illuminated with different incident angles of polarization.

The full data-collection process iterates over all combinations of incident angles of polarization and camera headings, with the former ranging from $0^\circ$ to $175^\circ$ and the latter from $0^\circ$ to $355^\circ$, both at a $5^\circ$ interval. For each angular setting, a total of 20 frames with integration times of 5 ms and 10 ms are collected and averaged to reduce temporal noise. The two integration times are necessary to record the high dynamic range of data produced by the four individual polarization-sensitive pixels when the linear polarization filter is rotated. We calculate the mean and standard deviation over the patch of pixels illuminated on the image sensor by our optical setup. We also examined the spectral dependence of the calibration results by collecting data under two different illumination wavelengths (660 nm and 530 nm). All measurements are from a single camera device.
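The exact merging rule for the two integration times is not critical to the calibration itself; one plausible sketch, in which the frame stacks are normalized to $[0, 1]$ and the saturation threshold is an illustrative assumption, is shown below.

```python
import numpy as np

def merge_exposures(frames_5ms, frames_10ms, sat_level=0.95):
    """Average each 20-frame stack to suppress temporal noise, then
    prefer the longer exposure wherever it is not saturated.
    The selection rule and sat_level are illustrative assumptions."""
    avg_short = np.mean(frames_5ms, axis=0)
    avg_long = np.mean(frames_10ms, axis=0)
    # rescale the 5 ms average onto the 10 ms radiometric scale
    return np.where(avg_long < sat_level, avg_long, 2.0 * avg_short)
```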

Figure 3(a) shows the photoresponses from the four individual pixels within a super-pixel as a function of the incident angle of polarization, for three different camera headings. The leftmost plot is collected for a camera heading of 0$^\circ$; the middle and right plots are collected for camera headings of 60$^\circ$ and 120$^\circ$, respectively. We observe that the photoresponses of the four pixels within each super-pixel follow Malus’ law and are offset by 45$^\circ$ due to the individual filter arrangement. When we compare the photoresponses of the $I_0$ pixel between the three camera headings, we observe that the maximum is shifted by 60$^\circ$ due to the radial offset introduced by the omnidirectional lens. These radial offsets can be observed in the reconstructed AoPs shown in Fig. 3(b).

Fig. 3. (a) For three different camera heading angles (0$^\circ$, 60$^\circ$ and 120$^\circ$), the photoresponses from the four individual pixels within a super-pixel are plotted as a function of the incident angle of polarization. The photoresponses from the four pixels follow Malus’ law. Orange dashed lines mark maximal $I_0$ values. (b) The reconstructed AoP from the four super-pixel readings using Eqs. (1) and (2) is plotted for the three different camera headings. Since the camera headings are offset by 60$^\circ$, both the individual pixels’ photoresponses and the reconstructed AoPs have an offset equal to the camera heading angle.

In Fig. 4(a), we plot the reconstructed AoP for a fixed incident angle of polarization and different camera heading angles. This image is generated by combining multiple AoP images with different headings into a single image. Since only a small elliptical region is illuminated within the field of view of a single image, the combined image depicts about 20 distinct elliptical regions collected at different time points. Due to the radial offset observed in the omnidirectional lens, the reconstructed AoP has different values (i.e., different false colors) at different locations in the image plane. The calibration process successfully recovers a consistent color for all camera heading angles, and the result is visualized in Fig. 4(b). Moreover, in Fig. 4(c) we depict the mean and standard deviation of the reconstructed AoP for different incident AoPs, calculated over all values of the camera heading angle. Since AoP values are cyclic, we use the following equations to compute circular means and standard deviations:

$$\mu = \frac{1}{2}\mathrm{Arg}(\frac{1}{n}\sum_{j=1}^n\exp(2i\theta_j)),$$
$$\sigma = \frac{1}{2}\sqrt{-2\ln(|\frac{1}{n}\sum_{j=1}^n\exp(2i\theta_j)|)}.$$

Fig. 4. (a) When the incident AoP on the image sensor is kept constant, the reconstructed AoP values change with the camera heading angle, which is visualized as elliptical blobs with different false colors. (b) After calibration, all blobs display the same color, indicating that the calibrated AoP has the same value for all camera headings. (c) The mean (solid line) and first standard deviation (shaded area around each curve) of the reconstructed raw and calibrated AoP over all camera heading angles.

Here $\theta _j$ denotes the reconstructed AoP for the $j$-th camera heading and $\mathrm {Arg}(c)$ is the argument of the complex number $c$.
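In code, Eqs. (13) and (14) amount to averaging unit phasors at twice the AoP (the factor of 2 maps the $\pi$-periodic AoP onto the full circle) and halving the resulting angle; a minimal NumPy version:

```python
import numpy as np

def circular_aop_stats(theta):
    """Circular mean and standard deviation (Eqs. (13) and (14)) of
    AoP samples theta, given in radians."""
    z = np.mean(np.exp(2j * np.asarray(theta)))   # mean unit phasor
    mu = 0.5 * np.angle(z)                        # Arg(.) in Eq. (13)
    sigma = 0.5 * np.sqrt(-2.0 * np.log(np.abs(z)))
    return mu, sigma
```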

In Fig. 4(c), we can observe high variance in the raw reconstructed AoP. The standard deviation ($\sigma$) of the raw reconstructed AoP with a $5^\circ$ sampling interval at 530 nm illumination is $84.3^\circ$. After calibration, the standard deviation decreases to $3.7^\circ$. Furthermore, with an angular sampling interval of $10^\circ$, the standard deviations of the AoP before and after calibration are 76.2$^\circ$ and 4.1$^\circ$, respectively. When the incident illumination is changed to 660 nm and the sampling interval is set to $5^\circ$, the standard deviations of the AoP before and after calibration are $84.4^\circ$ and $3.6^\circ$, respectively. In summary, these experiments validate that our calibration process produces accurate AoP images that are agnostic to camera heading, incident angle of polarization and illumination wavelength.

The standard deviation of the calibrated AoP is reduced by over an order of magnitude compared to the raw AoP. The residual stochastic variation in the calibrated AoP is due to both spatial and temporal noise. Due to variations in the readout electronics, especially the in-pixel source follower, and variations in the pixelated polarization filters, the spatial non-uniformity of the image sensor is around 1%. Furthermore, the output of each pixel has temporal noise due to photon shot noise and readout noise, both of which contribute to noise in the reconstructed angle of polarization. In [24], we show that the noise in the angle of polarization depends on both the intensity and the degree of linear polarization of the imaged target. Combining the spatial and temporal noise accounts for most of the stochastic variation in the calibrated angle of polarization. Finally, the total intensity (i.e., first Stokes parameter) and degree of linear polarization are the same before and after calibration, which was corroborated by the experimental data.

4. Calibration validation on real-life images

To further validate our method, we collected images of real-world objects and scenes. First, we placed three black cups around the camera. Figure 5 shows the intensity image, the raw AoP visualization and the calibrated version. In Fig. 5(b), the top-left cup displays a counterclockwise purple-to-yellow false-color pattern, while the bottom cup shows the opposite. This is due to the radial offset inherent to omnidirectional lenses. After calibration, however, all three cups show the same counterclockwise red-to-yellow false-color pattern. This suggests that the calibration process may benefit downstream tasks such as surface reconstruction.

Fig. 5. (a) We use an omnidirectional polarization camera to take a photo of three black cups placed around the field of view. The whole setup is illuminated by a single unpolarized light source. (b) The raw AoP visualization. Notice the difference in color patterns in the upper half of each cup due to the radial offset introduced by the omnidirectional lens. (c) The calibrated AoP image has color patterns that are similar for all three cups. Pixels with light intensity below 5% are masked out in both AoP visualizations.

Next, we test our calibration method on omnidirectional sky images taken in the Florida Keys, USA, on December 10th, 2020. The intensity images and the corresponding raw AoP visualizations are shown in Fig. 6(b) and Fig. 6(c), respectively, where each row represents a different time of day. We then leverage the state-of-the-art parametric model for sky polarization estimation, libRadtran, to calculate the polarization pattern for each of the cases (Fig. 6(a)). There is an obvious gap between the parametric model’s prediction and the recorded raw angle of polarization image. After we apply the calibration process to all three cases, the resulting AoP patterns are visualized in Fig. 6(d). The calibrated AoP patterns visually match the modeled AoP provided by libRadtran. These visual results indicate that our calibration routine corrects the radial offset in reconstructed AoP images.

Fig. 6. We test our calibration algorithm on real-world scenes by selecting three different times of day (from top to bottom: early morning, noon and late afternoon) and comparing parametric model outputs with actual readings from the omnidirectional polarization camera. (a) AoP patterns predicted by libRadtran simulation for the known sun position. (b) The intensity images in grayscale. (c) The raw AoP images. (d) The calibrated AoP images. Notice the similarity between (a) and (d). Pixels with light intensity above 95% or below 5% are masked out in all AoP visualizations.

For quantitative evaluation, we calculate a density histogram for each of the AoP images in Fig. 6, excluding masked-out pixels and fixing the number of bins to 25. We denote the resulting histograms for the predicted pattern, the raw AoP image and the calibrated AoP image $H_{\mathrm {pred}}$, $H_{\mathrm {raw}}$ and $H_{\mathrm {cal}}$, respectively. We then measure the disparity between $H_{\mathrm {pred}}$ and the other two histograms with two metrics: intersection over union (IoU, higher is better) and histogram difference (HD, lower is better). Formally:

$$IoU(H, H') = \frac{\frac{1}{n}\sum_{i=1}^n\min(H_i, H'_i)}{\frac{1}{n}\sum_{i=1}^n\max(H_i, H'_i)},$$
$$HD(H, H') = \frac{1}{n}\sum_{i=1}^n|H_i - H'_i|.$$
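Both metrics are straightforward to evaluate once the density histograms are built. A short sketch follows, where the 25-bin histogram range of $(-90^\circ, 90^\circ]$ is our assumption, consistent with the AoP range implied by Eq. (2):

```python
import numpy as np

def aop_histogram(aop_deg, mask, n_bins=25):
    """Density histogram of unmasked AoP pixels (in degrees)."""
    hist, _ = np.histogram(aop_deg[mask], bins=n_bins,
                           range=(-90, 90), density=True)
    return hist

def histogram_metrics(h_pred, h_other):
    """IoU (Eq. (15), higher is better) and histogram difference
    (Eq. (16), lower is better); the 1/n factors in Eq. (15) cancel."""
    iou = np.minimum(h_pred, h_other).sum() / np.maximum(h_pred, h_other).sum()
    hd = np.mean(np.abs(h_pred - h_other))
    return iou, hd
```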

The IoUs between $H_{\mathrm {pred}}$ and $H_{\mathrm {raw}}$ for the three samples are 51.75%, 51.92% and 43.92%, respectively, while the IoUs between $H_{\mathrm {pred}}$ and $H_{\mathrm {cal}}$ are 74.56%, 62.39% and 88.84%. The HDs between $H_{\mathrm {pred}}$ and $H_{\mathrm {raw}}$ for the three samples are 0.638, 0.635 and 0.782, respectively, while the HDs between $H_{\mathrm {pred}}$ and $H_{\mathrm {cal}}$ are 0.293, 0.465 and 0.119. Both metrics show that the calibration process reduces the disparity between real-world AoP observations and the patterns predicted by a well-established parametric model.

5. Conclusion

In this paper, we identify the problem of inconsistent angle of polarization readings in omnidirectional polarization cameras, caused by a shift of the reference coordinate system. We study its cause and propose a simple yet effective calibration process to counter this effect. We demonstrate its effectiveness through experiments on both lab-collected and real-world data.

The calibration process enables us to further extend the application of omnidirectional polarization cameras to a variety of tasks, especially those that are sensitive to AoP values or require accurate quantitative analysis across the whole field of view, such as underwater geolocalization and sky-based compassing.

Funding

Air Force Office of Scientific Research (FA9550-18-1-027); Office of Naval Research (N00014-19-1-2400, N00014-21-1-2177).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. V. Gruev, R. Perkins, and T. York, “CCD polarization imaging sensor with aluminum nanowire optical filters,” Opt. Express 18(18), 19087–19094 (2010). [CrossRef]

2. W.-L. Hsu, G. Myhre, K. Balakrishnan, N. Brock, M. Ibn-Elhaj, and S. Pau, “Full-Stokes imaging polarimeter using an array of elliptical polarizer,” Opt. Express 22(3), 3063–3074 (2014). [CrossRef]

3. Y. Maruyama, T. Terada, T. Yamazaki, Y. Uesaka, M. Nakamura, Y. Matoba, K. Komori, Y. Ohba, S. Arakawa, Y. Hirasawa, Y. Kondo, J. Murayama, K. Akiyama, Y. Oike, S. Sato, and T. Ezaki, “3.2-MP back-illuminated polarization image sensor with four-directional air-gap wire grid and 2.5-micron pixels,” IEEE Trans. Electron Devices 65(6), 2544–2551 (2018). [CrossRef]

4. J. J. Wang, F. Walters, X. Liu, P. Sciortino, and X. Deng, “High-performance, large area, deep ultraviolet to infrared polarizers based on 40 nm line/78 nm space nanowire grids,” Appl. Phys. Lett. 90(6), 061104 (2007). [CrossRef]  

5. Q. Chen, X. Hu, L. Wen, Y. Yu, and D. R. Cumming, “Nanophotonic image sensors,” Small 12(36), 4922–4935 (2016). [CrossRef]  

6. E. R. Fossum and D. B. Hondongwa, “A review of the pinned photodiode for CCD and CMOS image sensors,” IEEE J. Electron Devices Soc. 2(3), 33–43 (2014).

7. T. Novikova and J. C. Ramella-Roman, “Is a complete Mueller matrix necessary in biomedical imaging?” Opt. Lett. 47(21), 5549–5552 (2022). [CrossRef]

8. G. M. Calabrese, P. C. Brady, V. Gruev, and M. E. Cummings, “Polarization signaling in swordtails alters female mate preference,” Proc. Natl. Acad. Sci. 111(37), 13397–13402 (2014). [CrossRef]  

9. S. B. Powell, R. Garnett, J. Marshall, C. Rizk, and V. Gruev, “Bioinspired polarization vision enables underwater geolocalization,” Sci. Adv. 4(4), eaao6841 (2018). [CrossRef]  

10. L. M. Eshelman, A. M. Smith, K. M. Smith, and D. B. Chenault, “Unique navigation solution utilizing sky polarization signatures,” in Polarization: Measurement, Analysis, and Remote Sensing XV, vol. 12112 (SPIE, 2022), p. 1211203.

11. M. S. Kim, M. S. Kim, G. J. Lee, S.-H. Sunwoo, S. Chang, Y. M. Song, and D.-H. Kim, “Bio-inspired artificial vision and neuromorphic image processing devices,” Adv. Mater. Technol. 7(2), 2100144 (2022). [CrossRef]  

12. J. Qi, M. Ye, M. Singh, N. T. Clancy, and D. S. Elson, “Narrow band 3×3 Mueller polarimetric endoscopy,” Biomed. Opt. Express 4(11), 2433–2449 (2013). [CrossRef]

13. C. He, H. He, J. Chang, B. Chen, H. Ma, and M. J. Booth, “Polarisation optics for biomedical and clinical applications: a review,” Light: Sci. Appl. 10(1), 194 (2021). [CrossRef]  

14. M. W. Kudenov, A. Altaqui, and C. Williams, “Practical spectral photography ii: snapshot spectral imaging using linear retarders and microgrid polarization cameras,” Opt. Express 30(8), 12337–12352 (2022). [CrossRef]  

15. S. Mihoubi, P.-J. Lapray, and L. Bigué, “Survey of demosaicking methods for polarization filter array images,” Sensors 18(11), 3688 (2018). [CrossRef]  

16. E. Gilboa, J. P. Cunningham, A. Nehorai, and V. Gruev, “Image interpolation and denoising for division of focal plane sensors using gaussian processes,” Opt. Express 22(12), 15277–15291 (2014). [CrossRef]  

17. C. Mei and P. Rives, “Single view point omnidirectional camera calibration from planar grids,” in Proceedings 2007 IEEE International Conference on Robotics and Automation, (IEEE, 2007), pp. 3945–3950.

18. N. A. Hagen, S. Shibata, and Y. Otani, “Calibration and performance assessment of microgrid polarization cameras,” Opt. Eng. 58(8), 082408 (2019). [CrossRef]  

19. J. S. Baba, J.-R. Chung, A. H. DeLaughter, B. D. Cameron, and G. L. Cote, “Development and calibration of an automated mueller matrix polarization imaging system,” J. Biomed. Opt. 7(3), 341–349 (2002). [CrossRef]  

20. C. Fan, X. Hu, J. Lian, L. Zhang, and X. He, “Design and calibration of a novel camera-based bio-inspired polarization navigation sensor,” IEEE Sens. J. 16(10), 3640–3648 (2016). [CrossRef]  

21. E. P. Wibowo, A. S. Talita, M. Iqbal, A. B. Mutiara, C.-K. Lu, and F. Meriaudeau, “An improved calibration technique for polarization images,” IEEE Access 7, 28651–28662 (2019). [CrossRef]  

22. S. B. Powell and V. Gruev, “Calibration methods for division-of-focal-plane polarimeters,” Opt. Express 21(18), 21040–21055 (2013). [CrossRef]  

23. R. A. Chipman, W.-S. T. Lam, and G. Young, Polarized Light and Optical Systems (CRC Press, 2018).

24. Y. Chen, Z. Zhu, Z. Liang, L. E. Iannucci, S. P. Lake, and V. Gruev, “Analysis of signal-to-noise ratio of angle of polarization and degree of polarization,” OSA Continuum 4(5), 1461–1472 (2021). [CrossRef]  

