## Abstract

This paper describes a generalized framework for single-exposure acquisition of multi-dimensional scene information using an integral imaging system based on compressive sensing. In the proposed system, a multi-dimensional scene containing multiple types of information, such as 3D coordinates, spectral data, and polarimetric data, is captured by integral imaging optics. The image sensor uses randomly arranged pixel-wise filtering elements. The multi-dimensional original object is reconstructed using an algorithm with a sparsity constraint. The proposed system is demonstrated with simulations and with feasible optical experiments based on synthetic aperture integral imaging, using multi-dimensional objects that include 3D coordinates, spectral, and polarimetric information.

© 2013 Optical Society of America

## 1. Introduction

Multi-dimensional imaging is challenging because conventional imaging optics project an object onto a two-dimensional detector array. A cooperative design of optics and signal processing, called computational imaging, has been used for multi-dimensional imaging. For example, stereo cameras and integral imaging have been applied to 3D imaging, including depth acquisition [1–8]. These systems use multiple cameras or lenses to observe the object from different perspectives and computationally estimate the depth from the parallaxes between the captured elemental images.

Such imaging systems have been extended to the acquisition of multi-dimensional objects including spectral and polarization information [9–12]. In those systems, the lateral pixel count of the multi-dimensional object is the same as that of the image sensor, as shown in Fig. 1(a). This means that the number of elements in the original multi-dimensional object is larger than the number of pixels on the image sensor; in other words, typical single-exposure multi-dimensional imaging systems are ill-posed. The computational process is based on *rearrangement* of the captured pixels, and the full-size object cannot be reconstructed in this case. Thus, conventional single-exposure acquisition of multi-dimensional information compromises the space-bandwidth product of the image sensor.

A framework called compressive sensing (CS) can solve such ill-posed problems with randomized sampling and reconstruction algorithms employing a sparsity constraint, as shown in Fig. 1(b) [13–15]. Some generalized multi-dimensional imaging systems based on CS have been proposed [16–19]. In this paper, we propose a new modality of multi-dimensional integral imaging that alleviates some drawbacks of the previous works.

## 2. Proposed multi-dimensional imaging system

The proposed multi-dimensional imaging system consists of integral imaging optics and an image sensor with randomly arranged pixel-wise filtering elements, as shown in Fig. 2, where *x* and *y* denote the lateral axes and *z* denotes the longitudinal axis, respectively. It was inspired by compressive Fresnel holography [19], where pixel-wise multimodal filtering elements on the image sensor are used for randomized sparse sampling in holographic imaging [20]. The holographic imaging process is replaced by integral imaging to avoid several stringent requirements of holography, such as active (coherent) illumination, speckle noise degradation, the difficulty of capturing an outdoor scene with coherent light, and the need for multiple coherent sources for multispectral illumination. Furthermore, speckle noise and the random phase of general objects are difficult to treat with CS in single-exposure imaging because they are not compressible [21]. An advantage of the proposed system over previous multi-dimensional imaging systems is the potential for a compact optical sensor. The previous systems require additional optical elements alongside the imaging optics [16–18]; in contrast, the filtering elements of the proposed system can be integrated onto the image sensor. In the proposed system, the integral imaging optics, composed of multiple lenses or cameras, projects the multi-dimensional object onto the image sensor with randomly arranged pixel-wise filtering elements. A single lens or camera is called elemental optics in this paper. The object is reconstructed with a CS algorithm employing a sparsity constraint.

First, the imaging process of a conventional single aperture imaging system without any coding or filtering optics is described. In this paper, lowercase roman letters denote continuous variables, uppercase roman letters denote integer variables, and calligraphic capital letters denote functions. The *y*-axis is omitted, and ideal pinhole optics and ideal detector sampling are assumed for simplicity. The imaging process is written as Eq. (1), where *u* is the lateral axis on the image sensor and ℱ_{C} is the object in the *C*-th channel. The impulse response in this system depends on neither the depth *z* nor the channel *C*, so it is difficult to distinguish between different depths and channels in this case.

In the proposed scheme, integral imaging and pixel-wise filters realize a depth-variant and a channel-variant impulse response, respectively: the parallax between the captured elemental images depends on the depth, and the transmittance of the pixel-wise filters depends on the channel. The imaging process in Eq. (1) can be modified for the proposed integral imaging system as Eq. (2), where 𝒢′_{K} is the image captured by the *K*-th elemental optics, 𝒬_{C,K} is the response of the pixel-wise filters for the *C*-th channel in the *K*-th elemental optics, 𝒟 is a low-pass filtering or downsampling function caused by the fill factor of the detectors, and 𝒯_{K} is a translation by the parallax in the *K*-th elemental optics, respectively. The equation expresses the following process. First, the multi-dimensional object ℱ_{C} is projected with the parallax translation 𝒯_{K} by each of the elemental optics. Second, the projected signals are convolved with the detector response 𝒟 of the image sensor. Third, the convolved signals are multiplied by the filter responses 𝒬_{C,K}. Finally, the resultant signals are integrated on the image sensor as 𝒢′_{K}. The model can be extended to higher-dimensional objects with small modifications, and the proposed scheme can be adapted to various optical information acquisition tasks. Examples of the applications and their implementations are summarized in Table 1.

The imaging process of the proposed system is linear, as shown in Eq. (2). The process in the *K*-th elemental optics can also be rewritten with matrix operators. Here, bold lowercase letters denote column vectors and bold capital letters denote matrices. This process is written as

**g**′_{K} = **H**′_{K}**f**,  (3)

where **g**′_{K} ∈ ℝ^{M_X×1} is the vector of the data captured by the *K*-th elemental optics, **H**′_{K} ∈ ℝ^{M_X×(N_X×N_Z×N_C)} is the system matrix of the *K*-th elemental optics, and **f** ∈ ℝ^{(N_X×N_Z×N_C)×1} is the vector of the object data, respectively. *M_X* is the number of detectors in a single elemental optics, *N_X* and *N_Z* are the numbers of elements of the object along the lateral and longitudinal axes, and *N_C* is the number of channels, as shown in Fig. 2. ℝ^{a×b} denotes an *a* × *b* matrix with real-valued elements. Based on Eq. (2), the matrix **H**′_{K} can be decomposed as

**H**′_{K} = **Q**_{K}**DT**_{K}.  (4)

Here,

**Q**_{K} ∈ ℝ^{M_X×(M_X×N_C)} is a matrix indicating the filter response in the *K*-th elemental optics. The matrix **Q**_{K} can be written as

**Q**_{K} = [**Q**′_{1,K} **Q**′_{2,K} ⋯ **Q**′_{N_C,K}],  (5)

where **Q**′_{C,K} ∈ ℝ^{M_X×M_X} is a diagonal matrix indicating the filter response of the *C*-th channel in the *K*-th elemental optics. **D** ∈ ℝ^{(M_X×N_C)×(N_X×N_C)} is a downsampling matrix, whose fill factor is assumed to be 100% for simplicity, expressed as Eq. (6), where **1** ∈ ℝ^{S×1} is a vector whose elements are all 1, **0** ∈ ℝ^{S×1} is a vector whose elements are all 0, and the superscript *t* denotes the transpose of a matrix. Here *S* is the downsampling factor. **T**_{K} ∈ ℝ^{(N_X×N_C)×(N_X×N_Z×N_C)} is a matrix indicating the projection with the parallax translation in the *K*-th elemental optics. This matrix can be written as Eq. (8), where **T**″_{Z,K} ∈ ℝ^{N_X×N_X} is a shifted identity matrix indicating the parallax translation at the *Z*-th depth in the *K*-th elemental optics and **O** ∈ ℝ^{N_X×(N_X×N_Z)} is a zero matrix. Finally, the imaging process of the entire optics is obtained by stacking Eq. (3) over all of the elemental optics as Eq. (9), which is written compactly as

**g** = **Hf**,  (10)

where **g** ∈ ℝ^{(M_X×L_X)×1} is the vector of the data captured by the entire optics and **H** ∈ ℝ^{(M_X×L_X)×(N_X×N_Z×N_C)} is the system matrix of the entire optics, respectively. Here *L_X* is the number of elemental optics, as shown in Fig. 2. As mentioned above, (M_X × L_X) ≪ (N_X × N_Z × N_C) is assumed in this paper, which means that the system is ill-posed. Equation (9) indicates that a column of the entire system matrix **H** has multiple nonzero elements due to the multiple elemental imaging processes with the filtering in Eq. (5) and the downsampling in Eq. (6). Randomness can be introduced by modifying the filtering in Eq. (5) and the translation in Eq. (8). As a result, the system matrix **H** approximately satisfies the conditions dictated by CS theory [22].
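The matrix construction above can be sketched numerically. The following NumPy toy builds per-elemental matrices **H**′_{K} = **Q**_{K}**DT**_{K} with random complementary pixel-wise filters and stacks them into the under-determined system matrix **H**. The sizes, the circular shifts via `np.roll`, and the linear parallax model `shift = k * (z + 1)` are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (y-axis omitted, as in the text): object N_X x N_Z x N_C,
# M_X detectors per elemental optics, L_X elemental optics.
N_X, N_Z, N_C = 12, 2, 3
S = 2                      # downsampling factor
M_X = N_X // S             # detectors per elemental optics
L_X = 4                    # number of elemental optics

# D: 100% fill-factor downsampling, block-diagonal over the channels.
D_tilde = np.kron(np.eye(M_X), np.ones((1, S)))   # M_X x N_X
D = np.kron(np.eye(N_C), D_tilde)                 # (M_X*N_C) x (N_X*N_C)

def T_matrix(k):
    """Parallax translation: a shifted identity per depth, block-diagonal over channels."""
    blocks = []
    for z in range(N_Z):
        shift = k * (z + 1)                       # hypothetical parallax model
        blocks.append(np.roll(np.eye(N_X), shift, axis=1))
    T_bar = np.hstack(blocks)                     # N_X x (N_X*N_Z)
    return np.kron(np.eye(N_C), T_bar)            # (N_X*N_C) x (N_X*N_Z*N_C)

def Q_matrix(k):
    """Random pixel-wise filters: one channel selected per detector (complementary)."""
    choice = rng.integers(0, N_C, size=M_X)
    return np.hstack([np.diag((choice == c).astype(float)) for c in range(N_C)])

# Per-elemental system matrices H'_K = Q_K D T_K, stacked into the entire H.
H = np.vstack([Q_matrix(k) @ D @ T_matrix(k) for k in range(L_X)])
f = rng.random(N_X * N_Z * N_C)
g = H @ f

print(H.shape)   # (M_X*L_X, N_X*N_Z*N_C) = (24, 72): far fewer rows than columns
```

The shape check makes the ill-posedness concrete: 24 measurements must recover 72 unknowns, which is only possible with the sparsity prior.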

To solve the inversion of Eq. (10), a CS algorithm called two-step iterative shrinkage/thresholding (TwIST) [23] is used for the multi-dimensional object reconstruction. TwIST solves the following problem:

**f̂** = argmin_{**f**} ½‖**g** − **Hf**‖²_{ℓ2} + *τ*ℛ(**f**),  (11)

where ‖·‖_{ℓ2} is the ℓ2 norm, *τ* is a regularization parameter, and ℛ is a regularizer. In this paper, the two-dimensional total variation [24] of each *Z*-th plane in each *C*-th channel is chosen as the regularizer:

ℛ(**f**) = Σ_{Z,C} Σ_{X,Y} [(*f*(*X*+1, *Y*, *Z*, *C*) − *f*(*X*, *Y*, *Z*, *C*))² + (*f*(*X*, *Y*+1, *Z*, *C*) − *f*(*X*, *Y*, *Z*, *C*))²]^{1/2},  (12)

where the elements of **f** are rearranged in lexicographic order to express the multiple dimensions and *f*(*X*, *Y*, *Z*, *C*) denotes the (*X*, *Y*, *Z*, *C*)-th rearranged element of the vector **f**.

## 3. Simulations of sparse samplings

In the proposed system, signals on each channel are sparsely sampled with an integral imaging sensor, as shown in Fig. 2 and Eq. (2). In this section, regular and irregular sparse samplings are compared by simulation for both conventional single aperture imaging and integral imaging, which has multiple apertures. Figures 3(a)–3(c) show an object image, which is the Shepp–Logan phantom, and the regular and irregular sparse sampling patterns, which correspond to the filter response in Eq. (5), respectively. The object and the two patterns are 150 × 150 pixels each. That is, the size of the object is 150 × 150 × 1 × 1 (= N_X × N_Y × N_Z × N_C) pixels and the size of the captured image is 150 × 150 (= (M_X × L_X) × (M_Y × L_Y)) pixels, respectively. In the regular sparse sampling pattern, the pattern is divided into blocks of 3 × 3 pixels. The same combination of three pixels in each block is set to 1 (white) and the other pixels are 0 (black), as shown in Fig. 3(b). In this case, 33.3% of the whole image is 1; this ratio is called the sampling ratio in this paper. The irregular sparse sampling pattern is a randomized binary distribution, where 33.3% of the pixels are 1 and the others are 0, as shown in Fig. 3(c). That is, in both sampling patterns, 33.3% of the diagonal elements of the filtering sub-matrix **Q**′_{C,K} in Eq. (5) are 1 and the others are 0.
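The two sampling patterns can be generated as follows (a minimal NumPy sketch; the particular three-pixel combination in the regular block is an arbitrary example, since the text does not specify which combination was used):

```python
import numpy as np

rng = np.random.default_rng(0)
H_px = W_px = 150

# Regular pattern: the same 3 of 9 pixels chosen in every 3x3 block.
block = np.zeros((3, 3))
block.flat[[0, 4, 8]] = 1                         # example combination of three pixels
regular = np.tile(block, (H_px // 3, W_px // 3))

# Irregular pattern: randomized binary mask at the same sampling ratio.
irregular = (rng.random((H_px, W_px)) < 1 / 3).astype(float)

print(regular.mean())                 # exactly 1/3 sampling ratio
print(round(irregular.mean(), 2))     # approximately 1/3
```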

The object was projected onto the image sensor plane by either single aperture imaging optics or integral imaging optics. In the single aperture case, there was one elemental image, that is, 1 × 1 (= L_X × L_Y), with 150 × 150 (= M_X × M_Y) pixels. In the integral imaging case, there were 5 × 5 (= L_X × L_Y) elemental images with 30 × 30 (= M_X × M_Y) pixels each. The signals projected by the two optics were multiplied by the sampling patterns in Figs. 3(b) and 3(c); in the integral imaging case, the sampling patterns were divided into 5 × 5 regions for the multiplication. The resultant signals, which are the captured images, for the two sampling patterns in the two cases are shown in Figs. 4(a)–4(d), respectively. The signal-to-noise ratio (SNR) of the measurements was 40 dB. The reconstruction results with the TwIST algorithm are shown in Fig. 5. Comparing Figs. 5(a)–5(d), it is evident that integral imaging with irregular sparse sampling, whose reconstruction is shown in Fig. 5(d), has higher reconstruction fidelity than the other cases, which lose many pixels and/or show degradations in their reconstructions. This demonstrates the robustness of the proposed imaging system to under-sampling: the randomness introduced into the sparse sampling, that is, the filtering in Eq. (5), improves the reconstruction performance of integral imaging. The peak SNR (PSNR) was measured for comparison and can be calculated as

PSNR = 10 log_{10}(*MAX*² / *MSE*),

where *MAX* is the maximum pixel intensity of the original data and *MSE* is the mean square error between the original data and the reconstructed data. The PSNRs between the original phantom in Fig. 3(a) and the reconstructions in Figs. 5(a)–5(d) were 20.6 dB, 19.4 dB, 21.7 dB, and 30.4 dB, respectively. These PSNRs also indicate the advantage of integral imaging with irregular sparse sampling.
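The PSNR computation used for the comparison is straightforward:

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((original - reconstructed) ** 2)
    return 10 * np.log10(original.max() ** 2 / mse)

orig = np.ones((8, 8))
rec = orig - 0.1            # every pixel off by 0.1 -> MSE = 0.01
print(psnr(orig, rec))      # approximately 20.0 dB
```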

The relationships between the sampling ratios and reconstruction PSNRs of single aperture imaging and integral imaging with regular and irregular sparse samplings are plotted in Fig. 6. Integral imaging with irregular sparse sampling has higher PSNRs than the others when the projected signals by the optics are sampled with low sampling ratios. The lower bound of the sampling ratio in integral imaging with irregular sparse sampling may be around 30 %. As shown in the simulations of this section, the proposed integral imaging can reduce the number of sampling points or detectors on the image sensor. It is also useful in applications of multi-channel data acquisition because each channel may be under-sampled in this case as shown in Fig. 1.

## 4. Experiments

In this section, the concept of the proposed system is verified by optical experiments based on synthetic aperture integral imaging; the approach can be directly applied to various types of integral imaging. In the experiments, a color camera was used to obtain elemental images. The focal length of the camera lens is 50 mm, the sensor size is 36 mm × 24 mm, and the pixel count is 1248 × 832, respectively. The camera was scanned along the horizontal and vertical directions by a two-axis translation stage. The number of elemental images captured in the scan was 6 × 6 (= L_X × L_Y).

#### 4.1. Spectral integral imaging

In the first experiment, compressive spectral integral imaging with the proposed system is performed. A sign and a car object were located at 270 mm and 330 mm from the sensor, respectively. The objects' elemental images were captured by the translated camera with a translation pitch of 5 mm × 5 mm. The entire captured elemental data is shown in Fig. 7, and a sample elemental image is shown in Fig. 8. The captured elemental images were reduced in size to 175 × 79 (= M_X × M_Y) pixels. The reduced elemental images were multiplied by the filter responses 𝒬_{C,K} in Eq. (2) to emulate randomly arranged pixel-wise color filters. Three single band-pass filters (red, green, and blue) were assumed, and one of the three filters was randomly selected at each detector. In this case, 33.3% of the diagonal elements of each filtering sub-matrix **Q**′_{C,K} in Eq. (5) are 1 and the other elements are 0. The diagonal elements of the three sub-matrices **Q**′_{C,K} in the *K*-th elemental optics are complementary across the channels. The sampled emulated elemental image corresponding to Fig. 8 is shown in Fig. 9.
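The emulation of the randomly arranged pixel-wise color filters can be sketched as below (NumPy; the random stand-in image and the seed are assumptions). Selecting one of the three filters per detector makes the three channel masks complementary, with each channel sampled at roughly 33.3% of the detectors:

```python
import numpy as np

rng = np.random.default_rng(0)
M_Y, M_X = 79, 175           # reduced elemental-image size from the experiment

# Randomly select one of three band-pass filters (R, G, B) at each detector.
choice = rng.integers(0, 3, size=(M_Y, M_X))
masks = np.stack([(choice == c).astype(float) for c in range(3)])  # 3 x M_Y x M_X

# The three masks are complementary: exactly one filter per detector.
assert np.array_equal(masks.sum(axis=0), np.ones((M_Y, M_X)))

# Emulate the sampled elemental image from an RGB elemental image.
rgb = rng.random((3, M_Y, M_X))   # stand-in for a captured elemental image
sampled = masks * rgb             # pixel-wise filtering: one channel survives per pixel
print(sampled.shape)
```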

The reconstructions with a back-projection algorithm [25] using bicubic interpolation and with the TwIST algorithm are shown in Figs. 10(a) and 10(b), respectively. No sparsity constraint was used in the back-projection reconstruction. The size of the reconstructed object was 1044 × 480 × 2 × 3 (= N_X × N_Y × N_Z × N_C) pixels; thus, the compression ratio was 6.0. The reconstruction planes were set at 270 mm and 330 mm from the sensor, respectively. TwIST removed the defocused object present in the back-projection reconstruction and enhanced the contrast and lateral resolution of the reconstructed object.
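The back-projection reconstruction [25] shifts each elemental image by its depth-dependent parallax and averages the results. The 1D sketch below uses circular shifts and a linear parallax model as simplifying assumptions (the experiment additionally uses bicubic interpolation for sub-pixel shifts); it shows how the correct depth refocuses a point source while a wrong depth leaves it blurred:

```python
import numpy as np

def back_project(elementals, pitch_px):
    """Shift each elemental image by its depth-dependent parallax and average.

    elementals: dict mapping camera index k -> 1D elemental image (y omitted);
    pitch_px: per-depth parallax, in pixels, between neighboring cameras.
    """
    acc = None
    for k, img in elementals.items():
        shifted = np.roll(img, -k * pitch_px)   # undo the parallax of camera k
        acc = shifted if acc is None else acc + shifted
    return acc / len(elementals)

# A point source at one depth: camera k sees it shifted by k * 2 pixels.
truth = np.zeros(32)
truth[10] = 1.0
elementals = {k: np.roll(truth, k * 2) for k in range(6)}

rec_in_focus = back_project(elementals, pitch_px=2)   # shifts cancel: sharp peak
rec_defocus = back_project(elementals, pitch_px=1)    # wrong depth: spread-out energy
print(rec_in_focus.max(), rec_defocus.max())          # 1.0 versus 1/6
```

This depth-selective refocusing is exactly why a reconstruction plane at 270 mm sharpens the sign while blurring the car, and vice versa at 330 mm.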

#### 4.2. Spectral and polarimetric integral imaging

In the second experiment, compressive spectral and polarimetric integral imaging is demonstrated. Polarimetric imaging has been used in medical imaging, remote sensing, industrial inspection, etc. [26–28]. The scene includes two plants at 430 mm, a truck at 550 mm, and a sign at 810 mm from the sensor. The scene was captured by the translated camera both without and with a polarizer at a single polarization angle [29], with a translation pitch of 10 mm × 10 mm. The entire captured elemental data of the intensity images and the linearly polarized images are shown in Figs. 11(a) and 11(b), respectively, and sample captured elemental images for both cases are shown in Figs. 12(a) and 12(b). The elemental images were resized to 247 × 151 (= M_X × M_Y) pixels. The resized images were multiplied and integrated with the filter responses 𝒬_{C,K} to emulate randomly arranged pixel-wise color filters (red, green, and blue) and pixel-wise polarizers. One of the three color filters was randomly selected at each detector, and a polarizer was randomly placed, or not, on each of them; a detector with the polarizer captures linearly polarized spectral information, while a detector without it captures intensity spectral information. In this case, 16.7% of the diagonal elements of each filtering sub-matrix **Q**′_{C,K} are 1 and the others are 0. The diagonal elements of the six (three spectral bands times two polarization states) sub-matrices **Q**′_{C,K} in the *K*-th elemental optics are complementary across the channels. The sampled emulated elemental image is shown in Fig. 13.
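The joint spectral and polarimetric filtering extends the same idea to six complementary channels (sketch with an assumed seed); each channel then covers about one sixth of the detectors, matching the 16.7% sampling ratio:

```python
import numpy as np

rng = np.random.default_rng(0)
M_Y, M_X = 151, 247          # resized elemental-image size from the experiment

# Six channels: three colors (R, G, B) x two polarization states
# (with / without the polarizer), one channel selected per detector.
choice = rng.integers(0, 6, size=(M_Y, M_X))
masks = np.stack([(choice == c).astype(float) for c in range(6)])

# Complementary masks: each detector observes exactly one of the six channels,
# so each channel is sampled at roughly 1/6 (16.7%) of the detectors.
assert np.array_equal(masks.sum(axis=0), np.ones((M_Y, M_X)))
print([round(m.mean(), 2) for m in masks])
```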

The reconstructions with the back-projection and TwIST algorithms are shown in Figs. 14(a) and 14(b), respectively. The size of the reconstructed object was 1000 × 610 × 3 × (3 × 2) (= N_X × N_Y × N_Z × N_C) pixels; thus, the compression ratio was 8.2. In each figure, the reconstruction planes are 430 mm, 550 mm, and 810 mm from the sensor. The first and second rows of Figs. 14(a) and 14(b) show the intensity images and the linearly polarized images, respectively. The polarimetric imaging was verified by the reflection on the body of the truck object in the second plane of both reconstructions. The TwIST algorithm suppressed the defocused objects that remain in the back-projection reconstruction, and also enhanced the contrast and lateral resolution of the reconstructed object. In this experiment, the scene has a small occlusion of the sign by the plants, as shown in Figs. 11 and 12. This occlusion does not affect the reconstructions and is negligible in this case; however, the impact may be large when occlusions are not small. It can be alleviated with a method proposed for handling occlusions in compressive Fresnel holography [30].

## 5. Conclusion

In this paper, we have proposed and demonstrated a multi-dimensional integral imaging system with compressive sensing. We have described a generalized framework for single-exposure acquisition of multi-dimensional scene information using compressive integral imaging. The system is capable of handling a multi-dimensional scene containing multiple types of information, such as 3D coordinates, spectral and polarimetric data, dynamic range, and high-speed imaging. In the proposed system, a multi-dimensional object is captured with the integral imaging optics. The signals projected by the imaging optics are filtered by pixel-wise optical elements on the image sensor, and the resultant signals are integrated. The original object was reconstructed with the TwIST algorithm using total variation as the regularizer. Using both simulations and feasible optical experiments based on synthetic aperture integral imaging, we have demonstrated multi-dimensional scene processing, including acquisition and reconstruction of 3D coordinates, spectral, and polarimetric information. The demonstrated concept can be directly applied to integral imaging systems.

The proposed multi-dimensional integral imaging shown in Fig. 2 and Table 1 is realizable with currently available or near-future technologies. For example, pixel-wise color filters and polarizers [31] are commercially produced, and CMOS image sensors use a line-wise shutter called a rolling shutter. These components could be integrated directly into the system, although some randomness must be added to them. Spatial light modulators (SLMs) are also useful for implementing pixel-wise polarizers, neutral density filters, and shutters.

The design and construction of the proposed system should be investigated in future studies. The system models introduced in Section 2 assume an ideal imaging process, and the difference between the ideal models and the real physical phenomena degrades the reconstruction performance. These models should be improved by considering more realistic imaging processes, including defocus, aberration, etc. Furthermore, bounds on the reconstruction performance, and a reasonable system design (e.g. the filter pattern) based on such bounds, should be studied. As mentioned in Section 2, the proposed system was inspired by compressive Fresnel holography [19, 20], for which conditions on the reconstruction performance have recently been investigated [32]. That approach may be applicable to these open issues of the multi-dimensional integral imaging system proposed in this paper.

## Acknowledgment

The authors wish to thank the anonymous reviewers for their comments and suggestions.

## References and links

**1. **M. Okutomi and T. Kanade, “A multiple-baseline stereo,” IEEE Trans. Pattern Anal. Mach. Intell. **15**, 353–363 (1993). [CrossRef]

**2. **G. M. Lippmann, “La photographie integrale,” Comptes-Rendus Academie des Sciences **146**, 446–451 (1908).

**3. **C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. **58**, 71–74 (1968). [CrossRef]

**4. **L. Yang, M. McCormick, and N. Davies, “Discussion of the optics of a new 3-D imaging systems,” Appl. Opt. **27**, 4529–4534 (1988). [CrossRef] [PubMed]

**5. **F. Okano, J. Arai, K. Mitani, and M. Okui, “Real-time integral imaging based on extremely high resolution video system,” Proc. IEEE **94**, 490–501 (2006). [CrossRef]

**6. **M. Cho, M. Daneshpanah, I. Moon, and B. Javidi, “Three-dimensional optical sensing and visualization using integral imaging,” Proc. IEEE **99**, 556 –575 (2011). [CrossRef]

**7. **R. Horisaki, S. Irie, Y. Ogura, and J. Tanida, “Three-dimensional information acquisition using a compound imaging system,” Optical Review **14**, 347–350 (2007). [CrossRef]

**8. **M. DaneshPanah and B. Javidi, “Profilometry and optical slicing by passive three-dimensional imaging,” Opt. Lett. **34**, 1105–1107 (2009). [CrossRef] [PubMed]

**9. **R. Shogenji, Y. Kitamura, K. Yamada, S. Miyatake, and J. Tanida, “Multispectral imaging using compact compound optics,” Opt. Express **12**, 1643–1655 (2004). [CrossRef] [PubMed]

**10. **R. J. Plemmons, S. Prasad, S. Matthews, M. Mirotznik, R. Barnard, B. Gray, V. P. Pauca, T. C. Torgersen, J. van der Gracht, and G. Behrmann, “PERIODIC: Integrated computational array imaging technology,” in “Computational Optical Sensing and Imaging,” (2007), p. CMA1.

**11. **B. Javidi, S.-H. Hong, and O. Matoba, “Multidimensional optical sensor and imaging system,” Appl. Opt. **45**, 2986–2994 (2006). [CrossRef] [PubMed]

**12. **R. Horstmeyer, G. Euliss, R. Athale, and M. Levoy, “Flexible multimodal camera using a light field architecture,” in “Proc. ICCP09 ,” (2009), pp. 1–8.

**13. **D. L. Donoho, “Compressed sensing,” IEEE Trans. Info. Theory **52**, 1289–1306 (2006). [CrossRef]

**14. **R. Baraniuk, “Compressive sensing,” IEEE Sig. Processing Mag. **24**, 118–121 (2007). [CrossRef]

**15. **E. J. Candes and M. B. Wakin, “An introduction to compressive sampling,” IEEE Sig. Processing Mag. **25**, 21–30 (2008). [CrossRef]

**16. **R. Horisaki, K. Choi, J. Hahn, J. Tanida, and D. J. Brady, “Generalized sampling using a compound-eye imaging system for multi-dimensional object acquisition,” Opt. Express **18**, 19367–19378 (2010). [CrossRef] [PubMed]

**17. **R. Horisaki and J. Tanida, “Multi-channel data acquisition using multiplexed imaging with spatial encoding,” Opt. Express **18**, 23041–23053 (2010). [CrossRef] [PubMed]

**18. **R. Horisaki and J. Tanida, “Multidimensional TOMBO imaging and its applications,” Proc. SPIE **8165**, 816516 (2011). [CrossRef]

**19. **R. Horisaki, J. Tanida, A. Stern, and B. Javidi, “Multidimensional imaging using compressive Fresnel holography,” Opt. Lett. **37**, 2013–2015 (2012). [CrossRef] [PubMed]

**20. **Y. Rivenson, A. Stern, and B. Javidi, “Compressive Fresnel holography,” J. Display Technol. **6**, 506–509 (2010). [CrossRef]

**21. **K. Choi, R. Horisaki, J. Hahn, S. Lim, D. L. Marks, T. J. Schulz, and D. J. Brady, “Compressive holography of diffuse objects,” Appl. Opt. **49**, H1–H10 (2010). [CrossRef] [PubMed]

**22. **E. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Info. Theory **52**, 489–509 (2006). [CrossRef]

**23. **J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Proc. **16**, 2992–3004 (2007). [CrossRef]

**24. **L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D **60**, 259–268 (1992). [CrossRef]

**25. **S.-H. Hong, J.-S. Jang, and B. Javidi, “Three-dimensional volumetric object reconstruction using computational integral imaging,” Opt. Express **12**, 483–491 (2004). [CrossRef] [PubMed]

**26. **J. E. Solomon, “Polarization imaging,” Appl. Opt. **20**, 1537–1544 (1981). [CrossRef] [PubMed]

**27. **S. G. Demos and R. R. Alfano, “Optical polarization imaging,” Appl. Opt. **36**, 150–155 (1997). [CrossRef] [PubMed]

**28. **J. S. Tyo, D. L. Goldstein, D. B. Chenault, and J. A. Shaw, “Review of passive imaging polarimetry for remote sensing applications,” Appl. Opt. **45**, 5453–5469 (2006). [CrossRef] [PubMed]

**29. **X. Xiao, B. Javidi, G. Saavedra, M. Eismann, and M. Martinez-Corral, “Three-dimensional polarimetric computational integral imaging,” Opt. Express **20**, 15481–15488 (2012). [CrossRef] [PubMed]

**30. **Y. Rivenson, A. Rot, S. Balber, A. Stern, and J. Rosen, “Recovery of partially occluded objects by applying compressive fresnel holography,” Opt. Lett. **37**, 1757–1759 (2012). [CrossRef] [PubMed]

**31. **T. Sato, T. Araki, Y. Sasaki, T. Tsuru, T. Tadokoro, and S. Kawakami, “Compact ellipsometer employing a static polarimeter module with arrayed polarizer and wave-plate elements,” Appl. Opt. **46**, 4963–4967 (2007). [CrossRef] [PubMed]

**32. **Y. Rivenson and A. Stern, “Conditions for practicing compressive fresnel holography,” Opt. Lett. **36**, 3365–3367 (2011). [CrossRef] [PubMed]