## Abstract

Traditional approaches to wide field of view (FoV) imager design usually lead to overly complex optics with high optical mass and/or pan-tilt mechanisms that incur significant mechanical/weight penalties, limiting their applications, especially on mobile platforms such as unmanned aerial vehicles (UAVs). We describe a compact wide FoV imager design based on superposition imaging that employs thin-film shutters and multiple beamsplitters to reduce system weight and eliminate mechanical pointing. The performance of the superposition wide FoV imager is quantified in a simulation study and demonstrated experimentally. A threefold increase in the FoV relative to that of the narrow FoV imaging optics employed is realized. The performance of the superposition wide FoV imager is also analyzed relative to a traditional wide FoV imager, and we find that it can offer comparable performance.

© 2010 Optical Society of America

## 1. Introduction

Current reconnaissance and surveillance cameras include so-called “soda straw” narrow field of view (FoV) imagers that require mechanical pointing to achieve adequate coverage at high resolution. At each measurement instant a sub-FoV corresponding to a different part of the desired scene is sequentially captured. Once all of the sub-images are acquired in this pan and tilt system, they are appropriately tiled to form a high resolution wide field of view image of the scene. While conceptually straightforward, this approach acquires only a partial section of the desired scene at each measurement instant. Moreover, the mechanical complexity of such a system leads to increased size and weight.

Note that a system that captures each sub-FoV sequentially in time is inherently photon inefficient. For example, assume the narrow FoV system contributes on average *Q* photons per pixel, following a Poisson distribution, to the imaging sensor per unit time. After an observation time *τ* a total of *I*′ sub-images are collected. Further, assume that the imaging sensor has a collection efficiency *η* and the measurement is corrupted by both Poisson shot noise (i.e., with variance equal to the signal) and signal-independent Gaussian detector noise with zero mean and variance
$I\prime {\sigma}_{n}^{2}$. Note that the detection noise scales with *I*′ because the noise bandwidth is linear in *I*′ for a fixed total observation time. In this case, the electrical signal-to-noise ratio (SNR) [1, 2] of each pixel measurement is given by

$$\text{SNR}=\frac{{\left(\eta Q\tau /I\prime \right)}^{2}}{\eta Q\tau /I\prime +I\prime {\sigma}_{n}^{2}}. \tag{1}$$

When the system is shot noise limited, i.e., *σ*_{n} → 0, then the $\text{SNR}=\eta Q\tau /I\prime$, and when the system is detector (read) noise limited then the $\text{SNR}={\left(\eta Q\tau \right)}^{2}/\left({(I\prime )}^{3}{\sigma}_{n}^{2}\right)$. Also from Eq. (1) we observe that increasing the number of sub-images *I*′ collected decreases the SNR for a fixed observation time *τ*.
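To make the scaling with *I*′ concrete, Eq. (1) can be evaluated numerically. The sketch below is illustrative only: the parameter values (*η* = 0.5, *Q* = 1000 photons/pixel/s, *τ* = 1 s, *σ*_{n} = 10 e⁻) are assumptions, not values from the text.

```python
# Illustrative evaluation of the per-pixel electrical SNR of Eq. (1) for a
# sequential ("soda straw") imager that splits a fixed observation time tau
# over I' sub-images. All parameter values below are assumptions.

def snr_sequential(eta, Q, tau, I, sigma_n):
    """Eq. (1): signal^2 / (shot variance + read variance)."""
    signal = eta * Q * tau / I       # mean electrons per sub-image measurement
    shot_var = signal                # Poisson: variance equals the mean
    read_var = I * sigma_n ** 2      # noise bandwidth scales with I'
    return signal ** 2 / (shot_var + read_var)

eta, Q, tau, sigma_n = 0.5, 1000.0, 1.0, 10.0
for I in (1, 4, 16):
    print(I, snr_sequential(eta, Q, tau, I, sigma_n))
# SNR decreases monotonically as the number of sub-images I' grows.
```

Running this confirms the claim in the text: for a fixed observation time, splitting the dwell over more sub-images strictly lowers the per-pixel SNR.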

Another conventional wide field imaging approach would employ the same imaging sensor to directly image the desired scene at a lower resolution. This approach can be augmented with a more sophisticated optical system and a larger imaging sensor with a higher pixel count to produce a high resolution image. However, these augmentations again lead to increased system size and weight. Indeed, it is well-known that increasing the FoV brings greater than linearly growing costs in optical mass [3]. As a result of the various costs that accompany conventional wide FoV imaging techniques, these systems have limited applications, especially for mobile platforms like unmanned aerial vehicles (UAVs). To reduce some of these physical requirements and retain a high resolution image, here we describe a compact thin-film shuttered multi-beamsplitter superposition imaging solution and demonstrate its functionality through an experimental prototype.

## 2. Optical architecture

An alternative approach to wide FoV imaging without incurring significant size and weight penalties can be realized with spatial-multiplexing or superposition space imaging. This is accomplished by superimposing multiple sub-fields of view onto a common imaging sensor to form a composite image. In superposition space imaging, as the name implies, each pixel measurement in the composite image is a superposition of corresponding pixels from the individual sub-fields of view. A schematic depiction of superposition imaging is shown in Fig. 1 where each pixel in the *j*^{th} sub-image X* _{j}* is denoted by

*x*_{jp}, and the *i*^{th} resulting composite image is denoted by M_{i}, with each pixel denoted by *m*_{ip}.

A number of different techniques have been described in the literature to accomplish superposition imaging. One architecture that employs multiple lenses and mechanical shutters to achieve this goal is described in [4]. In that system, each lens images a portion of the scene onto a common imaging sensor and can emulate a conventional pan and tilt mode of operation by opening a single shutter at a time. An alternative architecture based on a beamsplitter and a linearly translating mirror is described in [5]. By capturing a sequence of superimposed images while the mirror position is shifted, the overall field of view can be reconstructed. A similar approach described in [6] uses a beamsplitter and a rotating mirror to accomplish superposition imaging. Yet another approach employs a binary combiner arrangement that incorporates shutters to perform superposition imaging. The architecture described in [7] combines superposition imaging with point spread function (PSF) engineering using sparse apertures, micropistons, and microprisms. In that work, the sparse aperture is implemented using an “eyelid array” where each eyelid is a voltage controlled electrostatic flap that can open and close a small aperture. Note, however, that all of these implementations involve some mechanical aspect.

To achieve a wide FoV using superposition space imaging, multiple non-redundant composite image measurements must be acquired. These multiple measurements are subsequently used to recover the original sub-images. Multiplexing and encoding a source is a traditional technique in multiplex spectroscopy [8, 9, 10]. Rather than use a mechanical shutter, the optical properties of a number of materials can be exploited to implement an optical modulator. Two common types are electrochromic materials, usually found on switchable window glass, and liquid crystal (LC) materials. In this work, an LC based multi-beamsplitter system that can superimpose three sub-fields of view will be considered. A diagram of such a 3-FoV system, which will serve as the basis of our experimental implementation, is shown in Fig. 2(a). The selection of LC for the shutter material also imposes an additional polarizer requirement at the entrance aperture of the system. Using a polarizer decreases the overall light collection ability of the system by a factor of two, however, it is required to enable operation of the LC shutters. While only three sub-fields of view are considered here, the described architecture can be extended to accommodate additional sub-images in the horizontal and/or vertical directions. Another compact architecture that multiplexes multiple sub-fields of view using a polarization based encoding scheme is described in [11].

In our system, plate beamsplitters are used to tilt and optically superimpose the sub-fields of view presented to the imager. Following each beamsplitter is an electronically controlled shutter where the thin-film material is deposited directly on each beamsplitter to form a compound dual purpose element. When a shutter is in the closed state the corresponding sub-fields of view are suppressed from the composite image.

For comparison to the conventional imager the same imaging sensor must be used. In this superposition imaging case, *J* sub-images are collected simultaneously as a composite image measurement. If *I* equal exposure measurements are taken within a total observation time *τ*, and each sub-FoV again contributes an average of *Q* photons/pixel/unit time to the imaging sensor, then the composite pixel electrical measurement SNR is given by

$$\text{SNR}=\frac{{\left(J\eta Q\tau /I\right)}^{2}}{J\eta Q\tau /I+I{\sigma}_{n}^{2}}. \tag{2}$$

When the system is read noise limited, a multiplex advantage of *J*^{2} becomes apparent when compared with Eq. (1). When the system is shot noise limited the multiplex advantage is *J*. Note that the use of spatial-multiplexing in this architecture also leads to a factor of *J* increase in the number of photons collected. In other words, a superposition image may occupy the full dynamic range of a sensor whereas a conventional image is constrained to only 1/*J* of the dynamic range for the same observation time. Filling a fraction of the dynamic range is clearly undesirable; however, it is conceivable that operational constraints may dictate this. For instance, the frame rate may be such that the available observation time only allows partial utilization of the sensor dynamic range.
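The read-noise-limited advantage of *J*² can be checked numerically. The sketch below uses illustrative parameter values (not taken from the text) and drives the system into the read-noise-limited regime with a deliberately large *σ*_{n}:

```python
# Compare the SNR expressions of Eqs. (1) and (2) with I = I' in the
# read-noise-limited regime; the J^2 multiplex advantage should emerge.
# All parameter values are illustrative assumptions.

def snr_conventional(eta, Q, tau, I, sigma_n):
    s = eta * Q * tau / I
    return s ** 2 / (s + I * sigma_n ** 2)

def snr_superposition(J, eta, Q, tau, I, sigma_n):
    s = J * eta * Q * tau / I            # J sub-FoVs superimposed per pixel
    return s ** 2 / (s + I * sigma_n ** 2)

# Large sigma_n makes the read-noise term dominate the shot-noise term.
eta, Q, tau, I, sigma_n, J = 0.5, 10.0, 1.0, 3, 100.0, 3
ratio = snr_superposition(J, eta, Q, tau, I, sigma_n) / snr_conventional(eta, Q, tau, I, sigma_n)
print(ratio)  # approaches J**2 = 9 as read noise dominates
```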

## 3. Imaging system model

For convenience, the object space is partitioned and indexed from left to right into *J* = 3 sub-fields of view. Thus, as shown in Fig. 2(b), the front beamsplitter is defined as the 3^{rd} reflective surface providing the image X_{3} of FoV 3. The middle (*j* = 2) beamsplitter provides X_{2}, and the back (*j* = 1) mirror provides X_{1}. The system parameters for this 3-FoV system include the distance from the camera lens to the *j*^{th} beamsplitter (mirror) *d _{j}*, the angular field of view of the camera lens

*ϕ*, and the rotation angle *θ*_{j} of the respective element relative to the optical axis of the camera. The remaining parameters shown include the *j*^{th} beamsplitter transmission and reflection coefficients, given by *α*_{jT} and *α*_{jR}, respectively, and the associated shutter open state transmission coefficients, given by *k*_{j}.

In order to successfully disambiguate the individual sub-images, multiple non-redundant composite image measurements of the wide FoV scene are needed. In the optical architecture described above, each sub-image is superimposed onto an imaging sensor with the *i*^{th} composite pixel measurement given by

$${m}_{ip}={\sum}_{j=1}^{J}{h}_{ij}{x}_{jp}+{n}_{ip}, \tag{3}$$

where *n*_{ip} is measurement noise that includes Poisson and thermal components, *x*_{jp} corresponds to the *p*^{th} pixel in the *j*^{th} sub-image X_{j}, and the *h*_{ij} coefficients are directly related to the physical parameters of the optical multiplexer. These physical parameters include the beamsplitter transmission and reflection coefficients and the shutter states. Therefore, by changing the shutter states, different linear combinations of the sub-fields of view can be measured. Since the *i*^{th} composite image measurement consists of a *P*-pixel image from the imaging sensor and the pixelwise measurement in Eq. (3) is repeated (in parallel) for each pixel, the *p* subscript will be dropped to simplify notation.
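The pixelwise measurement model of Eq. (3) can be sketched in a few lines. The matrix below is an illustrative ideal forward model for the 3-FoV shutter states, not the prototype's calibrated one:

```python
import numpy as np

# Sketch of the pixelwise measurement of Eq. (3): each composite pixel m_i
# is a weighted sum of corresponding pixels from the J sub-images, with
# weights h_ij set by beamsplitter coefficients and shutter states.
# The H values here are illustrative (ideal shutters), not measured ones.
rng = np.random.default_rng(0)
J, P = 3, 16                                 # sub-images, pixels per image
x = rng.uniform(0.0, 1.0 / J, size=(J, P))   # each sub-image fills 1/J of range
H = np.array([[1.0, 1.0, 1.0],               # <00> all shutters open
              [0.0, 1.0, 1.0],               # <01> middle shutter closed
              [0.0, 0.0, 1.0]])              # <11> both shutters closed (ideal)
sigma_n = 0.01
m = H @ x + rng.normal(0.0, sigma_n, size=(J, P))  # composite measurements
print(m.shape)  # (3, 16)
```

With this well-conditioned `H`, the sub-images are recoverable from the three composite measurements up to the added noise.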

The dynamic range associated with each sub-image can be uniformly mapped into equally allocated portions within the full dynamic range of the composite image. By considering a measurement with all shutters in the open state and setting *h*_{11} = *h*_{12} = *h*_{13}, the required beamsplitter coefficients *α*_{j} to equally subdivide the full sensor dynamic range for this 3-FoV system can be computed from the following system of equations

$${\alpha}_{1R}{\left({k}_{2}{\alpha}_{2T}\right)}^{2}{\left({k}_{3}{\alpha}_{3T}\right)}^{2}={\alpha}_{2R}{\left({k}_{3}{\alpha}_{3T}\right)}^{2}={\alpha}_{3R}, \tag{4}$$

where *α*_{1R} = 1 since the back surface is always a mirror. It is assumed that the beamsplitters are spatially uniform and do not have absorptive loss, resulting in *α*_{jR} + *α*_{jT} = 1. The squared terms result from the folding in which the light travels through the same optical elements twice before reaching the imaging sensor. Solving Eq. (4) using *k*_{j} = *k* = 0.62 for the liquid crystal material used in our particular implementation yields *α*_{2T} ≈ 0.77 and *α*_{3T} ≈ 0.92. It should be noted that the LC shutters are also assumed to be spatially uniform. Based on these values, a standard beamsplitter ratio of 1:3 (*α*_{2T} = 0.75) was chosen for the middle beamsplitter, and plain glass with an approximate ratio of 1:11.5, which closely matches the computed transmission coefficient, was used for the front beamsplitter. Thus, for some illumination level, a conventional imager measurement *m*_{conv,j} = *h*_{1j}*x*_{j} for the *j*^{th} FoV is allowed to completely utilize 1/*J* of the full sensor dynamic range.
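The quadratic structure of Eq. (4) admits a closed-form solution. The sketch below reproduces the quoted coefficients for *k* = 0.62, assuming *α*_{1R} = 1 and lossless beamsplitters (*α*_{jR} + *α*_{jT} = 1):

```python
import math

# Solve Eq. (4) for the 3-FoV design. With alpha_1R = 1 and lossless
# beamsplitters, equalizing h_11 = h_12 reduces to
#     k^2 * a2T^2 + a2T - 1 = 0,
# and h_12 = h_13 then reduces to
#     (1 - a2T) * k^2 * a3T^2 + a3T - 1 = 0.
def pos_root(a, b, c):
    """Positive root of a*t^2 + b*t + c = 0."""
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

k = 0.62                                         # LC shutter open-state transmission
a2T = pos_root(k * k, 1.0, -1.0)                 # middle beamsplitter transmission
a3T = pos_root((1.0 - a2T) * k * k, 1.0, -1.0)   # front beamsplitter transmission
print(round(a2T, 2), round(a3T, 2))              # 0.77 0.92, matching the text
```

The recovered values (≈ 0.77 and ≈ 0.92) match the coefficients quoted above, which is a useful consistency check on the folded-path model.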

The disambiguation problem to recover each X_{j} for *j* = 1 … *J* can be formulated as an inverse problem. As there are three sub-images and two shutters, at least three non-redundant measurements from the four possible shutter combinations are required to ensure a well-conditioned inverse problem. The shutter combinations can be represented by a binary-valued vector 〈*s*_{3}*s*_{2}〉, where *s*_{j} = 0 denotes an open shutter, *s*_{j} = 1 denotes a closed shutter, and the subscript identifies the shutter position. In particular, the three measurements *m*_{1}, *m*_{2}, and *m*_{3}, corresponding to the 〈00〉 all open, 〈01〉 front open and middle closed, and 〈11〉 all closed shutter states, respectively, will be used. These measurements can be written as

$$\mathbf{m}=\mathbf{H}\mathbf{x}+\mathbf{n}, \tag{5}$$

where **m** = [*m*_{1} *m*_{2} *m*_{3}]^{T}, **x** = [*x*_{1} *x*_{2} *x*_{3}]^{T}, **n** is the corresponding noise vector, and the forward model matrix **H** is spatially invariant. Thus the same **H** will be used for all pixel positions. It should be noted that, depending on the shutter material used, the light attenuating performance of the 〈11〉 shutter state may not necessarily be equivalent to the light attenuating performance of the 〈10〉 shutter state.

For simplicity, **H** is normalized such that unity is the maximum value for any element. Further, let a *perfect* shutter be a device in which there is lossless transmission in the open state and no transmission in the closed state. With perfect shutters the 〈01〉 shutter state associated with the second measurement results in only two sub-images *m*_{2} = *x*_{2} + *x*_{3} being superimposed. At each beamsplitter, as the thin-film material is placed following the reflective surface, the following conditions must be true *h*_{13} = *h*_{23} = *h*_{33} and *h*_{12} = *h*_{22}. The former condition is due to the front surface always reflecting the third FoV X_{3} and the latter condition is due to the front shutter remaining in the same state during these two measurements. Thus, with perfect shutters the normalized ideal **H** is given by

$$\mathbf{H}=\left[\begin{array}{ccc}1&1&1\\ 0&1&1\\ 0&0&1\end{array}\right]. \tag{7}$$

In practice, shutters transmit a nonzero amount of light in the closed state, which introduces the *leakage* coefficients *h*_{21}, *h*_{31}, and *h*_{32}.
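The effect of shutter leakage on the conditioning of the inverse problem can be checked directly. The 30% leakage value below matches the level assumed later in the simulation study; the check itself is an illustrative sketch:

```python
import numpy as np

# Ideal H of Eq. (7) versus an H with closed-state leakage in h_21, h_31,
# and h_32 (30% here, as in the simulation study). A larger condition
# number means the inversion amplifies measurement noise more strongly.
H_ideal = np.array([[1.0, 1.0, 1.0],
                    [0.0, 1.0, 1.0],
                    [0.0, 0.0, 1.0]])
leak = 0.30
H_leaky = H_ideal.copy()
H_leaky[1, 0] = leak      # h_21: X_1 leaking through the closed middle shutter
H_leaky[2, 0] = leak      # h_31: X_1 leaking through both closed shutters
H_leaky[2, 1] = leak      # h_32: X_2 leaking through the closed front shutter
print(np.linalg.cond(H_ideal), np.linalg.cond(H_leaky))
```

Leakage makes the measurement rows more alike, so the leaky matrix is worse conditioned than the ideal one, consistent with the remark below that a poorly conditioned **H** erodes the multiplex advantage.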

Each measurement ${m}_{i}={\sum}_{j}{h}_{ij}{x}_{j}+{n}_{i}$ with *i* = 1 … *I* and *j* = 1 … *J* is a function of the shutter state, leading to a measurement SNR given by

$${\text{mSNR}}_{i}=\frac{{\left({\sum}_{j}{h}_{ij}\eta Q\tau /I\right)}^{2}}{{\sum}_{j}{h}_{ij}\eta Q\tau /I+I{\sigma}_{n}^{2}}. \tag{8}$$

When the system is shot noise limited, ${\text{mSNR}}_{i}={\sum}_{j}{h}_{ij}\eta Q\tau /I$, and when it is read noise limited, ${\text{mSNR}}_{i}={\left({\sum}_{j}{h}_{ij}\eta Q\tau \right)}^{2}/\left({I}^{3}{\sigma}_{n}^{2}\right)$.

For comparison with the conventional SNR given in Eq. (1), the SNR of the reconstructed sub-images must be used. The composite image measurements **m** are used to estimate the individual sub-images **x̂** pixel-by-pixel after applying a linear reconstruction. Here we use the linear minimum mean square error (LMMSE) estimator [13] (Wiener reconstruction) defined as

$$\widehat{\mathbf{x}}=\mathbf{W}\mathbf{m}={\mathbf{C}}_{x}{\mathbf{H}}^{T}{\left(\mathbf{H}{\mathbf{C}}_{x}{\mathbf{H}}^{T}+{\mathbf{C}}_{n}\right)}^{-1}\mathbf{m}, \tag{9}$$

where **C**_{x} = E[**xx**^{T}] is the (pixelwise) object autocorrelation matrix and **C**_{n} = E[**nn**^{T}] is the detector noise autocorrelation matrix. Because no correlation is expected between non-overlapping sub-fields of view, the object autocorrelation matrix is diagonal. Moreover, assuming the same average intensity ${\sigma}_{x}^{2}$ for each sub-FoV yields ${\mathbf{\text{C}}}_{x}={\sigma}_{x}^{2}\mathbf{\text{I}}$, where **I** is the identity matrix. Also, as the detector noise is i.i.d., the noise autocorrelation is ${\mathbf{\text{C}}}_{n}={\sigma}_{n}^{2}\mathbf{\text{I}}$.
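A minimal sketch of the Wiener reconstruction under these assumptions (diagonal **C**_{x} = σ_x²**I** and **C**_{n} = σ_n²**I**; all numerical values illustrative):

```python
import numpy as np

# LMMSE (Wiener) reconstruction applied pixel-by-pixel:
#     x_hat = W m,  W = C_x H^T (H C_x H^T + C_n)^(-1).
# H is an illustrative ideal forward model; sigma values are assumptions.
rng = np.random.default_rng(1)
H = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
sigma_x, sigma_n = 0.3, 0.01
Cx = sigma_x ** 2 * np.eye(3)               # uncorrelated sub-FoVs
Cn = sigma_n ** 2 * np.eye(3)               # i.i.d. detector noise
W = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)

x = rng.uniform(0.0, 1.0 / 3.0, size=(3, 1000))   # 1000 pixels per sub-image
m = H @ x + rng.normal(0.0, sigma_n, size=x.shape)
x_hat = W @ m
print(np.sqrt(np.mean((x_hat - x) ** 2)))   # small residual RMS error
```

Because the same 3×3 operator `W` applies at every pixel, the full-frame reconstruction is a single small matrix product, which is part of the computational appeal of the pixelwise model.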

For any linear reconstruction operator **W**, in general, with elements *w _{ij}* and a measurement SNR of Eq. (8), a reconstruction SNR is given by

$${\text{rSNR}}_{j}=\frac{{\left({\sum}_{i}{w}_{ji}\,\text{E}\left[{m}_{i}\right]\right)}^{2}}{{\sum}_{i}{w}_{ji}^{2}\,\text{var}\left({m}_{i}\right)}, \tag{10}$$

where E[*m*_{i}] and var(*m*_{i}) are the mean and variance of the *i*^{th} measurement from Eq. (8). As *I* increases, the resulting measurement SNR and reconstruction SNR both decrease. Since this reconstruction SNR also depends on the reconstruction operator, which in turn depends on **H**, if **H** is not well-conditioned then some of the multiplex advantage may not be preserved through reconstruction.

#### 3.1. Forward model estimation

The coefficients of **H** are directly related to the optical properties of the elements comprising the optical multiplexer. Using the normalized forward model, the leakage coefficients can be directly estimated by taking control measurements. Individual sub-fields of view can be measured by physically blocking the other sub-fields of view. Allowing only a single sub-FoV with all of the shutters in an open state results in a measurement
${\text{X}}_{j}^{\text{exp}}$ which is a scaled version of the *j*^{th} sub-image. Subsequently, a sub-FoV measurement with a scaling corresponding to a particular shutter state can be obtained. Thus, the leakage coefficients can be estimated using these control measurements.

For the 3-FoV system, the *h*_{21} coefficient can be determined by taking two measurements in which X_{2} and X_{3} are physically blocked. The first measurement
${\text{X}}_{1}^{\text{exp}}\hspace{0.17em}=\hspace{0.17em}{\widehat{\text{X}}}_{1}^{\u300800\u3009}$ represents the maximum possible contribution of X_{1} to the imager for the 〈00〉 shutter state. The second measurement
${\widehat{\text{X}}}_{1}^{\u300801\u3009}$ corresponds to the (leakage) contribution of X_{1} through a thin-film shutter in the closed state. Thus, the normalized *h*_{21} leakage coefficient is given by
${h}_{21}\hspace{0.17em}=\hspace{0.17em}{\widehat{x}}_{1p}^{\u300801\u3009}/{\widehat{x}}_{1p}^{\u300800\u3009}$. If the image is not uniform then the pixel position *p* can be chosen as the location corresponding to max
$\left({\widehat{\text{X}}}_{1}^{\u300800\u3009}\right)$ or the brightest pixel. Similarly, the remaining leakage coefficients are given by
${h}_{31}\hspace{0.17em}=\hspace{0.17em}{\widehat{x}}_{1p}^{\u300811\u3009}/{\widehat{x}}_{1p}^{\u300800\u3009}$ and
${h}_{32}\hspace{0.17em}=\hspace{0.17em}{\widehat{x}}_{2p}^{\u300811\u3009}/{\widehat{x}}_{2p}^{\u300800\u3009}$.
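The control-measurement procedure reduces to a simple ratio at a reference pixel. A sketch with synthetic data (the true leakage of 0.3 and the noise level are illustrative values):

```python
import numpy as np

# Estimate a leakage coefficient from two control measurements of the same
# sub-FoV (the others physically blocked): one with all shutters open <00>
# and one with the relevant shutter closed <01>. The 0.3 leakage used to
# synthesize the data is an illustrative value, not a measured one.
rng = np.random.default_rng(2)
x1_true = rng.uniform(0.2, 1.0, size=256)                 # sub-image X_1
x1_open = x1_true + rng.normal(0.0, 0.002, 256)           # <00> control
x1_closed = 0.3 * x1_true + rng.normal(0.0, 0.002, 256)   # <01> leakage

p = int(np.argmax(x1_open))      # brightest-pixel reference position
h21 = x1_closed[p] / x1_open[p]
print(round(h21, 2))             # close to the true leakage of 0.3
```

Using the brightest pixel keeps the denominator large, so the ratio is least sensitive to measurement noise, which is the rationale for the max(·) rule in the text.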

Replacing the zero values in the ideal normalized **H** of Eq. (7) with these leakage coefficients leads to an initial estimate of the normalized forward model. Using this initial estimate, the coefficients can be further refined by using the
${\text{X}}_{j}^{\text{exp}}$ measurements. Since
${\text{X}}_{j}^{\text{exp}}$ is treated as the truth image, it can be used to compute and minimize an estimated mean squared error (MSE) of the LMMSE reconstructed sub-images while optimizing over the coefficients of **H**, or

$${\mathbf{\text{H}}}_{\text{MSE}}^{*}=\underset{\mathbf{\text{H}}}{\text{arg min}}\,{\sum}_{j}{\left|\right|{\mathbf{\text{x}}}_{j}^{\text{exp}}-{\widehat{\mathbf{\text{x}}}}_{j}\left|\right|}_{2}^{2}\quad \text{subject to}\quad {h}_{13}={h}_{23}={h}_{33},\;{h}_{12}={h}_{22}, \tag{11}$$

where the ${\text{X}}_{j}^{\text{exp}}$ and ${\widehat{\text{X}}}_{j}$ are lexicographically ordered into vectors and the *h*_{ij} equality conditions are those given above for a 3-FoV system.
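The MSE objective of Eq. (11) can be attacked with any constrained minimizer. As an illustrative sketch (not the solver used in the text), a coarse grid search over the three leakage coefficients with the LMMSE reconstruction inside the objective, on synthetic data:

```python
import itertools
import numpy as np

# Coarse grid search for the leakage coefficients (h_21, h_31, h_32)
# minimizing the MSE between truth sub-images and LMMSE reconstructions,
# in the spirit of Eq. (11). Truth data and all values are illustrative.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0 / 3.0, size=(3, 500))        # synthetic truth

def make_H(l21, l31, l32):
    return np.array([[1.0, 1.0, 1.0], [l21, 1.0, 1.0], [l31, l32, 1.0]])

H_true = make_H(0.30, 0.25, 0.35)                     # assumed true leakages
m = H_true @ x + rng.normal(0.0, 0.002, size=x.shape)

sigma_x2, sigma_n2 = (1.0 / 3.0) ** 2 / 12.0, 0.002 ** 2

def mse(H):
    """MSE of the LMMSE reconstruction for a candidate forward model H."""
    W = sigma_x2 * H.T @ np.linalg.inv(sigma_x2 * H @ H.T + sigma_n2 * np.eye(3))
    return float(np.mean((W @ m - x) ** 2))

grid = np.arange(0.0, 0.55, 0.05)
best = min(itertools.product(grid, grid, grid), key=lambda g: mse(make_H(*g)))
print(best)  # near the assumed true leakages (0.30, 0.25, 0.35)
```

A gradient-based solver would be far more efficient for the real (continuous, constrained) problem; the grid search simply makes the shape of the objective easy to see.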

## 4. Results

#### 4.1. Simulation study

In a conventional imaging system, each measurement is a direct representation of a sub-field of view. In contrast, a set of measurements in a superposition imaging system must be used to recover the individual sub-fields of view. A numerical simulation was performed for this 3-FoV system architecture using images of various buildings as non-overlapping parts of the object space. It should be noted that only a read noise limited system is considered. The simulated composite image measurements with additive zero mean Gaussian noise with *σ _{n}* = 0.01 for a pixel dynamic range of [0 – 1] and each leakage coefficient set to 30% are shown in Fig. 3. Using the simulated measurements, the optimization problem given by Eq. (11) was solved to find an estimate of the forward model
${\mathbf{\text{H}}}_{\text{MSE}}^{*,\text{sim}}$. Together with the LMMSE operator, this forward model estimate was applied to the simulated composite image measurements to reconstruct the individual sub-images shown in Fig. 4. Here we employ an image SNR metric to quantify the quality of the reconstructed images. This image SNR metric is defined as
$\widehat{\text{SNR}}=10{\text{log}}_{10}\left(\text{E}\left[{\left|\right|{\widehat{\text{X}}}_{j}\left|\right|}_{2}^{2}\right]/\text{E}\left[{\left|\right|{\text{X}}_{j}^{\text{exp}}-{\widehat{\text{X}}}_{j}\left|\right|}_{2}^{2}\right]\right)$. This results in an
$\widehat{\text{SNR}}$ of 18.7, 18.1, and 21.1 dB for X̂_{1}, X̂_{2}, and X̂_{3}, respectively. For comparison, images corrupted by a relative noise strength of *σ*_{n} = 0.01 from a conventional imager are shown in Fig. 5. For this noise level, the reconstructed images from the superposition imager are comparable to the images from the conventional imager.

This is consistent with a reconstruction SNR comparison between the conventional and superposition imagers. Note that for the conventional imager the measurement SNR is the same as the reconstruction SNR. Shown in Fig. 6 is the average reconstruction SNR for both the conventional, Eq. (1), and superposition, Eq. (10), cases as the number of sub-fields of view changes, assuming a full well capacity *ηQτ* = 100k *e*^{–}, for three different read noise levels. Here it can be seen that the SNR for both approaches is comparable, and it should be emphasized that the superposition imaging system has lower size and weight than a conventional wide FoV imaging system. In a conventional imaging system, as the number of fields of view (*J*) increases, the signal power decreases while the noise power increases, resulting in a downward trend. In this superposition imaging system, as *J* increases the signal power, on average, remains the same while the noise power increases, again resulting in a downward trend, although at a slower rate than for the conventional imager. However, in the case of the superposition imager, the multiplex advantage becomes significant at low light levels or with high read noise levels. An interesting result from Fig. 6 is that at *J* = 14 and *σ*_{n} = 0.02 there is a crossover between the conventional imager SNR and the superposition imager SNR. As the read noise increases the crossover occurs at a lower value of *J*; the superposition imager SNR exceeds the conventional imager SNR at *J* = 9 for *σ*_{n} = 0.03. Selected numerical values from Fig. 6 for *σ*_{n} = 0.03 have been extracted into Table 1 to quantify this crossover.
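The $\widehat{\text{SNR}}$ metric used above reduces to a ratio of energies in decibels. A sketch on synthetic arrays (all values illustrative):

```python
import numpy as np

# Image SNR metric: 10*log10(||x_hat||^2 / ||x_true - x_hat||^2),
# evaluated on a reconstructed sub-image. Arrays here are synthetic.
def image_snr_db(x_true, x_hat):
    num = np.sum(x_hat ** 2)                # reconstructed-image energy
    den = np.sum((x_true - x_hat) ** 2)     # residual (error) energy
    return 10.0 * np.log10(num / den)

rng = np.random.default_rng(4)
x_true = rng.uniform(0.0, 1.0, size=(64, 64))
x_hat = x_true + rng.normal(0.0, 0.05, size=x_true.shape)
print(round(image_snr_db(x_true, x_hat), 1))
```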

#### 4.2. Experimental results

A prototype of the thin-film shuttered multi-beamsplitter architecture described above was constructed and the device is shown in Fig. 7. The system parameters for this device are *d*_{1}=60mm, *d*_{2}=51mm, *d*_{3}=43mm, *ϕ*_{horiz} = 7.3° and *ϕ*_{vert} = 5.5°, *θ*_{1} = 65°, *θ*_{2} = 55°, and *θ*_{3} = 45°. The beamsplitter assembly has dimensions of about 40mm × 40mm × 30mm. Also, the 50mm C-mount lens was operated with an aperture setting of f/2.8. Parts from readily available commercial LC 3-D shutter eyeglasses were used as our thin-film shutters. In this design, the front beamsplitter requires a high transmission coefficient to ensure sufficient light throughput. As a result, an LC shutter panel extracted from the eyeglass assembly was used directly as the front (*j* = 3) beamsplitter in the optical multiplexer device. For this prototype, an LC shutter panel was used in series with a standard beamsplitter for the second (*j* = 2) stage. As the second stage is relatively thick, we expect multiple reflections of the sub-field of view to appear in the composite image.

A low resolution depiction of the object space (scene) placed 3.3 meters away from the optical multiplexer is shown in Fig. 8. This object space has a horizontal angular extent of about 45° and a vertical angular extent of about 17°. In this figure, red boxes have been added to demarcate the sub-fields of view seen by the optical multiplexer from the object space. The horizontal separation of the sub-fields of view are based on the rotation angles chosen for mirror and beamsplitters in the device. In contrast, high resolution tiles of the object space from a “soda-straw” imager that requires pointing are shown in Fig. 9. For the camera used, the observation time per measurement is *τ/*3 = 25 [frames] × 1/30 [frames/sec] ≈ 0.83 sec and the composite images after time-averaging 25 frames are shown in Fig. 10. Each composite image measurement has an equivalent horizontal angular extent *ϕ*_{horiz} = 7.3° and a vertical angular extent *ϕ*_{vert} = 5.5°. The estimated equivalent noise standard deviation for these time-averaged images is *σ _{n}* ≈ 0.002. Solving Eq. (11) using the measured data results in the following estimate for the forward model

${\mathbf{\text{H}}}_{\text{MSE}}^{*}$ given in Eq. (12), which together with the LMMSE operator was applied to the composite image measurements to reconstruct X̂_{1}, X̂_{2}, and X̂_{3}. Repeating the numerical simulation using the control measurements instead of buildings and Eq. (12), together with *σ*_{n} = 0.002, yields $\widehat{\text{SNR}}$ values of 17, 16.6, and 2.2 dB for X̂_{1}, X̂_{2}, and X̂_{3}, respectively. One reason for this discrepancy is that the control measurements are acquired with only a single sub-image present and the remaining sub-images physically blocked. This results in low-illumination conditions coupled with the non-linear effects of the imaging sensor and associated electronics. Numerically summing the control measurements yields an estimated composite image $\widehat{\text{M}}={\sum}_{j}{\widehat{\text{X}}}_{j}^{\langle 00\rangle}$ that has lower pixel intensity values than the corresponding optically summed measurement. Moreover, the simulation uses an estimate of the forward channel matrix, which may differ from the actual experimental forward channel response, further contributing to the discrepancy between the image fidelity predicted by the simulation and the observed experimental performance.

#### 4.3. Discussion

There are practical limitations to the number of sub-fields of view that can be collected with this architecture. For this LC based system, a primary consideration is the light throughput of the optical multiplexer. An input polarizer causes a factor of two decrease in the light collection ability of this system while the shutter open state transmission losses are squared. To mitigate these shutter transmission losses, the liquid crystal orientation should be aligned with the appropriate look direction to provide maximum transmission when in the open state and minimum transmission when in the closed state of the respective sub-field of view. This is a primary reason the leakage coefficients are relatively large for this experimental device. The LC shutters used are optimized to block normally incident light, however in this prototype, the LC shutters are rotated. That is, the shutter for the *j*^{th} beamsplitter (mirror) is collocated at the *j* + 1 position. As the number of multiplexed sub-images contained in a composite image measurement increases, the dynamic range and quantization resolution of the imaging sensor and associated electronics becomes increasingly important. Since the dynamic range of each sub-image occupies a proportionately smaller region in the sensor dynamic range, quantization error further limits measurement fidelity and consequently reconstructed image fidelity.

Although some non-linear effects are unavoidable, it should be emphasized that the camera must operate in a linear regime for the valid shutter state combinations. Certain camera features such as automatic gain control and automatic exposure should be disabled as they can have an adverse effect on the post-processing procedure. Another feature that should be disabled is gamma-correction as this may introduce additional non-linearity to the measurement. In this experiment, another potential source of non-linearity is the analog frame grabber used to capture and digitize the camera analog video output.

While **H** is dependent on the physical properties of the optical multiplexer, many of these parameters can be difficult to obtain. Thus, there is some uncertainty identifying the actual system **H**. In the optimization based approach used to estimate **H**, it should be noted that solving Eq. (11) with three measurements and six free variables is an underdetermined problem. This mismatch between the actual **H** and
${\mathbf{\text{H}}}_{\text{MSE}}^{*}$ is evident in the linearly reconstructed sub-images shown in Fig. 11. A faint outline of the numerical marker of X_{2} and line patterns from X_{3} are visible in Fig. 11(a). In Fig. 11(b), the line patterns from X_{3} are visible. Lastly, in Fig. 11(c) artifacts from X_{1} and X_{2} can be noticed. Despite this mismatch, the sub-image reconstructions are readily identifiable. Further, it was assumed that **H** is independent of pixel position, however in an actual system implementation **H** can be spatially dependent. Careful characterization of the optical multiplexer may improve estimation of **H**. Another practical consideration is that the physical properties of the optical multiplexer may vary over time, e.g. due to temperature changes, thus affecting **H** and therefore reconstruction performance. A smaller mismatch between the actual and estimated forward models will result in fewer artifacts between the reconstructed sub-images.

The prototype system described here reconstructs a wide field of view of a *static* scene. When there is motion, the measurements should be “locally static,” i.e., acquired faster than the motion present in the scene. If this is not the case, then ghosting (blurring) will appear in the reconstructed sub-images. Ultimately, for a dynamic scene, the acquisition time will eventually be limited by the switching speed of the thin-film shutter and the scene illumination levels.

## 5. Conclusion

In this paper, a computational imaging system for providing a high resolution wide field of view image of a static scene using narrow field of view imaging optics has been presented and demonstrated. This compact thin-film shuttered optical multiplexer successfully adds sub-image diversity to the measurements and extends the field of view by applying a well known linear image reconstruction technique to disambiguate the composite images. This architecture is mechanically robust as the optical elements are fixed and pointing is not required leading to reduced system size and weight. Practical issues, however, such as transmission loss would eventually limit the total number of sub-images that can be combined. While the number of composite image measurements is equal to the number of sub-fields of view here, there is ongoing interest in compressive sensing approaches that exploit object priors such as sparsity to reduce the number of required measurements.

## 6. Acknowledgments

The authors gratefully acknowledge the financial support of the Lockheed Martin Corporation and the Defense Advanced Research Projects Agency (DARPA) under the Large Area Coverage Optical Search-while-Track and Engage (LACOSTE) program.

## Footnotes

Approved for public release. Distribution unlimited. Distribution statement A. “The views, opinions, and/or findings contained in this article/presentation are those of the author/presenter and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense.”

## References and links

**1. **H. H. Barrett and K. J. Myers, *Foundations of Image Science* (Wiley, 2004).

**2. **R. D. Fiete and T. Tantalo, “Comparison of SNR image quality metrics for remote sensing systems,” Opt. Eng. **40**, 574–585 (2001). [CrossRef]

**3. **A. W. Lohmann, “Scaling laws for lens systems,” Appl. Opt. **28**, 4996–4998 (1989). [CrossRef] [PubMed]

**4. **M. D. Stenner, P. Shankar, and M. A. Neifeld, “Wide-field feature-specific imaging,” in *Frontiers in Optics*, (Optical Society of America, 2007), paper FMJ2.

**5. **R. F. Marcia, C. Kim, C. Eldeniz, J. Kim, D. J. Brady, and R. M. Willett, “Superimposed video disambiguation for increased field of view,” Opt. Express **16**, 16352–16363 (2008). [CrossRef] [PubMed]

**6. **S. Uttam, N. A. Goodman, M. A. Neifeld, C. Kim, R. John, J. Kim, and D. Brady, “Optically multiplexed imaging with superposition space tracking,” Opt. Express **17**, 1691–1713 (2009). [CrossRef] [PubMed]

**7. **A. Mahalanobis, M. Neifeld, V. K. Bhagavatula, T. Haberfelde, and D. Brady, “Off-axis sparse aperture imaging using phase optimization techniques for application in wide-area imaging systems,” Appl. Opt. **48**, 5212–5224 (2009). [CrossRef] [PubMed]

**8. **J. A. Decker, Jr. and M. O. Harwit, “Sequential encoding with multislit spectrometers,” Appl. Opt. **7**, 2205–2209 (1968). [CrossRef] [PubMed]

**9. **J. A. Decker, Jr., “Experimental realization of the multiplex advantage with a Hadamard-transform spectrometer,” Appl. Opt. **10**, 510–514 (1971). [CrossRef] [PubMed]

**10. **M. E. Gehm, S. T. McCain, N. P. Pitsianis, D. J. Brady, P. Potuluri, and M. E. Sullivan, “Static two-dimensional aperture coding for multimodal, multiplex spectroscopy,” Appl. Opt. **45**, 2965–2974 (2006). [CrossRef] [PubMed]

**11. **K. M. Douglass, T. Kohlgraf-Owens, J. Ellis, C. Toma, A. Mahalanobis, and A. Dogariu, “Expanded field of view using polarization multiplexing,” in *Computational Optical Sensing and Imaging*, OSA Technical Digest (CD) (Optical Society of America, 2009), paper CWA5.

**12. **D. J. Brady, “Multiplex sensors and the constant radiance theorem,” Opt. Lett. **27**, 16–18 (2002). [CrossRef]

**13. **S. M. Kay, *Fundamentals of Statistical Signal Processing: Estimation Theory* (Prentice-Hall, 1993).