Optica Publishing Group

Metamaterial apertures for coherent computational imaging on the physical layer

Open Access

Abstract

We introduce the concept of a metamaterial aperture, in which an underlying reference mode interacts with a designed metamaterial surface to produce a series of complex field patterns. The resonant frequencies of the metamaterial elements are randomly distributed over a large bandwidth (18–26 GHz), such that the aperture produces a rapidly varying sequence of field patterns as a function of the input frequency. As the frequency of operation is scanned, different subsets of metamaterial elements become active, in turn varying the field patterns at the scene. Scene information can thus be indexed by frequency, with the overall effectiveness of the imaging scheme tied to the diversity of the generated field patterns. As the quality (Q-) factor of the metamaterial resonators increases, the number of distinct field patterns that can be generated increases—improving scene estimation. In this work we provide the foundation for computational imaging with metamaterial apertures based on frequency diversity, and establish that for resonators with physically relevant Q-factors, there are potentially enough distinct measurements of a typical scene within a reasonable bandwidth to achieve diffraction-limited reconstructions of physical scenes.

© 2013 Optical Society of America

1. INTRODUCTION

Computational imaging schemes encompass a broad perspective on the process of collecting and processing scene information. The diffraction limit associated with a given aperture dimension effectively partitions a scene into a finite number of pixels (or voxels), implying that the scene may be represented digitally without loss of fidelity. However, the order and manner in which these voxels are accessed by the imaging apparatus are infinitely flexible, and certain approaches are preferential for certain classes of scenes. Imaging systems based on incoherent light typically either populate the image plane with an array of fixed detectors that acquire information in parallel, or mechanically scan a smaller number of detectors that acquire scene information serially. In both cases, a system of optics—often quite complex—is typically used to transmit the scene to the aperture in as pristine a condition as possible.

The use of coherent light introduces alternative imaging paradigms, such as holography, which can often obviate the need for lenses and other optical components. Moreover, phase coherent measurements can provide depth information about objects, providing a nearly tomographic representation of a scene. A conceptually straightforward approach to image formation using coherent light might be to generate a beam or series of beams holographically that illuminate a diffraction-limited portion of a scene. Making no assumptions about the scene, it becomes evident that to recover the diffraction-limited information present in the scene, the scene must be sampled at a spatial density equivalent to its space-bandwidth product (SBP) M by scanning (or dynamically reconfiguring) the aperture to interrogate the scene with M nonoverlapping pencil beams.

Collecting scene information by scanning a beam over a scene is intuitive. However, it should also be evident that any N = M field patterns (henceforth measurement modes), suitably distinct in terms of spatial overlap, may also suffice to recover the scene; what is of consequence is simply that the measurement modes be distinct in a sense that will be discussed below. In the absence of any information about the scene, presumably any set of N = M measurement modes should be equivalent to any other. However, once information about the scene and system is known, this equivalence breaks down. Depending on the quantity and distribution of information contained in a scene, the manner in which noise is present in the imaging system, and interest in particular classes of objects within the scene, certain mode sets may be superior to others [1]. In fact, since all natural scenes are known to be compressible in some basis, natural scenes can be perfectly recovered with significantly fewer measurement modes than the SBP (N ≪ M) [2,3] provided that a set of optimal measurement modes is utilized. This statement forms the basis for compressive computational imaging. Given the potential for compression and the potential for classification on the physical layer, a fully reconfigurable aperture introduces fundamental capabilities for imaging scenarios, particularly the capability to preferentially select a set of measurement modes best suited to the particular scene.

A phased-array system fulfills the description of a reconfigurable aperture and can, in principle, provide a limitless set of measurement modes. However, the drawbacks that have inhibited the widespread adoption of phased arrays are the significant cost, weight, and power requirements of implementing the sources, phase shifters, and associated amplifier circuitry needed to generate N measurement modes. The reality that it is the N measurement modes that are of consequence, and not necessarily the methods for forming or detecting them, suggests that we may seek new computational imaging modalities for coherent light that can potentially provide similar functionality, but with reduced cost and complexity.

One such alternate path is that of a holographic optic. A hologram is a recorded series of fringes formed by interference of a scattered field with a reference beam. Upon illumination by a reference beam, the hologram produces the desired pattern of light, which can be considered a mode in the context of the above discussion. Because the hologram consists of a pattern of light and dark fringes, computer-generated holograms can be patterned directly without requiring scattering from actual objects, allowing the generation of masks that produce nearly any type of mode under illumination by a plane wave or other reference wave. A sequence of holographic masks, then, can be used to produce a sequence of measurement modes, potentially optimized for scene characteristics and computational imaging approaches. A related approach was used to form a single-pixel terahertz imaging system [4], in which a series of masks that were pixelated with opaque and transparent regions was used to produce the measurement modes.

Holograms have traditionally been recorded using photosensitive films exposed to the interfering fields. In more recent research trends, the use of artificially structured metamaterials to achieve diffractive optical elements has been pursued [5–7]. Artificial materials exhibit two key advantages: first, they offer access to designed electromagnetic properties that may be difficult or impossible to find in naturally occurring media. A second, potentially more revolutionary, advantage is that artificial materials present the potential for dynamic tuning [8], which could enable phased-array-level control over measurement modes in a package with the low cost and simplicity of holographic apertures.

One particular implementation of holographic imaging using metamaterials at microwave frequencies, which serves as the subject of the present analysis, is that of a guided-mode metamaterial imager (henceforth metaimager) that radiates via coupling a guided wave to a set of resonant, metamaterial elements distributed along the propagation path [9]. This configuration is strongly related to the well-known leaky-wave antenna [10–14] conventionally used to produce a directed beam whose angle varies with the input frequency. For conventional beam forming, this frequency diversity is often viewed as a drawback.

As presented, the metaimager is a single-pixel device that performs sequential measurements of a scene using a frequency-based encoding of the measurement modes. Because the number of measurement modes is equal to the number of distinct patterns that can be generated over a given frequency bandwidth, the aperture design strategy is to maximize frequency diversity. Thus, the metaimager is populated with metamaterial elements whose resonance frequencies are distributed randomly over a given bandwidth, each with as large a quality (Q-) factor as possible. The resulting aperture produces a sequence of illumination patterns that vary rapidly as a function of frequency and are well suited for compressive imaging of canonically sparse scenes. The advantage of imaging using frequency diversity is that a series of measurement modes can be obtained using a single frequency scan, avoiding mechanical scanning, multiple detectors, or even reconfigurable elements.

The use of spatially diverse nonoverlapping measurement modes is designed to preferentially enable computational compressive recovery of information from a canonically sparse scene [15–18]. In a recently reported preliminary experiment [9], a microstrip-based metaimager using frequency diversity has been demonstrated to be capable of resolving objects in a 4000-pixel sparse scene using only 101 frequency-diverse measurements (within K-band 18–26 GHz). The extension of the holographic surface concept to a two-dimensional (2D) aperture has also been demonstrated, suggesting that full imaging and ranging through the use of frequency diversity or a combination of frequency-diversity and active tuning techniques is possible [19]. This frequency scanned metamaterial imager provides an important proof of concept that more advanced and novel imaging modalities can be achieved in coherent imaging schemes through the use of complex, designed apertures.

In this work, we introduce the metaimager approach and provide an analysis of image recovery protocols given various imaging scenarios. The paper is structured as follows: in Section 2 we discuss a general forward model that enables us to illustrate the basic features of the coherent measurement process. Section 3 discusses metrics by which the measurement quality of a coherent imager can be judged. In Section 4 we model each of the metaimager’s metamaterial elements as a radiating dipole [20] fed by a guided wave in much the same fashion as a leaky-wave antenna. In this manner we abstract the key feature of the metamaterial imager, which is the point-by-point control over the amplitude and phase coupling between a guided propagating wave and the radiated field. Using this dipole model, we proceed to present simulated radiation patterns of the metaimager. Next, in Section 5, we use the general forward model and figures of merit discussed earlier and apply them to our specific metaimager antenna implementation, and finally, in Sections 6 and 7, we present simulations of 2D and three-dimensional (3D) scene reconstructions.

2. COHERENT IMAGING MODEL

Consider a system in which a transmitting aperture with optical axis along z, fed by a single source, coherently illuminates a scene with a particular field pattern, or mode. The projected field scatters from objects in the scene, and backscatter components are received by the same aperture. Each projected field pattern thus represents a measurement of the scene, and we are interested in using a discrete set of field measurements to estimate the scene. In the following derivation, we use the vector $\bar{r}_S = (x_S, y_S, z_S)$ to indicate points in the scene and $\bar{r}_A = (x_A, y_A, 0)$ to indicate points on the aperture plane, defined as parallel and infinitesimally close to the aperture (see Fig. 1). We also use the vector $\bar{r}_f$ for the location of the source and the detector.


Fig. 1. Single-source aperture operating as a transceiver, shown here as a discretized set of array elements. The array’s radiation pattern, U0, is computed from the contribution of all array elements. Also shown is U0 projected onto a plane.


Suppose $T_A(\bar{r}_A, \bar{r}_f)$ is the impulse response describing the field $U_A(\bar{r}_A)$ at the aperture plane due to the point source $I_f(\bar{r}_f)\,\delta(\bar{r} - \bar{r}_f)$. In this fashion $T_A$ incorporates the physics associated with the aperture's feed and radiation, and we can describe the field distribution across a plane located just above the aperture as

$$U_A(\bar{r}_A) = I_f\, T_A(\bar{r}_A, \bar{r}_f). \tag{1}$$
The radiation pattern U0 at the scene can be computed as a superposition of the fields produced by all elements using the convolution integral
$$U_0(\bar{r}_S) = \int_S U_A(\bar{r}_A)\, \frac{\partial}{\partial z} G(\bar{r}_S, \bar{r}_A)\, d^2\bar{r}_A, \tag{2}$$
where $G(\bar{r}_2, \bar{r}_1) = \exp(-j\beta_0 |\bar{r}_2 - \bar{r}_1|)/|\bar{r}_2 - \bar{r}_1|$ and $(\partial/\partial z)\,G(\bar{r}_2, \bar{r}_1)$ is the scalar fields-to-fields propagator from location $\bar{r}_1$ on one plane to location $\bar{r}_2$ on a parallel plane [21]. These fields, incident from the aperture onto an object-free region, are the solution to the Helmholtz equation $\nabla^2 U_0 + \beta_0^2 U_0 = 0$. If an object is present, however, we can describe it as a spatially varying perturbation
$$\nabla^2 U_T + \left(\beta_0 + \Delta\beta(\bar{r}_S)\right)^2 U_T = 0, \tag{3}$$
where the total field UT is the sum of the incident field U0 and the scattered field US. Expanding Eq. (3) and ignoring second-order terms (first Born approximation) yields
$$\nabla^2 U_T + \beta_0^2 U_T = -2\beta_0\, \Delta\beta(\bar{r}_S)\, U_T. \tag{4}$$
The right-hand side of Eq. (4) describes our scene, which we view as a scattering density due to an index perturbation in the free-space region. Assuming U0 is not strongly perturbed by the objects’ presence, we rewrite the scene as
$$-2\beta_0\, \Delta\beta(\bar{r}_S)\, U_0(\bar{r}_S) = f(\bar{r}_S)\, U_0(\bar{r}_S). \tag{5}$$
To calculate US(r¯A), the field distribution across the aperture due to the backscatter from the scene, we treat the scene as a source and use the source-to-field propagator G(r¯A,r¯S) when solving Eq. (4):
$$U_S(\bar{r}_A) = \int_V G(\bar{r}_A, \bar{r}_S)\, f(\bar{r}_S)\, U_0(\bar{r}_S)\, d^3\bar{r}_S. \tag{6}$$
Knowing the scattered field distribution across the aperture, we can now compute our measurement g defined as the field at the detector:
$$g = \int_S U_S(\bar{r}_A)\, T_A(\bar{r}_f, \bar{r}_A)\, d^2\bar{r}_A. \tag{7}$$
Revisiting Eq. (2), we note that we can rewrite $(\partial/\partial z)\,G(\bar{r}_S, \bar{r}_A)$ as
$$\frac{\partial}{\partial z} G(\bar{r}_S, \bar{r}_A) = G(\bar{r}_S, \bar{r}_A)\, D(\bar{r}_S, \bar{r}_A), \tag{8}$$
where $D(\bar{r}_S, \bar{r}_A) = -z\left(\dfrac{j\beta_0}{R} + \dfrac{1}{R^2}\right)$ and $R = |\bar{r}_S - \bar{r}_A|$. Rearranging Eq. (8) and solving for $G(\bar{r}_S, \bar{r}_A)$ yields
$$G(\bar{r}_S, \bar{r}_A) = \frac{\partial}{\partial z} G(\bar{r}_S, \bar{r}_A)\, D^{-1}(\bar{r}_S, \bar{r}_A). \tag{9}$$
Now, substituting the appropriate terms from Eqs. (1)–(6) and (9) into Eq. (7), normalizing the source to $|I_f| = 1$, using the reciprocity of the transfer functions, and rearranging the order of integration yields
$$g = \int_V f(\bar{r}_S)\, U_0(\bar{r}_S) \int_S T_A(\bar{r}_f, \bar{r}_A)\, \frac{\partial}{\partial z} G(\bar{r}_S, \bar{r}_A)\, D^{-1}(\bar{r}_A, \bar{r}_S)\, d^2\bar{r}_A\, d^3\bar{r}_S. \tag{10}$$
For a narrow field of view (FOV), $z \approx R$; by ignoring the much smaller $1/R^2$ term we can estimate $D(\bar{r}_S, \bar{r}_A)$ as $D(\bar{r}_S, \bar{r}_A) \approx -j\beta_0$. Under this assumption Eq. (10) becomes
$$g = \frac{j}{\beta_0} \int_V f(\bar{r}_S)\, U_0^2(\bar{r}_S)\, d^3\bar{r}_S. \tag{11}$$
We now outline the discretization procedure for a planar scene parallel to the xy plane and note that it can be applied to 3D scenes in a similar fashion. We initiate the discretization by calculating the total scattering density within a pixel of area Δx×Δy:
$$f_{\gamma,\eta} = \int_{-\Delta y/2}^{\Delta y/2} \int_{-\Delta x/2}^{\Delta x/2} f(x_S + \gamma\Delta x,\; y_S + \eta\Delta y)\, dx_S\, dy_S, \tag{12}$$
where γ and η are the pixel’s indices in x and y, respectively. Next we assign fγ,η to the entire pixel using a sampling function σ:
$$f(\bar{r}_S) \approx \tilde{f}(\bar{r}_S) = \sum_\gamma \sum_\eta f_{\gamma,\eta}\, \sigma_{\Delta x \Delta y}(x_S - \gamma\Delta x,\; y_S - \eta\Delta y), \tag{13}$$
where we have assumed the discretized pixels scatter isotropically and σ is the rectangular sampling function.

If we assume the aperture is composed of a finite collection of discrete elements, we can discretize it as well and replace the convolution integrals of Eqs. (2) and (6) with summations. Following discretization of the scene and aperture, the measurement g from Eq. (11) can now also be expressed as a sum:

$$g = \frac{j}{\beta_0} \sum_{\bar{r}_S} U_0^2(\bar{r}_S)\, f(\bar{r}_S). \tag{14}$$
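As a concreteness check, the discretized forward model can be evaluated numerically. In the sketch below, the illumination pattern and the scene are random stand-ins (chosen for illustration only), and the proportionality constant is taken as j/β0, as noted in Section 3:

```python
import numpy as np

rng = np.random.default_rng(0)

beta0 = 2 * np.pi * 22e9 / 3e8   # free-space wavenumber at 22 GHz (illustrative)
M = 32 * 32                      # number of resolution-limited scene pixels

# Stand-ins: a complex illumination pattern U0 sampled at the pixel centers,
# and a sparse scattering density f with ten reflective pixels.
U0 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
f = np.zeros(M)
f[rng.choice(M, size=10, replace=False)] = 1.0

# The measurement is a sum over scene pixels, weighted by the *square* of
# the illuminating field (the same aperture transmits and receives).
g = (1j / beta0) * np.sum(U0**2 * f)
print(abs(g))
```

Only the ten reflective pixels contribute to g; the squared field weighting is what distinguishes this transceiver measurement from a one-way projection.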

Having arrived at Eq. (14), which describes our general coherent imaging forward model, we now turn to discussing figures of merit by which the aperture’s imaging abilities can be judged.

3. MEASUREMENT MATRIX AND FIGURES OF MERIT

Equation (14) can be expressed as

$$g = \begin{bmatrix} h_1 & h_2 & \cdots & h_M \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_M \end{bmatrix}, \tag{15}$$
in which
$$h_m = \left[U_0^{(m)}\right]^2 \tag{16}$$
and we drop the proportionality constant $j/\beta_0$. To conduct N measurements, we must generate N modes $\{U_0^k(\bar{r}_S)\}_{k=1}^{N}$ by sweeping through N field distributions across the aperture $\{U_A^k(\bar{r}_A)\}_{k=1}^{N}$, where we have introduced the index k to specify the kth mode. We can express the kth measurement as
$$g_k = \sum_{\bar{r}_S} \left[U_0^k(\bar{r}_S)\right]^2 f(\bar{r}_S). \tag{17}$$
The complete set of all measurements is compactly expressed in matrix notation as
$$\begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_N \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1M} \\ h_{21} & h_{22} & & \vdots \\ \vdots & & \ddots & \\ h_{N1} & \cdots & & h_{NM} \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_M \end{bmatrix} \tag{18}$$
or
$$g = Hf, \tag{19}$$
where H is an N×M measurement matrix whose kth row corresponds to the rasterized $[U_0^k(\bar{r}_S)]^2$ values across M pixels. When the number of measurements N is equal to the SBP M, and in the absence of noise, the scene can be reconstructed by inverting Eq. (19) to find f. Thus, to achieve the maximal diffraction-limited information associated with a given scene, N = M independent measurements corresponding to the SBP must be made. However, this does not mean that all sets of N measurements are equal, or that any N = M measurements will completely capture the information in the scene. The complication lies in the independence of the measurements; for many aperture mode sets, it is likely that the measurement modes will have varying degrees of overlap. Therefore, it is possible that even a measurement matrix constructed from N = M modes may effectively undersample the scene. The situation can be improved by redesigning the modes, by acquiring more than M modes, or by the use of sparse recovery algorithms that allow image recovery from undersampled data [15–18].

With knowledge of the noise model, we wish to obtain a figure of merit for the suitability of a measurement matrix to estimate a scene in a compressive framework. Classic compressed sensing has used probability theory to show that random matrices obey the restricted isometry property (RIP) with high probability [22]. Obeying RIP guarantees reconstruction accuracy even in the presence of noise [23]. More recently, further work has shown that classes of deterministic matrices are also effective compressed-sensing matrices if they obey the statistical restricted isometry property (StRIP) [24]. However, these deterministic matrices have strict requirements that can rarely be met in practice. For this reason we turn to a more empirical measure of the ability of a matrix to reconstruct sparse signals. Duarte-Carvajalino and Sapiro [25], inspired by the work of Elad [26], proposed a metric suitable for deterministic matrices based on the off-diagonal elements of the Gram matrix

$$G = \tilde{H}^T \tilde{H}, \tag{20}$$
where $\tilde{H}$ is the measurement matrix H with normalized columns (H is actually the “effective” sensing/dictionary matrix, defined as the product of the sensing matrix and the sparsifying dictionary [25]). The matrix reconstruction metric is the average mutual coherence
$$\mu_g = \frac{\sum_{i \neq j} |G_{ij}|^2}{M(M-1)}, \tag{21}$$
which was empirically shown to be proportional to mean-squared-error (MSE) values for reconstructions [25], calculated using the rasterized scene and its approximation $\hat{f}$ according to
$$\mathrm{MSE} = \frac{1}{M} \sum_{i=1}^{M} \left|\hat{f}_i - f_i\right|^2. \tag{22}$$
The remainder of this paper will determine how physical coding elements encode H, and which properties of those elements lead to lower μg.
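The average mutual coherence of Eq. (21) is straightforward to compute from a measurement matrix. A minimal sketch follows; the conjugate transpose is used here, which is the natural choice when H is complex-valued:

```python
import numpy as np

def average_mutual_coherence(H: np.ndarray) -> float:
    """Average mutual coherence (Eq. 21) of a measurement matrix H.

    Columns are normalized, the Gram matrix is formed, and the squared
    off-diagonal entries are averaged.
    """
    Ht = H / np.linalg.norm(H, axis=0)     # normalize each column
    G = Ht.conj().T @ Ht                   # Gram matrix of normalized columns
    M = H.shape[1]
    off = np.abs(G) ** 2 - np.eye(M)       # remove the unit diagonal
    return off.sum() / (M * (M - 1))

rng = np.random.default_rng(1)

# Orthogonal columns give mu_g = 0; random columns do not.
print(average_mutual_coherence(np.eye(8)))
print(average_mutual_coherence(rng.standard_normal((8, 8))))
```

A lower value indicates measurement modes with less mutual overlap, which is the property the Q-factor and source-placement studies below aim to improve.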

4. METAMATERIAL APERTURE

The metaimager (Fig. 2) consists of a parallel-plate waveguide in which the top plate is patterned with complementary metamaterial elements [27]—patterned voids in a conducting sheet forming the Babinet equivalent of their volumetric counterparts [28–31]. Complementary elements were proposed as a means of introducing additional design options in surface and guided-wave devices [32], in which resonant elements can be used for filtering and the modification of other propagation properties [33–36].


Fig. 2. (A) Exploded view of the metaimager, showing two parallel plates above and below a dielectric supporting a cylindrical guided wave (ii). One plate serves as the ground plane (i), while the other is patterned with complementary metamaterial elements (iii). (B) We model each element as a dipole, depicted here in place of the actual composite structure. The angle θ is defined as the angle between the dipole moment and the vector pointing from the dipole to a location at the scene.


A variety of excitations can be applied to the 2D waveguide, including monopole sources that launch guided waves. The solution for the parallel-plate geometry considered here is a guided cylindrical slow wave described by the Hankel function

$$U_{GW}(\bar{r}_A) = J_f(\bar{r}_f)\, H_0^{(1)}\!\left(\beta_{GW} |\bar{r}_A - \bar{r}_f|\right), \tag{23}$$
where $\beta_{GW}$ is the guided wave's propagation constant, $|\beta_{GW}| = \sqrt{\varepsilon_r}\, |\beta_0|$, and $\varepsilon_r$ is the effective dielectric constant of the substrate between the aperture's plates.

This configuration closely resembles a leaky-wave antenna, with each of the complementary metamaterial elements serving as a subwavelength resonant radiator. However, there are two significant differences between this metamaterial aperture and a conventional leaky-wave antenna. The first is the periodicity of the metamaterial elements, which is smaller (in relation to the wavelength) than is ever useful in a conventional leaky-wave antenna. The second difference is the use of resonant elements, as is common—though not required—in metamaterials. The use of resonance not only grants us access to the exotic electromagnetic responses metamaterials are known for, but also allows us to use frequency as a convenient parameter by which to index our measurement modes.

In the current section we do not follow explicitly the forward model as discussed in Section 2, where we assumed the fields on the aperture plane are known exactly. The details of the waveguide feed and radiation mechanism of the metamaterial elements—including all near- and far-field interactions between the elements—are well beyond the scope of this paper. Instead, here we calculate the unperturbed field within the parallel-plate guide and use it as the local driving field exciting each of the complementary metamaterial elements.

The details of the actual complementary metamaterial elements are unimportant to the present discussion; as was stated in the introduction, we model each complementary metamaterial element as a radiating dipole [20,37]. For convenience we assume these elements are resonant over the K-band (18–26 GHz) of frequencies and that the metamaterial imager operates over the same frequency range.

The dipole moment m¯(r¯A) can be calculated from its polarizability α(r¯A) and the local guided field according to

$$\bar{m}(\bar{r}_A) = \alpha(\bar{r}_A) \cdot U_{GW}(\bar{r}_A). \tag{24}$$
We model the dipole’s polarizability according to the Lorentzian dispersion:
$$\alpha(\bar{r}_A) = \frac{F\omega^2}{\omega^2 - \omega_0(\bar{r}_A)^2 + j\omega\gamma(\bar{r}_A)}. \tag{25}$$
Here $\omega_0(\bar{r}_A)$ is the angular resonance frequency of the dipole at $\bar{r}_A$, $\gamma = \omega_0/(2Q)$, and F is a factor representing the oscillator strength and coupling. We assume a dilute array in which both the coupling between dipoles and the perturbation in the local field due to the dipoles are negligible. Under this assumption the qualitative behavior of the aperture does not depend on F, and we set F = 1 throughout the rest of this paper. The fields radiating from such a dipole have known solutions [38] and are approximated in the far-field region as
$$U_0(\bar{r}_S) = \sum_{\bar{r}_A} \frac{Z_0 \beta_0 \omega\, \bar{m}(\bar{r}_A)}{4\pi R}\, \exp(-j\beta_0 R)\, \sin(\theta), \tag{26}$$
where θ is the angle between $\bar{m}(\bar{r}_A)$ and $\bar{r}_S$ (see Fig. 2) and $Z_0$ is the impedance of free space.

It is now evident why frequency can serve as a parameter by which to index the measurement modes. From Eq. (25) we note that by sweeping ω the polarizability of each dipole changes; in addition, the local field UGW at each dipole can change with ω as well. From Eq. (24) we see that both polarizability and UGW affect the dipole moments, and these in turn can modify the field pattern with which the array illuminates the scene, calculated according to Eq. (26).
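The frequency dependence of the Lorentzian polarizability in Eq. (25) is easy to visualize numerically. The sketch below uses a single illustrative element resonant at 22 GHz with Q = 200 (values chosen to match the K-band range and Q-factors used in the text; F = 1 as above):

```python
import numpy as np

def polarizability(omega, omega0, Q, F=1.0):
    """Lorentzian dipole polarizability of Eq. (25), with gamma = omega0 / (2 Q)."""
    gamma = omega0 / (2 * Q)
    return F * omega**2 / (omega**2 - omega0**2 + 1j * omega * gamma)

# One element resonant at 22 GHz with Q = 200, swept across the K-band.
f = np.linspace(18e9, 26e9, 2001)
omega = 2 * np.pi * f
alpha = polarizability(omega, 2 * np.pi * 22e9, Q=200)

# |alpha| peaks near the resonance frequency; a higher Q narrows the peak,
# so fewer elements are strongly excited at any one drive frequency.
f_peak = f[np.argmax(np.abs(alpha))]
print(f_peak / 1e9)
```

Sweeping the drive frequency therefore selects which randomly tuned elements respond strongly, which is precisely what makes frequency a usable mode index.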

To visualize how the aperture's modes vary with frequency, we simulate a center-fed, 50×50 element array with resonance frequencies randomly chosen from the K-band spectrum. The aperture is 20 cm in size with relative dielectric constant εr = 2.2. We calculate and plot in Fig. 3 the dipoles' magnitude and phase distributions across the aperture at 20 GHz (A,B) and 26 GHz (D,E), assuming elements with Q-factors of 200. For simplicity, in the current discussion all dipoles are oriented along the x axis; since the center probe generates fields with circular phase fronts, the dipole moments are weaker near the x axis, where the x component of UGW vanishes (A,D). Investigation of the dipoles' phase distribution (B,E) reveals the guided wave's circular phase fronts perturbed by the presence of strongly resonating dipoles. From these distributions of dipole moments we calculate and display the fields illuminating a 1 m² planar area spanning a FOV of ±21° at z = 1.3 m (C,F). The difference in radiation patterns highlights how unique sets of measurement modes can be accessed using only frequency diversity.
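This qualitative behavior can be reproduced with a scaled-down numerical sketch. Everything below is an illustrative stand-in: the array is 20×20 rather than 50×50, the scene grid is coarse, constant prefactors and the sin θ obliquity factor of Eq. (26) are dropped, and the large-argument asymptote of $H_0^{(1)}$ stands in for the full Hankel function:

```python
import numpy as np

rng = np.random.default_rng(2)

c = 3e8
eps_r = 2.2          # substrate dielectric constant from the text
Q = 200              # element Q-factor
n_elem = 20          # 20 x 20 array, scaled down from the 50 x 50 in the text
aperture = 0.20      # 20 cm aperture

# Element positions on the aperture plane, fed from the center.
xa = np.linspace(-aperture / 2, aperture / 2, n_elem)
XA, YA = np.meshgrid(xa, xa)
r_feed = np.hypot(XA, YA)        # distance of each element from the feed

# Random K-band resonance frequencies, one per element.
f0 = rng.uniform(18e9, 26e9, size=(n_elem, n_elem))

def field_pattern(f):
    """Scene-plane field at drive frequency f, following Eqs. (23)-(26)."""
    omega = 2 * np.pi * f
    beta0 = omega / c
    beta_gw = np.sqrt(eps_r) * beta0
    # Guided cylindrical wave: large-argument asymptote of H0^(1).
    x = beta_gw * r_feed
    u_gw = np.sqrt(2 / (np.pi * x)) * np.exp(1j * (x - np.pi / 4))
    # Lorentzian polarizability (Eq. 25) and dipole moments (Eq. 24).
    omega0 = 2 * np.pi * f0
    alpha = omega**2 / (omega**2 - omega0**2 + 1j * omega * omega0 / (2 * Q))
    m = alpha * u_gw
    # Far-field superposition (Eq. 26, prefactors dropped) on a coarse
    # grid spanning a 1 m^2 area at z = 1.3 m.
    xs = np.linspace(-0.5, 0.5, 16)
    XS, YS = np.meshgrid(xs, xs)
    U0 = np.zeros_like(XS, dtype=complex)
    for i in range(n_elem):
        for j in range(n_elem):
            R = np.sqrt((XS - XA[i, j])**2 + (YS - YA[i, j])**2 + 1.3**2)
            U0 += m[i, j] * np.exp(-1j * beta0 * R) / R
    return U0

# Patterns at well-separated frequencies excite different subsets of
# resonators and are only weakly correlated.
a = field_pattern(20e9).ravel()
b = field_pattern(26e9).ravel()
corr = abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))
print(corr)
```

The normalized correlation between the 20 and 26 GHz patterns drops well below unity, which is the frequency diversity the measurement scheme relies on.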


Fig. 3. Magnitude (A and D) and phase (B and E) distributions of a center-fed 50×50 dipole array operating at 20 GHz (top row) and 26 GHz (bottom row). All dipoles are oriented along the x axis. The corresponding radiation patterns are shown as well, both as a 3D beam pattern and projected onto a plane at z = 1.3 m (C and F).


5. IMPROVING THE METAIMAGER μg

Within a frequency-diversity imaging scheme, it is advantageous to utilize array elements with the highest achievable Q-factor in order to decrease the correlation between the measurement modes. To illustrate this we simulate a center-fed array 48 cm in size with 120×120 elements, and sweep the elements' Q-factor from 50 to 1000. For each value of the Q-factor we compute the array's measurement matrix and calculate μg. To compute each measurement matrix, we sweep through N=600 frequency steps across the K-band spectrum, calculating at each frequency step the kth field pattern U0k illuminating the same 1 m² planar area described in Section 4. Since our aperture's angular-resolution limit is approximated as λmin/A ≈ 1.3° (where A is the aperture size), we calculate U0k across 32×32 resolution-limited pixels. We then calculate μg of the canonical measurement matrix H as well as of a wavelets-transformed measurement matrix HW, because natural scenes are often more compressible in the wavelets basis [39,40] (see Section 6 for further details). A plot of μg versus the Q-factor is shown in Fig. 4A, where we observe that an increase in the Q-factor causes μg to decrease, signifying less correlation between measurements.


Fig. 4. Average mutual coherence μg of the canonical (HC) and wavelets-based (HW) measurement matrix as a function of (A) array elements’ Q-factors and (B) number of alternating sources for a constant Q-factor of 200. The location of the six alternating sources is shown in the inset.


Since arbitrarily large Q-factors are not realistically achievable, we explore another method to improve H: alternating the location of the source. We still excite the array using one source at each measurement, and keep the total number of measurements unchanged, but we switch between various source locations. Figure 4B compares μg when the same array is excited by an increasing number of sources (evenly distributed around the aperture’s center as shown in the figure’s inset) and the Q-factor is set to a realistically attainable value of 200. It is apparent that we can still increase the orthogonality of our measurements even when limited by the elements’ Q-factors by alternating the source location: since each location generates a different guided wave UGW across the aperture, moving the source changes the distribution of dipoles across the aperture and modifies the field pattern with which the aperture illuminates the scene.

6. IMAGING OF A 2D SCENE

To investigate the imaging capabilities of the metamaterial aperture, we compare simulated reconstructions using all aperture configurations discussed in Section 5. The simulated scene is a gray scale image of a person holding reflective construction tools (Fig. 5A), downsampled to 32×32 pixels (Fig. 5B). In the present discussion we are interested in microwave imaging. At these frequencies we expect the metallic tools to reflect far more strongly than the person, and we edit the scene’s colors accordingly (here white pixels correspond to highly reflective surfaces). While in practice speckle will corrupt the reconstruction since different parts of the scene reflect from different depths, here we simplify the reconstruction associated with such a scenario and instead assume each pixel reflects from a single point in its center, and that all pixels lie on the z=1.3m plane.


Fig. 5. Image of the author holding reflective construction tools (A) is discretized to resolution-limited pixels (B). The discretized scene is then illuminated by a center-fed aperture with element Q-factor of 750. Reconstruction by matrix inversion is shown in (C) and (D) for no noise and 10 dB SNR, respectively, using 600 measurements. Improved results are obtained when the scene’s sparsity is used as a prior in a compressive-sensing algorithm in the canonical basis (E) and the Haar wavelets basis (F). When high Q-factors are unattainable, we can switch between various source locations. Compressive-sensing reconstructions in the canonical and wavelets basis are shown in (G) and (H), respectively, for an aperture with a Q-factor of 200 fed from six source locations.


We simulate N=600 measurements according to g = Hf + n, but first normalize the $\ell_2$ norm of the rasterized scene according to

$$\sum_i |f_i|^2 = 1, \tag{27}$$
and similarly normalize each row of H. The noise n is introduced as a Gaussian random variable with mean zero and variance
$$\sigma_n^2 = \sigma_g^2 \big/ 10^{\mathrm{SNR}/10}, \tag{28}$$
where $\sigma_g^2 = (1/N)\sum_{i=1}^{N} |g_i|^2$ is the average signal energy, and the SNR is specified in decibels. As discussed in Section 3, when the measurements are noiseless it is possible to estimate the scene using a simple matrix inversion of Eq. (19); such a reconstruction is shown in Fig. 5C for the case of Q=750. Reflective features from the original scene are recognizable in this approximation, but errors are present due to the fact that N<M and because the N measurements are not orthogonal. The approximation worsens when noise is introduced, as shown in Fig. 5D for an SNR of 10 dB. Here we turn to sparsity as a constraint and pose the reconstruction problem as
$$\hat{f} = \operatorname*{arg\,min}_{f} \left( \|g - Hf\|_2^2 + \Gamma \|f\|_1 \right), \tag{29}$$
where $\operatorname{arg\,min}_f$ denotes the value of f that minimizes the expression, and Γ is a scalar weighting factor. The first term in the arg min is the $\ell_2$ norm corresponding to the conventional least-squares minimization, while the second term is the $\ell_1$ norm representing the scene's sparsity [23,41,42]. Figure 5E shows reconstruction results from the noisy measurements when Eq. (29) was solved using the TwIST algorithm [43]. We can clearly see significant improvements in the scene approximation. In addition, since natural scenes such as ours are compressible in the wavelets basis [39], we calculate the measurement matrix for scenes in the wavelets basis according to $H_W = H\Psi_W$ (where $\Psi_W$ is the Haar wavelets-transform matrix [44]), approximate $\hat{f}_W$ using
$$\hat{f}_W = \operatorname*{arg\,min}_{f_W} \left( \|g - H\Psi_W f_W\|_2^2 + \Gamma \|f_W\|_1 \right), \tag{30}$$
and then transform the approximation back to the canonical basis using $\hat{f} = \Psi_W \hat{f}_W$. The results of this wavelets-basis reconstruction are shown in Fig. 5F, where further improvements are observed. As was explained in Section 5, we can also improve reconstructions by switching the location of the aperture's source. Figures 5G and 5H depict the compressive-sensing noisy reconstructions in the canonical and wavelets basis, respectively, for an aperture with a Q-factor of 200 fed by six alternating sources. In summary, we calculate and plot the MSEs of all reconstructions as a function of Q-factor and the number of alternating sources (for a constant Q of 200) in Fig. 6. We observe that the wavelets-basis reconstructions outperform both the canonical compressive-sensing reconstructions and the matrix-inversion reconstructions. Also, in agreement with the average mutual-coherence trends discussed in Section 5, reconstruction performance improves with Q-factor and the number of sources.
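The sparsity-regularized reconstruction of Eq. (29) can be reproduced in miniature. Since the TwIST code itself is not reproduced here, a basic iterative soft-thresholding (ISTA) loop stands in for it, and the measurement matrix, sparse scene, and 10 dB noise level of Eq. (28) are random, scaled-down stand-ins:

```python
import numpy as np

rng = np.random.default_rng(3)

M, N, k = 64, 48, 4      # scene pixels, measurements, nonzero scene elements

# Random stand-ins for the measurement matrix and a canonically sparse scene.
H = rng.standard_normal((N, M)) / np.sqrt(N)
f = np.zeros(M)
f[rng.choice(M, size=k, replace=False)] = 1.0
f /= np.linalg.norm(f)                    # Eq. (27) normalization

# Additive Gaussian noise at 10 dB SNR, per Eq. (28).
g_clean = H @ f
sigma2 = np.mean(np.abs(g_clean)**2) / 10**(10.0 / 10)
g = g_clean + np.sqrt(sigma2) * rng.standard_normal(N)

# ISTA: a gradient step on the least-squares term, then soft-thresholding
# for the l1 term of Eq. (29).
Gamma = 0.01                               # sparsity weight (illustrative)
step = 1.0 / np.linalg.norm(H, 2)**2       # 1 / Lipschitz constant of the gradient
f_hat = np.zeros(M)
for _ in range(500):
    z = f_hat + step * H.T @ (g - H @ f_hat)
    f_hat = np.sign(z) * np.maximum(np.abs(z) - step * Gamma, 0.0)

print(np.linalg.norm(f_hat - f))
```

Even with fewer measurements than unknowns (N < M) and noisy data, the sparse estimate lands far closer to the true scene than a naive least-squares fit would, which is the behavior exploited throughout this section.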


Fig. 6. MSE for a 2D scene reconstruction as a function of (A) array elements’ Q-factor and (B) number of alternating sources.


7. IMAGING OF A 3D SCENE

In a similar fashion to the 2D scene reconstruction described above, we can demonstrate the aperture's ability to image 3D scenes. For the purposes of this demonstration, our 3D target is represented in the standard tessellation language (STL) format, which describes the surface geometry of an object as a collection of triangular facets. We consider only facets facing the aperture to be reflective, and define the reflectivity of a facet at $\bar{r}_S$, when it is illuminated from the origin, to be

$f(\bar{r}_S) = \left| A(\bar{r}_S)\, \hat{n}_S(\bar{r}_S) \cdot \bar{r}_S \right|,$
where $\hat{n}_S$ is the unit vector normal to the facet and $A$ is the facet's area.
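As a sketch, this facet reflectivity can be evaluated directly from a facet's three vertices. `facet_reflectivity` is a hypothetical helper of our own; the choice of the facet centroid as $\bar{r}_S$ and the example triangle are our assumptions, not part of the paper.

```python
import numpy as np

def facet_reflectivity(v0, v1, v2):
    """f = |A * n_hat . r_S| for one triangular STL facet, with r_S taken
    as the facet centroid (illumination from the origin). Hypothetical helper;
    centroid choice for r_S is an assumption."""
    cross = np.cross(v1 - v0, v2 - v0)
    area = 0.5 * np.linalg.norm(cross)        # facet area A
    n_hat = cross / np.linalg.norm(cross)     # unit normal n_hat
    r_s = (v0 + v1 + v2) / 3.0                # facet location r_S (centroid)
    return abs(area * np.dot(n_hat, r_s))

# Right triangle of area 0.5 lying in the plane z = 1
tri = [np.array(v, float) for v in [(0, 0, 1), (1, 0, 1), (0, 1, 1)]]
f_val = facet_reflectivity(*tri)              # 0.5 * |(0,0,1).(1/3,1/3,1)| = 0.5
```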

Next, we discretize the volume of interest into $M$ cells in angle and range. The range resolution of an aperture is $c/(2\,\mathrm{BW})$, where $c$ is the speed of light and BW is the operating bandwidth [45]; the angular resolution of a given aperture was discussed in Section 5. We define the scattering from each cell to be the sum of the reflectivities of all facets it contains and, as was done for the 2D scene, assume each cell reflects from a single point at its center.
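For instance, over the paper's 18–26 GHz band the range resolution $c/(2\,\mathrm{BW})$ works out to just under 2 cm; the numbers below are our own back-of-the-envelope check, not values from the paper.

```python
# Range resolution c/(2*BW) for an 18-26 GHz sweep, and the number of
# range cells a 0.5 m deep volume would support at that resolution.
c = 299792458.0                 # speed of light (m/s)
bw = 26e9 - 18e9                # operating bandwidth BW = 8 GHz
delta_r = c / (2 * bw)          # range resolution, about 1.87 cm
n_range = int(0.5 / delta_r)    # about 26 cells across a 0.5 m depth
```

Section 7's grid uses only 16 range cells across that depth, consistent with the text's note that the discretization is coarser than the full SBP.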

As before, each row in $H$ corresponds to the rasterized fields radiating from the aperture to the center of each cell. However, calculating the fields at every cell in the 3D volume can be computationally taxing. Instead, we calculate the fields across the desired FOV at a constant distance $R_1$ from the aperture's center, and assume that, across the same angles, the fields at a second distance $R_2$ can be computed from

$U_0(R_2) = \frac{R_1}{R_2}\, U_0(R_1)\, \exp\left( j\beta_0 |R_2 - R_1| \right).$
With the measurement matrix known, we again define our measurement as $g = Hf + n$ using the discretized scene, and reconstruct the scene using Eq. (29). We note that the scene discretization alleviates potentially adverse effects due to speckle, which remains an active topic of research.
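The range-extrapolation relation above can be sketched in a few lines of NumPy. The function name and the 22 GHz test frequency are our assumptions; the amplitude and phase scaling follow the expression as written.

```python
import numpy as np

def extrapolate_field(u_r1, r1, r2, beta0):
    """Scale fields sampled on a sphere of radius r1 to radius r2 at the
    same angles: amplitude falls off as r1/r2 and the phase advances by
    beta0*|r2 - r1| (spherical-spreading approximation)."""
    return (r1 / r2) * np.asarray(u_r1) * np.exp(1j * beta0 * abs(r2 - r1))

beta0 = 2 * np.pi * 22e9 / 299792458.0    # free-space wavenumber at 22 GHz
u_r2 = extrapolate_field(np.ones(4, dtype=complex), 2.0, 4.0, beta0)
```

Doubling the range halves the field amplitude, so only one full field computation (at $R_1$) is needed per frequency.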

We illustrate this setup in Fig. 7A, where an STL representation of a human is illuminated by an aperture 3.25 m away. The triangular facets describing the shape's surface are shown in Fig. 7B. Figure 7C depicts the discretized target, with the 2 m × 2 m × 0.5 m volume surrounding it divided into 32 × 32 × 16 cells, fewer than the full SBP but more manageable computationally. Here higher scattering densities are marked with darker shades, and nonreflecting cells are transparent. We compute the measurement matrix across all cells assuming an aperture 48 cm per side with 120 × 120 elements having a Q-factor of 200, and we use six source locations, sweeping each through 1000 frequency steps. Although the total number of measurements is just over one-third the number of discretized cells to be reconstructed, the compressive-sensing algorithm, using our sparsity prior, successfully reconstructs the scene. The volumetric scattering-density reconstructions for noiseless measurements and in the presence of noise with 5 dB SNR are shown in Figs. 7D and 7E, respectively, where for visualization purposes we have thresholded the displayed voxels. In both reconstructions the person is clearly identifiable.


Fig. 7. (A and B) Fields radiating from the aperture illuminate a 3D STL scene. (C) Discretized volumetric scattering density. (D) Reconstruction with no noise. (E) Reconstruction with 5 dB SNR.


8. CONCLUSIONS

We have introduced a computational imaging framework appropriate to a variety of single-pixel coherent imagers, and applied it to a specific aperture implementation we termed the metaimager: a 2D guided-wave aperture radiating via an array of complementary metamaterial elements. We modeled each element as a radiating dipole and showed how the elements' dispersion allows the metaimager to control its field patterns through frequency diversity. Furthermore, by randomly distributing the resonance frequencies of the elements, we demonstrated that the metaimager can illuminate a scene with random field patterns well suited for compressive sensing. We discussed how the metaimager's imaging capabilities improve as the elements' Q-factors increase, as well as by switching between various source locations. Lastly, we presented simulations of 2D and 3D scene reconstructions demonstrating the imaging capabilities of the proposed metaimager. Extending this work to model dipole interactions, instead of our simplified assumption of noninteracting dipoles, will likely improve the accuracy of the predicted aperture radiation pattern. In addition, each pixel/voxel in our reconstructions was assumed to reflect like a single point at its center; speckle-related issues were not addressed here and can be tackled in future extensions of the work.

ACKNOWLEDGMENTS

This work was supported by the Air Force Office of Scientific Research (AFOSR) (Grant No. FA9550-09-1-0562) and the Department of Homeland Security (DHS) (Grant No. HSHQDC-XX-12-C-00049). The authors also thank Professor Guillermo Sapiro for providing comments on the manuscript regarding the coherence metric.

REFERENCES

1. D. J. Brady, Optical Imaging and Spectroscopy (Wiley-OSA, 2009).

2. D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express 17, 13040–13049 (2009).

3. C. F. Cull, D. A. Wikner, J. N. Mait, M. Mattheiss, and D. J. Brady, “Millimeter-wave compressive holography,” Appl. Opt. 49, E67–E82 (2010).

4. W. L. Chan, K. Charan, D. Takhar, K. F. Kelly, R. G. Baraniuk, and D. M. Mittleman, “A single-pixel terahertz imaging system based on compressed sensing,” Appl. Phys. Lett. 93, 121105 (2008).

5. W. Freese, T. Kampfe, E. B. Kley, and A. Tunnermann, “Design of binary subwavelength multiphase level computer generated holograms,” Opt. Lett. 35, 676–678 (2010).

6. U. Levy, H. C. Kim, C. H. Tsai, and Y. Fainman, “Near-infrared demonstration of computer-generated holograms implemented by using subwavelength gratings with space-variant orientation,” Opt. Lett. 30, 2089–2091 (2005).

7. S. Larouche, Y. J. Tsai, T. Tyler, N. M. Jokerst, and D. R. Smith, “Infrared metamaterial phase holograms,” Nat. Mater. 11, 450–454 (2012).

8. H. T. Chen, W. J. Padilla, J. M. O. Zide, A. C. Gossard, A. J. Taylor, and R. D. Averitt, “Active terahertz metamaterial devices,” Nature 444, 597–600 (2006).

9. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science 339, 310–313 (2013).

10. W. Menzel, “New traveling-wave antenna in microstrip,” AEU Int. J. Electron. Commun. 33, 137–140 (1979).

11. A. Oliner and K. S. Lee, “Microstrip leaky wave strip antennas,” in IEEE International Antennas and Propagation Symposium Digest (Philadelphia, Pennsylvania, 1986), p. 443.

12. D. R. Jackson, C. Caloz, and T. Itoh, “Leaky-wave antennas,” Proc. IEEE 100, 2194–2206 (2012).

13. A. Sutinjo, M. Okoniewski, and R. H. Johnston, “Radiation from fast and slow traveling waves,” IEEE Antennas Propag. Mag. 50(4), 175–181 (2008).

14. C. A. Balanis, Modern Antenna Handbook (Wiley, 2008), Chap. 7.

15. E. J. Candès, “Compressive sampling,” in Proceedings of the International Congress of Mathematicians (Madrid, August 22–30, 2006).

16. D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

17. R. G. Baraniuk, “Compressive sensing [lecture notes],” IEEE Signal Process. Mag. 24(4), 118–121 (2007).

18. J. Romberg, “Imaging via compressive sampling,” IEEE Signal Process. Mag. 25(2), 14–20 (2008).

19. B. H. Fong, J. S. Colburn, J. J. Ottusch, J. L. Visher, and D. F. Sievenpiper, “Scalar and tensor holographic artificial impedance surfaces,” IEEE Trans. Antennas Propag. 58, 3212–3221 (2010).

20. C. Rockstuhl, C. Menzel, S. Muhlig, J. Petschulat, C. Helgert, C. Etrich, A. Chipouline, T. Pertsch, and F. Lederer, “Scattering properties of meta-atoms,” Phys. Rev. B 83, 245119 (2011).

21. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995).

22. E. J. Candès and T. Tao, “Decoding by linear programming,” IEEE Trans. Inf. Theory 51, 4203–4215 (2005).

23. E. J. Candès and M. B. Wakin, “An introduction to compressive sampling,” IEEE Signal Process. Mag. 25(2), 21–30 (2008).

24. R. Calderbank, S. Howard, and S. Jafarpour, “Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property,” IEEE J. Sel. Top. Signal Process. 4, 358–374 (2010).

25. J. M. Duarte-Carvajalino and G. Sapiro, “Learning to sense sparse signals: simultaneous sensing matrix and sparsifying dictionary optimization,” IEEE Trans. Image Process. 18, 1395–1408 (2009).

26. M. Elad, “Optimized projections for compressed sensing,” IEEE Trans. Signal Process. 55, 5695–5702 (2007).

27. N. Landy, J. Hunt, and D. R. Smith, “Homogenization of guided wave metamaterials,” Photon. Nanostruct. Fundam. Appl. (to be published).

28. F. Falcone, T. Lopetegi, M. A. G. Laso, J. D. Baena, J. Bonache, M. Beruete, R. Marques, F. Martin, and M. Sorolla, “Babinet principle applied to the design of metasurfaces and metamaterials,” Phys. Rev. Lett. 93, 197401 (2004).

29. F. Martin, J. Bonache, F. Falcone, M. Sorolla, and R. Marques, “Split ring resonator-based left-handed coplanar waveguide,” Appl. Phys. Lett. 83, 4652–4654 (2003).

30. J. Martel, R. Marques, F. Falcone, J. D. Baena, F. Medina, F. Martin, and M. Sorolla, “A new LC series element for compact bandpass filter design,” IEEE Microw. Wirel. Compon. Lett. 14, 210–212 (2004).

31. E. Jarauta, M. A. G. Laso, T. Lopetegi, F. Falcone, M. Beruete, J. D. Baena, A. Marcotegui, J. Bonache, J. Garcia, R. Marques, and F. Martin, “Novel microstrip backward coupler with metamaterial cells for fully planar fabrication techniques,” Microw. Opt. Technol. Lett. 48, 1205–1209 (2006).

32. K. Afrooz, A. Abdipour, and F. Martin, “Broadband bandpass filter using open complementary split ring resonator based on metamaterial unit-cell concept,” Microw. Opt. Technol. Lett. 54, 2832–2835 (2012).

33. R. Liu, Q. Cheng, T. Hand, J. J. Mock, T. J. Cui, S. A. Cummer, and D. R. Smith, “Experimental demonstration of electromagnetic tunneling through an epsilon-near-zero metamaterial at microwave frequencies,” Phys. Rev. Lett. 100, 023903 (2008).

34. Q. Cheng, R. P. Liu, J. J. Mock, T. J. Cui, and D. R. Smith, “Partial focusing by indefinite complementary metamaterials,” Phys. Rev. B 78, 121102 (2008).

35. R. P. Liu, X. M. Yang, J. G. Gollub, J. J. Mock, T. J. Cui, and D. R. Smith, “Gradient index circuit by waveguided metamaterials,” Appl. Phys. Lett. 94, 073506 (2009).

36. Q. Cheng, H. F. Ma, and T. J. Cui, “Broadband planar Luneburg lens based on complementary metamaterials,” Appl. Phys. Lett. 95, 181901 (2009).

37. T. H. Hand, J. Gollub, S. Sajuyigbe, D. R. Smith, and S. A. Cummer, “Characterization of complementary electric field coupled resonant surfaces,” Appl. Phys. Lett. 93, 212504 (2008).

38. C. A. Balanis, Advanced Engineering Electromagnetics (Wiley, 1989).

39. B. E. Usevitch, “A tutorial on modern lossy wavelet image compression: foundations of JPEG 2000,” IEEE Signal Process. Mag. 18(5), 22–35 (2001).

40. R. G. Baraniuk, V. Cevher, M. F. Duarte, and C. Hegde, “Model-based compressive sensing,” IEEE Trans. Inf. Theory 56, 1982–2001 (2010).

41. E. J. Candès, J. K. Romberg, and T. Tao, “Stable signal recovery from incomplete and inaccurate measurements,” Commun. Pure Appl. Math. 59, 1207–1223 (2006).

42. D. L. Donoho, “For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution,” Commun. Pure Appl. Math. 59, 907–934 (2006).

43. J. M. Bioucas-Dias and M. A. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process. 16, 2992–3004 (2007).

44. R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. (Prentice-Hall, 2002).

45. M. A. Richards, Fundamentals of Radar Signal Processing (McGraw-Hill, 2005).



Figures (7)

Fig. 1. Single-source aperture operating as a transceiver, shown here as a discretized set of array elements. The array's radiation pattern, $U_0$, is computed from the contribution of all array elements. Also shown is $U_0$ projected onto a plane.
Fig. 2. (A) Exploded view of the metaimager, showing two parallel plates above and below a dielectric supporting a cylindrical guided wave (ii). One plate serves as the ground plane (i), while the other is patterned with complementary metamaterial elements (iii). (B) We model each element as a dipole, depicted using the functional composite structure. The angle $\theta$ is defined as the angle between the dipole moment and the vector pointing from the dipole to a location at the scene.
Fig. 3. Magnitude (A and D) and phase (B and E) distributions of a center-fed 50 × 50 dipole array operating at 20 GHz (top row) and 26 GHz (bottom row). All dipoles are oriented along the x axis. The corresponding radiation patterns are shown as well, both as a 3D beam pattern and projected onto a plane at z = 1.3 m (C and F).
Fig. 4. Average mutual coherence $\mu_g$ of the canonical ($H_C$) and wavelets-based ($H_W$) measurement matrices as a function of (A) array elements' Q-factors and (B) number of alternating sources for a constant Q-factor of 200. The locations of the six alternating sources are shown in the inset.
Fig. 5. Image of the author holding reflective construction tools (A) is discretized to resolution-limited pixels (B). The discretized scene is then illuminated by a center-fed aperture with an element Q-factor of 750. Reconstruction by matrix inversion is shown in (C) and (D) for no noise and 10 dB SNR, respectively, using 600 measurements. Improved results are obtained when the scene's sparsity is used as a prior in a compressive-sensing algorithm in the canonical basis (E) and the Haar wavelets basis (F). When high Q-factors are unattainable, we can switch between various source locations. Compressive-sensing reconstructions in the canonical and wavelets bases are shown in (G) and (H), respectively, for an aperture with a Q-factor of 200 fed from six source locations.

Equations (32)


$U_A(\bar{r}_A) = I_f\, T_A(\bar{r}_A, \bar{r}_f).$
$U_0(\bar{r}_S) = \int_S U_A(\bar{r}_A)\, \partial_z G(\bar{r}_S, \bar{r}_A)\, d^2\bar{r}_A,$
$\nabla^2 U_T + \left(\beta_0 + \Delta\beta(\bar{r}_S)\right)^2 U_T = 0,$
$\nabla^2 U_T + \beta_0^2 U_T = 2\beta_0\, \Delta\beta(\bar{r}_S)\, U_T.$
$2\beta_0\, \Delta\beta(\bar{r}_S)\, U_0(\bar{r}_S) = f(\bar{r}_S)\, U_0(\bar{r}_S).$
$U_S(\bar{r}_A) = \int_V G(\bar{r}_A, \bar{r}_S)\, f(\bar{r}_S)\, U_0(\bar{r}_S)\, d^3\bar{r}_S.$
$g = \int_S U_S(\bar{r}_A)\, T_A(\bar{r}_f, \bar{r}_A)\, d^2\bar{r}_A.$
$\partial_z G(\bar{r}_S, \bar{r}_A) = G(\bar{r}_S, \bar{r}_A)\, D(\bar{r}_S, \bar{r}_A),$
$G(\bar{r}_S, \bar{r}_A) = \partial_z G(\bar{r}_S, \bar{r}_A)\, D^{-1}(\bar{r}_S, \bar{r}_A).$
$g = \int_V f(\bar{r}_S)\, U_0(\bar{r}_S) \int_Z U_0(\bar{r}_S)\, D^{-1}(\bar{r}_A, \bar{r}_S)\, dz\, d^3\bar{r}_S.$
$g = j\beta_0 \int_V f(\bar{r}_S)\, U_0^2(\bar{r}_S)\, d^3\bar{r}_S.$
$f_{\gamma,\eta} = \int_{-\Delta y/2}^{\Delta y/2} \int_{-\Delta x/2}^{\Delta x/2} f(x_S - \gamma\Delta x,\, y_S - \eta\Delta y)\, dx_S\, dy_S,$
$f(\bar{r}_S) \approx \tilde{f}(\bar{r}_S) = \sum_{\gamma}\sum_{\eta} f_{\gamma,\eta}\, \sigma_{\Delta x \Delta y}(x_S - \gamma\Delta x,\, y_S - \eta\Delta y),$
$g = j\beta_0 \sum_{\bar{r}_S} U_0^2(\bar{r}_S)\, f(\bar{r}_S).$
$g = [\,h_1\; h_2\; \cdots\; h_M\,]\,[\,f_1\; f_2\; \cdots\; f_M\,]^T,$
$h_m = [U_0^{(m)}]^2$
$g_k = \sum_{\bar{r}_S} [U_0^k(\bar{r}_S)]^2\, f(\bar{r}_S).$
$\begin{bmatrix} g_1 \\ g_2 \\ \vdots \\ g_N \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & \cdots & h_{1M} \\ h_{21} & h_{22} & & \vdots \\ \vdots & & \ddots & \\ h_{N1} & \cdots & & h_{NM} \end{bmatrix} \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_M \end{bmatrix}$
$\bar{g} = H\bar{f},$
$G = \tilde{H}^T \tilde{H},$
$\mu_g = \frac{\sum_{i \neq j} |G_{ij}|^2}{M(M-1)},$
$\mathrm{MSE} = \frac{1}{M} \sum_{i=1}^{M} |\hat{f}_i - f_i|^2.$
$U_{\mathrm{GW}}(\bar{r}_A) = J_f(\bar{r}_f)\, H_0^{(1)}(\beta_{\mathrm{GW}} |\bar{r}_A - \bar{r}_f|),$
$\bar{m}(\bar{r}_A) = \alpha(\bar{r}_A) \cdot U_{\mathrm{GW}}(\bar{r}_A).$
$\alpha(\bar{r}_A) = \frac{F\omega^2}{\omega^2 - \omega_0(\bar{r}_A)^2 + j\omega\gamma(\bar{r}_A)}.$
$U_0(\bar{r}_S) \approx \sum_{\bar{r}_A} \frac{Z_0 \beta_0 \omega\, \bar{m}(\bar{r}_A)}{4\pi R} \exp(j\beta_0 R) \sin(\theta),$
$\sum_i |f_i|^2 = 1,$
$\sigma_n^2 = \sigma_g^2 / (10^{\mathrm{SNR}/10}),$
$\hat{f} = \arg\min_f \left( \|g - Hf\|_2 + \Gamma \|f\|_1 \right),$
$\hat{f}_W = \arg\min_{f_W} \left( \|g - H\Psi_W f_W\|_2 + \Gamma \|f_W\|_1 \right),$
$f(\bar{r}_S) = \left| A(\bar{r}_S)\, \hat{n}_S(\bar{r}_S) \cdot \bar{r}_S \right|,$
$U_0(R_2) = \frac{R_1}{R_2}\, U_0(R_1)\, \exp\left( j\beta_0 |R_2 - R_1| \right).$