## Abstract

Dynamic metasurface antennas are planar structures that exhibit remarkable capabilities in controlling electromagnetic wavefronts, advantages that are particularly attractive for microwave imaging. These antennas exhibit strong frequency dispersion and produce rapidly varying radiation patterns. Such behavior presents unique challenges for integration with conventional imaging algorithms. We adapt the range migration algorithm (RMA) for use with dynamic metasurfaces and propose a preprocessing step that ultimately allows for expression of measurements in the spatial frequency domain, from which the fast Fourier transform can efficiently reconstruct the scene. Numerical studies illustrate imaging performance using conventional methods and the adapted RMA, demonstrating that the RMA can reconstruct images with comparable quality in a fraction of the time. The algorithm can be extended to a broad class of complex antennas for application in synthetic aperture radar and MIMO imaging.

© 2016 Optical Society of America

## 1. INTRODUCTION

Microwave imaging has historically found application in security screening [1], Earth and subsurface detection [2], and medical diagnosis [3]. High-resolution images in these systems are typically achieved by using a large frequency bandwidth (to sense objects primarily along the range coordinate) coupled with a very large aperture [4] (to sense objects primarily along the cross-range coordinates). Some applications, such as security screening and through-wall imaging [5,6], use an array imaging scheme where a collection of transmitters and receivers are spatially distributed to cover the observation area. When an enormous area is under examination—as might be the case in many geophysical-related scenarios—synthetic aperture radar (SAR) imaging is employed, in which an antenna in motion collects data over one or more paths [7,8]. In all cases, frequency sampling of the operational spectrum and spatial sampling of the physical aperture should satisfy the Nyquist theorem to retrieve images without aliasing effects [9].

Moreover, sampling at the Nyquist rate ensures that the transmitted and received fields can sample a complete set of Fourier components, such that a discrete Fourier transform can be defined over the aperture field. When combined with knowledge of the free space propagator and a scattering model for objects in the scene, this Fourier transform relationship can be used as the foundation for efficient image reconstruction algorithms. From this information, the fields can be propagated from the transmitters and backpropagated from the receivers over the scene area (or volume), ultimately enabling reconstruction of the scene’s reflectivity distribution.

While the frequency sampling of the backscattered wave can be used to locate objects along the range coordinate, the inherent curvature of the illuminating and scattered wavefronts removes the simple Fourier transform relationship between the frequency and range variables. However, because the frequency and spatial wavenumber along the range direction are connected through the free space dispersion relation, a relatively straightforward resampling of the data—known as Stolt interpolation—restores the relationship, enabling fast Fourier transforms (FFTs) to be applied in image reconstruction. This drastically reduces processing times and storage requirements. This approach of resampling data from the frequency domain to the wavenumber domain forms the basis for the range migration algorithms (RMA) commonly used in SAR imaging [4,10,11]. Because range migration algorithms can take advantage of efficient FFTs, they have been advantageously applied in array imaging systems [12,13] as well as SAR imaging schemes [14].

Sampling the aperture fields at the Nyquist limit provides a definitive path to diffraction-limited image reconstruction; meeting this requirement generally results in cumbersome hardware architecture. Either a multitude of antennas (in the case of a stationary array) or bulky, moving parts (in the case of SAR) are typical solutions to this problem. One means of alleviating this hardware burden is to consider new perspectives developed in the context of computational imaging, where a multitude of imaging architectures can be considered so long as the backscattered fields on the aperture can be reasonably estimated from some number of measurements. Modern computational imaging schemes, such as coded apertures and single-pixel techniques, have been applied to achieve scene estimates with relaxed hardware constraints [15,16]. Here, we simplify the hardware complexities by replacing the antenna array with an electrically large metasurface [17,18] antenna, which can be dynamically controlled to produce a sequence of field profiles [19].

The dynamic metasurface architecture consists of a waveguide that excites an array of subwavelength radiators, each of which can be switched *on* or *off* by an external stimulus. This metasurface can thus generate a variety of spatially distinct radiation patterns as a function of each distinct *on*/*off* pattern or *mask*. At a given frequency, a number of measurements can be made by cycling through multiple masks. Subsequently, computational imaging techniques are employed to recover the scene’s spatial content.

Metasurface antennas are low profile, consume little power, and are easily fabricated [20], making them an enabling technology for applications ranging from satellite communications [21] to security screening [5,22]. Furthermore, dynamically controlled metasurfaces can produce tailored radiation patterns without complicated hardware components, such as phase shifters, amplifiers, or mechanical scanning equipment. As a result, the metasurface antenna architecture has rapidly gained traction in large-scale computational imaging platforms [23]. In [23], a frequency-diverse metasurface antenna was introduced, consisting of a parallel-plate waveguide with resonant irises patterned on one of the conducting plates. The resonance frequency associated with a given metamaterial element, determined solely by geometry, was assigned randomly from the operating bandwidth. Thus, the distribution of radiating irises changed as a function of excitation frequency, resulting in a sequence of diverse radiation patterns during a rapid frequency sweep. In [24,25], electrically large metasurface antennas with nonresonant radiating irises, whose patterns varied with frequency due to a chaotic feed cavity, were demonstrated. More recently, the aforementioned dynamic metasurfaces have been proposed, and their ability to image with diverse radiation patterns, generated with simple control circuitry, has been demonstrated [19]. No matter the physical hardware, if the field on the aperture is known, then backpropagation techniques, such as the RMA, can be applied to find the reflectivity distribution in the scene.

The use of metasurface antennas in imaging systems simplifies the physical hardware and can increase the data acquisition rate. However, metasurface antennas do not generate uniform radiation patterns, nor do they provide information that can be directly transformed in the Fourier basis; they therefore require additional processing to ensure compatibility with Fourier transform approaches. In prior works, the radiation patterns from metasurface antennas were characterized experimentally and used to form a forward model. Reconstruction of images was then accomplished by a straightforward matrix multiplication using, for example, a matched filter approach. Iterative algorithms, such as least-squares solvers, could also be applied, at a significant cost in computational effort and time [26]. Although these reconstruction techniques are versatile enough to utilize arbitrary radiation patterns—such as those formed by passive and dynamic metasurface antennas—they do not scale well as the number of measurements and scene unknowns increases. Without a more sophisticated approach that takes advantage of the symmetry within the propagator, processing times become an impediment for computational imaging architectures.

In this paper, we develop an efficient method to reconstruct images obtained using a dynamic metasurface antenna. Specifically, we show that the RMA can be made compatible with the metasurface architecture, which allows FFTs to be used in the image reconstruction steps. The key modification we propose is to separate the role of the dynamic metasurface from the backpropagation step. In this sense, we use the various tuning masks to estimate the measurements that would be produced by conventional sampling. This modification—detailed in Section 3—is necessary to bridge the spatial measurements (obtained using diverse radiation patterns) with those obtained in arrayed systems (which consider independent antennas at different locations). This work extends the imaging process of [27] to a general class of antennas that can be dynamically controlled by any mechanism.

With the dynamic metasurface, the measured signal must be transformed as if it were the signal measured with an array of independent antennas. This can be accomplished using the antenna's experimental characterization or with a predictive analytic model. The transformation is developed by drawing an analogy between array systems and the dynamic metasurface's constituent elements. Once this signal transformation is performed, the RMA can be applied, as described in Sections 3 and 4, to retrieve the scene reflectivity distribution. This algorithm can rapidly estimate the scene as compared with the matched filter and least-squares approaches taken in [23,28,29]. We consider two styles of imaging. In the first case, the dynamic metasurface is maintained at a fixed position, acting as an electrically large aperture for imaging in two dimensions. In the second case, the dynamic metasurface is translated to obtain information in both cross-range directions, thereby obtaining fully 3D images.

## 2. DYNAMIC METASURFACE ANTENNA

Metasurfaces can be implemented in radiative platforms in the form of electrically large apertures composed of subwavelength resonant elements. One approach is to feed the elements with a guided wave [30] that stimulates each unit cell as it propagates. If the metamaterial elements are resonant, they become extremely sensitive to changes in the local material properties, enabling a wide variety of tuning mechanisms to be introduced. Tuning approaches have included PIN diodes [31], varactors [32], and liquid crystals [21], all of which can be used to individually address each resonator, by either shifting the resonance or damping it entirely. With this control, each element in the array—which behaves as a magnetic dipole with a complex amplitude—can be tuned to modify its radiative characteristics.

The dynamic aperture can be considered a passive device in the sense that no gain components are required for operation. A great simplification over the phased array design is thus achieved, in that no phase shifters or amplifiers are utilized [33]. However, independent control over the magnitude and phase is not achievable with most dynamic metasurfaces because these quantities are jointly determined by the stimulating field and the characteristics of the resonators. Nonetheless, extraordinary control can be achieved over the collective aperture to generate the desired patterns, chiefly through the dense sampling of the guided wave enabled by the subwavelength placement of the elements. For example, in [21], steerable, highly directive beams were generated by modeling the aperture with a discrete dipole approximation in conjunction with holographic approaches. In [31], the metamaterial radiators were turned *on*/*off* to create random, spatially diverse radiation patterns. In the present context, we will develop a method that is applicable to generic radiation patterns from a dynamic metasurface—illustrated in Fig. 1—which is described by the following criteria:

- 1. The dynamic metasurface consists of a 1D waveguide (e.g., rectangular waveguide, SIW, microstrip).
- 2. $N$ metamaterial irises are embedded within the waveguide. Each of these elements acts as a linearly polarized magnetic dipole, excited by the magnetic field associated with the waveguide mode.
- 3. The dipole response can be modified in any way, including the most complete case of individual addressability. Each tuning state is referred to as a *mask*.

For the present analysis, the exact details of the waveguide feed mode and metamaterial elements are unimportant. To obtain a working model of the 1D metasurface antenna, we assume the magnetic field along the center of the waveguide varies as

$$H(y)={H}_{0}{e}^{-j\beta y},$$

where $\beta $ is the propagation constant of the waveguide. We assume that $\beta $ is real and ignore the attenuation of the mode as it propagates down the line. Each cell can be represented as a magnetic polarizability with the Lorentzian form

$$\alpha (\omega )=\frac{F{\omega}^{2}}{{\omega}_{0}^{2}-{\omega}^{2}+j\omega \gamma}.$$

Here, $F$, $\omega $, ${\omega}_{0}$, and $\gamma $ correspond to the amplitude, angular frequency ($\omega =2\pi f$), resonant frequency, and damping factor of each metamaterial element, respectively [34–36]. The magnetic dipole moment induced in each metamaterial element can be found as

$$m({y}_{i})=\alpha ({y}_{i})H({y}_{i}).$$

Because each iris radiates as a magnetic dipole, the radiated field from the aperture can be found by summing the dipole contributions from each metamaterial element, or

$$H(\mathbf{r})=\sum_{i=1}^{N}m({y}_{i})\frac{{e}^{-jk|\mathbf{r}-{\mathbf{r}}_{i}|}}{|\mathbf{r}-{\mathbf{r}}_{i}|},$$

where ${\mathbf{r}}_{i}$ is the position of the $i$th element and $k$ is the free space wavenumber.

If it is assumed that the polarizabilities can be switched between two states by a control voltage, then the set of values $\{\alpha ({y}_{i})\}$ constitutes a mask, where each polarizability may have a value of ${\alpha}_{0}$ or zero. We consider the binary case, though it is possible for the polarizability to have a continuous range of magnitude or phase values depending on the tuning implementation.
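As a concrete illustration of this model, the following sketch computes the dipole moments induced by a random binary mask, combining the traveling-wave feed field with the Lorentzian polarizability described above. All numerical values (element count, spacing, resonance, damping, propagation constant) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Illustrative (not the paper's) parameters for a 1D dynamic metasurface.
N = 32                        # number of metamaterial elements
d = 6.8e-3                    # element spacing [m]
y = np.arange(N) * d          # element positions along the waveguide
f = 20e9                      # operating frequency [Hz]
omega = 2 * np.pi * f
beta = 1.3 * omega / 3e8      # assumed guided-mode propagation constant

# Lorentzian polarizability with assumed resonance/damping values.
F, omega0, gamma = 1.0, 2 * np.pi * 19.5e9, 2 * np.pi * 0.2e9
alpha0 = F * omega**2 / (omega0**2 - omega**2 + 1j * omega * gamma)

# Guided-wave magnetic field sampled at each element: H(y) = H0 e^{-j beta y}.
H = np.exp(-1j * beta * y)

# A binary mask switches each element's polarizability between alpha0 and 0;
# the induced dipole moments follow m(y_i) = alpha(y_i) H(y_i).
rng = np.random.default_rng(0)
mask = rng.integers(0, 2, N)
m = mask * alpha0 * H
```

Elements that are *off* contribute no dipole moment, while *on* elements radiate with a moment whose phase advances with the guided wave.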

Equation (4) or (5)—depending on the approximation—can then be used to compute the field everywhere in the scene. While the approximations used in the above equations are simplistic, they are nevertheless sufficient to model the dynamic aperture for the imaging scenarios presented here, where we are more concerned with the algorithm development. For an actual imaging scenario, the fields from the aperture could be measured experimentally or possibly predicted using more refined models of the aperture. For example, one can include the decay of the mode along the waveguide and the interaction between radiating irises using the discrete dipole approximation [37–39].

## 3. ESTIMATING THE APERTURE FIELD

As the driving frequency or the voltage tuning changes, the radiation pattern from the dynamic aperture illuminating the scene changes. Objects within the scene scatter the incident fields, producing a backscattered field that can be detected at the aperture plane. In a transceiver configuration, the backscattered fields are detected by the same aperture that produced the incident field. However, transceiver architectures generally require more complicated radio design, so here we consider an alternative configuration in which a single, low-gain waveguide probe is used to measure the backscattered field. The low-gain antenna ensures that backscattered radiation is collected from the entire scene. The aperture-to-antenna configuration can be viewed as a single-input multiple-output scheme, as shown in Fig. 2 and discussed below.

When modeling the propagation in such an imaging configuration, we apply the first Born approximation, in which the total field within the scene is approximated by the incident field and multiple scattering is neglected. Mathematically, this may be represented as

If the transmit and receive antennas are more complicated and cannot be modeled as single dipoles, a more complex mathematical description of the signal is required, which involves using Eq. (4):

Note that we no longer have an unambiguous source location for either the transmit or the receive antenna, so the symmetry contained in a propagator such as in Eq. (7) is lost. We therefore can only indicate the fields as coming from either the transmit or receive antenna using the subscripts as shown. Figure 2 represents the analogy between the two scenarios: when we have a collection of independent antennas, we measure the signal by using Eq. (6); while employing the dynamic metasurface, we measure the signal by using Eq. (8). If we were able to rely solely on frequency measurements, as is possible for frequency-diverse imaging systems [23,26], then the set of measurements indexed by frequency would have the form

where the integral is taken over the reconstruction region in the scene. Assuming that the measurements are sampled at a finite number of frequencies and the reflectivity distribution is discretized into a finite number of voxels, Eq. (9) can be written as the matrix equation

$$\mathbf{g}=\mathbf{H}\mathit{\sigma}+\mathbf{n},$$

where $\mathbf{g}$ is a vector of measurements, $\mathbf{H}$ is defined as the sensing matrix, $\mathit{\sigma}$ is the scene reflectivity vector (to be estimated), and $\mathbf{n}$ is an additive noise term included for generality. Within the approximations leading to Eq. (9), the elements of $\mathbf{H}$ correspond to the product of the transmit and receive fields, so that each row is a measurement mode interrogating the scene. In the absence of any additional information regarding the propagator, Eq. (10) can form the basis for a reconstruction process in which the reflectivity of the scene $\mathit{\sigma}$ is estimated using methods such as matched filtering or the Moore–Penrose pseudo-inverse [40].

The direct solution of Eq. (10) represents a general approach for reconstructing a scene using arbitrary field patterns and unconventional antennas. Such an approach has been successfully applied in computational imaging systems [23,26]. Furthermore, the form of Eq. (10) also works well with compressive imaging algorithms that make use of iterative methods to refine the estimation of under-sampled scenes. However, calculation time and storage requirements rapidly increase as the dimensionality of the sensing matrix $\mathbf{H}$ grows. It is therefore desirable to either avoid or simplify the calculation of $\mathbf{H}$ and apply image processing techniques directly to $\mathbf{g}$. Our primary goal here is to develop a preprocessing step in which the measurements provided by a dynamic metasurface can be transformed into a set of equivalent spatial measurements emanating from a collection of effective dipole sources. While this transformation requires knowledge of the radiated fields from the dynamic metasurface, it does not require using $\mathbf{H}$ during image reconstruction and instead allows the use of the RMA. To develop this preprocessing transformation, measurements recorded as a function of tuning mask must be moved from the mask domain to the spatial domain.

From Eq. (4), we can approximate the field over the aperture corresponding to a given mask configuration as

Here we have introduced the index $m$ to indicate each mask, which is a specific pattern of polarizabilities that, for our purposes, take values of either zero or unity. We represent the field incident on each metamaterial element as a simple traveling wave in the waveguide, with propagation constant $\beta $. We assume that the measured signal at the aperture plane can be expanded in terms of the fields associated with all masks as

If we can assume or determine a set of aperture modes that exhibit some degree of orthonormality and completeness, then we may write

Then, using Eq. (14), we find

If we now consider ${\mathrm{\Phi}}_{m}({y}_{i};f)$ as a matrix $\mathbf{\Phi}$ in which each row corresponds to the mask function along the aperture coordinate, then we can write Eq. (16) as

where ${\mathbf{\Phi}}^{+}$ denotes the pseudo-inverse of $\mathbf{\Phi}$, consistent with Eq. (14). Equation (17) shows that, from a collection of measurements $\mathbf{g}$ taken with a dynamic metasurface, we can approximately estimate the field at the aperture plane at an equally spaced set of points that can be considered effective dipole sources, collected in $\mathbf{S}$. The free space propagator in Eq. (7) can then be used for each of these sources, and the estimation of the scene reflectivity can proceed using the RMA, as explained in the next section.
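The preprocessing step can be sketched numerically: build a mask matrix whose rows are field patterns along the aperture, synthesize mask-domain measurements, and recover the aperture-plane signal with the pseudo-inverse. The sizes, guided-wave phase, and mask choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_y, n_m = 16, 24                  # effective dipoles, tuning masks

# Each row of Phi is one mask: a binary on/off pattern multiplied by the
# guided-wave phase at each element (illustrative values).
phase = np.exp(-1j * 0.7 * np.arange(n_y))
masks = np.vstack([np.eye(n_y),                      # identity masks
                   rng.integers(0, 2, (n_m - n_y, n_y))])
Phi = masks * phase

# "True" aperture-plane signal and the resulting mask-domain measurements.
S = rng.standard_normal(n_y) + 1j * rng.standard_normal(n_y)
g = Phi @ S

# Preprocessing step: S_hat = Phi^+ g recovers the spatial-domain signal.
S_hat = np.linalg.pinv(Phi) @ g
print(np.allclose(S_hat, S))  # True: Phi has full column rank here
```

When the mask set is oversampled and well conditioned, as here, the recovery is exact up to floating-point error; Section 5 discusses the ill-conditioned case.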

## 4. RANGE MIGRATION FORMULATION

Building upon the analogy drawn in Section 3, here we develop a range migration method based on bistatic detection of the fields, following closely the path presented by Zhuge and Yarovoy [12]. In this scenario, a single receiver is located at the origin, and a set of transmitters are uniformly distributed along an axis, as shown in Fig. 2(a). Though we will later discretize these equations with respect to the coordinates, it is convenient in deriving the basic reconstruction approach to assume the coordinates are continuous variables.

We begin by writing the field at the aperture in terms of contributions from an object in the scene. Starting with Eqs. (6) and (7), we have

The use of the Dirac delta functions is justified, as we will be integrating over the receive antenna location, which is fixed in the aperture plane. We imagine the source at coordinate ${y}_{t}$ to be a Hertzian dipole. Then, the distance from each of the effective dipole sources to a scattering point in the scene is

Although we will fix the receiver position at the origin of the coordinate system, we can generally write the distance from a receive point to the object as

We now assume that the transmit and receive aperture fields are continuous and that a Fourier transform can be established over those fields. Thus, taking the Fourier transform of both sides of Eq. (19) over the aperture coordinates yields

The integral over ${A}_{r}$ is

Equation (26) is an approximation in which the magnitude term $1/{R}^{\prime}$ has been ignored, as in other RMA derivations [12,13]. The function ${E}_{2}({k}_{yt}^{\prime})$ can be easily evaluated as

Inserting Eqs. (26) and (27) into Eq. (25) yields

Finally, inserting Eqs. (23) and (28) into Eq. (22) yields

In the absence of a ${k}_{z}$ component, the object cannot be resolved along the $z$ direction; therefore, it is preferable in this scenario to have an object lying in the $x$–$y$ plane, such that $z=0$ everywhere. Additionally, if $x\gg y$, making the approximation $\sqrt{{x}^{2}+{y}^{2}}\approx x$ allows Eq. (29) to be rewritten as

From Eq. (30), it can be seen that $S({k}_{y},{k}_{x})$ is the Fourier transform of the scene reflectivity; therefore, in order to retrieve the reflectivity, an inverse Fourier transform of $S({k}_{y},{k}_{x})$ must be performed. In addition, the term $\frac{j\sqrt{2\pi}}{\sqrt{{k}^{2}-{k}_{y}^{2}}}$ acts as a filter on the signal in the Fourier domain.

Equation (31),

$${k}_{x}=k+\sqrt{{k}^{2}-{k}_{y}^{2}},$$

will be referred to throughout this paper as the dispersion relation. The mapping of the signal from $S({k}_{y},k)$ to $S({k}_{y},{k}_{x})$ is known as Stolt interpolation, which compensates the range curvature of all scatterers by an appropriate warping of the backscattered data. Additionally, Stolt interpolation resamples $S$ onto a uniform grid of ${k}_{y}$ and ${k}_{x}$ so that fast Fourier transforms can readily operate on $S$. In the present context, this dispersion relation arises as the result of the approximation $\sqrt{{x}^{2}+{y}^{2}}\approx x$, which means that an image will only be reconstructed with high fidelity when the object is near the center of the aperture.

A block diagram of the RMA is shown in Fig. 3. Similar to other RMA derivations [10,13], once the signal has been transformed at the aperture plane, it is processed by applying an FFT along the aperture coordinate. The Stolt interpolation then applies the dispersion relation and resamples the signal onto a uniform grid, retaining only the real values of ${k}_{x}$ given by the dispersion relation. Finally, an IFFT is performed in order to retrieve the reflectivity of the object.
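The steps of the flow chart can be sketched end to end for the SIMO geometry of Fig. 2(a), with the receiver at the origin and transmitters along $y$. The simulation below is a minimal sketch under stated assumptions: it borrows the Section 5 frequency band and element spacing, places a hypothetical point target at (1.0 m, 0.25 m), backpropagates to an assumed reference range before the Stolt resampling, and neglects $1/R$ magnitude terms as in the derivation.

```python
import numpy as np

c = 3e8
f = np.linspace(17.5e9, 22e9, 51)            # frequency sweep [Hz]
k = 2 * np.pi * f / c                        # free-space wavenumbers
n_y, d = 105, 6.8e-3
y = np.arange(n_y) * d                       # transmitter positions

# Synthetic SIMO data: point target at (x0, y0), receiver at the origin.
x0, y0 = 1.0, 0.25
Rt = np.sqrt(x0**2 + (y0 - y)**2)            # transmitter-to-target distances
Rr = np.sqrt(x0**2 + y0**2)                  # receiver-to-target distance
S = np.exp(-1j * np.outer(Rt + Rr, k))       # S[y_t, f] = e^{-jk(Rt + Rr)}

# 1) Plane-wave decomposition: FFT along the aperture coordinate.
S_ky = np.fft.fft(S, axis=0)
ky = 2 * np.pi * np.fft.fftfreq(n_y, d)

# 2) Stolt interpolation: map k -> kx = k + sqrt(k^2 - ky^2), backpropagate
#    to a reference range x_ref, and resample each ky row onto a uniform kx
#    grid (evanescent components with k^2 < ky^2 are dropped).
x_ref = 1.0
n_kx = 200
kx = np.linspace(k.min(), 2 * k.max(), n_kx)
S_stolt = np.zeros((n_y, n_kx), dtype=complex)
for i, kyi in enumerate(ky):
    valid = k**2 >= kyi**2
    if valid.sum() < 2:
        continue
    kx_src = k[valid] + np.sqrt(k[valid]**2 - kyi**2)
    row = S_ky[i, valid] * np.exp(1j * kx_src * x_ref)   # backpropagation
    S_stolt[i] = (np.interp(kx, kx_src, row.real, left=0, right=0)
                  + 1j * np.interp(kx, kx_src, row.imag, left=0, right=0))

# 3) 2D inverse FFT back to the spatial domain; x is relative to x_ref.
img = np.abs(np.fft.ifft2(S_stolt))
x_axis = 2 * np.pi * np.fft.fftfreq(n_kx, kx[1] - kx[0])
y_axis = 2 * np.pi * np.fft.fftfreq(n_y, ky[1] - ky[0])
iy, ix = np.unravel_index(np.argmax(img), img.shape)
# The image peak lands near the target location (x0, y0).
```

The backpropagation factor slows the phase variation of the spectrum before interpolation; without it, linear interpolation across the coarse frequency grid would alias the rapidly rotating phase.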

#### A. Moving the Dynamic Aperture

If the aperture is translated along the $z$ direction, then the backscattered signal associated with each transmitting antenna and each location along the $z$ axis can be expressed as

Following a similar derivation as described above, the measured signal can be transformed in the Fourier domain as

## 5. ALGORITHM IMPLEMENTATION

This section illustrates the practical implementation of the adapted RMA formulation described in the previous section. For all the simulation results presented throughout this paper, the frequency band of operation is 17.5–22.0 GHz, sampled with ${n}_{f}=51$ uniformly spaced points. The inter-element spacing of the dynamic metasurface is 6.8 mm, corresponding to half a wavelength at 22 GHz in free space and thus satisfying the Nyquist sampling requirement. The total number of metamaterial elements is ${n}_{y}=105$, corresponding to a 71.4 cm overall aperture. The imaging scene is the domain described as $x\in [0.75,1.25]\ \mathrm{m}$, $y\in [-0.25,0.25]\ \mathrm{m}$. This region is discretized using 61 by 94 pixels, resulting in 5734 pixels. A noiseless scenario is considered throughout this paper, except where noise is explicitly introduced, to focus on the implementation of the proposed algorithm. The numerical results are obtained using a workstation equipped with a 64 bit 3.5 GHz CPU.
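The quoted parameters can be cross-checked with quick arithmetic (assuming $c = 3\times 10^{8}$ m/s); note in particular that $61 \times 94 = 5734$ pixels.

```python
# Sanity check of the stated simulation parameters.
c = 3e8
half_wavelength = c / 22e9 / 2       # free-space half-wavelength at 22 GHz
d = 6.8e-3                           # quoted element spacing
aperture = 105 * d                   # overall aperture length
pixels = 61 * 94                     # scene discretization

print(round(half_wavelength * 1e3, 2))  # 6.82 mm, quoted as 6.8 mm
print(round(aperture * 100, 1))         # 71.4 cm
print(pixels)                           # 5734
```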

#### A. Pseudo-Inversion of the Aperture Field

Considering that not all mask sets applied to the dynamic metasurface result in a basis with some degree of orthonormality and completeness, it is not guaranteed that the exact inverse of $\mathbf{\Phi}$ can be obtained. Therefore, in order to satisfy Eq. (14), we must calculate a high-fidelity estimate of the pseudo-inverse ${\mathbf{\Phi}}^{+}$. To this purpose, we study the conditioning of $\mathbf{\Phi}$ through its singular value decomposition (SVD) [15,41]. At each frequency, the SVD factorizes $\mathbf{\Phi}$ as

$$\mathbf{\Phi}=\mathbf{U}\mathbf{\Sigma}{\mathbf{V}}^{\dagger},$$

where ${}^{\dagger}$ stands for the conjugate transpose operator, and $\mathbf{U}$ and $\mathbf{V}$ are unitary matrices made of an orthonormal set of bases, weighted by the singular values ${s}_{1},\dots ,{s}_{N}$, ordered from largest to smallest in the quasi-diagonal matrix $\mathbf{\Sigma}$. Under this factorization, the pseudo-inverse matrix, as described in Eq. (37), is

$${\mathbf{\Phi}}^{+}=\mathbf{V}{\mathbf{\Sigma}}^{+}{\mathbf{U}}^{\dagger},$$

where ${\mathbf{\Sigma}}^{+}$ is a diagonal matrix given by

$${\mathbf{\Sigma}}^{+}=\mathrm{diag}\left(\frac{1}{{s}_{1}},\dots ,\frac{1}{{s}_{{n}_{y}}}\right).$$

The condition number of $\mathbf{\Phi}$ is defined as the ratio between the largest and smallest singular values, $\frac{{s}_{1}}{{s}_{{n}_{y}}}$. When all the singular values are equal, the vectors of the orthogonal bases $\mathbf{U}$ and $\mathbf{V}$ are equally weighted in the signal reconstruction, indicating that, for each mask, the radiation patterns generated by the collection of dipoles in $\mathbf{\Phi}$ are spatially independent. In this case, the SV spectrum is flat and $\frac{{s}_{1}}{{s}_{{n}_{y}}}=1$.

An example of such a scenario is shown in Fig. 4(a), where a set of masks with only one element *on* at a time was used; this set of masks is called the *identity*. In practice, the weak coupling between the guided wave and each metamaterial element in the dynamic metasurface implies a small radiated power per element; therefore, many more elements should be turned *on*. An example of this case is shown in Fig. 4(b), where a random distribution of *on/off* elements has been used. In this case, more energy is radiated toward the scene, which results in a system more robust to noise. This comes at the cost of correlation among the resulting radiation patterns—which can be seen from the condition number—due to the fact that the dynamic metasurface does not provide fully independent control of the magnitude and phase of each dipole.
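The contrast between the identity mask set and random *on/off* masks can be reproduced with a small numerical experiment; the matrix size and guided-wave phase here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64
phase = np.exp(-1j * 0.9 * np.arange(n))   # assumed guided-wave phase

Phi_identity = np.eye(n) * phase           # one element on per mask
Phi_random = rng.integers(0, 2, (n, n)) * phase

print(round(np.linalg.cond(Phi_identity), 6))  # 1.0: flat SV spectrum
print(np.linalg.cond(Phi_random) > 1.0)        # True: correlated patterns
```

The identity set gives perfectly independent patterns (condition number of unity), while random masks trade conditioning for radiated power.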

In our particular simulation, 105 metamaterial elements compose the dynamic metasurface and 105 masks have been applied, although these two numbers do not necessarily need to be the same. For each random mask, half of the elements are turned *on*. Because half of the elements are turned on, the expected normalized power radiated is 0.49, in close agreement with ${s}_{1}/105=0.50$ [41], as shown in Fig. 4(c).

In order to retrieve high fidelity images, a regularization parameter $\beta $ must be introduced to remove the small singular values, which correspond to correlated measurement bases. This regularization ensures a proper reconstruction of the data, depending on the intrinsic ill-conditioning of $\mathbf{\Phi}$, by truncating the lower end of the SV spectrum, such that ${\mathbf{\Sigma}}^{+}$ becomes $\{\frac{1}{{s}_{1}},\dots ,\frac{1}{{s}_{\beta}},0,\dots ,0\}$ [25]. Details on the effects of this regularization parameter are presented in the next section.
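A minimal sketch of this truncated pseudo-inverse, assuming the regularization parameter is expressed as the number of retained singular values:

```python
import numpy as np

def truncated_pinv(Phi, n_keep):
    """Pseudo-inverse keeping only the n_keep largest singular values,
    mirroring the SV truncation controlled by the regularization parameter."""
    U, s, Vh = np.linalg.svd(Phi, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:n_keep] = 1.0 / s[:n_keep]      # {1/s_1, ..., 1/s_beta, 0, ..., 0}
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

# Illustration on a random binary mask matrix (values are assumptions).
rng = np.random.default_rng(2)
Phi = rng.integers(0, 2, (105, 105)).astype(complex)
Phi_plus = truncated_pinv(Phi, n_keep=80)
```

Keeping all singular values recovers the ordinary pseudo-inverse; decreasing `n_keep` suppresses the noise-amplifying directions at the cost of discarding some signal subspace.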

#### B. Range Migration Algorithm

The signal measured with a collection of independent antennas, as shown in Fig. 2(a) and computed using Eq. (19), is ${\mathbf{S}}_{{n}_{y}\times {n}_{f}}$, where ${n}_{f}$ is the number of frequencies and ${n}_{y}$ is the number of antennas. The signal measured with the dynamic metasurface, as shown in Fig. 2(b) and computed as in Eq. (8), becomes ${\mathbf{g}}_{{n}_{m}\times {n}_{f}}$, where ${n}_{m}$ is the number of masks. Finally, the transformed signal, which satisfies Eq. (17), will be called ${\widehat{S}}_{{n}_{y}\times {n}_{f}}$.

Given ${\mathbf{\Phi}}^{+}$, we multiply it by $\mathbf{g}$ in order to find the signal at the aperture plane, which is compatible with the RMA. Figure 5(a) corresponds to the backscattered signal measured with an independent array of antennas, satisfying Eq. (19), while Fig. 5(b) corresponds to the backscattered signal measured with the dynamic metasurface, satisfying Eq. (8). As shown in Fig. 5(c), the transformed signal $\widehat{\mathbf{S}}={\mathbf{\Phi}}^{+}\mathbf{g}$ appears comparable with the signal shown in Fig. 5(a). Therefore, we have separated the role of the dynamic metasurface from the backpropagation of the received signal. In this example, a single point-like scatterer located at $({x}_{0},\mathrm{0,0})$ with ${x}_{0}=1\ \mathrm{m}$ is used as a target, resulting in a point spread function (PSF).

With the transformed signal shown in Fig. 5(c), we will walk through each step of the RMA flow chart depicted in Fig. 3. The next step is the plane-wave decomposition of the signal, obtained by taking the Fourier transform of ${\widehat{S}}_{{n}_{y}\times {n}_{f}}$, as shown in Fig. 6(a). Then, the signal is backpropagated to the center of the region of interest in range, ${x}_{0}$, by multiplying the signal by ${e}^{j{k}_{x}{x}_{0}}$, where ${k}_{x}$ satisfies the dispersion relation in Eq. (31). The signal is then resampled onto a uniform ${k}_{x}$ grid, as shown in Fig. 6(b). Notice that the component of the wave vector along the range direction must be real in order to obtain a propagating mode. Therefore, the range of values where Eq. (31) is valid reduces to ${k}^{2}\ge {k}_{y}^{2}$. The values that do not satisfy this inequality, corresponding to evanescent fields, are ignored. Finally, given the signal in the wavenumber domain, a 2D inverse Fourier transform is performed in order to find the reflectivity of the object, as shown in Fig. 6(c).

In order to determine the fidelity of the reconstruction with the transformed signal, ${\widehat{S}}_{{n}_{y}\times {n}_{f}}$, we compare images reconstructed with ${\widehat{S}}_{{n}_{y}\times {n}_{f}}$ to images reconstructed with ${S}_{{n}_{y}\times {n}_{f}}$. To quantify this comparison, we use the peak signal-to-noise ratio (PSNR), computed from the difference between the images ${I}_{S}$ and ${I}_{\widehat{S}}$ of size $M\times N$, as shown in Eq. (39). For this comparison, images of a point scatterer were used. The PSNR is

$$\mathrm{PSNR}=10\,{\mathrm{log}}_{10}\left(\frac{\mathrm{max}{({I}_{S})}^{2}}{\frac{1}{MN}\sum_{m=1}^{M}\sum_{n=1}^{N}{\left[{I}_{S}(m,n)-{I}_{\widehat{S}}(m,n)\right]}^{2}}\right).$$
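A standard PSNR implementation consistent with this usage is sketched below (the exact normalization used in the paper's Eq. (39) may differ); the example images are hypothetical.

```python
import numpy as np

def psnr(I_ref, I_test):
    # Standard peak signal-to-noise ratio in dB.
    mse = np.mean(np.abs(I_ref - I_test) ** 2)
    return 10 * np.log10(np.max(np.abs(I_ref)) ** 2 / mse)

# Example with a hypothetical 61x94 image and a small perturbation.
rng = np.random.default_rng(3)
I_s = rng.random((61, 94))
I_hat = I_s + 0.01 * rng.standard_normal((61, 94))
value = psnr(I_s, I_hat)       # around 40 dB for this noise level
```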

Figure 7(a) shows a 2D map of the PSNR as a function of the regularization parameter and the oversampling rate, which is defined as the ratio of the number of masks to the number of effective dipoles along the $y$ direction. In this example, additive white Gaussian noise was introduced to the measurement vector $\mathbf{g}$ such that the maximum PSNR achievable is 35 dB. With a fixed noise level, it is possible to observe how different degrees of oversampling affect the conditioning of the $\mathbf{\Phi}$ matrix, as shown in Fig. 7(b). The variation of the SV spectra implies that deliberate truncation of the singular values must be considered. Figure 7(c) presents three possible scenarios. In the first scenario, too much truncation of the SV spectrum reduces the number of basis vectors that carry information about the target, introducing artificial noise in the target's reconstruction. A greater PSNR is obtained in the second scenario, where both the oversampling and the singular value truncation improve the signal reconstruction. In the third scenario, the lack of significant oversampling yields a poor image reconstruction, regardless of how much truncation is applied. In this case, the smallest singular values—associated with the signal's noise—dominate the inversion, and the image cannot be distinguished from the introduced noise.

## 6. SIMULATED RESULTS

Given the appropriate regularization parameter for the reconstruction, we now compare the presented RMA with the computational imaging techniques previously used with the dynamic metasurface. The main advantage of the RMA described above lies in the use of fast Fourier transforms to backpropagate the transformed signal in the wavenumber domain and inverse fast Fourier transforms to recover the signal in the spatial domain. To validate the adapted RMA, we compare its results to those obtained with computational imaging techniques previously applied to dynamic metamaterial antennas [19,42]. For such techniques, the measured signal follows Eq. (10), and the simplest method to achieve the image reconstruction is

$${\mathit{\sigma}}_{\text{est}}={\mathbf{H}}^{+}g,$$

where ${\mathit{\sigma}}_{\text{est}}$ is an estimate of the imaged scene, and ${\mathbf{H}}^{+}$ is the pseudoinverse of the sensing matrix. A first approximation of the pseudoinverse is ${\mathbf{H}}^{+}={\mathbf{H}}^{\dagger}$, where $\dagger$ represents the conjugate transpose; this choice is usually referred to as the matched filter (MF). To reconstruct higher-quality images, it is possible to use iterative techniques, such as least squares or the generalized minimal residual method (GMRES), to invert Eq. (10). The GMRES algorithm takes the matched-filter reconstruction as an initial estimate (considering that the MF applies only a phase compensation) and iterates toward the best solution, including the magnitude information [43]. For the point spread function shown in Fig. 6, we expect the resolution in range and cross range to follow the diffraction limits

$${\delta}_{r}=\frac{c}{2B},\qquad {\delta}_{cr}=\frac{{\lambda}_{c}{x}_{0}}{L},$$

where $B$ is the frequency bandwidth, ${\lambda}_{c}$ is the center wavelength, ${x}_{0}$ is the range distance of the target, and $L$ is the physical length of the dynamic metasurface. For the simulated parameters described in the previous section, the expected values are ${\delta}_{r}=3.33\ \mathrm{cm}$ and ${\delta}_{cr}=2.14\ \mathrm{cm}$. As shown in Fig. 8, the reconstructions obtained with all the different methods agree with these values, yielding ${\delta}_{r}=3.38\ \mathrm{cm}$ and ${\delta}_{cr}=2.15\ \mathrm{cm}$, respectively.

Figure 9 shows the image reconstruction of a complex target using the different reconstruction methods.
Interestingly, the image reconstructed with the RMA has better quality than that of the matched filter. This result can be attributed to the fact that, before the RMA is performed, the signal has been transformed with a more accurate calculation of ${\mathbf{\Phi}}^{+}$ via the aforementioned truncated SVD, whereas the matched filter technique uses ${\mathbf{\Phi}}^{+}={\mathbf{\Phi}}^{\dagger}$, implicitly computed in ${\mathbf{H}}^{\dagger}$. Moreover, the signal is better reconstructed with the GMRES technique, since this iterative approach provides a more accurate calculation of ${\mathbf{H}}^{+}$, while the RMA relies on the approximation $\sqrt{{x}^{2}+{y}^{2}}\approx x$.
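For reference, the matched-filter and GMRES reconstructions can be sketched as follows. Since GMRES requires a square operator, this sketch applies it to the normal equations ${\mathbf{H}}^{\dagger}\mathbf{H}\sigma={\mathbf{H}}^{\dagger}g$, seeded with the matched-filter estimate; this is one common way to use GMRES with a rectangular $\mathbf{H}$, and all names here are ours rather than the paper's.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def matched_filter(H, g):
    """Matched-filter estimate: sigma = H^dagger g (phase compensation only)."""
    return H.conj().T @ g

def gmres_reconstruct(H, g, maxiter=50):
    """Iterative estimate via GMRES on the normal equations, using the
    matched-filter reconstruction as the initial guess."""
    n = H.shape[1]
    # Matrix-free operator v -> H^dagger H v avoids forming H^dagger H.
    A = LinearOperator((n, n), matvec=lambda v: H.conj().T @ (H @ v),
                       dtype=complex)
    b = matched_filter(H, g)
    sigma, info = gmres(A, b, x0=b, maxiter=maxiter)
    return sigma
```

The matched filter is a single matrix-vector product, while GMRES refines that estimate iteratively, which mirrors the quality ordering (MF < RMA < GMRES) observed in Fig. 9.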

The adapted RMA can also be used for imaging systems in which a platform moves to synthesize a large aperture and obtain higher resolution, a case encountered especially in Earth observation. One form of SAR imaging involves translating the aperture along the $z$ direction with the aim of retrieving 3D images. In this approach, range information is obtained from the frequency, height information is obtained by moving along the $z$ axis, and cross-range information is obtained by illuminating the scene with different radiation patterns. The transformation of the signal remains the same as in the stationary case, and the reflectivity can be found using Eqs. (32)–(35). To demonstrate the utility of this approach, the dynamic metasurface and the receiving probe are used to reconstruct an image of five point scatterers spread over the region of interest, as shown in Fig. 10(a). In this scenario, the aperture was moved in 6.8 mm increments along the $z$ axis. For further comparison with other reconstruction techniques, the aperture and the domain size have been reduced. At each $z$ position, 68 masks were used to illuminate the scene, the total number of effective dipoles in the 2D synthetic aperture is $45\times 45=2025$, and the number of frequencies was reduced to 15. The reconstructed image is shown in Fig. 10(b), which exhibits great similarity with the target under test. This scenario, involving $15\times 45\times 45=30{,}375$ voxels defined in the region of interest $x\in [0.1,0.9]$, $y\in [-0.15,0.15]$, $z\in [-0.15,0.15]$, further highlights a major advantage of the presented algorithm: the aforementioned methods, based on computing the sensing matrix $\mathbf{H}$, may become intractable as the region of interest increases, as shown in Table 1.
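To see why matrix-based methods become intractable, consider the storage required for a dense $\mathbf{H}$ in this 3D scenario. This is a rough back-of-the-envelope sketch under two assumptions of ours: entries are stored as double-precision complex numbers, and the number of measurements is masks times frequencies times $z$-positions.

```python
def sensing_matrix_gbytes(n_measurements, n_voxels, bytes_per_entry=16):
    """Memory (in GB) needed to store a dense complex sensing matrix H
    of size n_measurements x n_voxels (complex128 = 16 bytes per entry)."""
    return n_measurements * n_voxels * bytes_per_entry / 1e9

# Parameters from the 3D scenario in the text (our reading of the setup):
# 45 z-positions x 68 masks x 15 frequencies, and 15 x 45 x 45 = 30,375 voxels.
n_meas = 45 * 68 * 15
n_vox = 15 * 45 * 45
print(sensing_matrix_gbytes(n_meas, n_vox))  # ~22.3 GB
```

Even in this deliberately reduced scenario, a dense $\mathbf{H}$ approaches the memory limits of typical workstations, consistent with the GMRES reconstruction failing on the hardware used; the RMA sidesteps this entirely by never forming $\mathbf{H}$.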

While the matched filter and RMA methods may produce images of similar quality, the presented RMA offers significant advantages in both precomputation and reconstruction time for the stationary and moving-aperture scenarios. For the MF and GMRES reconstructions, the precomputation time is the time required to compute ${\mathbf{H}}^{\dagger}$, while for the RMA it is the time required to compute ${\mathbf{\Phi}}^{+}$. Therefore, although the reconstruction time via MF is shorter, its precomputation time is considerably larger and requires a predefined scene, while the RMA requires only knowledge of the average range distance to the center of the region of interest. Note that the precomputation time for the 3D reconstruction appears smaller than that for the 2D reconstruction. Recall that, in the 3D scenario, the aperture size and the number of masks have been reduced in order to make the simulations comparable with the other reconstruction methods; even in this reduced case, the reconstruction with the GMRES technique could not be performed due to the limitations of the computer hardware used.

## 7. CONCLUSIONS

An imaging system using dynamic metasurfaces has great potential in various applications ranging from security screening to Earth and ocean observation. In particular, metasurfaces can avoid the complications of complex hardware systems or moving parts. However, the reconstruction algorithms previously used for these systems are computationally slow, which has limited their widespread adoption.

In the process of adapting the RMA for dynamic metasurfaces, the important role of the ill-conditioning of the sensing matrix has been explored. In particular, $\mathbf{H}$ has been factorized as $\mathbf{H}=\mathbf{\Phi}\mathbf{G}$, where $\mathbf{G}$ accounts for the propagation only, while $\mathbf{\Phi}$ depends on the tunability of the dynamic metasurface. The appropriate selection of an orthonormal $\mathbf{\Phi}$ matrix relies not only on the selection of the applied masks but also on the appropriate modeling of the effective dipoles that compose the dynamic metasurface, including their neighboring interactions [37,39]. The design procedure to find the set of masks that generates the least ill-conditioned $\mathbf{\Phi}$ matrix is beyond the scope of this paper.

In this paper, we described how to transform the dynamic metasurface measurements in order to make them compatible with the RMA through a fast precomputation step. In doing so, we have adapted the RMA into an effective and fast reconstruction technique for imaging systems based on dynamic metasurfaces. The combination of a dynamic metasurface with this fast reconstruction algorithm can lead to an imaging system that is efficient and simple at both the hardware and software levels. This prospect opens the door to new opportunities in research areas such as high-resolution, real-time, volumetric imaging and airborne/spaceborne SAR.

## REFERENCES

**1. **D. A. Ausherman, A. Kozma, J. L. Walker, H. M. Jones, and E. C. Poggio, “Developments in radar imaging,” IEEE Trans. Aerosp. Electron. Syst. **AES-20**, 363–400 (1984). [CrossRef]

**2. **D. Massonnet and K. L. Feigl, “Radar interferometry and its application to changes in the Earth’s surface,” Rev. Geophys. **36**, 441–500 (1998). [CrossRef]

**3. **J.-C. Bolomey, “Recent European developments in active microwave imaging for industrial, scientific, and medical applications,” IEEE Trans. Microwave Theory Tech. **37**, 2109–2117 (1989). [CrossRef]

**4. **M. Soumekh, *Fourier Array Imaging* (Prentice-Hall, 1994).

**5. **B. Gonzalez-Valdes, G. Allan, Y. Rodriguez-Vaqueiro, Y. Alvarez, S. Mantzavinos, M. Nickerson, B. Berkowitz, J. A. Martinez-Lorenzo, F. Las-Heras, and C. M. Rappaport, “Sparse array optimization using simulated annealing and compressed sensing for near-field millimeter wave imaging,” IEEE Trans. Antennas Propag. **62**, 1716–1722 (2014). [CrossRef]

**6. **M. Dehmollaian and K. Sarabandi, “Refocusing through building walls using synthetic aperture radar,” IEEE Trans. Geosci. Remote Sens. **46**, 1589–1599 (2008). [CrossRef]

**7. **J. C. Curlander and R. N. McDonough, *Synthetic Aperture Radar* (Wiley, 1991).

**8. **S. S. Ahmed, A. Schiessl, and L.-P. Schmidt, “A novel fully electronic active real-time imager based on a planar multistatic sparse array,” IEEE Trans. Microwave Theory Tech. **59**, 3567–3576 (2011). [CrossRef]

**9. **A. V. Oppenheim and R. W. Schafer, *Discrete-Time Signal Processing* (Prentice-Hall, 1989), Vol. 2.

**10. **C. Cafforio, C. Prati, and F. Rocca, “SAR data focusing using seismic migration techniques,” IEEE Trans. Aerosp. Electron. Syst. **27**, 194–207 (1991). [CrossRef]

**11. **S. Guillaso, A. Reigber, L. Ferro-Famil, and E. Pottier, “Range resolution improvement of airborne SAR images,” IEEE Geosci. Remote Sens. Lett. **3**, 135–139 (2006). [CrossRef]

**12. **X. Zhuge and A. G. Yarovoy, “Three-dimensional near-field MIMO array imaging using range migration techniques,” IEEE Trans. Image Process. **21**, 3026–3033 (2012). [CrossRef]

**13. **J. M. Lopez-Sanchez and J. Fortuny-Guasch, “3-D radar imaging using range migration techniques,” IEEE Trans. Antennas Propag. **48**, 728–737 (2000). [CrossRef]

**14. **M. Soumekh, *Synthetic Aperture Radar Signal Processing* (Wiley, 1999).

**15. **D. J. Brady, *Optical Imaging and Spectroscopy* (Wiley, 2009).

**16. **C. M. Watts, D. Shrekenhamer, J. Montoya, G. Lipworth, J. Hunt, T. Sleasman, S. Krishna, D. R. Smith, and W. J. Padilla, “Terahertz compressive imaging with metamaterial spatial light modulators,” Nat. Photonics **8**, 605–609 (2014). [CrossRef]

**17. **C. L. Holloway, A. Dienstfrey, E. F. Kuester, J. F. O’Hara, A. K. Azad, and A. J. Taylor, “A discussion on the interpretation and characterization of metafilms/metasurfaces: the two-dimensional equivalent of metamaterials,” Metamaterials **3**, 100–112 (2009). [CrossRef]

**18. **C. L. Holloway, E. F. Kuester, J. Gordon, J. O. Hara, J. Booth, and D. R. Smith, “An overview of the theory and applications of metasurfaces: the two-dimensional equivalents of metamaterials,” IEEE Antennas Propag. Mag. **54**(2), 10–35 (2012). [CrossRef]

**19. **T. Sleasman, M. F. Imani, J. N. Gollub, and D. R. Smith, “Dynamic metamaterial aperture for microwave imaging,” Appl. Phys. Lett. **107**, 204104 (2015). [CrossRef]

**20. **J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, “Metamaterial apertures for computational imaging,” Science **339**, 310–313 (2013). [CrossRef]

**21. **R. A. Stevenson, A. H. Bily, D. Cure, M. Sazegar, and N. Kundtz, “55.2: Invited paper: Rethinking wireless communications: Advanced antenna design using LCD technology,” in *SID Symposium Digest of Technical Papers* (Wiley, 2015), Vol. 46, pp. 827–830.

**22. **E. E. Fenimore and T. Cannon, “Coded aperture imaging with uniformly redundant arrays,” Appl. Opt. **17**, 337–347 (1978). [CrossRef]

**23. **J. Hunt, J. Gollub, T. Driscoll, G. Lipworth, A. Mrozack, M. S. Reynolds, D. J. Brady, and D. R. Smith, “Metamaterial microwave holographic imaging system,” J. Opt. Soc. Am. A **31**, 2109–2119 (2014). [CrossRef]

**24. **T. Fromenteze, O. Yurduseven, M. F. Imani, J. Gollub, C. Decroze, D. Carsenat, and D. R. Smith, “Computational imaging using a mode-mixing cavity at microwave frequencies,” Appl. Phys. Lett. **106**, 194104 (2015). [CrossRef]

**25. **T. Fromenteze, E. L. Kpré, D. Carsenat, C. Decroze, and T. Sakamoto, “Single-shot compressive multiple-inputs multiple-outputs radar imaging using a two-port passive device,” IEEE Access **4**, 1050–1060 (2016). [CrossRef]

**26. **G. Lipworth, A. Rose, O. Yurduseven, V. R. Gowda, M. F. Imani, H. Odabasi, P. Trofatter, J. Gollub, and D. R. Smith, “Comprehensive simulation platform for a metamaterial imaging system,” Appl. Opt. **54**, 9343–9353 (2015). [CrossRef]

**27. **T. Fromenteze, C. Decroze, and D. Carsenat, “Waveform coding for passive multiplexing: Application to microwave imaging,” IEEE Trans. Antennas Propag. **63**, 593–600 (2015). [CrossRef]

**28. **O. Yurduseven, M. F. Imani, H. Odabasi, J. Gollub, G. Lipworth, A. Rose, and D. R. Smith, “Resolution of the frequency diverse metamaterial aperture imager,” Prog. Electromag. Res. **150**, 97–107 (2015). [CrossRef]

**29. **T. Sleasman, M. Boyarsky, M. F. Imani, J. N. Gollub, and D. R. Smith, “Design considerations for a dynamic metamaterial aperture for computational imaging at microwave frequencies,” J. Opt. Soc. Am. B **33**, 1098–1111 (2016). [CrossRef]

**30. **A. Grbic and G. V. Eleftheriades, “Leaky CPW-based slot antenna arrays for millimeter-wave applications,” IEEE Trans. Antennas Propag. **50**, 1494–1504 (2002). [CrossRef]

**31. **T. Sleasman, M. Imani, W. Xu, J. Hunt, T. Driscoll, M. Reynolds, and D. Smith, “Waveguide-fed tunable metamaterial element for dynamic apertures,” IEEE Antennas Wireless Propag. Lett. **15**, 606–609 (2015). [CrossRef]

**32. **I. Gil, J. Bonache, J. García-García, and F. Martín, “Tunable metamaterial transmission lines based on varactor-loaded split-ring resonators,” IEEE Trans. Microwave Theory Tech. **54**, 2665–2674 (2006). [CrossRef]

**33. **R. C. Hansen, *Phased Array Antennas* (Wiley, 2009), Vol. 213.

**34. **T. H. Hand, J. Gollub, S. Sajuyigbe, D. R. Smith, and S. A. Cummer, “Characterization of complementary electric field coupled resonant surfaces,” Appl. Phys. Lett. **93**, 212504 (2008). [CrossRef]

**35. **C. Jouvaud, J. de Rosny, and A. Ourir, “Adaptive metamaterial antenna using coupled tunable split-ring resonators,” Electron. Lett. **49**, 518–519 (2013). [CrossRef]

**36. **H. Odabasi, F. Teixeira, and D. Guney, “Electrically small, complementary electric-field-coupled resonator antennas,” J. Appl. Phys. **113**, 084903 (2013). [CrossRef]

**37. **P. T. Bowen, T. Driscoll, N. B. Kundtz, and D. R. Smith, “Using a discrete dipole approximation to predict complete scattering of complicated metamaterials,” New J. Phys. **14**, 033038 (2012). [CrossRef]

**38. **M. Johnson, P. Bowen, N. Kundtz, and A. Bily, “Discrete-dipole approximation model for control and optimization of a holographic metamaterial antenna,” Appl. Opt. **53**, 5791–5799 (2014). [CrossRef]

**39. **L. Pulido-Mancera, T. Zvolensky, M. Imani, P. Bowen, M. Valayil, and D. Smith, “Discrete dipole approximation applied to highly directive slotted waveguide antennas,” IEEE Antennas Wireless Propag. Lett. **PP**, 1 (2016). [CrossRef]

**40. **E. H. Moore, “On the reciprocal of the general algebraic matrix (abstract),” Bull. Amer. Math. Soc. **26**, 394–395 (1920).

**41. **D. J. Brady, K. Choi, D. L. Marks, R. Horisaki, and S. Lim, “Compressive holography,” Opt. Express **17**, 13040–13049 (2009). [CrossRef]

**42. **O. Yurduseven, J. N. Gollub, D. L. Marks, and D. R. Smith, “Frequency-diverse microwave imaging using planar Mills-Cross cavity apertures,” Opt. Express **24**, 8907–8925 (2016). [CrossRef]

**43. **Y. Saad and M. H. Schultz, “GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems,” SIAM J. Sci. Stat. Comput. **7**, 856–869 (1986). [CrossRef]