Optica Publishing Group

Impact of image registration errors on the quality of hyperspectral images in imaging static Fourier transform spectrometry

Open Access

Abstract

Imaging static Fourier transform spectrometry (isFTS) is used for pushbroom airborne or spaceborne hyperspectral remote sensing. In isFTS, a static two-wave interferometer imprints linear interference fringes over the image of the scene, so that the spectral information is multiplexed over several instantaneous images, and numerical reconstruction is needed to recover the full spectrum for each pixel. The image registration step is crucial, since insufficient accuracy leads to artefacts on the images and the estimated spectra. In order to investigate these artefacts, we performed a theoretical study and designed a simulation program. We established that registration errors create crenellated spatial patterns, whose magnitude depends on the radiance gradient of the scene, the amplitude of the registration error, and the wavelength. In the case of sinusoidal perturbations, which may correspond for instance to mechanical vibrations of the carrier, we established that spurious peaks appear in the spectrum, as in dynamic FTS, but with spatial patterns specific to static interferometers.

© 2024 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Imaging static Fourier transform spectrometers (isFTS), in the "high étendue" configuration [1], are hyperspectral instruments resulting from the association of a static interferometer and a 2D imaging system. They offer various advantages, including high flux collection, with no slit or narrowband spectral filter, and the absence of moving parts, unlike classical (or dynamic) imaging Fourier transform spectrometers [2]. These assets have spurred the development of such instruments for ground-based [3–5], airborne [6–9] and spaceborne [10] applications, including ocean and atmosphere observation as well as Earth surface observation. For example, they have been used for precision agriculture and for measuring the speed of a volcanic plume [11].

Nevertheless, isFTS are far less common than dispersive imaging spectrometers. This can be mainly explained by two factors. The first one is that isFTS offer little flexibility in spectral range and resolution. Indeed, as Fourier transform spectrometers, they necessarily measure spectra from zero wavenumber to the maximum wavenumber, with a spectral resolution constant in wavenumber and not in wavelength. Furthermore, as static interferometers, their spectral resolution is limited by the number of pixels of the focal plane array, thus preventing them from reaching a spectral resolution as fine as that of classical FTS. The other main hindrance to the use of isFTS is that they require heavy data processing algorithms: since they are "push-frame" or "windowing" [12] spectral imagers, image registration is needed to obtain the interferogram of each pixel, and the interferograms then have to be Fourier inverted. The processing chain is thus quite complex, and errors at any of these processing steps may result in spatial and spectral artefacts on the hyperspectral cube.

In this article, we will specifically focus on the impact of image registration errors. Several research teams have studied the effects of line-of-sight jitter in dynamic iFTS; see for instance [13,14] or [15]. There are also publications about registration errors with isFTS, but they either study only slit-based hyperspectral cameras [16,17] or insist on the need for precise registration for "high étendue" isFTS [18,19], without describing in a general way the consequences of registration errors. Thus, in this article, we propose to describe and quantify the impact of registration errors on the hyperspectral cube, and show that they may create very specific spatial and spectral patterns. These results are validated by simulated and experimental data.

In Section 2, we will present "high étendue" isFTS (which from now on we will simply call isFTS, even though not all imaging static Fourier transform spectrometers are of the "high étendue" class), and emphasize how the registration step is crucial for the hyperspectral image cube reconstruction, since it may lead to artefacts if not properly achieved. Then, in Section 3, we will analytically develop a model for image formation and inversion, without image registration errors. Section 4 will deal with registration errors, in the general case and in the more specific case of periodic errors, which may result from uncorrected and unknown micro-vibrations of the carrier. Results will be compared with those obtained with dynamic iFTS. Lastly, in Section 5, we will illustrate these results with experimental data, both from the laboratory and from an airborne instrument.

2. Principle of isFTS

2.1 General principle

As stated above, the isFTS instruments we are interested in are made up of a classical 2D imaging system and an interferometer. The latter splits the light coming from the source into two arms, delays one with an optical path difference (OPD) with respect to the other arm, and then recombines the two arms. Measuring the signal variation versus the OPD (i.e. the interferogram) is equivalent to measuring the autocorrelation of light, and thus yields the spectrum after Fourier transform (see for instance page 42 of [20]). In its most ideal form, the relationship between the interferogram $I$ and the apparent spectrum $S$ is:

$$I(\delta) = \int_{\sigma_{min}}^{\sigma_{max}}S(\sigma)\times\frac{1+\mu \cos (2\pi \sigma \delta)}{2}\,\mathrm{d}\sigma$$
with $\delta$ the OPD [m], $\sigma$ [$\mathrm {m}^{-1}$] the wavenumber, and $\mu$ the interferometer contrast. In the following, we will replace $S$ by a radiometrically defined quantity, but at this stage the point is that by "apparent", we mean that the detector relative efficiency and optical transmittance of the instrument —apart from the interference term— are included in $S$. This relationship is a Fourier (or Cosine) transform. It can be inverted to retrieve the spectrum from the modulated part (AC part) of the interferogram:
$$S(\sigma) = \frac{4}{\mu} \int_{-\infty}^{+\infty} \left( I\left(\delta\right)-\bar{I} \right) \times \cos (2\pi \sigma \delta)\,\mathrm{d}\delta$$
with $\bar {I}$ the mean value (DC part) of the interferogram. Since the support of $S(\sigma )$ is contained in $[-\sigma _{max},\sigma _{max}]$, the interferogram must be sampled with a pitch finer than $1/2\sigma _{max}$ —with the exception of the specific case of narrow spectra which will not be discussed here. In practice, the measured range of OPD is finite, limited by $\delta _{max}$, so that the retrieved spectrum is the true spectrum convolved by the Instrument Line Shape (ILS), a sinc function in the ideal case, normalized so that $\int \mathit {ILS}\left (\sigma \right )\,\mathrm {d}\sigma = 1$:
$$\mathit{ILS}\left(\sigma\right) = 2 \delta_{max} \mathrm{sinc}\left(2\delta_{max}\sigma\right)$$
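The pair formed by Eqs. (1)–(3) can be checked numerically. The sketch below is our own illustration, not taken from the paper: all numerical values (OPD step, contrast, line position) are arbitrary choices. It builds the interferogram of a single monochromatic line, inverts it with the discretized cosine transform of Eq. (2), and recovers a peak of height $\mathit{ILS}(0) = 2\delta_{max}$ at the line position, per Eq. (3).

```python
import numpy as np

sigma_max = 18000.0            # assumed upper bound of the spectrum [cm^-1]
a_delta = 2.5e-5               # OPD sampling step [cm], finer than 1/(2*sigma_max)
N = 2048                       # number of samples (double-sided interferogram)
delta = np.arange(-N // 2, N // 2) * a_delta   # OPD axis [cm]
delta_max = delta.max()

mu = 0.8                       # interferometer contrast
sigma0, S0 = 15000.0, 1.0      # line position [cm^-1] and weight (integral of S)

# Eq. (1) for a spectrum reduced to a single line of weight S0
I = S0 * (1 + mu * np.cos(2 * np.pi * sigma0 * delta)) / 2

# Eq. (2), discretized: cosine-transform the AC part back to a spectrum
sigma = np.arange(10000.0, 18000.0, 2.0)
S_hat = (4 / mu) * a_delta * np.sum(
    (I - I.mean())[None, :] * np.cos(2 * np.pi * sigma[:, None] * delta[None, :]),
    axis=1)
# The retrieved spectrum is the line convolved by the sinc ILS of Eq. (3):
# a peak at sigma0 of height 2*delta_max, with sinc sidelobes around it.
```

The finite OPD range is what turns the ideal line into a sinc of width $\approx 1/(2\delta_{max})$, here about 20 cm⁻¹.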

These general considerations about FTS will be useful for the rest of this article. Further details, which are out of our scope, can be found in excellent dedicated books, like [20] or [21].

In dynamic imaging FTS, the field-of-view is fixed ("staring" mode), and the OPD scan is performed by moving one or both mirrors of a Michelson interferometer. Conversely, in isFTS, the interferometer is designed so that the OPD varies linearly along one direction of the field-of-view, viz. the along-track (ALT) one. Thus, a scan of the scene ("push frame" or "windowing" mode) in the direction of the OPD variation provides the OPD scan for each point of the scene. This scan is provided either by the natural movement of the carrier in airborne or spaceborne applications, or by a rotation stage for ground-based applications. In any case, the interferometer itself is not modified. This acquisition process is summarized in Fig. 1. In Fig. 1(a), we see the image of the scene with the interference fringes created by the varying OPD. As the carrier moves straight over the scene, a sequence of instantaneous images is taken, so that any given ground point is seen through all the available OPDs. These images are then registered: in the registered sequence, a given ground point lies on the same pixel in every image (Fig. 1(b)). We can thus extract the interferogram of each pixel from the stack of registered images, each image contributing one sample to each interferogram. The last operation is to compute the Fourier or Cosine transform of each interferogram. The set of spectra corresponding to each scene point constitutes the hyperspectral cube, an object with two spatial dimensions and one spectral dimension. The hyperspectral cube can also be seen as a stack of spectral (or monochromatic) images (Fig. 1(c)).


Fig. 1. Principle of isFTS (a) three instantaneous images of the same scene taken at different moments. The yellow dot follows the same ground point; (b) the images are registered; (c) three monochromatic images at different wavelengths.


2.2 Image registration

Image registration is a key step of the data processing. Registration errors cause a mixing of interferograms from different ground points, this mixing depending on the OPD. The consequence is thus not a mere blur, but also an alteration of the estimated spectra. Special care is therefore required when registering the images.

Registration may be performed by image processing or using line-of-sight (LoS) data. LoS data have the advantage of directly providing the attitude of the instrument (translations and rotations), which can be converted to an image transformation with a robust instrument model, including distortion, and, in the case of airborne or spaceborne instruments, a digital elevation model (DEM), to cope with non-horizontal scenes. However, LoS accuracy at the camera frame rate may be insufficient for high spatial resolution imaging. On the contrary, image processing has the advantage of directly providing the useful information, with an accuracy that may be much finer than one pixel. But image processing alone suffers from drawbacks: it may be biased by moving elements in the scene (e.g. vehicles) or by parallax errors due to different heights in the scene (e.g. buildings), even though a DEM can also be provided by image processing [22]. The best solution may thus be to merge both approaches: for instance, LoS data provide an initial set of registration parameters, which are then refined by image processing. This is illustrated in Fig. 2 with images from the Sieleters infrared airborne isFTS [9] developed at Onera. In Fig. 2(a), images have been registered only with LoS data, the accuracy of which is about a quarter of a pixel. Despite this subpixel accuracy, artefacts are clearly visible on the spectral images, in the form of crenellations along the vertical edges. In Fig. 2(b), image registration has been improved by image processing, the estimated accuracy being far better than a tenth of a pixel: the quality of the image is much better, and the artefacts disappear.


Fig. 2. Spectral image from the infrared airborne isFTS Sieleters, without (a) and with (b) fine image registration. In the first case, spatial artefacts appear along near vertical edges, indicated by the red ellipses.


However, even though the artefacts seem to have disappeared, it is useful to quantify the impact of registration errors, either because image processing may not be possible (for instance if images have to be processed onboard the aircraft or the satellite), or merely to specify the registration accuracy needed to comply with the required spectral image quality. Such a quantification is the topic of the next section, focusing on registration errors that can be described as mere translations, i.e. displacements of the carrier at constant altitude, or roll or pitch. This covers a wide range of operational situations and facilitates analytical calculation. We do not deal with unknown altitude changes, even though the analytical image model we developed may be adapted to this specific case.

3. Hyperspectral image model

3.1 Notations and direct problem

To analyze in a theoretical way the effect of registration errors, we first need to know how raw images are formed on the image sensor, and how the raw image stack is processed in order to build the interferometric cube and then the hyperspectral cube. Since we focus in this article only on image registration errors, we can make several simplifying assumptions.

  • The scene is defined by its apparent spectral radiance $L_{\sigma }\left ((x,y)_S,\sigma \right )$ [photons.$\mathrm {s^{-1}}$.$\mathrm {m^{-2}}$.$\mathrm {sr^{-1}}$.m], in a 2D coordinate system attached to the scene and thus marked by the $_{S}$ subscript, and at wavenumber $\sigma$.
  • The imaging system is perfectly stigmatic and without distortion, and the detector coordinate system is marked by subscript $_D$. For the sake of simplicity, we will consider a magnification factor of +1 between the scene and detector coordinate systems.
  • We assume that the scan can be modeled by a mere shift of the scene on the detector, this shift being defined for image number $k$ by the position $\left (x_{P,k},y_{P,k}\right )_S$ of the center $P$ of the instrument field-of-view (which is the point of coordinates $\left (0,0\right )_D$ in the detector coordinate system). This means that, if a point has coordinates $(x,y)_D$ in the detector coordinate system, then this very point has coordinates $(x+x_{P,k},y+y_{P,k})_S$ in the scene coordinate system:
    $$(x,y)_D \rightleftarrows (x+x_{P,k},y+y_{P,k})_S$$

    The exact value of $\left (x_{P,k},y_{P,k}\right )_S$ may be unknown: we thus define $\left (\hat {x}_{P,k},\hat {y}_{P,k}\right )_S$ as the estimated position of $P_k$ in the scene coordinate system. We will further assume that the estimated scan is in the $y$ direction and at constant speed, so that:

    $$\left(\hat{x}_{P,k},\hat{y}_{P,k}\right)_S = \left(0,k \cdot \Delta y\right)$$

  • The transmission of the instrument $\mathscr{T}\left ((x,y)_D,\sigma \right )$ is reduced to the ideal transmission of the interferometer (see Eq. (1)):
    $$\mathscr{T}\left((x,y)_D,\sigma\right) = \frac{1+\mu \cos \left(2 \pi \delta\left(x,y\right)_D \sigma \right)}{2}$$
    with $\delta \left (x,y\right )_D$ the OPD map, assumed to be independent from $\sigma$. We will further assume that this map varies linearly with $y$ only:
    $$\delta\!\left(x,y\right)_D = p_y \cdot y$$
  • We denote by $G$ [$\mathrm {m}^2.\mathrm {sr}$] the optical étendue of one pixel, by $\Delta t$ the integration time [s], and by $\eta$ the detector quantum efficiency [$\mathrm {electron}.\mathrm {photon}^{-1}$].
  • We neglect any noise, either readout noise or photon noise, and we assume that the offset and gain of each pixel are respectively 0 and 1.

With these assumptions, $I_k\left (x,y\right )_D$ [electrons], the $k\mathrm {th}$ image of the sequence, can be written as:

$$I_k\!\left(x,y\right)_D = \dfrac{G \,\Delta t\, \eta}{2}\int_{\sigma_{min}}^{\sigma_{max}} \bigl[ 1 + \mu \cos (2 \pi \delta\left(x,y\right)_D \sigma )\bigr] \times L_{\sigma}\left(\left(x+x_{P,k},y+y_{P,k}\right)_S,\sigma\right) \,\mathrm{d}\sigma$$
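The forward model of Eq. (8) can be simulated directly. The sketch below is our own toy example: the scene, its numbers and the bright-rectangle shape are arbitrary (only the OPD slope is inspired by the lab instrument of Section 5), and a single spectral line is used so that the integral over $\sigma$ reduces to one cosine term.

```python
import numpy as np

G_dt_eta = 1.0               # lumped radiometric factor G * Delta_t * eta
mu = 0.8                     # interferometer contrast
p_y = 67e-9                  # OPD slope [m/pixel], value inspired by Section 5
sigma0 = 1.5785e6            # wavenumber of the single line [m^-1] (red LED)

ny, nx = 64, 64
y = np.arange(ny)[:, None]   # detector row index (OPD direction)
x = np.arange(nx)[None, :]   # detector column index

def scene(xs, ys):
    """Total radiance of a toy scene: bright rectangle over a dark background."""
    return 1.0 + 4.0 * ((xs >= 20) & (xs < 44) & (ys >= 100) & (ys < 140))

def frame(k, dy=1.0):
    """k-th raw frame per Eq. (8): shifted scene times the fringe pattern.

    The OPD map is delta = p_y * y (Eq. (7)), fixed with respect to the
    detector, while the scene slides under it at dy pixels per frame.
    """
    xP, yP = 0.0, k * dy                      # nominal along-track scan, Eq. (5)
    fringes = (1 + mu * np.cos(2 * np.pi * p_y * y * sigma0)) / 2
    return G_dt_eta * scene(x + xP, y + yP) * fringes
```

With these numbers the fringe period is $1/(p_y \sigma_0) \approx 9.5$ pixels, so each frame shows horizontal fringes multiplying the shifted scene.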

3.2 Hyperspectral cube estimation: inverse problem

We now have to extract the interferogram $\hat {I}$ of each scene point $(x,y)_S$ from the stack of images $I_k$, that is to estimate $\hat {I}\left ((x,y)_S,\delta _k\right )$ with $\delta _k$ the sampled OPD. According to Eq. (4) and subsequent comments, the estimated location of $(x,y)_S$ in frame $I_k$ is $(x-\hat {x}_{P,k},y-\hat {y}_{P,k})_D$ —this is the key registration step. We immediately deduce that:

$$\left\{ \begin{array}{ll} \delta_k = \delta \left(x-\hat{x}_{P,k},y-\hat{y}_{P,k}\right)_D \\ \hat{I}\left((x,y)_S,\delta_k\right) = I_k(x-\hat{x}_{P,k}, y-\hat{y}_{P,k})_D \end{array} \right.$$

Note that $(x-\hat {x}_{P,k},y-\hat {y}_{P,k})_D$ may not be the center of a pixel of the matrix detector (or Focal Plane Array, FPA): image $I_k$ then has to be interpolated, but, within the scope of this article, we will assume that no interpolation error occurs. If there is furthermore no registration error, that is if $\left (\hat {x}_{P,k},\hat {y}_{P,k}\right )_S = \left (x_{P,k},y_{P,k}\right )_S$, then, using Eqs. (8) and (9), we obtain:

$$\hat{I}\left((x,y)_S,\delta_k\right) = \dfrac{G\,\Delta t\,\eta}{2}\int_{\sigma_{min}}^{\sigma_{max}} \bigl[ 1 + \mu \cos (2 \pi \delta_k \sigma )\bigr] \times L_{\sigma}\left(\left(x,y\right)_S,\sigma\right) \,\mathrm{d}\sigma$$

By comparing this equation with Eq. (1), it appears clearly that $\hat {I}\left ((x,y)_S,\delta \right )$ is the true interferogram of spectrum $L_{\sigma }\left (\left (x,y\right )_S,\sigma \right )$, sampled at OPD $\delta _k$. Thus, as stated in Section 2.1 with Eq. (2) and subsequent comments, we can retrieve the spectrum $L_{\sigma }\left (\left (x,y\right )_S,\sigma \right )$, convolved by the ILS.
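As noted above, extracting $\hat{I}$ requires evaluating each frame at the fractional pixel position $(x-\hat{x}_{P,k},y-\hat{y}_{P,k})_D$. As a minimal illustration of such a resampling (our own sketch; bilinear is only one possible kernel, and the article assumes the interpolation itself is errorless):

```python
import numpy as np

def bilinear(img, xf, yf):
    """Sample img at the fractional pixel position (xf, yf).

    Weighted average of the four surrounding pixels; exact for images
    that vary linearly in x and y, first-order accurate otherwise.
    """
    x0, y0 = int(np.floor(xf)), int(np.floor(yf))
    tx, ty = xf - x0, yf - y0
    return ((1 - tx) * (1 - ty) * img[y0, x0]
            + tx * (1 - ty) * img[y0, x0 + 1]
            + (1 - tx) * ty * img[y0 + 1, x0]
            + tx * ty * img[y0 + 1, x0 + 1])
```

Real processing chains may prefer higher-order kernels (cubic, sinc-apodized) to keep interpolation errors below the registration accuracy being sought.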

In the general case, this OPD sampling may not be regular, and specific algorithms may be needed to invert the interferogram: we can either use any least-squares solver for underdetermined linear systems (like truncated SVD or conjugate gradients), or the more specific inverse non-uniform Fourier transform algorithm [23], which exhibits a lower computational complexity. However, for the sake of simplicity of this registration study, we now assume that the OPD map is linear (Eq. (7)) and that the scan is at constant speed (Eq. (5)); the $\delta _k$ are then regularly sampled with step $a_{\delta } = p_y \cdot \Delta y$, and we can use the common Discrete Cosine Transform (DCT) to estimate the spectrum. The latter, in the case of a double-sided interferogram (both negative and positive OPD from $-\delta _{max}$ to $+\delta _{max}$), is given by:

$$\hat{L}_\sigma \left((x,y)_S,\sigma\right) = \dfrac{4 a_\delta}{G \, \Delta t \, \eta \, \mu} \sum_{k} \bigl[\hat{I}\left((x,y)_S,\delta_k\right) - \bar{\hat{I}}\bigr] \cdot \cos\left(2\pi\delta_k\sigma\right)$$
with $\bar {\hat {I}}$ the mean value of the interferogram. The sum over $k$ is implicitly taken over the values of $k$ for which there exists a frame $I_k$ where the scene point $(x,y)_S$ appears. The sum is therefore finite, leading to the limitation of the spectral resolution by the ILS (see Eq. (3)).
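For the irregularly sampled case mentioned above, a generic least-squares solver can invert the interferogram directly. The sketch below is our own construction with arbitrary numbers (it is not the NUFFT algorithm of [23]): the interferogram is modeled as a linear combination of a DC column and one cosine column per wavenumber of a discrete grid, and the system is solved with `numpy.linalg.lstsq`.

```python
import numpy as np

rng = np.random.default_rng(0)

a = 2.5e-5                       # nominal OPD step [cm]
N = 512
# irregular OPD samples: regular grid plus random jitter
delta = (np.arange(N) - N // 2) * a + rng.uniform(-0.3 * a, 0.3 * a, N)

mu = 0.8                         # interferometer contrast
sigma_grid = np.linspace(12000.0, 18000.0, 65)   # discrete spectral grid [cm^-1]
sigma0 = sigma_grid[32]                          # line placed on the grid (15000)
I = (1 + mu * np.cos(2 * np.pi * sigma0 * delta)) / 2   # Eq. (1), single line

# Design matrix: DC column + cosine columns (Eq. (1) discretized in sigma)
A = np.hstack([np.ones((N, 1)),
               (mu / 2) * np.cos(2 * np.pi * delta[:, None] * sigma_grid[None, :])])
coef, *_ = np.linalg.lstsq(A, I, rcond=None)
S_hat = coef[1:]                 # estimated line weights on the wavenumber grid
```

The grid spacing is chosen coarser than the resolution limit $1/(2\delta_{max})$, so the columns stay well conditioned and the single line is recovered at its grid position.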

Note that other interpolation schemes could be implemented: although we consider here only frame-by-frame spatial interpolation, it is for instance also possible to take advantage of multiple frames to interpolate the signal at both the desired ground location and OPD, as proposed in Fig. 7 of [7] and also described in [19] (Fig. 3.1). Nevertheless, in the framework of this article, we will restrict ourselves to frame-by-frame interpolation as described above, since our past experience has shown that it gives satisfactory results in most cases.


Fig. 3. Top: at the left, the panchromatic scene, consisting of a bright rectangle over a dark background, and then four examples of instantaneous images, inside the red rectangles. For the sake of clarity, the crossing of the $x_D$ and $y_D$ axes has been set at the bottom left of the field-of-view rather than at the center. For frame $k_0$, the expected field-of-view is indicated by the orange rectangle. Bottom: the stack of instantaneous images after registration. There are no registration errors, save at frame $k_0$. Outside the frame field-of-view, we have shown the true scene, to emphasize the registration error. At this frame $k_0$, the two ground points marked by the blue and green crosses erroneously lie inside the bright rectangle.


4. Hyperspectral cube estimation with registration errors

In the previous section, we showed that, without registration errors, we can correctly estimate the hyperspectral cube, its spectral resolution being limited only by the finite OPD range. In this section, we quantify the consequences of registration errors.

4.1 General expression

In case of registration errors, $\left (\hat {x}_{P,k},\hat {y}_{P,k}\right )_S \neq \left (x_{P,k},y_{P,k}\right )_S$. Let us note $\vec {\varepsilon }_k$ this error:

$$\vec{\varepsilon}_k = \left(\varepsilon_{x,k},\varepsilon_{y,k}\right) = \left(\hat{x}_{P,k}-x_{P,k},\hat{y}_{P,k}-y_{P,k}\right)$$

According to Eqs. (8) and (9), we have:

$$\hat{I}\left((x,y)_S,\delta_k\right) = \dfrac{G \, \Delta t \, \eta}{2}\int_{\sigma_{min}}^{\sigma_{max}} \bigl[ 1 + \mu \cos (2 \pi \delta_k \sigma )\bigr] \times L_{\sigma}\left(\left(x-\varepsilon_{x,k},y-\varepsilon_{y,k}\right)_S,\sigma\right) \,\mathrm{d}\sigma$$
with $\delta _k = \delta \left (x-\hat {x}_{P,k},y-\hat {y}_{P,k}\right )_D$. Note that there is no error on the OPD: the OPD in the cosine term of the integral is indeed $\delta _k$. This holds because the OPD map is fixed with respect to the detector, so we know exactly which OPD is associated with any point of the FPA. But the interferogram intensity is obviously erroneous, with an error defined by:
$$\Delta \hat{I}\left((x,y)_S,\delta_k\right) = \hat{I}\left((x,y)_S,\delta_k\right) - I\left((x,y)_S,\delta_k\right)$$
where $I\left ((x,y)_S,\delta \right )$ is the true interferogram of point $(x,y)_S$ at OPD $\delta$.

In order to more easily quantify this error, we will now introduce the hypothesis that the scene radiance is a separable function of the spatial and spectral variables:

$$L_\sigma\left((x,y)_S,\sigma\right) = \mathscr{L}(x,y)_S \times \mathscr{B}_{\sigma}(\sigma)$$
with the additional condition that
$$\int_{\sigma_{min}}^{\sigma_{max}} \mathscr{B}_{\sigma}(\sigma) \,\mathrm{d}\sigma = 1$$
to remove the ambiguity in the normalization of $\mathscr{L}$ and $\mathscr{B}_{\sigma }$. $\mathscr{L}$ is thus the total radiance [photons.$\mathrm {s^{-1}}$.$\mathrm {m^{-2}}$.$\mathrm {sr^{-1}}$]. Although this hypothesis would be far too crude for the hyperspectral cube estimation itself, it is a good approximation for describing the major gradients of the interferometric image [24], gradients which convert the registration errors into errors on the hyperspectral cube. With such a hypothesis, we can express the frame $I_k$ as the product of a panchromatic image $I_{panchro}\left (x,y\right )_S$ and a normalized interferogram $\iota (\delta )$ identical for all scene points:
$$I_k(x,y)_D = I_{panchro}\left(x+x_{P,k},y+y_{P,k}\right)_S \times \iota\left( \delta(x,y)_D \right)$$
with:
$$I_{panchro}\left(x,y\right)_S = \dfrac{G\,\Delta t\,\eta}{2} \cdot \mathscr{L}\left(x,y\right)_S$$
and
$$\iota\left(\delta\right) = 1 + \mu \int_{\sigma_{min}}^{\sigma_{max}} \mathscr{B}_{\sigma}(\sigma) \cdot \cos (2 \pi \delta \sigma ) \,\mathrm{d}\sigma$$

Assuming registration errors small enough so that Taylor expansion at first order is appropriate, we obtain:

$$\Delta \hat{I}\left((x,y)_S,\delta_k\right) ={-}\vec{\varepsilon}_k \cdot \vec{grad}\left(I_{panchro}\right)\left(x,y\right)_S \times \iota\left(\delta_k\right)$$

We now assume that the nominal scan speed is constant, so that $\delta _k$ is still regularly sampled and Eq. (11) can be used. Thus, according to Eq. (20) and discarding the mean value $\bar {I}$ of the interferogram, we obtain the following expression for the error $\Delta \hat {L}_{\sigma } = \hat {L}_{\sigma } - L_{\sigma }$ on the estimated spectrum:

$$\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) ={-}\dfrac{4 a_\delta}{G\,\Delta t\,\eta\,\mu} \times \sum_{k} \vec{\varepsilon}_k \cdot \vec{grad}\left(I_{panchro}(x,y)_S\right) \times \iota\left(\delta_k\right) \times \cos\left(2\pi\delta_k\sigma\right)$$
$\delta _k$ being the sampled OPD with a constant sampling step $a_\delta = p_y\,\Delta y$:
$$\delta_k = p_y \cdot y - k \cdot p_y \cdot \Delta y$$

Using the definition of $I_{panchro}$ (Eq. (18)), we can also express the radiance error with respect to $\mathscr{L}$:

$$\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) ={-}\dfrac{2 a_\delta}{\mu} \times \sum_{k} \vec{\varepsilon}_k \cdot \vec{grad}\left(\mathscr{L}(x,y)_S\right) \times \iota\left(\delta_k\right) \times \cos\left(2\pi\delta_k\sigma\right)$$
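This first-order expression can be checked numerically. The sketch below is our own toy setup (all numbers arbitrary): for a scene whose radiance is linear along $y$, the Taylor expansion of Eq. (20) is exact, so Eq. (23) with a single misregistered frame should match the brute-force difference of the estimates from Eq. (11), up to the discarded mean term.

```python
import numpy as np

a = 2.5e-5                 # OPD step a_delta [cm]
N = 2048
k = np.arange(-N // 2, N // 2)
delta = a * k              # sampled OPD for the pixel under study [cm]
mu, sigma0 = 0.8, 15000.0  # contrast, single spectral line [cm^-1]
iota = 1 + mu * np.cos(2 * np.pi * sigma0 * delta)   # Eq. (19) for one line

L0, g = 10.0, 0.5          # total radiance and its along-track gradient
e0, k0 = 0.2, 300          # registration error [pixels] on frame k0 only
eps = np.where(k == k0, e0, 0.0)

I_true = 0.5 * L0 * iota                 # Eq. (17), with G*dt*eta = 1
I_err = 0.5 * (L0 - g * eps) * iota      # Eq. (14); Taylor-exact (linear scene)

def invert(I, sig):
    """Eq. (11): discrete cosine estimate of the spectrum (G*dt*eta = 1)."""
    return (4 * a / mu) * np.sum(
        (I - I.mean())[None, :] * np.cos(2 * np.pi * sig[:, None] * delta),
        axis=1)

sig = np.linspace(14000.0, 16000.0, 200)
err_brute = invert(I_err, sig) - invert(I_true, sig)

# Eq. (23): a sinusoid in sigma whose amplitude is set by the gradient,
# the error amplitude and iota at the misregistered frame's OPD
d_k0 = a * k0
iota_k0 = 1 + mu * np.cos(2 * np.pi * sigma0 * d_k0)
err_pred = -(2 * a / mu) * e0 * g * iota_k0 * np.cos(2 * np.pi * sig * d_k0)
```

The two error curves agree to well below a percent, the small residual coming from the mean value discarded when deriving Eq. (21).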

4.2 Analysis of the general expression and illustration in the case of a single frame registration error

From Eq. (21), we see that the larger the panchromatic scene gradient, the larger the spectral error: even though this could have been anticipated, it is nonetheless a significant feature of the impact of registration errors. A second interesting component of $\Delta \hat {L}_{\sigma }\left ((x,y)_S,\sigma \right )$ in Eq. (23) is $\cos \left (2\pi \delta _k\sigma \right )$: with the $y$ dependency of $\delta _k$ (Eq. (22)), this term is $\cos \left (2\pi p_y y \sigma - 2 \pi k \, p_y \, \Delta y \, \sigma \right )$. With respect to $y$, it is therefore a sinusoid, with a period equal to the fringe spacing at wavenumber $\sigma$, independent of $k$. This explains the crenellated pattern visible in Fig. 2. Indeed, if we follow an edge of the panchromatic image, i.e. $\vec {grad}\left (I_{panchro}(x,y)_S\right )$ being constant, and if we assume a spectrum broad enough that $\iota (\delta )$ is close to one for most OPDs $\delta$, then $\Delta \hat {L}_{\sigma }\left ((x,y)_S,\sigma \right )$ is merely a sum of sinusoids of the same spatial frequency $p_y \sigma$: the result is also a sinusoid of that frequency.

Leaving equations aside, this crenellated pattern can also be explained graphically. Let us assume a very simple scene: a bright rectangle over a dark background (see Fig. 3). The scan is uniform in the $y$ direction, with only a single-frame registration error, at frame $k_0$, where the field-of-view is slightly shifted from its nominal position (top layouts of Fig. 3): the expected position is $\hat {P}_{k_0}$, but the true position is $P_{k_0}$. If we register the frames according to their nominal (and not actual) positions, an error occurs at frame $k_0$, as can be seen on the bottom layouts of Fig. 3. If we now consider two points on the registered frame stack, marked by the green and blue crosses, and close enough to the edge of the bright rectangle, they will both suffer errors at frame $k_0$. However, as they are separated along the $y$ axis, the corresponding OPD will be different for the two points (see Fig. 4). Thus, the error on the spectrum will be a sinusoid for both points, but not with the same frequency in wavenumber. In the illustrated case, the error occurs at a dark fringe for the blue point, but at a bright fringe for the green one, if we define the fringes by the central wavenumber $\sigma _0$. Thus, at this wavenumber, the error on the spectrum will be minimal (in signed value, not in absolute value) for the blue point, while it is maximal for the green point. This gives the crenellated pattern. It therefore appears that a single misregistered frame damages the whole hyperspectral cube, both spatially and spectrally. Consequently, it may be difficult to correct the impact of registration errors on the hyperspectral cube itself, and that is why special care must be exercised to properly register the images.


Fig. 4. Left: interferograms of the two points of Fig. 3; in thin black lines, the continuous errorless interferograms, and with coloured dots, the sampled interferograms, with the impact of the registration error at frame $k_0$. Since the two points have distinct $y_D$ locations, the same frame number $k$ corresponds to different OPDs. Right: the spectral errors for these two points.


4.3 Sinusoidal registration error

A specific case that may be of interest in practical applications is that of a sinusoidal error: it results for instance from residual micro-vibrations of the platform, not estimated by the inertial measurement unit (IMU). Decomposition on a sinusoidal basis may also be easier to use than the general expression of Eq. (21) for LoS or IMU specifications.

In such a case, the registration error of Eq. (12) takes the following sinusoidal form:

$$\vec{\varepsilon}_k = \left| \begin{array}{l} \varepsilon_{x} \cdot \cos\left(2\pi \dfrac{k}{K}+\varphi_x\right) \\ \varepsilon_{y} \cdot \cos\left(2\pi \dfrac{k}{K}+\varphi_y\right) \end{array} \right.$$
with $K$ the period of the perturbation in number of frames. $\vec {\varepsilon }_k \cdot \vec {grad}\left (I_{panchro}(x,y)_S\right )$ can thus be written as follows:
$$\vec{\varepsilon}_k \cdot \vec{grad}\left(I_{panchro}(x,y)_S\right) = E_0(x,y)_S \cdot \cos \left( 2\pi\dfrac{k}{K} + \varphi (x,y)_S \right)$$
with:
$$E_0 \cdot e^{i\varphi} = \varepsilon_x \cdot \dfrac{\partial I_{panchro}}{\partial x} \cdot e^{i\varphi_x} + \varepsilon_y \cdot \dfrac{\partial I_{panchro}}{\partial y} \cdot e^{i\varphi_y}$$
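Eq. (26) is the usual phasor combination of two sinusoids of equal frequency into a single one. A short numerical check (our own, with arbitrary values):

```python
import numpy as np

eps_x, eps_y = 0.3, 0.2            # error amplitudes [pixels], arbitrary
phi_x, phi_y = 0.7, -1.1           # error phases [rad], arbitrary
gx, gy = 1.5, -0.8                 # local gradient of I_panchro, arbitrary
K = 50                             # perturbation period [frames]
k = np.arange(200)

# Left-hand side of Eq. (25): direct dot product eps_k . grad(I_panchro)
lhs = (gx * eps_x * np.cos(2 * np.pi * k / K + phi_x)
       + gy * eps_y * np.cos(2 * np.pi * k / K + phi_y))

# Right-hand side: single sinusoid with amplitude and phase from Eq. (26)
z = gx * eps_x * np.exp(1j * phi_x) + gy * eps_y * np.exp(1j * phi_y)
E0, phi = np.abs(z), np.angle(z)
rhs = E0 * np.cos(2 * np.pi * k / K + phi)
```

Both sides coincide to machine precision, since each cosine is the real part of a complex exponential and the complex amplitudes simply add.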

If we still assume a constant estimated scan speed and equidistant fringes, this sinusoidal registration error will also result in a sinusoidal modulation error on the interferogram. Expressed in OPD, the $K$-frame period is $K \cdot p_y \cdot \Delta y$. We thus expect two types of artefacts in the estimated spectrum. Firstly, we will observe peaks at wavenumbers:

$$\pm \sigma_p ={\pm} \frac{1}{K \cdot p_y \cdot \Delta y},$$
because of the DC component of the interferogram. Secondly, we will observe replicas of the spectrum shifted by $\pm \sigma _p$, because of the amplitude modulation of the AC component of the interferogram. Indeed, using Eqs. (25) and (21), and taking advantage of the Fourier transform relationship between $\iota$ and $\mathscr{B}_\sigma$, one can show (see details in the appendix at the end of the article) that, for positive wavenumbers:
$$\begin{aligned}\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) &={-}\dfrac{2 E_0(x,y)_S}{G \, \Delta t \, \eta \, \mu} \times \cos \left( 2\pi p_y y \sigma_p + \varphi (x,y)_S \right) \\ &\times \Bigg[ 2 \delta_{max} \mathrm{sinc}\left(2 \delta_{max} \left(\sigma - \sigma_p\right)\right) + \dfrac{\mu}{2} \cdot \mathscr{B}'_\sigma(\sigma-\sigma_p) + \dfrac{\mu}{2} \cdot \mathscr{B}'_\sigma(\sigma+\sigma_p) \Bigg]\end{aligned}$$
with $\delta _{max}$ the maximum OPD and $\mathscr{B}'$ the spectrum convolved by the sinc ILS (see Eq. (3)). The three expected terms are indeed present.
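The ghost at $\sigma_p$ and the replicas at $\sigma_0 \pm \sigma_p$ can be reproduced with a toy simulation. The numbers below are our own choices, for a single pixel at $y=0$ so that the spatial phase term of Eq. (28) is dropped: the interferogram of one spectral line is amplitude-modulated with period $K$ frames, and the cosine-transform estimate then shows spurious lines at the positions predicted by Eq. (27).

```python
import numpy as np

a = 2.5e-5                    # OPD step a_delta = p_y * Delta_y [cm]
N, K = 2048, 64               # number of frames; perturbation period [frames]
k = np.arange(-N // 2, N // 2)
delta = a * k                 # sampled OPD [cm]
mu, sigma0 = 0.8, 15000.0     # contrast, true line position [cm^-1]
sigma_p = 1 / (K * a)         # expected ghost position, Eq. (27): 625 cm^-1

P, E0 = 1.0, 0.1              # panchromatic level and modulation amplitude
# Interferogram modulated by the sinusoidal registration error (Eqs. 20, 25)
I = (P - E0 * np.cos(2 * np.pi * k / K)) * (1 + mu * np.cos(2 * np.pi * sigma0 * delta))

def S(sig):
    """Cosine-transform estimate of the spectrum at wavenumber sig [cm^-1]."""
    return (4 * a / mu) * np.sum((I - I.mean()) * np.cos(2 * np.pi * sig * delta))
```

The spectrum shows the main line at $\sigma_0$, a ghost at $\sigma_p$ from the modulated DC component, and two weaker replicas at $\sigma_0 \pm \sigma_p$ from the modulated AC component, all well above the sinc-sidelobe baseline.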

4.4 Discussion and comparison with a dynamic FTS

The error in the estimated spectrum, $\Delta \hat {L}_{\sigma }\left ((x,y)_S,\sigma \right )$, is the product of three terms: an amplitude term depending on the scene geometry and on the magnitude of the registration error, a spatially varying term, and a spectral term. The first two have already been described in subsection 4.2: the perturbation amplitude is proportional to the radiance gradient in the direction of the registration error, and the spatially varying term comes from the linearly varying OPD map. Indeed, $2\pi p_y y \sigma _p + \varphi (x,y)_S = 2 \pi \frac {y}{K \Delta y} + \varphi$ is the phase of the perturbation when $(x,y)_S$ was at the zero path difference, that is, for frame $k=\frac {y}{\Delta y}$. Due to the linear OPD map, this phase depends linearly on the position along the $y$ axis. This is the main difference with dynamic iFTS systems: for the latter, the registration error affects the interferograms with the same phase, independent (at least at first order) of the location on the FPA.

The last term of Eq. (28) is a spectral term, itself consisting of three parts: a main peak due to the modulation of the DC component, and two replicas of the true spectrum due to the modulation of the AC component. This spectral term is the same as for dynamic iFTS: the reader may for instance compare it with equation (13) of [15], even though our notation is different. Thus, the whole discussion developed by Miecznik and Johnson in Section 3.1 of [15] about this spectral term also applies to isFTS.

5. Experimental illustrations

In this Section, we experimentally illustrate the analytical results presented above, first with laboratory data, for which we can control the registration error, and then with airborne data from the Sieleters instrument, to demonstrate the relevance of our results in operational conditions.

5.1 Laboratory results

We designed and built a compact and robust isFTS instrument in the visible domain, specifically targeted at laboratory investigations of basic principles and limitations. In particular, in order to avoid some difficulties in the theoretical analyses, we sized it so that it fulfills the Shannon-Nyquist criterion: we can thus assume that image interpolation is errorless.

A picture of the setup and its layout are given in Fig. 5. On the right side, we see (from right to left) the camera, its imaging lens, and the compact interferometer ahead of the lens, the whole being mounted on a rotation stage in order to scan the scene. On the left side, we see the parts that allow us to build a controlled scene located at infinity, with (from left to right) an integrating sphere acting as a uniform source, a slide holder, and a collimator, the assembly being fixed on the breadboard in front of the rotating imaging interferometer.


Fig. 5. Experimental setup (top view). 1: integrating sphere, 2: target, 3: baffle, 4: collimator, 5: polarizer, interferometer (made of two plates) and analyser, 6: imaging lens, 7: camera, 8: rotation stage.


The lateral shearing interferometer gives nearly straight and equidistant fringes located at infinity. In our experiment, it is a birefringent interferometer, based on the configuration described in [25]. It is compact, static, and does not require any alignment or tuning. It was designed to give an OPD slope $p_y$ of $67\,\textrm {nm/pixel}$, or a $\pm 68\,\mathrm {\mu }\textrm {m}$ OPD excursion over the whole ALT field-of-view ($\pm {5.8}^{\circ }$) of the imaging system, even though we used only a reduced OPD range of [-$31\,\mathrm {\mu }\textrm {m}$;+$13\,\mathrm {\mu }\textrm {m}$] in order to deal with lighter data. The birefringent device, manufactured by Altechna in calcite (two plates of thickness ${6.30}\,\textrm {mm}$ and ${3.40}\,\textrm {mm}$), and the Moxtek polarizers have been carefully designed and mounted so as to avoid any vignetting in this useful part of the field-of-view. The lens, a C-mount Schneider-Kreuznach ApoXenoplan 35/2.0, has an effective focal length of ${35}\,\textrm {mm}$ and is used in our experiment at a F-number of 17 in order to fulfill the Shannon-Nyquist criterion at all useful wavelengths, as stated above. We put the center of its (virtual) entrance pupil on the rotation stage axis to avoid vignetting by the collimator when we scan the fixed scene. The panchromatic VisNIR silicon sensor is an Allied Vision GT2450 camera, with a total of 2448x2050 pixels of ${3.45}\,\mathrm {\mu }\textrm {m}$ pitch, but only images of 750 (across-track, ACT) x 700 (along-track, ALT) pixels are used. The optomechanics of the hyperspectral camera are home-made, and the camera is mounted on a Thorlabs DDR100/M rotation stage. The scene is generated with a Thorlabs 4P3 100 mm-diameter integrating sphere, fed either with an HeNe laser to measure the OPD map, or with a fiber-coupled red LED source (M625F2) emitting around $\sigma _{LED}={15785}\,\textrm {cm}^{-1}$ for hyperspectral measurements.
This integrating sphere and the LED are used as a uniform back-illumination for a transmittance target, a set of multi-frequency Ronchi rulings (Edmund Optics) with increasing spatial frequencies (see Fig. 6), which is set at the object focal plane of the collimator lens (Thorlabs TTL200), allowing us to have a spectrally and spatially well-defined scene located at infinity in front of the imaging interferometer.
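As a quick consistency check on the design values quoted above, the OPD excursion and the Shannon-Nyquist condition can be re-derived from the rounded figures. This is only a sketch with illustrative assumptions (full 2050-pixel ALT field, $\lambda \approx 633\,$nm as the shortest useful wavelength), not the exact design computation:

```python
# Consistency checks on the quoted design values (rounded, illustrative inputs)
p_y = 67e-9               # OPD slope, m/pixel
half_alt_field_px = 1025  # half of the 2050 ALT pixels of the sensor (assumption)
opd_excursion = p_y * half_alt_field_px   # ~68.7e-6 m, i.e. the quoted +/-68 um

pitch = 3.45e-6           # pixel pitch, m
f_number = 17.0
lam = 633e-9              # red LED / HeNe region, m (assumed shortest wavelength)
optical_cutoff = 1.0 / (lam * f_number)   # diffraction cutoff, cycles/m
sensor_nyquist = 1.0 / (2.0 * pitch)      # sensor Nyquist frequency, cycles/m
# At F/17 the diffraction cutoff falls below the sensor Nyquist frequency,
# so the sampled images satisfy the Shannon-Nyquist criterion.
print(opd_excursion, optical_cutoff < sensor_nyquist)
```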


Fig. 6. Monochromatic images of the scene without (left) and with (right) registration error, at $\sigma _{LED} = \,{15785}\,\textrm {cm}^{-1}$ (top) and at $\sigma _p = \,{12820}\,\textrm {cm}^{-1}$ (bottom). The spectra of the points marked A, B and C are plotted in Fig. 7.


The motion of the rotation stage is continuous, and a single program controls both the camera and the stage. The total angular amplitude of the rotation was set to $\pm {5}^{\circ }$. The frame rate of the camera was set to ${2}\,\textrm {Hz}$, and the rotation speed was chosen so that the displacement $\Delta y$ between two consecutive images is ${1.95}\,\textrm {pixel}$, that is, about a quarter of a fringe, much finer than the Nyquist criterion: $a_\delta = p_y\,\Delta y = {130}\,\textrm {nm}$. We thus obtained a sequence of 712 images cropped to 750x700 pixels. A two-point Non-Uniformity Correction (NUC) is applied to these images, with a camera gain previously measured without the interferometer.

These images are first registered assuming a constant angular scan speed. We thus obtain a hyperspectral cube which we consider as the reference data. We then purposely add an ACT sinusoidal perturbation of amplitude $\varepsilon _x=0.5$ pixel and of period $K=6$ images, leading to $\sigma _p={12820}\,\textrm {cm}^{-1}$.

Figure 6 shows "monochromatic" images, at two wavenumbers, $\sigma _{LED}$ and $\sigma _p$, both without and with registration errors: the crenellated artifact is clearly visible on the vertical edges of the scene at $\sigma _p$. Its spatial period is ${11.6}\,\textrm {pixel}$, perfectly in line with the expected value given by the cosine term of Eq. (21): $\frac {1}{p_y \sigma _p}$.

The spectral extent and magnitude of the artefact are also consistent with the theoretical results, as can be seen in Fig. 7: in the bottom part of this figure, we plotted $\Delta \hat {L}_{\sigma }$ for the three points marked by a red circle in Fig. 6. The spurious peak has a sinc shape with a full bandwidth (at first zero) indeed equal to the spectral resolution of the spectrometer, $\frac {1}{\delta _{max}}={322}\,\textrm {cm}^{-1}$ with $\delta _{max}$=$31\,\mathrm {\mu }\textrm {m}$. We indicated by a black star the expected value of the peak magnitude, computed from Eq. (28) using an estimate of the gradient of the panchromatic image obtained by finite differences. The agreement is quite good, and the tiny differences may be partly explained by errors in the numerical estimation of the gradient.
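The figures quoted in this subsection can be re-derived from the quoted instrument parameters. The following sketch uses the rounded values given above, so the last digits may differ slightly from the exact design values:

```python
# Re-deriving the observed figures from the quoted parameters (rounded inputs)
p_y = 67e-7          # OPD slope, cm/pixel (67 nm/pixel)
a_delta = 130e-7     # OPD step per frame, cm (130 nm)
K = 6                # period of the added perturbation, in frames

sigma_p = 1.0 / (K * a_delta)           # Eq. (27): about 12820 cm^-1
crenel_period = 1.0 / (p_y * sigma_p)   # cosine term of Eq. (21): about 11.6 px
delta_max = 31e-4                       # maximum OPD, cm (31 um)
resolution = 1.0 / delta_max            # sinc full width at first zero, cm^-1
print(round(sigma_p), round(crenel_period, 1), round(resolution))
```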


Fig. 7. Top: spectra of the three points A, B and C marked in Fig. 6, without registration error (blue) and with it (red). For the sake of readability, an offset has been added to the spectra. Bottom: error on the spectrum due to the registration error; here also, an offset has been added for readability.


5.2 Airborne results

Another illustration of the theoretical results of the previous Section is provided by airborne experimental data acquired with the Sieleters instrument [9]. In this instrument, image registration is performed in two steps, as indicated in Subsection 2.2: the LoS data give a first estimate, which is refined by image correlation. We consider that, with respect to the LoS accuracy, image correlation provides the true registration parameters; the difference between these parameters and the LoS-based ones is therefore the registration error.

In the same way as in Fig. 6, we plotted in Fig. 8 Sieleters images at two distinct wavenumbers, registration being performed either with LoS data only or refined by image correlation. It clearly appears that, when using LoS data only, severe artefacts degrade the image at ${2700}\,\textrm {cm}^{-1}$, but not at ${2630}\,\textrm {cm}^{-1}$.


Fig. 8. Monochromatic images from Sieleters when interferometric images are registered with LoS data and image correlation (left), or with LoS data only (right), at ${2632}\,\textrm {cm}^{-1}$ (top) and ${2702}\,\textrm {cm}^{-1}$ (bottom). The spectra of the points marked A, B and C are plotted in Fig. 9. On the right, we included a zoom to show the spatial period of the artefact, about ${2.7}\,\textrm {pixels}$, as expected at ${2700}\,\textrm {cm}^{-1}$. We also included the corresponding area of the panchromatic image.


The explanation comes from Fig. 9. On the left, we plotted the difference between registration with and without image correlation as a function of frame number. Except for a very smooth drift along the $y$ axis (probably coming from a bias on the correlation image due to different elevations in the scene), this difference is quite low, below 0.2 pixel. However, oscillatory components are present, as revealed by the Fourier transform shown at the bottom right, with the frequency axis expressed in wavenumber according to Eq. (27). Above this frequency analysis, we show the spectra of the three points A, B and C marked in Fig. 8, together with the difference between the spectra estimated with LoS-only and with LoS plus image correlation registration: the peaks of the registration error clearly coincide with the spurious peaks in the spectra. This proves, on an operational airborne instrument, how important very precise image registration is, both to obtain geometrically clean images (suppression of the crenellated pattern, which may for instance complicate image segmentation) and accurate spectra (suppression of spurious peaks, which may for instance be mistaken for atmospheric lines).
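The frequency-axis conversion used for Fig. 9 can be sketched as follows. The series of registration errors here is synthetic, and $a_\delta$ is the laboratory value from Subsection 5.1 (the Sieleters OPD step differs); the point is only the mapping from cycles per frame to the wavenumber of the induced spurious peak:

```python
import numpy as np

# Converting the Fourier analysis of a registration-error series from
# cycles/frame to the wavenumber of the induced spurious peak (Eq. (27)).
# Synthetic error series; a_delta is the laboratory value, Sieleters differs.
a_delta = 130e-7                     # OPD step per frame, cm
N, K = 720, 6
k = np.arange(N)
eps_x = 0.5 * np.cos(2*np.pi*k/K)    # ACT registration error, in pixels

amp = np.abs(np.fft.rfft(eps_x))
nu = np.fft.rfftfreq(N)              # frequency axis, cycles per frame
sigma = nu / a_delta                 # wavenumber axis: sigma_p = nu / a_delta
peak_sigma = sigma[np.argmax(amp[1:]) + 1]   # skip the DC bin
print(f"{peak_sigma:.0f} cm^-1")     # a period of K frames lands at 1/(K*a_delta)
```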


Fig. 9. Left: registration error when using LoS data only, in the $x$ (ACT) and $y$ (ALT) directions, as a function of frame number (note however that the slow drift along the $y$ axis may rather come from image correlation errors). Right: at the bottom, the Fourier transform of the registration errors, with the frequency axis converted to wavenumber according to Eq. (27); in the middle, the error on the spectrum for the three points marked in Fig. 8; and at the top, the spectra of these points, with and without registration errors. The dotted lines have been set on the two main peaks of the Fourier analysis of the registration error.


6. Conclusion

Image registration is a key step of an isFTS processing chain, and registration errors may have a significant impact on hyperspectral image quality. We analytically established the existence and significance of spatial and spectral artefacts on the edges of the scene. The spatial artefacts are very specific to isFTS: on the spectral images, they form a crenellated pattern with the same spatial frequency as the interference fringes on the interferometric images. The spectral artefacts are similar to those obtained with dynamic iFTS: each temporal frequency of the registration error adds a peak to the spectrum and two ghost spectra. Experimental results, both from laboratory and from airborne images, confirmed this analysis. We hope that this work will help engineers developing such isFTS to correctly design their systems, whether by provisioning a well-adapted IMU, by establishing the required platform stability, or by deciding whether image-based registration is required.

Appendix: details of calculation of Eq. (28)

Thanks to Eqs. (21), (22) and (25), and defining $C^{st}$ as:

$$C^{st} ={-}\dfrac{4 a_\delta}{G \, \Delta t \, \eta \, \mu}$$
with $a_\delta = p_y \cdot \Delta y$, we have:
$$\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) = C^{st} \times \mathscr{R}e \Bigg\{ \sum_{k} E_0(x,y)_S \cdot \cos \left( 2\pi\dfrac{k}{K} + \varphi (x,y)_S \right) \times \iota\left(\delta_k\right) \times e^{2i\pi p_y y \sigma} \times e^{{-}2i\pi k a_\delta \sigma} \Bigg\}$$

We then introduce $\sigma _p$ defined by Eq. (27) and we obtain:

$$\begin{aligned}\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) &= \frac{C^{st}}{2} \times \mathscr{R}e \Bigg\{ E_0(x,y)_S \cdot e^{{+}2i\pi p_y y \sigma} \cdot e^{{+}i\varphi (x,y)_S} \cdot \sum_{k} e^{{+}2i\pi k a_\delta \cdot \left( \sigma_p - \sigma \right)} \cdot \iota\left(\delta_k\right) \\ &+ E_0(x,y)_S \cdot e^{{+}2i\pi p_y y \sigma} \cdot e^{{-}i\varphi (x,y)_S} \cdot \sum_{k} e^{{+}2i\pi k a_\delta \cdot \left( -\sigma_p -\sigma \right)} \cdot \iota\left(\delta_k\right) \Bigg\}\end{aligned}$$
with $\delta _k = p_y \, y - k \, a_\delta$.

According to Eq. (19), $\iota (\delta )$ is such that $\iota \left (\delta \right ) = 1 + \frac {\mu }{2} \int _{-\infty }^{+\infty } \mathscr{B}_{\sigma }(\sigma ) \cdot e^{2 i \pi \delta \sigma } \,\mathrm {d}\sigma$, with $\mathscr{B}_{\sigma }$ extended to negative wavenumbers by parity. Thus:

$$\int_{-\infty}^{+\infty} \iota(\delta+\delta_0) e^{{-}2i\pi\delta\sigma} \,\mathrm{d}\delta = \Bigl[ \mathcal{D}\!\mathit{irac}(\sigma) + \frac{\mu}{2}\mathscr{B}_{\sigma}(\sigma) \Bigr] \times\, e^{{+}2i\pi\delta_0 \sigma}$$
and after regular sampling of step $|a_\delta |$, we get on the basic wavenumber cell $[-\sigma _{max},+\sigma _{max}]$:
$$\sum_{k} \iota(\delta_0 - k\,a_\delta) e^{{+}2i\pi k a_\delta \sigma} = \frac{1}{|a_\delta|} \Bigl[ \mathcal{D}\!\mathit{irac}(\sigma) + \frac{\mu}{2}\mathscr{B}_{\sigma}(\sigma) \Bigr] \times e^{{+}2i\pi\delta_0 \sigma}$$

The OPD range being limited between $-\delta _{max}$ and $+\delta _{max}$, the sum over $k$ and the spectral resolution are also finite:

$$\sum_{k} \iota(\delta_0-k\,a_\delta) e^{{+}2i\pi k a_\delta \sigma} = \frac{1}{|a_\delta|} \Bigl[ 2\delta_{max}\mathrm{sinc}(2\delta_{max}\sigma) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma) \Bigr] \times\, e^{{+}2i\pi\delta_0 \sigma}$$

where $\mathscr{B}'$ is $\mathscr{B}$ convolved with the ILS.

Using this result we have:

$$\sum_{k} e^{{+}2i\pi k a_\delta \cdot \left({\pm} \sigma_p - \sigma \right)} \cdot \iota\left(\delta_k\right) = \frac{1}{|a_\delta|} \Bigl[ 2\delta_{max}\mathrm{sinc}(2\delta_{max}\left(\sigma \mp \sigma_p\right)) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma \mp \sigma_p) \Bigr] \times\, e^{{+}2i\pi p_y y \cdot \left({\pm} \sigma_p - \sigma \right)}$$
where we used the parity of sinc and of $\mathscr{B}'_{\sigma }$. Consequently:
$$\begin{array}{l}\mathscr{R}e \Bigg\{ E_0(x,y)_S \cdot e^{{+}2i\pi p_y y \sigma} \cdot e^{{\pm} i\varphi (x,y)_S} \cdot \sum_{k} e^{{+}2i\pi k a_\delta \cdot \left({\pm} \sigma_p - \sigma \right)} \cdot \iota\left(\delta_k\right)\Bigg\} \\ = \frac{E_0(x,y)_S}{|a_\delta|} \cdot \cos\left(2\pi p_y y \sigma_p + \varphi(x,y)_S\right) \times \Bigl[ 2\delta_{max}\mathrm{sinc}(2\delta_{max}\left(\sigma \mp \sigma_p\right)) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma \mp \sigma_p) \Bigr]\end{array}$$

Therefore, Eq. (31) becomes:

$$\begin{array}{l}\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) = \frac{C^{st}}{2} \cdot \frac{E_0(x,y)_S}{|a_\delta|} \cdot \cos\left(2\pi p_y y \sigma_p + \varphi(x,y)_S\right) \\ \times \Bigl[ 2\delta_{max}\mathrm{sinc}(2\delta_{max}\left(\sigma - \sigma_p\right)) + 2\delta_{max}\mathrm{sinc}(2\delta_{max}\left(\sigma + \sigma_p\right)) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma - \sigma_p) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma + \sigma_p) \Bigr]\end{array}$$
and we can further simplify by neglecting the $\mathrm {sinc}(2\delta _{max}\left (\sigma + \sigma _p\right ))$ term, since it is a peak at $-\sigma _p$ while we are only interested in positive wavenumbers. Thus, we can write:
$$\begin{aligned}\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) &={-}\dfrac{2 E_0(x,y)_S}{G \, \Delta t \, \eta \, \mu} \cdot \cos\left(2\pi p_y y \sigma_p + \varphi(x,y)_S\right) \\ &\times \Bigl[ 2\delta_{max} \cdot \mathrm{sinc}(2\delta_{max} \cdot \left(\sigma - \sigma_p\right)) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma - \sigma_p) + \frac{\mu}{2}\mathscr{B}'_{\sigma}(\sigma + \sigma_p) \Bigr]\end{aligned}$$
which is Eq. (28).

As for Eq. (23), it may sometimes be more convenient to express $\Delta \hat {L}_{\sigma }$ with respect to $\mathscr{L}$ rather than $I_{panchro}$. We thus define $\mathscr{E}_0 = E_0 \times \frac {2}{G\,\Delta t\, \eta }$, which is equivalent to:

$$\vec{\varepsilon}_k \cdot \vec{grad}\left(\mathscr{L}(x,y)_S\right) = \mathscr{E}_0(x,y)_S \cdot \cos \left( 2\pi\dfrac{k}{K} + \varphi (x,y)_S \right)$$
with:
$$\mathscr{E}_0 \cdot e^{i\varphi} = \varepsilon_x \cdot \dfrac{\partial \mathscr{L}}{\partial x} \cdot e^{i\varphi_x} + \varepsilon_y \cdot \dfrac{\partial \mathscr{L}}{\partial y} \cdot e^{i\varphi_y}$$

Then, we obtain:

$$\begin{aligned}\Delta\hat{L}_{\sigma}\left((x,y)_S,\sigma\right) &={-}\mathscr{E}_0(x,y)_S \cdot \cos\left(2\pi p_y y \sigma_p + \varphi\right) \times \\ &\Bigl[ \frac{2\delta_{max}\mathrm{sinc}(2\delta_{max}\left(\sigma - \sigma_p\right))}{\mu} + \frac{\mathscr{B}'_{\sigma}(\sigma - \sigma_p) + \mathscr{B}'_{\sigma}(\sigma + \sigma_p)}{2} \Bigr]\end{aligned}$$

Acknowledgments

We gratefully acknowledge the support of Thales Alenia Space (Cannes, France).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. R. F. Horton, “Optical design for a high-etendue imaging Fourier-transform spectrometer,” in Imaging Spectrometry II, vol. 2819 (SPIE, 1996), pp. 300–315.

2. R. J. Huppi, R. B. Shipley, and E. R. Huppi, “Balloon-borne Fourier spectrometer using a focal plane detector array,” in Multiplex and/or High Throughput Spectroscopy, vol. 191 (SPIE, 1979), pp. 26–32.

3. A. Pola Fossi, Y. Ferrec, N. Roux, O. D’almeida, N. Guerineau, and H. Sauer, “Miniature and cooled hyperspectral camera for outdoor surveillance applications in the mid-infrared,” Opt. Lett. 41(9), 1901 (2016). [CrossRef]  

4. C. Bai, J. Li, Y. Xu, H. Yuan, and J. Liu, “Compact birefringent interferometer for Fourier transform hyperspectral imaging,” Opt. Express 26(2), 1703–1725 (2018). [CrossRef]  

5. I. G. Renhorn, T. Svensson, and G. D. Boreman, “Performance of an uncooled imaging interferometric spectrometer with intrinsic background radiation,” Opt. Eng. 60(03), 033106 (2021). [CrossRef]  

6. P. G. Lucey, K. A. Horton, and T. Williams, “Performance of a long-wave infrared hyperspectral imager using a Sagnac interferometer and an uncooled microbolometer array,” Appl. Opt. 47(28), F107–F113 (2008). [CrossRef]  

7. Y. Ferrec, J. Taboury, H. Sauer, P. Chavel, P. Fournet, C. Coudrain, J. Deschamps, and J. Primot, “Experimental results from an airborne static Fourier transform imaging spectrometer,” Appl. Opt. 50(30), 5894–5904 (2011). [CrossRef]  

8. G. Zhang, D. Shi, S. Wang, T. Yu, and B. Hu, “Data correction techniques for the airborne large-aperture static image spectrometer based on image registration,” J. Appl. Remote Sens. 9(1), 095088 (2015). [CrossRef]  

9. C. Coudrain, S. Bernhardt, M. Caes, R. Domel, Y. Ferrec, R. Gouyon, D. Henry, M. Jacquart, A. Kattnig, P. Perrault, L. Poutier, L. Rousset-Rouvière, M. Tauvy, S. Thétas, and J. Primot, “SIELETERS, an airborne infrared dual-band spectro-imaging system for measurement of scene spectral signatures,” Opt. Express 23(12), 16164 (2015). [CrossRef]  

10. C. Kirkconnell, M. Nunes, I. Ruelich, M. Zagarola, and S. Rafol, “Integration of a tactical cryocooler for 6U CubeSat hyperspectral thermal imager,” in Proceedings of the 21st International Cryocooler Conference (Cryocoolers 21) (ICC, 2021).

11. A. Gabrieli, R. Wright, J. N. Porter, P. G. Lucey, and C. Honnibal, “Applications of quantitative thermal infrared hyperspectral imaging (8-14 μm): measuring volcanic SO2 mass flux and determining plume transport velocity using a single sensor,” Bull. Volcanol. 81(8), 47 (2019). [CrossRef]  

12. R. G. Sellar and G. D. Boreman, “Classification of imaging spectrometers for remote sensing applications,” Opt. Eng. 44(1), 013602 (2005). [CrossRef]  

13. C. L. Bennett, “Effect of jitter on an imaging FTIR spectrometer,” in Infrared Imaging Systems: Design, Analysis, Modeling, and Testing VIII, vol. 3063 (SPIE, 1997), pp. 174–184.

14. A. Mahgoub, “Retrieving spectra from a moving imaging Fourier transform spectrometer,” Ph.D. thesis, Université Laval, Québec, Canada (2015).

15. G. Miecznik and B. R. Johnson, “Effects of line-of-sight motion on hyperspectral Fourier transform measurements,” J. Appl. Remote Sens 9(1), 095982 (2015). [CrossRef]  

16. J. Jing, R. Wei, and Y. Yuan, “Effect of platform attitude stability on image quality of spatially modulated imaging Fourier transform spectrometer,” in 2010 Third International Symposium on Information Science and Engineering (IEEE, 2010), pp. 153–156.

17. L. Zhang, Y. Chang, Y. Tang, Y. Nan, and Q. Guo, “Simulation of imaging spectrometers degraded by satellite vibrations with pseudo cross-correlation theory,” in 2013 International Conference on Optical Instruments and Technology: Optical Systems and Modern Optoelectronic Instruments, vol. 9042 (SPIE, 2013), pp. 403–413.

18. X. Ma, J. Yang, W. Qiao, and B. Xue, “An improved Fourier-based sub-pixel image registration algorithm for raw image sequence of LASIS,” in International Symposium on Photoelectronic Detection and Imaging 2007: Image Processing, vol. 6623 (SPIE, 2008), pp. 75–82.

19. F. Wang, J. Zhou, J. Jing, Q. Wu, and W. Cheng, “Research on LASIS interferogram processing,” in MIPPR 2015: Multispectral Image Acquisition, Processing, and Analysis, vol. 9811 (SPIE, 2015), pp. 114–121.

20. R. J. Bell, Introductory Fourier Transform Spectroscopy (Academic Press, 1972).

21. S. P. Davis, M. C. Abrams, and J. W. Brault, Fourier Transform Spectrometry (Academic Press, 2001).

22. C. Barbanson, A. Almansa, Y. Ferrec, and P. Monasse, “Relief computation from images of a fourier transform spectrometer for interferogram correction,” in Fourier Transform Spectroscopy, (Optica Publishing Group, 2016), pp. FM3E–6.

23. M. Kircheis and D. Potts, “Direct inversion of the nonequispaced fast Fourier transform,” Linear Algebr. Its Appl. 575, 106–140 (2019). [CrossRef]  

24. D.-C. Soncco, C. Barbanson, M. Nikolova, A. Almansa, and Y. Ferrec, “Fast and accurate multiplicative decomposition for fringe removal in interferometric images,” IEEE Trans. Comput. Imaging 3(2), 187–201 (2017). [CrossRef]  

25. P. B. Phua and B. C. Lim, “Hyperspectral imaging device,” (2011). Patent WO 2011093794 A1.
