Spectral data acquired with traditional push-broom hyperspectral cameras may be significantly distorted due to spatial misregistration such as keystone. The mixel camera is a new type of push-broom hyperspectral camera, where an image recorded with arbitrary (even large) keystone is reconstructed to a nearly keystone-free image. The key component of the mixel camera is an array of light mixing chambers in the slit plane, and the precision of the image reconstruction depends on the light mixing properties of these chambers. In this work we describe how these properties were measured in a mixel camera prototype. We also investigate the potential performance of the mixel camera in terms of spatial co-registration, based on the measured response of the mixing chambers to a point source. The results suggest that, with the current chambers, a perfectly characterized mixel camera should have residual spatial misregistration that is equivalent to 0.02-0.03 pixels keystone. This compares favorably to high resolution instruments where keystone is corrected in hardware or by resampling.
© 2015 Optical Society of America
1. Introduction
Hyperspectral cameras are capable of acquiring 2D images where each pixel contains spectral information about the corresponding area of the scene. Naturally, the spectral data is not absolutely precise and contains errors – mostly due to various imperfections in the camera optics, such as smile and keystone [2,3]. In order to reduce these errors to a reasonably low level, certain compromises must be made in the camera design – compromises that affect key camera specifications, such as spatial resolution and light throughput.
When designing a high-end hyperspectral camera, 0.1 pixel keystone is often set as the design goal. The keystone of a real camera may be larger due to various factors: manufacturing tolerances, imperfect alignment, as well as changes in the optical and mechanical properties of the camera components due to changes in ambient temperature and pressure. As the resolution of modern sensors increases, it becomes more challenging to correct optical aberrations, such as keystone and smile, to a small fraction of a pixel, and to keep them at this level in a real camera. For example, the upcoming high-end spaceborne instrument EnMAP, with a spatial resolution of 1024 pixels, is specified to have keystone below 0.2 pixels.
If smile and keystone are corrected in postprocessing instead, the optical designer can put more effort into increasing the spatial resolution and light throughput of the camera. Use of a good resampling technique for correcting these aberrations makes it possible to significantly increase the spatial resolution and light throughput of a camera, while keeping the accuracy of the spectral data for each pixel comparable to the best modern cameras where optical aberrations are corrected in hardware, as shown in [3,6].
The mixel camera is a new hyperspectral camera concept, where optical keystone is corrected by use of an array of light mixing chambers in the slit plane combined with a mathematical method for restoring the data in postprocessing [7,8]. The mixel camera offers similar advantages to resampling cameras in terms of increased spatial resolution and light throughput. In addition, the mixel camera is expected to be capable of significantly lower spectral errors than the best hyperspectral cameras available today [4,6].
A key component for the performance of the mixel camera (in terms of very low keystone errors) is the array of light mixing chambers that is placed in the slit plane. The purpose of the chambers is to mix the incoming light as well as possible, so that the light distribution at the output of a chamber is independent of the light distribution at the input. The previous paper on the mixel camera used a geometric ray tracing model in two dimensions to show that for a certain ratio between the length and the width of the chamber the light is mixed nearly perfectly, so that any residual errors caused by imperfect light mixing are negligible. However, when the chambers are not very large (~tens of microns) compared to the wavelength, a geometric approximation [9,10] may not be sufficiently precise anymore and wave optics theory must be applied [11,12]. The light distribution at the output of a very long and narrow waveguide (single mode fiber) is well known from the literature, and deviates significantly from what a geometric ray tracing model would predict. However, the case of partially coherent light that passes through a very short air waveguide with metal reflective walls, which is the case for the light mixing chambers of the mixel camera, does not seem to be discussed in the literature. It was therefore decided to verify the performance of the mixing chambers experimentally.
A mixel camera prototype was built in order to test and evaluate the performance of the light mixing chambers. Results from the tests are presented and discussed in this paper. Section 2 briefly presents the mixel camera concept. In Section 3 the light mixing chambers are described in more detail, while Section 4 briefly describes the layout and design of the mixel camera prototype. Section 5 shows the experimental setup and describes the measurement procedure. Results from the measurements of the light distribution at the output of the chambers are presented in Section 6. In Section 7 the mixel camera performance is analysed through simulations, based on the measured output light distributions for the chambers. Finally, the conclusions are given in Section 8.
2. Mixel camera concept
The principle for the mixel camera is shown in Fig. 1. The mixel camera contains an array of light mixing chambers that is placed in the slit. The incoming light from the scene is mixed in the chambers, so that the light distribution at the output of a chamber becomes uniform. While the light distribution at the input of the chamber is unknown due to subpixel sized details in the scene, the light distribution at the output is always known (uniform in this example). When the light distribution is known, it is possible to restore data captured with keystone to its original keystone-free form.
In order to restore the energy content of the N mixing chambers from the M recorded sensor pixels, where M > N, the following data restoring equation set must be solved:

Sm = Σn qmn En,  m = 1, …, M,  (1)

where En is the energy content of mixing chamber n, Sm is the signal recorded in sensor pixel m, and the sum runs over n = 1, …, N.
Equation (1) is solved independently for each spectral channel. The matrix coefficients qmn and the number of recorded sensor pixels M will typically be different for different spectral channels, while the mixing chamber array is common for all spectral channels and therefore the number of mixing chambers is always N. It is required that the number of sensor pixels M is larger than the number of mixing chambers N in each spectral channel, but already for M ≈ 1.1N the data reconstruction works well, i.e., the spatial resolution of the sensor is more or less preserved in the final data cube.
Note that the light distribution at the output of the chambers does not necessarily have to be uniform for the method to work. What is important is that the light distribution at the output is known and independent of the light distribution at the input of the chambers. When this is the case, the appropriate matrix coefficients qmn (that describe how the energy of a single mixing chamber is divided between sensor pixels) can be used, and the data can be reconstructed perfectly.
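As an illustration of this restoring step, Eq. (1) is an overdetermined linear system that can be solved in the least-squares sense. The sketch below (NumPy; the overlap model and all names are assumptions for illustration, not the authors' code) builds a plausible coefficient matrix qmn for M ≈ 1.1N, where each chamber's image is uniformly stretched over the sensor grid, and restores the chamber energies from noise-free simulated pixel values:

```python
import numpy as np

def overlap_matrix(n_chambers, n_pixels):
    """Fraction q[m, n] of chamber n's image falling into sensor pixel m,
    assuming each chamber image covers n_pixels/n_chambers pixels."""
    r = n_pixels / n_chambers
    q = np.zeros((n_pixels, n_chambers))
    for n in range(n_chambers):
        lo, hi = n * r, (n + 1) * r          # extent of chamber n's image
        for m in range(int(lo), min(int(np.ceil(hi)), n_pixels)):
            q[m, n] = min(hi, m + 1) - max(lo, m)
    return q

N, M = 20, 22                                # M ≈ 1.1 N
rng = np.random.default_rng(0)
E_true = rng.uniform(0.5, 1.5, N)            # chamber energies (unknown in practice)
Q = overlap_matrix(N, M)
S = Q @ E_true                               # recorded sensor pixel values, Eq. (1)

# Solve the overdetermined system in the least-squares sense.
E_restored, *_ = np.linalg.lstsq(Q, S, rcond=None)
print(np.max(np.abs(E_restored - E_true)))   # ~0 for noise-free data
```

For noise-free data and a full-rank coefficient matrix the restoration is exact up to floating-point precision; in a real camera the coefficients differ per spectral channel because of the keystone.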
3. Light mixing chambers
The slit with the light mixing chambers is made of nickel. This material has several advantages: it has quite good reflectivity in the relevant wavelength range; it is chemically stable, so the reflectivity remains high over time; and it is well suited to manufacturing using the LIGA process [14,15].
Since the propagation of light waves depends on the size of the optical elements, the dimensions of the mixing chambers were chosen to be compatible with a planned production camera. Each chamber is a 20 µm x 20 µm square hole in the nickel plate. The length of each chamber is 180 µm and the walls between the chambers are 3 µm thick. The ratio between the width and the length of the chambers is optimized for good mixing – according to the geometric ray tracing model – of light coming from F4.5 foreoptics. In order to prevent clogging of the mixing chambers with small dust particles, the chambers are protected by thin glass plates mounted on each side of the nickel plate.
Figure 2 is a photo taken in visible light of a mixing chamber array that shows several chambers. Figure 3 is a photo taken by a scanning electron microscope that shows a close-up of two of the chambers.
4. Mixel camera prototype
The main purpose of building the mixel camera prototype was to investigate how well the light mixing chambers mix light, i.e., not to build a fully functional mixel camera. However, the mixel camera prototype – designed and built by Norsk Elektro Optikk (Norway) – contains all the main parts of a typical push-broom hyperspectral camera [see Fig. 1 in  for an overview of the key elements in such a camera]. Figure 4 shows a photo of the mixel camera prototype.
Light from the scene is focused by the foreoptics onto the slit – which in this case is an array of light mixing chambers. The foreoptics consists of reflective elements only. Because of this, and because of the absence of spectrally dispersive elements in the foreoptics, the light of all wavelengths follows precisely the same path. The foreoptics is therefore completely keystone free. For the mixel camera this is required, since only keystone appearing after the slit can be removed during the image reconstruction.
The slit is imaged onto the image sensor by refractive relay optics that contain a dispersive element (a prism). Normally, the relay optics for a mixel camera would be designed so that the image of each mixing chamber on the sensor is only slightly larger than the sensor pixel size. However, in order to be able to see the light distribution at the output of each mixing chamber, the relay optics for the mixel camera prototype was designed somewhat differently (see Table 1). The magnification of the relay optics is set to −1, so that the size of the mixing chamber image on the sensor is 20 µm x 20 µm. Since we have chosen a sensor with quite small pixels – 3.45 µm x 3.45 µm – the image of a mixing chamber in the across-track direction will cover 6.67 sensor pixels. The same is the case in the spectral direction. Note that both the foreoptics and the relay optics are capable of resolving spatial details that are several times smaller than the size of the mixing chambers. Together with the relatively high number of sensor pixels per chamber, this gives us the necessary tool to investigate how the light distribution at the output of the chamber depends on the light distribution at the input of the chamber.
Figure 5 shows the image of the mixing chamber array on the sensor. In order to obtain this image, the whole field of view of the prototype was filled with polychromatic light. The image on the sensor is spectrally dispersed in (almost) vertical direction – the dispersive element is deliberately tilted relative to the slit in order to introduce significant keystone. The 3 µm thick walls between the mixing chambers – seen as dark thin almost vertical lines that separate the images of the mixing chambers – are well resolved by the relay optics and are clearly visible [Fig. 5(b)]. This indicates that the resolution of the optics and the sensor is high enough to see light intensity variations at the output of a single mixing chamber. Note that the image of the walls will not be present in the reconstructed keystone-free image from a mixel camera.
5. Experimental setup and measurement procedure
The purpose of the light mixing chambers is to make the light distribution at the output of each chamber independent of the light distribution at the input of the chamber. In order to investigate how well the chambers perform this task, we measured the light distribution at the output of a mixing chamber for different point source positions at the chamber input.
Figure 6 shows the experimental setup for the measurements. The setup consists of a point source (1), which is projected to infinity by a parabolic mirror (2). A high-resolution rotation stage (3) moves the mixel camera prototype (4) so that the image of the point source can be precisely placed into the required position anywhere within the field of view of the camera. In order to simplify the alignment of the camera with the point source in this setup, the “point source” here is actually a short line. The image of this line in the slit plane is significantly smaller than the size of the mixing chamber when measured parallel to the slit direction, see Fig. 7.
Before performing the measurements, the camera was focused. First, the sensor was focused on the back side of the mixing chamber array. After that, the foreoptics was focused on the front side of the mixing chamber array.
During the measurements, the point source was placed in many different positions within a mixing chamber and, for each position, the light distribution at the output of the mixing chamber was recorded. The point source was moved across three consecutive mixing chambers in order to take advantage of the ratio 6.67 of the mixing chamber pitch to the sensor pixel pitch and to maximize the spatial resolution of the images (this will be explained in the next section). The point source was moved in small equal steps – approximately 52 steps per mixing chamber.
Images obtained using this approach made it possible to reconstruct the light distribution at the output of a light mixing chamber with quite high resolution, as will be shown in the next section.
6. Measured light distribution at the output of the light mixing chambers
For the analyses of the performance of the light mixing chambers, we consider seven different point source input positions. The input positions are evenly distributed across the chamber with position 1 close to the left wall, position 4 at the center, and position 7 close to the right wall. Since the measurements were performed by moving the point source in much smaller steps than this (typically, 52 steps were used to cover a chamber, see Section 5), we construct each of the seven point source input positions by combining seven (or sometimes eight) consecutive measurements and taking the average value. Figure 8 shows an example for input position 4 (at the center of the chamber input) for the 610 nm wavelength.
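The grouping of the fine measurement steps into seven representative input positions can be sketched as follows (illustrative only; the recorded data array is a random placeholder, and the group sizes simply follow from splitting 52 steps into 7 nearly equal parts):

```python
import numpy as np

steps_per_chamber = 52
n_positions = 7
# One recorded output distribution (here 20 samples) per point source step.
recordings = np.random.default_rng(1).random((steps_per_chamber, 20))

# Split the 52 steps into 7 nearly equal groups (sizes 7 or 8) and average
# within each group to get one output distribution per input position.
groups = np.array_split(np.arange(steps_per_chamber), n_positions)
positions = np.array([recordings[g].mean(axis=0) for g in groups])
print(positions.shape)           # (7, 20)
print([len(g) for g in groups])  # [8, 8, 8, 7, 7, 7, 7]
```

The split of 52 steps into 7 groups naturally yields groups of seven or eight consecutive measurements, matching the averaging described above.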
The image of each chamber covers 6.67 sensor pixels, which means that the resolution of the measurements is relatively low. Figure 9(a) shows the output light distribution for input position 4 for the 610 nm wavelength for three consecutive chambers. For a given point source input position the output light distribution should be the same for all chambers. However, in Fig. 9(a) the distribution appears different for each chamber due to the relatively low sampling rate. Also, for input position 4 – where the point source is at the center of the chamber input – we would expect the output light distribution to be symmetric (left-right). However, this is only the case for one of the chambers shown in Fig. 9(a).
The above illustrates some of the problems encountered when the sampling rate of the measurements is quite low. It would be helpful for the analyses if the resolution of the output light distribution could be increased, and in this case it can be done. The geometry of the relative position between the chambers and the pixels repeats itself after 3 chambers (the image of 3 chambers covers exactly 20 pixels) and this fact can be used to increase the resolution of the calculated output light distribution by a factor of 3. By performing the measurements for three consecutive chambers [as shown in Fig. 9(a) for input position 4], the output light distribution is sampled with three different grids that are shifted exactly 1/3 of a pixel with respect to each other. By combining the measurements from all three chambers [Fig. 9(b)] a high-resolution curve for the output light distribution can be created. Instead of 6.67 samples per chamber we now have 20 samples per chamber. The output light distribution can then be described much more accurately and is now fairly symmetric for input position 4 [Fig. 9(b)], as it should be.
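The three-fold resolution increase amounts to interleaving three sample grids that are shifted by 1/3 pixel with respect to each other. The sketch below (an illustration of the geometry only) assigns each of the 20 sensor pixel centers covering three chamber images to its chamber and expresses its position relative to that chamber, giving 20 samples per chamber width with uniform 1/3-pixel spacing:

```python
import numpy as np

pitch = 20 / 3                      # chamber image pitch in sensor pixels
px_centers = np.arange(20) + 0.5    # 3 chamber images cover exactly 20 pixels

rel_pos = []
for k in range(3):
    # Pixels whose centers fall within chamber k's image...
    sel = (px_centers > k * pitch) & (px_centers < (k + 1) * pitch)
    # ...expressed relative to the start of that chamber's image.
    rel_pos.append(px_centers[sel] - k * pitch)

combined = np.sort(np.concatenate(rel_pos))
print(len(combined))                # 20 samples across one chamber width
print(np.diff(combined))            # uniform spacing of 1/3 pixel
```

The three chambers contribute 7, 6, and 7 samples respectively, and after interleaving the combined grid samples the chamber width uniformly at three times the original rate.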
Unintentional differences in how the measurements are performed for each of the three chambers (slightly different amount of light coming through each chamber, not identical spacing between point source input positions, etc.) may cause high frequency semi-periodic variations in the output light distribution curves that are not real. This is most noticeable for input positions close to the chamber walls where the signal is low. Here we observe ripples in the measured output light distribution with a period equal to 3 sampling points. This period of the ripples suggests that they are caused by limitations in the precision of the measurement setup, rather than reflecting real variations in the light distribution. To avoid such artefacts, the signal is smoothed somewhat before being used in the further analyses. The smoothing is done by averaging the value of each point with its closest neighbor on each side. This ensures that each of the three chambers contributes to the value at a given point on the output light distribution curve. Figure 10 shows the result after smoothing for input position 4 for the 610 nm wavelength. The defining features of the curve are preserved, but the contrast of fine details is somewhat reduced [compare with Fig. 9(b)]. However, the latter is not important for the further analyses and conclusions, as we will show later in Section 7.
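The smoothing described above is a three-point moving average, which guarantees that every value on the combined curve mixes contributions from all three chambers. A minimal sketch (hypothetical data; endpoint handling is an assumption, since the measured curves extend beyond the region of interest):

```python
import numpy as np

def smooth3(y):
    """Average each point with its closest neighbor on each side
    (endpoints keep their single available neighbor)."""
    out = np.empty_like(y, dtype=float)
    out[1:-1] = (y[:-2] + y[1:-1] + y[2:]) / 3
    out[0] = (y[0] + y[1]) / 2
    out[-1] = (y[-2] + y[-1]) / 2
    return out

# A ripple with a short period is strongly damped, as desired.
y = np.array([0.0, 3.0, 0.0, 3.0, 0.0, 3.0])
print(smooth3(y))
```

Because each output value draws on three consecutive sampling points, and consecutive points come from different chambers, the period-3 ripples caused by chamber-to-chamber measurement differences are suppressed, at the cost of some contrast in fine details.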
The next step is to truncate the output light distribution curve and to subtract the noise floor. We will keep 20 samples on each side of the chamber, i.e., the final output light distribution curve will cover the image of three chambers on the sensor. Note that the image of the output light distribution on the sensor extends somewhat outside the chamber walls [Fig. 10]. The main reason for this is the smoothing of the signal. Also, slight blurring of the signal in the relay optics contributes somewhat to this effect.
The output light distribution is now in a form that is suitable for the further analyses. It will hereafter be referred to as a Mixing Chamber Point-Spread-Function (MC-PSF). Figures 11(a)-(g) show the MC-PSFs for all seven point source input positions for the 610 nm wavelength. Also shown is the output light distribution that we would get if the chamber was uniformly illuminated with a light source at the same wavelength [Fig. 11(h)]. This light distribution is found by adding the MC-PSFs for all seven input positions together and will hereafter be referred to as a Mixing Chamber Uniform Light Function (MC-ULF). The MC-ULF is normalized so that the area under its curve is equal to 1. The MC-PSFs are scaled accordingly. The MC-ULF is the function that will be used to reconstruct the hyperspectral data to the final data cube. Note that the MC-ULF is not a straight horizontal line as shown in Fig. 1 where the mixel camera principle was explained. Rather, it is a bell-shaped curve. However, the exact shape of the MC-ULF is not important, as long as its shape is known. If the output light distribution has the same shape as the MC-ULF for any input light distribution, i.e., if each MC-PSF has the same shape as the MC-ULF, then the data can be reconstructed perfectly. On the other hand, the more the MC-PSFs deviate from the MC-ULF, the more errors will remain after the data restoring process has been completed. In the figure, the MC-PSFs (red curves) are shown together with the corresponding scaled down versions of the MC-ULF (blue curves) so that the deviation of the measured MC-PSFs from the ideal MC-PSF can be evaluated. Note that while the energy at the output of the chamber is considerably lower for input positions 1 and 7 – indicating a loss of light when the point source at the input is behind or close to the chamber wall – the spectrum will still be correct since a similar loss of light will be experienced across all wavelengths for these input positions.
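The construction and normalization of the MC-ULF from the seven MC-PSFs can be sketched as follows (random placeholder data; unit sample spacing is assumed, so the area under a curve is simply its sum):

```python
import numpy as np

rng = np.random.default_rng(2)
mc_psfs = rng.random((7, 60))    # 7 input positions x 60 samples (placeholder)

mc_ulf = mc_psfs.sum(axis=0)     # response to uniform illumination
scale = mc_ulf.sum()             # area under the MC-ULF (unit sample spacing)
mc_ulf /= scale                  # normalize the area under the MC-ULF to 1
mc_psfs /= scale                 # scale the MC-PSFs accordingly

print(mc_ulf.sum())              # 1.0
```

After this joint scaling the MC-PSFs together carry unit energy, so each MC-PSF can be compared directly with a scaled-down version of the MC-ULF, as in Fig. 11.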
Figure 11 shows that the light from the point source at the input of the chamber is mixed and spread out to cover more or less the whole output of the chamber for all seven input positions. However, the shape of the MC-PSF is somewhat different for different input positions. Note in particular the MC-PSFs for input positions 2 and 6 that have a narrower distribution than the other MC-PSFs. We expect that a point source at one of these two input positions will be more difficult to restore correctly than for the other input positions. Note also that the MC-PSFs are reasonably symmetric for left-right input position pairs. This is as expected since the chamber itself is symmetric. From the figure it can be concluded that the chambers do not mix light perfectly, since there are some deviations between the different MC-PSFs and the corresponding scaled down MC-ULFs. However, for all point source input positions the light at the output is spread quite well across the chamber, which means that the output light distribution is considerably more similar for different input positions than if the chamber was not there. The observed deviations from perfect light mixing therefore only lead to small errors in the final hyperspectral data cube, as will be shown in Section 7.
We have measured and calculated MC-PSFs for several different wavelengths. Figure 12 shows the MC-PSFs (red curves) for point source input position 4 for the 530 nm and 1000 nm wavelengths. The corresponding MC-ULFs are also shown (blue curves). In general, our measurements have shown that both the MC-ULFs and the MC-PSFs look somewhat different for different wavelengths. This can easily be seen for input position 4 by comparing Fig. 12 for the 530 nm and 1000 nm wavelengths and Fig. 11(d) for the 610 nm wavelength. However, despite these differences the camera performance with respect to misregistration errors will still be very similar across different wavelengths, as will be shown in Section 7. Note that for optimal image reconstruction in any given spectral channel, the MC-ULF for that particular spectral channel should be used.
7. Simulations of mixel camera performance based on the measured MC-PSFs
We have developed Virtual Camera software that simulates the performance of different types of push-broom cameras, such as mixel cameras, resampling cameras, and traditional cameras that correct keystone in hardware (HW corrected cameras). The Virtual Camera software models various aspects of camera performance, such as keystone, point-spread function (PSF) of the optics, photon and readout noise, etc. The light mixing in the mixing chambers can also be simulated, either by use of a theoretical model (as was done in a previous paper) or by use of the measured MC-PSFs, which is what we will do here. The hyperspectral data of a real scene (captured by a real hyperspectral camera) is used as input and the Virtual Camera software distorts the input data somewhat in accordance with the modeled optical distortions, sensor characteristics and photon noise. Then, by comparing the data at the output of the virtual camera with the data at the input, we can evaluate the performance of the camera being investigated.
For the performance analyses, a two-dimensional scene of 1600 x 12233 pixels originally captured by a HySpex VNIR1600 hyperspectral camera was used as the input, see Fig. 13. The virtual camera is set to have significantly lower resolution (228 pixels) in the across-track direction than that of the scene, so that 7 spatial pixels from the HySpex VNIR1600 data set form 1 scene pixel (i.e., 1596 of the 1600 across-track pixels are used for the simulations). By doing this, we simulate the fact that any real scene contains smaller details than the resolution of the camera being tested.
The performance of the mixel camera will be compared to both a resampling camera and a HW corrected camera. The resampling method used is high-resolution cubic splines [3,5], which is one of the best available resampling methods today. For the resampling and mixel cameras we assume a keystone of 22 pixels, i.e., the content of the 228 scene pixels is spread over 250 pixels when recorded onto the sensor. The keystone is assumed to be linear across the image, changing from zero on the left side of the image to 22 pixels on the right side. The recorded pixels are then resampled (in the case of the resampling camera) or restored (in the case of the mixel camera) onto the scene pixel grid to give the final data. For the HW corrected camera the keystone is assumed to change linearly from zero on the left side of the image to 0.3 pixels on the right side.
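The stated keystone geometry – 228 scene pixels spread linearly over 250 sensor pixels – can be sketched as a boundary mapping. This is one simple reading of the description (a uniform stretch of the pixel grid), not the authors' actual implementation:

```python
import numpy as np

n_scene = 228
keystone = 22.0                  # pixels of keystone at the right edge
# Keystone grows linearly from 0 at the left to 22 px at the right, so the
# scene-pixel boundaries map onto a uniformly stretched sensor grid.
bounds = np.linspace(0.0, n_scene + keystone, n_scene + 1)
width = bounds[1] - bounds[0]    # width of each scene pixel's image
print(bounds[-1], width)         # 250.0 and about 1.0965 sensor pixels
```

Restoring (or resampling) then maps the recorded 250-pixel grid back onto the 228-pixel scene grid.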
In a real camera, the signal will be somewhat blurred before being sampled by the sensor. In the Virtual Camera software this is simulated by convolving a real PSF of a HySpex VNIR1600 camera with the input signal. The PSF is first scaled to the scene pixel size used in the simulations and the resulting PSF corresponds to MTF = 0.536 at Nyquist frequency. The optical blur is only applied to the resampling and HW corrected cameras.
For the mixel camera, any optical blur that happens after the slit will be removed in the restoring process, i.e., it is the “sharp” mixel values that will be restored. The mixel camera will therefore return a much sharper signal than the other two cameras. It has previously been demonstrated that the same amount of keystone causes larger errors in a camera with sharper optics [3,18]. In order to adequately compare the misregistration errors between the three cameras, we will apply blur to the final data cube for the mixel camera so that its output signal becomes as blurry as that of the resampling and HW corrected cameras.
The light mixing in the mixing chambers is simulated as follows. Each chamber receives at its input the signal content of 1 scene pixel, which consists of 7 across-track pixels from the original HySpex VNIR1600 image. Each of those 7 original across-track pixels represents one input position at the chamber and the response at the output of the chamber is simulated by using the MC-PSF corresponding to that input position and wavelength. After the signal has been sampled by the sensor, the data is restored using the MC-ULF for the given wavelength as the assumed response of the chamber. Of course, the assumed response of the chamber is the same for all possible light distributions at the input of the chamber.
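The essence of this simulation step can be sketched as follows (placeholder arrays; names are assumptions for illustration). Each of the 7 sub-pixel inputs excites its own measured MC-PSF, while the restoring step assumes the chamber's output always has the MC-ULF shape; the difference between the two output distributions is the residual caused by imperfect mixing:

```python
import numpy as np

rng = np.random.default_rng(3)
mc_psfs = rng.random((7, 60))           # measured response per input position
mc_psfs /= mc_psfs.sum()                # joint normalization, as in Section 6
mc_ulf = mc_psfs.sum(axis=0)            # assumed response (area = 1)

sub_pixels = rng.uniform(0.5, 1.5, 7)   # 7 original across-track pixels
# Actual simulated output: each sub-pixel input excites its own MC-PSF.
output = sub_pixels @ mc_psfs
# The restoring step assumes the same energy leaves with the MC-ULF shape.
assumed = output.sum() * mc_ulf
mixing_error = output - assumed         # residual due to imperfect mixing
print(np.abs(mixing_error).max())
```

For a uniform input (all 7 sub-pixels equal) the actual output has exactly the MC-ULF shape and the residual vanishes; the errors arise only when the scene contains subpixel-sized details.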
When evaluating the performance of the cameras, we calculate the error in the final data relative to the input. The relative error, dE, is given by:

dE = (Eout − Ein)/Ein,  (2)

where Ein is the value of a scene pixel at the input of the virtual camera and Eout is the corresponding value in the final data.
The calculations are done for each of the 12233 spatial lines in the image. Each of the 228 across-track scene pixels of the virtual camera will then have 12233 different error values associated with it. Based on this, we can calculate the standard deviation of the relative error for each of the 228 across-track scene pixels.
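The per-pixel statistic described here can be sketched as follows (placeholder arrays standing in for the restored and reference data; dE is taken to be the restored value minus the reference, divided by the reference):

```python
import numpy as np

n_lines, n_px = 12233, 228
rng = np.random.default_rng(4)
reference = rng.uniform(0.5, 1.5, (n_lines, n_px))        # input scene data
restored = reference * (1 + 0.01 * rng.standard_normal((n_lines, n_px)))

dE = (restored - reference) / reference   # relative error per pixel, per line
std_per_pixel = dE.std(axis=0)            # one value per across-track pixel
print(std_per_pixel.shape)                # (228,)
```

Each across-track scene pixel thus gets one standard deviation computed over its 12233 error values, which is the quantity plotted in Figs. 14-16.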
We have looked at misregistration errors only, i.e., photon and readout noise are not included in the calculations. The sources of the misregistration errors in our virtual camera simulations are different for the three cameras: for the HW corrected camera these errors are caused by the keystone in the system, for the resampling camera they are a result of the fact that resampling cannot be performed perfectly between two grids, and for the mixel camera the misregistration errors are caused by imperfect light mixing in the light mixing chambers. There is another source of spatial misregistration errors – differences in PSF shape for different wavelengths – that was not included in the simulations. PSF shape differences affect the hyperspectral data in the same way as keystone does, but are more problematic for HW corrected cameras than for mixel cameras, since in the latter the effect of such differences is mostly removed during the data restoring process. Also, during the simulations we have assumed that the keystone is precisely characterized for the resampling and mixel cameras.
Figure 14 shows results for all three cameras for the 610 nm wavelength. The blue curve shows the misregistration errors for the HW corrected camera, and we can see that the standard deviation of the relative error increases approximately linearly from zero on the left side of the graph (where the keystone is zero) to about 9% at the right side where the keystone is 0.3 pixels. For the resampling camera (green curve) the errors vary periodically, with peaks close to 4% for the standard deviation of the relative error. The periodic behavior is due to a repeating pattern in the relative position between scene pixels and sensor pixels for every tenth scene pixel. Also for the mixel camera (red curve), the errors vary periodically. The periodicity of this variation is the same as for the resampling camera since the size ratio between scene pixels and sensor pixels is the same for the two cameras. However, for the mixel camera the peaks are almost four times lower than for the resampling camera! The standard deviation of the relative error here remains below 1% everywhere. Compared with the blue curve for the HW corrected camera, this level of error is equivalent to a HW corrected camera with around 0.03 pixels keystone (corresponding to pixel 23 in the graph). Note also that these are the peak errors for the mixel camera. The average error is only 0.6%, which is equivalent to a HW corrected camera with 0.02 pixels keystone (corresponding to pixel 15 in the graph).
Some readers may have noticed that the results shown for the HW corrected and resampling cameras in Fig. 14 are different from similar results presented in Fig. 12 of the previous paper. The reason for this is that the current simulations are performed for cameras with MTF = 0.536, while the previous simulations were done for cameras with MTF = 0.44, i.e., those cameras were more blurry. More blurry cameras have smaller misregistration errors, and this would be the case also for the mixel camera.
The light mixing chambers mix the light somewhat differently for different wavelengths. We have therefore checked the performance of the mixel camera also for several other wavelengths. This was done individually for each tested wavelength by using the MC-PSFs for that wavelength together with the corresponding wavelength specific MC-ULF. Figures 15 and 16 show the misregistration errors for the three cameras for the 530 nm and 1000 nm wavelengths respectively. Note that the error curves for the HW corrected and resampling cameras are the same as before since our simulations do not include any wavelength dependent effects for these two cameras. We see that for the 530 nm and 1000 nm wavelengths the misregistration errors for the mixel camera are very similar to those for the 610 nm wavelength. In fact, the errors are even somewhat smaller in the last two cases, with peaks only up to about 0.9%.
Another way of evaluating camera performance – instead of looking at the standard deviation of the relative errors – is to calculate the number of pixels with relative errors above a given threshold. If the error in a pixel is too large, the pixel may not be useable, and the number of such pixels may therefore be a good criterion for camera performance. A maximum acceptable misregistration error of about 10% of the signal seems to be an adequate and practically relevant criterion for high-end scientific hyperspectral imaging systems. Figure 17 shows the percentage of such pixels for the three cameras for the 610 nm wavelength. For the HW corrected camera (blue curve) the percentage of such pixels is zero up to a keystone of about 0.05 pixels (corresponding to pixel 38 in the graph). From around pixel 60 the curve increases approximately linearly up to above 16% at the right side of the graph where the keystone is 0.3 pixels. For the resampling camera, the green curve showing the percentage of such pixels has periodic variations with peaks of 2.5-3.8%. However, for the mixel camera (red curve) we see that the percentage of such pixels is practically zero everywhere.
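Counting pixels above the threshold is straightforward; a sketch with placeholder error data (the 10% threshold follows the criterion stated above):

```python
import numpy as np

threshold = 0.10                        # 10% of the signal
rng = np.random.default_rng(5)
dE = 0.04 * rng.standard_normal((12233, 228))   # relative errors (toy data)

# Percentage of lines exceeding the threshold, for each across-track pixel.
pct_bad = 100.0 * (np.abs(dE) > threshold).mean(axis=0)
print(pct_bad.shape)                    # (228,)
```

The resulting 228 percentages, one per across-track pixel, correspond to the curves plotted in Fig. 17.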
When creating the MC-PSFs (Section 6), the output signal of the chambers was averaged over seven (or eight) consecutive measurement steps, and the signal was also smoothed somewhat after combining results from three neighboring chambers. High-frequency spatial variations in the output signal were therefore not captured. However, such rapid variations do not appear to affect the camera performance significantly. For example, the MC-PSFs for different wavelengths differ somewhat, but these differences are not visible in the resulting misregistration errors of the mixel camera at different wavelengths. What seems to matter for the camera performance is where the main bulk of the energy in the output signal is placed, i.e., how close the centers of mass of the MC-PSFs are to the center of mass of the MC-ULF.
An important goal during the testing of the prototype camera was to find the camera configuration that gives the optimum performance of the mixing chambers across all wavelengths. As part of this process, many MC-ULFs (with corresponding MC-PSFs) that gave poorer camera performance were created. We have calculated the difference between the center of mass of each such MC-ULF and the (weighted) mean center of mass for the corresponding seven MC-PSFs, and plotted this number against the average standard deviation of the relative error resulting from the simulations. Figure 18 shows the results and demonstrates that there is a strong correlation between the deviation of the center of mass for the MC-PSFs and the camera performance.
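The center-of-mass comparison used above can be sketched as follows. This is our own illustration under simplifying assumptions (uniform sampling, a single MC-PSF instead of a weighted mean over seven); the sample curves are hypothetical.

```python
# Sketch (illustrative, not the authors' code): intensity-weighted center of
# mass of a sampled 1-D light distribution, and the deviation between a
# measured MC-PSF and the corresponding MC-ULF.
def center_of_mass(signal):
    """Intensity-weighted mean sample position of a 1-D distribution."""
    total = sum(signal)
    return sum(i * s for i, s in enumerate(signal)) / total

ulf = [1.0, 1.0, 1.0, 1.0, 1.0]  # idealized flat MC-ULF (hypothetical)
psf = [0.8, 1.0, 1.1, 1.0, 1.1]  # hypothetical measured MC-PSF

# The quantity plotted in Fig. 18 is essentially this deviation (in samples):
deviation = center_of_mass(psf) - center_of_mass(ulf)
```

In the paper, the deviation is taken between each MC-ULF and the weighted mean center of mass of its seven MC-PSFs, and correlated with the simulated misregistration error.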
We have also calculated the (weighted) mean correlation between the MC-PSFs and the center MC-PSF (corresponding to input position 4) for each case and plotted the results against the average standard deviation of the relative error, see Fig. 19. If finer details in the MC-PSF curves were important for the camera performance, we would expect a strong dependency on the calculated correlation coefficients, but this is not the case. The large fluctuations in the resulting curve indicate that the connection between camera performance and finer details in the MC-PSFs is relatively weak. Still, there seems to be a general tendency for the curve to fall from left to right. The main part of this trend is likely explained by a correlation between the deviation of the center of mass (which we know affects the camera performance strongly, see Fig. 18) and the correlation coefficients calculated here. Overall, Figs. 18 and 19 seem to confirm that what is important for the camera performance is where the main bulk of the energy at the output of a chamber is situated, while finer details in the MC-PSF curves are less significant.
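The similarity measure used in Fig. 19 can be sketched as a Pearson correlation between two sampled MC-PSF curves. This is our own minimal sketch, assuming uniformly sampled curves of equal length; the sample data are hypothetical.

```python
# Sketch (illustrative, not the authors' code): Pearson correlation between
# two sampled curves, as one way to quantify how similar their shapes
# (including finer details) are.
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sample sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

In the paper, each MC-PSF is correlated against the center MC-PSF (input position 4), and a weighted mean of these coefficients is compared to the simulated error.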
Note that the camera performance results presented here are valid for the current chambers. Optimum dimensions for these chambers were determined based on the geometric ray model analyses in . Choosing a different length, width, or shape for the chambers could possibly improve camera performance further. However, the current chambers already perform quite well, and our analyses show that the resulting spatial misregistration errors caused by imperfect light mixing are low.
The mixel camera is a new type of push-broom hyperspectral camera, where an image recorded with arbitrary (even large) keystone is reconstructed to a nearly keystone-free image. The key component of the mixel camera is an array of light mixing chambers in the slit plane, and the precision of the image reconstruction depends on the light mixing properties of these chambers.
A mixel camera prototype was built and used to check experimentally how well the chambers mix light at different wavelengths. This was done by measuring the light distribution at the output of a mixing chamber for various positions of a polychromatic point source at the input of the chamber. Knowing the output light distributions of the real chambers made it possible to assess the misregistration errors for this camera type in previously developed Virtual Camera software. The simulations used real hyperspectral data from a high resolution airborne camera as the input and modeled the response of the mixing chambers according to the results from our measurements. The performance of the mixel camera was compared to cameras where keystone is corrected in hardware or by resampling.
Based on the measurements and simulations we have shown that, with the existing chambers, a perfectly characterized mixel camera is potentially capable of having a residual spatial misregistration that corresponds to approximately 0.02-0.03 pixels keystone. This compares favorably to high resolution instruments where keystone is corrected in hardware or by resampling.
References and links
1. M. T. Eismann, Hyperspectral Remote Sensing (SPIE, 2012), Chap. 7.
2. P. Mouroulis, R. O. Green, and T. G. Chrien, “Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information,” Appl. Opt. 39(13), 2210–2220 (2000). [CrossRef] [PubMed]
3. A. Fridman, G. Høye, and T. Løke, “Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for the optical design and data quality,” Opt. Eng. 53(5), 053107 (2014). [CrossRef]
4. H. Kaufmann, K. Segl, S. Chabrillat, A. Mueller, R. Richter, G. Schreier, S. Hofer, T. Stuffler, R. Haydn, H. Bach, and U. Benz, “EnMAP – an advanced hyperspectral mission,” in Proceedings of 4th EARSeL Workshop on Imaging Spectroscopy, New quality in environmental studies, B. Zagajewski, M. Sobczak, M. Wrzesień, eds. (EARSeL and Warsaw University, 2005), pp. 31–34.
6. S. Blaaberg, T. Løke, I. Baarstad, A. Fridman, and P. Koirala, “HySpex ODIN-1024: A new high-resolution airborne HSI-system,” Proc. SPIE 9070, 90700L (2014). [CrossRef]
7. G. Høye and A. Fridman, “Mixel camera - a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging,” Opt. Express 21(9), 11057–11077 (2013). [CrossRef] [PubMed]
8. G. Høye and A. Fridman, “Hyperspectral camera and method for acquiring hyperspectral data,” PCT international patent application number PCT/NO2012/050132.
9. M. Traub, H. D. Hoffmann, H. D. Plum, K. Wieching, P. Loosen, and R. Poprawe, “Homogenization of high power diode laser beams for pumping and direct applications,” Proc. SPIE 6104, 61040Q (2006). [CrossRef]
11. R. G. Hunsperger, Integrated Optics: Theory and Technology, 3rd ed. (Springer-Verlag, 1991).
12. K. Okamoto, Fundamentals of Optical Waveguides, 2nd ed. (Optics and Photonics Series, Academic, 2005).
13. B. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley & Sons Inc., 1991).
14. H. Guckel, “High-aspect-ratio micromachining via deep X-ray lithography,” Proc. IEEE 86(8), 1586–1593 (1998). [CrossRef]
15. O. Mäder, P. Meyer, V. Saile, and J. Schulz, “Metrology study of high precision mm parts made by the deep x-ray lithography (LIGA) technique,” Meas. Sci. Technol. 20(2), 025107 (2009). [CrossRef]
16. G. Høye and A. Fridman, “Performance analysis of the proposed new restoring camera for hyperspectral imaging,” FFI-rapport 2010/02383 (2010), declassified on 25 March 2014.
18. G. Høye, T. Løke, and A. Fridman, “Method for quantifying image quality in push-broom hyperspectral cameras,” Opt. Eng. 54(5), 053102 (2015). [CrossRef]