
Scene-based nonuniformity corrections for optical and SWIR pushbroom sensors

Open Access

Abstract

We propose and evaluate several scene-based methods for computing nonuniformity corrections for visible or near-infrared pushbroom sensors. These methods can be used to compute new nonuniformity correction values or to repair or refine existing radiometric calibrations. For a given data set, the preferred method depends on the quality of the data, the type of scenes being imaged, and the existence and quality of a laboratory calibration. We demonstrate our methods with data from several different sensor systems and provide a generalized approach to be taken for any new data set.

©2005 Optical Society of America

1. Introduction

Hyperspectral and panchromatic pushbroom sensors require nonuniformity corrections (NUCs) to ensure that all cross-image pixels respond similarly to the same input. If computed as part of a radiometric calibration, the NUC provides the conversion from digital counts to radiance units; otherwise, the NUC provides only a relative calibration. The NUC takes into account the entire imaging system from the lens to the digital data and may also include geometric effects between the lens and the area being imaged. For pushbroom sensors, the primary sources of nonuniformities are the variability in the response of the focal-plane-array (FPA) elements and the small-scale roughness of the camera’s entrance slit. The presence of dirt or condensation on the entrance slit can make the nonuniformities more pronounced.

For well-behaved (i.e., linear) sensors, the NUC may consist of an offset and a single multiplicative factor for each element on the FPA; for nonlinear sensors, the NUC may require a higher-order relationship. The offset is usually computed in-flight by collecting data with the camera aperture closed and then averaging that data to obtain a “dark frame.” For visible and Short-Wave Infrared (SWIR) sensors, the linear (and higher) NUC coefficients are computed in the laboratory using a uniform, diffuse light source. Because modern Visible and Near-Infrared (VNIR) sensors are usually very stable, it is generally assumed that the laboratory-derived NUC will work well on subsequent data collects in the field. However, it is common to see changes in the system between the time of the laboratory calibration and the end of the field campaign. These changes may be due, for example, to dust or condensation collecting on the slit or to a physical shift in the alignment between the entrance slit and the focal plane array. These changes can severely degrade the effectiveness of the laboratory-derived NUC. Changes appear to be more common for SWIR sensors, as the response of the FPA itself tends to be less stable. Unless the system is equipped with an internal light source for in-flight calibration, the only option for replacing an inadequate NUC is to calculate a new one in flight using imagery of the ground.

A poor NUC leads to poor performance for most applications. For pushbroom sensors it causes vertical striping in the data that is visually distracting and adversely affects the automated histogram-stretching parameters of visualization applications. These lines will confound spatially-based algorithms, especially those that rely on line-enhancement filters. A poor NUC also introduces artifacts into the spectral content of the data and can lead to false alarms in target-detection algorithms and poor overall performance in classification algorithms.

There have been a number of papers published about various methods for performing scene-based nonuniformity corrections for staring long-wave infrared sensors [1–3]; however, these methods are not directly applicable to VNIR line-scanners. The most commonly used method for generating scene-based NUCs for VNIR pushbroom sensors makes use of scenes that contain large uniform areas, such as open water or fields. It is assumed, for this uniform scene, that the average spectral response of each of the cross-track pixels of the sensor should be equal to those of all the other pixels. The NUC is therefore computed as the scene-wide average spectrum divided by the average spectrum of each cross-track pixel. This approach is related to the constant-statistics methods of Refs. [1] and [2] and is described in more detail in Section 2.2. There are several problems with this approach. It may not be possible to identify scenes that have large uniform regions with sufficient intensity at all wavelengths, and attempting to find appropriate subscenes can be labor-intensive. Also, it is unlikely that any region in a real image is truly uniform; assuming so is likely to introduce biases into the NUC. Furthermore, a new uniform scene has to be identified each time the system exhibits any changes.

When computing a scene-based NUC, we feel that it is more realistic to assume that neighboring cross-track pixels should respond similarly to one another throughout the flight line than it is to assume that all cross-track pixels should give the same average response. In this paper we investigate the use of the ratios of the responses of neighboring pixels (in the cross-track direction) to build a new NUC or to repair or refine an existing radiometric calibration. In Section 2 we review the standard mean-based methods and then describe our new methods. In Section 3 we evaluate the various methods on several different data sets, and our summary and conclusions are provided in Sections 4 and 5.

2. Methods

2.1 Nonuniformity corrections and radiometric calibrations

Pushbroom sensors collect one spatial line of data at a time (at all wavelengths). We refer to cross-track pixels as samples and the various wavelength channels as bands. The motion of the aircraft or camera mount allows each camera frame to contribute another spatial line of data. The number of samples and bands of a data cube is limited by the dimensions of the FPA, whereas the number of lines in the image cube corresponds to the number of frames recorded.

A radiometric calibration provides the relationship between dark-subtracted sensor response x and radiance L (i.e., W m$^{-2}$ sr$^{-1}$ nm$^{-1}$) for each pixel on the FPA (i.e., for each sample and wavelength band). For sensors that respond linearly to intensity, this can be expressed with a scalar gain factor $c_{s,b}$ for each pixel so that the radiance is given by

$$L_{s,b} = c_{s,b}\, x_{s,b}, \tag{1}$$

where x is the dark-subtracted data and the subscripts s and b refer to the sample and band numbers of the quantity, respectively. A higher-order relationship may be required,

$$L = c_1 x + c_2 x^2 + \cdots; \tag{2}$$

however, we will use Eq. (1) throughout this paper with the understanding that we can easily modify our techniques to use Eq. (2) if necessary. If the calibration is obtained with laboratory measurements of a calibrated diffuse source, the value of $c_{s,b}$ is uniquely determined for each sample and band and inherently includes a nonuniformity correction. However, the radiometric calibration and NUC can be separated so that

$$L_{s,b} = c_b\, \nu_{s,b}\, x_{s,b}, \tag{3}$$

where the calibration coefficient $c_b$ consists of a scalar value for each band and the NUC $\nu_{s,b}$ provides only a relative calibration. This allows one to obtain the calibration and NUC separately. Normally the calibration would be obtained from laboratory measurements, but it may be necessary to make adjustments to it or to generate it afresh from in-flight data using knowledge of the atmospheric absorption features and the reflectance values of certain specific objects in the imagery. If absolute radiance values are not required, then one can simply set $c_b$ to unity.
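To make the notation concrete, the following minimal sketch applies Eq. (3) to a data cube. It is an illustration only: the array layout (lines × samples × bands), the function name, and the NumPy implementation are assumptions made for the example, not part of the paper.

```python
import numpy as np

def apply_calibration(x, nuc, cal=None):
    """Apply Eq. (3): L[l, s, b] = c[b] * nu[s, b] * x[l, s, b].

    x   -- dark-subtracted data cube, shape (lines, samples, bands)
    nuc -- nonuniformity correction nu, shape (samples, bands)
    cal -- per-band calibration coefficients c_b, shape (bands,); if None,
           only a relative calibration is applied (equivalent to c_b = 1).
    """
    L = x * nuc[np.newaxis, :, :]                # broadcast nu over lines
    if cal is not None:
        L = L * cal[np.newaxis, np.newaxis, :]   # broadcast c_b over lines and samples
    return L
```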

2.2 Mean-spectrum NUC methods

The most commonly used NUC method for VNIR pushbroom sensors makes the assumption that the average (over many lines of data) of the true radiance being measured is the same for all samples; i.e.,

$$\bar{L}_s \approx \bar{L}, \tag{4}$$

where $\bar{L}_s$ is the average spectrum at any given sample s and $\bar{L}$ is the average spectrum over the entire scene. To help ensure that Eq. (4) is a reasonable assumption, an analyst should attempt to identify a uniform scene or subscene from which to compute the NUC. The uniform area must cover all cross-track spatial samples in order to generate a NUC for the entire scene. Ideally, the area would be a visibly uniform scene, such as open water or a large field, but in practice it is often taken to be a relatively large amount of data collected over a natural environment with the hope that each sample in the imager views similar items throughout the selected data. As illustrated in Fig. 1A, the NUC is computed with the following steps:

1. Compute the average (arithmetic mean) of the data for each sample s over all lines (frames) to obtain $\bar{x}_s$. This can be done for all bands simultaneously.

2. Compute the average of the spectral data over the entire uniform area, $\bar{x}$. This can be computed as the mean of $\bar{x}_s$ over all samples.

3. Compute the ratio of the overall average spectrum to the average spectrum for each sample; i.e.,

$$\nu_s = \frac{\bar{x}}{\bar{x}_s}. \tag{5}$$
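As a concrete illustration of these three steps, a minimal sketch follows. It assumes the uniform-area data are held in a NumPy array of shape (lines, samples, bands); the function name is for the example only.

```python
import numpy as np

def mean_spectrum_nuc(x):
    """Mean-spectrum NUC of Section 2.2 [Eq. (5)] for a (nominally) uniform area."""
    xbar_s = x.mean(axis=0)              # step 1: mean spectrum per sample, (samples, bands)
    xbar = xbar_s.mean(axis=0)           # step 2: scene-wide mean spectrum, (bands,)
    return xbar[np.newaxis, :] / xbar_s  # step 3: Eq. (5), shape (samples, bands)
```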
Fig. 1. Mean-spectrum NUC (A) and median-ratio NUC (B) methods.

2.3 Median-ratio NUC

The NUC methods we propose here rely on the empirical observation that adjacent samples of a line scanner usually view similar scenes (and frequently the same object) as one another. We make use of this observation by making the assumption that the median value (over many lines) of the ratio of the radiance of neighboring samples is approximately unity;

medianl(Ls+1Ls)1.

Unlike the method of Section 2.2, we make no assumption about the radiance at non-adjacent samples. Equation (6) is easily satisfied in imagery where the spatial resolution of the imaging system is smaller than the predominant scale of variability in the scene. In this case, adjacent samples are usually viewing the same object; the situations where adjacent pixels are at object boundaries generate outliers that do not significantly influence the median result. In imagery where the spatial variability of the scene is small compared with the imaging resolution, it may be necessary to compute the statistics over a large number of lines of data to ensure that Eq. (6) becomes sufficiently accurate.

We can develop a NUC method with Eq. (6) alone. The simplest approach is:

1. For hundreds or thousands of lines (camera frames), compute the ratios of adjacent samples, $r_{s,b,l} \equiv x_{s+1,b,l}/x_{s,b,l}$, where the subscripts s, b, and l refer to the sample, band, and line numbers of the quantity, respectively.

2. Select the median ratio $\tilde{r}_{s,b}$ over all lines for each sample and band.

3. Compute the NUC with

$$\nu_{s,b} = \prod_{i=0}^{s-1} \frac{1}{\tilde{r}_{i,b}}. \tag{7}$$

There are two practical considerations with this method. The first is that the storage of the ratio values over thousands of lines may be impractical. This concern is addressed in Section 2.7 by providing a method for keeping only a limited number of ratio values in memory. The second concern is that any error or uncertainty in $\nu_{s,b}$ due to error or uncertainty in $\tilde{r}_{s,b}$ will accumulate for high sample numbers in Eq. (7). We can limit this accumulated error by starting the NUC construction at the center sample. In this case we set the center NUC value to unity and build the NUC to the sides with

$$\nu_{s+1,b} = \frac{\nu_{s,b}}{\tilde{r}_{s,b}} \quad\text{and}\quad \nu_{s,b} = \nu_{s+1,b}\, \tilde{r}_{s,b}. \tag{8}$$
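For illustration, the following sketch implements Eqs. (6)–(8) in the simple case where all of the ratio values fit in memory (the limited-memory bookkeeping of Section 2.7 is sketched separately below). The array layout and function name are assumptions for the example, and bad pixels are assumed to have been interpolated over beforehand (Section 2.8).

```python
import numpy as np

def median_ratio_nuc(x):
    """Median-ratio NUC built outward from the center sample [Eqs. (6)-(8)].
    x is a dark-subtracted cube of shape (lines, samples, bands)."""
    lines, samples, bands = x.shape
    # ratios of adjacent samples for every line: r[l, s, b] = x[l, s+1, b] / x[l, s, b]
    r = x[:, 1:, :] / x[:, :-1, :]
    r_med = np.median(r, axis=0)              # median over lines, shape (samples-1, bands)
    # Eq. (8): set the center sample to unity and build the NUC toward both edges,
    # so statistical errors accumulate only from the center outward.
    nu = np.ones((samples, bands))
    center = samples // 2
    for s in range(center, samples - 1):      # build to the right
        nu[s + 1] = nu[s] / r_med[s]
    for s in range(center - 1, -1, -1):       # build to the left
        nu[s] = nu[s + 1] * r_med[s]
    return nu
```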

Even with this adjustment, statistically noisy values of $\tilde{r}_{s,b}$ could potentially cause the magnitude of ν to slowly “drift” as we move from the center to the edges of the image. In other words, Eq. (8) provides a NUC that is accurate locally (i.e., across tens to hundreds of samples) but can be inaccurate globally (i.e., across hundreds to thousands of samples), depending on the nature of the data. As a result, an image that is processed with a NUC computed with Eq. (8) should look excellent over any small area in the image but may at the same time appear brighter on one side of the image than on the other. Furthermore, the global-scale error in ν could be different in different spectral bands, causing a change in spectral shape from one side of the image to the other. We have developed several ways to eliminate this potential problem, which we will describe in Sections 2.4–2.6.

2.4 Calibration repair

When a pre-existing NUC or radiometric calibration shows problems in only a few localized regions (e.g., from the arrival or movement of dust on the camera’s entrance slit), it may be most practical to replace only small sections of it. The steps for performing this calibration repair are as follows:

1. Compute sample ratios $r_{s,b,l}$ on uncalibrated dark-subtracted data and then compute their median, $\tilde{r}_{s,b}$.

2. Identify the samples $s_1$ to $s_2$ to be replaced. (This can be done manually through visual inspection of the calibrated data or automatically by comparing old and new $\tilde{r}_{s,b}$ values.)

3. Starting at sample $s_1$, compute the new NUC values up to sample $s_2$ with $\nu_{s+1,b} = \nu_{s,b}/\tilde{r}_{s,b}$ [cf. Eq. (8)].

4. Apply a linear adjustment to the replaced NUC values to force the new value at sample $s_2$ to exactly match the pre-existing value at $s_2$.

The linear correction in Step 4 is expected to be very small. A not-so-small correction would indicate that the endpoints $s_1$ and $s_2$ should be reselected. The endpoints can therefore be chosen with an automated trial-and-error approach that minimizes, or at least limits, the slope of the required linear correction. An example is provided in Section 3.2.
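A minimal sketch of this repair procedure is given below. It assumes the median ratios have already been computed from the uncalibrated data; the multiplicative linear ramp used for the Step 4 adjustment is one reasonable choice, since the paper does not specify the exact form of the adjustment.

```python
import numpy as np

def repair_nuc(nu_old, r_med, s1, s2):
    """Replace NUC values for samples s1..s2 (Section 2.4).

    nu_old -- existing NUC or calibration, shape (samples, bands)
    r_med  -- median adjacent-sample ratios from uncalibrated dark-subtracted
              data, shape (samples-1, bands); r_med[s] relates samples s and s+1
    """
    nu = nu_old.copy()
    # Step 3: rebuild the NUC from s1 up to s2 with the median ratios [cf. Eq. (8)]
    for s in range(s1, s2):
        nu[s + 1] = nu[s] / r_med[s]
    # Step 4: linear adjustment so the rebuilt section meets the old value exactly
    # at s2; the required slope should be small if s1 and s2 are well chosen.
    correction = nu_old[s2] / nu[s2]
    ramp = np.linspace(0.0, 1.0, s2 - s1 + 1)[:, np.newaxis]
    nu[s1:s2 + 1] *= (1.0 - ramp) + ramp * correction
    return nu
```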

2.5 NUC detrending and retrending

As mentioned in Section 2.3, a median-ratio NUC computed with Eq. (8) is susceptible to a slow spatial drift from one side to the other. To address this concern, we divide the NUC into two components, one that describes the fine-scale pixel-to-pixel variation in the response of the camera and one that describes the large-scale variation in the response across the entire focal plane array; i.e.,

$$\nu = \nu^{\mathrm{sm}}\, \nu^{\mathrm{dt}}, \tag{9}$$

where the superscript ‘sm’ denotes a quantity that has been smoothed or low-pass filtered and the superscript ‘dt’ denotes a quantity that has been detrended or high-pass filtered. Similarly, we can divide a radiometric calibration into the same two components, $c^{\mathrm{sm}}$ and $c^{\mathrm{dt}}$. Note, though, that $c^{\mathrm{dt}}$ contains no absolute magnitude information and is therefore equivalent to $\nu^{\mathrm{dt}}$; i.e.,

$$c = c^{\mathrm{sm}}\, \nu^{\mathrm{dt}}. \tag{10}$$

For most sensors it is relatively easy to separate the fine-scale nonuniformities $\nu^{\mathrm{dt}}$ from the large-scale variation $c^{\mathrm{sm}}$. For example, shown in Fig. 2A are three bands of a radiometric calibration for the WAR HORSE VNIR hyperspectral sensor [4], and shown in Fig. 2B is the low-pass component $c^{\mathrm{sm}}$ of this calibration. The high-pass component, shown in Fig. 2C, can be obtained by dividing the two; i.e., $\nu^{\mathrm{dt}} = c/c^{\mathrm{sm}}$.

Fig. 2. Example WAR HORSE laboratory calibration before (A) and after (B) the application of a low-pass filter, and the high-pass version of the calibration (C) obtained by dividing (A) by (B). Shown are spectral channels 9, 10, and 47.

As discussed in Section 2.3, the $\nu^{\mathrm{sm}}$ component of the median-ratio NUC computed with Eq. (8) is potentially less accurate than the $\nu^{\mathrm{dt}}$ component. We therefore want to develop methods that retain the $\nu^{\mathrm{dt}}$ component for use in Eqs. (9) and (10) and either correct or replace $\nu^{\mathrm{sm}}$ [Eq. (9)] or $c^{\mathrm{sm}}$ [Eq. (10)]. We consider a few choices here.

The simplest approach is to let $\nu = \nu^{\mathrm{dt}}$ (i.e., set $\nu^{\mathrm{sm}} = 1$). This provides a reasonable NUC for a sensor that has fairly uniform response from side to side, and, in any case, might be adequate for applications that rely primarily on spatial processing. In many cases, though, this NUC would not be satisfactory. For example, it can be seen by comparing Fig. 2A with Fig. 2C that a WAR HORSE image processed with $\nu = \nu^{\mathrm{dt}}$ would appear darker on the sides than in the middle.

In most cases, a laboratory calibration will be available, even if its fine-scale quality is in question. This gives us the option of using a smoothed version of the laboratory calibration together with the detrended median-ratio NUC,

$$c = \left(\nu^{\mathrm{dt}}\right)_{\text{in-flight}} \left(c^{\mathrm{sm}}\right)_{\text{lab}}. \tag{11}$$

An alternative way to incorporate the laboratory calibration is to divide the scene-based NUC by a smoothed version of the ratio between this scene-based NUC and the pre-existing calibration,

$$c = \frac{\nu_{\text{in-flight}}}{\left(\nu_{\text{in-flight}}/c_{\text{lab}}\right)^{\mathrm{sm}}}. \tag{12}$$

If the sensor has not changed much since the last calibration was computed, then Eq. (12) is preferable to Eq. (11) because the fine-scale variability in $\nu_{\text{in-flight}}$ and $c_{\text{lab}}$ largely cancels out when computing the ratio $\nu_{\text{in-flight}}/c_{\text{lab}}$, making it relatively easy to smooth the result. On the other hand, Eq. (11) is likely to be preferable to Eq. (12) if the projection of the slit onto the CCD has shifted since the last calibration.
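The following sketch illustrates Eqs. (11) and (12). The moving-average smoother and its window size are placeholders; the paper does not specify the low-pass filter, and any reasonable smoothing along the sample direction should serve.

```python
import numpy as np

def smooth(a, window=51):
    """Low-pass filter along the sample axis with a simple moving average
    (illustrative choice; the paper does not specify the filter)."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, a)

def combine_with_lab(nu_inflight, c_lab, shifted=False):
    """Combine a scene-based median-ratio NUC with a laboratory calibration.
    Both arrays have shape (samples, bands).

    shifted=True  -- the slit/FPA alignment has changed: keep only the detrended
                     part of the in-flight NUC and the smoothed lab calibration [Eq. (11)].
    shifted=False -- divide the in-flight NUC by a smoothed version of its ratio
                     to the lab calibration [Eq. (12)].
    """
    if shifted:
        nu_dt = nu_inflight / smooth(nu_inflight)      # high-pass in-flight NUC
        return nu_dt * smooth(c_lab)                   # Eq. (11)
    return nu_inflight / smooth(nu_inflight / c_lab)   # Eq. (12)
```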

If no laboratory calibration or NUC exists, there are still a few choices. As we will show in Section 3, the straightforward median-ratio method of Eq. (8) can generate an excellent NUC by itself if enough lines are processed to generate good statistics. If the quality of the statistics is in question, an alternative approach is to combine the fine-scale component of the median-ratio NUC with the large-scale component of a mean-spectrum NUC using

$$\nu = \frac{\nu_{\text{median-ratio}}}{\left(\nu_{\text{median-ratio}}/\nu_{\text{mean-spec}}\right)^{\mathrm{sm}}}. \tag{13}$$

As a last resort, perhaps, the large-scale component of the NUC, $\nu^{\mathrm{sm}}$, could be computed as a polynomial fit, say, to a set of hand-selected points in the imagery that appear to be of the same material. This could then be combined with the fine-scale component of the median-ratio NUC in a manner analogous to Eq. (11).

2.6 Uniform-scene median-based NUC

If a statistically uniform scene or subscene is available, we can combine the pixel-ratio approach with the mean-based approach to obtain a globally accurate NUC from scratch. This can be done with the following steps:

1. For all frames, compute the ratio of the data spectrum at a trusted reference sample to the data spectrum at each sample, $r_{s,b,l} = x_{\mathrm{ref},b,l}/x_{s,b,l}$.

2. Compute the nonuniformity correction with respect to the reference sample; i.e.,

$$\nu_{s,b} = \mathrm{median}_l\!\left(\frac{x_{\mathrm{ref},b,l}}{x_{s,b,l}}\right). \tag{14}$$

This approach combines the advantage of not requiring the uniform area to be completely uniform with the advantage of eliminating the compounding error that can occur with the unconstrained median-ratio NUC. As a practical consideration, though, it is important that the reference sample be a trusted one (i.e., not a “bad pixel”, as described in Section 2.8). Also note that if the scene is not truly uniform, the NUC will be somewhat better near the reference point than it is further away from the reference point; therefore, the reference point should be near the center of the array.
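A minimal sketch of this referenced variant follows; as before, the array layout and function name are assumptions for the example.

```python
import numpy as np

def referenced_median_ratio_nuc(x, ref=None):
    """Uniform-scene median-based NUC of Section 2.6 [Eq. (14)].
    x is a dark-subtracted cube over a statistically uniform area,
    shape (lines, samples, bands); ref is a trusted reference sample."""
    lines, samples, bands = x.shape
    if ref is None:
        ref = samples // 2                 # reference near the center of the array
    r = x[:, ref:ref + 1, :] / x           # ratio of the reference sample to each sample
    return np.median(r, axis=0)            # Eq. (14), shape (samples, bands)
```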

2.7 Ratio statistics for large or multiple files

The methods discussed in Sections 2.3–2.6 rely on the median of the ratios of neighboring pixels. We expect that we will get the best results by using a large number of frames. Unlike the arithmetic mean, though, the median statistic cannot be updated frame-by-frame based on only current information; it requires the entire set of values over which the median is to be computed. We get around this practical limitation by storing only 400 ratio values for each band and sample. Initially, 100 of the entries are set to a low value (e.g., 0) and 100 are set to a high value (e.g., 10) and the remaining 200 entries are filled by the ratios of the first 200 lines. After every subsequent 200 lines, the 400 ratio values are sorted, the middle 200 are retained, and the other 200 slots are filled by the values from the next 200 lines. As more and more lines are processed, the retained values become more and more homogeneous, causing the median value to converge on a statistically accurate value. These retained values can be stored and reused from image to image so that the estimates can be refined over many images. An example is shown in Section 3.4. The choice of 200 for the number of lines retained was purely intuitive; we have not evaluated other choices.
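The bookkeeping described above could be implemented along the following lines. The 400/200/100 buffer sizes and the low and high sentinel values come from the text; the class structure and the requirement that data arrive in blocks of exactly 200 lines are simplifications made for this sketch.

```python
import numpy as np

class RunningMedianRatios:
    """Approximate running median of adjacent-sample ratios (Section 2.7).
    Only 400 candidate values are kept per (sample, band) pair."""

    def __init__(self, samples, bands, low=0.0, high=10.0):
        sentinels = np.concatenate([np.full(100, low), np.full(100, high)])
        self.buf = np.empty((400, samples - 1, bands))
        self.buf[:200] = sentinels[:, np.newaxis, np.newaxis]   # initial retained set

    def add_block(self, x_block):
        """Add a block of exactly 200 dark-subtracted lines, shape (200, samples, bands)."""
        self.buf[200:] = x_block[:, 1:, :] / x_block[:, :-1, :]  # new adjacent-sample ratios
        self.buf.sort(axis=0)
        self.buf[:200] = self.buf[100:300].copy()                # retain the middle 200 values

    def median_ratio(self):
        """Current median-ratio estimate for each (sample, band) pair."""
        return np.median(self.buf[:200], axis=0)
```

Because the retained values persist in the object, the same instance can be fed blocks from several consecutive images, refining the median estimates from file to file as described above.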

2.8 Bad-pixel maps

Regardless of the calibration technique being used, it is important to identify any bad pixels on the FPA and to interpolate across those pixels before computing any ratio values. For our purposes we define a bad pixel as any pixel that fails to respond (approximately) linearly with illumination. For example, a “dead” pixel may give the same output in the dark frame as it does when illuminated. Pixels at or near saturation will also fail to respond linearly. From anecdotal evidence, it appears that bad pixels are rarely a problem with visible-wavelength sensors but are a common problem with SWIR sensors. Bad pixels are labeled using a bad-pixel map, which is simply a binary mask file with the same dimensions as the camera frame that identifies which pixels are not to be trusted. The frequency at which the bad-pixel map needs to be updated is sensor-dependent.

For the unconstrained median-ratio calibration given by Eq. (8), an untreated bad pixel might cause an offset in the calibrated image from one side of the bad pixel to the other. For mean-based calibrations, a bad pixel causes a sharp vertical line to occur at the location of the bad pixel. These symptoms are easily seen in processed imagery, and pixels can be added to the bad-pixel map manually if necessary. Of course, we would rather find the bad pixels automatically. We tried two general methods to do so, one that evaluated the computed NUC and one that evaluated the statistics of the difference in the responses of neighboring pixels. In the first method, we removed the trend from the NUC so that the NUC values were all close to unity. We then labeled a pixel as being bad if

$$\left(1 - \nu^{\mathrm{dt}}\right)^2 > d, \tag{15}$$

where the threshold d is chosen by an analyst. The second method we used to find bad pixels relied on the average percentage difference between the responses of adjacent samples. A pixel was assumed to be bad if it failed to track at least one of its neighbors.
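Both tests could be sketched as follows. Equation (15) is implemented directly; for the neighbor-tracking test the paper does not give the exact statistic or threshold, so the version below uses the mean absolute fractional difference with each cross-track neighbor and flags a pixel only when it tracks neither neighbor (to avoid flagging good pixels that sit next to bad ones). The threshold values are placeholders.

```python
import numpy as np

def bad_pixels_from_nuc(nu_dt, d=0.04):
    """Eq. (15): flag pixels whose detrended NUC value is far from unity.
    The threshold d is analyst-chosen; 0.04 is only a placeholder."""
    return (1.0 - nu_dt) ** 2 > d

def bad_pixels_from_neighbors(x, threshold=0.5):
    """Neighbor-tracking test (second method of Section 2.8), illustrative only.
    x has shape (lines, samples, bands); returns a (samples, bands) mask."""
    # mean absolute fractional difference between each pair of adjacent samples
    diff = np.mean(np.abs(x[:, 1:, :] - x[:, :-1, :]) / np.abs(x[:, :-1, :]), axis=0)
    bad = np.ones(x.shape[1:], dtype=bool)
    bad[:-1] &= diff > threshold   # pixel fails to track its right-hand neighbor
    bad[1:] &= diff > threshold    # pixel fails to track its left-hand neighbor
    return bad
```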

2.9 The data sets

We evaluated the NUC methods described in Section 2 on four different hyperspectral data sets: VNIR data collected with the WAR HORSE sensor [4] in Maryland and Montana in July/August 2003, visible data collected with the Ocean PHILLS sensor in the Bahamas in 2000 [5,6], SWIR data collected with the IRON HORSE [4] sensor in Montana in 2003, and off-nadir SWIR data collected with the Diamond 1 system in Maryland in 2004 [7]. All four of these sensors have 1024 cross-image samples. The Ocean PHILLS has 128 spectral channels ranging from 380 to 1000 nm; WAR HORSE has 64 spectral bands over the range 434 to 1156 nm; IRON HORSE and Diamond 1 both have 168 bands over 815 to 2500 nm. We also evaluated our methods with data collected with the high-resolution panchromatic line scanner that was part of the WAR HORSE system in 2003. This data has 6000 samples and one spectral band.

We evaluated the quality of the nonuniformity corrections based on careful visual inspection of the images themselves, of their cross-sample and spectral profiles, and of principal-component projections of the images. We also compared the shapes of the scene-based NUCs and calibrations to the laboratory calibrations to verify the spectral integrity of the results.

3. Experimental Results

3.1 Ocean PHILLS data

The imagery collected by the Ocean PHILLS in the Bahamas in 2000 contained a combination of islands and very shallow and optically clear water. The land areas were primarily beaches and dense shrub. Submerged seagrass beds and submerged sand shoals were prevalent in the water areas. These land and water features were typically hundreds of pixels in extent. On this deployment there was a shift in the alignment between the camera’s entrance slit and the CCD between the time that the sensor was calibrated and the time that data was collected [6]. Although the laboratory calibration provided a decent global (large spatial scale) calibration $c^{\mathrm{sm}}$, the fine-scale NUC $\nu^{\mathrm{dt}}$ was very poor.

The only bad pixels in the Ocean PHILLS were located within three samples of the sides of the FPA. All processing of the PHILLS data ignored the first and last three samples of the data; no additional bad-pixel masking was necessary.

The data was stored in files of 1024 lines each. The median-ratio NUC [Eq. (8)] computed over 1024 lines provided excellent results for most of the wavelength bands. However, it did exhibit some gradual error propagation from the center to the sides at the higher bands (where the sensor had relatively low signal-to-noise). On the other hand, when we evolved the sample-ratio statistics over an entire flight line (approximately 10,000 lines; retaining 400 ratio values at a time, as described in Section 2.7), we obtained an excellent NUC for all usable bands in the PHILLS data. Our basis for evaluation in this case is that the images looked visually superb and the general shape of the NUC closely matched that of the laboratory calibration for all bands. Note, though, that this NUC provides a relative calibration but not a conversion to radiance units.

We obtained the best radiometric calibration for this data set by using Eq. (11) to combine the median-ratio NUC with the laboratory calibration. We did this using only 1024 lines of data; it was not necessary to collect statistics over more lines of data when using Eq. (11) because the accumulation of error from Eq. (8) is removed during the detrending (high-pass) operation. One added complication, though, came from the fact that the PHILLS sensor used different gain settings for the two halves of the FPA, causing a discontinuity in the calibration at the center of the array. It was therefore necessary to smooth (low-pass) the two sides of the laboratory calibration separately and, likewise, to detrend (high-pass) the two sides of the median-ratio NUC separately. As a result, the calibration we obtained with Eq. (11) was slightly noisy near the center samples. Fortunately, this was very easy to repair using the method described in Section 2.4. Shown in Fig. 3 is a comparison of the laboratory calibration and our updated calibration for band 9. A spatial shift of approximately nine samples in alignment is evident between the two, as was previously reported [6]. Shown in Fig. 4 is an example image calibrated with the two calibrations shown in Fig. 3.

Fig. 3. Laboratory (black) and updated (red) calibration for band 9 of the Ocean PHILLS.

Fig. 4. Example Ocean PHILLS image with the laboratory calibration (left) and the median-ratio computed calibration (right).

While the patchy nature of the PHILLS imagery was conducive to good results with the median-ratio NUC methods, it was far from ideal for the mean-spectrum NUC method. The combination of bright beaches and dark water caused the mean-spectrum NUC method great difficulty. In the entire data set, there was only one sequence of about 1000 frames that could be considered uniform; all other images contained combinations of irregularly shaped regions of land and water. The one uniform sequence of relatively deep water was used to generate the mean-spectrum NUC which was used to process the data for benthic studies [8,9]. This NUC performed fairly well at the shorter wavelengths; however, the water scene contained low signal at the longer wavelengths and therefore did not provide good statistics there.

3.2 WAR HORSE hyperspectral data

The WAR HORSE imagery collected in Montana in July/August 2003 consisted primarily of rural mountainous regions. There were both large-scale illumination changes due to terrain and small-scale natural variability due primarily to individual trees, rocks, and their shadows.

For this data set there was only one bad pixel (at band 25, sample 32), and this was trivially easy to detect with Eq. (15) as an outlier in the NUC. Data at this FPA element was replaced with the average of the data from the two adjacent samples.

The WAR HORSE system was fairly stable over the period of the 2003 deployment. The laboratory calibration worked fairly well overall; however, the calibrated imagery showed a handful of faint vertical lines, ranging from 1 to 6 pixels in thickness. This made this data set an excellent candidate for the localized calibration repair described in Section 2.4. The widest of the dark lines, centered at sample 565, can be seen in the subscene (200 samples wide) shown in Fig. 5B. Shown in Fig. 5A is a comparison of band 32 of the NUC before and after repair of samples 558 to 572. It can be seen that this was an area of relative darkness in the WAR HORSE sensor that for some reason became more pronounced during the flight test. Shown in Fig. 5C is the Fig. 5B subimage after repair. This repair was very easy to do, and we were able to quickly repeat the process for all of the regions that noticeably required repair.

In addition to repairing the existing WAR HORSE calibration, we also computed a new median-ratio NUC entirely from the field data using Eq. (8). The images processed with this NUC looked very good over relatively small spatial scales, but did not provide a consistent apparent brightness level over the entire width of the image; the images appeared brighter in the middle than at the sides. For a given number of lines used, the slow accumulation of error in Eq. (8) was much more pronounced with this data set than it was with the Ocean PHILLS data set. We believe that the reason for this is that the Montana WAR HORSE imagery has much higher spatial variability than the PHILLS data.

Fig. 5. Example repair of the radiometric calibration for the WAR HORSE sensor. Part A shows a region of the calibration for band 32 before and after repair. Parts B and C show ‘before’ and ‘after’ images (200 samples wide; RGB = bands 32, 24, and 10) in the region of the image near the repair.

The next step we took was to combine the median-ratio NUC with the laboratory calibration using Eq. (12). The results of this approach were excellent, providing a calibration that was superior to the one obtained by repairing select portions of the laboratory calibration. This improvement is especially noticeable in the relatively noisy bands (e.g., bands 2 and 63), where vertical lines are visible in the laboratory-calibrated imagery but not in the scene-based calibrated imagery.

There were no spatially uniform areas in the entire Montana data collect from which to construct a mean-spectrum NUC [Eq. (5)] or a referenced median-ratio NUC [Eq. (14)]. Therefore we constructed these NUCs using typical scenes. As with the PHILLS data, the results were generally poor. Occasional patches of snow and mountain lakes mixed in with the typical fields, rocks, and forest prevented the average spectra of the 1024 samples from converging adequately. Furthermore, neither one of these methods can provide a radiometric calibration.

3.3 High-resolution panchromatic data

In 2003 the WAR HORSE system included a Dalsa line scanner that provided 6000 samples of panchromatic visible data. No laboratory calibration was available for this sensor. The data analyzed was collected in rural southwestern Montana in 2003, as discussed in Section 3.2.

We found the mean-spectrum NUC to give poor results when applied over 3000 lines of data. In fact, this NUC produced data that was less preferable than the uncorrected data. The reason for this is that the imagery was collected in a mountainous region where there are large-scale illumination changes. The mean-spectrum NUC causes samples that contain primarily shaded areas to be too bright and samples that contain primarily sun-lit areas to be too dark. Shown in Fig. 6 is an example image before and after the mean-spectrum NUC.

Fig. 6. Dalsa image before (left) and after (right) a mean-spectrum NUC.

The straightforward median-ratio NUC [Eq. (8)] using 3000 lines of data provided images that were visibly improved over the uncorrected data; however, the general magnitude of the image was not consistent over the entire 6000 samples. The best result, therefore, was a detrended (high-pass) version of the median-ratio NUC. Shown in Fig. 7 is a small part of a Dalsa image before and after the application of the detrended median-ratio NUC.

Fig. 7. Dalsa subimage (97 samples wide) before and after a detrended median-ratio NUC.

3.4 Diamond 1 data

The Diamond 1 (D1) SWIR data analyzed was collected in Southern Maryland in 2004. The imagery was obtained at highly oblique angles (approximately 75 deg. off-nadir). The imagery consisted primarily of forests, fields, and rivers with occasional roads, buildings, and automobiles. The images were typically between 300 and 500 lines each.

Building the bad-pixel map for this data was challenging. Because of the atmospheric absorption features, there was a wide variation in the signal-to-noise levels across the spectral bands. Furthermore, the image scans were relatively short (usually less than 500 lines), which is somewhat insufficient for good statistics. As a result, both methods for identifying bad pixels flagged far more pixels in the noisy channels than they did in the channels with high atmospheric transmission. This could be remedied somewhat by collecting statistics over multiple images and by relaxing the bad-pixel threshold for the noisier bands. In practice, though, we were not interested in using the noisy bands anyway, so it was not considered to be a problem to incorrectly label many of these pixels as being bad pixels. Between the two methods for finding bad pixels, we found the ratio approach to be far more practical. While this method generates some “false alarms”, especially in the noisy bands, these tended to be isolated pixels. In contrast, bad pixels identified directly from the NUC produced a few clusters containing many false alarms.

No radiometric calibration was available for this data set. A pre-deployment NUC was computed in the laboratory, but this produced such poor results in the field that it was of no practical value. We therefore used scene-based mean-spectrum NUCs during the data collection. It was impractical to try to identify any uniform scenes for this purpose, so the NUC was computed individually for each image. Although this provided much better imagery than either the raw data or the laboratory-NUC data, these mean-spectrum NUCs were visibly poor, causing streaks or shadows in the imagery in places where the landscape changed from trees to fields or from land to water. In contrast, we found that the median-ratio NUC gave excellent results at all the wavelength bands of interest (i.e., those bands with high atmospheric transmission). Our classification algorithms worked very well on data that was calibrated with this median-ratio NUC alone; we did not find it necessary to apply a high-pass filter to the data or to combine the scene-based NUC with the laboratory NUC. Shown in Fig. 8 is an image that was corrected with the median-ratio NUC compared with the same image corrected with a mean-spectrum NUC.

Fig. 8. Example Diamond 1 image corrected with an in-scene mean-spectrum NUC (left) and the same image corrected with an in-scene median-ratio NUC (right).

The reason for the high success of the median-ratio NUC in the D1 imagery is the high correlation among neighboring samples. As seen in Fig. 8, the spatial scales of variation in the scenes tended to be large compared with the resolution of the imagery. (The patchy nature of the imagery is also why the mean-spectrum NUC tended to perform poorly.) In addition, the D1 sensor was experiencing high-frequency jitter during its scanning that blurred the imagery, eliminating any sharp contrasts between neighboring pixels.

Because the D1 images were frequently less than 500 lines each, it was sometimes desirable to compute the median-ratio NUC using ratio values from several consecutive images. Fig. 9 shows an example of how the 200 stored ratio values for any given band and sample converge toward homogeneous values (as described in Section 2.7) as more files are used to compute the statistics. Note that the 100th sorted value in Fig. 9 represents the median value used in the NUC.

3.5 IRON HORSE data

The IRON HORSE spectrometer is similar to that in the D1. As with the D1 data, it was important to identify bad pixels before computing the NUC. We used the same approach as with the D1 data and iteratively reduced the threshold for bad-pixel detection until all bad pixels were removed as determined by visual inspection of the processed data and of its statistics.

Fig. 9. Example sets of sorted ratio values (at band 21, sample 201) for four consecutive D1 images. The ratio values become more and more homogeneous as more data is processed, causing the median value to converge.

The IRON HORSE data was collected simultaneously with the WAR HORSE data in 2003, and the two sensors had approximately the same spatial resolution. Therefore, despite covering different regions of the spectrum, the scene-based NUC results for IRON HORSE were essentially the same as already described for WAR HORSE. The primary difference between the two data sets is that the unmodified laboratory calibration for IRON HORSE was quite poor; data calibrated with the laboratory calibration appeared noisy and contained many negative values. As with WAR HORSE, the straightforward median-ratio NUC produced locally smooth but globally inaccurate imagery, whereas the combination of this NUC with the laboratory calibration using Eq. (12) produced excellent results.

4. Discussion

Despite the expectation that laboratory calibrations should provide the best method for processing optical and SWIR pushbroom imagery, our experience has been that the sensor response frequently changes sometime during transit, installation in the plane, and/or the rise to altitude. For example, only one of the five data sets investigated here had a laboratory calibration that was good enough to be used to process the data, and even that produced visible blemishes in the imagery.

Traditionally, when an adequate calibration has not been available, we have used an in-scene mean-spectrum NUC [Eq. (5)]. Although it is common knowledge that this NUC method should only be used on uniform regions of the imagery, this is usually an unrealistic restriction, especially in real-time applications. In the case of the WAR HORSE and IRON HORSE deployments in Montana, there were simply no uniform regions available. In the case of the Ocean PHILLS and Diamond 1 deployments there were rare regions of deep water that could be used for this purpose; however, water is not a good reflector at red and SWIR wavelengths and therefore does not provide a good scene for NUC computations at these bands. Even if an earnest attempt is made to find uniform areas on land, as we have occasionally done post-deployment, it is unlikely that any region in a real image is truly uniform; assuming so introduces biases into the calibrated data. These practical difficulties are compounded if the response of the system changes during a deployment (e.g., if a new dust particle or drop of condensation appears on the sensor’s slit).

When computing a NUC from in-flight data, we feel that it is most realistic to assume that neighboring pixels will see a similar scene but to make no assumptions about the overall uniformity of the scene. We found that the median-ratio NUC is very good locally (over cross-track scales of hundreds of samples) but can be poor globally (over thousands of samples) if not enough data is used to compute the statistics. We have developed ways to remove the large-scale error while retaining the good fine-scale performance.

Our recommendations for the in-flight calibration or NUC of optical pushbroom sensors are as follows. If a laboratory calibration that is well aligned with the current state of the sensor is available, then we recommend that the fine-scale component of the median-ratio NUC be combined with the large-scale component of the laboratory calibration using Eq. (12). This provided the best calibration for the WAR HORSE data set. If a laboratory calibration is available but it is poorly aligned with the current state of the sensor, then we recommend that the fine-scale component of the median-ratio NUC be combined with the large-scale component of the laboratory calibration using Eq. (11). This was the approach of choice for the PHILLS data set. If no laboratory calibration is available, an unaltered median-ratio NUC will in some cases be sufficient; however, it should be validated by an analyst. This NUC was sufficient for both the Diamond 1 and Ocean PHILLS data sets, provided enough lines of data were used to compute the statistics. If the large-scale shape of the NUC does not appear to be correct, or if no analyst evaluation is possible, then the choices are to either use the detrended version of the NUC or to combine the median-ratio NUC with a mean-spectrum NUC using Eq. (13). A detrended median-ratio NUC was the best solution for the Dalsa panchromatic data set.

Regardless of the method chosen, any localized irregularities in a NUC can be repaired using sample ratios as described in Section 2.4. The endpoints defining the region to be repaired should be chosen by trial and error so that the slope of the linear correction is minimized.

Because the arithmetic mean is much easier to work with than the median function, it is tempting to replace the median in Eq. (6) with the mean, producing a mean-ratio NUC method. However, results we obtained from this method were clearly inferior; the outlier ratio values that exist near object boundaries in the scene have a large impact on the mean value but not on the median value.

None of the NUC methods can properly handle an unmasked bad pixel on the FPA. While a single unidentified bad pixel causes only a single bad point on a mean-spectrum NUC, the same single bad pixel might potentially affect several pixels in the detrended median-ratio NUC. This is not a concern for sensors that do not suffer from bad pixels, such as the PHILLS and WAR HORSE sensors, but it can potentially cause problems for SWIR sensors like IRON HORSE and Diamond 1. We were disappointed in our ability to generate bad-pixel maps for the two SWIR sensors in a fully automated way. Although our methods did successfully identify most of the bad pixels, they could not detect all of them without labeling some good pixels as bad. Therefore we set the decision thresholds loosely so that some, but not all, of the bad pixels were identified automatically and then we added the remaining bad pixels manually. This is a topic that calls for further research.

5. Conclusions

Our experience and results show that the best calibration for VNIR/SWIR pushbroom sensors can be obtained by performing a laboratory radiometric calibration and then updating this with flight data using Eqs. (8) and (12). The primary advantage of this median-ratio approach over the commonly used mean-spectrum NUC [Eq. (5)] is that it is not necessary to compute the NUC with spatially uniform imagery. If no attempt is made to select a uniform scene for computing the NUC, the median-ratio NUC can dramatically outperform the mean-spectrum NUC. The disadvantage of the median-ratio NUC is the increased computation time; however, this is unlikely to be a practical limitation to real-time implementation. A second disadvantage of the median-ratio NUC is that it may be more adversely affected by any unidentified bad pixels in a SWIR FPA. Modifications to our median-ratio approach are required for the cases of a poor-quality or non-existent laboratory calibration; our recommendations for these cases are provided in Section 4.

Acknowledgments

We thank Eric Allman for his assistance with the Montana data and Bill Snyder and Megan Carney for their help in processing the Ocean PHILLS data. We also acknowledge Fred Olchowski and the many others who participated in the collection of the data sets used in this research.

References and links

1. D. A. Scribner, K. A. Sarkay, J. T. Caldfield, M. R. Kruer, G. Katz, and C. J. Gridley, “Nonuniformity correction for staring focal plane arrays using scene-based techniques,” in Infrared Detectors and Focal Plane Arrays , E. L. Dereniak and R. E. Sampson, eds., Proc. SPIE 1308, 224–233 (1990).

2. S. N. Torres, J. E. Pezoa, and M. M. Hayat, “Scene-based nonuniformity correction for focal plane arrays by the method of the inverse covariance form,” Appl. Opt. 42, 5872–5881 (2003).

3. B. M. Ratliff, M. M. Hayat, and J. S. Tyo, “Radiometrically accurate scene-based nonuniformity correction for array sensors,” J. Opt. Soc. Am. A 20, 1890–1899 (2003).

4. C. M. Stellman, F. M. Olchowski, G. G. Hazel, E. C. Allman, and M. L. Surratt, “WAR HORSE and IRON HORSE at Camp Shelby - Data Collection and Associated Processing Results,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery IX , S. S. Shen and P. E. Lewis, eds., Proc. SPIE 5093, 94–103 (2003).

5. C. O. Davis, J. Bowles, R. A. Leathers, D. Korwan, T. V. Downes, W. A. Snyder, W. J. Rhea, W. Chen, J. Fisher, W. P. Bissett, and R. A. Reisse, “Ocean PHILLS hyperspectral imager: design, characterization, and calibration,” Opt. Express 10, 210–221 (2002).

6. R. A. Leathers, T. V. Downes, W. A. Snyder, J. H. Bowles, C. O. Davis, M. E. Kappus, M. A. Carney, W. Chen, D. Korwan, M. J. Montes, and W. J. Rhea, “Ocean PHILLS Data Collection and Processing: May 2000 Deployment, Lee Stocking Island, Bahamas,” U. S. Naval Research Laboratory technical report NRL/FR/7212--02-10,010 (Available from the Defense Technical Information Center) (2002).

7. J. N. Lee, M. R. Kruer, D. C. Linne von Berg, J. G. Howard, F. Olchowski, M. D. Duncan, E. J. Stone, R. A. Leathers, and T. V. Downes, “Sensor Fusion for Long-Range Airborne Reconnaissance,” Photonic Applications, Systems and Technologies (PhAST) Conference, OSA, Baltimore, Maryland, 24–26 May 2005.

8. H. M. Dierssen, R. C. Zimmerman, R. A. Leathers, T. V. Downes, and C. O. Davis, “Ocean color remote sensing of seagrass and bathymetry in the Bahamas Banks by high-resolution airborne imagery,” Limnol. Oceanogr. 48, 444–455 (2003).

9. E. M. Louchard, R. P. Reid, C. F. Stephens, C. O. Davis, R. A. Leathers, T. V. Downes, and R. Maffione, “Derivative analysis of absorption features in hyperspectral remote sensing data of carbonate sediments,” Opt. Express 10, 1573–1584 (2002).
