## Abstract

In a recently demonstrated algorithmic spectral-tuning technique by Jang *et al*. [Opt. Express **19**, 19454–19472 (2011)], the reconstruction of an object’s emissivity at an arbitrarily specified spectral window of interest in the long-wave infrared region was achieved. The technique relied upon forming a weighted superposition of a series of photocurrents from a quantum dots-in-a-well (DWELL) photodetector operated at discrete static biases that were applied serially. Here, the technique is generalized such that a continuously varying bias voltage is employed over an extended acquisition time, in place of a series of fixed biases applied over sub-acquisition times, which totally eliminates the post-processing step comprising the weighted superposition of the discrete photocurrents. To enable this capability, an algorithm is developed for designing the time-varying bias for an arbitrary spectral-sensing window of interest. Since continuous-time biasing can be implemented within the readout circuit of a focal-plane array, this generalization paves the way for implementing algorithmic spectral tuning in focal-plane arrays within each frame time without the need for on-sensor multiplications and additions. The technique is validated by means of simulations in the context of spectrometry and object classification, using experimental data for the DWELL under realistic signal-to-noise ratios.

©2012 Optical Society of America

## 1. Introduction

In multispectral (MS) and hyperspectral (HS) infrared (IR) sensing, the spectral information of an object is traditionally captured through dispersive optics or MS/HS optical-filter wheels. In recent years, our group has developed the quantum dots-in-a-well (DWELL) photodetector [1,2], which offers electrically controlled spectrally tunable responses in the long-wave IR (LWIR: 8-12 μm) region. The bias-controlled tunability is a result of the quantum-confined Stark effect [3]. Figure 1 shows bias-dependent spectral responses of a DWELL photodetector developed by our group. (We will use the spectral data of this device throughout this paper as we demonstrate the sensing algorithms to be developed.) Specifically, a single DWELL photodetector can perform the task of a MS IR detector by changing its bias voltage without requiring optical-filter wheels. The DWELL’s spectral tunability, as it stands, however, is not sufficient to provide the high resolutions required by many spectral-sensing problems.

To extend the MS capability of the DWELL photodetector, the DWELL’s bias-controlled spectral tunability was substantially enhanced by means of a post-processing technique, termed here the spectral tuning (ST) algorithm [4–6]. The extended MS capabilities demonstrated by the ST algorithm include high-resolution, narrowband spectral filtering, as well as object spectrometry and classification [7,8]. We emphasize that none of these capabilities involved the use of spectral filters. The underlying principle of the ST algorithm is to sense an object with the DWELL photodetector sequentially at prescribed bias voltages, yielding a set of bias-dependent photocurrents. The ST algorithm then forms a linear superposition of the photocurrents with a set of weights to reconstruct the emissivity of the object at a given wavelength. Each set of weights is designed by the ST algorithm for a specific spectral filter of interest. For each spectral filter of interest, the so-called superposition photocurrent best approximates the ideal photocurrent that would have been obtained using a combination of a broadband detector and the desired spectral filter. To date, the ST algorithm has been developed and demonstrated using a discrete set of static biases. The three data-processing steps involved are the calculation of the weights corresponding to each spectral tuning filter, the multiplication of the weights with the sensed photocurrents, and the superposition of the weighted photocurrents to yield the superposition photocurrent.

In this paper we develop the concept for a novel implementation of the ST algorithm within the readout integrated circuit (ROIC) of a DWELL-based focal-plane array (FPA) without algebraically multiplying or adding photocurrents. Motivated by how a trans-impedance-based ROIC works, namely by feeding the photocurrent at each bias into an integrating capacitor that yields an integrated photocurrent (charge) for each integration time [9–12], the idea here is to absorb both the multiplications (by weights) and the additions of the ST algorithm in the photocurrent-integration process by appropriately adjusting the bias of the DWELL continuously in time within an extended integration time. For example, if we have only two photocurrents, corresponding to two bias levels *v*_{a} and *v*_{b}, with infinite signal-to-noise ratios (SNRs), multiplying the first and second by the weights 1 and 5, respectively, and summing the two can be done in one step via the integration of the first photocurrent over a certain duration followed by the integration of the second photocurrent over five times the integration time of the first one, while keeping the total integration time fixed. In this simple example the bias is held constant at level *v*_{a} for one unit of time and then changed to level *v*_{b} for five units of time, as the dynamic photocurrent is integrated continuously over the extended acquisition time. To achieve this effect for more general superposition schemes while incorporating the effects of SNRs, we will need to generalize the ST algorithm to allow for continuous, time-varying biases within a fixed integration time. As a result of the generalization, the algorithm will yield, for each desired spectral filter, a time-varying bias waveform.

Since the weights that are to be absorbed in the time-varying bias can be positive or negative, two waveforms are designed that together span the total integration time: a “positive” waveform corresponding to the positive weights and a “negative” waveform corresponding to the negative weights. The integrated photocurrent corresponding to the “positive” waveform is added to the negative of the integrated photocurrent corresponding to the “negative” waveform, yielding the subtracted photocurrent. With this approach, the superposition photocurrent representing the spectral measurement is directly extracted from the ROIC [13,14], as the ROIC can be configured to apply the positive and negative bias waveforms sequentially and subtract the two integrated photocurrents at each detector in the FPA. This paper focuses on the algorithmic aspects of the proposed ST technique; the ROIC-based implementation will be reported elsewhere.
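The two-photocurrent example above can be sketched numerically. The photocurrent values below are hypothetical; only the weights 1 and 5 are taken from the example:

```python
# Two-photocurrent example: a weighted sum of static-bias photocurrents equals
# a single continuous integration in which each bias is held for a duration
# proportional to its weight (infinite-SNR idealization).
I_a, I_b = 2.0, 3.0        # photocurrents at biases v_a and v_b -- assumed values
w_a, w_b = 1.0, 5.0        # superposition weights from the example

# Post-processing route: multiply and add.
superposition = w_a * I_a + w_b * I_b

# Continuous-bias route: hold v_a for one unit of time and v_b for five units,
# integrating the dynamic photocurrent throughout; no multiplications needed.
dt = 1.0
integrated_charge = I_a * (w_a * dt) + I_b * (w_b * dt)
```

Both routes yield the same quantity (up to the fixed scale factor Δ*t*), which is what allows the weighting to be moved into the integration process.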

The remainder of this paper is organized as follows. In Section 2 we review the concept of the spectral tuning (ST) algorithm and its application to two representative MS sensing problems, object spectrometry and classification. In Section 3 we develop the theory behind the generalized spectral tuning (GST) algorithm, which embeds the superposition weights in a continuous, time-varying bias subject to an integration-time constraint. In Section 4 the algorithm is validated, followed by the conclusions in Section 5.

## 2. Review of the spectral tuning algorithm

In this section, we begin by briefly reviewing germane aspects of the ST algorithm, drawing freely from our earlier work [5]. We consider an object of interest, *f*, whose emissivity in the LWIR region is denoted by *e*(λ). Suppose that a DWELL photodetector is used to probe the object illuminated by a blackbody at the bias voltages, *v*_{1},…, *v*_{m}, yielding a set of bias-dependent photocurrents, *I*_{1},…, *I*_{m}. In principle, the photocurrent *I*_{k} corresponding to the *k*th bias can be expressed as an inner product between the emissivity of the object and the DWELL’s spectral response at that bias, together with bias-dependent noise [5,9,15],

$${I}_{k}=\frac{{A}_{\mathrm{det}}\Omega }{q}\int e(\lambda ,T){M}_{p}(\lambda ,T){R}_{k}(\lambda ){\tau}_{filt}(\lambda ){\tau}_{window}(\lambda )\,d\lambda +{N}_{k}, \qquad (1)$$

where

- $\Omega $ - Solid angle
- $e(\lambda ,T)$ - Scene emissivity at temperature *T*
- ${M}_{p}(\lambda ,T)$ - Planck function at temperature *T*
- ${R}_{k}(\lambda )$ - DWELL spectral response at the *k*th bias
- ${\tau}_{filt}(\lambda )$ - Spectral window transmission, if used
- ${\tau}_{window}(\lambda )$ - Window transmission
- ${A}_{\mathrm{det}}$ - Detector area
- ${N}_{k}$ - Noise associated with ${R}_{k}(\lambda )$
- $q$ - Electron charge.

This general formula can be simplified to

$${I}_{k}=C\int e(\lambda ){R}_{k}(\lambda )\,d\lambda +{N}_{k},$$

where the scene is assumed to be at a fixed temperature *T*, simplifying the term $e(\lambda ,T){M}_{p}(\lambda ,T)$ to *e*(λ), and the constant *C* combines all the remaining factors. The term *N*_{k} represents the total noise, which includes one or more of the following components: generation-recombination (G-R) noise, shot noise, Johnson noise, and 1/*f* noise. The noise varies with the operating temperature and bias voltage of the detector and is assumed independent from detector to detector. In the ST algorithm [5], *N*_{k} in Eq. (1) is embedded into the integral, so that a noisy spectral response at each bias is identified. Such noisy spectral responses are then incorporated into the algorithm to find a weight vector that best approximates a desired spectral shape in the sense of minimizing the wavelength-integrated mean squared error. The solution for the weight vector is provided in Eq. (2) and the detailed derivation can be found in [5]. Note that the bias-dependent variance of the noise is embedded in the solution of the weight vector through the signal-to-noise matrix **Φ**.

_{k}We specify the transmittance of a *desired* tuning filter, *r*(*λ*;*λ _{n}*), that would be used to estimate

*e*(λ) at the tuning wavelength

*λ*. For

_{n}*r*(

*λ*;

*λ*), the ST algorithm [5,6] calculates a weight vector,

_{n}**w**

*= [*

_{n}*w*

_{1},…,

*w*] using Eq. (2) below, which yields the

_{m}*algorithmic tuning filter*$\widehat{r}(\lambda ;{\lambda}_{n})$. The weights are derived so that the algorithmic tuning filter $\widehat{r}(\lambda ;{\lambda}_{n})$best approximates the hypothetical tuning filter

*r*(

*λ*;

*λ*). The weight vector

_{n}**w**

*is calculated using the formula [5]*

_{n}**A**is the matrix of DWELL’s spectral responses [

*R*

_{1}(λ),…,

*R*(λ)]

_{m}*and*

^{T}**Φ**is a matrix that includes the SNR term. Each measurement SNR

*corresponding to the*

_{k}*k*th bias was calculated using the formula [6]where

*I*is the averaged photocurrent of DWELL under illumination from a black body source and ${\sigma}_{N,k}$ is the noise power, which was calculated empirically from the dark current realizations by Poisson noise model [6,9]. The inclusion of SNR term reduces the noise accumulation in the linear superposition. The term

_{k}*α*

**Α**

^{Τ}**Q**

^{Τ}**QΑ**is a regularization term, which penalizes spurious fluctuations in the approximation. The matrix

**Q**is a Laplacian operator and

*α*is the regularization weight. Then the weight vector

**w**

*is linearly synthesized with the photocurrents,*

_{n}*I*

_{1},…,

*I*, yielding the superposition photocurrent $\widehat{I}$ as expressed byThe superposition photocurrent, $\widehat{I}$ , is usually computed with positive and negative signs since weights in

_{m}**w**

*can be either positive or negative. The synthetic photocurrent, $\widehat{I}$, represents the captured output by $\widehat{r}(\lambda ;{\lambda}_{n})$, which best reconstructs the emissivity*

_{n}*e*(λ) at

*λ*that we would have obtained by the DWELL photodetector looking at the object through an ideal spectral filter

_{n}*r*(λ;λ

*). Hence, our algorithmic tuning filter $\widehat{r}(\lambda ;{\lambda}_{n})$ [5–8] is functionally equivalent to the effect of an optical filter in IR multispectral sensing.*
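As an illustration of the weight design of Eq. (2), the sketch below builds synthetic Gaussian responses for four biases, a triangular target filter, and a diagonal SNR-derived matrix **Φ**; the diagonal inverse-squared-SNR form of **Φ**, the regularization weight, and all numerical values are assumptions for illustration, not the measured DWELL quantities:

```python
import numpy as np

# Illustrative weight design in the spirit of Eq. (2); synthetic responses.
n_lam, m = 200, 4
lam = np.linspace(8.0, 12.0, n_lam)                 # LWIR wavelength grid (um)

# A: columns are broad, overlapping spectral responses R_k(lambda) -- assumed
# Gaussians standing in for the measured DWELL responses.
centers = [8.6, 9.4, 10.2, 11.0]
A = np.column_stack([np.exp(-0.5 * ((lam - c) / 0.9) ** 2) for c in centers])

# Desired narrowband triangular tuning filter r(lambda; lambda_n).
lam_n, width = 8.8, 0.5
r = np.clip(1.0 - np.abs(lam - lam_n) / (width / 2.0), 0.0, None)

# Phi: assumed diagonal form built from the per-bias SNRs quoted in Section 4.
snr = np.array([397.8, 135.9, 155.3, 122.2])
Phi = np.diag(1.0 / snr ** 2)

# Q: discrete Laplacian (second difference) acting along the wavelength axis.
Q = -2.0 * np.eye(n_lam) + np.eye(n_lam, k=1) + np.eye(n_lam, k=-1)
alpha = 1e-3                                        # regularization weight (assumed)

# w_n = (A^T A + Phi + alpha * A^T Q^T Q A)^{-1} A^T r
w = np.linalg.solve(A.T @ A + Phi + alpha * A.T @ Q.T @ Q @ A, A.T @ r)
r_hat = A @ w                                       # algorithmic tuning filter
```

The resulting `r_hat` is the regularized least-squares approximation of the triangular target achievable with the four synthetic bands; with measured responses in place of the Gaussians, the same solve yields the weight vectors used in Section 4.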

## 3. Generalized spectral tuning algorithm

In this section, we describe the generalized spectral tuning algorithm, which achieves continuous time-varying biasing under an acquisition-time constraint. Our solution to the generalization is based upon a discrete-time approximation of the continuous-time problem.

Any continuous time-varying function can be approximated by a piecewise-constant function with jumps occurring at fixed time increments; an example is shown in Fig. 2 .

For an arbitrary bias function *V*(*t*), *t* ∈ [0, *α*], let *I*_{V}(*t*) represent the dynamic photocurrent of the DWELL when it is driven by the time-varying bias waveform *V*. Now consider a desired spectral filter *f* that we can approximate with a superposition spectral filter according to the bias set B(*f*) = {*B*_{1}(*f*),…, *B*_{k}(*f*)}, each bias applied for a duration Δ*t*, such that

$$\widehat{r}(\lambda ;f)={\displaystyle \sum _{i=1}^{k}{w}_{i}{R}_{{B}_{i}(f)}\text{(}\lambda \text{)}}, \qquad (5)$$

where *w*_{i} and ${R}_{{B}_{i}(f)}\text{(}\lambda \text{)}$ are the weight and the spectral response of the DWELL at *B*_{i}(*f*), respectively.

Based on this piecewise-constant approximation of the time-varying bias, the superposition photocurrent $\widehat{I}$ in Eq. (4) can be reinterpreted as the integration of weighted photocurrents over [0, *α*]. As such, Eq. (4) can be recast as

$$\widehat{I}=\frac{1}{\Delta t}{\displaystyle {\int }_{0}^{\alpha }{w}_{f}(t)\,{I}_{\text{B}(f)}(t)\,dt}, \qquad (6)$$

where the piecewise-constant weight function *w*_{f}(*t*) and the piecewise-constant photocurrent ${I}_{\text{B}(f)}(t)$ are defined as *w*_{f}(*t*) = *w*_{i} and ${I}_{\text{B}(f)}(t)$ = *I*_{i} for *T*_{i-1} ≤ *t* ≤ *T*_{i}, with Δ*t* = *T*_{i} - *T*_{i-1}, *i* = 1,…, *k*.
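The piecewise-constant recast above can be verified numerically: integrating the product *w*_{f}(*t*)·*I*_{B(f)}(*t*) over the acquisition time and dividing by Δ*t* reproduces the discrete weighted superposition. All values below are illustrative:

```python
import numpy as np

# Numeric check of the piecewise-constant recast: a Riemann sum of
# w_f(t) * I_B(f)(t) over [0, alpha], divided by dt, equals sum_i w_i * I_i.
w = np.array([1.5, -0.7, 2.2, -1.1])      # weights (assumed)
I = np.array([0.8, 1.3, 0.5, 0.9])        # static-bias photocurrents (assumed)
k, dt = len(w), 0.25
alpha = k * dt                            # total acquisition time

n_per = 1000                              # grid points per bias slot
slot = np.repeat(np.arange(k), n_per)     # piecewise-constant index for w_f, I_B(f)
dt_grid = alpha / (k * n_per)             # Riemann-sum grid spacing
integral = np.sum(w[slot] * I[slot]) * dt_grid

I_hat = integral / dt                     # recovers the weighted superposition
```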

Motivated by the form of Eq. (6), we can further extend it to find $\widehat{I}$ without performing multiplications and superpositions with *w*_{f}(*t*), as expressed by

$$\widehat{I}=\frac{1}{\Delta t}{\displaystyle {\int }_{0}^{\tau }{I}_{\widehat{B}(f)}(t)\,dt}, \qquad (7)$$

where $\widehat{B}(f)$ is a modified time-varying bias waveform that absorbs the weight function *w*_{f}(*t*) in Eq. (6). The idea is to embed the multiplication and superposition processes in the photocurrent integration by properly adjusting the integration time Δ*t* within *α* instead of scaling each photocurrent with *w*_{f}(*t*). A key task is now to find $\widehat{B}(f)$.

The question is then whether we should simply scale Δ*t* by the corresponding weight in order to blend the weight information into the integration time. The answer is no. As shown in Eq. (2), the weights are calculated by the ST algorithm using the detector’s SNRs, and the SNR is proportional to the integration time of the detector [5,6,9]; if, for instance, Δ*t* is reduced according to some weight factor, the SNR of the integrated photocurrent is reduced with it. The new SNR, if lower than the old value, could result in an error in reconstructing the emissivity of the object. We next provide a solution for $\widehat{B}(f)$; the algorithm for calculating $\widehat{B}(f)$ is given below.

We begin by normalizing the entire set of weights {*w*_{i}} by their minimum absolute value,

$${\widehat{w}}_{i}=\frac{{w}_{i}}{{\mathrm{min}}_{j}\left|{w}_{j}\right|},\quad i=1,\ldots ,k. \qquad (8)$$

Then each Δ*t* is scaled by the absolute normalized weight $\left|{\widehat{w}}_{i}\right|$, denoted by ${b}_{i}=\left|{\widehat{w}}_{i}\right|\Delta t$, where *i* = 1,…, *k*, and *k* is the number of bias-time intervals (or bias slots). This weight normalization guarantees that each *b*_{i} is equal to or greater than Δ*t*, so that the SNR corresponding to *b*_{i} will not be reduced. In addition, *b*_{i} indicates that an important bias has a longer bias-time interval than a weak bias. The total (extended) integration time, *τ*(Δ*t*), is then calculated as

$$\tau (\Delta t)={\displaystyle \sum _{i=1}^{k}{b}_{i}}=\Delta t{\displaystyle \sum _{i=1}^{k}\left|{\widehat{w}}_{i}\right|}, \qquad (9)$$

which is a function of Δ*t*. As a result, a time-varying biasing waveform with adjusted integration times is obtained, as illustrated in Fig. 3 .

As we mentioned earlier, the sign of the weights can be either positive or negative, so two waveforms for integrating the photocurrent, ${I}_{\widehat{B}(f)}(\text{t})$, are obtained: (1) a negative waveform corresponding to the negative weights and (2) a positive waveform corresponding to the positive weights. In order to find $\widehat{I}$, we subtract the integrated photocurrent corresponding to the negative waveform from the integrated photocurrent corresponding to the positive waveform, mimicking the superposition of the probed photocurrents as in Eq. (4).
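The sign bookkeeping above can be sketched as follows; the weights and photocurrents are illustrative:

```python
import numpy as np

# Positive/negative waveform bookkeeping: weights are split by sign, each group
# drives its own integration, and the two integrated photocurrents are subtracted.
w = np.array([2.0, -0.5, 1.5, -1.0])     # ST weights (assumed)
I = np.array([0.8, 1.3, 0.5, 0.9])       # static-bias photocurrents (assumed)

pos = w > 0
I_pos = np.sum(w[pos] * I[pos])          # charge from the "positive" waveform
I_neg = np.sum(-w[~pos] * I[~pos])       # charge from the "negative" waveform (|w|)

I_hat = I_pos - I_neg                    # subtracted photocurrent
```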

The challenge here is that *τ*(Δ*t*) may exceed the given total integration time *α*, and the question is how to adjust *τ*(Δ*t*) so that *τ*(Δ*t*) ≤ *α*. To address this challenge, we repeat the steps above while reducing Δ*t*.

As defined in Eq. (9), *τ*(Δ*t*) is the integration time combined with the weights, and it is assumed to be continuous in Δ*t*. Although *τ*(Δ*t*) may not be monotonic, there is a maximal critical Δ*t*, call it *t**, such that *τ*(*t**) = *α*. Ideally, *t** is the maximum solution to *τ*(Δ*t*) = *α*. As an approximation, however, *t** can be solved for numerically within a specified tolerance level ε of *α* by

$$\left|\tau ({t}^{\ast })-\alpha \right|\le \varepsilon . \qquad (10)$$

The computational procedure to search *t** is described as follows.

- 1) The normalized weight ${\widehat{w}}_{i}$ is obtained using Eq. (8) and each Δ*t* is scaled by $\left|{\widehat{w}}_{i}\right|$, yielding a weighted bias time *b*_{i}.
- 2) With *b*_{i} available from Step 1, *τ*(Δ*t*) is calculated using Eq. (9). If *τ*(Δ*t*) = *α*, the search is complete with *t** = Δ*t* and the corresponding biasing waveform is obtained. Otherwise, go to the next step.
- 3) Set Δ*t* = Δ*t*/2 and $\tilde{t}$ = 2Δ*t*.
- 4) Recalculate ${\widehat{w}}_{i}$ and *b*_{i}.
- 5) Compute *τ*(Δ*t*): if *τ*(Δ*t*) > *α*, go back to Step 3; if *τ*(Δ*t*) satisfies Eq. (10), then *t** = Δ*t*. Otherwise, set Δ*t* = (Δ*t* + $\tilde{t}$)/2 and go back to Step 4.
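A minimal sketch of Steps 1)–5) is given below. For simplicity the weights are held fixed during the search (in the full GST algorithm they are recomputed in Step 4 because the SNRs, and hence Eq. (2), depend on the integration time), and the starting Δ*t* is assumed to overshoot *α*; all numerical values are illustrative:

```python
import numpy as np

def tau(dt, w):
    """Total extended integration time, Eq. (9): sum of b_i = |w_hat_i| * dt."""
    w_hat = w / np.min(np.abs(w))          # normalization of Eq. (8)
    return float(np.sum(np.abs(w_hat)) * dt)

def find_t_star(w, alpha, dt0, eps=1e-9):
    """Search for t* such that |tau(t*) - alpha| <= eps, as in Eq. (10).

    Assumes tau(dt0) > alpha, i.e. the initial slot width overshoots alpha."""
    dt, t_tilde = dt0, dt0
    # Step 3: halve dt until tau no longer exceeds alpha; the previous value
    # t_tilde brackets the solution from above.
    while tau(dt, w) > alpha:
        dt, t_tilde = dt / 2.0, dt
    # Steps 4-5: bisect on the bracket [dt, t_tilde] until Eq. (10) is met.
    while abs(tau(dt, w) - alpha) > eps:
        mid = 0.5 * (dt + t_tilde)
        if tau(mid, w) > alpha:
            t_tilde = mid
        else:
            dt = mid
    return dt

w = np.array([3.0, -0.6, 1.8, -1.2])       # ST weights (illustrative)
alpha = 1.0                                # total acquisition time (illustrative)
t_star = find_t_star(w, alpha, dt0=1.0)
```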

## 4. Simulation results on spectrometry and classification

To demonstrate the ST algorithm, we will show two representative MS sensing examples: (1) spectrometry of a LWIR object and (2) statistical classification of a test object among three LWIR objects based on the spectral matched filter [8]. For sensing example (1), the selected LWIR object exhibits its emissivity in the 8-9 μm range (red curve in Fig. 4). For example (2), we selected *r*_{1}(*λ*;*λ*_{1}) out of the three LWIR filters in Fig. 5 (dotted lines) as our test object. As shown by the dotted line in Fig. 5 (left), the test object is a band-pass LWIR filter transmitting in the 7.5-10.5 μm range. We sense the test object with the DWELL photodetector and then find its identity using our ST algorithm. These examples will be used as a reference for comparison with the generalized algorithm.

For the spectrometry example, we selected a triangular narrowband tuning filter as *r*(*λ*;*λ*_{n}), with 0.5 μm width and *λ*_{n} = 8.8 μm, to sample the emissivity of the LWIR object, *e*(λ), at *λ*_{n}, as illustrated in Fig. 4. Using Eq. (2), we calculated **w**_{n} for the algorithmic tuning filter $\widehat{r}(\lambda ;{\lambda}_{n})$ using a minimal set of four biases, {−3.0, −0.8, 1.0, 2.8 V}, selected by the Minimal-Bias-Set (MBS) algorithm [8]. The MBS algorithm is a bias-selection algorithm based on an exhaustive search, which identifies a minimal set of biases required for the multiple sensing applications of interest; the search process is described in detail in [8]. The algorithmic tuning filter $\widehat{r}(\lambda ;{\lambda}_{n})$ is shown in Fig. 4 (solid black). We also simulated the photocurrents, *I*_{1},…, *I*_{4}, for these four biases using Eq. (1) with actual noise values available from the DWELL’s SNRs, {397.8, 135.9, 155.3, 122.2}. The reconstructed sample $\widehat{e}(\lambda ;{\lambda}_{n})$ was then obtained by forming a linear superposition of **w**_{n} and the photocurrents (*I*_{1},…, *I*_{4}) according to Eq. (4). We also generated the estimated emissivity, *e*(*λ*;*λ*_{n}), resulting from sampling *e*(λ) by *r*(*λ*;*λ*_{n}), which is used as a reference measurement for the ST algorithm. The emissivity $\widehat{e}(\lambda ;{\lambda}_{n})$ reconstructed by the ST algorithm is 0.134, which is within 22% of the benchmark value of *e*(*λ*;*λ*_{n}) = 0.171.

For the classification example, we selected three actual spectral filters, *r*_{1}(*λ*;*λ*_{1}), *r*_{2}(*λ*;*λ*_{2}) and *r*_{3}(*λ*;*λ*_{3}), with centers at *λ*_{1} = 9 μm, *λ*_{2} = 8.5 μm and *λ*_{3} = 10 μm, as shown in Fig. 5 (dotted lines), as objects that need to be classified once each is probed by the DWELL detector using a set of prescribed biases. The classification is based on spectral matched filtering, which employs the three weight vectors (one weight vector per filter) obtained from the ST algorithm to form an inner product with the photocurrent vector, thereby producing three features. Based on the maximum feature, the classifier labels the object of interest. In this example, we selected *r*_{1}(*λ*;*λ*_{1}) as the test object to be classified. The classification process is described as follows.

Using Eq. (2) with the same bias set as before, {−3.0, −0.8, 1.0, 2.8 V}, we obtained the three weight vectors, **w**_{1}, **w**_{2} and **w**_{3}, which respectively yield the approximated spectral filters, $\widehat{r}(\lambda ;{\lambda}_{1})$, $\widehat{r}(\lambda ;{\lambda}_{2})$ and $\widehat{r}(\lambda ;{\lambda}_{3})$, optimally matched to *r*_{1}(*λ*;*λ*_{1}), *r*_{2}(*λ*;*λ*_{2}) and *r*_{3}(*λ*;*λ*_{3}), respectively. The approximated filters $\widehat{r}(\lambda ;{\lambda}_{1})$, $\widehat{r}(\lambda ;{\lambda}_{2})$ and $\widehat{r}(\lambda ;{\lambda}_{3})$ are termed *algorithmic spectral matched filters* and are used to classify the test object, *r*_{1}(*λ*;*λ*_{1}). Transmittances of the three spectral matched filters are shown in Fig. 5 (solid black lines).

We simulated the photocurrent vector, **I**_{class} = [*I*_{1},…, *I*_{4}], with Eq. (1), as if the DWELL photodetector probed the emissivity transmitted through the test object, *r*_{1}(*λ*;*λ*_{1}), using the biases {−3.0, −0.8, 1.0, 2.8 V}. We considered **I**_{class} the test data to classify. For the classification, we labeled the three matched filters, $\widehat{r}(\lambda ;{\lambda}_{1})$, $\widehat{r}(\lambda ;{\lambda}_{2})$ and $\widehat{r}(\lambda ;{\lambda}_{3})$, as Class 1, Class 2 and Class 3, respectively. Based on Eq. (4), **w**_{1}, **w**_{2} and **w**_{3} were linearly combined with **I**_{class}, extracting three synthesized features: ${F}_{\text{1}}={\text{(}{w}_{1}\text{)}}^{T}{I}_{\text{class}}=\text{0}\text{.519}$, ${F}_{\text{2}}={\text{(}{w}_{2}\text{)}}^{T}{I}_{\text{class}}=\text{0}\text{.428}$ and ${F}_{\text{3}}={\text{(}{w}_{3}\text{)}}^{T}{I}_{\text{class}}=\text{0}\text{.457}$. Our classifier [8] identifies the class of an object (out of the three predefined choices) based on the maximum (strongest) feature. In this case, the classifier correctly assigned the test object *r*_{1}(*λ*;*λ*_{1}) to Class 1 since *F*_{1} was the largest value.
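The matched-filter classification rule can be sketched as follows; the weight vectors and photocurrents are illustrative stand-ins for the quantities computed above:

```python
import numpy as np

# Matched-filter classification: one weight vector per class; each feature is the
# inner product of w_j with the measured photocurrent vector; the argmax wins.
W = np.array([[ 1.2, -0.4,  0.8, -0.2],    # w_1 (Class 1) -- assumed
              [ 0.3,  0.9, -0.5,  0.6],    # w_2 (Class 2) -- assumed
              [-0.2,  0.5,  0.7, -0.9]])   # w_3 (Class 3) -- assumed
I_class = np.array([0.6, 0.2, 0.4, 0.1])   # simulated photocurrents -- assumed

F = W @ I_class                            # features F_1, F_2, F_3
label = int(np.argmax(F)) + 1              # classes are numbered 1..3
```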

For validation, we applied the GST algorithm to the same MS sensing problems as demonstrated for the ST algorithm above. For the spectrometry problem, the continuous time-varying bias waveform, which consists of negative and positive waveforms, was obtained by the GST algorithm as shown in Fig. 6 . According to these two waveforms, we simulated and integrated two photocurrents. The progression curves for the integrating photocurrents corresponding to the negative and positive waveforms are shown in Fig. 7 . The integrated photocurrent for the negative waveform, ${\widehat{I}}_{\text{neg}}$, is 0.472 and the integrated photocurrent for the positive waveform, ${\widehat{I}}_{\text{pos}}$, is 0.619. To reconstruct the emissivity of the object at 8.8 μm, we simply subtracted ${\widehat{I}}_{\text{neg}}$ from ${\widehat{I}}_{\text{pos}}$, yielding $\widehat{I}={\widehat{I}}_{\text{pos}}-{\widehat{I}}_{\text{neg}}=$ 0.147. This reconstructed emissivity (0.147) by the GST algorithm is closer to the ground truth (0.171, shown in Table 1 ) than the value (0.134, shown in Table 1) obtained by the ST algorithm. Thus, the GST algorithm outperforms the original ST algorithm in extracting the narrowband spectral feature. Specifically, for this problem the GST algorithm reconstructed the emissivity of the object with a 14% error, whereas the ST algorithm achieved a 21% error for the same reconstruction. Since the GST algorithm inherently selects more relevant biases (more relevant spectral information) by virtue of employing a continuous time-varying waveform (Fig. 6), compared to the original ST algorithm (which uses only four static biases), the GST algorithm can generally yield improved tuning results.

For the classification problem, three bias waveforms were computed by the GST algorithm as shown in Fig. 8 . Each bias waveform includes negative (Fig. 8 (red)) and positive (Fig. 8 (blue)) waveforms that were used to successfully design each algorithmic matched filter as shown in Fig. 9 . Three algorithmic matched filters were then labeled with the appropriate class number (Class 1, Class 2 and Class 3).

Based on the bias waveforms shown in Fig. 8, the curves showing the integration of photocurrents were obtained. Each curve represents the process of continuously probing the test filter object, *r*_{1}(*λ*;*λ*_{1}) with the DWELL photodetector controlled by the bias waveforms in Fig. 8.

From the curves shown in Fig. 10(a), the integrated photocurrents for the negative and positive bias waveforms, ${\widehat{I}}_{\text{neg,}\text{class}1}$ and ${\widehat{I}}_{\text{pos,}\text{class}1}$, are 0.99 and 1.515 for Class 1, respectively. For Class 2, the integrated photocurrents, ${\widehat{I}}_{\text{neg,}\text{class}2}$ and ${\widehat{I}}_{\text{pos,}\text{class}2}$, are 3.164 and 3.563, obtained from Fig. 10(b). For Class 3, the integrated photocurrents, ${\widehat{I}}_{\text{neg,}\text{class}3}$ and ${\widehat{I}}_{\text{pos,}\text{class}3}$, are 2.064 and 2.504, obtained from Fig. 10(c). To perform the feature extraction for each class, we subtracted the integrated photocurrent corresponding to the negative waveform from the one corresponding to the positive waveform, in the same way as for the spectrometry example, yielding three features: *F*_{1} = ${\widehat{I}}_{\text{pos,}\text{class}1}-{\widehat{I}}_{\text{neg,}\text{class}1}$ = 0.525, *F*_{2} = ${\widehat{I}}_{\text{pos,}\text{class}2}-{\widehat{I}}_{\text{neg,}\text{class}2}$ = 0.399, and *F*_{3} = ${\widehat{I}}_{\text{pos,}\text{class}3}-{\widehat{I}}_{\text{neg,}\text{class}3}$ = 0.44. According to the classification rule, the classifier outputs Class 1 since the feature value *F*_{1} is the largest among the three features. Thus, the classifier correctly identified the test filter object, *r*_{1}(*λ*;*λ*_{1}), by assigning it to Class 1. The plot of the feature vector **F** is shown in Fig. 11 (blue). By comparison with the reference, the classification results shown in Fig. 11 demonstrate that the classifiers of both algorithms correctly assigned the test object to Class 1 based on the extracted feature vectors.

## Performance of the GST algorithm under nonuniformity noise

In this subsection, we evaluate the performance of the GST algorithm under variation in the DWELL’s spectral response. We term this variation the detector-to-detector nonuniformity in the spectral response, or simply nonuniformity noise. To model the nonuniformity noise in the GST algorithm, we introduced a stochastic multiplicative factor *ρ*, centered around unity, applied to the DWELL’s spectral response at each bias in the matrix **A** of Eq. (2), with *ρ* = (1 + *β*) and −1 < *β* < 1.

The GST reconstruction of the emissivity of the LWIR object at 8.8 μm in Table 1 was repeated for various levels of nonuniformity noise. The noise level was controlled by varying the factor *ρ*: the noise is amplified as *ρ* moves away from 1. The results in Table 2 show that at high noise levels (*ρ* ≤ 0.6), the reconstruction error relative to the reference exceeded 60%. For moderate noise levels (0.8 ≤ *ρ* ≤ 1.2), a good reconstruction was observed, with an error of 25% or less.
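A minimal sketch of the multiplicative nonuniformity model, with synthetic responses and an assumed spread for *β*:

```python
import numpy as np

# Multiplicative nonuniformity: scale each bias column of the response matrix A
# by rho = 1 + beta, with beta drawn uniformly in an assumed spread around zero.
rng = np.random.default_rng(7)
R = np.abs(rng.normal(1.0, 0.3, size=(200, 4)))   # synthetic responses (assumed)

beta_spread = 0.2                                  # moderate level: rho in [0.8, 1.2]
beta = rng.uniform(-beta_spread, beta_spread, size=R.shape[1])
rho = 1.0 + beta                                   # one factor per bias
R_noisy = R * rho[None, :]                         # perturbed matrix A used in Eq. (2)
```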

## 5. Design considerations for practical applications and readout integrated circuit (ROIC)

To implement the presented algorithm in a practical imaging system, a new custom-designed ROIC with the following specifications must be developed:

- a) Ability to apply both positive and negative biases
- b) Ability to integrate both polarities of photocurrent
- c) Ability to perform the analysis in real time

The double-polarity biases can be applied by using a capacitive trans-impedance amplifier (CTIA) [16], in which an operational amplifier with capacitive feedback is used for integration. Next, the polarity of integration can be controlled by switching the orientation of the integration capacitor in the CTIA circuit, as shown in Fig. 12 . Finally, to perform the analysis in real time, the algorithm must be implemented inside the ROIC. This can be done by using analog circuits to implement the signal-processing functions. For example, at the end of the integration time, the integrated value related to each feature will be stored in a sample-and-hold (S&H) circuit within each unit cell. Then the maximum value will be identified and the resulting feature will be sent out in real time during the readout process.

## 6. Conclusions

In this paper we have generalized the original algorithmic spectral tuning technique [5] to allow a continuous, time-varying bias waveform, which enables the detector to extract the desired spectral features for a specific multispectral sensing application within a limited integration time simply by operating the DWELL detector under an appropriately designed time-varying bias. The significance of the generalization is that it entirely absorbs the data-processing steps of the conventional spectral tuning algorithm (the multiplications and superpositions associated with the weights) in the photocurrent-integration process, making the algorithm readily amenable to hardware implementation. The generalized spectral-tuning algorithm directly extracts spectral features by integrating the photocurrent while appropriately biasing the DWELL detector with a designed waveform, instead of forming a weighted superposition of photocurrents resulting from static biasing. The elimination of the data-processing steps can greatly simplify the design of a multispectral sensing system, since the system will no longer require an on-chip processing unit. As a result, the cost and complexity of the system would be significantly reduced.

We successfully validated the generalized algorithm by means of simulation of two multispectral sensing problems: spectrometry of a representative spectral filter and the classification of three LWIR filters (used as objects to be classified) based on spectral matched filtering. The results were comparable to those obtained by the conventional spectral tuning algorithm. The next step is to implement the generalized spectral-tuning algorithm in hardware by means of a reconfigurable ROIC-based FPA and demonstrate multispectral spectrometry and object classification in real time.

## Acknowledgments

This work was supported in part by the National Science Foundation (NSF) Smart Lighting Engineering Research Center (No. EEC-0812056). The authors also acknowledge the support of the National Science Foundation under grant ECCS-0925757 and of the AFOSR Optoelectronic Research Center grant.

## References and links

**1. **S. Krishna, “Quantum dots-in-a-well infrared photodetectors,” J. Phys. D Appl. Phys. **38**(13), 2142–2150 (2005). [CrossRef]

**2. **S. Krishna, S. Raghavan, G. von Winckel, A. Stintz, G. Ariyawansa, S. G. Matsik, and A. G. U. Perera, “Three-color (λp1~3.8μm, λp2~8.5μm, λp3~23.2μm) InAs/InGaAs quantum-dots-in-a-well detector,” Appl. Phys. Lett. **83**(14), 2745–2747 (2003). [CrossRef]

**3. **D. A. B. Miller, D. S. Chemla, T. C. Damen, A. C. Gossard, W. Wiegmann, T. H. Wood, and C. A. Burrus, “Band-edge electroabsorption in quantum well structures: the quantum-confined stark effect,” Phys. Rev. Lett. **53**(22), 2173–2176 (1984). [CrossRef]

**4. **Ü. Sakoğlu, J. S. Tyo, M. M. Hayat, S. Raghavan, and S. Krishna, “Spectrally adaptive infrared photodetectors using bias-tunable quantum dots,” J. Opt. Soc. Am. B **21**(1), 7–17 (2004). [CrossRef]

**5. **Ü. Sakoğlu, M. M. Hayat, J. S. Tyo, P. Dowd, S. Annamalai, K. T. Posani, and S. Krishna, “Statistical adaptive sensing by detectors with spectrally overlapping bands,” Appl. Opt. **45**(28), 7224–7234 (2006). [CrossRef] [PubMed]

**6. **Ü. Sakoğlu, “Signal-processing strategies for spectral tuning and chromatic nonuniformity correction for quantum-dot IR sensors,” Ph.D. Dissertation, Univ. New Mexico (2006).

**7. **W.-Y. Jang, M. M. Hayat, J. S. Tyo, R. S. Attaluri, T. E. Vandervelde, Y. D. Sharma, R. Shenoi, A. Stintz, E. R. Cantwell, S. C. Bender, S. J. Lee, S. K. Noh, and S. Krishna, “Demonstration of bias controlled algorithmic tuning of quantum dots in a well (DWELL) MidIR detectors,” IEEE J. Quantum Electron. **45**(6), 5537–5540 (2009). [CrossRef]

**8. **W.-Y. Jang, M. M. Hayat, S. E. Godoy, S. C. Bender, P. Zarkesh-Ha, and S. Krishna, “Data compressive paradigm for multispectral sensing using tunable DWELL mid-infrared detectors,” Opt. Express **19**(20), 19454–19472 (2011). [CrossRef] [PubMed]

**9. **P. Bhattacharya, *Semiconductor Optoelectronic Devices* (Prentice Hall, 1996).

**10. **T. Yasuda, T. Hamamoto, and K. Aizawa, “Adaptive-integration-time image sensor with real-time reconstruction function,” IEEE Trans. Electron. Dev. **50**(1), 111–120 (2003). [CrossRef]

**11. **T. Ogi, T. Yasuda, T. Hamamoto, and K. Aizawa, “Smart image sensor for wide dynamic range by variable integration time,” IEEE Conf. on Multisensor Fusion and Integration for Intelligent Systems, 179–184 (2003).

**12. **T. Hamamoto and K. Aizawa, “A computational image sensor with adaptive pixel-based integration time,” IEEE J. Solid-state Circuits **36**(4), 580–585 (2001). [CrossRef]

**13. **M. G. Brown, J. Baker, C. Colonero, J. Costa, T. Gardner, M. Kelly, K. Schultz, B. Tyrrell, and J. Wey, “Digital-pixel focal plane array development,” Proc. SPIE **7608**, 76082H, 76082H-10 (2010). [CrossRef]

**14. **P. Zarkesh-Ha, W.-Y. Jang, P. Nguyen, A. Khoshakhlagh, and J. Xu, “A reconfigurable ROIC for integrated infrared spectral sensing,” the 23rd Annual Meeting of the IEEE Photonics Society 714–715 (2010).

**15. **B. Paskaleva, M. M. Hayat, Z. Wang, J. S. Tyo, and S. Krishna, “Canonical correlation feature selection for sensors with overlapping bands: theory and application,” IEEE Trans. Geosci. Remote Sens. **46**(10), 3346–3358 (2008). [CrossRef]

**16. **L. J. Kozlowski, “Low noise capacitive transimpedance amplifier performance vs. alternative IR detector interface schemes in submicron CMOS,” Proc. SPIE Infrared Readout Electronics III, 2745, 2–11 (1996).