Abstract

The synthesis of a new category of spatial filters that produces sharp output correlation peaks with controlled peak values is considered. The sharp nature of the correlation peak is the major feature emphasized, since it facilitates target detection. Since these filters minimize the average correlation plane energy as the first step in filter synthesis, we refer to them as minimum average correlation energy filters. Experimental laboratory results from optical implementation of the filters are also presented and discussed.

© 1987 Optical Society of America

I. Introduction

The technique of using matched spatial filters (MSFs)[1] for optical pattern recognition has been well investigated, and several methods have been proposed to use it for recognition of 2-D images in the presence of noise and geometric distortions. Correlation is one of the most powerful techniques for locating multiple objects in parallel, and the MSF is optimal for recognition of targets in the presence of white noise. However, the MSF has two major limitations: (1) the output correlation peak degrades rapidly with geometric image distortions; and (2) the MSF (matched to one given image) cannot be used for multiclass pattern recognition.

The concept of MSFs has been greatly extended in recent years by several types of generalized filter. These methods can be broadly classified into two categories. The first category concerns in-plane 2-D scaling and rotation distortions. Such methods include the use of space-variant transforms[2] and circular harmonic functions (CHFs).[3] In these techniques, the intensity at the origin of the correlation function cannot generally be specified during filter synthesis. The second category of filters uses training images that are sufficiently descriptive and representative of the expected distortions. These filters can be viewed as generalizations of MSFs for the identification of multiple targets in the presence of virtually any type of distortion (i.e., 3-D distortions). The intensity at the center of the cross-correlation function (defined as the filter output) can be specified for each training image during synthesis, and several objects can be handled by one filter by including all object classes in the training set.

Several versions of this second class of filter exist.[4]–[8] The best known is the synthetic discriminant function (SDF)[5] and its variations.[6]–[8] We refer to all such combination filters as correlation filters since they are designed for implementation in optical correlators. We also use the terms center and origin interchangeably to refer to the point of interest (the true peak) in the correlation plane (i.e., the filter output). This loses no generality since correlation is a shift-invariant operation. When only one image is present, the conventional SDF reduces to the MSF of that image. The use of more training set data is intended (and required) to reduce sensitivity to image distortions. An SDF with minimum variance[8] has been derived and is one optimal filter; however, it controls SNR at the correlation peak only. None of the prior filters offers optimum detection. To aid in detecting correlation peaks, correlation filters using shifted versions of each training set image have been suggested to control the shape of true correlation peaks.[6] Peak-to-sidelobe ratio (PSR) filters have also been used; however, these maximize PSR only in the vicinity of the peak, not over the full correlation plane.[6] PSR filters do not allow control of the correlation peak, and both PSR and correlation filters typically require 5 times more training set images (for four shifts of each training set image). The motivation for these last two filters is to suppress extraneous correlation peaks (away from the central peak) that make detection difficult.

For the MSF, when the correct image is present at the input, the output of the correlator is the autocorrelation function. Thus locating an image with its MSF is simple, since the peak of the autocorrelation function is easy to identify. However, the linear combination correlation filters lack a sharp correlation peak, since the input image cross-correlates with all images in the training set. This often produces sidelobes of high intensity and degrades correlation plane PSR. The proposed filter in this paper uses a new technique for producing sharp correlation peaks and allowing easy detection in the full correlation plane as well as control of the correlation peak value. As before, training set images are used to reduce the filter's sensitivity to 3-D object distortions.

Section II contains a description of the mathematical notation and terminology used in this paper. The filter design problem is formulated in Sec. III, and its solution is discussed in Sec. IV. Several interesting properties of the filter are then discussed in Sec. V. A gradient descent-type procedure is proposed in Sec. VI for obtaining relatively constant correlation plane energies for all training images. Section VII summarizes the results of initial computer simulations and quantitative data to evaluate filter performance. Section VIII discusses the results obtained on an optically implemented laboratory version of the filter.

II. Notation

The ith training image is described as a 1-D discrete sequence (obtained by lexicographically ordering the rows of the image) denoted by xi(n). Its discrete Fourier transform (DFT) is denoted by Xi(k). In this discussion, we describe the discrete image sequence as a column vector xi of dimensionality d equal to the number of pixels in the image xi(n), i.e.,

$$\mathbf{x}_i = [x_i(1), x_i(2), \ldots, x_i(d)]^T. \tag{1}$$
All DFTs are also of length d. We denote by the vector Xi the discrete frequency domain sequence Xi(k). We define a matrix
$$X = [\mathbf{X}_1, \mathbf{X}_2, \ldots, \mathbf{X}_N]$$
with column vectors $\mathbf{X}_i$. We denote matrices by lightface characters and vectors by boldface characters. Uppercase symbols refer to frequency plane terms, while lowercase symbols represent quantities in the space domain. The vector h represents the filter h(n) in the space domain, and the vector H its Fourier transform H(k) in the frequency domain. It is clear that h can be obtained from H by inverse DFT and vice versa. We denote the correlation function of the ith image sequence xi(n) with the filter sequence h(n) by gi(n), i.e.,
$$g_i(n) = x_i(n) \otimes h(n). \tag{2}$$
We denote the DFT of the correlation function by Gi(k). The energy of the ith correlation plane is
$$E_i = \sum_{n=1}^{d} |g_i(n)|^2 = \frac{1}{d}\sum_{k=1}^{d} |G_i(k)|^2 = \frac{1}{d}\sum_{k=1}^{d} |H(k)|^2\,|X_i(k)|^2. \tag{3}$$
Equation (3) is a direct consequence of Parseval's theorem. Using the vector form of the image sequence, we can also write Eq. (3) as
$$E_i = \mathbf{H}^{+} D_i \mathbf{H}, \tag{4}$$
where the superscript + denotes the conjugate transpose of a complex vector, and Di is a diagonal matrix of size d × d whose diagonal elements are the squared magnitudes of the corresponding elements of $\mathbf{X}_i$, i.e.,
$$D_i(k,k) = |X_i(k)|^2. \tag{5}$$
Note that the diagonal elements of Di describe the power spectrum of xi(n).
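Equation (3) can be checked numerically. The following sketch (a minimal NumPy example, not part of the original work, with a random vector standing in for an image and circular DFT-based correlation assumed) computes the correlation plane energy in both the space and frequency domains:

```python
import numpy as np

# A random 1-D sequence stands in for a lexicographically ordered image x_i(n)
rng = np.random.default_rng(0)
d = 16
x = rng.random(d)                 # "image" x_i(n), d pixels
h = rng.random(d)                 # a filter sequence h(n)

# Circular correlation g_i(n) via the DFT: G_i(k) = X_i(k) H*(k)
X = np.fft.fft(x)
H = np.fft.fft(h)
g = np.fft.ifft(X * np.conj(H))

# Space domain energy: E_i = sum_n |g_i(n)|^2
E_space = np.sum(np.abs(g) ** 2)

# Frequency domain energy per Eq. (3): (1/d) sum_k |H(k)|^2 |X_i(k)|^2
E_freq = (1.0 / d) * np.sum(np.abs(H) ** 2 * np.abs(X) ** 2)

print(np.isclose(E_space, E_freq))   # → True (Parseval's theorem)
```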

III. Problem Definition

We now state the pattern recognition problem to be solved. We wish to design a correlation filter that ensures sharp correlation peaks while allowing constraints on the correlation peak values and retaining shift invariance. We also seek to improve tolerance to distortion using a reduced number of training images. Our main concern in this paper is the production of sharp, easily detected correlation peaks (the distortion tolerance of correlation filters has been addressed elsewhere[6]). To achieve good detection, it is necessary to reduce correlation function levels at all points except the origin of the correlation plane, where the imposed constraint on the peak value must also be met. Specifically, the correlation function must take a user-specified value at the origin but is free to vary elsewhere. This is equivalent to minimizing the energy of the correlation function while satisfying the intensity constraint at the origin.

In vector notation, the correlation peak amplitude constraint is

$$g_i(0) = \mathbf{X}_i^{+}\mathbf{H} = u_i \tag{6a}$$
for all i = 1,2,… ,N training set images, where ui is the user specified value of the ith correlation function at the origin and is also the ith element of the constraint vector u. In Eq. (6a), gi(0) is the value of the output correlation at the peak (origin). This filter must also minimize the correlation plane energy
$$E_i = \mathbf{H}^{+} D_i \mathbf{H} \tag{6b}$$
for all i. We earlier defined the matrix X whose columns are the vectors $\mathbf{X}_i$. Thus in matrix–vector notation, the problem is to find the frequency domain vector $\mathbf{H}$ that minimizes $\mathbf{H}^{+}D_i\mathbf{H}$ for all i, while satisfying the peak constraints in Eq. (6a), which is written for all images as
$$X^{+}\mathbf{H} = \mathbf{u}. \tag{7}$$

An exact solution to this problem does not exist, because the simultaneous constrained minimization of all Ei (i = 1,2, … ,N) is in general not possible. We therefore minimize the average value of Ei (the average correlation energy) in Eq. (6b) while meeting the linear constraints in Eq. (7). Hence we refer to the proposed filter as a minimum average correlation energy (MACE) filter.

We make several observations at this point regarding the differences between the MACE filter and the peak-to-sidelobe ratio correlation filter.[6] PSR filters optimize the correlation plane PSR (but only in a small region about the peak). The PSR filter requires shifted images (and hence more training set images) for synthesis and does not allow control of the peak value as in Eq. (7). The MACE filter is attractive and unique since it does not require shifted images during filter synthesis and since it allows control of correlation peak intensities. Moreover, the MACE filter is synthesized in the frequency domain, whereas the PSR and all other correlation filters are synthesized in the space domain. The PSR synthesis algorithm maximizes the ratio of average true and false class correlation peak intensities (with shifted images serving as false class images), a quadratic ratio maximized by solving a generalized eigenvector problem. Our MACE filter instead minimizes the average correlation plane energy for the training images, using the method of Lagrange multipliers (as will be shown in Sec. IV) to minimize a quadratic term subject to linear constraints. Thus MACE filters differ significantly in concept and mathematical detail from PSR filters.

IV. Solution

A. MACE Filter Solution

The average correlation plane energy is

$$E_{\mathrm{av}} = \frac{1}{N}\sum_{i=1}^{N}E_i = \frac{1}{N}\sum_{i=1}^{N}\mathbf{H}^{+}D_i\mathbf{H} = \frac{1}{N}\,\mathbf{H}^{+}\Bigl(\sum_{i=1}^{N}D_i\Bigr)\mathbf{H}. \tag{8}$$
We define D as
$$D = \sum_{i=1}^{N}\alpha_i D_i, \tag{9}$$
where αi are constants. If all αi = 1, we may rewrite Eq. (8) as
$$E_{\mathrm{av}} = \frac{1}{N}\,\mathbf{H}^{+}D\,\mathbf{H}, \qquad \alpha_i = 1,\quad i = 1,2,\ldots,N. \tag{10}$$
Thus, for all αi = 1, each correlation plane is equally weighted, the diagonal matrix D is the sum of the diagonal matrices Di, and the average energy of the correlation planes is given by Eq. (10). Since the minimum is not affected by a scale factor, we need only minimize $\mathbf{H}^{+}D\mathbf{H}$ subject to the linear constraints $X^{+}\mathbf{H} = \mathbf{u}$. In Sec. VI, we discuss selection of unequal αi to improve performance.

The solution to this problem may be found using the method of Lagrange multipliers; this approach is possible because we solve for the filter in the frequency domain. The resulting vector is

$$\mathbf{H} = D^{-1}X\,(X^{+}D^{-1}X)^{-1}\mathbf{u}. \tag{11}$$
The proof of this result is given in the Appendix.
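To make Eq. (11) concrete, the following NumPy sketch (an illustration added here, not part of the original experiments) synthesizes a MACE filter from random stand-in spectra rather than the tank/APC imagery of Sec. VII; the dimensions and peak values are arbitrary choices. It verifies that the constraints of Eq. (7) are met:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 64, 3                                  # d pixels, N training images
X = np.fft.fft(rng.random((d, N)), axis=0)    # columns are training DFTs X_i
u = np.array([1.0, 1.0, 0.296])               # user-specified peak values

# D = sum_i D_i (all alpha_i = 1): diagonal holds the summed power spectra
Dvec = np.sum(np.abs(X) ** 2, axis=1)
Dinv_X = X / Dvec[:, None]                    # D^{-1} X, exploiting diagonality

# Eq. (11): H = D^{-1} X (X^+ D^{-1} X)^{-1} u
H = Dinv_X @ np.linalg.inv(X.conj().T @ Dinv_X) @ u

print(np.allclose(X.conj().T @ H, u))         # constraints X^+ H = u hold
```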

B. General Solution

Assume that the d × d matrix A is nonsingular. A general solution vector H given by

$$\mathbf{H} = A\,X\,(X^{+}A\,X)^{-1}\mathbf{u} \tag{12}$$
satisfies the linear constraint
$$X^{+}\mathbf{H} = \mathbf{u}.$$
The properties of the vector H are determined by the matrix A. Since the choices for A are infinite, the possibilities for H are also limitless. In general, all vectors H as defined in Eqs. (12) minimize the quadratic term $\mathbf{H}^{+}A^{-1}\mathbf{H}$ subject to the linear constraint $X^{+}\mathbf{H} = \mathbf{u}$. This result is well known and used extensively in many areas of research. In this paper, we limit our attention to the relationship between the SDFs and the general family of solution vectors H. In fact, the general solution form in Eqs. (12) unifies several existing types of SDF and provides a common ground for comparison, as we now discuss.
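The role of A can be illustrated numerically: any nonsingular A yields a filter meeting the constraints, while different choices of A give different filters. The sketch below (random complex data standing in for training spectra; an added example, not from the original paper) compares A = I, the projection SDF, with an arbitrary diagonal A:

```python
import numpy as np

rng = np.random.default_rng(2)
d, N = 32, 2
X = rng.random((d, N)) + 1j * rng.random((d, N))   # stand-in training spectra
u = np.array([1.0, 0.5])

def general_filter(A):
    # Eqs. (12): H = A X (X^+ A X)^{-1} u
    AX = A @ X
    return AX @ np.linalg.inv(X.conj().T @ AX) @ u

H_sdf = general_filter(np.eye(d))                        # A = I: projection SDF
H_other = general_filter(np.diag(rng.random(d) + 0.1))   # another nonsingular A

# Both satisfy X^+ H = u, although the filters themselves differ
print(np.allclose(X.conj().T @ H_sdf, u),
      np.allclose(X.conj().T @ H_other, u))
```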

Kallman[7] has suggested a minimax formulation of the problem to maximize the correlation peak PSR. In this approach, the filter is assumed to be a linear combination of the training images. Moreover, the constraint vector u is allowed to be complex to obtain a complex image plane correlation filter. Computing the solution to the minimax problem (as suggested by Kallman) requires enormous amounts of computer time, and an exhaustive search must be carried out to optimize the proposed criterion. Our method does not restrict the solution to be a linear combination of the training images and requires far less CPU time. Although Kallman's filter is ideal, the performance of the MACE filter (as shown in Sec. VII) exceeds that of most other existing correlation filters. As stated before, the MACE filter can be synthesized in a relatively short time with fewer training images and is hence preferable.

If A is the identity matrix (A = I), the filter vector reduces to the conventional SDF-1 or projection SDF and is given by $\mathbf{H} = X(X^{+}X)^{-1}\mathbf{u}$. Recall that all terms refer to quantities in the frequency domain. Therefore, this expression represents SDF-1 (or projection SDF) filters in the frequency domain. Equivalent formulations in the space domain are also possible. Thus, when A is the identity matrix, we have the more familiar space domain expression $\mathbf{h} = x(x^{T}x)^{-1}\mathbf{u}$. Note that in this form x contains the training images, not their Fourier transforms. The vector u must be scaled by the constant d (the dimensionality of the training vectors) to obtain the same projection outputs as in the case of the frequency domain method.

A second example is the minimum variance synthetic discriminant function (MVSDF) proposed by Kumar.[8] Assume that the images to be identified are corrupted by additive zero mean noise with a covariance matrix C. It was shown that when $A = C^{-1}$, the resulting filter maximizes the output SNR at the correlation peak by minimizing the variance of the filter output peak value.

We have provided a third choice for A in this paper. We have shown (in the frequency domain) that when A is diagonal and its nonzero elements constitute a sequence which is the reciprocal of the average power spectrum of the training data, the resulting filter H minimizes the average cross-correlation energy of the training data and the filter. An equivalent formulation in the space domain is possible where the matrix A would be Toeplitz. The problem was formulated in the frequency domain for the sake of analytical simplicity.

V. Properties of the MACE Filter

In this section, we discuss three noteworthy properties of the proposed filter. The structure of the MACE filter as a cascade of a whitening filter and a projection SDF is examined in Sec. V.A. Sections V.B and V.C discuss special aspects of the filter's performance, proving that the correlation energies obtained with the MACE filter cannot be further reduced and that in the extreme single-image case a delta function is obtained in the output correlation plane.

A. Structure of the Optimal MACE filter

In this section, we show that the MACE filter can be interpreted as the cascade of two stages. The first stage has a transfer function related directly to the average power spectrum of the training data, and the second stage is a simple projection SDF based on the training images filtered by the first stage. Recall that

$$\mathbf{H} = D^{-1}X\,(X^{+}D^{-1}X)^{-1}\mathbf{u}, \tag{11}$$
where D is diagonal. We assume that none of the diagonal terms of D is zero. Therefore, $D^{-1}$ is diagonal. Note that the diagonal elements of D are equal to the samples of the average power spectrum of the training images. Hence the diagonal elements of $D^{-1}$ are the reciprocals of the corresponding elements of the average power spectrum.

Let $P = D^{-0.5}$, i.e., P is a diagonal matrix whose diagonal elements are the reciprocal square roots of the diagonal elements of D. Then

$$\mathbf{H} = P\,(PX)\,(X^{+}PPX)^{-1}\mathbf{u}. \tag{13}$$
Let $\bar{X} = PX$. This allows H to be rewritten as
$$\mathbf{H} = P\,\bar{X}\,(\bar{X}^{+}\bar{X})^{-1}\mathbf{u}. \tag{14}$$
We let $\bar{\mathbf{H}} = \bar{X}(\bar{X}^{+}\bar{X})^{-1}\mathbf{u}$ and write H as
$$\mathbf{H} = P\,\bar{\mathbf{H}}. \tag{15}$$
The vector $\bar{\mathbf{H}}$ is an ordinary SDF based on the transformed data $\bar{X}$. Thus in the frequency domain $\mathbf{H}$ is seen to be the cascade of the matrix P (related to the power spectrum of the training data) and the projection SDF $\bar{\mathbf{H}}$ (based on the transformed data $\bar{X}$). Equation (15) can be described by the block diagram in Fig. 1. The input FT data $\mathbf{X}_i$ are first filtered by P (which may be viewed as a spectrum whitening filter) and then by $\bar{\mathbf{H}}$ (the projection SDF based on the filtered data) to obtain the final output ui. The above discussion is important for the following two reasons.

  1. The MACE filter is the same as the conventional SDF operating on preprocessed (filtered) data, where the preprocessor forces the average (over all training images) power spectrum of the training images to become white.
  2. The MACE filter is also optimal for target recognition in the presence of noise for which P is the whitening filter. This is a direct consequence of the earlier results[9] which show that the optimal filter for a particular type of input noise is a cascade of the whitening filter and the conventional SDF based on the transformed data.
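The cascade of Eq. (15) can be verified directly: building a projection SDF on the whitened data and then applying P reproduces the MACE filter of Eq. (11). The following NumPy check (an added sketch with random spectra standing in for training data) makes this explicit:

```python
import numpy as np

rng = np.random.default_rng(3)
d, N = 48, 3
X = np.fft.fft(rng.random((d, N)), axis=0)    # stand-in training spectra
u = np.ones(N)

Dvec = np.sum(np.abs(X) ** 2, axis=1)         # diagonal of D (alpha_i = 1)

# Direct MACE filter, Eq. (11)
DX = X / Dvec[:, None]
H_mace = DX @ np.linalg.inv(X.conj().T @ DX) @ u

# Cascade form, Eq. (15): whiten with P = D^{-0.5}, then projection SDF
P = 1.0 / np.sqrt(Dvec)                       # diagonal of P
Xbar = P[:, None] * X                         # whitened data
Hbar = Xbar @ np.linalg.inv(Xbar.conj().T @ Xbar) @ u
H_cascade = P * Hbar                          # H = P Hbar

print(np.allclose(H_mace, H_cascade))         # → True
```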

B. Preprocessing Invariance

In this section we prove that no linear preprocessing of the training data can alter or improve on the performance of the MACE filter. In other words, we show that, although the filter structure changes if the training data are linearly preprocessed, the correlation energies Ei do not change. This means that high pass, low pass, or any bandpass filtering of the data is of no consequence. This statement is significant because intuitively one may feel that high pass filtered data should yield lower correlation energies. If such filtering is useful, it is included automatically in the filter.

Assume that we prefilter the training data by a linear shift-invariant filter whose DFT is given by F(k). Let $\bar{X}_i(k)$ be the DFT of the filtered data. Then, in the frequency domain, we have

$$\bar{X}_i(k) = F(k)\,X_i(k). \tag{16}$$

Define the diagonal matrix S so that $S^{-0.5}(k,k) = F(k)$. In matrix–vector notation, we then have

$$\bar{X} = S^{-0.5}X. \tag{17}$$
Therefore, our new MACE filter structure is
$$\bar{\mathbf{H}} = \bar{D}^{-1}\bar{X}\,(\bar{X}^{+}\bar{D}^{-1}\bar{X})^{-1}\mathbf{u}. \tag{18}$$
This is the new MACE filter based on the preprocessed training data $\bar{X}$. Consider the matrix
$$\bar{D} = \sum_{i}\alpha_i\bar{D}_i$$
in Eq. (18) and its terms $\bar{D}_i$. Using Eq. (17),
$$|\bar{X}_i(k)|^2 = |S^{-0.5}(k,k)|^2\,|X_i(k)|^2. \tag{19}$$
We denote the product of $S^{-0.5}$ and its conjugate transpose by $S^{-1}$ so that
$$S^{-1} = S^{-0.5}(S^{-0.5})^{+}. \tag{20}$$
This yields [by writing Eq. (19) in matrix form]
$$\bar{D}_i = S^{-1}D_i = S^{-0.5}D_i(S^{-0.5})^{+},\qquad \bar{D} = S^{-1}\sum_{i=1}^{N}\alpha_i D_i = S^{-1}D = DS^{-1}, \tag{21}$$
from which
$$\bar{D}^{-1} = D^{-1}S = S^{0.5}D^{-1}(S^{0.5})^{+} = (S^{0.5})^{+}D^{-1}S^{0.5}. \tag{22}$$
Substituting $\bar{D}^{-1}$ from Eq. (22) and $\bar{X}$ from Eq. (17) into Eq. (18), we get
$$\bar{\mathbf{H}} = D^{-1}S\,S^{-0.5}X\,[X^{+}(S^{-0.5})^{+}(S^{0.5})^{+}D^{-1}S^{0.5}S^{-0.5}X]^{-1}\mathbf{u} = (S^{0.5})^{+}D^{-1}X\,(X^{+}D^{-1}X)^{-1}\mathbf{u} = (S^{0.5})^{+}\mathbf{H}. \tag{23}$$
As a result, the ith correlation energy $\bar{E}_i$ of the preprocessed training data is given by
$$\bar{E}_i = \bar{\mathbf{H}}^{+}\bar{D}_i\bar{\mathbf{H}} = [\mathbf{H}^{+}S^{0.5}][S^{-0.5}D_i(S^{-0.5})^{+}][(S^{0.5})^{+}\mathbf{H}] = \mathbf{H}^{+}(S^{0.5}S^{-0.5})D_i(S^{0.5}S^{-0.5})^{+}\mathbf{H} = \mathbf{H}^{+}D_i\mathbf{H}. \tag{24}$$
But $\mathbf{H}^{+}D_i\mathbf{H}$ is the correlation energy $E_i$ of Eq. (4), obtained with the MACE filter based on unprocessed data. Thus we conclude that the MACE filter's performance cannot be changed by preprocessing the input images with any linear filter. In this sense, the MACE filter achieves the lowest possible values of $E_i$ while meeting the linear projection constraints.
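This invariance is easy to confirm numerically: prefiltering the training spectra by an arbitrary nonzero F(k) changes the filter but leaves every E_i unchanged. A NumPy sketch (an added example, with random data in place of real imagery):

```python
import numpy as np

rng = np.random.default_rng(4)
d, N = 40, 3
X = np.fft.fft(rng.random((d, N)), axis=0)     # stand-in training spectra
u = np.ones(N)
F = rng.random(d) + 0.5                        # arbitrary nonzero prefilter F(k)

def mace(X):
    # Eq. (11) with alpha_i = 1
    Dvec = np.sum(np.abs(X) ** 2, axis=1)
    DX = X / Dvec[:, None]
    return DX @ np.linalg.inv(X.conj().T @ DX) @ u

def energies(H, X):
    # E_i = H^+ D_i H, Eq. (4)
    return np.array([np.sum(np.abs(H) ** 2 * np.abs(X[:, i]) ** 2)
                     for i in range(X.shape[1])])

H = mace(X)                                    # filter on original data
Hbar = mace(F[:, None] * X)                    # filter on prefiltered data

print(np.allclose(energies(H, X), energies(Hbar, F[:, None] * X)))
```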

C. Single Training Image

Consider the case when N = 1. Let X represent the DFT of the single training image. The diagonal elements of D are then given by

$$D(k,k) = |X(k)|^2. \tag{25}$$
Thus the quadratic term $\mathbf{X}^{+}D^{-1}\mathbf{X}$ is given by
$$\mathbf{X}^{+}D^{-1}\mathbf{X} = \sum_{i=1}^{d}\sum_{j=1}^{d} X^{*}(i)\,X(j)\,D^{-1}(i,j), \tag{26}$$
where X(i) denotes the ith element of the vector X, and the superscript * denotes complex conjugation. Since D is diagonal, we substitute Eq. (25) into Eq. (26) to obtain
$$\mathbf{X}^{+}D^{-1}\mathbf{X} = \sum_{i=1}^{d}\frac{X^{*}(i)\,X(i)}{|X(i)|^{2}} = d. \tag{27}$$
For this special case, Eq. (11) reduces to
$$\mathbf{H} = u\,D^{-1}\mathbf{X}, \tag{28}$$
where a factor of d has been absorbed in the definition of u (the scalar representing the constraint on the output correlation peak value). If X(k) is the discrete sequence corresponding to the vector X in the sense of Eq. (1), and H(k) corresponds to H, we have the following frequency domain expression for the single training image MACE filter:
$$H(k) = u\,\frac{X(k)}{|X(k)|^{2}}. \tag{29}$$
Equation (29) is identical to the phase-correlation filter proposed by Pearson et al.[10] Thus their filter is a special case of the MACE filter and is obtained when N = 1. Note that the single training image MACE filter is not a phase-only filter[11] since its magnitude, $|H(k)| = |u|/|X(k)|$, is in general not constant. When X*(k) is input to the system (as in the case of the MSF) with the filter in Eq. (29), the data leaving the frequency plane are
$$X^{*}(k)H(k) = u\,\frac{X^{*}(k)X(k)}{|X(k)|^{2}} = u = \text{const}. \tag{30}$$
Since the left-hand side of Eq. (30) represents the product of the Fourier transform of the input and the filter, we obtain the output correlation
$$x(n)\otimes h(n) = u\,\delta(n), \tag{31}$$
where $\delta(n)$ represents the delta function in the correlation plane. Thus we find that for this special case the MACE filter sets the entire correlation function to zero (minimum correlation energy) except at the origin, where the amplitude must be at a user-specified value. This results in a delta function of height u in the output correlation plane. We conclude that the phase-correlation method minimizes correlation plane energy by power spectrum normalization and is a special case of the proposed MACE filter.
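The single-image case can be reproduced in a few lines: with the filter of Eq. (29), the output correlation plane is identically zero except for a peak of height u at the origin. The NumPy sketch below (an added example with a random stand-in image) assumes X(k) ≠ 0 for all k:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 64
x = rng.random(d)            # single stand-in training image
u = 1.0

X = np.fft.fft(x)
H = u * X / np.abs(X) ** 2   # Eq. (29); requires X(k) != 0 everywhere

G = np.conj(X) * H           # frequency-plane product of Eq. (30): = u for all k
g = np.fft.ifft(G)           # output correlation plane

# A delta of height u at the origin, zero elsewhere (to machine precision)
print(np.isclose(g[0].real, u), np.allclose(g[1:], 0))
```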

VI. Iterative Energy Scatter Reduction

The MACE filter described in Sec. IV minimizes the average correlation energy Eav in Eq. (10) for the training set. This ensures that (on average) all training data correlation planes yield the sharpest possible peaks while meeting the imposed constraints. We now propose a further iterative improvement to the filter. Some individual Ei values may lie far below Eav while others may exceed it by a large amount. We thus consider a filter that minimizes the largest of the individual Ei. This increases Eav by a small amount but reduces the scatter in the Ei values, which is preferable. The scatter of the Ei is

$$\sigma^{2} = \frac{1}{N}\sum_{i=1}^{N}(E_i - E_{\mathrm{av}})^{2}. \tag{32}$$
The filter obtained by reducing the scatter is a suboptimal MACE filter but should yield better performance over all training images.

This final MACE filter is derived from the optimal MACE filter in Eq. (11) by posing a minimax optimization problem solved with a simple gradient-descent procedure. To reduce the correlation energies of those images whose Ei (from the optimal MACE filter) are large, we alter their αi coefficients in Eq. (9). The coefficients αi determine the contribution of Di toward D and hence the contribution of Ei toward Eav. When the αi are not all exactly equal to 1, the weighted sum of the Ei is no longer the exact average. We denote this weighted sum

$$\sum_{i=1}^{N}\alpha_i E_i$$
by $\bar{E}_{\mathrm{av}}$. The corresponding scatter is denoted by $\bar{\sigma}^{2}$. By setting a particular αk larger than the others, we obtain a filter that reduces Ek more than the other Ei. Hence, to reduce the scatter, we set large αi for those images with large Ei (since it is these correlation plane energies that must be reduced most), while the αi corresponding to small Ei are kept small. The filter is then resynthesized according to Eq. (11) with the altered αi. This process continues until the scatter reaches a minimum. The algorithm is summarized in Table I. In our tests we used P = 3 in Table I; smaller values of P result in slower descent, and large P values may cause oscillations. A formal proof of convergence for this procedure does not yet exist; however, the algorithm was found to converge in all cases examined.
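Since Table I is not reproduced here, the update rule in the following sketch is an assumption: a multiplicative reweighting that raises the αi of images with above-average Ei, with the exponent 1/P playing the role of the step-size parameter. The sketch resynthesizes via Eq. (11) after each update and retains the filter with the smallest scatter (random stand-in spectra, not real imagery):

```python
import numpy as np

rng = np.random.default_rng(6)
d, N, P = 64, 4, 3.0
X = np.fft.fft(rng.random((d, N)), axis=0)    # stand-in training spectra
u = np.ones(N)

def synthesize(alpha):
    # Eq. (11) with D = sum_i alpha_i D_i, as in Eq. (9)
    Dvec = np.abs(X) ** 2 @ alpha
    DX = X / Dvec[:, None]
    return DX @ np.linalg.inv(X.conj().T @ DX) @ u

def energies(H):
    # E_i = H^+ D_i H, Eq. (4)
    return np.array([np.sum(np.abs(H) ** 2 * np.abs(X[:, i]) ** 2)
                     for i in range(N)])

alpha = np.ones(N)
best_H = synthesize(alpha)
E = energies(best_H)
best_scatter = np.var(E)
for _ in range(10):
    alpha = alpha * (E / E.mean()) ** (1.0 / P)   # assumed update: boost large-E_i weights
    H = synthesize(alpha)
    E = energies(H)
    if np.var(E) < best_scatter:
        best_H, best_scatter = H, np.var(E)

# Whatever the weights, the resynthesized filter still satisfies Eq. (7)
print(np.allclose(X.conj().T @ best_H, u))
```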

VII. Initial Simulation Results

The new suboptimal MACE filter was synthesized to discriminate between a tank and an armored personnel carrier (APC). Thirty-six images of each object were available from a 20° depression angle at 10° increments in aspect. Six images of each object were chosen at aspect intervals of 60° for a two-class training set of twelve images. Each image contained 32 × 32 pixels with the pixel values coded to 256 gray levels. Edge enhancement was not performed. We now report the test results obtained by correlating the training images with the MACE filter. Correlation output amplitudes of 1.0 and 0.296 were arbitrarily specified for true (class 1, tank) and false (class 2, APC) class targets. Since detection is in intensity mode, we expect to measure output correlation intensities of 1.0 and 0.296² = 0.0876. The total CPU time for filter synthesis (including iterative scatter reduction) based on these twelve training images was 50.4 s on a VAX 11/750.

The results of the iterative procedure for minimizing correlation energy scatter are shown in Fig. 2. The ● points in Fig. 2 are the initial individual correlation energy levels Ei of the training images prior to iterative reduction of scatter σ². The first six training images are tanks, and training images 7 to 12 are APCs. The average correlation energy value is found to be 3.875. This is the minimum produced by the initial algorithm for the given training set. The scatter σ² is 2.13. The × points in Fig. 2 show the correlation energies for each image after two iterative cycles of scatter reduction with P = 3. The new correlation energy average is 4.09 (an increase of 0.22), while the scatter is reduced to 1.03 (a reduction of 1.10). A relatively significant decrease in the scatter of the correlation energies is obtained at the cost of an acceptably small increase in the average energy value, as shown in Fig. 2. Note that E4 (the correlation energy for the fourth training image) increased while all other true class Ei, i = 1, 2, … ,6 decreased. This can be attributed to the fact that the initial value of E4 was sufficiently smaller than Emax to cause α4 to be small. In general, the results of the scatter reduction algorithm are data dependent, and no significant comment can be made on the behavior of individual Ei values.

Typical 3-D plots of class 1 and class 2 correlation planes are shown in Figs. 3(A) and (B), respectively. The sharpness of the correlation peak is excellent, and the sidelobes away from the peaks are very low in both cases. Tables II and III list the statistics for true and false class training data, respectively. These data include the test data identifiers, the intensity at the center of the correlation peak, the largest value anywhere in the correlation plane, the location of the largest correlation plane peak, plus two measures (N and PSR) of the sharpness of the correlation peak described below. The parameter N is the number of standard deviations by which the peak exceeds the mean of the correlation plane. PSR is the peak-to-sidelobe ratio measured in an 11 × 11 pixel region around the peak. The full 64 × 64 pixel correlation plane was stored in 64 × 64 memory arrays. Pixel (33,33) is the center, at which the value is user specified (1.0 or 0.0876). We note that all twelve correlation planes satisfied the imposed constraints at the center. For the true class 1 images, the peak at the center is also the largest peak and has very high N and PSR measures, indicating a sharp peak. The largest peak for the false class objects to be rejected is not always at the center but is three pixels off-center. Its value is much lower than the true class peak, and thus detection of the class 1 objects is improved. As seen, the largest peaks anywhere for class 2 objects have low values between 0.08 and 0.21. In general, the N and PSR measures are much larger for true class 1 than for false class 2 peaks, as expected.

VIII. Experimental Results

We now present initial optical laboratory results obtained with MACE filters. Twelve high resolution (256 × 256) images were used for filter synthesis, six images of the tank and six of the APC. Output correlation peak amplitudes of 1.0 (for the tank) and 0.707 (for the APC) were specified. The measured output intensities are thus expected to be 1.0 and 0.5 for class 1 and class 2 images, respectively. The MACE filter was synthesized in the frequency plane, and the image plane MACE filter was obtained by an inverse DFT. It is real since the training data are real. The 2-D discrete filter image h(i,j) was regenerated from h(n) by reordering the samples appropriately. The resulting gray level image plane filter was then recorded on a laser printer using halftoning techniques with sixty-four gray levels. The image plane MACE filter from the laser printer was photoreduced to 0.5 × 0.5 cm². Its frequency domain matched filter was formed optically at λ = 633 nm with an fL = 371 mm lens and an optical reference beam at 20°.

A test scene of two tanks and two APCs was generated (Fig. 4). It contains two training set images (the 0° tank and APC at the left) and two nontraining set images (the 10° rotated tank and APC shown at the right). This test scene was recorded with sixty-four gray levels on a laser printer, photoreduced to 1 × 1 cm², and placed in the input plane of an optical frequency plane correlator, with the MACE filter in the frequency plane. Figure 5 shows cross sections of the correlation output for the two object classes. The cross section of the correlation output for the two tank inputs is shown in Fig. 5(A); two large and sharp peaks occur. Figure 5(B) shows the outputs for the class 1 object (tank) as the left peak and the class 2 object (APC) as the right peak. The nearly equal peaks in Fig. 5(A) demonstrate the distortion tolerance of the filter. The tank correlation peak at the left in Fig. 5(B) is of the same height. The APC correlation peak to the right in Fig. 5(B) is seen to be half the height of the tank peak, in agreement with theory. This verifies the ability to control correlation peak values and reject one class of object. The sidelobes are seen to be low, and hence the high PSR of the MACE filter is demonstrated.

Appendix

Let A be a nonsingular matrix. We wish to find a vector $\mathbf{H}$ so that the quadratic term $\mathbf{H}^{+}A\mathbf{H}$ is minimized subject to the linear constraint

$$X^{+}\mathbf{H} = \mathbf{u}, \tag{A1}$$
where the superscript + denotes complex conjugate transpose. The ith column of the matrix X is denoted by Xi and ui denotes the ith element of the vector u. We form the functional to be minimized:
$$\phi = \mathbf{H}^{+}A\mathbf{H} - 2\lambda_1(\mathbf{H}^{+}\mathbf{X}_1 - u_1) - \cdots - 2\lambda_N(\mathbf{H}^{+}\mathbf{X}_N - u_N), \tag{A2}$$
where λ1, … ,λN are parameters introduced to satisfy the constrained minimization. Setting the gradient of ϕ with respect to H equal to O (the zero vector), we see that the vector H satisfies
$$A\mathbf{H} = \lambda_1\mathbf{X}_1 + \cdots + \lambda_N\mathbf{X}_N, \tag{A3}$$
where the coefficients λi are chosen to satisfy the constraints in Eq. (A1). Since the matrix A is invertible, we can rewrite H as
$$\mathbf{H} = A^{-1}\Bigl[\sum_{i=1}^{N}\lambda_i\mathbf{X}_i\Bigr] = \sum_{i=1}^{N}\lambda_i A^{-1}\mathbf{X}_i. \tag{A4}$$
In terms of the vector $\mathbf{L} = [\lambda_1, \lambda_2, \ldots, \lambda_N]^{T}$,
$$\mathbf{H} = A^{-1}X\mathbf{L}. \tag{A5}$$
Substituting Eq. (A5) into Eq. (A1), the constraint equation becomes
$$X^{+}A^{-1}X\mathbf{L} = \mathbf{u}, \tag{A6}$$
which is solved for L as
$$\mathbf{L} = (X^{+}A^{-1}X)^{-1}\mathbf{u}. \tag{A7}$$
Therefore, substituting $\mathbf{L}$ from Eq. (A7) into Eq. (A5), we obtain the final expression for $\mathbf{H}$ as
$$\mathbf{H} = A^{-1}X\,(X^{+}A^{-1}X)^{-1}\mathbf{u}. \tag{A8}$$
The vector $\mathbf{H}$ in Eq. (A8) simultaneously satisfies $X^{+}\mathbf{H} = \mathbf{u}$ and minimizes $\mathbf{H}^{+}A\mathbf{H}$.
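A numerical spot-check of this result (an added sketch with random stand-in data): for a positive-definite diagonal A, the closed-form H of Eq. (A8) meets the constraint, and any constraint-preserving perturbation can only increase the quadratic term:

```python
import numpy as np

rng = np.random.default_rng(7)
d, N = 24, 2
X = rng.random((d, N)) + 1j * rng.random((d, N))   # stand-in data
u = np.array([1.0, 0.5])
a = rng.random(d) + 0.1                            # diagonal of A (positive definite)

# Eq. (A8): H = A^{-1} X (X^+ A^{-1} X)^{-1} u
AinvX = X / a[:, None]
H = AinvX @ np.linalg.inv(X.conj().T @ AinvX) @ u

# Perturb H without violating the constraint: add v with X^+ v = 0
v = rng.random(d) + 1j * rng.random(d)
v -= X @ np.linalg.solve(X.conj().T @ X, X.conj().T @ v)
Hp = H + v

quad = lambda w: np.real(w.conj() @ (a * w))       # H^+ A H for diagonal A
print(np.allclose(X.conj().T @ Hp, u), quad(H) <= quad(Hp))
```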

The authors acknowledge the support of this research by the independent research and development funds of General Dynamics–Pomona. We thank J. Z. Song for helpful laboratory assistance.

Figures and Tables


Fig. 1 MACE filter as a cascade of the prefilter P and the projection SDF H¯.


Fig. 2 Plot of training image correlation energies before ● and after × iterative scatter reduction.


Fig. 3 (A) True class correlation plane for tank; (B) false class correlation plane for APC.


Fig. 4 Input plane test scene for optical correlator.


Fig. 5 (A) Cross section of class 1 (tanks) optical correlation peaks; (B) cross section of class 1 and 2 (tank and APC) optical correlation peaks.


Table I. Iterative Scatter Reduction Algorithm


Table II. Correlation Plane Statistics for Class 1 Image (Tanks)

Tables Icon

Table III. Correlation Plane Statistics for Class 2 Image (APC)

References

1. A. B. VanderLugt, "Signal Detection by Complex Matched Spatial Filtering," IEEE Trans. Inf. Theory IT-10, 139 (1964).

2. D. Casasent and D. Psaltis, "Position, Rotation, and Scale Invariant Optical Correlation," Appl. Opt. 15, 1795 (1976).

3. Y. N. Hsu and H. H. Arsenault, "Optical Pattern Recognition using the Circular Harmonic Expansion," Appl. Opt. 21, 4016 (1982).

4. H. J. Caulfield and M. H. Weinberg, "Computer Recognition of 2-D Patterns using Generalized Matched Filters," Appl. Opt. 21, 1699 (1982).

5. D. Casasent, "Unified Synthetic Discriminant Function Computational Formulation," Appl. Opt. 23, 1620 (1984).

6. D. Casasent and W. T. Chang, "Correlation Synthetic Discriminant Functions," Appl. Opt. 25, 2343 (1986).

7. R. R. Kallman, "Construction of Low Noise Optical Correlation Filters," Appl. Opt. 25, 1032 (1986).

8. B. V. K. Vijaya Kumar, "Minimum Variance Synthetic Discriminant Functions," J. Opt. Soc. Am. A 3, 1579 (1986).

9. B. V. K. Vijaya Kumar and A. Mahalanobis, "Alternate Interpretation for Minimum Variance Synthetic Discriminant Functions," Appl. Opt. 25, 2484 (1986).

10. J. J. Pearson, D. C. Hines Jr., S. Golosman, and C. D. Kuglin, "Video-Rate Image Correlation Processor," Proc. Soc. Photo-Opt. Instrum. Eng. 119, 197 (1977).

11. J. L. Horner and P. D. Gianino, "Phase-Only Matched Filtering," Appl. Opt. 23, 812 (1984).
