Sparse representation-based classification (SRC) has attracted increasing
attention in the remote-sensing hyperspectral community for its competitive
performance against established classification algorithms. Kernel sparse
representation-based classification (KSRC) is a nonlinear extension of SRC
that makes pixels from different classes linearly separable. However, KSRC
only projects data from the original space into a feature space with a
predefined parameter, without integrating a priori domain knowledge such as
the contributions of different spectral features. In this study, customizing
kernel sparse representation-based classification (CKSRC) is proposed, which
incorporates the nearest-neighbor density as a weighting scheme in
traditional kernels. Analyses were conducted on two publicly available data
sets. In comparison with other classification algorithms, the proposed CKSRC
further increases the overall classification accuracy and yields robust
classification results across different selections of training samples.
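The weighting idea described above can be illustrated with a minimal sketch: a Gaussian RBF kernel whose squared distance is weighted per spectral band. The function name and the use of a generic per-band weight vector (in place of the paper's nearest-neighbor-density scheme) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def weighted_rbf_kernel(X, Y, weights, gamma=1.0):
    """Gaussian RBF kernel with per-band weights (illustrative sketch):
    k(x, y) = exp(-gamma * sum_b w_b * (x_b - y_b)**2).

    X, Y    : (n_x, n_bands), (n_y, n_bands) pixel matrices
    weights : (n_bands,) nonnegative band weights (hypothetical scheme)
    """
    # Scale each band by sqrt(w_b) so plain squared Euclidean distances
    # between the scaled pixels become band-weighted distances.
    Xw = X * np.sqrt(weights)
    Yw = Y * np.sqrt(weights)
    # Pairwise squared distances via ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = (Xw**2).sum(1)[:, None] + (Yw**2).sum(1)[None, :] - 2 * Xw @ Yw.T
    return np.exp(-gamma * np.clip(sq, 0.0, None))
```

With unit weights this reduces to the ordinary Gaussian RBF kernel, so the standard kernel is recovered as a special case.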
3. Calculate the sparse coefficients for sample [...] by solving the following [...]-minimization problem with the customizing Gaussian radial basis function kernel: [equation omitted].
4. Compute the reconstruction error [...] for [...].
5. Output: [...].
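The steps above (sparse coding, class-wise reconstruction error, output of the minimizer) can be sketched in the input space. This simplified version uses an ISTA solver for the ℓ1-penalized problem and omits the kernelization entirely, so the function name and solver choice are assumptions for illustration.

```python
import numpy as np

def src_classify(D, labels, y, lam=0.1, n_iter=200):
    """Sketch of SRC steps 3-5 (input space, no kernel).

    D      : (n_bands, n_train) dictionary whose columns are training pixels
    labels : (n_train,) class label of each dictionary atom
    y      : (n_bands,) test pixel
    """
    # Step 3: sparse coefficients by ISTA for
    #   min_a (1/2) * ||D a - y||^2 + lam * ||a||_1
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - (D.T @ (D @ a - y)) / L      # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    # Step 4: reconstruction error using only each class's atoms
    errs = {}
    for c in np.unique(labels):
        mask = labels == c
        errs[c] = np.linalg.norm(y - D[:, mask] @ a[mask])
    # Step 5: output the class with the smallest reconstruction error
    return min(errs, key=errs.get)
```

The key design choice, shared with SRC and KSRC, is that the label comes from comparing class-restricted residuals rather than from the raw coefficients.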
Table 1. Number of Training and Testing Pixels for the AVIRIS Data Set

| Class | Training | Testing |
|---|---|---|
| C1-Corn-notill | 717 | 1434 |
| C2-Corn-mintill | 417 | 834 |
| C3-Grass-pasture | 248 | 497 |
| C4-Grass-trees | 373 | 747 |
| C5-Hay-windrowed | 244 | 489 |
| C6-Soybean-notill | 484 | 968 |
| C7-Soybean-mintill | 1234 | 2468 |
| C8-Soybean-clean | 307 | 614 |
| C9-Woods | 647 | 1294 |
| Total | 4671 | 9345 |
Table 2. Number of Training and Testing Pixels for the Center of Pavia Data Set

| Class | Training | Testing |
|---|---|---|
| C1-Water | 2355 | 4712 |
| C2-Trees | 270 | 542 |
| C3-Asphalt | 110 | 220 |
| C4-Brick | 95 | 191 |
| C5-Bitumen | 235 | 470 |
| C6-Tile | 330 | 660 |
| C7-Shadow | 260 | 520 |
| C8-Meadow | 1525 | 3059 |
| C9-Soil | 100 | 204 |
| Total | 5280 | 10578 |
Table 3. Comparison of Cohen Kappa Coefficients, Overall Classification Accuracies (%), Average Classification Accuracies (%), and Classification Accuracies (%) Yielded by the SVM, CSVM, SRC, KSRC, and CKSRC Algorithms on the AVIRIS Data Set^a

| | SVM | CSVM | SRC | KSRC | CKSRC |
|---|---|---|---|---|---|
| Cohen kappa coefficient | 0.9394 | 0.9140 | 0.8271 | 0.9606 | 0.9633 |
| Overall classification accuracy | 94.84 | 92.66 | 85.29 | 96.64 | 96.88 |
| Average classification accuracy | 95.46 | 93.39 | 85.97 | 96.97 | 97.12 |
| Classification accuracy: | | | | | |
| C1-Corn-notill | 91.07 | 90.38 | 79.43 | 93.58 | 94.28 |
| C2-Corn-mintill | 89.57 | 84.41 | 73.98 | 93.76 | 93.17 |
| C3-Grass-pasture | 98.99 | 97.59 | 88.53 | 98.99 | 98.59 |
| C4-Grass-trees | 99.87 | 99.46 | 93.98 | 99.87 | 99.73 |
| C5-Hay-windrowed | 100.00 | 100.00 | 99.59 | 99.80 | 99.80 |
| C6-Soybean-notill | 90.50 | 82.54 | 76.45 | 94.73 | 95.76 |
| C7-Soybean-mintill | 94.53 | 92.14 | 84.44 | 96.60 | 96.92 |
| C8-Soybean-clean | 94.79 | 94.30 | 78.50 | 95.44 | 95.93 |
| C9-Woods | 99.85 | 99.69 | 98.84 | 100.00 | 99.92 |

^a The maximum value of each row is shown in boldface.
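The summary statistics reported in Tables 3 and 4 can all be recomputed from a confusion matrix with standard formulas. The sketch below uses those textbook definitions; the function name is an assumption for illustration.

```python
import numpy as np

def accuracy_metrics(cm):
    """Cohen kappa, overall, average, and class-wise accuracy from a
    confusion matrix `cm` (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                    # observed agreement = overall accuracy
    pe = (cm.sum(0) @ cm.sum(1)) / n**2      # chance agreement from the marginals
    kappa = (po - pe) / (1 - pe)             # Cohen kappa coefficient
    per_class = np.diag(cm) / cm.sum(1)      # accuracy within each true class
    return {
        "kappa": kappa,
        "overall_accuracy": 100 * po,        # percent, as in Tables 3 and 4
        "average_accuracy": 100 * per_class.mean(),
        "class_accuracy": 100 * per_class,
    }
```

Note that overall accuracy weights classes by their test-pixel counts while average accuracy weights them equally, which is why the two can differ sharply when one class (e.g., C9-Soil in Table 4) is poorly classified.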
Table 4. Comparison of Cohen Kappa Coefficients, Overall Classification Accuracies (%), Average Classification Accuracies (%), and Classification Accuracies (%) Yielded by the SVM, CSVM, SRC, KSRC, and CKSRC Algorithms on the Center of Pavia Data Set^a

| | SVM | CSVM | SRC | KSRC | CKSRC |
|---|---|---|---|---|---|
| Cohen kappa coefficient | 0.9732 | 0.9651 | 0.9740 | 0.9833 | 0.9851 |
| Overall classification accuracy | 98.12 | 97.53 | 98.17 | 98.81 | 98.95 |
| Average classification accuracy | 92.33 | 91.26 | 94.52 | 96.31 | 96.57 |
| Classification accuracy: | | | | | |
| C1-Water | 99.98 | 99.98 | 99.94 | 100.00 | 100.00 |
| C2-Trees | 97.79 | 95.02 | 95.02 | 96.49 | 97.42 |
| C3-Asphalt | 96.36 | 81.36 | 95.91 | 95.45 | 95.00 |
| C4-Brick | 92.67 | 69.11 | 86.39 | 93.72 | 91.62 |
| C5-Bitumen | 98.09 | 95.11 | 97.23 | 98.09 | 98.51 |
| C6-Tile | 95.30 | 95.30 | 92.27 | 95.91 | 96.36 |
| C7-Shadow | 94.62 | 89.42 | 94.23 | 95.77 | 95.77 |
| C8-Meadow | 99.77 | 99.44 | 99.51 | 99.71 | 99.80 |
| C9-Soil | 56.37 | 96.57 | 90.20 | 91.67 | 94.61 |

^a The maximum value of each row is shown in boldface.
Table 5. Comparison of Computation Time (s) Yielded by the SVM, CSVM, SRC, KSRC, and CKSRC Algorithms on the AVIRIS and Center of Pavia Data Sets

| Data set | SVM | CSVM | SRC | KSRC | CKSRC |
|---|---|---|---|---|---|
| AVIRIS | 9.4 | 21.9 | 22.2 | 93.8 | 112.7 |
| Center of Pavia | 7.4 | 9.1 | 73.1 | 104.8 | 113.4 |
Table 6. Quantile of the Contrastive Algorithms with Different Percentages of the Training Sample on the AVIRIS and Center of Pavia Data Sets

| Data set | Algorithm | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
|---|---|---|---|---|---|---|---|---|---|---|
| AVIRIS | SVM | 1.30 | 1.30 | 1.29 | 1.30 | 1.29 | 1.29 | 1.29 | 1.30 | 1.33 |
| | CSVM | 1.30 | 1.29 | 1.29 | 1.29 | 1.29 | 1.30 | 1.30 | 1.31 | 1.31 |
| | SRC | 1.33 | 1.36 | 1.31 | 1.31 | 1.33 | 1.30 | 1.33 | 1.31 | 1.42 |
| | KSRC | 1.30 | 1.29 | 1.29 | 1.29 | 1.29 | 1.30 | 1.29 | 1.30 | 1.32 |
| Center of Pavia | SVM | 1.29 | 1.29 | 1.28 | 1.28 | 1.28 | 1.28 | 1.29 | 1.29 | 1.29 |
| | CSVM | 1.29 | 1.29 | 1.28 | 1.29 | 1.29 | 1.29 | 1.29 | 1.29 | 1.30 |
| | SRC | 1.29 | 1.29 | 1.28 | 1.28 | 1.28 | 1.28 | 1.28 | 1.29 | 1.29 |
| | KSRC | 1.29 | 1.29 | 1.28 | 1.28 | 1.28 | 1.28 | 1.29 | 1.29 | 1.29 |

Column headers give the percentage of the training sample.
Table 7.
Observation Values of the Contrastive Algorithms with
Different Percentages of the Training Sample with the AVIRIS and Center
of Pavia Data Sets