
Application and comparison of active and transfer learning approaches for modulation format classification in visible light communication systems

Open Access

Abstract

Automatic modulation classification (AMC) is a crucial part of adaptive modulation schemes for visible light communication (VLC) systems. However, most deep learning (DL) based AMC methods for VLC systems require a large amount of labeled training data, which is quite difficult to obtain in practical systems. In this work, we introduce active learning (AL) and transfer learning (TL) approaches for AMC in VLC systems and experimentally analyze their performance. Experimental results show that the proposed AlexNet-AL and AlexNet-TL methods can significantly improve the classification accuracy with small amounts of training data. Specifically, using 60 labeled samples, AlexNet-AL and AlexNet-TL increase the classification accuracy by 6.82% and 14.6%, respectively, compared to the baseline without AL or TL. Moreover, applying a data augmentation (DA) operation along with our proposed methods yields further performance gains.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Visible light communication (VLC) is a promising wireless communication technology which utilizes visible light between 380 and 750 nm [1–3]. Owing to its rich spectral resources, high data rates, strong confidentiality and immunity to electromagnetic interference, VLC is expected to become an indispensable part of 6G communication [4,5]. Unfortunately, the frequency response of the VLC channel is not flat and the bandwidth is limited, which results in an uneven signal-to-noise ratio (SNR) distribution over the operating bandwidth. To alleviate the effect of the imbalanced SNR distribution and enhance spectral efficiency, adaptive modulation schemes are often adopted in orthogonal frequency-division multiplexing (OFDM) VLC systems [6–8]. In adaptive modulation schemes, the modulation formats of different subcarriers are dynamically selected based on the SNR values. Since the demodulation of a received signal is determined by its modulation format, automatic modulation classification (AMC) is vital for adaptive modulation schemes [9]. Besides, it can contribute to spectrum management in optical wireless communications (OWC).

The traditional AMC methods generally include likelihood-based (LB) methods and feature-based (FB) methods [10]. The LB approaches [11,12] compute the likelihood function of the received signal and compare the likelihood ratio against a threshold to make the decision. On the other hand, in the FB approaches [13,14], a decision is made based on the observed values of several employed features. With the development of deep learning (DL) in recent years, DL-based FB methods have been implemented to improve the accuracy of AMC. In [15], the authors adopt two convolutional neural network (CNN) models, AlexNet [16] and GoogLeNet, for modulation classification, yielding better classification accuracy than the traditional FB methods. Besides, generative adversarial networks (GANs) have been introduced to achieve higher accuracy [17]. In [18], a DL-enabled modulation classification scheme using only a few received symbols is proposed for VLC systems. In [19], an OFDM-based progressive growth meta-learning AMC scheme is proposed for underwater OWC, which achieves fast self-learning for new tasks with less training time and data. Unfortunately, most DL-based methods require a large amount of labeled training data to train deep neural networks that generalize well, and such data are very difficult to collect and label in real VLC systems. When the amount of available training data is limited, two methods can be adopted to improve the classification accuracy [20]. The first is active learning (AL) [21], which chooses the most informative samples to be labeled and trains the model on these labeled samples. The second is transfer learning (TL) [22], which extracts new useful knowledge from external data. TL improves the performance of a machine learning (ML) model by applying knowledge learned in a source domain to the target domain, where the ML model is actually tested and used.

In this work, we experimentally demonstrate AL- and TL-based AMC schemes for VLC systems and make a detailed comparison. AlexNet combined with AL or TL is denoted as AlexNet-AL or AlexNet-TL, respectively. AlexNet without AL and TL is adopted as the baseline, called AlexNet-Raw. The major contributions of this paper are as follows:

  • i) A preliminary simulation-based performance analysis of a TL-based AMC scheme was presented in our earlier study [23]. In this work, we experimentally demonstrate the applicability of the proposed technique.
  • ii) AL is introduced into AMC for OFDM-VLC systems for the first time. We compare the classification accuracy of the AL- and TL-based AMC schemes as a function of the training data size. Experimental results show that our proposed AlexNet-AL and AlexNet-TL methods achieve higher classification accuracy than the baseline in realistic small-training-data scenarios.
  • iii) We propose a data augmentation (DA) based scheme that can further improve the classification accuracy for different training data sizes.

The rest of this paper is organized as follows: Section 2 introduces the operating principle of proposed AMC schemes. Section 3 presents the experimental setup of VLC system and describes the datasets and training process details. Section 4 discusses the experimental results. Finally, in Section 5, we summarize the paper.

2. Principle

2.1 Proposed modulation format classification techniques

The proposed AlexNet-AL/TL modulation format classification process is illustrated in Fig. 1. We first generate the constellation diagram by mapping the received complex signal samples to scatter points on the complex plane. To obtain more color features, we then convert the constellation diagram to a contour stellar image [24]. Specifically, we count the number of adjacent sample points around each sample point of the constellation diagram within a square window (shown in red in Fig. 1). Each sample point is then colored according to its number of adjacent points: yellow for a large number, green for a medium number, and blue for a small number. Overall, the received complex signal samples are transformed into contour stellar images, which serve as inputs to the CNN model.
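As a minimal sketch of this conversion (the bin count, plane extent and toy 4QAM data are illustrative assumptions, not values from the paper), the density map underlying a contour stellar image can be computed as:

```python
import numpy as np

def contour_stellar_image(samples, bins=64, extent=1.5):
    # Sketch of the constellation -> contour stellar image step.
    # `bins` and `extent` (half-width of the shown complex plane) are
    # illustrative assumptions.
    counts, _, _ = np.histogram2d(
        samples.real, samples.imag, bins=bins,
        range=[[-extent, extent], [-extent, extent]])
    # Normalized local density in [0, 1]; a blue-green-yellow colormap
    # then realizes the coloring described above (blue = few neighbors,
    # yellow = many neighbors).
    return counts / counts.max()

# Example: noisy 4QAM symbols.
rng = np.random.default_rng(0)
qam4 = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=5000)
noise = 0.1 * (rng.standard_normal(5000) + 1j * rng.standard_normal(5000))
img = contour_stellar_image(qam4 + noise)
```

The resulting 2D array plays the role of the colored image fed to the CNN.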

Fig. 1. Proposed AlexNet-AL/TL modulation format classification process.

In our proposed AlexNet-AL/TL schemes, we utilize AlexNet as the base CNN model and introduce AL, TL and DA techniques. AlexNet is a landmark model for image classification, consisting of five convolutional layers and three fully-connected layers with a 1000-way SoftMax output. It played a major role in popularizing CNNs in computer vision. To classify the 6 modulation formats in our dataset, we change the size of the output layer of AlexNet from 1000 to 6.

2.2 AlexNet-AL

In real VLC systems, unlabeled data are relatively easy to obtain at low cost, but labeled data are usually expensive and time-consuming to acquire. AL seeks effective ways to select unlabeled samples for labeling, in order to maximize the accuracy using as few labeled samples as possible. Figure 2 depicts a typical active learning cycle. AL proceeds in rounds: in each round, the model is trained using the labeled training set, and the current model is used to assess the informativeness of the unlabeled samples. The most informative samples are selected by a query selection strategy, their labels are obtained from a labeling oracle (e.g., a human annotator), and the newly labeled samples are added to the labeled training set. The model is then retrained on the enlarged labeled training set. The process repeats until no labeling budget remains.
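The cycle can be sketched as a generic loop; the `train`, `select` and `annotate` callables are hypothetical placeholders for the model training step, the query selection strategy, and the labeling oracle:

```python
def active_learning_cycle(labeled, unlabeled, budget, per_round,
                          train, select, annotate):
    # Generic sketch of the cycle in Fig. 2 (placeholders, not the
    # authors' implementation): train(labeled) -> model,
    # select(model, pool, k) -> k most informative samples,
    # annotate(samples) -> the same samples with oracle labels.
    while budget > 0 and unlabeled:
        model = train(labeled)                     # train on current labels
        k = min(per_round, budget, len(unlabeled))
        queries = select(model, unlabeled, k)      # query selection strategy
        for q in queries:                          # move pool -> labeled set
            unlabeled.remove(q)
        labeled.extend(annotate(queries))          # labeling oracle
        budget -= k
    return train(labeled)                          # final retrained model

# Toy usage with trivial stubs, just to exercise the control flow.
labeled, unlabeled = [0, 1], [2, 3, 4, 5]
final = active_learning_cycle(
    labeled, unlabeled, budget=3, per_round=2,
    train=lambda L: len(L),
    select=lambda m, pool, k: pool[:k],
    annotate=lambda qs: qs)
```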

Fig. 2. A typical active learning cycle.

Our proposed AlexNet-AL method is described in Algorithm 1. Its basic idea is to select the samples that are most difficult for the current model to classify. Given a labeled training set ${X^L}$ and an unlabeled sample pool ${X^U}$, AlexNet-AL first trains the model using the current labeled samples. Then, all unlabeled samples are predicted using the current model. We use margin sampling (MS) [25] to pick the most uncertain samples $x_i^{MS}$ from ${X^U}$. MS picks the sample with the smallest separation between the top two class predictions:

$$x_i^{MS} = \mathop {\arg \min }\limits_{{x_i}} ({p({ {{y_1}} |{x_i}} )- p({ {{y_2}} |{x_i}} )} )$$
where ${y_1}$ and ${y_2}$ are the first and second most probable class labels predicted by AlexNet, respectively. Intuitively, the classifier is uncertain about a sample if the predicted probabilities of its most likely and second most likely classes are close. Therefore, the human annotator is needed to help the classifier discriminate between these two classes.
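A minimal NumPy sketch of this selection rule over a pool of predicted probabilities (the toy probability matrix is illustrative):

```python
import numpy as np

def margin_sampling(probs, k):
    # probs: (n_unlabeled, n_classes) predicted class probabilities.
    # Returns indices of the k samples with the smallest margin
    # p(y1|x) - p(y2|x) between the top two predicted classes.
    top2 = np.sort(probs, axis=1)[:, -2:]   # two largest per row
    margin = top2[:, 1] - top2[:, 0]
    return np.argsort(margin)[:k]

probs = np.array([[0.50, 0.45, 0.05],   # margin 0.05 -> uncertain
                  [0.90, 0.05, 0.05],   # margin 0.85 -> confident
                  [0.40, 0.35, 0.25]])  # margin 0.05 -> uncertain
picked = margin_sampling(probs, k=2)    # selects samples 0 and 2
```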

2.3 AlexNet-TL

To overcome the limited amount of training data in real VLC systems, we propose TL with a pre-trained AlexNet for the classification of six modulation formats. TL refers to using knowledge learned in one domain (the source domain) in another domain (the target domain) to improve the model’s generalization performance. Figure 3 shows our proposed AlexNet-TL method. It involves two steps: pre-training AlexNet (the source domain) and fine-tuning (the target domain). In the pre-training step, AlexNet is trained on more than a million images spanning 1000 object categories from ImageNet [26], which differ from the contour stellar images in the target domain. In the fine-tuning step, the pre-trained AlexNet is reused and the size of its final output layer is changed to the number of classes in the target domain. The model is then retrained on data from the target domain. Unlike the fixed feature extractor used in [23], all weights of the AlexNet are fine-tuned in this work. With TL, the knowledge learned in the source domain can be applied to the target domain.

Fig. 3. The proposed AlexNet-TL method.

2.4 DA

DA is widely used in DL to increase the diversity of the training dataset, prevent overfitting and improve model robustness. By artificially expanding the training dataset, DA can compensate for a lack of training data. Given the limited training data in real VLC systems, we propose a DA approach for AlexNet-AL and AlexNet-TL to achieve better performance. Specifically, when training the model with AlexNet-AL and AlexNet-TL, we rotate the contour stellar images by a random angle between $- 90^\circ$ and $90^\circ$. The effect of the DA operation on a contour stellar image is shown in Fig. 4. The diversity of the training data is clearly increased by the DA operation, expanding the effective training dataset.

Fig. 4. A contour stellar image with DA operation.

3. Experiments

3.1 Experimental setup

To verify the feasibility of our proposed methods, we set up physical experiments for recognizing different quadrature amplitude modulation (QAM) signals in VLC systems. Figure 5 illustrates the experimental setup of the OFDM-VLC system. At the transmitter (Tx), the input bits are converted to parallel streams and mapped using QAM. Pilots are then added to the OFDM symbols for subsequent channel estimation. Hermitian symmetry is employed to obtain real-valued OFDM symbols. After the inverse fast Fourier transform (IFFT), a cyclic prefix (CP) is added to combat inter-symbol interference. Finally, the OFDM symbols are loaded into an arbitrary waveform generator (AWG, AWG7000A, Tektronix), and the signal is superposed with a direct current (DC) component by a bias-tee (ZFBT-6GW+, Mini-Circuits). A light-emitting diode (LED, LUW W5AM, OSRAM) converts the electrical signal into an optical signal, which is transmitted over 1.4 m of free space. Two lenses are used for alignment.
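A minimal NumPy sketch of this transmitter chain (FFT size, CP length and the toy 4QAM data are illustrative, not the experimental values of Table 1):

```python
import numpy as np

def dco_ofdm_symbol(qam_syms, n_fft=64, cp_len=16):
    # One real-valued OFDM symbol. qam_syms carries the data for
    # subcarriers 1 .. n_fft//2 - 1; DC and Nyquist bins stay zero.
    X = np.zeros(n_fft, dtype=complex)
    X[1:n_fft // 2] = qam_syms
    X[n_fft // 2 + 1:] = np.conj(qam_syms[::-1])  # Hermitian symmetry
    x = np.fft.ifft(X).real                        # real-valued time signal
    return np.concatenate([x[-cp_len:], x])        # prepend cyclic prefix

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=(31, 2))
syms = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)  # 31 4QAM symbols
tx = dco_ofdm_symbol(syms)
```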

Fig. 5. Experimental setup of OFDM-VLC system.

At the receiver (Rx), an avalanche photodiode (APD, APD210, Menlo Systems) converts the optical signal into an electrical signal, and an oscilloscope (OSC, MSO73304DX, Tektronix) is used to record it. The modulation format of the received signal is recognized after CP removal, FFT and channel estimation, and the identified signal can then be demodulated. The configuration of the experiment is shown in Table 1. We divide the data subcarriers into 11 groups and calculate the average SNR [27–30] of each subcarrier group. The received signal of each subcarrier group is converted to a contour stellar image for subsequent modulation classification.
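A sketch of the per-group SNR estimate, here based on an EVM-to-SNR relation in the spirit of [27] (the array shapes, 22 subcarriers and constant error vector are illustrative assumptions):

```python
import numpy as np

def groupwise_snr_db(rx, ref, n_groups=11):
    # Average SNR (dB) of each subcarrier group, estimated from the
    # error vector: SNR ~ E[|ref|^2] / E[|rx - ref|^2].
    # rx, ref: (n_subcarriers, n_symbols) complex arrays.
    groups = np.array_split(np.arange(rx.shape[0]), n_groups)
    snr = [np.mean(np.abs(ref[g])**2) / np.mean(np.abs(rx[g] - ref[g])**2)
           for g in groups]
    return 10 * np.log10(snr)

ref = np.ones((22, 100), dtype=complex)  # toy reference symbols
rx = ref + 0.1                           # constant error of magnitude 0.1
snrs = groupwise_snr_db(rx, ref)         # -> 20 dB for every group
```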

Table 1. Configuration of the experiment

3.2 Dataset description

We use MATLAB 2021a to generate the training and testing datasets. As shown in Fig. 6, we generate six modulated signals, namely 2QAM, 4QAM, 8QAM, 16QAM, 32QAM and 64QAM, and convert them to contour stellar images at $SNR = 3n$ dB (where $n = 0,1,\ldots ,5$). In the training dataset, each modulation format has 5 contour stellar images at each of the six SNRs ranging from 0 to 15 dB in 3 dB steps, so the training dataset contains 180 images in total. Similarly, in the testing dataset, each modulation format has 30 contour stellar images at each of the six SNRs, adding up to 1080 images.

Fig. 6. Contour stellar images for six modulation format types with different SNRs.

3.3 Training details

To test the efficacy of the proposed methods when the training data are very limited, we investigate the classification accuracy under different training data sizes. AlexNet-AL starts with 30 randomly chosen labeled samples, while the remaining 150 samples are treated as unlabeled. In each subsequent round, AlexNet-AL uses the model trained in the previous round to predict the remaining unlabeled samples and selects 30 of them to be labeled and added to the labeled training data. A sequence of six rounds is conducted. For AlexNet-TL, 30 labeled samples are randomly picked and added to the training data in each round. To achieve better performance, the DA operation is also added to AlexNet-AL and AlexNet-TL for comparison. We set up AlexNet-Raw, i.e., AlexNet without AL or TL, as the baseline. As with AlexNet-TL, for AlexNet-Raw 30 labeled samples are randomly picked and added to the training data in each round. In short, the training data size increases over the 6 rounds: 30, 60, 90, 120, 150 and 180 samples, respectively. When training the models in each round, 80% of the labeled samples are used for training and 20% for validation.

All the methods are implemented using the PyTorch framework and run on an NVIDIA GeForce RTX 3080 GPU. The number of training epochs is 2000. Training uses stochastic gradient descent with a momentum factor of 0.9, a learning rate of 0.001 and a mini-batch size of 64. Ten random trials are run for each method and the mean classification accuracy is reported.

4. Results and discussions

4.1 Comparison of AL and TL

Figure 7 shows the performance curves for AlexNet-AL and AlexNet-TL methods. It is clear that the proposed AlexNet-AL and AlexNet-TL methods significantly improve the classification accuracy when the training data size is small. Specifically, using 60 labeled samples, AlexNet-AL increases the classification accuracy by 6.82% while AlexNet-TL increases the classification accuracy by 14.6% compared to AlexNet-Raw. This means that the data collecting effort from practical VLC systems and the labeling effort from a human annotator can be significantly reduced without compromising the modulation classification performance. When the training data size is already large, AlexNet-AL and AlexNet-TL do not bring a significant improvement.

Fig. 7. Comparison of classification accuracy of proposed AlexNet-AL and AlexNet-TL methods under different training data sizes.

To further evaluate the improvements in classification accuracy of our proposed methods, we present the corresponding confusion matrices in Fig. 8 for 90 training samples. Overall, 2QAM and 8QAM are easily identified. AlexNet-Raw has difficulty distinguishing 16QAM, 32QAM and 64QAM, whereas the proposed AlexNet-AL successfully reduces the confusion between the 16QAM and 64QAM formats. Moreover, all diagonal entries of the confusion matrix increase significantly with the AlexNet-TL method, indicating improved modulation classification accuracy.

Fig. 8. Confusion matrices for different proposed methods for 90 training samples.

When training the models, 30 samples are selected and added to the training data in each round. Figure 9 compares the 30 unlabeled samples selected in the second round by AlexNet-Raw, AlexNet-TL and AlexNet-AL. The principle of our proposed AlexNet-AL method is to select the most uncertain, or hardest, unlabeled samples to be labeled, whereas AlexNet-Raw and AlexNet-TL adopt a random selection strategy. The results in Fig. 9 show that the contour stellar images in the bottom graph look harder to recognize, indicating that AlexNet-AL indeed picks more challenging samples than AlexNet-Raw and AlexNet-TL. Specifically, AlexNet-AL does not pick any 2QAM images because they are easy to recognize according to the confusion matrices in Fig. 8. On the other hand, AlexNet-AL picks 22 images of 16QAM, 32QAM and 64QAM, while AlexNet-Raw and AlexNet-TL pick only 16 such images. AlexNet-AL thus clearly favors samples that are difficult to recognize. As for AlexNet-TL, the above results indicate that TL provides a much better initialization of AlexNet than random initialization. However, when a large amount of training data is already available, the transfer of knowledge learned from an external large and well-labeled dataset may not bring a significant change in the results.

Fig. 9. Comparison of the 30 unlabeled samples selected in the second round by (a) AlexNet-Raw and AlexNet-TL, and (b) AlexNet-AL.

4.2 DA operation

To further improve the classification accuracy, the DA operation is applied to AlexNet-Raw and to our proposed AlexNet-AL and AlexNet-TL methods. Specifically, we rotate the contour stellar images by a random angle between $- 90^\circ$ and $90^\circ$ when training a model with DA. Figure 10 compares the classification accuracies of the AlexNet-AL, AlexNet-TL and AlexNet-Raw methods with and without DA under different training data sizes. Compared with their counterparts without DA, the AlexNet-AL, AlexNet-TL and AlexNet-Raw methods with DA improve the classification accuracy at all training data sizes. In particular, DA-based AlexNet-AL increases the classification accuracy from 77.45% to 88.78% for a training data size of 90, whereas DA-based AlexNet-Raw increases it from 72.3% to 82.5%. Overall, the proposed AlexNet-AL with DA achieves the best performance at all training data sizes. When the training dataset is expanded via the DA operation, AL brings significant improvements for modulation format classification in VLC systems.

Fig. 10. Comparison of classification accuracy of AlexNet-AL, AlexNet-TL and AlexNet-Raw methods with and without DA operation under different training data sizes. Classification performances with DA operation are plotted with solid lines while classification performances without DA operation are plotted with dashed lines.

5. Conclusions

In this paper, we propose TL- and AL-based AMC schemes for practical VLC systems to address the limited training data problem, and experimentally demonstrate their use in a real VLC system. Experimental results show that the proposed AlexNet-AL and AlexNet-TL methods significantly improve the classification accuracy with very small training data sizes, and can therefore remarkably reduce the data collection and labeling effort in practical VLC systems. In addition, the DA operation introduced during model training further increases the classification accuracy, especially in the case of insufficient training samples. In summary, our proposed AMC methods show great potential for real-world scenarios where only a small amount of data is available for training DL models.

Funding

Shenzhen Municipal Science and Technology Innovation Council (WDZC20200820160650001); Tsinghua Shenzhen International Graduate School and Tsinghua–Berkeley Shenzhen Institute under Scientific Research Startup Fund (Project No.: 01010600001); Innovation Group Project of Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (No. 311021011).

Acknowledgments

The authors would like to express sincere appreciation to Shenzhen Municipal Science and Technology Innovation Council (WDZC20200820160650001); Tsinghua Shenzhen International Graduate School and Tsinghua–Berkeley Shenzhen Institute under Scientific Research Startup Fund (Project No.: 01010600001); Innovation Group Project of Southern Marine Science and Engineering Guangdong Laboratory (Zhuhai) (No. 311021011).

Disclosures

The authors declare no conflicts of interest.

Data availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

References

1. P. H. Pathak, X. Feng, P. Hu, and P. Mohapatra, “Visible light communication, networking, and sensing: A survey, potential and challenges,” IEEE Commun. Surv. Tutorials 17(4), 2047–2077 (2015). [CrossRef]  

2. L. E. M. Matheus, A. B. Vieira, L. F. M. Vieira, M. A. M. Vieira, and O. Gnawali, “Visible light communication: concepts, applications and challenges,” IEEE Commun. Surv. Tutorials 21(4), 3204–3237 (2019). [CrossRef]  

3. L. Wang, Z. Wei, C.-J. Chen, L. Wang, H. Y. Fu, L. Zhang, K.-C. Chen, M.-C. Wu, Y. Dong, Z. Hao, and Y. Luo, “13 GHz E-O bandwidth GaN-based micro-LED for multi-gigabit visible light communication,” Photonics Res. 9(5), 792–802 (2021). [CrossRef]  

4. S. Dang, O. Amin, B. Shihada, and M.-S. Alouini, “What should 6G be?” Nat. Electron. 3(1), 20–29 (2020). [CrossRef]  

5. N. Chi, Y. Zhou, Y. Wei, and F. Hu, “Visible light communication in 6G: Advances, challenges, and prospects,” IEEE Veh. Technol. Mag. 15(4), 93–102 (2020). [CrossRef]  

6. L. Wu, Z. Zhang, J. Dang, and H. Liu, “Adaptive modulation schemes for visible light communications,” J. Lightwave Technol. 33(1), 117–125 (2015). [CrossRef]  

7. O. Narmanlioglu, R. C. Kizilirmak, T. Baykas, and M. Uysal, “Link adaptation for MIMO OFDM visible light communication systems,” IEEE Access 5, 26006–26014 (2017). [CrossRef]  

8. J. He, J. He, and J. Shi, “An enhanced adaptive scheme with pairwise coding for OFDM-VLC system,” IEEE Photonics Technol. Lett. 30(13), 1254–1257 (2018). [CrossRef]  

9. M. L. D. Wong and A. K. Nandi, “Efficacies of selected blind modulation type detection methods for adaptive OFDM systems,” in Proceedings of International Conference on Signal Processing and Communication Systems (2007).

10. O. A. Dobre, A. Abdi, Y. Bar-Ness, and W. Su, “Survey of automatic modulation classification techniques: classical approaches and new trends,” IET Commun. 1(2), 137–156 (2007). [CrossRef]  

11. F. Hameed, O. A. Dobre, and D. C. Popescu, “On the likelihood-based approach to modulation classification,” IEEE Trans. Wireless Commun. 8(12), 5884–5892 (2009). [CrossRef]  

12. J. Zheng and Y. Lv, “Likelihood-based automatic modulation classification in OFDM with index modulation,” IEEE Trans. Veh. Technol. 67(9), 8192–8204 (2018). [CrossRef]  

13. A. Hazza, M. Shoaib, S. A. Alshebeili, and A. Fahad, “An overview of feature-based methods for digital modulation classification,” in Proceedings of 2013 1st International Conference on Communications, Signal Processing, and their Applications (IEEE, 2013), pp. 1–6.

14. F. N. Khan, K. Zhong, W. H. Al-Arashi, C. Yu, C. Lu, and A. P. T. Lau, “Modulation format identification in coherent receivers using deep machine learning,” IEEE Photonics Technol. Lett. 28(17), 1886–1889 (2016). [CrossRef]  

15. S. Peng, H. Jiang, H. Wang, H. Alwageed, Y. Zhou, M. M. Sebdani, and Y. D. Yao, “Modulation classification based on signal constellation diagrams and deep learning,” IEEE Trans. Neural Netw. Learning Syst. 30(3), 718–727 (2019). [CrossRef]  

16. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012). [CrossRef]  

17. B. Tang, Y. Tu, Z. Zhang, and Y. Lin, “Digital signal modulation classification with data augmentation using generative adversarial nets in cognitive radio networks,” IEEE Access 6, 15713–15722 (2018). [CrossRef]  

18. W. Liu, X. Li, C. Yang, and M. Luo, “Modulation classification based on deep learning for DMT subcarriers in VLC system,” in Optical Fiber Communications Conference and Exhibition (IEEE, 2020), paper M3I.6.

19. L. Zhang, X. Zhou, J. Du, and P. Tian, “Fast self-learning modulation recognition method for smart underwater optical communication systems,” Opt. Express 28(25), 38223–38240 (2020). [CrossRef]  

20. D. Azzimonti, C. Rottondi, A. Giusti, M. Tornatore, and A. Bianco, “Comparison of domain adaptation and active learning techniques for quality of transmission estimation with small-sized training datasets,” J. Opt. Commun. Netw. 13(1), A56–A66 (2021). [CrossRef]  

21. B. Settles, Active learning literature survey (University of Wisconsin-Madison, 2009).

22. S. J. Pan and Q. Yang, “A survey on transfer learning,” IEEE T. Knowl. Data En. 22(10), 1345–1359 (2010). [CrossRef]  

23. Z. Zhao, Z. Wei, Z. Wang, Y. Zhang, M. Li, F. N. Khan, and H. Fu, “Modulation format recognition based on transfer learning for visible light communication systems,” in 26th Optoelectronics and Communications Conference, OSA Technical Digest (Optica Publishing Group, 2021), paper JS2B.12.

24. Y. Lin, Y. Tu, Z. Dou, L. Chen, and S. Mao, “Contour stella image and deep learning for signal recognition in the physical layer,” IEEE Trans. Cogn. Commun. Netw. 7(1), 34–46 (2021). [CrossRef]  

25. D. Wang and Y. Shang, “A new active labeling method for deep learning,” in Proceedings of International Joint Conference on Neural Networks (IEEE, 2014), pp. 112–119.

26. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 248–255.

27. R. A. Shafik, M. S. Rahman, and A. R. Islam, “On the extended relationships among EVM, BER and SNR as performance metrics,” in Proceedings of International Conference on Electrical and Computer Engineering (IEEE, 2006), pp. 408–411.

28. J. He, Y. Zhou, J. Shi, and Q. Tang, “Modulation classification method based on clustering and gaussian model analysis for VLC system,” IEEE Photonics Technol. Lett. 32(11), 651–654 (2020). [CrossRef]  

29. F. N. Khan, Z. Dong, C. Lu, and A. P. T. Lau, “Optical performance monitoring for fiber-optic communication networks,” in Enabling Technologies for High Spectral-Efficiency Coherent Optical Communication Networks, X. Zhou and C. Xie, eds. (Wiley, 2016).

30. F. N. Khan, A. P. T. Lau, C. Lu, and P. K. A. Wai, “Chromatic dispersion monitoring for multiple modulation formats and data rates using sideband optical filtering and asynchronous amplitude sampling technique,” Opt. Express 19(2), 1007–1015 (2011). [CrossRef]  


