Abstract

In this work, deep learning is applied to monitoring coherent channel performance from eye diagram measurements. Experiments show that the proposed technique can determine the modulation format, optical signal-to-noise ratio (OSNR), roll-off factor (ROF), and timing skew of a quadrature amplitude modulation (QAM) transmitter with high accuracy. A trained vanilla convolutional neural network (CNN) or MobileNet can jointly monitor these four parameters with >98% prediction accuracy for 32 GBd coherent channels using quadrature phase shift keying (QPSK), 8-QAM, or 16-QAM formats, with OSNR from 15 to 40 dB, IQ skew from −15 to 15 ps, and ROF from 0.05 to 1. The proposed deep learning approach outperforms several traditional machine learning methods, such as decision trees, the k-nearest neighbor (KNN) algorithm, and a histogram of oriented gradients (HOG) based support vector machine (SVM). Unlike other optical performance monitoring approaches, eye diagram measurement combined with deep learning could enable joint monitoring of multiple system performance parameters with reduced hardware implementation complexity. Compared with the vanilla CNN, MobileNet uses a simplified architecture that lowers the computational requirements while maintaining high classification accuracy.
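To make the input to such a classifier concrete, the following is a minimal sketch of how an eye diagram image can be synthesized for one tributary of a pulse-shaped signal. It is not the authors' measurement setup: the symbol rate, filter span, trace length, and 64×64 image size are illustrative assumptions, and the raised-cosine pulse models the roll-off factor (ROF) that the network is trained to recover.

```python
import numpy as np

def raised_cosine(t, T, beta):
    """Raised-cosine impulse response h(t); beta is the roll-off factor (ROF)."""
    denom = 1.0 - (2.0 * beta * t / T) ** 2
    # At |t| = T/(2*beta) the closed form is 0/0; substitute the analytic limit.
    limit = (np.pi / 4.0) * np.sinc(1.0 / (2.0 * beta))
    safe = np.where(np.isclose(denom, 0.0), 1.0, denom)
    h = np.sinc(t / T) * np.cos(np.pi * beta * t / T) / safe
    return np.where(np.isclose(denom, 0.0), limit, h)

def eye_diagram_image(beta=0.35, n_sym=400, sps=16, size=64, seed=0):
    """Render an eye diagram as a size x size intensity image (CNN input)."""
    rng = np.random.default_rng(seed)
    symbols = rng.choice([-1.0, 1.0], n_sym)       # one binary tributary
    x = np.zeros(n_sym * sps)
    x[::sps] = symbols                             # impulse train at sps samples/symbol
    t = np.arange(-8 * sps, 8 * sps + 1) / sps     # filter spans 16 symbol periods
    y = np.convolve(x, raised_cosine(t, 1.0, beta), mode="same")
    # Fold the waveform into overlapping traces two symbol periods long.
    span = 2 * sps
    n_traces = len(y) // span
    traces = y[: n_traces * span].reshape(n_traces, span)
    tt = np.tile(np.arange(span), n_traces)
    img, _, _ = np.histogram2d(tt, traces.ravel(), bins=size,
                               range=[[0, span], [-2, 2]])
    return img / img.max()                         # normalized eye image

img = eye_diagram_image()
```

Sweeping `beta`, adding noise for a target OSNR, and delaying one quadrature to emulate IQ skew would then yield labeled images for training the classifier described above.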

© 2019 IEEE
