Abstract
In recent years, incoherent approaches to the generation, transport, and detection of millimeter-wave Radio-over-Fiber signals have attracted considerable attention owing to their technological simplicity and cost-effectiveness, which, however, come at the expense of additional phase-induced noise at the receiver output. Deep learning, a subset of machine learning, has recently proven effective in improving the performance of communication blocks, particularly in signal compression, signal detection, and end-to-end communications. In this article, we propose and demonstrate a new receiver architecture that incorporates deep learning at the receiver. The proposed receiver is demonstrated on an unlocked-heterodyning Radio-over-Fiber link. Results show that the deep-learning-based receiver exhibits greater tolerance to phase-induced noise, improving the bit error rate from $10^{-1}$ to $10^{-5}$. In addition, the deep-learning-based receiver outperforms the conventional self-homodyning approach, in terms of bit error rate, when the frequency spacing between the reference tone and the main data signal is small.
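The abstract does not detail the receiver architecture, but the underlying idea of replacing a fixed decision rule with a learned detector can be illustrated with a toy sketch. The model below is an assumption for illustration only, not the paper's receiver: QPSK symbols corrupted by a hypothetical static phase offset (a crude stand-in for phase-induced impairment in an unlocked heterodyne link) plus AWGN, detected first by a conventional quadrant rule and then by a small neural network trained on received samples.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_batch(n, phase_offset=0.6, noise_std=0.15):
    """QPSK symbols with a common (hypothetical) phase offset plus AWGN."""
    labels = rng.integers(0, 4, n)
    tx = np.exp(1j * (np.pi / 4 + labels * np.pi / 2))   # QPSK constellation
    awgn = noise_std * (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    rx = tx * np.exp(1j * phase_offset) + awgn
    return np.stack([rx.real, rx.imag], axis=1), labels, rx

X_tr, y_tr, _ = make_batch(8000)
X_te, y_te, rx_te = make_batch(2000)

def quadrant_detect(rx):
    """Conventional decision rule: assumes an unrotated constellation."""
    return ((np.angle(rx) % (2 * np.pi)) // (np.pi / 2)).astype(int)

# Tiny MLP detector: 2 -> 16 (tanh) -> 4 (softmax), full-batch gradient descent.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal((16, 4)); b2 = np.zeros(4)
onehot = np.eye(4)[y_tr]
lr = 0.5
for _ in range(1000):
    H = np.tanh(X_tr @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    G = (P - onehot) / len(X_tr)          # softmax cross-entropy gradient
    GH = (G @ W2.T) * (1.0 - H ** 2)      # backprop through tanh
    W2 -= lr * (H.T @ G); b2 -= lr * G.sum(axis=0)
    W1 -= lr * (X_tr.T @ GH); b1 -= lr * GH.sum(axis=0)

# Compare detectors on held-out received samples.
H_te = np.tanh(X_te @ W1 + b1)
mlp_pred = (H_te @ W2 + b2).argmax(axis=1)
mlp_acc = (mlp_pred == y_te).mean()
quad_acc = (quadrant_detect(rx_te) == y_te).mean()
print(f"quadrant detector accuracy: {quad_acc:.3f}")
print(f"learned detector accuracy:  {mlp_acc:.3f}")
```

Because the network learns its decision boundaries from the impaired samples themselves, it absorbs the phase rotation that the fixed quadrant rule cannot, which mirrors the tolerance claim in the abstract; a practical system would of course face time-varying phase noise and operate on sample sequences rather than a static offset.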