Optica Publishing Group

Digital twin-enabled self-evolved optical transceiver using deep reinforcement learning


Abstract

Owing to their high flexibility, programmable optical transceivers (POTs) are regarded as key optical components in optical fiber communications, since their diverse degrees of freedom can be controlled according to real-time network states. However, the adaptivity of classic POT modeling and control is limited by the quality of a prior-knowledge-dependent transmission estimation model or by an incomprehensive training dataset, which makes it difficult for POT modeling and control to evolve with time-varying network states. Here, a powerful dynamic modeling technique called a digital twin (DT), enabled by deep reinforcement learning (DRL), is, to the best of our knowledge, proposed for the first time for adaptive POT modeling and control. Experimental and simulation results show that the proposed POT achieves both the lowest spectrum consumption and the minimum latency compared with classic POTs based on neural networks and on maximum-capability provisioning. We believe the proposed DT will open a new avenue for adaptive optical component modeling and control in dynamic optical networks.
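To make the idea concrete, the loop the abstract describes can be caricatured as a reinforcement-learning agent that repeatedly queries a simulated twin of the transceiver and link, receives a reward, and updates its policy for choosing transceiver parameters. The toy sketch below is an illustrative assumption, not the paper's method: it uses a tabular, bandit-style Q update, a single degree of freedom (a modulation-order index), and an invented reward that favors matching capacity to a discretized traffic demand.

```python
import random

# Hypothetical toy sketch: a tabular agent tunes one transceiver degree of
# freedom (modulation-order index) against a simulated "digital twin".
# All names and the reward model here are illustrative assumptions.

MOD_ORDERS = [2, 4, 16, 64]   # candidate modulation formats (actions)
TRAFFIC_LEVELS = [0, 1, 2]    # discretized network load reported by the twin

def twin_reward(traffic, action):
    """Simulated twin: best reward (0) when provisioned capacity matches demand."""
    demand = traffic + 1      # toy mapping: load level -> required capacity units
    capacity = action + 1     # toy mapping: action index -> provided capacity units
    return -abs(capacity - demand)

def train(episodes=2000, alpha=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(t, a): 0.0 for t in TRAFFIC_LEVELS for a in range(len(MOD_ORDERS))}
    for _ in range(episodes):
        t = rng.choice(TRAFFIC_LEVELS)            # twin reports a load state
        if rng.random() < eps:                    # epsilon-greedy exploration
            a = rng.randrange(len(MOD_ORDERS))
        else:                                     # greedy exploitation
            a = max(range(len(MOD_ORDERS)), key=lambda x: q[(t, x)])
        r = twin_reward(t, a)                     # feedback from the twin
        q[(t, a)] += alpha * (r - q[(t, a)])      # bandit-style value update
    return q

q = train()
policy = {t: max(range(len(MOD_ORDERS)), key=lambda a: q[(t, a)])
          for t in TRAFFIC_LEVELS}
print(policy)  # learned mapping: load level -> chosen modulation index
```

In this toy setting the agent learns to pick the action whose capacity matches the demand for each load level. The paper's actual system replaces the table with a deep network and the invented reward with real spectrum-consumption and latency feedback.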

© 2020 Optical Society of America

More Like This
Routing in optical transport networks with deep reinforcement learning

José Suárez-Varela, Albert Mestres, Junlin Yu, Li Kuang, Haoyu Feng, Albert Cabellos-Aparicio, and Pere Barlet-Ros
J. Opt. Commun. Netw. 11(11) 547-558 (2019)

Reconfiguring multicast sessions in elastic optical networks adaptively with graph-aware deep reinforcement learning

Xiaojian Tian, Baojia Li, Rentao Gu, and Zuqing Zhu
J. Opt. Commun. Netw. 13(11) 253-265 (2021)

Experimental evaluation of a latency-aware routing and spectrum assignment mechanism based on deep reinforcement learning

C. Hernández-Chulde, R. Casellas, R. Martínez, R. Vilalta, and R. Muñoz
J. Opt. Commun. Netw. 15(11) 925-937 (2023)


Figures (6)


Equations (2)

