Most commercially available optical transceivers operate at a constant line rate (e.g., 100 Gb/s), regardless of the link length. This scenario is likely to change soon. In forthcoming distance-adaptive transceivers, transmission parameters such as modulation format and forward error correction (FEC) overhead are expected to be optimized to extract the highest data rate the link allows. For instance, the same transceiver could operate at 100 Gb/s over a 5000-km link and at 180 Gb/s over 2000 km. However, such flexibility requires additional degrees of freedom at the client interface. In the example given, nothing is gained by operating the line at 180 Gb/s if the client interface works in multiples of 100 Gb/s, as in the 100GbE standard: only 100 Gb/s of the line rate can actually be filled. This configuration, in the authors' own words, would leave a "stranded capacity" of 80 Gb/s in the transceiver.
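The arithmetic behind stranded capacity is simple modular leftover: whatever part of the line rate does not fit an integer number of client interfaces is wasted. A minimal sketch (the function name and values are illustrative, not from the article):

```python
def stranded_capacity(line_rate_gbps, client_granularity_gbps):
    """Line-rate capacity left unused when clients come only in
    fixed-size multiples (e.g., 100GbE or 25GbE interfaces)."""
    usable = (line_rate_gbps // client_granularity_gbps) * client_granularity_gbps
    return line_rate_gbps - usable

# 180 Gb/s line rate with 100GbE clients: 80 Gb/s stranded
print(stranded_capacity(180, 100))  # -> 80
# the same line rate with 25GbE clients: only 5 Gb/s stranded
print(stranded_capacity(180, 25))   # -> 5
```

The second call illustrates why finer-grained 25GbE clients recover most of the otherwise stranded capacity.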
In this JOCN article, Ives and co-authors address the problem of reducing the stranded capacity of distance-adaptive transceivers by using 25GbE client interfaces. It is noteworthy that the first 25GbE products are now reaching the market. Two scenarios are investigated: (i) native 25GbE interfaces and (ii) 100GbE interfaces inverse multiplexed (split) into four 25GbE lanes. By inverse multiplexing the 100GbE signal, the 25GbE lanes can exploit the stranded capacity of multiple distance-adaptive transceivers. The authors show that this option utilizes most of the stranded capacity, with a relatively small penalty compared to native 25GbE interfaces. As expected, the solution comes at the expense of increased system complexity, in particular the electrical switch required for inverse multiplexing. The paper concludes by suggesting that, to be economically viable, the distance-adaptive solution must add no more than ~25% to the price of the fixed-reach option.
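The inverse-multiplexing idea can be sketched as a small assignment problem: the four 25GbE lanes of one 100GbE client are placed wherever stranded capacity remains across several transceivers. The greedy policy and the capacity figures below are illustrative assumptions, not the allocation algorithm used in the article:

```python
def pack_lanes(stranded, n_lanes, lane_rate=25):
    """Greedily place fixed-rate client lanes into the per-transceiver
    stranded capacities; returns lanes assigned to each transceiver."""
    assignment = [0] * len(stranded)
    remaining = list(stranded)
    for _ in range(n_lanes):
        # pick the transceiver with the most leftover room
        best = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[best] < lane_rate:
            break  # no transceiver can host another full lane
        assignment[best] += 1
        remaining[best] -= lane_rate
    return assignment

# one 100GbE client split into four 25GbE lanes, spread over three
# transceivers with 80, 30, and 55 Gb/s of stranded capacity
print(pack_lanes([80, 30, 55], n_lanes=4))  # -> [3, 0, 1]
```

Because each lane is only 25 Gb/s, all four fit into capacity that 100GbE clients would have left entirely stranded.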