## Abstract

Dynamic bandwidth and wavelength allocation are used to demonstrate high quality of service (QoS) in time wavelength-division multiplexed–passive optical networks (TWDM-PONs). Both bandwidth and wavelength assignment are performed on the basis of transmission containers (T-CONTs) and hence of upstream service-priority traffic flows. Our medium access control (MAC) protocol therefore ensures that alike classes of service are processed consistently across all optical network units (ONUs), in agreement with their QoS figures. To evaluate the MAC protocol performance, a simulator has been implemented in OPNET featuring a 40 km, 40 Gbps TWDM-PON with four stacked wavelengths at 10 Gbps each and 256 ONUs. Simulation results have confirmed the efficiency of allocating bandwidth to each wavelength and a significant increase in network throughput, from 9.04 to 9.74 Gbps, due to adaptive polling. The benefit of T-CONT-centric allocation has also been measured with respect to packet delay and queue occupancy, achieving low packet delay across all T-CONTs. Improved NG-PON2 performance and greater efficiency are therefore obtained in this first demonstration of T-CONTs allocated jointly in wavelength and time.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

## I. Introduction

Time and wavelength-division multiplexed–passive optical networks (TWDM-PONs) have been standardized by FSAN to provide the primary solution for developing next-generation passive optical networks, stage 2 (NG-PON2) [1]. Extending beyond the specifications of NG-PON1 (10GPONs) [2,3], NG-PON2 (ITU-T G.989 series) defines aggregate rates of at least 40 Gbps in both the downstream and upstream directions, $1:64$ split ratio, 40 km differential reach, and 1 Gbps access rate per optical network unit (ONU) [4,5]. A TWDM-PON with 40 Gbps downstream data rate, 10 Gbps upstream data rate, 20 km span, and $1:512$ split ratio has been successfully demonstrated in Ref. [6], deploying gigabit PONs (GPONs) and 10GPONs coexisting in the same network architecture. The GPON standard was first developed to allow an optical access network with high bandwidth and therefore fast transmission rates and also service differentiation, a representative example being the GIANT architecture and scheduler [7,8]. Alternatively, Ref. [9] describes a so-called XL-GPON providing once again 40 Gbps in the downstream direction and 10 Gbps in the upstream direction by increasing the single carrier serial downstream bit rate of a 10GPON. In Ref. [10], a TWDM-PON prototype system was successfully demonstrated, providing the same downstream and upstream data rates by stacking four pairs of wavelengths operating at 10GPON rates.

In the data link layer of such networks, research concentrates on the design of new protocols and algorithms for hybrid time and wavelength allocation. The most significant TWDM-PON protocol proposals currently available in the literature are summarized below, although they all exploit the use of (10G) EPON as a means of delivering combined bandwidth and wavelength allocation. The network described in Ref. [11] configures users into logical PONs on demand. Traffic is scheduled in the time domain, enabling wavelength resource sharing among PON branches without the need for fast wavelength tuning. Logical PONs unable to meet user bandwidth requirements are reconfigured to allow wavelengths to be reallocated and shared with higher efficiency. Also stressing reconfigurability while increasing network flexibility without increasing complexity and cost, Ref. [12] demonstrates wavelength reallocation of ONUs on demand, exhibiting the necessary degree of wavelength and timeslot sharing. Focusing on the algorithms themselves, Ref. [13] enhances the performance of a long-reach wavelength-division multiplexing/time-division multiplexing (WDM/TDM) PON by introducing multi-threaded MAC algorithms to achieve dynamic bandwidth and wavelength assignment. In Ref. [14], the authors claim this can be achieved by applying mixed integer linear programming. To save on power consumption and deployment cost, Ref. [15] proposes a TWDM multi-PON programmed to reschedule wavelength allocation by relying on wavelengths shared by all connected TWDM-PON branches. Alternatively, the authors of Ref. [16] have developed an energy optimization model based on adapting the number of scheduling wavelengths, affecting the network's energy saving and load balancing characteristics. More recently, Ref. [17] proposed a new dynamic bandwidth allocation (DBA) algorithm for TWDM-PONs that reduces the frequency of wavelength tuning, improving the channel utilization and the mean packet delay.
Furthermore, the authors of Ref. [18] developed a wavelength and bandwidth allocation algorithm based on linear prediction (LP) to dynamically assign resources, thereby reducing the mean end-to-end delay. In addition, the algorithm proposed in Ref. [19] exploits the tunability and the sleep/doze capabilities of a 10 Gbps vertical-cavity surface-emitting optical network unit (10G-VCSEL-ONU) to improve energy saving at both the optical line terminal (OLT) and ONUs. Finally, Ref. [20] proposes wavelength relocation assignment to reduce the power consumption and the cost of TWDM-PON transceivers.

While the NG-PON2 standard (ITU-T G.989) specifies that bandwidth allocation in TWDM-PONs should exploit the XGPON protocol stack [21], the literature clearly suggests that up to now it has primarily been the (10G) EPON standard that has been explored as the basis for the development of new protocols for time and wavelength allocation. This is highly significant and is part of the motivation behind this work because the distinction in the medium access control (MAC) between the (10G) EPON and 10GPON (XGPON) results in fundamentally different bandwidth allocation processes.

To the best of our knowledge, this is also the first time joint bandwidth and wavelength assignment of ONUs has been implemented on the basis of all available transmission container (T-CONT) traffic, as also specified in the NG-PON2 standard. This is confirmed by relevant XGPON and TWDM-PON research following the latest trends in 5G networks. In an earlier XGPON study [22,23], utilizing time allocation only, ONU groups were considered, assuming each ONU uses only one traffic priority type to assure bandwidth to a group of base stations, as opposed to each base station individually, thus allowing mobile operators to optimize the base station installation cost. In the first of two more recent TWDM-PON publications, the focus was on software-defined access and software-defined mobile network control for cell allocation and capital expenditure (CAPEX) reduction [24]. In the other publication, the wavelength bandwidth allocation algorithm presented aims to reduce the number of active wavelengths required to transmit fronthaul data and hence effectively accommodates the remote radio heads of C-RANs into a TWDM-PON [25].

The approach here has been driven by the key part T-CONTs play in the assignment of user bandwidth in XGPONs and, by extension, their significance in developing a MAC layer for TWDM-PONs complying with the NG-PON2 standard (ITU-T G.989). T-CONTs provide a mechanism to communicate service classes from clients. We have therefore implemented groups of T-CONT traffic and assigned ONUs with similar T-CONT profiles to the same group. Our objective has been to develop a novel MAC protocol for XGPONs that features new wavelength and bandwidth allocation algorithms, jointly allocating network resources to T-CONT groups to fulfill the stipulated quality-of-service (QoS) requirements and optimize the transfer of services independently of ONUs. Furthermore, the wavelength allocation strategy also ensures that T-CONT groups have equal access to all wavelengths, allowing congestion-free and service-agnostic traffic flow for every T-CONT type.

Starting in Section II, we present how we propose to utilize T-CONTs to reduce class-of-service (CoS) queuing across ONUs. In Section III we describe new T-CONT-based algorithms for assigning users to wavelengths and subsequently how the XG-PON standard is uniquely used to timeshare wavelengths, exhibiting both fixed and adaptive polling. Section IV is dedicated to the implementation of a bespoke self-similar traffic source modeled in OPNET, used in Section V to critically evaluate the network throughput, packet delay, and queue length under all traffic conditions. Finally, Section VI provides a summary of the substantial performance benefits of this approach.

## II. TWDM-PONs Exhibiting Class of Service

In (10G) EPONs, the OLT assigns network bandwidth on an ONU basis [26]. Therefore, the exchange messages are formulated differently from those of XGPON: the frame structure varies, the scheduling functions follow a different design, and the bandwidth allocation algorithms crucially require different implementation methodology.

Bandwidth allocation in XGPONs is based on T-CONTs. T-CONTs are traffic
containers available to ONUs to transport upstream traffic. Each
T-CONT represents traffic of one type of service with specific QoS
characteristics. Furthermore, each T-CONT displays one or more GPON
encapsulation method (GEM) ports, each bearing one kind of traffic. A GEM
port is a virtual port performing the encapsulation of Ethernet frames for
transmitting data between the OLT and ONUs. To transmit data, the OLT
sends bandwidth map (*BWmap*) messages in the downstream
channel to assign turns (or tickets) to each T-CONT of one ONU to send
data in the upstream direction. These tickets are called
*Alloc-IDs*. Section III.B provides a detailed description of how
*Alloc-IDs* have been used to develop our algorithms.
Before bandwidth allocation can be implemented, though, a comprehensive
analysis is required of the distribution of traffic types and their
associated QoS. This is given below.

#### A. T-CONT Types

The XGPON standard defines five CoS types; each, as already mentioned, is transmitted upstream by a T-CONT, enabling QoS. ITU-T Recommendation G.987.3 [21] provides information on T-CONT priorities and corresponding services.

*T-CONT 1*: It is intended for the emulation of leased
line services and is supported by constant bit rate with fixed
periodic grants to offer strict demands for throughput and delay. This
class is the only static traffic not serviced by DBA algorithms.

*T-CONT 2*: This container is intended for a variable
bit rate requiring low delay and low packet loss rate transmission.
The assigned bandwidth for this kind of service, such as HDTV and
video on demand (VoD), is ensured in the service level agreement (SLA)
and assigned based on the bandwidth requirement in each polling
cycle.

*T-CONT 3*: It is based on a reservation method to
provide medium delay and low packet loss rate connection. It is
supported to give improved performance in comparison with T-CONT 4
service and is offered at a guaranteed minimum data rate. Any
additional bandwidth requirement is counted in each polling cycle and
assigned when available.

*T-CONT 4*: This container is intended for best effort
services providing a high delay and high packet loss rate
transmission. This class of services, such as browsing and FTP, is
serviced after the previous service classes are satisfied.

*T-CONT 5*: The last container is not really a different
class but a combination of two or more of the other four classes and
is reserved for the system designer to choose and operate.
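For illustration, the container types above can be summarized as a small lookup table. The field names and values below are our own qualitative shorthand for the properties listed in the text, not part of the standard:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TContProfile:
    """Qualitative QoS profile of a T-CONT type (illustrative shorthand)."""
    priority: int        # 1 = highest scheduling priority
    delay: str           # qualitative delay requirement
    dba_serviced: bool   # False for the static, fixed-grant T-CONT 1

# T-CONT 5 is omitted: it is a designer-chosen combination of types 1-4.
T_CONT_PROFILES = {
    1: TContProfile(priority=1, delay="strict", dba_serviced=False),
    2: TContProfile(priority=2, delay="low",    dba_serviced=True),
    3: TContProfile(priority=3, delay="medium", dba_serviced=True),
    4: TContProfile(priority=4, delay="high",   dba_serviced=True),
}
```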

#### B. Proposal for the Allocation of T-CONTs in TWDM-PONs

As already mentioned, the new TWDM-PON MAC protocol has to ensure that the OLT delivers resource management with QoS based on the individual transmission requirements of T-CONTs [27,28]. A basic principle of the protocol we present in this paper is that it assigns the traffic of all ONUs with similar T-CONT profiles to the same T-CONT group. Highly sensitive traffic flows (T-CONT 1, T-CONT 2) across all ONUs, for example, can therefore be jointly allocated network resources and uniformly comply with the applied QoS restrictions (delay, guaranteed bandwidth) independently of their originating ONU. Wavelength allocation then ensures that T-CONT groups have potentially equal access to all wavelengths, allowing congestion-free, service-agnostic traffic flow for all container types. A number of operating principles apply. First, it is important to note that over a given time interval all T-CONTs of an ONU transmit their traffic on a single wavelength, since the laser in each ONU transmitter cannot operate at more than one wavelength simultaneously. Second, the proposed wavelength allocation algorithm assumes that all wavelengths are available to all ONUs and that they are dynamically allocated to T-CONT groups according to the volume of traffic to be transferred into the network and any changes to the network configuration (e.g., registration of new ONUs).

Wavelength switching has also been considered, to account in our scheduler for the time required to switch among wavelengths in the TWDM-PON. Laser tuning times of 100 μs have been applied, as supported by the relevant literature [29–31]. As will be explained in the following sections, the deployed report/grant round-trip time and laser tuning time amount to approximately 500 μs, which is lower than the adopted polling cycles of 2 and 6.75 ms, eliminating any doubt that wavelength switching could degrade the network bandwidth allocation performance.

The assignment of T-CONT groups to wavelengths and, ultimately, of bandwidth to each traffic group (time slots to group ONUs) is implemented in coordination, dynamically and on demand, since the protocol first sorts the traffic of ONUs with similar T-CONT profiles into the same T-CONT group, representing the transmission of all possible classes of service, each contributing a different load to the overall network load.

It is implemented in coordination and on demand because wavelength allocation ensures that T-CONT groups have potentially equal access to all wavelengths, determined by the availability of cooperative fixed and adaptive polling cycles that optimize the utilization of idle time slots and therefore increase the efficiency with which the overall network capacity is exploited. To this end, we have designed and implemented two new bandwidth allocation policies: a traditional fixed polling cycle scheme and a complementary adaptive polling cycle scheme to efficiently manage the bandwidth allocation of 10 Gbps data streams across all available upstream channels (wavelengths). The assignment of time slots to T-CONTs follows the XGPON standard (ITU-T G.987), as defined in NG-PON2 (ITU-T G.989) for developing a MAC layer for TWDM-PONs, with the caveat of introducing cooperative fixed and adaptive polling cycles to improve wavelength resource utilization by exploiting potentially idle time slots. This can be better appreciated by considering T-CONT 3, whose four service traffic flows strongly impact network performance due to their higher load, compared with T-CONT 1 and its two-traffic load.

Traffic models have also been explored as part of this work to allow us to capture during simulation the most recurring network traffic flows across T-CONTs 1–4. T-CONT 5, as mentioned, is not a different class from the above and is not accounted for. Traffic self-similarity and packet distribution per allocation interval have therefore been given significant consideration. As T-CONT 3 traffic, with its four containers, is characterized by high burstiness and self-similarity, we have developed, for the benefit of service on demand, a self-similar TWDM-PON traffic generator based on the superposition of hierarchical Bernoulli on/off sources. This self-similar traffic generator is explained in subsequent sections.

## III. MAC Protocol and Algorithms

The previous section touched upon the fundamental features of bandwidth and wavelength allocation implemented in this paper. We initially exhibit fair and dynamic assignment of users to wavelengths through the assignment of their T-CONTs (Section III.A). Subsequently, the XG-PON standard [27,28] is used as the reference to time share each of the wavelengths leading to a traffic-centric, overall bandwidth allocation (Sections III.B and III.C).

#### A. Dynamic Wavelength Assignment

Initiating the wavelength allocation process, ONUs are grouped according to their bandwidth requirements, as defined by their associated T-CONTs, to ensure efficient network resource management. If the number of T-CONTs in a network is $t$, the possible number of groups $p$ is ${2}^{t}-1$. The possible group configurations for a TWDM-PON with typically four ($t=4$) traffic queues, i.e., T-CONTs 1, 2, 3, and 4, are shown in Table I.

Group 1, for example, comprises all network ONUs only requiring transmission of T-CONT 1 traffic; Group 7 comprises all ONUs with T-CONT 1 and T-CONT 4 traffic, etc. A total of 15 ($p={2}^{4}-1$) T-CONT groups are therefore defined.
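The $p={2}^{t}-1$ group configurations can be enumerated directly as the non-empty subsets of the T-CONT types; a minimal sketch (the group ordering here is illustrative and need not match Table I):

```python
from itertools import combinations

def tcont_groups(t=4):
    """Enumerate all non-empty subsets of T-CONT types 1..t.

    Each subset is one candidate ONU group (p = 2**t - 1 in total).
    """
    types = range(1, t + 1)
    groups = []
    for size in range(1, t + 1):
        groups.extend(combinations(types, size))
    return groups

groups = tcont_groups(4)
print(len(groups))  # 15 groups for t = 4
```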

Subsequently, the OLT uses Eq. (1) to perform the assignment of traffic to wavelengths as follows:
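Equation (1) is not reproduced here. A plausible form, consistent with the figures quoted below (four ONUs per group on each wavelength, 240 ONUs assigned in the first round), is a floor division of each group's ONU population over the available wavelengths; the symbols $A_{j,l}$, $N_j$, and $W$ are our own notation and this reconstruction is an assumption:

```latex
A_{j,l} = \left\lfloor \frac{N_j}{W} \right\rfloor , \qquad j = 1,\dots,p, \quad l = 1,\dots,W ,
```

where $A_{j,l}$ is the number of Group-$j$ ONUs assigned to wavelength $l$, $N_j$ is the size of Group $j$, and $W$ is the number of wavelengths; with $W=4$ and $N_j$ of roughly 17 ONUs per group, this yields $A_{j,l}=4$, leaving 16 residual ONUs for the second round.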

Based on the initial distribution process described by Eq. (1), the OLT therefore assigns to each wavelength four ONUs from each group (15 groups), that is, 240 ONUs in total. The remaining 16 ONUs (two from Group 1 and one from each of the other 14 groups) are allocated to residual wavelengths as detailed in Table II, using a second round of wavelength assignment. The proposed algorithm specifies that for residual wavelengths priority is given to the transfer of T-CONT 3 and 4 queues and therefore those remaining ONUs with high bandwidth requirement as opposed to lower bandwidth T-CONT 1 and 2 traffic. The OLT on this occasion attempts to display an equal number of individual T-CONTs transmitted per wavelength. As a result, bandwidth requirements are balanced across all upstream wavelengths, utilizing as much as possible all available network resources.

The principle of assigning residual wavelengths is shown in Table II. Residual wavelength allocation is performed in T-CONT group sequence starting from Group 15 and the unassigned ONU from stage 1 with a high bandwidth requirement. The last two table entries, and the last to be assigned a wavelength, include Group 1, one entry for each remaining ONU with low-priority traffic and small required bandwidth. The allocation of the four residual wavelengths in Table II is performed routinely up to the point of the Group 6 ONU, where traffic is assigned to ${\lambda}_{3}$ and not to ${\lambda}_{2}$ being next in line. The justification for the algorithm to result in this action is directly related to the condition discussed before whereby the OLT attempts to display an equal number of individual T-CONTs transmitted per wavelength. Therefore, after the assignment of Group 7 is concluded, ${\lambda}_{1}$ and ${\lambda}_{2}$ display two T-CONT 3 and two T-CONT 4 ONUs and ${\lambda}_{3}$ displays two T-CONT 4 and only one T-CONT 3 ONU. Therefore, the T-CONT 3 ONU of Group 6 would be assigned to ${\lambda}_{3}$. Following the same procedure, Groups 4 and 3 are assigned both to ${\lambda}_{4}$ because ${\lambda}_{4}$ as a result would transmit one T-CONT 3 and one T-CONT 4 compared to ${\lambda}_{1}$, ${\lambda}_{2}$, and ${\lambda}_{3}$ with two T-CONT 3 ONUs and two T-CONT 4 ONUs. The remaining three ONUs from Groups 1 (two ONUs) and 2 are designated ${\lambda}_{2}$, ${\lambda}_{4}$, and ${\lambda}_{3}$, respectively, in order to maintain an equal distribution of T-CONT 1 and 2. An illustration of the proposed wavelength allocation strategy implemented in our algorithm is shown in Fig. 1.
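The balancing rule described above, placing each residual ONU on the wavelength that currently carries the fewest T-CONTs of the types it transmits, can be sketched as follows. This is a simplified reading of Table II with our own tie-breaking; the function name, data layout, and example counts are illustrative, not a replay of the table:

```python
def choose_wavelength(onu_tconts, per_wavelength_counts):
    """Return the index of the wavelength carrying the fewest T-CONTs
    of the given types (lowest index wins ties).

    onu_tconts: T-CONT types the residual ONU transmits.
    per_wavelength_counts: one dict per wavelength, mapping
        T-CONT type -> number of ONUs already carrying it there.
    """
    def load(counts):
        return sum(counts.get(t, 0) for t in onu_tconts)
    return min(range(len(per_wavelength_counts)),
               key=lambda l: load(per_wavelength_counts[l]))

# Illustrative state: the third wavelength carries only one T-CONT 3 so
# far, so a residual ONU transmitting T-CONT 3 lands there.
counts = [{3: 2, 4: 2}, {3: 2, 4: 2}, {3: 1, 4: 2}, {3: 2, 4: 2}]
target = choose_wavelength((3,), counts)   # index 2
```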

To conclude, in the scenario of a TWDM-PON with four wavelengths and four T-CONTs, our algorithm is shown to balance the distribution of traffic among wavelengths, demonstrating the transmission of 34 T-CONTs per wavelength, with 35 T-CONTs for wavelength 1. The exception, ${\lambda}_{1}$, supports an additional T-CONT 1 queue, which hardly influences the performance of the traffic distribution since T-CONT 1 carries relatively low traffic density [20]. Our algorithm therefore manages to prevent a bottleneck caused by saturating a given wavelength.

The selection of the conditions applied by the algorithm accounts for realistic network operation and maximum use of network resources, with respect to user access and to T-CONTs 3 and 4 representing the majority of network traffic. Details of the generated T-CONT traffic are provided in Sections III.A and III.B. The efficiency of the described algorithm lies in its adaptability and dynamicity in responding to varying ONU bandwidth requirements imposed by fluctuations in the volume of T-CONT traffic.

#### B. Dynamic Bandwidth Assignment of T-CONTs

As already mentioned in Section I, each T-CONT of a given ONU represents service traffic of
one bandwidth type with specific QoS characteristics. The allocation
in time of the available bandwidth of each wavelength of a TWDM-PON to
T-CONTs is performed by the designed MAC protocol, taking into account
the QoS requirements of those T-CONTs. For upstream transmission, the
XG-PON standard defines that T-CONTs are identified by their
allocation identification (*Alloc-ID*). With respect to
managing traffic queues for each T-CONT, the OLT integrates their
encapsulation method (XGEM) ports so that each T-CONT plays the
role of a single buffer [27].
T-CONTs are represented, therefore, by an aggregated traffic
descriptor corresponding to a six-tuple, including parameters
${R}_{F}$, ${R}_{A}$, ${R}_{M}$, ${\chi}_{AB}$, $P$, and $\omega $ [21].

The fixed bandwidth ${R}_{F}$ represents the reserved portion of the link capacity that is allocated to a given traffic flow regardless of its traffic demand and overall traffic load conditions. Assured bandwidth ${R}_{A}$ is a portion of the link capacity that is allocated to a given traffic flow as long as the flow has unsatisfied traffic demand regardless of the overall traffic conditions. Maximum bandwidth ${R}_{M}$ defines the upper limit on the total bandwidth that can be allocated to a traffic flow under any traffic conditions. ${\chi}_{AB}$ is the ternary eligibility indicator for additional bandwidth assignment: none; non-assured (NA), where bandwidth is only given if it is available but not guaranteed; and best-effort (BE), where a demand is only met if remaining bandwidth is available. $P$ and $\omega $ are the priority and weight factors for BE bandwidth assignment, respectively.
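The six-tuple can be captured directly as a record; a minimal sketch, with field names of our own choosing mapped onto the standard's symbols:

```python
from dataclasses import dataclass

@dataclass
class TContDescriptor:
    """Aggregated traffic descriptor of a T-CONT (the G.987.3 six-tuple)."""
    r_fixed: float    # R_F: reserved capacity, granted regardless of demand
    r_assured: float  # R_A: granted while unsatisfied demand remains
    r_max: float      # R_M: hard cap on total allocated bandwidth
    chi_ab: str       # eligibility for additional bandwidth: "none"|"NA"|"BE"
    priority: int     # P: priority for best-effort assignment
    weight: float     # omega: weight for best-effort assignment
```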

For developing DBA algorithms, the XG-PON standard supports two methods, namely, the status reporting (SR) and traffic monitoring (TM) methods. SR-based DBA is performed using explicit buffer occupancy reports that are requested by the OLT, while TM DBA operates according to the OLT’s observations of the idle XGEM frame patterns [21]. In this paper we consider the SR DBA method only.

Accordingly, when a dynamic bandwidth report upstream (DBRu) flag is
set in the downstream XGPON transmission convergence (XGTC) frame, the
ONU should send the DBRu report for the given T-CONT in the upstream
XGTC frame. It should be noted that if the DBRu flag is not set, the
ONU does not need to send a DBRu report. Following a T-CONT’s
bandwidth request, the *buffer occupancy*
(*BufOcc*) field in the DBRu is used, containing the
total amount of service data unit (SDU) traffic associated with the
corresponding Alloc-ID. The OLT then uses the generated DBRu reports
to calculate the grant messages for each ONU and subsequently transmits
them via the bandwidth map (BWmap) located in the downstream XGTC
header. The BWmap incorporates, in order of sequence, the bandwidth
allocation structures for all transmitted Alloc-IDs including their
*StartTime* and *GrantSize* field. The
*StartTime* field specifies the location of the first
byte of the upstream XGTC frame, and the *GrantSize*
field the allocated bandwidth at one-word granularity (4 bytes) [27,32].

Since in the XG-PON standard DBRu reports are transmitted together with
upstream data, T-CONTs with a large *StartTime* are
scheduled at the rear of the BWmap, having therefore to wait a long
time before they can update their DBRu report. The exhibited elapsed
time is the reason for the formation of idle bandwidth in a given
transmission window. In addition, since for NG-PON2 (ITU-T G.989) the
network reach is extended to 40 km and beyond, the propagation
delay associated with the exchange of control messages is longer
compared to typical 20 km reach XG-PONs, generating prolonged
idle periods in upstream transmission. Both issues above significantly
decrease the upstream channel utilization rate.

We propose that the OLT first receives all DBRu reports from the ONUs
prior to an ONU starting to dispatch T-CONT data as shown in
Fig. 2. The OLT can then
send the BWmap for the subsequent cycle $n$ in advance, coinciding with the upstream
data transmission of cycle $n-1$. Any idle bandwidth potentially generated
during two successive polling cycles is therefore diminished,
automatically providing a good solution for the prolonged propagation
delays of longer-reach networks without the need for modifications to
the polling schemes or DBA algorithms. For DBRu report transmission,
the *start_time* of each DBRu report is determined
based on the distance between the OLT and an ONU and for the duration
of that ONU’s activation in order to avoid collisions between
the DBRu reports from different ONUs. It should be noted that the
*start_time* parameter defined here is different from the
*StartTime* field of the BWmap described a few lines
above. With respect to scheduling, an ONU transmits its DBRu report
subsequent to a wait response time ${T}_{RT}$, its equalization delay
$EqD$, and its *start_time*,
where $EqD$ is the interval between the actual and
desired arrival time of the burst containing its registration physical
layer operations and maintenance (PLOAM) message.
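The instant at which an ONU launches its DBRu report under this scheme is simple arithmetic; a sketch under the stated assumptions (all names are ours, and `start_time` is the per-ONU offset described above, not the BWmap *StartTime* field):

```python
def dbru_transmit_time(t_rt, eqd, start_time):
    """Time after the poll at which an ONU sends its DBRu report.

    t_rt: response wait time T_RT; eqd: equalization delay EqD;
    start_time: per-ONU offset chosen from its measured distance so
    that reports from different ONUs do not collide.  All in seconds.
    """
    return t_rt + eqd + start_time

# Example: 35 us response time, 50 us EqD, ONU slotted 10 us in.
t = dbru_transmit_time(35e-6, 50e-6, 10e-6)
```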

To support our algorithm's dynamic bandwidth allocation based on the
actual traffic queuing of T-CONTs, the buffer occupancy field
*BufOcc* in the DBRu report of a T-CONT with Alloc_ID
$i$ is provided by

*GrantSize*, respectively, of a T-CONT with Alloc-ID $i$. Table III includes the parameters presented in Eqs. (2)–(19), providing their definition and function in the operation of the presented algorithm.

According to the reference DBA model of the XG-PON standard [13], to prevent a T-CONT with a heavy traffic load from monopolizing the entire available bandwidth, the OLT scheduler in each cycle should cap the total bandwidth that can be allocated to a traffic flow, expressed by ${R}_{M}$. Nevertheless, the XG-PON standard does not specify how ${R}_{M}$ should be defined or what kind of polling scheme should be employed, allowing the development of competitive MAC algorithms. To optimize the use of the XG-PON standard to provide accountable QoS performance in TWDM-PONs and meet the requirements of practical deployments, we have implemented the allocation of bandwidth to T-CONTs using both fixed and adaptive polling cycles with fixed and adaptive ${R}_{M}$ values. In both scenarios, the OLT first assigns a fixed bandwidth ${R}_{F}$ to all T-CONTs and subsequently determines a guaranteed bandwidth ${R}_{G}$ by allocating a further portion of the link capacity to those traffic flows still not satisfied by their initially allocated bandwidth, resulting in what we call an assured bandwidth ${R}_{A}$. This process is repeated until either the defined ${R}_{A}$ value for a T-CONT is reached or its bandwidth demand is satisfied. If at the end of this process there are T-CONTs requiring bandwidth over and above the assured bandwidth (expressed by ${\chi}_{AB}$ in their aggregated traffic descriptor), the OLT has the provision for their request to be granted until it is satisfied or the maximum available network bandwidth is fully assigned. Subsections III.C and III.D provide the details of the developed fixed and adaptive allocation algorithms and how they operate in alignment with the proposed dynamic wavelength assignment algorithm described in Section III.A.
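The three-stage discipline just described, fixed bandwidth first, then assured bandwidth up to ${R}_{A}$, then additional (NA/BE) bandwidth while capacity remains, can be sketched as follows. This is a simplified single-wavelength illustration with our own data layout, not the paper's exact scheduler:

```python
def allocate(tconts, capacity):
    """Grant bandwidth per T-CONT: fixed, then assured, then additional.

    tconts: list of dicts with keys 'r_f', 'r_a', 'r_m', 'chi_ab', 'req'
        (requested bandwidth including the fixed portion).
    Returns a list of grants, one per T-CONT, in priority order.
    """
    grants = [t['r_f'] for t in tconts]          # stage 1: fixed R_F
    capacity -= sum(grants)
    for i, t in enumerate(tconts):               # stage 2: assured up to R_A
        want = min(t['req'], t['r_f'] + t['r_a']) - grants[i]
        give = max(0.0, min(want, capacity))
        grants[i] += give
        capacity -= give
    for i, t in enumerate(tconts):               # stage 3: NA/BE surplus
        if t['chi_ab'] == 'none':
            continue
        want = min(t['req'], t['r_m']) - grants[i]
        give = max(0.0, min(want, capacity))
        grants[i] += give
        capacity -= give
    return grants

# Two T-CONTs sharing a wavelength with 6 units of capacity:
demo = allocate(
    [{'r_f': 1, 'r_a': 2, 'r_m': 5, 'chi_ab': 'BE', 'req': 5},
     {'r_f': 0, 'r_a': 1, 'r_m': 3, 'chi_ab': 'none', 'req': 3}],
    capacity=6.0)
```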

#### C. Polling Scheme With Fixed ${R}_{M}$

In a polling cycle with fixed ${R}_{M}$, the downstream XGTC frame, incorporating the BWmap, is sent to ONUs at periodic intervals. In addition, to provide the necessary synchronization between the OLT and ONUs, the polling cycle is required to be greater than the upstream physical layer (PHY) frame offset ${T}_{\mathrm{eqd}}$ at the OLT, which is the elapsed time between the start of the downstream PHY frame carrying a specific BWmap and the upstream PHY frame implementing that BWmap. In the GPON standard it is also referred to as the zero-distance equalization delay [20]. In our algorithm we calculate the upstream PHY frame offset using
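The equation itself is not reproduced above. A plausible reconstruction, consistent with the 500 μs minimum cycle quoted later for a 40 km reach (roughly 400 μs of fiber round trip plus the 100 μs laser tuning time), is the following; the symbols and the exact form are our assumption, and the standard's zero-distance equalization delay may include further processing terms:

```latex
T_{\text{eqd}} = \frac{2 L n_{g}}{c} + T_{\text{tune}} ,
```

where $L$ is the fiber reach, $n_{g}$ the fiber group index, $c$ the speed of light in vacuum, and $T_{\text{tune}}$ the laser tuning time.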

To first determine a maximum bandwidth value ${R}_{M}$ for all T-CONTs, regardless of their traffic conditions and produced bandwidth requests, we use
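The equation is not reproduced above; given the definitions that follow it, the natural reading is an equal division of the wavelength capacity over its T-CONTs (our reconstruction, offered as an assumption):

```latex
R_{M} = \frac{C_{\text{avail}}^{l}}{K^{l}} ,
```

i.e., every Alloc-ID on wavelength $l$ receives the same cap regardless of its traffic.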

where ${C}_{\text{avail}}^{l}$ and ${K}^{l}$ are the capacity and number of T-CONTs available on the $l$th wavelength, respectively. The OLT scheduler will then allocate bandwidth in the order of T-CONT priority as shown below. ${R}_{i}$ in Table III is defined as the *GrantSize* for Alloc-ID $i$, representing the allocated T-CONT bandwidth. ${R}_{i}^{\text{req}}$ and ${R}_{F}$ are the requested bandwidth for Alloc-ID $i$ and fixed bandwidth, respectively:

In cases when the assigned bandwidth for T-CONTs 3 and 4 in Eq. (7) is less than the maximum bandwidth $({R}_{F}<{R}_{i}^{\text{req}}<{R}_{M})$, the bandwidth assigned together to all T-CONTs in wavelength $l$, expressed in Table III by ${R}_{\text{total}}^{l}$, could potentially be less than the available wavelength capacity, considering also the T-CONT groupings and wavelength assignment mechanism previously described. To maximize the utilization of a wavelength, if that scenario were to occur, the OLT scheduler distributes any remaining bandwidth to T-CONTs 3 and 4 up to the ${R}_{M}$ limit in a manner following the XG-PON standard [17]. Therefore, ${R}_{\text{total}}^{l}$ and ${R}_{\text{rem}}^{l}$ are defined, being the sum of the assigned and the remaining bandwidth, respectively, for all T-CONTs of the $l$th wavelength, given by
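Equations (8) and (9) are not reproduced above; from the verbal definitions just given ("the sum of the assigned and remaining bandwidth, respectively"), their natural form is the following (our reconstruction, offered as an assumption):

```latex
R_{\text{total}}^{l} = \sum_{i \in l} R_{i} , \qquad
R_{\text{rem}}^{l} = C_{\text{avail}}^{l} - R_{\text{total}}^{l} ,
```

where the sum runs over all Alloc-IDs $i$ assigned to wavelength $l$.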

To allocate the calculated amount of ${R}_{\text{rem}}^{l}$ to those T-CONTs with a request for additional NA or BE bandwidth, expressed by ${\chi}_{AB}$, we use Eq. (10).

#### D. Polling Scheme With Adaptive ${R}_{M}$

The adaptive polling scheme also incorporates an adaptive ${R}_{M}$ value. Once the OLT receives all DBRu reports from ONUs, it determines the next polling cycle's length depending on the sum of the requested bandwidth of each T-CONT in each wavelength. Our algorithm therefore performs dynamic bandwidth allocation by responding to changing network traffic conditions. As shown in Fig. 2, when the traffic load is low, the polling cycle between the first and second BWmap is of the minimum length of 500 μs, calculated as already explained using Eq. (3) for a TWDM-PON with 40 km reach. For a congested network (i.e., DBRu reports carry information of high bandwidth requests), Fig. 2 displays the generation of a polling cycle increased to 1.25 ms, adapting to new T-CONT traffic. To implement the transition between changing cycle sizes, we have introduced a DBA processing time (${T}_{\mathrm{DBA}\_\text{proc}}^{l}$) used in the proposed MAC protocol to decide the right timing for the transmission of the new BWmap. This is of interest since a new polling cycle starts when the BWmap departs from the OLT. The timing for the implementation of the new bandwidth map for ONUs is also determined in accordance with ${T}_{\mathrm{DBA}\_\text{proc}}^{l}$, since the configuration of their transmitters is updated when the BWmap arrives at an ONU.
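The adaptive cycle-length decision can be sketched as follows. The 500 μs floor and the 1.25 ms congested-cycle figure are taken from the text, while the scaling rule and all names are our own simplification, not the paper's exact algorithm:

```python
def next_cycle_length(total_request_bits, line_rate_bps,
                      t_min=500e-6, t_max=1.25e-3):
    """Choose the next polling-cycle length from aggregate DBRu requests.

    The cycle grows with the time needed to drain the requested bits at
    the wavelength line rate, bounded below by the minimum cycle (set by
    the 40 km round trip plus laser tuning) and above by a configured
    maximum.
    """
    drain_time = total_request_bits / line_rate_bps
    return min(t_max, max(t_min, drain_time))

# Lightly loaded: requests drain in under 500 us -> minimum cycle.
print(next_cycle_length(1e6, 10e9))    # 0.0005
# Congested: requests would take 2 ms to drain -> capped at 1.25 ms.
print(next_cycle_length(20e6, 10e9))   # 0.00125
```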

To calculate ${T}_{\mathrm{DBA}\_\text{proc}}^{l}$, ${R}_{\text{total}}^{\text{req}}$ and ${R}_{\mathrm{min}\_\text{avail}}^{l}$ are first defined, being respectively the sum of the requested bandwidth of all T-CONTs of wavelength $l$ and its minimum available capacity. Their expressions are given by Eqs. (11) and (12) as follows:

## IV. Traffic Modeling

Most T-CONT 3 and T-CONT 4 traffic flows, such as those generated by HTTP, FTP, or variable bit rate (VBR) video applications, exhibit a self-similar profile [33]. The evaluation of our protocol and the performance merits of the algorithms have therefore been conducted in the presence of self-similar traffic, providing realistic conditions by incorporating the necessary variations in traffic. OPNET self-similar processes have been used in our simulations; the functionality of some of these is presented next for clarity, together with information on the bespoke self-similar source generator we programmed for the purpose of this work. The motivation for the latter is the benefit of having total control over the parameters that characterize the burstiness and load of the generated T-CONT traffic. Self-similar traffic is characterized by long-range dependence, extreme variability (variance), high correlation, and burstiness over a wide range of time intervals, with inter-arrival traffic sharply reaching a peak and then suddenly dropping [27]. The packet distribution exhibited over a given interval is therefore an important factor, as will be specified below, in determining the features of the generated self-similar traffic.

If $X(k),k=1,2,\dots ,n$ is the number of packets in each interval $k$, with $n$ being the length of the discrete time series, the aggregated series ${X}_{m}(j)$ is obtained by averaging $X(k)$ over non-overlapping intervals of length $m=1,2,\dots$. This is given by Eq. (20) as follows:

$${X}_{m}(j)=\frac{1}{m}\sum _{k=(j-1)m+1}^{jm}X(k), \qquad j=1,2,\dots ,\lfloor n/m\rfloor .$$
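The aggregation step of Eq. (20) can be sketched directly; the helper name below is our own, and any partial trailing block is simply trimmed.

```python
def aggregate(x, m):
    """Average the per-interval packet counts X(k) over non-overlapping
    blocks of length m (Eq. (20)), discarding any incomplete final block."""
    n = (len(x) // m) * m
    return [sum(x[j:j + m]) / m for j in range(0, n, m)]
```

For instance, aggregating the series [1, 2, 3, 4, 5, 6] with m = 2 yields the block means [1.5, 3.5, 5.5].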

A series $X(k)$ is second-order self-similar (see [34]) if the variance of the aggregated series decays as

$$\mathrm{Var}[{X}_{m}]=\mathrm{Var}[X]\,{m}^{\beta}, \qquad -1<\beta <0.$$

The Hurst parameter $H$, representing the level of self-similarity, equals $1+\beta /2$. A Hurst parameter of 1 represents highly self-similar traffic, while a Hurst parameter of 0.5 represents short-range-dependent traffic. While a standard OPNET traffic source employs a Hurst parameter of 0.8, in agreement with examples presented in the literature, our generator offers the flexibility of implementing in each case the Hurst value that best fits the intended traffic. A superposition of hierarchical Bernoulli on/off sources is used next for generating packets in each interval. It is assumed that traffic exhibits the same statistical behavior at all time scales, so a superposition of independent stochastic processes, each working on a different time scale, can be used to model the intended behavior. Each source ${S}_{i}(k)$ is characterized by only two parameters, namely, ${p}_{i}$, the probability of an on state, and ${N}_{i}$, the number of packets per interval during an on state. They are determined by fitting the autocorrelation function and the variance-time (VT) behavior described in [34] and given by Eq. (22) as follows:

## V. Performance Evaluation
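A minimal sketch of such a superposition is given below, with hypothetical $({N}_{i},{p}_{i})$ values and a simple dyadic choice of time scales (source $i$ re-draws its state every ${2}^{i}$ intervals); the actual fitted parameter sets, obtained via Eq. (22), are those reported in Table V.

```python
import random

def onoff_traffic(sources, intervals, seed=1):
    """Superpose hierarchical Bernoulli on/off sources.

    sources: list of (N_i, p_i) pairs; source i re-draws its on/off state
    every 2**i intervals (its own time scale) and, while on, emits N_i
    packets per interval.  Sketch of the Section IV model; the (N_i, p_i)
    fitting of Eq. (22) is not reproduced here.
    """
    rng = random.Random(seed)
    states = [False] * len(sources)
    counts = []
    for t in range(intervals):
        for i, (_, p) in enumerate(sources):
            if t % (2 ** i) == 0:  # source i changes state on its own scale
                states[i] = rng.random() < p
        counts.append(sum(n for (n, _), on in zip(sources, states) if on))
    return counts
```

Because the slower sources hold their state over longer runs of intervals, the aggregate count series exhibits correlation across multiple time scales, which is the mechanism behind the burstiness described above.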

The OPNET simulation model implemented for evaluating the protocol performance and the contribution of the designed algorithms represents a TWDM-PON with 40 Gbps upstream capacity, 256 ONUs, and four wavelength channels. The data rate of each wavelength is 10 Gbps. The distance between the OLT and the ONUs is uniformly distributed up to a maximum of 40 km to provide distances compatible with NG-PON2. A summary of the key network simulation parameters is given in Table IV.

As already discussed, four T-CONT types, T-CONT 1, 2, 3, and 4, were generated for simulating network traffic. The buffer size for each T-CONT is limited to 10 Mbytes. T-CONT 1 traffic, including video conferencing or voice services that require a predictable response time and a static amount of bandwidth, only assumes constant bit rates (CBRs) provided by an 8 Mbps data stream [36]. T-CONT 2 is implemented by on/off traffic following a Pareto distribution with a mean of 500 μs. The packet inter-arrival time also follows a Pareto distribution to generate burst traffic with a mean of 200 μs, while the packet size is described by an exponential distribution with a mean of 1000 bytes. As a result, the average data rate for T-CONT 2 is 20 Mbps, calculated by multiplying *mean on period/(mean on period + mean off period)* by *mean packet size/mean inter-arrival time*.
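The quoted 20 Mbps figure can be reproduced directly from these parameters; we assume the off period also has a 500 μs mean, which is consistent with the stated result.

```python
# T-CONT 2 average data rate from the Section V traffic parameters.
# Assumption: the off period mean equals the on period mean (500 us),
# consistent with the 20 Mbps figure quoted in the text.
mean_on_us, mean_off_us = 500.0, 500.0
mean_pkt_bytes, mean_iat_us = 1000.0, 200.0

duty_cycle = mean_on_us / (mean_on_us + mean_off_us)        # 0.5
burst_rate_bps = mean_pkt_bytes * 8 / (mean_iat_us * 1e-6)  # rate while on
avg_rate_bps = duty_cycle * burst_rate_bps                  # long-run average
```

The burst rate while on is 40 Mbps (1000 bytes every 200 μs), and the 0.5 duty cycle halves it to the 20 Mbps average.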

T-CONTs 3 and 4 are represented by self-similar traffic as described in Section IV. To characterize real-life traces with our packet generation fitting method, we have approximated the BCpAug89 trace with a Hurst parameter of 0.83, variance of 6.81, and mean of 3.18 [37]. This trace contains time stamps and packet sizes of one million captured Ethernet packet arrivals seen at the Bellcore Morristown Research and Engineering facility [30]. The aggregation level $m$ of [37] is more than ${10}^{5}$, and in order to fit this value, 20 independent Bernoulli sources had to be used $({2}^{20-1}=5.24288\times {10}^{5})$. In our traffic model we have therefore calculated the variances of the 20 Bernoulli sources and subsequently determined the possible $({N}_{i},{p}_{i})$ value sets using Eq. (22). The results are shown in Table V below.

It is worthwhile at this point to highlight again that the simulated traffic profile, on which the network performance evaluation presented below is based, is fully dynamic and heterogeneous. We provided in Section III.A a description of how T-CONT-based traffic groups have been formed to utilize all possible configurations of ONU traffic containers that could be transmitted in the network, and therefore all classes of service comprising the propagated network load. Following the example in Section III.A, the first traffic group consists of 18 ONUs with T-CONT 1 traffic queues only, whereas the 7th and 15th groups are represented by 17 ONUs each, with T-CONT 1 and T-CONT 2 traffic queues in the first case and all four possible T-CONT queues in the other. Therefore, the different volumes of services associated with each ONU contribute different loads to the overall network load experienced in our protocol through the transmission of each T-CONT group. During network operation we have therefore demonstrated the variations of traffic one would expect in a practical network scenario, which have led us to deploy dynamic bandwidth allocation to efficiently track the network performance. Having also carefully configured our model, as described before, to exhibit practical, bursty traffic for the T-CONTs that require it, we have also demonstrated for each cycle the on-demand assignment of bandwidth to ONUs. This is the first implementation of a new protocol where T-CONTs are allocated in both wavelength and time to investigate QoS performance in TWDM-PONs. The network throughput, end-to-end packet delay, average queue length, and queue size state are therefore examined for both our algorithms under varying ONU and network loads.

Of the TWDM-PON algorithms presented in the recent literature discussed in the introduction, Refs. [17] and [20] present only the global average end-to-end delay and throughput, without considering per-T-CONT performance. The DBA algorithm proposed in Ref. [18] provides the global end-to-end delay and total bandwidth utilization (%), lacking an analysis of the performance of services and of wavelength utilization. Finally, Ref. [19] describes the energy efficiency achieved when applying their novel tuning technique, neglecting any QoS figures.

For the purpose of evaluating our protocol we have implemented two simulation scenarios, comparing and contrasting a polling scheme with fixed ${R}_{M}$ and a polling scheme with adaptive ${R}_{M}$. For the first scenario the polling cycle time is 2 ms, in common with typical TDMA-PONs; within a 2 ms polling cycle the number of transmitted frames is 15 rather than 16 because of the guard time between ONUs and the transmission time required for DBRu reports. The available capacity for each wavelength ${C}_{\text{avail}}^{l}$ is therefore equal to 2.34 Mbytes $(125\ \mathrm{\mu s}\times 15\ \mathrm{frames}\times 10\ \mathrm{Gbps})$, and ${R}_{M}$ equals 17.08 Kbytes and 17.2 Kbytes, respectively, for the first and the remaining three wavelengths as given by Eq. (4), with ${K}^{l}$ equal to 35 and 34 T-CONTs, respectively, as shown in Section III.A. For the adaptive polling scheme, the cycle time varies between 500 μs and 6.75 ms according to the value of ${T}_{\mathrm{DBA}\_\mathrm{proc}}^{l}$, which varies from 64 μs to 6.439 ms as given by Eq. (13). The maximum ${R}_{M}$, measured at the highest traffic load, is therefore 112.26 Kbytes, since the required bandwidth of each T-CONT 3 and T-CONT 4 queue at the highest traffic load is 133.05 Mbps $((10\ \mathrm{Gbps}-(8\ \mathrm{Mbps}\times 34+20\ \mathrm{Mbps}\times 34))/68)$.
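These capacity and ${R}_{M}$ figures can be checked numerically; the short script below reproduces the arithmetic of the paragraph above (variable names are ours).

```python
FRAME_S = 125e-6   # frame duration
RATE_BPS = 10e9    # per-wavelength line rate

# Fixed 2 ms cycle: 15 usable frames per wavelength.
c_avail_bytes = FRAME_S * 15 * RATE_BPS / 8  # available capacity per cycle

# Adaptive scheme at full load: bandwidth left for each of the 68
# T-CONT 3/4 queues after serving 34 T-CONT 1 (8 Mbps) and
# 34 T-CONT 2 (20 Mbps) queues.
leftover_bps = (RATE_BPS - (8e6 * 34 + 20e6 * 34)) / 68

# Maximum R_M over the longest (6.75 ms) adaptive cycle, in bytes.
r_m_bytes = leftover_bps * 6.75e-3 / 8
```

The results match the figures in the text: about 2.34 Mbytes of capacity per fixed cycle, roughly 133.06 Mbps per high-priority queue, and an ${R}_{M}$ of about 112.26 Kbytes.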

The throughput achieved for each of the four wavelengths of the TWDM-PON for both the fixed and adaptive polling schemes is shown in Fig. 3. In the calculation of throughput, we have not considered the GEM headers and the GTC headers of the transmitted frames, as is normally the case for algorithms presented in the literature [17–20]. The only overhead accounted for in all works is due to the guard time and/or the control messages (DBRu). Based on the ITU-T G.989.3 recommendation [38], the guard time and DBRu transmission times for each ONU at 10 Gbps are in the order of 64 bits and a few bits, respectively, and are sufficiently accommodated by the 125 μs transmitted frames. Typically, the guard time accommodates the transmission enable and disable times, and it also includes the margin for the individual ONU transmission drift [38].

The horizontal axis in Fig. 3 represents the upstream normalized network offered load on the feeder fiber. At a value of 1.0, the sum of the generated traffic from all T-CONTs would be 40 Gbps, which is equivalent to the total network capacity (i.e., four wavelengths $\times 10\ \mathrm{Gbps}$). The results show that the maximum throughput achieved per wavelength at full traffic load with adaptive polling ranges between 9.36 and 9.74 Gbps. With fixed polling, the equivalent figures fall between 8.92 and 9.04 Gbps, initially confirming the improved utilization achieved with adaptive polling cycles for the transmission of high CoSs. We can equally conclude that in both polling scenarios wavelengths are used consistently to provide a balanced allocation of bandwidth across all network channels, avoiding any overloads and potential buffering or dropping of packets, as will be further supported by the following results.

Figure 4 follows from the above to present how the polling cycles implemented as part of our algorithms respond to increasing network load and, through the frequency at which the OLT transmits the BWmap, are able to allocate sufficient bandwidth to requesting T-CONTs. For the fixed polling cycle, the OLT sends a BWmap to all T-CONTs periodically, every 16 downstream frames within a constant 2 ms cycle time, displaying no variation with the network offered load. By contrast, by allowing the polling cycle to be adaptive, our algorithm can respond closely to actual network traffic, exhibiting a small cycle time at low load ($<2\ \mathrm{ms}$ up to a load of about 0.7) and increasing sharply to $>2\ \mathrm{ms}$ thereafter, as defined by Eq. (13) in Section III.D. The clear advantage of the algorithm is therefore its demonstrated flexibility in serving high classes of service, represented primarily at high network load by T-CONT 3 and T-CONT 4 queues, with their requested bandwidth delivered without delay. For networks with an offered load less than 0.5, a minimum cycle of four transmission frames (500 μs) has been implemented, increasing to 6.75 ms at the highest normalized traffic load.

Continuing with the evaluation of the QoS network gains following the use of our protocol and algorithms, the overall network throughput against the offered load is drawn in Fig. 5. Adaptive polling is shown, in agreement with Fig. 3, to achieve higher throughput values. The figures obtained from the graph in Fig. 5 suggest that an overall channel utilization improvement of more than 2 Gbps has been achieved, from 36.20 to 38.50 Gbps, with adaptive polling compared to fixed polling. To calculate the theoretical maximum throughput figures for each scenario, we use Eq. (12), noting that the expected results primarily depend on the ratio between the used polling cycle interval and any incurred idle periods. In our simulator we have defined idle periods to be equal to one frame duration to sufficiently represent the guard time between ONUs and the DBRu report transmission times. Therefore, in a fixed 2 ms polling cycle, an idle period of 125 μs accounts for 6.25% of that cycle duration and would produce a drop of 2.5 Gbps in the maximum expected throughput, to 37.50 Gbps, which is less significant than the values measured.

In the scenario where adaptive polling cycles are used to serve increasing bandwidth requests from high-traffic queues, the cycle times produced track these traffic changes, causing the ratio in Eq. (12), for the same idle period, to increase. The maximum possible 6.75 ms cycle time would allow the transmission of 54 frames, and by considering idle periods equivalent to one frame (125 μs), a maximum throughput of 39.25 Gbps (40 Gbps $\times 53/54$) would be expected. Once again, the theoretical figure presents a less significant drop from the maximum network throughput than the simulated value, although the difference between the two is smaller than with fixed polling. We attribute the difference between the measured and theoretical values to packet processing delays in the OLT and ONUs, indicating that the adaptive polling algorithm outperforms other proposals without being more processing intensive. The overall network throughput can be further increased by optimizing the number of individual T-CONTs transmitted per wavelength.
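The theoretical ceilings for both scenarios follow from the same frame-counting argument, sketched below (function name is ours).

```python
def max_throughput_gbps(cycle_ms, idle_frames=1, frame_ms=0.125,
                        capacity_gbps=40.0):
    """Theoretical throughput ceiling when a fixed number of frames per
    polling cycle are idle (guard time plus DBRu reports), per the
    Section V analysis."""
    frames = round(cycle_ms / frame_ms)
    return capacity_gbps * (frames - idle_frames) / frames
```

With a 2 ms cycle (16 frames, one idle) the ceiling is 37.5 Gbps; with the 6.75 ms adaptive maximum (54 frames) it rises to about 39.26 Gbps, matching the figures discussed above.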

Next, the contribution of the implemented wavelength and bandwidth allocation policies to network queue lengths is presented. Figure 6 plots queue length on a logarithmic scale to represent the average number of bytes stored in each queue during network operation for increasing traffic load for both polling cycles. The observed characteristics provide a more accurate picture of the quality of each scheduler in meeting the bandwidth allocation requirements of all traffic queues and eliminating potential delays; in the case of the adaptive scheduler, T-CONT 3 and T-CONT 4 fast bandwidth requests are satisfied even more efficiently. Considering that each queue capacity is 10 Mbytes, our plots confirm that T-CONT 1 and T-CONT 2 traffic queues store negligible data, in the order of 16 Kbytes at the maximum offered load, regardless of the polling cycle used. This occurs because the traffic generated for the T-CONT 1 and T-CONT 2 classes of service is fixed at 8 Mbps and 20 Mbps, respectively, with both being comfortably supported by the algorithm we implemented using Eqs. (6) and (18), respectively, to assign bandwidth with fixed and adaptive polling. Cross-referencing with the performance of the adaptive cycle in Fig. 4 between 0.2 and 0.5 offered load, the short cycle allows bandwidth requests of T-CONT 1 traffic to be promptly served, resulting in a zero queue length.

By contrast, as one would expect, applying the fixed polling cycle algorithm in a network operating at the highest traffic load of 1.0, where bandwidth requests are generated by T-CONT 3 and T-CONT 4 traffic, results in an almost full queue, despite this algorithm being shown to be more efficient than the alternatives. This response is intuitive considering the fixed cycle time and the maximum bandwidth ${R}_{M}$ per cycle the OLT is able to dedicate to high CoSs in the presence of other types of traffic. The adaptive algorithm is designed to precisely address this performance limitation by allowing wavelengths across the network to operate in a fully dynamic manner, with bandwidth allocation cycles responding directly to increasing bandwidth requests, and therefore keeps T-CONT 3 and T-CONT 4 queue lengths below 2.5 Mbytes (25%), even at the highest traffic load.

Finally, to demonstrate the achieved data transfer quality, Fig. 7 exhibits the end-to-end packet delay for all four T-CONTs against the traffic offered load for the two polling cycles. The delay trends for all four T-CONTs using the adaptive polling cycle are very similar to each other, exhibiting in all cases packet delays of less than 2 ms for traffic loads below 0.7 (Fig. 7) and increasing slightly thereafter, following a pattern similar to the cycle time response in Fig. 4. This occurs because at lower traffic loads the delay is reduced as a consequence of the short polling cycle used (less than 1 ms for up to 50% of the network load). As the polling cycles extend in time with increasing traffic load, the packet delay for all T-CONTs naturally increases, albeit, as our algorithm demonstrates, only by a very modest amount. In the case of the constant polling cycle, T-CONT 1 and T-CONT 2 queues receive BWmaps regularly from the OLT and therefore experience constant delays (i.e., 2.96 ms and 3.42 ms, respectively) regardless of the traffic load. However, the packet delay for the T-CONT 3 and T-CONT 4 CoSs increases significantly when the offered load grows beyond 0.7 because of the limited available bandwidth. The justification for this trend is the same as that given for the queue-length analysis above, together with the provision in the algorithm to prioritize T-CONT 3 and T-CONT 4 traffic for the allocation of additional upstream bandwidth as defined by Eq. (10).

Table VI summarizes the performance comparison between the fixed and adaptive algorithms in terms of throughput, polling cycle, queue occupancy, and delay.

## VI. Conclusions

This paper has focused on the development of a new MAC protocol providing efficient control over the QoS experienced by end users while enabling high capacity transfer in TWDM-PONs based on the individual transmission requirements of T-CONTs. A basic principle of the protocol is that it assigns the traffic of all ONUs with similar T-CONT profiles to the same T-CONT group. Therefore, highly sensitive traffic flows (T-CONTs 1 and 2), for example, across all ONUs can be jointly allocated network resources and uniformly comply with the applied QoS restrictions (delay, guaranteed bandwidth), independently of their originating ONU. Wavelength allocation then ensures T-CONT groups have potentially equal access to all wavelengths, allowing the major benefit of congestion-free, service-agnostic traffic flow for all container types.

The assignment of T-CONT groups to wavelengths and of bandwidth to each traffic group is implemented in coordination, dynamically and on demand. For the latter we have designed and implemented two new efficient bandwidth allocation policies: a traditional fixed polling cycle scheme and a complementary adaptive polling cycle scheme to efficiently manage the bandwidth allocation of 10 Gbps data streams across all available upstream channels (wavelengths). The assignment of time slots to T-CONTs follows the XG-PON standard (ITU-T G.987), as adopted in NG-PON2 (ITU-T G.989) for the development of a MAC layer for TWDM-PONs, with the caveat of introducing cooperative fixed and adaptive polling cycles to benefit wavelength resource utilization by optimizing the use of potentially idle time slots, thereby increasing the efficiency with which the overall network capacity is exploited. This is particularly relevant considering that T-CONT 3 and 4 service traffic flows strongly impact network performance due to their higher load compared to T-CONT 1 and 2 traffic.

The simulated traffic profile on which the network performance evaluation is based is fully dynamic and heterogeneous, formed, as described in Section III.A, from T-CONT-based traffic groups covering all possible configurations of ONU traffic containers and therefore all CoSs comprising the propagated network load. The different volumes of services associated with each ONU thus contribute different loads to the overall network load experienced in our protocol through the transmission of each T-CONT group. Moreover, during network operation we have demonstrated the traffic variations one would expect in a practical network scenario, which have led us to deploy dynamic bandwidth allocation to efficiently track the network performance; having also configured our model to exhibit practical, bursty traffic for the T-CONTs that require it, we have demonstrated for each cycle the on-demand assignment of bandwidth to ONUs. This is the first protocol implementation where T-CONTs are allocated in both wavelength and time to improve QoS performance in TWDM-PONs. The network throughput, end-to-end packet delay, average queue length, and queue size state have therefore been examined for both our algorithms under varying ONU and network loads.

OPNET simulations were undertaken based on a 40 km, 40 Gbps TWDM-PON with four stacked wavelengths at 10 Gbps each and 256 ONUs. The performance characteristics show, first, significant throughput for each wavelength, ranging between 8.92–9.04 Gbps and 9.36–9.74 Gbps for fixed and adaptive scheduling, respectively. These figures signify that the traffic load is optimally balanced between the upstream wavelengths, avoiding congestion in any one of them. Further performance evaluation also confirmed that for traffic loads exceeding 70%, adaptive polling significantly improves the end-to-end packet delay for T-CONTs 3 and 4, with a worst-case packet delay of 14 ms measured. With fixed cycles, T-CONTs 1 and 2 can similarly achieve very low end-to-end packet delays of up to 4 ms with reduced processing requirements. T-CONTs 3 and 4 increasingly benefit from the presence of adaptive polling cycles due to their higher bandwidth requirements, exhibiting 11 ms and 14 ms packet delays, respectively. Also, by opting for fixed polling, delays of only 2.96 ms and 3.42 ms were measured respectively for T-CONTs 1 and 2. Hence, this new approach of allocating T-CONTs in both wavelength and time is the first demonstration of the technique, providing more efficient QoS performance for NG-PON2.

## References

**1. **FSAN Group [Online]. Available: http://www.fsan.org.

**2. **“Next Generation PON Evolution” [Online]. Available: https://www.huawei.com.

**3. **“Nokia Next Generation PON” [Online]. Available: https://www.finnet.fi/app/uploads/2016/11/Finnet-Nokia-Speed-and-Convergence-November-2-2016.pdf.

**4. **L. Yi, Z. Li, M. Bi, W. Wei, and W. Hu, “Symmetric 40-Gb/s TWDM-PON with 39-dB power budget,” IEEE Photon. Technol. Lett., vol. 25, no. 7, pp. 644–647, 2013.

**5. **P. Chanclou, A. Cui, F. Geilhardt, H. Nakamura, and D. Nesset, “Network operator requirements for the next generation of optical access networks,” IEEE Netw., vol. 26, no. 2, pp. 8–14, 2012.

**6. **Y. Luo, X. Zhou, F. Effenberger, X. Yan, G. Peng, Y. Qian, and Y. Ma, “Time and wavelength division multiplexed passive optical network (TWDM-PON) for next generation PON stage 2 (NG-PON2),” J. Lightwave Technol., vol. 31, no. 4, pp. 587–593, 2012.

**7. **GIgaPON Access NeTwork Project [Online]. Available: https://cordis.europa.eu/project/rcn/61161/factsheet/it.

**8. **N. Angelopoulos, P. Solina, and J. D. Angelopoulos, “The IST-GIANT project (GIgaPON Access NeTwork),” in 7th European Conf. on Networks and Optical Communications, Darmstadt, Germany, June 2002.

**9. **E. Harstead, D. V. Veen, and P. Vetter, “Technologies for NGPON2: Why I think 40G TDM PON (XLG-PON) is the clear winner,” in OFC/NFOEC, Mar. 2012.

**10. **Y. Ma, Y. Qian, G. Peng, X. Zhou, X. Wang, J. Yu, Y. Luo, X. Yan, and F. Effenberger, “Demonstration of a 40 Gb/s time and wavelength division multiplexed passive optical network prototype system,” in OFC/NFOEC, Mar. 2012.

**11. **L. Zhou, Z. Xu, Q. Huang, X. Cheng, Y.-K. Yeo, and S. Xu, “A passive optical network with shared transceivers for dynamical resource allocation,” IEEE Trans. Commun., vol. 61, no. 4, pp. 1554–1561, Apr. 2013.

**12. **N.-C. Tran, E. Tangdiongga, C. Okonkwo, H.-D. Jung, and T. Koonen, “Flexibility level adjustment in reconfigurable WDM-TDM optical access networks,” J. Lightwave Technol., vol. 30, no. 15, pp. 2542–2550, Aug. 2012.

**13. **A. Buttaboni, M. D. Andrade, and M. Tornatore, “A multi-threaded dynamic bandwidth and wavelength allocation scheme with void filling for long reach WDM/TDM PONs,” J. Lightwave Technol., vol. 31, no. 8, pp. 1149–1157, Apr. 2013.

**14. **X. Hu, X. Chen, Z. Zhang, and J. Bei, “Dynamic wavelength and bandwidth allocation in flexible TWDM optical access network,” IEEE Commun. Lett., vol. 18, no. 12, pp. 2113–2116, 2014.

**15. **H. Yang, W. Sun, J. Li, and W. Hu, “Energy efficient TWDM multi-PON system with wavelength relocation,” J. Opt. Commun. Netw., vol. 6, no. 6, pp. 571–577, 2014.

**16. **R. Wang, H. H. Lee, S. S. Lee, and B. Mukherjee, “Energy saving via dynamic wavelength sharing in TWDM-PON,” IEEE J. Sel. Areas Commun., vol. 32, no. 8, pp. 1566–1574, 2014.

**17. **H. Wang, S. Su, R. Gu, and Y. Ji, “A minimum wavelength tuning scheme for dynamic wavelength assignment in TWDM-PON,” in 14th Int. Conf. on Optical Communications and Networks (ICOCN), 2015.

**18. **H. Wang, Y. Liang, R. Gu, Y. Ji, Y. Ma, C. Zhang, and X. Wang, “LP-DWBA: A DWBA algorithm based on linear prediction in TWDM-PON,” in 14th Int. Conf. on Optical Communications and Networks (ICOCN), 2015.

**19. **M. P. I. Dias, E. Wong, P. V. Dung, and L. Valcarenghi, “Offline energy-efficient dynamic wavelength and bandwidth allocation algorithm for TWDM-PONs,” in IEEE Int. Conf. on Communications (ICC), June 2015.

**20. **H. Yang, W. Sun, J. Li, and W. Hu, “Energy efficient TWDM multi-PON system with wavelength relocation,” J. Opt. Commun. Netw., vol. 6, no. 6, pp. 571–577, 2014.

**21. **“10-gigabit-capable passive optical networks (XG-PON): Transmission convergence (TC) specifications,” ITU-T Recommendation G.987.3, Oct. 2010.

**22. **P. Alvarez, N. Marchetti, D. Payne, and M. Ruffini, “Backhauling mobile systems with XG-PON using grouped assured bandwidth,” in 19th European Conf. on Networks and Optical Communications (NOC), Milano, Italy, 2014.

**23. **P. Alvarez, N. Marchetti, and M. Ruffini, “Evaluating dynamic bandwidth allocation of virtualized passive optical networks over mobile traffic traces,” J. Opt. Commun. Netw., vol. 8, no. 3, pp. 129–136, 2016.

**24. **A. Marotta, K. Kondepu, D. Cassioli, C. Antonelli, L. M. Correia, and L. Valcarenghi, “Software defined 5G converged access as a viable techno-economic solution,” in Optical Fiber Communications Conf. and Exposition (OFC), San Diego, California, 2018.

**25. **Y. Nakayama, H. Uzawa, D. Hisano, H. Ujikawa, H. Nakamura, J. Terada, and A. Otaka, “Efficient DWBA algorithm for TWDM-PON with mobile fronthaul in 5G networks,” in IEEE Global Communications Conf. (GLOBECOM), Singapore, 2017.

**26. **H. Ranaweera, E. Wong, C. Lim, and A. Nirmalatha, “Next generation optical-wireless converged network architectures,” IEEE Netw., vol. 26, no. 2, pp. 22–27, 2012.

**27. **“10-gigabit-capable passive optical networks (XG-PON): General requirements,” ITU-T Recommendation G.987.1, Jan. 2010.

**28. **F. J. Effenberger, “The XG-PON system: Cost effective 10 Gb/s access,” J. Lightwave Technol., vol. 29, no. 4, pp. 403–409, 2011.

**29. **B. Dixit, D. Lannoo, D. Colle, M. Pickavet, and P. Demeester, “Dynamic bandwidth allocation with optimal wavelength switching in TWDM-PONs,” in 17th Int. Conf. on Transparent Optical Networks, July 2015.

**30. **H. Wang, S. Su, R. Gu, and Y. Ji, “A minimum wavelength tuning scheme for dynamic wavelength assignment in TWDM-PON,” in 14th Int. Conf. on Optical Communications and Networks, July 2015.

**31. **A. Buttaboni, M. De Andrade, and M. Tornatore, “Dynamic bandwidth and wavelength allocation with coexistence of transmission technologies in TWDM PONs,” in 6th Int. Symp. on Telecommunications Network Strategy and Planning, Sept. 2014.

**32. **“Gigabit-capable passive optical networks (G-PON): Transmission convergence layer specification,” ITU-T Recommendation G.984.3, Mar. 2008.

**33. **J. Zhang, N. Ansari, Y. Luo, F. Effenberger, and F. Ye, “Next-generation PONs: A performance investigation of candidate architectures for next-generation access stage 1,” IEEE Commun. Mag., vol. 47, no. 8, pp. 49–57, 2009.

**34. **K. Park and W. Willinger, *Self-Similar Network Traffic and Performance Evaluation*, Wiley, 2000.

**35. **J. Potemans, B. Van den Broeck, J. Theunis, P. Leys, E. Van Lil, and A. Van de Capelle, “A tunable discrete traffic generator based on a hierarchical scheme of Bernoulli sources,” in Int. Conf. on Communications (ICC), Anchorage, Alaska, May 2003.

**36. **Y.-S. Ho and H. J. Kim, “Advances in multimedia information processing–PCM,” in 6th Pacific-Rim Conf. on Multimedia (Part II), Jeju Island, South Korea, Nov. 2005.

**37. **Bellcore BCpAug89 trace, available in the Internet Traffic Archive, ftp://ita.ee.lbl.gov/html/traces.html.

**38. **“40-gigabit-capable passive optical networks (NG-PON2): Transmission convergence layer specification,” ITU-T Recommendation G.989.3, 2015.