
Performance evaluation of data center service localization based on virtual resource migration in software defined elastic optical network

Open Access

Abstract

Data center interconnection with elastic optical networks is a promising scenario for meeting the high-burstiness and high-bandwidth requirements of data center services. In our previous work, we implemented cross stratum optimization of optical network and application stratum resources to accommodate data center services. Building on that work, this study extends data center resources to the user side to enhance end-to-end quality of service. We propose a novel data center service localization (DCSL) architecture based on virtual resource migration in a software defined elastic data center optical network, and introduce a migration evaluation scheme (MES) for DCSL based on the proposed architecture. DCSL enhances the responsiveness to dynamic end-to-end data center demands and effectively reduces the blocking probability by globally optimizing optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN testbed. The performance of the MES scheme under a heavy traffic load scenario is also quantitatively evaluated on the DCSL architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with another provisioning scheme.

© 2015 Optical Society of America

1. Introduction

Due to the rapid evolution of cloud computing and high-bitrate data-center-supported services, such as remote storage, video on demand and online games, data center applications have attracted much attention from service providers and network operators. A large number of service providers and enterprises are hosting their computing resources and storage contents in data centers to pursue lower delay and higher availability and efficiency at a lower cost. Since data center services are typically diverse in terms of required bandwidths and usage patterns, the network traffic shows high-burstiness and high-bandwidth characteristics, which poses a significant challenge to data center networking: interconnection must become more efficient, with reduced latency and high bandwidth [1]. In fact, to lower energy consumption and construction cost [2], many data centers are built in regions with abundant energy resources, geographically far from populated areas. As a result, data center services cannot guarantee a high end-to-end quality of service (QoS) over such long transmission distances.

To support user QoS flexibly, service providers and network operators have demonstrated several technologies for local service connection from various perspectives. For instance, some access network technologies (e.g., PON and C-RAN) can provide cost-effective networks [3, 4] to optimize the service connection and enhance user QoS. From the perspective of the cloud data center, OpenStack comprises an expanding collection of independent service modules covering the management and dynamic orchestration of virtualized compute, storage, and networking resources hosted on hardware servers within data centers. OpenStack Neutron is conceived to automate virtualized layer 2 and layer 3 connectivity services between virtual machines and physical servers within a data center [5]. However, the technologies mentioned above optimize resources only partially on the local side, such as within the access network or the data center network. Such solutions can hardly achieve a fundamental end-to-end QoS guarantee because of this incomplete optimization.

On the other hand, service provisioning from data center to user is a promising scenario in data center optical interconnection [6] due to its end-to-end characteristic. In particular, intra-data center [7] and inter-data center [8] optical interconnection have already been well researched as the typical scenarios. Several approaches have been proposed for dynamically moving resources closer to users, especially micro data center technology [9, 10]. However, the micro data center approach deploys IT resources locally while considering cloud resources and IP network resources separately, which makes global resource optimization difficult; moreover, transport network resources are not considered in that approach. To the best of our knowledge, pulling data center resources to the user edge with joint optimization across the application and optical network stratums, in order to realize service localization, has not yet been addressed. Recently, as a centralized software control architecture, software defined networking (SDN) enabled by the OpenFlow protocol [11–13] can provide maximum flexibility and unified control over various resources for the joint optimization of functions and services with a global view [14–16]. It is therefore timely to apply SDN techniques to localize services in elastic data center optical networks [6].

The enhanced SDN (eSDN) over elastic optical networks for data center service migration was proposed to meet QoS requirements in our previous work [17]. On this basis, in this paper we propose a novel data center service localization (DCSL) architecture based on virtual resource migration (VRM) in a software defined elastic data center optical network. Additionally, a migration evaluation scheme (MES) for DCSL is introduced based on the proposed architecture. DCSL enhances the responsiveness to dynamic end-to-end data center demands and effectively reduces the blocking probability by globally optimizing optical network and application resources. The overall feasibility and efficiency of the proposed architecture are experimentally verified on the control plane of our OpenFlow-based enhanced SDN testbed [17]. The performance of the MES scheme under a heavy traffic load scenario is also quantitatively evaluated on the DCSL architecture in terms of path blocking probability, provisioning latency and resource utilization, and compared with another provisioning scheme. With respect to the previous OFC version [6], this journal version makes the following changes. We survey several technologies for local service connection from various perspectives, e.g., access network technologies and micro data centers. The DCSL architecture is described in more detail, including the various kinds of traffic routes for DCSL and the DCSL procedure. We improve the migration evaluation scheme by introducing a time-varying dynamic migration evaluation factor, and a vector space is introduced into the scheme to calculate the application utilization weight more reasonably. Finally, we add quantitative analysis and simulation settings to the results evaluation, covering low and moderate loads as well.

The rest of this paper is organized as follows. Section 2 introduces the DCSL architecture. The migration evaluation scheme under this network architecture is proposed in Section 3. We then describe the testbed and present the numerical results and analysis in Section 4. Section 5 concludes the paper by summarizing our contributions and discussing future work in this area.

2. DCSL architecture for software defined elastic data center optical network

The data center service localization (DCSL) architecture can be implemented based on software defined data center interconnection with an elastic optical network, and is designed to gather and migrate multi-stratum resources (i.e., optical network, computing and storage resources) in an open, software-controlled manner. In this section, the core ideas and structure of the novel architecture are briefly outlined. The functional building blocks of the controllers and the coupling relationships between them in the control plane are then presented in detail.

2.1 DCSL architecture for elastic data center optical network

The DCSL architecture based on virtual resource migration in a software defined elastic data center optical network is illustrated in Fig. 1. The distributed data centers are interconnected geographically with an elastic optical network, which mainly consists of two stratums: the elastic network resources stratum (e.g., spectral sub-carriers) and the application resources stratum (e.g., CPU and storage), as illustrated in Fig. 1. The resources of each stratum are software defined with the OpenFlow protocol and controlled in a unified manner by a network controller (NC) and an application controller (AC), respectively. To control the heterogeneous networks with the extended OpenFlow protocol (OFP), OpenFlow-enabled bandwidth-variable optical switch nodes with OFP agent software are required; these are referred to as OF-BVOS and were proposed and demonstrated in [17].

In standardization bodies such as the Open Networking Foundation (ONF) [18] and the Internet Engineering Task Force (IETF) [19], two types of multi-domain or multi-layer control architecture are under discussion for connecting multiple controllers: hierarchical control and peering control. In the former, an orchestrator coordinates multiple controllers in a hierarchical manner. Compared to the peering architecture, the hierarchical control architecture is more scalable thanks to resource abstraction techniques. If multiple types of networks (e.g., IP networks and optical networks) are deployed, the hierarchical architecture can be conveniently extended with the corresponding network controllers owing to the unified resource abstraction, and it may also simplify the information exchange between different control elements. Therefore, this paper adopts the hierarchical control architecture in the experiment, where the role of the AC is to orchestrate application and optical network resources for the services. For simplicity, we consider only one AC and one NC in this paper; multiple NCs will be addressed in future work.

Note that there are three kinds of traffic route in Fig. 1: original traffic, DCSL traffic and VRM traffic. The “original traffic” is the route from the user to the original data center server without data center service localization. The “DCSL traffic” is the route from the user to the new data center server after local migration of the data center service. The “VRM traffic” is the route of the virtual resource migration from the original data center server to the new server selected through the migration evaluation scheme. The DCSL procedure consists of four parts. When a service request arrives, the AC determines which data center server is the optimal destination to provide the service and whether that server is in the local area of the user. If the data center server is local, it provides the service immediately. If the data center is far from the user, resource migration is triggered for DCSL. The DCSL then performs the migration evaluation scheme to find the optimal local node as the migration destination. After the migration evaluation, the optimal local server is chosen and the resources are migrated from the original data center to the new server. Finally, the service is accommodated on the new local server through DCSL.

The motivations for the DCSL architecture based on VRM in elastic optical networks are twofold. First, the DCSL architecture emphasizes cooperation between the AC and NC to select the optimal migration destination node with cross stratum optimization (CSO) through flexible virtual data center resource migration. Second, through resource migration, DCSL pulls data center resources closer to the user to implement data center service localization, which enhances end-to-end QoS (e.g., latency and bandwidth) and effectively improves resource utilization.
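To summarize the four-step procedure described above in one place, a deliberately simplified C++ control-flow sketch follows; every identifier (Server, handleRequest, and the stub functions) is a hypothetical placeholder standing in for the AC/NC logic described in the rest of the paper, not code from the controllers themselves.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical, greatly simplified model of the four-step DCSL procedure.
struct Server { int node; bool local; };

Server findOptimalServer(const std::vector<Server>& servers) { return servers.front(); } // stub
Server migrationEvaluation(const std::vector<Server>& locals) { return locals.front(); } // stub: MES, Section 3
void   migrateResources(const Server& from, const Server& to) { std::printf("migrate %d -> %d\n", from.node, to.node); }
void   provisionService(const Server& dst) { std::printf("provision toward node %d\n", dst.node); }

// Step 1: find the optimal server; if it is local, serve immediately.
// Steps 2-4: otherwise run the migration evaluation, migrate, then provision locally.
void handleRequest(const std::vector<Server>& servers, const std::vector<Server>& locals) {
    Server origin = findOptimalServer(servers);
    if (origin.local) { provisionService(origin); return; }
    Server dst = migrationEvaluation(locals);
    migrateResources(origin, dst);
    provisionService(dst);
}

int main() {
    handleRequest({{7, false}}, {{2, true}, {4, true}});
    return 0;
}
```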


Fig. 1 The architecture of DCSL based on virtual resource migration for software defined elastic data center optical network.


2.2 Functional models of DCSL for software defined elastic data center network

To realize the architecture described above, the network and application controllers have to be extended to support the DCSL functions. The functional building blocks of the two controllers and the basic interactions among these functional modules are shown in Fig. 2. In the NC, the network virtualization module is responsible for virtualizing the required optical network resources, while the enhanced OpenFlow module exchanges information with the OF-BVOSs periodically to perceive the elastic optical network via OFP. When a migration request arrives, the DCSL control module performs the migration evaluation scheme. After the scheme completes, the DCSL control module decides which node is the optimal server or virtual machine to accommodate the users, allocates application resources, and determines where the application should be placed or the virtual machines migrated. The local node search is restricted to the network nodes that are connected to application resources; note that we assume only some of the network nodes are connected to application resources, since adding application resources to every node would lead to very high network costs. The DCSL control module then passes the request, including its parameters (e.g., bandwidth and latency), to the path computation element (PCE) module, which eventually returns a success reply containing the information of the provisioned lightpath.

The PCE is capable of computing a network path or route based on a network graph and of applying computational constraints. To perform path computation with cross stratum optimization of optical and application stratum resources conveniently, the NC interacts with the AC through the network-application interface (NAI). After receiving the application resource information from the AC, the PCE module completes the end-to-end path computation considering CSO of network and application resources, where various computation strategies can be selected as plug-ins. The enhanced OF module performs spectrum assignment for the computed path and provisions the lightpath using the extended OFP. Note that the OFP agent software embedded in each OF-BVOS maintains an optical flow table, models the node information in software, and maps the content to control the physical hardware [17]. After the lightpath is set up successfully, the path information is stored in the database management (DBM) module, which interacts with the network virtualization module and stores the virtual network and application resources for DCSL.

Meanwhile, the AC obtains data center resource information periodically, or on an event-driven basis, through the application monitor module. Note that VMware software is deployed in the data centers of our experiment; we extend the OpenFlow protocol to invoke the VMware API in order to monitor and collect the data center resources and servers, and to control the resource migration.


Fig. 2 The functional models of network and application controllers.


3. Migration evaluation scheme

3.1 Network modeling

The data center service localization architecture based on elastic data center optical networking is represented as G(V, L, F, A), where V = {v_1, v_2, ..., v_N} denotes the set of bandwidth-variable optical switching nodes, L = {l_1, l_2, ..., l_L} indicates the set of bi-directional fiber links between the nodes in V, F = {ω_1, ω_2, ..., ω_F} is the set of spectrum sub-carriers on each fiber link, and A denotes the set of data center servers; N, L, F and A also represent the numbers of network nodes, links, spectrum sub-carriers and data center nodes, respectively. For each data center server, two time-varying application stratum parameters describe the service condition of the data center application resources: the RAM (memory) utilization U_R(t) and the CPU usage U_C(t). In the network stratum, the parameters comprise the hop count H_p of each candidate path, and the occupied spectrum bandwidth B_l and distance D_l of each link, which are related to the traffic cost of the corresponding link. From the users' point of view, what matters is the experienced QoS rather than which server provides the service. Therefore, we migrate the virtual machines, with the required computing and storage resources, to local nodes on the user side to enhance the experienced QoS at close range. Resource migration is triggered if the data center server providing the service is not on the local side. A migration request moves the application resources of the original data center to the optimal local server for service provisioning; it therefore contains the source data center node and the network and application resources needed for the migration. Each migration request from source node s can be translated into the needed network and application resources, which for simplicity of the network model comprise the required network bandwidth b and application resources ar. We denote the ith migration request as MR_i(s, b, ar), where MR_{i+1} arrives after connection demand MR_i in time order. According to the migration request and the status of the resources, an appropriate data center server among the local nodes is chosen as the migration destination based on the proposed scheme.
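For concreteness, the following C++ sketch mirrors the notation above as plain data structures; the type and field names (Link, DataCenter, MigrationRequest, etc.) are our own illustrative assumptions rather than structures taken from the controllers' implementation.

```cpp
#include <vector>

// G(V, L, F, A): nodes, bi-directional fiber links with per-link spectrum sub-carriers,
// and data center servers attached to a subset of the nodes (sketch, names assumed).
struct Link       { int u, v; std::vector<bool> subCarrierUsed; double distanceKm; };
struct DataCenter { int attachedNode; double cpuUtil, ramUtil; };  // U_C(t), U_R(t), time varying
struct Network    { int numNodes; std::vector<Link> links; std::vector<DataCenter> servers; };

// Migration request MR_i(s, b, ar): source data center node, required bandwidth,
// and the amount of application resources to be moved.
struct MigrationRequest {
    int    sourceNode;      // s: original data center node
    double bandwidthGbps;   // b: network bandwidth needed for the migration
    double appResources;    // ar: application (compute/storage) resources to migrate
};

int main() {
    Network g{14, {}, {}};                 // e.g., the 14-node topology used in Section 4
    MigrationRequest mr{3, 10.0, 2.5};     // illustrative values only
    (void)g; (void)mr;
    return 0;
}
```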

3.2 Migration evaluation factor

Based on the functional architecture of DCSL described above, we propose a migration evaluation scheme (MES) in the DCSL control module of the NC to select the migration destination and to evaluate the network status in coordination with the data center resources. With the MES scheme, the DCSL dynamically selects the new server node on the local side, and its data center location, as the destination, based on the application status collected from the data centers and the network condition provided by the NC. When a new migration request arrives, the NC maps the virtual migration into the migration parameters MR_i(s, b, ar). These multiple parameters are difficult to evaluate jointly because they have different dimensions, and several parameters in both the application and network stratums influence the system performance. In the optical network stratum, the network parameters include network bandwidth, hop count, transmission delay, modulation format and so on; in this study we assume all signals use the same modulation format for transmission. In the application stratum, the parameters include CPU usage, RAM utilization, I/O scheduling and so on; for simplicity, the CPU and RAM utilization, which are easily obtained through VMware, are used to evaluate the application resources. The parameter values are time varying, and with fixed weights among the parameters a static formulation cannot perceive the degree of resource shortage. Therefore, we adopt an adjustable evaluation rank as the weighting coefficient to assess the priority of the parameters. A product form of the parameters is also unsuitable for the formulation, because the value would become zero as soon as one of the parameters is exhausted. Owing to the time-varying characteristic of the parameters, we use the adjustable evaluation rank rates k_C and k_R as the weighting coefficients for CPU and RAM utilization, which describe their relative proportion and priority over time. To facilitate the realization of the MES scheme on the real testbed (described later in the paper), the settings of the evaluation rank rate can be simplified appropriately as long as the simplification does not affect the process and the effects of the scheme. We discretize the continuous rank value and assume two typical constants R_a and R_b for simplicity, ordered such that R_a + R_b = 1 and R_a ≥ R_b. Initially, the evaluation rank of CPU is higher than that of RAM, so the ranks satisfy k_C = R_a and k_R = R_b; that is, higher usage corresponds to higher priority. Once the statistical average of RAM utilization exceeds that of CPU, the evaluation ranks are adjusted accordingly to k_C = R_b and k_R = R_a. By the same reasoning, k_C and k_R are modified dynamically based on the feedback of the statistical average variation. Therefore, the application occupation f_ac of each current server on the local side, in terms of the application stratum parameters, is expressed by the dimensionless Eq. (1), where the parameters are normalized so that they combine linearly.

$$f_{ac}\left[U_C(t),U_R(t),k_C,k_R\right]=\frac{k_C\,U_C(t)+k_R\,U_R(t)}{k_C+k_R} \qquad (1)$$
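As a minimal illustration of Eq. (1) and the rank-rate adjustment described above, the C++ sketch below swaps k_C and k_R when the running average of RAM utilization exceeds that of CPU; the function names and the thresholding on running averages are assumptions made only for illustration.

```cpp
// Adjustable evaluation rank rates for CPU and RAM (R_a + R_b = 1, R_a >= R_b).
struct RankRates { double kC, kR; };

// Swap the rank rates when the statistical average of RAM utilization exceeds CPU,
// so the scarcer resource always carries the higher weight (sketch).
RankRates adjustRanks(double avgCpu, double avgRam, double Ra = 0.6, double Rb = 0.4) {
    return (avgCpu >= avgRam) ? RankRates{Ra, Rb} : RankRates{Rb, Ra};
}

// Eq. (1): normalized application occupation of one server at time t.
double appOccupation(double Uc, double Ur, const RankRates& k) {
    return (k.kC * Uc + k.kR * Ur) / (k.kC + k.kR);
}

int main() {
    RankRates k = adjustRanks(0.55, 0.70);       // RAM is currently the scarcer resource
    double fac = appOccupation(0.62, 0.75, k);   // illustrative utilizations
    return fac > 1.0;                            // occupation stays within [0, 1]
}
```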

In addition, following the method above, the adjustable evaluation rank rates k_B and k_D serve as the weighting coefficients between bandwidth and distance to describe the priority of these parameters, and they are dynamically modified in the network stratum. The network occupation f_bc of each current local node is then expressed by the dimensionless Eq. (2), in which B and B_l denote the total bandwidth and the occupied bandwidth of a link, respectively.

$$f_{bc}\left[B_l,D_l,H_p,k_B,k_D\right]=k_B\,\frac{\sum_{l=1}^{H_p}B_l}{H_p\,B}+k_D\sum_{l=1}^{H_p}D_l \qquad (2)$$
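A corresponding sketch of Eq. (2) follows; the PathLink fields and the assumption that every link of the path shares the same total bandwidth B are ours.

```cpp
#include <vector>

struct PathLink { double occupiedBw, totalBw, distance; };   // B_l, B, D_l for one hop (assumed fields)

// Eq. (2): network occupation of a candidate path with H_p hops, weighting the
// normalized occupied bandwidth and the accumulated distance by k_B and k_D.
double netOccupation(const std::vector<PathLink>& path, double kB, double kD) {
    if (path.empty()) return 0.0;
    double bwSum = 0.0, distSum = 0.0;
    for (const PathLink& l : path) { bwSum += l.occupiedBw; distSum += l.distance; }
    double hops  = static_cast<double>(path.size());
    double total = path.front().totalBw;   // B: assumed identical on every link of the path
    return kB * bwSum / (hops * total) + kD * distSum;
}

int main() {
    std::vector<PathLink> p{{80, 200, 300}, {50, 200, 450}};   // two-hop example path
    double fbc = netOccupation(p, 0.6, 0.4);
    return fbc < 0.0;
}
```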

Among the local nodes, the candidate servers with the K smallest application occupation values are chosen by the NC and expressed as the set F_a = {f_a1, f_a2, ..., f_aK}. Then the candidate path between the source and each candidate local server is calculated with the minimum network occupation, giving F_b = {f_b1, f_b2, ..., f_bK}. From the viewpoint of vector graphics, F_a can also be seen as a K-element vector space of application occupation vectors f_a1, f_a2, ..., f_aK. The mean vector f̄_a of the space F_a expresses its center, and the distance between a vector f_a and the mean vector is ‖f_a − f̄_a‖_2. Among these vectors, f_ai and f_aj are, respectively, the farthest from and the nearest to the mean vector f̄_a, as chosen by Eq. (3). Their correlation coefficient β is calculated by Eq. (4). The physical significance of the correlation coefficient is related to the degree of data center load balancing, i.e., the variance among the application occupations of the local nodes. A larger coefficient indicates a better balancing degree among the servers: the correlation coefficient of the application occupations on different servers represents the degree of their interdependence, and a larger value means the servers' loads vary with each other more closely and are therefore better balanced. On the other hand, β also determines the weighting of the application occupation in Eq. (5). When the degree of load balancing in the data center is higher, the weighting coefficient of the application occupation should be lower than that of the network resource utilization, since the load balancing target has been met and the network resource is comparatively scarcer in the current scenario. To measure the rationality of the choice of virtual resource migration, we define α as the migration evaluation factor that assesses resource utilization globally across the application and network stratums, with β serving as the dynamic weight between the network and application parameters. Based on Eq. (4) below, the application utilization weight β changes dynamically according to the feedback of the load balancing degree, and the migration evaluation factor α satisfies Eq. (5).

$$\left\|f_{ai}-\bar{f}_a\right\|_2=\max_a\left\{\left\|f_a-\bar{f}_a\right\|_2\right\},\qquad \left\|f_{aj}-\bar{f}_a\right\|_2=\min_a\left\{\left\|f_a-\bar{f}_a\right\|_2\right\} \qquad (3)$$
$$\beta=\frac{\operatorname{cov}\left(f_{ai},f_{aj}\right)}{\sqrt{D\left(f_{ai}\right)D\left(f_{aj}\right)}}=\frac{E\left(f_{ai}f_{aj}\right)-E\left(f_{ai}\right)E\left(f_{aj}\right)}{\sqrt{E\left(f_{ai}^2\right)-\left[E\left(f_{ai}\right)\right]^2}\;\sqrt{E\left(f_{aj}^2\right)-\left[E\left(f_{aj}\right)\right]^2}} \qquad (4)$$
$$\alpha=(1-\beta)\,\frac{f_{ac}}{\max\left\{f_{a1},f_{a2},\dots,f_{aK}\right\}}+\beta\,\frac{f_{bc}}{\max\left\{f_{b1},f_{b2},\dots,f_{bK}\right\}} \qquad (5)$$
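The sketch below illustrates Eqs. (3)-(5): it locates the application-occupation vectors farthest from and nearest to the mean, computes their correlation coefficient β, and combines the normalized application and network occupations into α. Treating each f_a as a short vector of occupation samples is our reading of the vector-space description, not a detail given in the text, and all identifiers are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;   // one application-occupation vector per candidate server,
                                   // interpreted here as a short history of f_ac samples

static double mean(const Vec& v) {
    double s = 0.0;
    for (double x : v) s += x;
    return s / v.size();
}

static double l2dist(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Eq. (4): Pearson correlation coefficient between two sample vectors.
static double correlation(const Vec& x, const Vec& y) {
    double ex = mean(x), ey = mean(y), exy = 0.0, exx = 0.0, eyy = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) { exy += x[i] * y[i]; exx += x[i] * x[i]; eyy += y[i] * y[i]; }
    exy /= x.size(); exx /= x.size(); eyy /= x.size();
    return (exy - ex * ey) / std::sqrt((exx - ex * ex) * (eyy - ey * ey));
}

// Eqs. (3)-(5): find the farthest (i) and nearest (j) vectors to the mean, derive beta,
// and combine the normalized application and network occupations of one candidate.
double migrationFactor(const std::vector<Vec>& Fa,
                       double fac, double facMax, double fbc, double fbcMax) {
    Vec fbar(Fa.front().size(), 0.0);                        // mean vector of the K candidates
    for (const Vec& f : Fa)
        for (std::size_t d = 0; d < fbar.size(); ++d) fbar[d] += f[d] / Fa.size();
    std::size_t i = 0, j = 0;                                // Eq. (3)
    for (std::size_t k = 1; k < Fa.size(); ++k) {
        if (l2dist(Fa[k], fbar) > l2dist(Fa[i], fbar)) i = k;
        if (l2dist(Fa[k], fbar) < l2dist(Fa[j], fbar)) j = k;
    }
    double beta = correlation(Fa[i], Fa[j]);                 // Eq. (4)
    return (1.0 - beta) * fac / facMax + beta * fbc / fbcMax;   // Eq. (5)
}

int main() {
    std::vector<Vec> Fa{{0.30, 0.40, 0.50}, {0.60, 0.50, 0.70}, {0.20, 0.30, 0.35}};  // K = 3
    double alpha = migrationFactor(Fa, 0.45, 0.70, 0.30, 0.60);
    return alpha < 0.0;
}
```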

3.3 MES scheme

We assume that the network devices support network function virtualization (NFV), where network functions run as software on generic hardware that can be consolidated onto industry-standard elements, e.g., switches, computing and storage. Note that the tradeoff between service provisioning latency and resource migration delay is considered in the scheme. According to the application utilization, the MES scheme first chooses the best K candidate server nodes in the application stratum close to the user. In the network stratum, the node with the minimum value of the migration evaluation factor α is then selected from the K candidates as the migration destination node, according to both application and network utilization. The needed virtual resources are then migrated to this destination node on the local side for DCSL. On receiving the traffic request and the new source-destination node pair, the NC completes the end-to-end path computation under the connection and service parameter constraints, and performs spectrum assignment for the computed path between the source and the new destination node via OFP after the local node has been chosen.
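Putting the pieces together, a simplified C++ sketch of this two-stage selection is shown below; β is taken as a precomputed input here, and all identifiers are illustrative assumptions rather than the controller's actual API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Candidate { int node; double fac; double fbc; };   // per-local-node occupations (assumed layout)

// Two-stage MES selection (sketch): keep the K nodes with the lowest application
// occupation, then pick the one minimizing the migration evaluation factor alpha.
// beta is taken as a precomputed input; Eqs. (3)-(4) define how it is derived.
int selectDestination(std::vector<Candidate> local, std::size_t K, double beta) {
    std::sort(local.begin(), local.end(),
              [](const Candidate& a, const Candidate& b) { return a.fac < b.fac; });
    if (local.size() > K) local.resize(K);

    double facMax = 0.0, fbcMax = 0.0;
    for (const Candidate& c : local) {
        facMax = std::max(facMax, c.fac);
        fbcMax = std::max(fbcMax, c.fbc);
    }
    if (facMax <= 0.0) facMax = 1.0;   // avoid division by zero on idle candidates
    if (fbcMax <= 0.0) fbcMax = 1.0;

    int best = local.front().node;
    double bestAlpha = 1e18;
    for (const Candidate& c : local) {
        double alpha = (1.0 - beta) * c.fac / facMax + beta * c.fbc / fbcMax;   // Eq. (5)
        if (alpha < bestAlpha) { bestAlpha = alpha; best = c.node; }
    }
    return best;   // migration destination; path computation and spectrum assignment follow
}

int main() {
    std::vector<Candidate> locals{{1, 0.7, 0.2}, {4, 0.3, 0.6}, {6, 0.4, 0.3}, {9, 0.5, 0.5}};
    return selectDestination(locals, 3, 0.4) < 0;   // picks one of nodes 4, 6, 9
}
```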

4. Experimental demonstration and performance evaluation

To evaluate the overall feasibility and efficiency of the proposed architecture, we set up an elastic optical network with data centers based on our testbed, as shown in Fig. 3. Due to the lack of NFV-enabled hardware, the testbed is deployed on the control plane; data-plane verification is left for future work. Four flexi-grid optical nodes are equipped with Huawei OptiX OSN 6800 hardware, each comprising a flexible ROADM, ODU boards and the corresponding tributary cards, making it possible to switch or transport flexi-grid signals in optical networks [17]. We use Open vSwitch (OVS) as the software OFP agent to mediate between the controller and the flexi-grid optical nodes. In addition, OFP agents are also used to emulate the other optical nodes in the data plane to support DCSL with OFP. The enhanced OF module is the component that performs the spectrum allocation. Data center servers and the OFP agents are realized on an array of virtual machines created by VMware ESXi V5.1 running on IBM X3650 servers. This virtual operating system technology makes it easy to set up the experimental topology, which is based on the US backbone comprising 14 nodes and 21 links. For the OpenFlow-based DCSL control plane, the NC server supports the proposed architecture and is deployed as three virtual machines for DCSL control, network virtualization and the PCE strategy plug-in, while the AC server acts as a CSO agent to monitor the application resources from the data center networks. Each controller server controls the corresponding resources, while the database servers are responsible for maintaining the traffic engineering database (TED), the management information base (MIB), the connection status, and the configuration of the database and transport resources. We also deploy a service information generator related to the AC, which generates batches of data center services for the experiments.


Fig. 3 Experimental testbed for DCSL and demonstrator setup.


Based on the testbed described above, we have experimentally designed and verified DCSL for data center services in an elastic optical network. The experimental results are shown in Figs. 4(a)-4(b), which present the whole signaling procedure for DCSL using OFP through Wireshark captures taken at the NC and AC, respectively. As shown in Fig. 4(a), 10.108.65.249 and 10.108.49.14 denote the IP addresses of the NC and AC, respectively, while 10.108.50.21 and 10.108.51.22 represent the IP addresses of the related OF-BVOSs. Note that existing OpenFlow message types are reused with their original functions to simplify the implementation in this paper; new message types will be defined to support new functionalities in future work. The features request message is responsible for monitoring by regularly querying the OF-BVOSs about their current status; the NC receives the responses and obtains the information from the OF-BVOSs via features reply messages. When a data center request arrives, the NC prepares to provide the required data center resources for service localization, and then sends the DCSL request to the AC via a UDP message. Here we use a UDP message to simplify the procedure and reduce the performance pressure on the controllers. After receiving the application resource information through this interworking, the NC performs the MES scheme to provide the virtual migration of data center resources, computes the paths considering CSO of optical network and application resources, and then provisions the spectral paths by controlling all the corresponding OF-BVOSs along the computed path via flow mod messages. This shortens the distance between user and content, improving the user's QoS and utilizing the cross stratum resources effectively. On receiving the setup success reply via packet in, the NC returns the DCSL success reply to the AC and updates the application usage to keep the two controllers synchronized.


Fig. 4 Wireshark capture of the message sequence for DCSL in (a) NC and (b) AC.


We also evaluate the performance of DCSL based on the MES scheme from 100 Erlang to 650 Erlang and compare it with the traditional CSO scheme [17] in terms of path blocking probability, resource occupation rate and path provisioning latency using virtual machines. The traditional CSO scheme does not pre-process the distribution of data center resources; it accommodates service requests from the user to the original destination node immediately, based on the CSO of network and application resource status. Note that we use the classical first-fit algorithm for spectrum assignment. The traffic requests have bandwidths randomly distributed between 500 Mbps and 100 Gbps. We assume the CPU utilization in the data center is drawn from 0.1% to 1% for each demand, while the storage occupied in a server ranges from 1 GB to 10 GB per service request, with a hard disk size of 1 TB; if each service request occupied a higher percentage, the data center resources would be exhausted with fewer demands. Requests arrive at the network following a Poisson process, and the results are extracted from 100,000 demands per execution. Each connection demand's duration and the inter-arrival interval follow negative exponential distributions. We assume the bandwidth of a sub-carrier is 12.5 GHz, a typical value in elastic optical networks. In the MES scheme, we set the values of R_a and R_b to 60% and 40%, respectively, to limit the experimental complexity in the simulation settings. Due to the time-varying characteristic of the parameters, the proportion between CPU and RAM utilization changes over time, so these values of R_a and R_b are only initial values; other choices do not affect the process or the effects of the scheme. All simulations are executed in C++ with GCC v4.4.7 on a Linux server with a 2.4 GHz Intel E5620 CPU and 12 GB RAM.
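For reference, a minimal sketch of how demands with these statistics could be generated is given below; the distributions follow the settings above, while the generator itself, its parameter names, and the use of one time unit as the mean holding time are our own assumptions.

```cpp
#include <cstddef>
#include <random>
#include <vector>

struct Demand { double arrival, holding, bandwidthGbps, cpuPct, storageGB; };

// Generate Poisson arrivals with exponentially distributed holding times at a given
// offered load (Erlang = arrival rate x mean holding time), with bandwidth, CPU and
// storage drawn uniformly from the ranges used in Section 4 (sketch).
std::vector<Demand> generateDemands(std::size_t n, double erlang, double meanHolding = 1.0) {
    std::mt19937 rng(42);
    std::exponential_distribution<> interArrival(erlang / meanHolding), holding(1.0 / meanHolding);
    std::uniform_real_distribution<> bw(0.5, 100.0), cpu(0.1, 1.0), storage(1.0, 10.0);
    std::vector<Demand> demands;
    double t = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        t += interArrival(rng);
        demands.push_back({t, holding(rng), bw(rng), cpu(rng), storage(rng)});
    }
    return demands;
}

int main() {
    auto d = generateDemands(100000, 600.0);   // e.g., 600 Erlang, 100,000 demands per run
    return d.empty();
}
```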

The AC obtains the CPU and RAM utilization in a timely manner through the VMware API, and the application occupation of each server is then calculated with Eq. (1). Note that the weighting coefficients between CPU and RAM are adjusted according to the time-varying priority. According to the application utilization, the MES scheme first chooses the best K candidate server nodes in the application stratum close to the user, and the application utilization weight β is computed among the K candidate nodes through Eqs. (3)-(4). In the network stratum, the network utilization of the candidate nodes is calculated using Eq. (2), and the corresponding value of the migration evaluation factor α is computed using Eq. (5). The node with the minimum α value is then selected from the K candidates as the migration destination node according to application and network utilization, and this new destination node serves the services. The first-fit algorithm is used for spectrum assignment on the provisioning path between the source and the new destination node after the local node has been chosen.

Figure 5(a) compares the path blocking probability of the traditional CSO and MES schemes in the NSFNET topology. Path blocking probability measures both network and application blocking, the latter caused by CPU and memory overflow. Note that the unit of traffic load in Figs. 5(a)-5(c) is Erlang. MES clearly achieves a lower path blocking probability than the other scheme, especially when the network is heavily loaded (from 400 Erlang to 650 Erlang); the blocking probability is reduced by 29.8% compared with CSO when the traffic load is 600 Erlang. The reason is that the MES scheme pulls the original data center application resources to the local nodes on the user side through virtual migration, so a large number of network links and spectral resources are saved because the accommodation distance is shortened after data center service localization, and the selected path is more likely to be set up successfully during the spectrum allocation phase. The CSO scheme chooses the destination data center within the limits of the original servers, so longer routes are selected for service provisioning. When the traffic load is low (from 100 Erlang to 200 Erlang), the path blocking probability of the MES scheme is only slightly lower than that of CSO, and the advantage of the proposed scheme is less obvious: the blocking probability is reduced by just 8.7% compared with CSO at 100 Erlang. This is because plenty of idle network and application resources are available for service accommodation when only a few service demands arrive, so the position of the destination node matters less. Another observation is that the MES curve drops further below CSO as the offered load increases; at moderate load (from 250 Erlang to 350 Erlang), the blocking probability is reduced by 27.5% at 250 Erlang. This means the shorter traffic route brings a higher probability of successful service provisioning.

The comparison of resource utilization between the two schemes is shown in Fig. 5(b). Resource utilization reflects the percentage of occupied resources with respect to the entire elastic optical network and application resources. As shown in the figure, the MES scheme enhances resource utilization remarkably compared with the traditional CSO scheme; the resource utilization of MES is improved by 23.7% when the traffic load is 600 Erlang. This is because the MES scheme localizes the application resources to provide data center services close to the user and breaks through the geographical restriction to optimize the optical network and data center resources, thereby yielding higher resource efficiency. The resource utilization is improved by only 9.5% at 200 Erlang, because fewer resources are occupied in both schemes when the traffic load is low.

The path provisioning latency of the schemes is compared in Fig. 5(c). In this work, the latency reflects the average provisioning delay after a service request arrives. The proposed MES significantly outperforms the CSO scheme in path provisioning latency, because the MES scheme can use the spare time before service demands arrive to perform the virtual resource migration and localize the data center resources close to the user, which saves a large amount of provisioning delay. The CSO scheme accommodates data center services at the original destination node and thus incurs longer provisioning delays. This phenomenon is more obvious under heavy traffic (from 400 Erlang to 650 Erlang), because more requests need to be served and the dissipated resources of the CSO scheme increase, further increasing the delay. The path provisioning latency results confirm the enhanced responsiveness to dynamic demands.


Fig. 5 Comparison on (a) path blocking probability, (b) resource occupation rate and (c) path provisioning latency among various schemes in heavy traffic load scenario.


5. Conclusion

To enhance the QoS guarantee of data center services, this paper presents a DCSL architecture based on virtual resource migration in a software defined elastic optical network. Additionally, the MES scheme is introduced for DCSL based on the proposed architecture; it evaluates the network status and selects the optimal migration destination. The functional architecture is described in this paper. The feasibility and efficiency of DCSL are verified on the control plane of our OpenFlow-based enhanced SDN testbed. We also quantitatively evaluate the performance of the MES scheme under a heavy traffic load scenario in terms of path blocking probability, resource utilization, and provisioning latency, and compare it with the traditional CSO scheme. The results indicate that DCSL with the MES scheme utilizes optical network and data center resources effectively and enhances the end-to-end responsiveness of data center services through service localization, while reducing the blocking probability.

Our future DCSL work includes two aspects. One is to improve the MES scheme performance with finer time granularity and to extend the testbed to a large-scale network topology with multiple layers and domains. The other is to define new message types to support new functionalities for DCSL, and to implement network virtualization in the elastic data center optical network with IP over elastic optical networks on our OpenFlow-based testbed.

Acknowledgments

This work has been supported in part by NSFC project (61501049, 61271189, 61201154), the Fundamental Research Funds for the Central Universities (2015RC15), and Fund of State Key Laboratory of Information Photonics and Optical Communications (BUPT), P. R. China.

References and links

1. M. Al-Fares, A. Loukissas, and A. Vahdat, “A scalable, commodity data center network architecture,” Comput. Commun. Rev. 38(4), 63–74 (2008).

2. C. Kachris and I. Tomkos, “A survey on optical interconnects for data centers,” IEEE Comm. Surv. and Tutor. 14(4), 1021–1036 (2012).

3. P. Zhu, J. Li, P. Zhou, B. Lin, Z. Chen, and Y. He, “Upstream WDM-PON transmission scheme based on PDM-OOK modulation and digital coherent detection with dual-modulus algorithm,” Opt. Express 23(10), 12750–12757 (2015).

4. J. Wu, Z. Zhang, Y. Hong, and Y. Wen, “Cloud radio access network (C-RAN): a primer,” IEEE Netw. 29(1), 35–41 (2015).

5. T. Szyrkowiec, A. Autenrieth, P. Gunning, P. Wright, A. Lord, J. P. Elbers, and A. Lumb, “First field demonstration of cloud datacenter workflow automation employing dynamic optical transport network resources under OpenStack and OpenFlow orchestration,” Opt. Express 22(3), 2595–2602 (2014).

6. H. Yang, Y. Zhao, J. Zhang, Y. Tan, Y. Ji, J. Han, Y. Lin, and Y. Lee, “Data center service localization based on virtual resource migration in software defined elastic optical network,” in Proceedings of Optical Fiber Communication Conference (OFC 2015), (OSA, 2015), paper Th4G.4.

7. I. Tomkos, S. Azodolmolky, J. Sole-Pareta, D. Careglio, and E. Palkopoulou, “A tutorial on the flexible optical networking paradigm: state of the art, trends, and research challenges,” Proc. IEEE 102(9), 1317–1337 (2014).

8. H. Yang, J. Zhang, Y. Zhao, Y. Ji, J. Wu, Y. Lin, J. Han, and Y. Lee, “Performance evaluation of multi-stratum resources integrated resilience for software defined inter-data center interconnect,” Opt. Express 23(10), 13384–13398 (2015).

9. M. Aazam and E. Huh, “Fog computing micro datacenter based dynamic resource estimation and pricing model for IoT,” in Proceedings of IEEE International Conference on Advanced Information Networking and Applications (AINA 2015), pp. 687–694.

10. A. A. Alsaffar and E. Huh, “Multimedia delivery mechanism framework for smart devices based on mega data center and micro data center in PMIPv6 environment,” in Proceedings of International Conference on Information Networking (ICOIN 2015), pp. 367–368.

11. H. Yang, J. Zhang, Y. Zhao, Y. Ji, J. Han, Y. Lin, and Y. Lee, “CSO: cross stratum optimization for optical as a service,” IEEE Commun. Mag. 53(8), 130–139 (2015).

12. L. Liu, W. R. Peng, R. Casellas, T. Tsuritani, I. Morita, R. Martínez, R. Muñoz, and S. J. B. Yoo, “Design and performance evaluation of an OpenFlow-based control plane for software-defined elastic optical networks with direct-detection optical OFDM (DDO-OFDM) transmission,” Opt. Express 22(1), 30–40 (2014).

13. M. Channegowda, R. Nejabati, M. Rashidi Fard, S. Peng, N. Amaya, G. Zervas, D. Simeonidou, R. Vilalta, R. Casellas, R. Martínez, R. Muñoz, L. Liu, T. Tsuritani, I. Morita, A. Autenrieth, J. P. Elbers, P. Kostecki, and P. Kaczmarek, “Experimental demonstration of an OpenFlow based software-defined optical network employing packet, fixed and flexible DWDM grid technologies on an international multi-domain testbed,” Opt. Express 21(5), 5487–5498 (2013).

14. F. Paolucci, F. Cugini, N. Hussain, F. Fresi, and L. Poti, “OpenFlow-based flexible optical networks with enhanced monitoring functionalities,” in Proceedings of European Conference and Exhibition on Optical Communications (ECOC 2012), (OSA, 2012), paper Tu.1.D.5.

15. L. Liu, R. Muñoz, R. Casellas, T. Tsuritani, R. Martínez, and I. Morita, “OpenSlice: an OpenFlow-based control plane for spectrum sliced elastic optical path networks,” in Proceedings of European Conference on Optical Communication (ECOC 2012), (OSA, 2012), paper Mo.2.D.3.

16. R. Martínez, R. Casellas, R. Vilalta, and R. Muñoz, “Experimental assessment of GMPLS/PCE-controlled multi-flow optical transponders in flexgrid networks,” in Proceedings of Optical Fiber Communication Conference (OFC 2015), (OSA, 2015), paper Tu2B.4.

17. H. Yang, J. Zhang, Y. Zhao, Y. Ji, H. Li, Y. Lin, G. Li, J. Han, Y. Lee, and T. Ma, “Performance evaluation of time-aware enhanced software defined networking (TeSDN) for elastic data center optical interconnection,” Opt. Express 22(15), 17630–17643 (2014).

18. Global Transport SDN Demonstration White Paper, ONF and OIF (2014), http://www.oiforum.com/public/Form_Global_Transport_SDN_Demo_WP.html.

19. E. Haleplidis, ed., “Software-Defined Networking (SDN): Layers and Architecture Terminology,” IETF RFC 7426 (2015), https://tools.ietf.org/html/rfc7426.
