Everything is being connected to the cloud and the Internet of Things, and networked robots combined with big data analysis are creating important applications and services. The cloud network architecture is moving toward mega-cloud data centers (DCs), provided by companies such as Amazon and Google, in combination with distributed small DCs or edge computers. While the traditional restrictions imposed by distance and bandwidth are being overcome by advanced optical interconnection, modern applications impose more complex performance and quality-of-service requirements in terms of processing power, response time, and data volume. The rise in cloud performance must be matched by improvements in network performance. We therefore propose an application-triggered cloud network architecture based on huge-bandwidth optical interconnection. This paper addresses edge/center-cloud and edge/edge integration using virtual machine migration. In addition, to reduce energy consumption, an application-triggered intra-DC architecture is described. The proposed architectures and technologies can realize energy-efficient, high-performance cloud services.
© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Cloud services, autonomous driving vehicles (AD-cars), smart houses, and the Internet of Things (IoT) will use optical access/metro networks to create, store, and process massive amounts of data in cloud data centers (DCs). The cloud DC architecture will change from a centralized mega-DC cloud to distributed and interconnected small computing resources such as edge computers and micro-DCs. Optical interconnection eases the bandwidth and distance restrictions that have traditionally hindered the integration of cloud resources.
The smart and connected community (S&CC) network proposal uses optical interconnection to link objects, humans, and applications to a network that offers sophisticated processing functions. S&CC requires that application and processing functions be provided with sophisticated resource integration because each application has different requirements in terms of processing power, data amount, and response time. Our solution is to dynamically combine resources through optical interconnection.
This paper introduces application-triggered network/cloud and cloud/cloud resource coordination for the future distributed cloud network era, as shown in Fig. 1. We classify coordination into three types. Two of them, vertical integration and horizontal integration, cover inter-DC coordination; the third is application-triggered intra-DC coordination. The network architecture and techniques proposed herein will be used in future optically interconnected cloud networks.
II. Application-Triggered Dynamic Network Configuration Based on Optical Network
The S&CC services era will see everything connected to the network and supported by cloud computing resources. There are various kinds of services, such as networked robots, AD-cars, and machine learning, all of which have different service-performance requirements. For example, AD-cars demand high levels of trust with real-time response (i.e., trustworthiness) [4,5], while big data analysis needs huge processing/storage resources with sophisticated machine-learning capabilities.
In addition, future networks must consider energy efficiency in realizing dynamic and flexible services. Therefore, we must consider not only inter-DC networking but also the combination of DCs or intra-DC resources, including edge computers/micro-DCs. These demands can be tackled with optical networking and software-defined networking (SDN) technology, as they can overcome distance and bandwidth restrictions while supporting dynamic operation.
We describe here an application-triggered integrated network/computing resource architecture for S&CC. The architecture is designed to suit several typical S&CC applications such as networked robots, AD-cars, and big data analysis. Figure 2 shows typical applications for S&CC.
The first application field (A) encompasses networked robots, home automation, and the smart house, all of which have low mobility. Everything is connected to the network (i.e., IoT) and controlled by network/cloud applications. However, some of the services or applications are delay sensitive, and some have huge processing requirements. This demands enhanced collaboration between edge and center cloud resources. This application field is characterized by “vertical integration,” and details are given in Section III.
AD-cars are another key S&CC application; their cloud-processing function (B) must offer trustworthiness and high mobility (see Fig. 2). Because vehicles move very rapidly, they need real-time response from edge computers. Fifth-generation (5G) wireless access meets the requirement for quick access, but the ability to transfer processing functions is also very important. We call this “horizontal integration.” Details will be described in Section IV.
The intra-DC network must support and run several services dynamically. Different applications have different processing and resource requirements. Intra-DC application-triggered network and computing resource integration is also very important for reducing energy consumption [see (C) in Fig. 2]. We describe this in Section V.
III. Application-Triggered Dynamic Centralized-Cloud and Edge Computer Integration (Vertical Integration: Type A)
For the S&CC network, smart houses and IoT will be major applications. Data mining and pre-processing or data filtering at the edge computer, as well as sophisticated data analysis such as machine learning at the center cloud, are basic foundations for these applications. However, service flexibility is essential, so network function virtualization (NFV) or SDN will be used. We extend NFV to offer more general functions such as those needed for applications, devices, and programs; the result is called the ubiquitous grid networking environment, or “uGrid” [8,9]. This concept classifies functions into blocks and ensures portability. The goal is to move functions to any place or machine while satisfying response-time and resource restrictions. We set 10 ms as the delay target to satisfy most delay-sensitive applications such as real-time robot control.
Figure 3 shows an example of vertical integration in the device/edge/center cloud. As the networked robot service places the toughest delay requirement on S&CC, we are targeting test robot services as a type-A application. In terms of robot control, a single robot task is composed of several functions covered by a robot operating system (ROS) [10,11]. Robot control functions can be implemented on a virtual machine (VM). All functions use a remote procedure call for communication. Therefore, a set of robot control functions can be distributed on VMs.
Key components in the trial are surface detection by three-dimensional (3D) sensors of the robot, a process path planner to compute the blending tool path, and a free motion planner to determine the movement of the robot arm.
We constructed a wide-area robot operation testbed linking the USA and Japan as a typical case; control is provided by a centralized cloud located in an American mega-DC. The ROS consists of small modular robot control functions. First, all robot control functions are loaded into the centralized cloud in the USA. Then, delay-sensitive functions such as surface detection and the motion planner are moved to one or more edge robot controllers located in the edge computer(s) or micro-DC(s). In the latter case, ROS functions are distributed among the centralized cloud and edge device(s), as shown in Fig. 4. Edge devices offer limited computing resources but excellent response times; centralized cloud resources, on the other hand, are effectively unlimited. For robot control, 3D video sensing data must be transferred to the surface-detection process located in the edge robot controller for analysis. This division of functions benefits robot operation, as shown in Fig. 5. In Fig. 5(a), the robot sends the 3D sensor data (whose peak rate exceeds 1600 kbps) and control signal to the robot controller located in the cloud DC. It takes the main controller approximately 160 s to complete the robot control procedure. When the sub-controller is run on the edge computer [Fig. 5(b)], the 3D sensor data is transferred only to the sub-controller, and only the detected surface and control signal are sent to the main controller located in the cloud DC. Therefore, only 30 kbps of bandwidth is required, and the procedure completes in approximately 110 s. This roughly 30% reduction in procedure time indicates that the robot’s working throughput can be improved by edge and cloud coordinated control.
We recently demonstrated a networked robot application connected to a maximum 100 Gbps wavelength division multiplexed wide-area optical network [13,14]. Figure 6 shows the experimental setup with edge/centralized cloud controllers. This experiment involved the cooperation of Japanese network service providers KDDI and NTT Communications; system vendors Fujitsu, Mitsubishi Electric, Furukawa Electric, and OA Laboratory; and the research institutes NICT and Keio University. The American side consisted of Texas Instruments (TI) and the University of Texas at Dallas (UTD). Figure 7 shows a photo of the experimental setup. In the application examined, parts of various shapes are carried by a conveyor; the robot recognizes each part by its shape, grabs the next part, and moves it to the appropriate place.
IV. AD-CAR Application Using High-Mobility Horizontal Integration (Type B)
One of the important applications for S&CC is the AD-car, a network-controlled car with high mobility. For autonomous driving, three layered application programming interfaces (APIs) can be defined. API-1 covers vehicle decisions, API-2 covers local decisions, and API-3 is for cloud decisions as shown in Fig. 8.
The AD-car has many sensors and actuators linked by a controller area network (CAN). API-1 provides physical driving-control functions such as accelerator, brake, gear shift, and steering. API-2 is used for local decisions. Sensors in cars can detect signals, parked cars, motorcycles, and other obstacles on the street. Additionally, the AD-car can acquire data and control commands passed from external sources via the communication network. This involves real-time machine-to-machine (M2M) control. API-3 is used by centralized cloud control. The AD-car’s path planning, congestion avoidance, sophisticated route selection, priority control, and other planning functions are implemented using this API. The decisions needed are realized by machine learning at the centralized cloud via API-3.
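The three-layer split can be pictured as a simple dispatcher. The request names and layer assignments below are illustrative assumptions, not the project's actual API:

```python
# Sketch of the three-layer API split for the AD-car:
# API-1 on the vehicle, API-2 at the edge, API-3 in the central cloud.
# Request names are invented for illustration.

def handle(request: str) -> str:
    """Route a control request to the layer that should decide it."""
    vehicle_level = {"accelerate", "brake", "shift", "steer"}  # API-1
    local_level = {"obstacle_avoid", "signal_response"}        # API-2
    if request in vehicle_level:
        return "API-1: in-vehicle real-time control"
    if request in local_level:
        return "API-2: edge computer, <10 ms budget"
    # Everything else is a planning-scale decision.
    return "API-3: cloud planning (route, congestion)"

print(handle("brake"))           # API-1: in-vehicle real-time control
print(handle("obstacle_avoid"))  # API-2: edge computer, <10 ms budget
print(handle("route_planning"))  # API-3: cloud planning (route, congestion)
```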
API-2 functions are realized using the vehicle’s onboard computers and edge computers with information from several sensors. For example, street signals and adjacent cars exchange information with the AD-car based on M2M communication. Since the AD-car can move rapidly (for example, at 60 km/h), API-2 must offer response times under 10 ms, including communication and processing time at the edge computer. In addition, trustworthy control is needed. To meet these requirements, we propose a dynamic edge processing system with horizontal VM live migration, as shown in Fig. 9. For our future experiment, we have designed edge computing with triple redundancy and majority rule to achieve trustworthiness. This is because moving vehicles sometimes lose the connection to the controller VM. Moreover, live migration requires a migration period and thus causes a control interruption in the AD-car program. Triple redundancy with majority rule can ensure trustworthy control under such communication and control interruptions.
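The majority rule over three controller replicas can be sketched as follows; this is an illustrative voting function only, assuming a replica that is mid-migration or disconnected simply reports nothing:

```python
from collections import Counter
from typing import Optional

def majority_vote(commands: list) -> Optional[str]:
    """Return the control command agreed on by at least 2 of 3 replicas.

    A replica that is mid-migration or disconnected reports None and is
    ignored; as long as two replicas agree, control continues.
    """
    valid = [c for c in commands if c is not None]
    if not valid:
        return None  # all replicas unavailable: fall back to a safe stop
    cmd, count = Counter(valid).most_common(1)[0]
    return cmd if count >= 2 else None

# Replica 2 is migrating (None); the remaining two still agree.
print(majority_vote(["brake", None, "brake"]))  # brake
# No two replicas agree: no command is issued (safe stop).
print(majority_vote(["brake", "steer", None]))  # None
```

This shows why a migration interruption on one replica does not interrupt vehicle control: the other two replicas still form a majority.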
Efficient horizontal VM live migration across edge devices (micro-DCs or edge computers) is key to realizing this concept. Figure 10 shows horizontal live migration across the current metro/access network using layer-3 (i.e., IP) networking. This conventional approach was adopted because there has been little need to transfer huge amounts of data among edge computers; most applications, such as smart homes and networked robots, are static. Therefore, the conventional approach to copying data between edge computers uses the layer-3 function located in the metro IP routers. This IP-based routing is characterized by long delay times.
Our solution was to propose and test lower-layer cut-through among optical network units in the Japanese national project called the Elastic Lambda Aggregation Network [15–17]. Using this network, live migration via a lower network layer can be realized, as shown in Fig. 11. This dynamic function mobility is a new requirement for the S&CC network.
We are now constructing a small open testbed at the Keio University Shin-Kawasaki K2-Campus using a Toyota “Estima” AD-car (see Fig. 12). The testbed has two ovals of about 200 and 150 m in length. The AD-car is equipped with real-time kinematic global positioning, high-performance light detection and ranging (LIDAR), and millimeter-wave radio detection and ranging (RADAR). A 5G access network will be used for communication with edge devices.
V. Application-Triggered Automatic Flow Routing In Data Center (Type-C Application)
In this section, intra-DC coordination examples (Hadoop-based application-triggered DC network coordination and flow-classification-based DC network coordination) are presented.
A. Hadoop-Based Automatic Flow Routing
Hadoop-based application-triggered network reconfiguration was discussed in earlier work, which showed that traffic demand estimation of Hadoop jobs can be done using knowledge of the map tasks; however, no detailed estimation method was provided. This omission is rectified by our proposed detailed traffic estimation method [21–23].
Figure 13 shows the proposed application-triggered intra-DC network system [21–23]. Basically, Hadoop clusters are interconnected by hybrid electrical packet and optical circuit switching networks such as Helios and HydRA. Three elements are added to the conventional in-rack server group.
- (1) Traffic Monitor: monitors traffic in the Hadoop cluster. It passes its findings to the cluster manager.
- (2) Cluster Manager: monitors the Hadoop clusters in the DC and performs online shuffle-heavy judgment of jobs in progress. The cluster manager notifies the traffic monitor of cluster state and current network configuration.
- (3) Network Manager: sets flow path between top of rack switches (ToRs) using the OpenFlow network control protocol. For shuffle-heavy jobs, optical circuits between ToRs are set.
In the proposed system, layer-2 switching is assumed for data transfer, but application to layer-3 routing is also possible.
There are several parallel Hadoop cluster jobs in the DC, and jobs are executed simultaneously in each cluster. In the proposed system, multiple paths to each cluster are distinguished using virtual local area network identifiers (VLAN IDs) for layer-2 switches. VLAN paths are assigned to determine the flow route. Each path is assigned to either the electrical packet or the optical circuit switching network.
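The VLAN-based path assignment can be sketched as follows. This is an illustration under assumed VLAN ID ranges, not the prototype's actual configuration:

```python
# Sketch: give each Hadoop cluster two pre-provisioned layer-2 paths,
# one VLAN over the electrical packet network and one over the optical
# circuit network, and track which path the cluster currently uses.
# VLAN ID ranges are illustrative assumptions.

class ClusterPaths:
    def __init__(self, cluster_id: int):
        self.electrical_vlan = 100 + cluster_id  # packet-switched path
        self.optical_vlan = 200 + cluster_id     # circuit-switched path
        self.active = "electrical"               # jobs start on packets

    def current_vlan(self) -> int:
        return (self.electrical_vlan if self.active == "electrical"
                else self.optical_vlan)

    def switch_to_optical(self) -> None:
        """Swap the cluster's flows to the optical circuit path."""
        self.active = "optical"

paths = ClusterPaths(cluster_id=3)
print(paths.current_vlan())  # 103 (electrical VLAN)
paths.switch_to_optical()    # e.g., a shuffle-heavy job was detected
print(paths.current_vlan())  # 203 (optical VLAN)
```

In the real system the network manager would install the corresponding flow entries on the ToR switches; here only the bookkeeping is shown.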
We introduce a new parameter named “shuffle ratio.” In Hadoop, the size of shuffle data varies depending not only on input data size but also on the type of job. As is well known, Hadoop systems consist of Map and Reduce phases. From the Map phase to the Reduce phase, sometimes huge data sets are moved to realize distributed processing; this operation is called “Shuffle.” To judge shuffle loads as shuffle-heavy or shuffle-light, we created the new parameter of shuffle ratio. Shuffle ratio is defined as the relative proportion of the input data that needs to be shuffled: shuffle ratio = (shuffle data size) / (input data size).
A shuffle-heavy Hadoop job will have a high shuffle ratio. First, we evaluated the shuffle ratio using two typical kinds of jobs: TeraSort and WordCount. We repeatedly processed input files of different sizes. The results of shuffle ratio versus input data size are plotted in Fig. 14. TeraSort, which is categorized as shuffle-heavy, shows high shuffle-ratio values, and WordCount, which is categorized as shuffle-light, shows small values. Because the shuffle ratio is largely independent of input data size, it can be estimated from a small portion of the input data rather than the whole input. Therefore, online shuffle-heavy/shuffle-light judgment of each job is possible.
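The shuffle-ratio computation and threshold judgment reduce to a few lines. The definition follows the text; the byte counts below are illustrative, not measured values:

```python
def shuffle_ratio(shuffle_bytes: int, input_bytes: int) -> float:
    """Shuffle ratio: amount of shuffle data relative to input data."""
    return shuffle_bytes / input_bytes

def is_shuffle_heavy(shuffle_bytes: int, input_bytes: int,
                     threshold: float = 0.4) -> bool:
    """Online judgment: the job is shuffle-heavy above the threshold."""
    return shuffle_ratio(shuffle_bytes, input_bytes) >= threshold

# TeraSort shuffles roughly as much data as it reads (ratio near 1),
# while WordCount shuffles only compact counts (sizes are illustrative).
print(is_shuffle_heavy(30_000, 30_000))  # True  (TeraSort-like)
print(is_shuffle_heavy(1_500, 30_000))   # False (WordCount-like)
```

Because the ratio is nearly constant over input size, this judgment can run online on a small sample of the input, as noted above.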
In our proposal, Hadoop jobs are executed in the following manner. When the cluster manager detects a job start, it selects an electrical packet-switched flow path to connect to servers in the cluster via the network manager. The cluster manager informs the traffic monitor of connection status. The traffic monitor checks the flow in the cluster and periodically informs the cluster manager of the traffic level. If the job has progressed from Map phase to Shuffle and Reduce phase, shuffle ratio is calculated and shuffle-heavy job judgment is executed in an online manner.
A job is judged as shuffle-heavy if its shuffle ratio exceeds a certain threshold (e.g., 0.4). After the judgment, the network manager determines a proper flow path route, either changing to the optical network or remaining on the electrical network. Note that the optical circuit switching network has higher energy efficiency than the electrical packet switching network; however, its resources are limited, so not all jobs can be assigned to the optical network.
We implemented the prototype system shown in Fig. 15. The Hadoop cluster was formed using Dell PowerEdge 320 servers, and 10 Gbps Ethernet switches were used to construct the ToRs and the electrical packet switching network. A micro-electromechanical system (MEMS) optical switch was used to form the optical circuit switching network. TeraSort and WordCount were processed in the Hadoop cluster. Hadoop slaves were created in (logically) separate racks and received tasks via the electrical or optical network. Both test jobs had an input file size of 30 GB. The shuffle-ratio threshold was set to 0.40.
Figures 16 and 17 show the shuffle data amount in the optical/electrical flow paths when the Hadoop cluster executed TeraSort and WordCount. The horizontal axis plots elapsed time (s). The left vertical axis shows the shuffle data size (MB) on the optical/electrical flow paths, and the right vertical axis shows the progress of the Map and Reduce phases.
In TeraSort, in the first 330 s, small amounts of data are transferred because only the Map task is executed. As the number of completed Map tasks increases, intermediate data that can be transferred is created, and data transfer (the Shuffle phase) starts. As shown in Fig. 16, in the Shuffle phase, a large amount of data transfer occurs repeatedly every few seconds. In the proposed architecture, the first 450 MB batch of shuffle data is transferred on the electrical network. After 20 s, the second batch of shuffle data is transferred via the optical network, and the amount of data transferred via the optical network then increases. Because the optical circuit switching network consumes less energy than the electrical packet switching network, this result shows that our Hadoop system provides the trigger for automatic flow routing. As a result, application-triggered automatic flow routing (swapping) between the electrical and optical networks is achieved.
In the WordCount case, 50 MB of the first batch of shuffle data is transferred via the electrical network. The shuffle data size is 1/9 of that in the TeraSort case, and the shuffle ratio is below 0.40. Thus, the controller rates the job as shuffle-light and transfers all shuffle data, as well as the Reduce phase’s data, over the electrical network. This result means that the optical network is not used for shuffle-light applications. Therefore, we avoid wasting optical network resources and can provide them to shuffle-heavy jobs being executed simultaneously.
Table 1 shows the theoretical energy consumption of the proposed system calculated from the measured results in Figs. 16 and 17. Optical circuit switching and electrical packet switching are assumed to have energy consumption values of 0.365 and 6.88 nJ/bit, respectively; the 0.365 nJ/bit value was derived from an optical packet switching application assuming a 40 Gbps line card. From these values, we can estimate the expected order of energy saving. In executing TeraSort, which is a typical shuffle-heavy job, the proposed system uses the optical flow path and achieves an energy reduction of 86% (nearly 72 J, as calculated from the measured results shown in Fig. 16). These results confirm that an application-triggered intra-DC network can be realized.
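A back-of-envelope check of such per-bit energy figures is straightforward. The per-bit values are those quoted in the text; the 1.5 GB data volume below is assumed for illustration, not a measured result, and the resulting saving is the per-bit upper bound (the 86% figure above is lower because part of the traffic stays electrical):

```python
# Energy to move a given data volume at a given switching cost in nJ/bit.
NJ = 1e-9  # joules per nanojoule

def switching_energy_joules(data_bytes: float, nj_per_bit: float) -> float:
    """E = (bits transferred) x (energy per bit)."""
    return data_bytes * 8 * nj_per_bit * NJ

data = 1.5e9  # 1.5 GB of shuffle data (illustrative assumption)
electrical = switching_energy_joules(data, 6.88)   # electrical packet
optical = switching_energy_joules(data, 0.365)     # optical circuit

print(f"electrical: {electrical:.1f} J")                # 82.6 J
print(f"optical:    {optical:.1f} J")                   # 4.4 J
print(f"saving:     {1 - optical / electrical:.0%}")    # 95%
```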
B. Flow-Classification-Based Automatic Flow Routing
As discussed in Section V.A, application-triggered automatic flow routing is viable. However, if Hadoop is not used in the DC, this approach is not applicable. Online flow classification is a preferred approach to solving this problem. It requires distinguishing “elephant flows” (EFs), which have large traffic volume and long duration, from “mice flows” (MFs), which have small traffic volume and short duration (within a few seconds). Two such online classification methods were proposed in [28,29], and their applicability to electrical packet/optical circuit hybrid switched DC networks has also been discussed.
With the development of high-speed optical switch devices, introducing not only optical circuit switching [24,25] but also optical slot switching into the DC network has become a hot topic [21,22,30–32]. To cope with both optical circuit and optical slot switching in the hybrid DC network, a new flow class must be defined; this middle-sized-traffic-volume, long-duration flow has been named the “doggy flow” (DF). Therefore, three-class (i.e., MF, DF, and EF) online flow classification is required. MFs are assigned to the electrical packet network, DFs to optical slots, and EFs to the optical circuit switching network. Development of a three-class online flow-classification scheme has started [33,34]. Implementation on the ToR is preferable but has not yet been achieved.
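The three-class mapping can be sketched as a threshold rule. The volume and duration thresholds below are illustrative assumptions; real online classifiers such as those in [28,29] work from samples and caches rather than exact per-flow counts:

```python
# Sketch: map a flow to a switching technology by volume and duration.
# Thresholds are illustrative assumptions, not values from the papers.

def classify_flow(volume_mb: float, duration_s: float) -> str:
    """Three-class flow classification: MF, DF (doggy), or EF."""
    if volume_mb >= 100 and duration_s >= 10:
        return "EF -> optical circuit"  # elephant flow
    if volume_mb >= 1 and duration_s >= 1:
        return "DF -> optical slot"     # doggy flow (middle class)
    return "MF -> electrical packet"    # mice flow

print(classify_flow(500, 60))   # EF -> optical circuit
print(classify_flow(20, 5))     # DF -> optical slot
print(classify_flow(0.1, 0.2))  # MF -> electrical packet
```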
Future networks must carry multiple new services to support the smart society. Each service has different characteristics and places different requirements on the network in terms of data amount and delay tolerance. This paper has introduced the concept of application-triggered automatic network configuration coordinated with processing functions and network resources. To create more flexible and dynamic networks, the migration of VMs and functions must be possible both vertically and horizontally. First, for the smart house and networked robot applications, we realized centralized cloud and edge computing coordination by vertical VM migration and reported on world-wide demonstrations. Next, we described horizontal VM migration for AD-cars: edge computers are switched to follow the vehicle, keeping response times under 10 ms while providing triple redundancy for trustworthy control. We are now constructing a testbed system for an AD-car at Keio University. Finally, to reduce intra-DC energy consumption, we introduced an application-triggered flow path routing method and successfully demonstrated Hadoop-based automatic path routing on an electrical packet and optical circuit hybrid switching network. Our application-triggered path-routing method makes it possible to realize energy-efficient and high-performance services for the smart society.
The work at Keio University was supported in part by the High-speed Optical Layer 1 Switch system for Time slot switching-based optical data center networks (HOLST) Project funded by the New Energy and Industrial Technology Development Organization (NEDO) of Japan.
1. C. T. Huynh and E. N. Huh, “Prediction technique for resource allocation in micro data center,” in Int. Conf. on Information Networking (ICOIN), Mar. 2015, pp. 483–486.
2. L. Liu, “SDN orchestration for dynamic end-to-end control of data center multi-domain optical networking,” China Commun., vol. 12, no. 8, pp. 10–21, Aug. 2015. [CrossRef]
3. Y. Sun, H. Song, A. J. Jara, and R. Bie, “Internet of things and big data analytics for smart and connected communities,” IEEE Access, vol. 4, pp. 766–773, Feb. 2016. [CrossRef]
4. M. Simsek, A. Aijaz, M. Dohler, J. Sachs, and G. Fettweis, “5G-enabled tactile internet,” IEEE J. Sel. Areas Commun., vol. 34, no. 3, pp. 460–473, Mar. 2016. [CrossRef]
5. Y. Kuwata, J. Teo, G. Fiore, S. Karaman, E. Frazzoli, and J. P. How, “Real-time motion planning with applications to autonomous urban driving,” IEEE Trans. Cont. Syst. Technol., vol. 17, no. 5, pp. 1105–1118, Sept. 2009. [CrossRef]
6. M. Marjani, F. Nasaruddin, A. Gani, A. Karim, I. A. T. Hashem, A. Siddiqa, and I. Yaqoob, “Big IoT data analytics: architecture, opportunities, and open research challenges,” IEEE Access, vol. 5, pp. 5247–5261, Mar. 2017. [CrossRef]
7. N. Yamanaka, S. Okamoto, Y. Imakiire, M. Arase, E. Oki, and M. Veeraraghavan, “The ACTION project: application coordinating with transport, IP and optical networks,” in 18th Int. Conf. on Transparent Optical Networks (ICTON), July 2016, pp. 1–4.
8. M. Akagi, R. Usui, Y. Arakawa, S. Okamoto, and N. Yamanaka, “Cooperating superpeers based service-parts discovery for ubiquitous grid networking (uGrid),” in 7th Int. Conf. on Optical Internet, Oct. 2008, pp. 1–2.
9. D. Ishii, K. Nakahara, S. Okamoto, and N. Yamanaka, “A novel IP routing/signaling based service provisioning concept for ubiquitous grid networking environment,” in IEEE Globecom Workshops, Dec. 2010, pp. 1746–1750.
10. Robot Operating System (ROS) [Online]. Available: http://www.ros.org/
11. M. Quigley, K. Conley, B. Gerkey, J. Faust, T. Foote, J. Leibs, R. Wheeler, and A. Ng, “ROS: an open-source robot operating system,” ICRA Workshop on Open Source Software, vol. 3, no. 3.2, p. 5, May 2009.
12. N. Yoshikane, T. Sato, Y. Isaji, C. Shao, M. Tacca, S. Okamoto, T. Miyazawa, T. Ohshima, C. Yokoyama, Y. Sumida, H. Sugiyama, M. Miyabe, T. Katagiri, N. Kakegawa, S. Matsumoto, Y. Ohara, I. Satou, A. Nakamura, S. Yoshida, K. Ishii, S. Kametani, J. Nicho, J. Meyer, S. Edwards, P. Evans, T. Tsuritani, H. Harai, M. Razo, D. Hicks, A. Fumagalli, and N. Yamanaka, “First demonstration of geographically unconstrained control of an industrial robot by jointly employing SDN-based optical transport networks and edge compute,” in 21st OptoElectronics and Communications Conf. (OECC) held jointly with Int. Conf. on Photonics in Switching (PS), Niigata, Japan, July 2016, pp. 1–3.
13. “Successful remote control of industrial robot by employing SDN-based optical network and cloud/edge computing technology,” iPOP2016 Whitepaper, June 2016 [Online]. Available: https://www.pilab.jp/ipop2016/exhibition/whitepaper.html
14. T. Sato, C. Shao, J. Nicho, N. Yoshikane, S. Okamoto, N. Yamanaka, M. Razo, M. Tacca, and A. Fumagalli, “Remote control experiments of an industrial robot using two distributed robot controllers,” in Net-Centric, Arlington, Virginia, Oct. 2017.
15. T. Sato, K. Ashizawa, H. Takeshita, S. Okamoto, N. Yamanaka, and E. Oki, “Logical optical line terminal placement optimization in the elastic lambda aggregation network with optical distribution network constraints,” J. Opt. Commun. Netw., vol. 7, no. 9, pp. 928–941, Sept. 2015. [CrossRef]
16. S. Okamoto, T. Sato, and N. Yamanaka, “Logical optical line terminal technologies towards flexible and highly reliable metro- and access-integrated networks,” Proc. SPIE, vol. 10129, p. 1012907, Jan. 2017. [CrossRef]
17. T. Kanai, Y. Senoo, K. Asaka, J. Sugawa, H. Tamai, H. Saito, N. Minato, A. Oguri, S. Sumita, T. Sato, N. Kikuchi, S. Matsushita, T. Tsuritani, S. Okamoto, N. Yamanaka, K. Suzuki, and A. Otaka, “First-time demonstration of automatic service restoration by using inter-central-office OLT handover and optical path switching in metro-access network,” in 43rd European Conf. on Optical Communication (ECOC), Sept. 2017, paper W.3.D.3.
18. M. Omae, N. Hashimoto, T. Fujioka, and H. Shimizu, “The application of RTK-GPS and steer-by-wire technology to automatic driving of vehicles and an evaluation of driver behavior,” IATSS Res., vol. 30, no. 2, pp. 29–38, 2006. [CrossRef]
19. Apache Hadoop [Online]. Available: http://hadoop.apache.org/
20. G. Wang, T. S. Eugene Ng, and A. Shaikh, “Programming your network at run-time for big data applications,” in 1st Workshop on Hot Topics in Software Defined Networks (HotSDN), Aug. 2012, pp. 103–108.
21. M. Hirono, T. Sato, J. Matsumoto, S. Okamoto, and N. Yamanaka, “HOLST: architecture design of energy-efficient data center network based on ultra high-speed optical switch,” in 23rd IEEE Int. Symposium on Local and Metropolitan Area Networks (LANMAN), June 2017.
22. A. Yamashita, W. Muro, M. Hirono, T. Sato, S. Okamoto, N. Yamanaka, and M. Veeraraghavan, “Hadoop triggered opt/electrical data-center orchestration architecture for reducing power consumption,” in 19th Int. Conf. on Transparent Optical Networks (ICTON), July 2017, paper Th.A6.4, pp. 1–4.
23. M. Hirono, W. Muro, S. Sekigawa, T. Sato, S. Okamoto, and N. Yamanaka, “Hadoop-based application triggered automatic flow switching in electrical/optical hybrid data-center network,” in 43rd European Conf. on Optical Communication (ECOC), Sept. 2017, paper W.2.A.2.
24. N. Farrington, G. Porter, S. Radhakrishnan, H. H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat, “Helios: a hybrid electrical/optical switch architecture for modular data centers,” in ACM SIGCOMM, Aug. 2010, pp. 339–350.
25. K. Christodoulopoulos, D. Lugones, K. Katrinis, M. Ruffini, and D. O’Mahony, “Performance evaluation of a hybrid optical/electrical interconnect,” J. Opt. Commun. Netw., vol. 7, no. 3, pp. 193–204, Mar. 2015. [CrossRef]
26. X. Wang, M. Veeraraghavan, E. Oki, S. Okamoto, and N. Yamanaka, “Dynamic optical circuits in datacenter networks for shuffle-heavy Hadoop applications,” in 12th Int. Conf. on IP+Optical Network (iPOP), June 2016, paper T4-1.
27. S. J. B. Yoo, “Energy efficiency in the future internet: the role of optical packet switching and optical-label switching,” IEEE J. Sel. Top. Quantum Electron., vol. 17, no. 2, pp. 406–418, Mar. 2011. [CrossRef]
28. Y. Lu, M. Wang, B. Prabhakar, and F. Bonomi, “Elephant trap: a low cost device for identifying large flows,” in Proc. 15th IEEE Symposium on High-Performance Interconnects (HOTI), Aug. 2007, pp. 99–105.
29. T. Pan, X. Guo, C. Zhang, W. Meng, and B. Liu, “ALFE: a replacement policy to cache elephant flows in the presence of mice flooding,” in IEEE Int. Conf. on Communications (ICC), Ottawa, Ontario, June 2012, pp. 2961–2965.
30. K. Tokas, I. Patronas, C. Spatharakis, D. Reisis, P. Bakopoulos, and H. Avramopoulos, “Slotted TDMA and optically switched network for disaggregated datacenters,” in 19th Int. Conf. on Transparent Optical Networks (ICTON), July 2017, paper Mo.B3.4.
31. J.-M. Estarán, E. Dutisseuil, H. Mardoyan, G. de Valicourt, A. Dupas, Q. P. Van, D. Verchere, B. Uscumlic, P. Dong, Y.-K. Chen, S. Bigo, and Y. Pointurier, “Cloud-BOSS intra-data center network: on-demand QoS guarantees via μs optical slot switching,” in 43rd European Conf. on Optical Communication (ECOC), Sept. 2017, paper We.2.A.3.
32. C. Jackson, K. Kondepu, Y. Ou, A. Beldachi, A. Pagès Cruz, F. Agraz, F. Moscatelli, W. Miao, V. Kamchevska, N. Calabretta, G. Landi, S. Spadaro, R. Nejabati, and D. Simeonidou, “COSIGN: a complete SDN enabled all-optical architecture for data centre virtualization with time and space multiplexing,” in 43rd European Conf. on Optical Communication (ECOC), Sept. 2017, paper We.2.A.4.
33. Y. Imakiire, S. Okamoto, N. Yamanaka, and E. Oki, “A study on traffic monitoring time method for high-speed detection of elephant flows,” in IEEE High Performance Switching and Routing Workshop, June 2016.
34. Y. Imakiire, T. Sato, S. Okamoto, and N. Yamanaka, “Proposal of the data center-centric flow classification method using traffic patterns,” IEICE Tech. Rep. Photon. Netw., vol. 117, no. 186, pp. 63–68, Aug. 2017.
Naoaki Yamanaka (M’85–SM’96–F’00) received B.E., M.E., and Ph.D. degrees in engineering from Keio University, Yokohama, Japan, in 1981, 1983, and 1991, respectively. In 1983, he joined the Nippon Telegraph and Telephone Corporation’s (NTT’s) Communication Switching Laboratories in Tokyo, Japan. He is now researching future optical IP networks and optical MPLS router systems. He is currently a Professor in the Department of Information and Computer Science at Keio University, Vice Chair of the Keio Leading-Edge Laboratory of Science and Technology, and Chair of Photonic Internet Labs. He has published over 120 peer-reviewed journal and transaction articles, written over 200 international conference papers, and been awarded 174 patents, including 17 international patents. He received Best of Conference Awards from the 40th, 44th, and 48th IEEE Electronic Components and Technology Conferences in 1990, 1994, and 1998; the TELECOM System Technology Prize from the Telecommunications Advancement Foundation in 1994; the IEEE CPMT Transactions Part B: Best Transactions Paper Award in 1996; the IEICE Transaction Paper Award in 1999; the IEEE ISAS2011 Best Paper Award in 2011; and the IEICE Achievement Award in 2015. He is a technical editor of IEEE Communications Magazine, the broadband network area editor of IEEE Communications Surveys, a former editor of IEICE Transactions, the TAC Chair of the Asia Pacific Board at the IEEE Communications Society, and a board member of the IEEE CPMT Society. He is an IEICE Fellow.
Satoru Okamoto (M’93–SM’03) received his B.E., M.E., and Ph.D. degrees in electronic engineering from Hokkaido University, Hokkaido, Japan, in 1986, 1988, and 1994, respectively. In 1988, he joined Nippon Telegraph and Telephone Corporation (NTT), Japan. He is now researching future IP + optical network technologies and their applications over photonic network technologies. He is currently a Project Professor at Keio University, Yokohama, Japan. He has published over 80 peer-reviewed journal and transaction articles, written over 160 international conference papers, and been awarded 50 patents, including five international patents. He received the Young Researchers Award and the Achievement Award from the IEICE of Japan in 1995 and 2000, respectively. He also received the IEICE/IEEE HPSR2002 Outstanding Paper Award, a Certification of Appreciation from ISOCORE and PIL in 2008, the IEICE Communications Society Best Paper Award, the IEEE ISAS2011 Best Paper Award in 2011, and a Certification of Appreciation for the iPOP Conferences 2005–2014 in 2014. He was an associate editor of the IEICE Transactions on Communications (2006–2011), the chair of the IEICE Technical Committee on Photonic Network (PN) (2010–2011), and an associate editor of Optics Express of The Optical Society (OSA) (2006–2012). He is an IEICE Fellow.
Masayuki Hirono (S’17) received B.E. and M.E. degrees in engineering from Keio University, Yokohama, Japan, in 2016 and 2018, respectively. His research focuses on energy reduction in future data-center networks. He has published two international conference papers.
Yukihiro Imakiire (S’15) received his B.E. and M.E. degrees in engineering from Keio University, Yokohama, Japan, in 2016 and 2018, respectively. He has been awarded one patent.
Wataru Muro received a B.E. degree in engineering from Keio University, Yokohama, Japan, in 2017. His research interests include the Hadoop system, data-center network architectures, and software engineering.
Takehiro Sato (S’11–M’16) received B.E., M.E., and Ph.D. degrees in engineering from Keio University, Japan, in 2010, 2011, and 2016, respectively. He is currently an assistant professor in the Graduate School of Informatics, Kyoto University, Japan. His research interests include communication protocols and network architectures for next-generation optical networks. From 2011 to 2012, he was a research assistant in the Keio University Global COE Program, “High-level Global Cooperation for Leading-Edge Platform on Access Spaces” by the Ministry of Education, Culture, Sports, Science and Technology, Japan. From 2012 to 2015, he was a research fellow of the Japan Society for the Promotion of Science. From 2016 to 2017, he was a research associate in the Graduate School of Science and Technology, Keio University, Japan. He is a member of the IEICE.
Eiji Oki (M’95–SM’05–F’13) is a Professor at Kyoto University, Japan. He received B.E. and M.E. degrees in instrumentation engineering and a Ph.D. in electrical engineering from Keio University, Yokohama, Japan, in 1991, 1993, and 1999, respectively. In 1993, he joined Nippon Telegraph and Telephone Corporation (NTT) Communication Switching Laboratories, Tokyo, Japan. He has been researching network design and control, traffic-control methods, and high-speed switching systems. From 2000 to 2001, he was a Visiting Scholar at the Polytechnic Institute of New York University, Brooklyn, New York, where he was involved in designing terabit switch/router systems. He was engaged in researching and developing high-speed optical IP backbone networks with NTT Laboratories. He was with the University of Electro-Communications, Tokyo, Japan, from July 2008 to February 2017, and joined Kyoto University in March 2017. He has been active in the standardization of the path computation element (PCE) and GMPLS in the IETF, and has written more than 10 IETF RFCs. He has received several prestigious awards, including the 1999 Excellent Paper Award presented by IEICE, the 2001 Asia-Pacific Outstanding Young Researcher Award presented by the IEEE Communications Society, the 2010 Telecom System Technology Prize from the Telecommunications Advancement Foundation, the 2015 IEICE Achievement Award, and the IEEE GLOBECOM 2015 Best Paper Award. He is a fellow of IEICE.
Andrea Fumagalli is a Professor of Electrical and Computer Engineering at the University of Texas at Dallas and the Head of the Open Networking Advanced Research (OpNeAR) Lab at UT-Dallas. He holds Ph.D. (1992) and Laurea (1987) degrees in electrical engineering, both from the Politecnico di Torino, Torino, Italy. From 1992 to 1998 he was an Assistant Professor in the Electronics Engineering Department at the Politecnico di Torino, Italy. He joined UT-Dallas as an Associate Professor of Electrical Engineering in August 1997 and was elevated to the rank of Professor in 2005. He then served as the head of the Telecommunications Engineering (TE) program from 2007 to 2012. His research interests include aspects of optical, wireless, and cloud networks, and related protocol design and performance evaluation. He has published more than 200 papers in refereed journals and conferences. He has been involved in a number of research projects focusing on packet-switched, circuit-switched, and survivable network architectures.
Malathi Veeraraghavan is a Professor in the Charles L. Brown Department of Electrical & Computer Engineering at the University of Virginia (UVA). She received the B.Tech. degree from the Indian Institute of Technology (Madras) in 1984, and M.S. and Ph.D. degrees from Duke University in 1985 and 1988, respectively. After receiving the Distinguished Member of Technical Staff award in a 10-year career at Bell Laboratories, she joined the faculty at Polytechnic University, Brooklyn, New York, where she was an Associate Professor of Electrical Engineering from 1999 to 2002. She joined the University of Virginia as Director of the Computer Engineering Program, with a joint faculty position in the Departments of Electrical and Computer Engineering and Computer Science, in 2003. Her research work has been primarily in high-speed networking, including optical, datacenter, virtual-circuit, and grid networks. She has also worked on cellular, WiFi, and vehicular networks. Her current research funding is from the National Science Foundation and the U.S. Department of Energy. She holds 30 patents, has over 138 publications, and has received six best-paper awards. She served as the Technical Program Committee Co-Chair for the High-Speed Networking Symposium at IEEE ICC 2013, as Technical Program Committee Chair for IEEE ICC 2002, and as an Associate Editor of the IEEE/ACM Transactions on Networking. She was General Chair for the IEEE Computer Communications Workshop in 2000, and served as an area editor for IEEE Communications Surveys. She served as editor of IEEE ComSoc e-News and as an associate editor of the IEEE Transactions on Reliability from 1992 to 1994.