Scientific congresses, symposiums and conference proceedings : Paper published in a book
Engineering, computing & technology : Computer science
Computational Sciences
http://hdl.handle.net/10993/37223
Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
English
Vega Moreno, Carlos Gonzalo [Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones; Naudit HPCN]
Zazo, José Fernando [Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones; Naudit HPCN]
Meyer, Hugo [Barcelona Supercomputing Center]
Zyulkyarov, Ferad [Barcelona Supercomputing Center]
Lopez-Buedo, Sergio [Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones; Naudit HPCN]
Aracil, Javier [Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones; Naudit HPCN]
15-Feb-2018
2017 IEEE 19th International Conference on High Performance Computing and Communications; IEEE 15th International Conference on Smart City; IEEE 3rd International Conference on Data Science and Systems (HPCC/SmartCity/DSS)
Vega Moreno, Carlos Gonzalo
Yes
International
IEEE 19th International Conference on High Performance Computing and Communications
18-20 Dec. 2017
[en] computer centres; data analysis; resource allocation; service-oriented architecture; storage management; scalability boundaries; disaggregated architecture; high-level network data analysis; traditional data centers; rigid architecture; fit-for-purpose servers; provision resources; average workload; heterogeneous data centers; cost-efficient architectures; resource provisioning; data-intensive applications; server-oriented architectures; proactive network analysis system; remote memory resources; memory usage; dReDBox; data centers; optical switches; servers; data analysis; hardware; memory management
[en] Traditional data centers are designed with a rigid architecture of fit-for-purpose servers that provision resources beyond the average workload in order to deal with occasional peaks of data. Heterogeneous data centers are pushing towards more cost-efficient architectures with better resource provisioning. In this paper we study the feasibility of using disaggregated architectures for data-intensive applications, in contrast to the monolithic approach of server-oriented architectures. In particular, we tested a proactive network analysis system in which the workload demands are highly variable. In the context of the dReDBox disaggregated architecture, the results show that the overhead caused by using remote memory resources is significant, between 66% and 80%, but we also observed that memory usage is one order of magnitude higher in the stress case than under average workloads. Therefore, dimensioning memory for the worst case in conventional systems results in a notable waste of resources. Finally, we found that, for the selected use case, parallelism is limited by memory. Using a disaggregated architecture will therefore allow for increased parallelism, which, at the same time, will mitigate the overhead caused by remote memory.
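The abstract's argument can be illustrated with a back-of-envelope provisioning model. Only the two ratios come from the paper (roughly 10x memory use in the stress case versus the average, and a 66-80% runtime overhead for remote memory); the cluster size, per-server memory, and the fraction of servers peaking at once are hypothetical placeholders chosen purely for illustration.

```python
# Illustrative model of the provisioning trade-off described in the abstract.
# Paper-reported ratios: ~10x stress-vs-average memory, 66-80% remote overhead.
# All other numbers below are hypothetical placeholders.

AVG_MEM_GB = 8          # hypothetical average working set per server
STRESS_FACTOR = 10      # paper: stress case needs ~10x the average memory
REMOTE_OVERHEAD = 0.80  # paper: remote memory adds 66-80% runtime overhead

servers = 100           # hypothetical cluster size

# Monolithic design: every server must be sized for its own worst case.
monolithic_total = servers * AVG_MEM_GB * STRESS_FACTOR

# Disaggregated design: local memory covers the average load, while a
# shared remote pool absorbs peaks (assume 10% of servers peak at once).
peak_fraction = 0.10
pool = servers * peak_fraction * AVG_MEM_GB * (STRESS_FACTOR - 1)
disaggregated_total = servers * AVG_MEM_GB + pool

print(f"monolithic provisioning:    {monolithic_total:.0f} GB")
print(f"disaggregated provisioning: {disaggregated_total:.0f} GB")
print(f"memory saved:               {1 - disaggregated_total / monolithic_total:.0%}")
```

Under these assumed numbers the pooled design provisions roughly a fifth of the memory of the monolithic one, which is the trade that makes the 66-80% remote-access overhead worth paying, especially since the abstract notes that the freed memory also enables more parallelism.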
10.1109/HPCC-SmartCity-DSS.2017.45
H2020 ; 687632 - dReDBox - Disaggregated Recursive Datacentre-in-a-Box

File(s) associated to this reference:
Fulltext file: 08291948.pdf (Publisher postprint, 1.01 MB, limited access; request a copy)

All documents in ORBilu are protected by a user license.