Keywords :
Data ingestion; file transfer; unified architecture; fast data movement; computing continuum
Abstract :
The computing continuum enables novel big data use cases across the edge-cloud-supercomputer spectrum. Fast, high-volume data movement workflows rely on state-of-the-art architectures built on top of open-source stream ingestion and file transfer tools. Unfortunately, users struggle when dealing with such diverse architectures: stream ingestion was designed for small datasets and low latency, while file transfer was designed for large datasets and high throughput. In this paper, we propose to unify ingestion and transfer, introduce architectural design principles, and discuss future implementation challenges.
Disciplines :
Computer science
Author, co-author :
TARIQ, Muhammad Arslan ; University of Luxembourg > Interdisciplinary Centre for Security, Reliability and Trust (SNT) > PCOG
MARCU, Ovidiu-Cristian ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
DANOY, Grégoire ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
BOUVRY, Pascal ; University of Luxembourg > Faculty of Science, Technology and Medicine (FSTM) > Department of Computer Science (DCS)
Language :
English
Title :
Towards Unified Data Ingestion and Transfer for the Computing Continuum
Publication date :
n.d.
Number of pages :
4
Commentary :
This work is partially funded by the SnT-LuxProvide partnership on bridging clouds and supercomputers and by the Fonds National de la Recherche Luxembourg (FNR) POLLUX program under the SERENITY Project (ref. C22/IS/17395419).