Paper Title
Engineering and Experimentally Benchmarking a Container-based Edge Computing System
Paper Authors
Paper Abstract
While edge computing is envisioned to superbly serve latency-sensitive applications, implementation-based studies benchmarking its performance are few and far between. To address this gap, we engineer a modular edge-cloud computing system architecture built on the latest advances in containerization techniques, using Kafka for data streaming, Docker as the application platform, and Firebase Cloud as the real-time database system. We benchmark the performance of the system in terms of scalability, resource utilization, and latency by comparing three scenarios: cloud-only, edge-only, and combined edge-cloud. The measurements show that the edge-only solution outperforms the other scenarios only when deployed with data located at a single edge, i.e., without edge-wide data synchronization. For applications requiring data synchronization through the cloud, edge-cloud scales around 10 times better than cloud-only up to a certain number of concurrent users in the system; above this point, cloud-only scales better. In terms of resource utilization, we observe that, whereas the mean utilization increases linearly with the number of user requests, the maximum values for memory and network I/O increase heavily with an increasing amount of data.
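As a rough illustration of the data path the abstract describes, the sketch below shows an edge-side service consuming messages from a local Kafka topic and mirroring them to a Firebase Realtime Database over its REST API. The broker address, topic name, Firebase project URL, and message layout are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the edge-to-cloud data path: a service at the edge consumes
# readings from a Kafka topic (data streaming layer) and pushes them to a
# Firebase Realtime Database (cloud synchronization layer).
# Topic name, broker address, Firebase URL, and message format are assumptions.
import json

import requests
from kafka import KafkaConsumer  # pip install kafka-python

KAFKA_BROKER = "localhost:9092"                            # assumed edge-local broker
TOPIC = "sensor-readings"                                  # hypothetical topic name
FIREBASE_URL = "https://example-project.firebaseio.com"    # placeholder project URL


def sync_edge_to_cloud() -> None:
    """Consume readings at the edge and mirror them to the cloud database."""
    consumer = KafkaConsumer(
        TOPIC,
        bootstrap_servers=KAFKA_BROKER,
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="latest",
    )
    for message in consumer:
        reading = message.value  # e.g. {"device": "edge-01", "value": 23.5}
        # Firebase Realtime Database REST API: POST appends the record under the path.
        response = requests.post(f"{FIREBASE_URL}/readings.json", json=reading)
        response.raise_for_status()


if __name__ == "__main__":
    sync_edge_to_cloud()
```

In the benchmarked architecture, a service of this kind would run inside a Docker container at the edge node; the three scenarios then differ in whether user requests are served from the edge only, from the cloud database only, or from the combined edge-cloud deployment.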