Paper Title
Let's Trace It: Fine-Grained Serverless Benchmarking using Synchronous and Asynchronous Orchestrated Applications
Paper Authors
Paper Abstract
Making serverless computing widely applicable requires detailed performance understanding. Although contemporary benchmarking approaches exist, they report only coarse results, do not apply distributed tracing, do not consider asynchronous applications, and provide limited capabilities for (root cause) analysis. Addressing this gap, we design and implement ServiBench, a serverless benchmarking suite. ServiBench (i) leverages synchronous and asynchronous serverless applications representative of production usage, (ii) extrapolates cloud-provider data to generate realistic workloads, (iii) conducts comprehensive, end-to-end experiments to capture application-level performance, (iv) analyzes results using a novel approach based on (distributed) serverless tracing, and (v) comprehensively supports serverless performance analysis. With ServiBench, we conduct comprehensive experiments on AWS, covering five common performance factors: median latency, cold starts, tail latency, scalability, and dynamic workloads. We find that the median end-to-end latency of serverless applications is often dominated not by function computation but by external service calls, orchestration, or trigger-based coordination. We release the collected experimental data under FAIR principles and ServiBench as a tested, extensible open-source tool.
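For intuition, the abstract's central finding, that end-to-end latency is often dominated by external calls, orchestration, or trigger-based coordination rather than by function computation, amounts to a per-category breakdown of the spans in each distributed trace. The sketch below is not ServiBench's implementation or data model; the span categories, field names, and toy durations are assumptions chosen only to illustrate the kind of breakdown described.

```python
from dataclasses import dataclass
from statistics import median
from collections import defaultdict

@dataclass
class Span:
    """One timed segment of a trace (names and categories are illustrative)."""
    name: str
    category: str        # e.g. "computation", "external_service", "orchestration", "trigger"
    duration_ms: float

def latency_breakdown(trace: list[Span]) -> dict[str, float]:
    """Sum span durations per category for a single end-to-end trace."""
    totals: dict[str, float] = defaultdict(float)
    for span in trace:
        totals[span.category] += span.duration_ms
    return dict(totals)

def median_share(traces: list[list[Span]], category: str) -> float:
    """Median fraction of end-to-end latency attributed to one category."""
    shares = []
    for trace in traces:
        totals = latency_breakdown(trace)
        end_to_end = sum(totals.values())
        shares.append(totals.get(category, 0.0) / end_to_end if end_to_end else 0.0)
    return median(shares)

# Toy trace in which coordination and external calls dominate function computation.
example_trace = [
    Span("api-gateway trigger", "trigger", 35.0),
    Span("lambda handler", "computation", 12.0),
    Span("dynamodb call", "external_service", 48.0),
    Span("step-functions transition", "orchestration", 60.0),
]
print(median_share([example_trace], "computation"))  # ~0.08: computation is a small share
```

In this toy breakdown, function computation accounts for well under 10% of end-to-end latency, mirroring (only qualitatively) the pattern the paper reports for its AWS experiments.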