Paper Title
Decentralized Edge-to-Cloud Load-balancing: Service Placement for the Internet of Things
Paper Authors
Paper Abstract
The Internet of Things (IoT) requires a new processing paradigm that inherits the scalability of the cloud while minimizing network latency by using resources closer to the network edge. Building such flexibility into the edge-to-cloud continuum, a distributed networked ecosystem of heterogeneous computing resources, is challenging. Furthermore, IoT traffic dynamics and the rising demand for low-latency services foster the need for minimizing response time and balancing service placement. Load-balancing for fog computing thus becomes a cornerstone of cost-effective system management and operation. This paper studies two optimization objectives and formulates a decentralized load-balancing problem for IoT service placement: (global) IoT workload balance and (local) quality of service (QoS), in terms of minimizing the costs of deadline violations, service deployment, and unhosted services. The proposed solution, EPOS Fog, introduces a decentralized multi-agent system for collective learning that utilizes edge-to-cloud nodes to jointly balance the input workload across the network and minimize the costs involved in service execution. The agents locally generate possible assignments of requests to resources and then cooperatively select an assignment such that their combination maximizes edge utilization while minimizing service execution cost. Extensive experimental evaluation with realistic Google cluster workloads on various networks demonstrates the superior performance of EPOS Fog in terms of workload balance and QoS, compared to approaches such as First Fit and exclusively Cloud-based placement. The results confirm that EPOS Fog reduces service execution delay by up to 25% and improves the load balance of network nodes by up to 90%. The findings also demonstrate how distributed computational resources on the edge can be utilized more cost-effectively by harvesting collective intelligence.
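The core idea sketched in the abstract, agents locally generating candidate assignments (plans) of their requests to edge/cloud nodes and then cooperatively selecting one plan per agent so that the combined load stays balanced while local execution costs stay low, can be illustrated with a minimal sketch. The Python below is not the authors' implementation of EPOS Fog; it is a toy, one-pass greedy coordination under assumed names (`Plan`, `NUM_NODES`, `LAMBDA`, `greedy_selection`) that only shows the shape of the plan-generation and selection step.

```python
import random
from statistics import pvariance

# Illustrative sketch (not the authors' implementation): each agent holds a few
# candidate "plans" -- possible assignments of its service requests to nearby
# edge/cloud nodes -- expressed as a load vector over the network plus a local
# execution cost. A single greedy coordination pass then picks one plan per
# agent so that the aggregate load stays balanced (low variance) while local
# costs stay small. NUM_NODES, PLANS_PER_AGENT, and LAMBDA are assumptions
# made for this example.

NUM_NODES = 5          # edge/cloud nodes in the toy network
PLANS_PER_AGENT = 3    # candidate assignments each agent generates locally
LAMBDA = 0.1           # weight trading off local cost against global balance

class Plan:
    def __init__(self, load, cost):
        self.load = load   # per-node workload this plan would add
        self.cost = cost   # local cost (e.g. deployment + deadline risk)

def random_plan():
    """Generate a toy plan: one unit of workload on a random node."""
    load = [0.0] * NUM_NODES
    load[random.randrange(NUM_NODES)] = 1.0
    return Plan(load, cost=random.uniform(0.0, 1.0))

def combined_objective(aggregate, plan):
    """Global balance (variance of the total load) plus weighted local cost."""
    total = [a + l for a, l in zip(aggregate, plan.load)]
    return pvariance(total) + LAMBDA * plan.cost

def greedy_selection(agents_plans):
    """One coordination pass: each agent picks the plan that best improves
    the combined objective given the selections made so far."""
    aggregate = [0.0] * NUM_NODES
    selections = []
    for plans in agents_plans:
        best = min(plans, key=lambda p: combined_objective(aggregate, p))
        aggregate = [a + l for a, l in zip(aggregate, best.load)]
        selections.append(best)
    return selections, aggregate

if __name__ == "__main__":
    random.seed(42)
    agents_plans = [[random_plan() for _ in range(PLANS_PER_AGENT)]
                    for _ in range(20)]
    selections, aggregate = greedy_selection(agents_plans)
    print("aggregate load per node:", aggregate)
    print("total local cost:", sum(p.cost for p in selections))
```

In the paper's setting, the selection step is decentralized and iterative rather than the single sequential pass shown here; the sketch only conveys how a balance objective over the aggregate load can be traded off against per-agent execution costs.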