Title

CoT: Decentralized Elastic Caches for Cloud Environments

Authors

Victor Zakhary, Lawrence Lim, Divyakant Agrawal, Amr El Abbadi

Abstract

Distributed caches are widely deployed to serve social networks and web applications at billion-user scales. This paper presents Cache-on-Track (CoT), a decentralized, elastic, and predictive caching framework for cloud environments. CoT proposes a new cache replacement policy specifically tailored for small front-end caches that serve skewed workloads. Front-end servers use a heavy hitter tracking algorithm to continuously track the top-k hot keys. CoT dynamically caches the hottest C keys out of the tracked keys. Our experiments show that CoT's replacement policy consistently outperforms the hit-rates of LRU, LFU, and ARC for the same cache size on different skewed workloads. Also, CoT slightly outperforms the hit-rate of LRU-2 when both policies are configured with the same tracking (history) size. CoT achieves server-side load-balance with 50% to 93.75% less front-end cache in comparison to other replacement policies.
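As a rough illustration of the idea described in the abstract, the sketch below pairs a Space-Saving-style heavy hitter tracker (a hypothetical choice; the abstract only says "a heavy hitter tracking algorithm") with a small front-end cache that holds the hottest C of the k tracked keys. The class name `CoTSketch` and the `fetch` callback are illustrative assumptions, not the paper's actual API.

```python
class CoTSketch:
    """Illustrative sketch (not the paper's implementation): track the
    top-k hot keys with a Space-Saving-style counter, and cache only the
    hottest C of them in the front-end."""

    def __init__(self, k, c):
        assert c <= k
        self.k = k        # tracker (history) capacity
        self.c = c        # front-end cache capacity, C <= k
        self.counts = {}  # tracked key -> estimated access frequency
        self.cache = {}   # cached key -> value (keys always a subset of counts)

    def access(self, key, fetch):
        """Return (value, hit). `fetch` pulls a missed key from the back-end."""
        # Serve from the local front-end cache when possible.
        if key in self.cache:
            self.counts[key] += 1
            return self.cache[key], True

        # Space-Saving-style tracker update: when the tracker is full,
        # replace the minimum-count key and let the newcomer inherit its count.
        if key in self.counts:
            self.counts[key] += 1
        elif len(self.counts) < self.k:
            self.counts[key] = 1
        else:
            victim = min(self.counts, key=self.counts.get)
            inherited = self.counts.pop(victim)
            self.cache.pop(victim, None)  # keep cache a subset of the tracker
            self.counts[key] = inherited + 1

        value = fetch(key)  # cache miss: go to the back-end caching layer

        # Cache the key only if it is among the hottest C tracked keys.
        hottest = sorted(self.counts, key=self.counts.get, reverse=True)[:self.c]
        if key in hottest:
            if len(self.cache) >= self.c:
                coldest = min(self.cache, key=self.counts.get)
                del self.cache[coldest]
            self.cache[key] = value
        return value, False
```

On a skewed stream, the few heavy hitters quickly accumulate high counts, enter the size-C cache, and are served locally thereafter, which is the effect the abstract attributes to small front-end caches under skewed workloads.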
