Paper Title


EyeCoD: Eye Tracking System Acceleration via FlatCam-based Algorithm & Accelerator Co-Design

Paper Authors

Haoran You, Cheng Wan, Yang Zhao, Zhongzhi Yu, Yonggan Fu, Jiayi Yuan, Shang Wu, Shunyao Zhang, Yongan Zhang, Chaojian Li, Vivek Boominathan, Ashok Veeraraghavan, Ziyun Li, Yingyan Celine Lin

Paper Abstract


Eye tracking has become an essential human-machine interaction modality for providing immersive experiences in numerous virtual and augmented reality (VR/AR) applications desiring high throughput (e.g., 240 FPS), a small form factor, and enhanced visual privacy. However, existing eye tracking systems are still limited by their: (1) large form factor, largely due to the adopted bulky lens-based cameras; and (2) high communication cost required between the camera and backend processor, thus prohibiting their broader adoption. To this end, we propose a lensless FlatCam-based eye tracking algorithm and accelerator co-design framework dubbed EyeCoD to enable eye tracking systems with a much-reduced form factor and boosted system efficiency without sacrificing tracking accuracy, paving the way for next-generation eye tracking solutions. On the system level, we advocate the use of lensless FlatCams to meet the small-form-factor need of mobile eye tracking systems. On the algorithm level, EyeCoD integrates a predict-then-focus pipeline that first predicts the region-of-interest (ROI) via segmentation and then focuses only on the ROI to estimate gaze directions, greatly reducing redundant computations and data movements. On the hardware level, we further develop a dedicated accelerator that (1) integrates a novel workload orchestration between the aforementioned segmentation and gaze estimation models, (2) leverages intra-channel reuse opportunities for depth-wise layers, and (3) utilizes input feature-wise partition to save activation memory size. On-silicon measurement validates that EyeCoD consistently reduces both communication and computation costs, leading to overall system speedups of 10.95x, 3.21x, and 12.85x over CPUs, GPUs, and a prior-art eye tracking processor called CIS-GEP, respectively, while maintaining tracking accuracy.
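To make the algorithm-level idea concrete, below is a minimal sketch of a predict-then-focus pipeline: a segmentation model produces an eye-region mask, a fixed-size crop is taken around the mask's center, and gaze estimation runs only on that crop. The placeholder networks (TinySegNet, TinyGazeNet), the 0.5 threshold, and the 64x64 crop size are all illustrative assumptions, not the paper's actual models or settings.

```python
# Minimal sketch of a predict-then-focus pipeline, assuming hypothetical
# placeholder models; the paper's actual networks and crop logic differ.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Hypothetical stand-in for the ROI segmentation model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, x):
        # Per-pixel eye-region probability map, same spatial size as input.
        return torch.sigmoid(self.conv(x))

class TinyGazeNet(nn.Module):
    """Hypothetical stand-in for the gaze estimation model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 2),  # (yaw, pitch) gaze direction
        )

    def forward(self, roi):
        return self.net(roi)

def predict_then_focus(frame, seg_model, gaze_model, crop=64):
    """Segment first, then run gaze estimation only on the ROI crop."""
    mask = seg_model(frame)                      # (1, 1, H, W) probabilities
    ys, xs = torch.nonzero(mask[0, 0] > 0.5, as_tuple=True)
    if len(ys) == 0:                             # no eye region detected
        return None
    cy, cx = int(ys.float().mean()), int(xs.float().mean())
    h, w = frame.shape[-2:]
    top = min(max(cy - crop // 2, 0), h - crop)
    left = min(max(cx - crop // 2, 0), w - crop)
    roi = frame[..., top:top + crop, left:left + crop]
    return gaze_model(roi)                       # (1, 2) gaze direction

frame = torch.rand(1, 1, 240, 320)               # one (grayscale) sensor frame
gaze = predict_then_focus(frame, TinySegNet(), TinyGazeNet())
print(gaze)
```

Because the downstream gaze model only ever sees the small ROI crop rather than the full frame, both the compute for gaze estimation and the activation traffic between stages shrink, which is the source of the redundancy reduction the abstract describes.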
