Paper Title


Kalman Filter-based Head Motion Prediction for Cloud-based Mixed Reality

Paper Authors

Serhan Gül, Sebastian Bosse, Dimitri Podborski, Thomas Schierl, Cornelius Hellge

Abstract

Volumetric video allows viewers to experience highly-realistic 3D content with six degrees of freedom in mixed reality (MR) environments. Rendering complex volumetric videos can require a prohibitively high amount of computational power for mobile devices. A promising technique to reduce the computational burden on mobile devices is to perform the rendering at a cloud server. However, cloud-based rendering systems suffer from an increased interaction (motion-to-photon) latency that may cause registration errors in MR environments. One way of reducing the effective latency is to predict the viewer's head pose and render the corresponding view from the volumetric video in advance. In this paper, we design a Kalman filter for head motion prediction in our cloud-based volumetric video streaming system. We analyze the performance of our approach using recorded head motion traces and compare its performance to an autoregression model for different prediction intervals (look-ahead times). Our results show that the Kalman filter can predict head orientations 0.5 degrees more accurately than the autoregression model for a look-ahead time of 60 ms.
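The prediction scheme the abstract describes can be illustrated with a minimal constant-velocity Kalman filter on a single head-orientation angle: filter noisy angle samples, then extrapolate the state forward by the look-ahead time (e.g. 60 ms). This is only a sketch under assumed noise parameters and sampling rate, not the paper's actual filter design.

```python
import numpy as np

# Sketch (assumptions, not the paper's implementation): a constant-velocity
# Kalman filter on one head-orientation angle (e.g. yaw, in degrees).
# State x = [angle, angular_velocity]; we observe noisy angle samples.

dt = 0.011          # assumed 90 Hz tracker sampling interval (s)
look_ahead = 0.060  # prediction interval (s), 60 ms as in the abstract

F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
H = np.array([[1.0, 0.0]])              # only the angle is measured
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[0.1]])                   # measurement noise (assumed)

x = np.zeros((2, 1))                    # initial state estimate
P = np.eye(2)                           # initial state covariance


def kalman_step(x, P, z):
    """One predict/update cycle for a scalar angle measurement z."""
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P


def predict_ahead(x, t):
    """Extrapolate the filtered state t seconds into the future."""
    F_t = np.array([[1.0, t], [0.0, 1.0]])
    return (F_t @ x)[0, 0]


# Synthetic head-motion trace: steady turn at 20 deg/s plus sensor noise.
rng = np.random.default_rng(0)
for k in range(200):
    z = np.array([[20.0 * k * dt + rng.normal(0.0, 0.3)]])
    x, P = kalman_step(x, P, z)

predicted = predict_ahead(x, look_ahead)  # angle expected 60 ms ahead
```

The look-ahead prediction simply re-applies the transition model with the prediction interval in place of the sampling interval, so the renderer can request the view for where the head is expected to be when the frame arrives.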
