Paper Title
Evaluating Point Cloud from Moving Camera Videos: A No-Reference Metric
Paper Authors
Abstract
Point clouds are one of the most widely used digital representation formats for three-dimensional (3D) content, and their visual quality may suffer from noise and geometric-shift distortions during the production procedure, as well as compression and downsampling distortions during the transmission process. To tackle the challenge of point cloud quality assessment (PCQA), many methods have been proposed to evaluate the visual quality levels of point clouds by assessing rendered static 2D projections. Although such projection-based PCQA methods achieve competitive performance with the assistance of mature image quality assessment (IQA) methods, they neglect that 3D models are also perceived in a dynamic viewing manner, where the viewpoint is continually changed according to the feedback of the rendering device. Therefore, in this paper, we evaluate point clouds from moving camera videos and explore how to handle PCQA tasks using video quality assessment (VQA) methods. First, we generate the captured videos by rotating the camera around the point clouds along several circular pathways. Then we extract both spatial and temporal quality-aware features from the selected key frames and the video clips using a trainable 2D-CNN and a pre-trained 3D-CNN model, respectively. Finally, the visual quality of a point cloud is represented by the predicted video quality value. The experimental results reveal that the proposed method is effective for predicting the visual quality levels of point clouds and is even competitive with full-reference (FR) PCQA methods. The ablation studies further verify the rationality of the proposed framework and confirm the contributions made by the quality-aware features extracted via the dynamic viewing manner. The code is available at https://github.com/zzc-1998/VQA_PC.
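The capture step described above (orbiting a camera around the point cloud along circular pathways, then sampling key frames from the resulting video) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the uniform key-frame sampling rule, and the single-orbit geometry are assumptions for clarity; the actual rendering and feature-extraction code is in the linked repository.

```python
import math

def circular_camera_path(center, radius, num_frames, elevation=0.0):
    """Camera positions on one circular orbit around the point cloud center.

    Hypothetical helper: returns one pose per video frame, evenly spaced
    in angle; the camera would look toward `center` at each pose.
    """
    cx, cy, cz = center
    poses = []
    for i in range(num_frames):
        theta = 2.0 * math.pi * i / num_frames  # angle of frame i on the orbit
        poses.append((cx + radius * math.cos(theta),
                      cy + radius * math.sin(theta),
                      cz + elevation))
    return poses

def select_key_frames(num_frames, num_keys):
    """Uniformly sample key-frame indices from the captured video
    (an assumed sampling rule; spatial features would be extracted
    from these frames with a 2D-CNN)."""
    step = num_frames / num_keys
    return [int(i * step) for i in range(num_keys)]

# One orbit of 120 frames at radius 2, with 8 key frames for the 2D branch;
# the full 120-frame clip would feed the 3D-CNN temporal branch.
path = circular_camera_path(center=(0.0, 0.0, 0.0), radius=2.0, num_frames=120)
keys = select_key_frames(num_frames=120, num_keys=8)
```

Several such orbits at different elevations would yield the "several circular pathways" mentioned in the abstract; their videos are scored by the VQA model, and the point cloud inherits the predicted video quality.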