Title
People as Scene Probes
Authors
Abstract
By analyzing the motion of people and other objects in a scene, we demonstrate how to infer depth, occlusion, lighting, and shadow information from video taken from a single camera viewpoint. This information is then used to composite new objects into the same scene with a high degree of automation and realism. In particular, when a user places a new object (a 2D cut-out) in the image, it is automatically rescaled, relit, and occluded properly, and it casts realistic shadows that fall in the correct direction relative to the sun and conform properly to scene geometry. We demonstrate results (best viewed in the supplementary video) on a range of scenes and compare to alternative methods for depth estimation and shadow compositing.
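The depth-aware insertion step described above can be sketched in code. This is a minimal illustrative mock-up, not the paper's method: the function name, the inverse-depth rescaling rule, and the per-pixel depth-test occlusion are all simplifying assumptions standing in for the calibration the paper learns from observed people.

```python
import numpy as np

def composite_cutout(scene, scene_depth, cutout, alpha, place_xy, ref_depth=1.0):
    """Paste a 2D cutout into `scene` (grayscale, HxW) at pixel `place_xy`,
    rescaled inversely with depth and occluded by nearer scene geometry.

    Illustrative sketch only: the inverse-depth scale and the hard depth
    test are assumptions, not the paper's learned calibration.
    """
    x, y = place_xy
    d = scene_depth[y, x]                  # scene depth at the insertion point
    scale = ref_depth / d                  # farther away -> smaller cutout
    h = max(1, int(cutout.shape[0] * scale))
    w = max(1, int(cutout.shape[1] * scale))
    # nearest-neighbor resize via integer index arrays (dependency-free)
    ry = np.arange(h) * cutout.shape[0] // h
    rx = np.arange(w) * cutout.shape[1] // w
    obj = cutout[ry][:, rx]
    a = alpha[ry][:, rx]
    out = scene.copy()
    y0, x0 = y - h, x - w // 2             # anchor the cutout's feet at (x, y)
    for dy in range(h):
        for dx in range(w):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < scene.shape[0] and 0 <= xx < scene.shape[1]:
                if scene_depth[yy, xx] >= d:   # scene pixel is not in front
                    out[yy, xx] = (a[dy, dx] * obj[dy, dx]
                                   + (1 - a[dy, dx]) * out[yy, xx])
    return out
```

A real system would add the relighting and shadow-casting steps the abstract mentions; here only the rescale-occlude-composite core is shown.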