Paper Title

Egocentric Visual Self-Modeling for Autonomous Robot Dynamics Prediction and Adaptation

Authors

Yuhang Hu, Boyuan Chen, Hod Lipson

Abstract

The ability of robots to model their own dynamics is key to autonomous planning and learning, as well as for autonomous damage detection and recovery. Traditionally, dynamic models are pre-programmed or learned from external observations. Here, we demonstrate for the first time how a task-agnostic dynamic self-model can be learned using only a single first-person-view camera in a self-supervised manner, without any prior knowledge of robot morphology, kinematics, or task. Through experiments on a 12-DoF robot, we demonstrate the capabilities of the model in basic locomotion tasks using visual input. Notably, the robot can autonomously detect anomalies, such as damaged components, and adapt its behavior, showcasing resilience in dynamic environments. Furthermore, the model's generalizability was validated across robots with different configurations, emphasizing its potential as a universal tool for diverse robotic systems. The egocentric visual self-model proposed in our work paves the way for more autonomous, adaptable, and resilient robotic systems.
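The core idea the abstract describes, a self-supervised forward dynamics model whose prediction error doubles as an anomaly signal for damage detection, can be illustrated with a toy sketch. This is not the paper's architecture (which learns from egocentric camera images with a neural network); it is a minimal stand-in using a linear model on a synthetic low-dimensional state, with all names, dimensions, and the "broken actuator" perturbation being illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE, ACTION = 4, 2  # assumed toy dimensions, not the paper's

def true_dynamics(s, a, A, B):
    """Ground-truth linear dynamics s' = A s + B a (toy stand-in)."""
    return s @ A.T + a @ B.T

A_true = np.eye(STATE) + 0.05 * rng.standard_normal((STATE, STATE))
B_true = 0.1 * rng.standard_normal((STATE, ACTION))

# Self-supervised data collection: the robot records (state, action,
# next_state) triples from its own experience -- no external labels.
S = rng.standard_normal((2000, STATE))
U = rng.standard_normal((2000, ACTION))
S_next = true_dynamics(S, U, A_true, B_true)

# Fit a forward model by least squares (stand-in for the learned network).
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)

def predict(s, a):
    return np.hstack([s, a]) @ W

def anomaly_score(s, a, s_next_observed):
    """Prediction error of the self-model; spikes when dynamics change."""
    return float(np.mean((predict(s, a) - s_next_observed) ** 2))

s, a = rng.standard_normal(STATE), rng.standard_normal(ACTION)

# Nominal operation: the self-model predicts well, score stays near zero.
nominal = anomaly_score(s, a, true_dynamics(s, a, A_true, B_true))

# Simulated damage (e.g. one actuator stops responding): the observed
# next state no longer matches the self-model's prediction.
B_damaged = B_true.copy()
B_damaged[:, 0] = 0.0
damaged = anomaly_score(s, a, true_dynamics(s, a, A_true, B_damaged))

print(f"nominal error: {nominal:.2e}, damaged error: {damaged:.2e}")
```

The damaged-case error rising well above the nominal baseline is the signal that would trigger replanning or behavior adaptation in the full system.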
