Paper Title

Transferring Implicit Knowledge of Non-Visual Object Properties Across Heterogeneous Robot Morphologies

Paper Authors

Gyan Tatiya, Jonathan Francis, Jivko Sinapov

Paper Abstract

Humans leverage multiple sensory modalities when interacting with objects and discovering their intrinsic properties. The visual modality alone is insufficient for deriving intuitions about object properties (e.g., which of two boxes is heavier), so non-visual modalities such as touch and audition must be considered as well. Whereas robots may leverage various modalities to understand object properties via learned exploratory interactions with objects (e.g., grasping, lifting, and shaking behaviors), challenges remain: the implicit knowledge acquired by one robot via object exploration cannot be directly leveraged by another robot with a different morphology, because the sensor models, observed data distributions, and interaction capabilities differ across robot configurations. To avoid the costly process of learning interactive object perception tasks from scratch, we propose a multi-stage projection framework that transfers implicit knowledge of object properties across heterogeneous robot morphologies to each newly deployed robot. We evaluate our approach on the object-property recognition and object-identity recognition tasks, using a dataset containing two heterogeneous robots that perform 7,600 object interactions. Results indicate that knowledge can be transferred across robots, such that a newly deployed robot can bootstrap its recognition models without exhaustively exploring all objects. We also propose a data augmentation technique and show that it improves model generalization. We release our code and datasets here: https://github.com/gtatiya/Implicit-Knowledge-Transfer.
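To make the projection idea concrete, here is a minimal, hypothetical sketch (in PyTorch) of how such a cross-robot transfer could work; the feature dimensions, layer sizes, and single-hidden-layer projector are illustrative assumptions, not the paper's actual multi-stage architecture. The sketch assumes each robot summarizes one exploratory interaction as a fixed-length feature vector, and that both robots have explored a small set of shared objects that provide paired training examples:

```python
# Hypothetical sketch: learn a mapping from a source robot's feature space
# to a target robot's feature space using objects both robots have explored.
import torch
import torch.nn as nn

SRC_DIM, TGT_DIM, LATENT = 64, 48, 32  # assumed feature dimensions

class Projector(nn.Module):
    """Projects source-robot interaction features into the target robot's space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SRC_DIM, LATENT),
            nn.ReLU(),
            nn.Linear(LATENT, TGT_DIM),
        )

    def forward(self, x):
        return self.net(x)

# Stand-in data: paired features from interactions with the shared objects.
n_shared = 200
src_feats = torch.randn(n_shared, SRC_DIM)
tgt_feats = torch.randn(n_shared, TGT_DIM)

proj = Projector()
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(proj(src_feats), tgt_feats)
    loss.backward()
    opt.step()

# Once trained, the projector converts the source robot's features for novel
# objects into pseudo-features in the target space; these can bootstrap the
# target robot's property/identity classifiers without exhaustive exploration.
novel_src = torch.randn(50, SRC_DIM)
pseudo_tgt = proj(novel_src).detach()
print(pseudo_tgt.shape)  # torch.Size([50, 48])
```

The design point this illustrates is that only the shared objects need paired exploration; under that assumption, the rest of the source robot's experience can be replayed in the target robot's feature space as synthetic training data.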
