Paper Title

Towards Abstract Relational Learning in Human Robot Interaction

Authors

Mohamadreza Faridghasemnia, Daniele Nardi, Alessandro Saffiotti

Abstract

Humans have a rich representation of the entities in their environment. Entities are described by their attributes, and entities that share attributes are often semantically related. For example, if two books have "Natural Language Processing" as the value of their "title" attribute, we can expect that their "topic" attribute will also be equal, namely "NLP". Humans tend to generalize such observations and infer sufficient conditions under which the "topic" attribute of any entity is "NLP". If robots are to interact successfully with humans, they need to represent entities, attributes, and generalizations in a similar way. The result is a contextualized cognitive agent that can adapt its understanding, where context provides the sufficient conditions for a correct interpretation. In this work, we address the problem of how to obtain these representations through human-robot interaction. We integrate visual perception and natural language input to incrementally build a semantic model of the world, and then use inductive reasoning to infer logical rules, valid in this model, that capture generic semantic relations. These relations can be used to enrich human-robot interaction, to populate a knowledge base with inferred facts, or to remove uncertainty in the robot's sensory inputs.
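To make the book/title/topic example concrete, here is a minimal, hypothetical sketch of the kind of generalization the abstract describes. It is not the authors' implementation: the attribute-value representation of entities, the `induce_rules` helper, and the acceptance criterion (a premise value must always co-occur with a single conclusion value) are all assumptions made only for illustration.

```python
from collections import defaultdict

# Toy semantic model: each entity is a dict of attribute -> value pairs.
entities = [
    {"type": "book", "title": "Natural Language Processing", "topic": "NLP"},
    {"type": "book", "title": "Natural Language Processing", "topic": "NLP"},
    {"type": "book", "title": "Pattern Recognition", "topic": "ML"},
]

def induce_rules(entities, premise_attr, conclusion_attr):
    """Induce rules of the form: premise_attr = v  ->  conclusion_attr = w,
    keeping only those that hold for every observed entity."""
    support = defaultdict(set)
    for e in entities:
        if premise_attr in e and conclusion_attr in e:
            support[e[premise_attr]].add(e[conclusion_attr])
    # Accept a rule only if the premise value always co-occurs with one conclusion value.
    return {v: next(iter(ws)) for v, ws in support.items() if len(ws) == 1}

rules = induce_rules(entities, "title", "topic")
print(rules)  # {'Natural Language Processing': 'NLP', 'Pattern Recognition': 'ML'}

# An induced rule can then fill in a missing fact for a new entity:
new_entity = {"type": "book", "title": "Natural Language Processing"}
new_entity["topic"] = rules.get(new_entity["title"])
print(new_entity)  # topic inferred as 'NLP'
```

In the paper's setting, the facts would come incrementally from visual perception and natural language input rather than from a hard-coded list, and the inferred rules could populate a knowledge base or disambiguate noisy sensory input, as the abstract states.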
