Paper Title
Learning to Transfer Dynamic Models of Underactuated Soft Robotic Hands
Paper Authors
Paper Abstract
Transfer learning is a popular approach to bypassing data limitations in one domain by leveraging data from another domain. This is especially useful in robotics, as it allows practitioners to reduce data collection with physical robots, which can be time-consuming and cause wear and tear. The most common way of doing this with neural networks is to take an existing network and continue training it on data from the new domain (fine-tuning). However, we show that in some situations this can lead to significantly worse performance than simply using the transferred model without adaptation. We find that a major cause of these problems is that models trained on small amounts of data can have chaotic or divergent behavior in some regions. We derive an upper bound on the Lyapunov exponent of a trained transition model, and demonstrate two approaches that make use of this insight. Both show significant improvement over traditional fine-tuning. Experiments performed on real underactuated soft robotic hands clearly demonstrate the capability to transfer a dynamic model from one hand to another.