Paper Title

Eye Gaze Controlled Robotic Arm for Persons with SSMI

Authors

Sharma, Vinay Krishna; Murthy, L. R. D.; Saluja, KamalPreet Singh; Mollyn, Vimal; Sharma, Gourav; Biswas, Pradipta

Abstract

Background: People with severe speech and motor impairment (SSMI) often use a technique called eye pointing to communicate with the outside world. A parent, caretaker, or teacher holds a printed board in front of them and interprets their intentions by manually analyzing their eye gaze. This technique is often error-prone and time-consuming, and it depends on a single caretaker. Objective: We aimed to automate the eye-tracking process electronically using a commercially available tablet, computer, or laptop, without requiring any dedicated eye gaze tracking hardware. The eye gaze tracker is used to develop a video see-through based AR (augmented reality) display that controls a robotic device with eye gaze and is deployed for a fabric printing task. Methodology: We undertook a user-centred design process and separately evaluated the webcam-based gaze tracker and the video see-through based human-robot interaction with users with SSMI. We also report a user study on manipulating a robotic arm with the webcam-based eye gaze tracker. Results: Using our bespoke eye gaze controlled interface, able-bodied users could select one of nine screen regions in a median of less than 2 s, and users with SSMI could do so in a median of 4 s. Using the eye gaze controlled human-robot AR display, users with SSMI could undertake a representative pick-and-drop task in an average of less than 15 s, and could reach a randomly designated target within 60 s using a COTS eye tracker and in an average of 2 min using the webcam-based eye gaze tracker.
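The selection mechanism described in the results (choosing one of nine screen regions by gaze, with selection times on the order of seconds) can be sketched as a 3x3 grid lookup combined with dwell-time triggering. The snippet below is a minimal illustrative sketch, not the authors' implementation: the function and class names, and the dwell threshold, are assumptions for illustration only.

```python
def gaze_to_region(x, y, width, height):
    """Map a gaze estimate in screen pixels to a region index 0..8
    on a 3x3 grid (row-major: 0 is top-left, 8 is bottom-right)."""
    col = min(int(3 * x / width), 2)   # clamp so x == width-1 stays in column 2
    row = min(int(3 * y / height), 2)
    return row * 3 + col


class DwellSelector:
    """Trigger a selection once the gaze has dwelt in the same region
    for a fixed duration (a common gaze-interaction pattern)."""

    def __init__(self, dwell_secs=2.0):
        self.dwell_secs = dwell_secs
        self.region = None   # region currently being dwelt on
        self.since = 0.0     # timestamp when that dwell started

    def update(self, region, t):
        """Feed one gaze sample (region index, timestamp in seconds).
        Returns the region index when a selection fires, else None."""
        if region != self.region:
            self.region, self.since = region, t
            return None
        if t - self.since >= self.dwell_secs:
            self.since = t   # restart the dwell so selections do not repeat
            return region
        return None
```

In a real pipeline the (x, y) input would come from the webcam-based gaze estimator, and the dwell threshold would be tuned per user; the abstract's median selection times (under 2 s for able-bodied users, 4 s for users with SSMI) suggest the practical range for such a threshold.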
