Paper Title

Embedded Emotions -- A Data Driven Approach to Learn Transferable Feature Representations from Raw Speech Input for Emotion Recognition

Paper Authors

Dominik Schiller, Silvan Mertes, Elisabeth André

Paper Abstract

Traditional approaches to automatic emotion recognition rely on handcrafted features. More recently, however, the advent of deep learning has enabled algorithms to learn meaningful representations of input data automatically. In this paper, we investigate the applicability of transferring knowledge learned from large text and audio corpora to the task of automatic emotion recognition. To evaluate the practicability of our approach, we take part in this year's Interspeech ComParE Elderly Emotion Sub-Challenge, where the goal is to classify spoken narratives of elderly people with respect to the emotion of the speaker. Our results show that the learned feature representations can be effectively applied to classifying emotions from spoken language. We found the performance of the features extracted from the audio signal to be less consistent than that of the features extracted from the transcripts. While the acoustic features achieved best-in-class results on the development set compared to the baseline systems, their performance dropped considerably on the test set of the challenge. The features extracted from the textual transcripts, however, show promising results on both sets and outperform the official baseline by 5.7 percentage points in unweighted average recall.
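To make the transfer-learning idea in the abstract concrete, the sketch below reuses a model pretrained on a large text corpus as a fixed feature extractor for transcripts and trains a simple classifier on top, evaluated with unweighted average recall (UAR, i.e. macro-averaged recall, the ComParE challenge metric). The choice of BERT as the encoder, the linear SVM, and the tiny toy dataset are illustrative assumptions; the abstract does not commit to these exact components.

```python
# Minimal sketch, assuming a pretrained text encoder (BERT here, an assumption)
# as a frozen feature extractor, with a linear SVM classifier on top.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.metrics import recall_score
from sklearn.svm import LinearSVC

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
encoder.eval()  # frozen: we only transfer the learned representations

def embed(texts):
    """Mean-pool the encoder's last hidden states into one vector per text."""
    with torch.no_grad():
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        hidden = encoder(**batch).last_hidden_state       # (batch, tokens, dim)
        mask = batch["attention_mask"].unsqueeze(-1)      # ignore padding tokens
        pooled = (hidden * mask).sum(1) / mask.sum(1)     # masked mean pooling
    return pooled.numpy()

# Toy placeholder data standing in for the challenge transcripts and labels.
train_texts = ["I felt so happy that day", "It was a dark, lonely winter"]
train_labels = ["positive", "negative"]
test_texts = ["What a wonderful surprise"]
test_labels = ["positive"]

clf = LinearSVC().fit(embed(train_texts), train_labels)
pred = clf.predict(embed(test_texts))

# UAR is recall averaged over classes without class-frequency weighting,
# which is exactly sklearn's macro-averaged recall.
print(f"UAR: {recall_score(test_labels, pred, average='macro'):.3f}")
```

The same recipe applies to the acoustic branch by swapping the text encoder for an audio model pretrained on a large speech corpus; only the feature extractor changes, not the downstream classifier or metric.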
