Paper Title

Towards Interpretable Deep Learning Models for Knowledge Tracing

Paper Authors

Yu Lu, Deliang Wang, Qinggang Meng, Penghe Chen

Paper Abstract

As an important technique for modeling the knowledge states of learners, traditional knowledge tracing (KT) models have been widely used to support intelligent tutoring systems and MOOC platforms. Driven by the rapid advancement of deep learning techniques, deep neural networks have recently been adopted to design new KT models that achieve better prediction performance. However, the lack of interpretability of these models has impeded their practical application, as their outputs and working mechanisms are obscured by opaque decision processes and complex inner structures. We therefore propose to adopt a post-hoc method to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models. Specifically, we apply the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model by backpropagating relevance from the model's output layer to its input layer. The experimental results demonstrate the feasibility of using the LRP method to interpret the DLKT model's predictions, and partially validate the computed relevance scores at both the question level and the concept level. We believe this work is a solid step towards fully interpreting DLKT models and promoting their practical application in the education domain.
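To make the core mechanism concrete: LRP redistributes a model's prediction score backward through the network, layer by layer, so that each input receives a relevance score summing (approximately) to the output. Below is a minimal illustrative sketch in Python/NumPy of the standard epsilon-rule for a single linear layer, which is the building block that LRP variants for RNNs apply repeatedly through gates and time steps. This is not the authors' implementation; the function name `lrp_linear`, the toy dimensions, and the epsilon value are assumptions for illustration only.

```python
import numpy as np

def lrp_linear(x, w, b, r_out, eps=1e-3):
    """Epsilon-rule LRP for a linear layer z = w @ x + b.

    Redistributes the output relevance r_out back onto the inputs x
    in proportion to each input's contribution to each output.
    Shapes: x (in_dim,), w (out_dim, in_dim), b (out_dim,),
    r_out (out_dim,); returns r_in (in_dim,).
    """
    z = w @ x + b
    # Stabilize the denominator with a signed epsilon to avoid
    # dividing by (near-)zero pre-activations.
    denom = z + eps * np.where(z >= 0, 1.0, -1.0)
    # contrib[i, j]: share of output i's relevance attributed to input j.
    contrib = (w * x[np.newaxis, :]) / denom[:, np.newaxis]
    return contrib.T @ r_out

# Toy usage: propagate relevance from one prediction logit back to a
# hidden state, as one step of interpreting an RNN-based KT model.
rng = np.random.default_rng(0)
x = rng.normal(size=8)           # e.g. a hidden state at one time step
w = rng.normal(size=(1, 8))      # output layer for one question's logit
b = np.zeros(1)
r_out = w @ x + b                # seed relevance with the raw prediction
r_in = lrp_linear(x, w, b, r_out)
print(r_in.sum(), r_out.sum())   # relevance is approximately conserved
```

In the setting the abstract describes, repeating this redistribution from the output layer back through the recurrent layers to the input layer yields a relevance score for each past question-answer interaction, which is what the paper validates at the question and concept levels.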
