Title
Transformer Hawkes Process
Authors
Abstract
Modern data acquisition routinely produces massive amounts of event sequence data in various domains, such as social media, healthcare, and financial markets. These data often exhibit complicated short-term and long-term temporal dependencies. However, most existing recurrent neural network-based point process models fail to capture such dependencies and yield unreliable prediction performance. To address this issue, we propose a Transformer Hawkes Process (THP) model, which leverages the self-attention mechanism to capture long-term dependencies while enjoying computational efficiency. Numerical experiments on various datasets show that THP outperforms existing models in terms of both likelihood and event prediction accuracy by a notable margin. Moreover, THP is quite general and can incorporate additional structural knowledge. We provide a concrete example in which THP achieves improved prediction performance for learning multiple point processes when incorporating their relational information.
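The abstract credits self-attention with capturing long-term dependencies: each event's representation is a weighted average over all past events, so the influence of a distant event is one attention step away rather than many recurrent steps. The following is a minimal NumPy sketch of causal scaled dot-product self-attention over event embeddings; the function name and shapes are illustrative, and THP's actual architecture (multi-head attention, temporal encodings, and a parameterized intensity function) is considerably more involved than this.

```python
import numpy as np

def self_attention(X):
    """Causal scaled dot-product self-attention over event embeddings.

    X: (L, d) array, one row per event in temporal order. Each output row
    is a weighted average of the current and all earlier rows, so any past
    event can influence the current one directly -- the mechanism the
    abstract credits for modeling long-term dependencies.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)              # (L, L) pairwise similarities
    # Causal mask: event i may only attend to events j <= i.
    mask = np.tril(np.ones_like(scores, dtype=bool))
    scores = np.where(mask, scores, -np.inf)
    # Row-wise softmax (numerically stabilized).
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ X                          # (L, d) context-aware embeddings

rng = np.random.default_rng(0)
events = rng.normal(size=(5, 4))               # 5 events, 4-dim embeddings
H = self_attention(events)
print(H.shape)                                 # (5, 4)
```

Because of the causal mask, the first event attends only to itself, and later events mix information from the full prefix in a single layer; in the full model these context-aware embeddings would feed a conditional intensity function for likelihood evaluation and event prediction.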