Paper Title
Continuous-Time and Multi-Level Graph Representation Learning for Origin-Destination Demand Prediction
Paper Authors
Paper Abstract
Traffic demand forecasting with deep neural networks has attracted widespread interest in both academia and industry. Among such tasks, pairwise Origin-Destination (OD) demand prediction is a valuable but challenging problem due to several factors: (i) the large number of possible OD pairs, (ii) the implicitness of spatial dependence, and (iii) the complexity of traffic states. To address these issues, this paper proposes a Continuous-time and Multi-level dynamic graph representation learning method for Origin-Destination demand prediction (CMOD). Firstly, a continuous-time dynamic graph representation learning framework is constructed, which maintains a dynamic state vector for each traffic node (metro station or taxi zone). The state vectors keep historical transaction information and are continuously updated according to the most recent transactions. Secondly, a multi-level structure learning module is proposed to model the spatial dependency of station-level nodes. It not only exploits relations between nodes adaptively from the data, but also shares messages and representations via cluster-level and area-level virtual nodes. Lastly, a cross-level fusion module is designed to integrate the multi-level memories and generate comprehensive node representations for the final prediction. Extensive experiments are conducted on two real-world datasets from the Beijing Subway and New York Taxi, and the results demonstrate the superiority of our model over state-of-the-art approaches.
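To make the continuous-time idea in the abstract concrete, the following is a minimal PyTorch sketch of per-node state vectors that are refreshed whenever a transaction (origin, destination, timestamp) is observed. It is illustrative only: the GRU-based update, the message layout (peer state concatenated with the elapsed time), and the names NodeMemory / update are assumptions for exposition, not the paper's released implementation.

    import torch
    import torch.nn as nn

    class NodeMemory(nn.Module):
        """Keeps one continuously updated state vector per traffic node (hypothetical sketch)."""
        def __init__(self, num_nodes: int, dim: int):
            super().__init__()
            self.register_buffer("state", torch.zeros(num_nodes, dim))   # node state vectors
            self.register_buffer("last_update", torch.zeros(num_nodes))  # last event time per node
            # GRU cell folds a transaction message (peer state + time gap) into the old state.
            self.updater = nn.GRUCell(input_size=dim + 1, hidden_size=dim)

        @torch.no_grad()
        def update(self, origin: int, destination: int, t: float) -> None:
            """Refresh both endpoints' states when a transaction (origin, destination, t) arrives."""
            for node, peer in ((origin, destination), (destination, origin)):
                dt = (t - self.last_update[node]).view(1)             # elapsed time since last event
                msg = torch.cat([self.state[peer], dt]).unsqueeze(0)  # message built from the peer node
                self.state[node] = self.updater(msg, self.state[node].unsqueeze(0)).squeeze(0)
                self.last_update[node] = t

    memory = NodeMemory(num_nodes=300, dim=64)
    memory.update(origin=12, destination=87, t=8.5)   # one observed trip record
    print(memory.state[12][:4])                       # updated state of station 12

In the full model described by the abstract, such station-level states are additionally connected to cluster-level and area-level virtual nodes by the multi-level structure learning module, and the cross-level fusion module combines these multi-level memories into the node representations used for the final OD demand prediction.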