Paper Title
Knowledge Graph Driven Approach to Represent Video Streams for Spatiotemporal Event Pattern Matching in Complex Event Processing
Paper Authors
Paper Abstract
Complex Event Processing (CEP) is an event processing paradigm for performing real-time analytics over streaming data and matching high-level event patterns. At present, CEP is limited to processing structured data streams. Video streams are challenging because of their unstructured data model, which prevents CEP systems from matching patterns over them. This work introduces a graph-based structure for continuously evolving video streams that enables CEP systems to query complex video event patterns. We propose the Video Event Knowledge Graph (VEKG), a graph-driven representation of video data. VEKG models video objects as nodes and their relational interactions as edges over time and space. It creates a semantic knowledge representation of video data derived from detecting high-level semantic concepts in the video using an ensemble of deep learning models. A CEP-based state optimization, the VEKG-Time Aggregated Graph (VEKG-TAG), is proposed over the VEKG representation for faster event detection. VEKG-TAG is a spatiotemporal graph aggregation method that provides a summarized view of the VEKG graph over a given time window. We define a set of nine event pattern rules for two domains (Activity Recognition and Traffic Management), which act as queries and are applied over VEKG graphs to discover complex event patterns. To show the efficacy of our approach, we performed extensive experiments over 801 video clips across 10 datasets. The proposed VEKG approach was compared with other state-of-the-art methods and was able to detect complex event patterns over videos with F-Scores ranging from 0.44 to 0.90. In the given experiments, the optimized VEKG-TAG reduced VEKG nodes and edges by 99% and 93%, respectively, with 5.19x faster search time, achieving sub-second median latencies of 4-20 milliseconds.
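To make the abstract's two central ideas concrete, the following is a minimal illustrative sketch (not the authors' implementation) of a VEKG-style structure: per-frame object detections become nodes, pairwise spatial relations become edges, and a naive time-aggregated view collapses repeated per-frame edges in the spirit of VEKG-TAG. All class, field, and function names here (ObjectNode, VEKG, time_aggregated_view, the "overlaps" relation) are assumptions made for illustration only.

```python
# Hypothetical sketch of a VEKG-like graph and a naive time-aggregated summary.
# None of these names come from the paper; they only illustrate the idea of
# "objects as nodes, spatiotemporal relations as edges, aggregated over time".

from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class ObjectNode:
    """A detected object in one frame (e.g. output of a deep-learning detector)."""
    track_id: int                     # identity of the object across frames
    label: str                        # semantic concept, e.g. "person", "car"
    frame: int                        # frame index (time)
    bbox: Tuple[int, int, int, int]   # x, y, width, height


def _overlap(b1: Tuple[int, int, int, int], b2: Tuple[int, int, int, int]) -> bool:
    """Axis-aligned bounding-box overlap test (one possible spatial relation)."""
    x1, y1, w1, h1 = b1
    x2, y2, w2, h2 = b2
    return x1 < x2 + w2 and x2 < x1 + w1 and y1 < y2 + h2 and y2 < y1 + h1


@dataclass
class VEKG:
    """Evolving graph: detections are nodes, labelled relations are edges."""
    nodes: List[ObjectNode] = field(default_factory=list)
    edges: List[Tuple[ObjectNode, ObjectNode, str]] = field(default_factory=list)

    def add_frame(self, detections: List[ObjectNode]) -> None:
        """Append one frame's detections and add spatial edges between them."""
        self.nodes.extend(detections)
        for a in detections:
            for b in detections:
                if a is not b and _overlap(a.bbox, b.bbox):
                    self.edges.append((a, b, "overlaps"))


def time_aggregated_view(g: VEKG) -> Dict[Tuple[int, int, str], List[int]]:
    """Naive VEKG-TAG-style summary: merge repeated per-frame edges between the
    same two tracks into one entry listing the frames where the relation held,
    shrinking the graph that an event-pattern query has to scan."""
    summary: Dict[Tuple[int, int, str], List[int]] = {}
    for a, b, rel in g.edges:
        summary.setdefault((a.track_id, b.track_id, rel), []).append(a.frame)
    return summary
```

Under these assumptions, an event pattern rule (e.g. "a person and a car overlap for several consecutive frames") could be matched against the aggregated summary rather than against every per-frame edge, which is the intuition behind the node/edge reduction and faster search times reported in the abstract.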