IEEE Access (Jan 2025)

Multi-Task Prediction Method Based on GGCN for Object Centric Event Logs

  • Li Ke
  • Fang Huan
  • Xu Yifei
  • Shao Chifeng

DOI: https://doi.org/10.1109/ACCESS.2025.3553618
Journal volume & issue: Vol. 13, pp. 53949–53963

Abstract


Event logs constitute the fundamental data for predictive process monitoring research, and the quality and format of these logs are crucial for predictive analysis. Existing process prediction methods are primarily based on flattened event logs, which overlook multi-object interactions and complex dependencies, thereby limiting their ability to model complex processes. In contrast, object-centric event logs associate each event with multiple objects, enabling the modeling of multi-object interactions and complex dependencies within processes. Predictive analysis of object-centric event logs frequently relies on graph neural networks. To systematically analyze and compare the performance of existing predictive models designed for object-centric event logs, this paper proposes a multi-task prediction model, the Graph-based Relational Graph Convolutional Network (GGCN), built on relational graph convolutional networks and gated recurrent units. The proposed GGCN model embeds the multi-relational structure of object-centric event logs into a graph representation that is fed to the model, thereby addressing three tasks: next activity prediction, next timestamp prediction, and remaining time prediction. The graph embedding structure of the GGCN model can extract complex multi-relational associations and temporal dependency features between events. Finally, we conducted multi-task process prediction experiments on four publicly available datasets. The results demonstrate that, compared with baseline models such as Graph Convolutional Networks (GCN), Graph Isomorphism Networks (GIN), and Graph Attention Networks (GAT), the proposed GGCN model achieves superior predictive performance and data robustness. Furthermore, regarding the impact of subgraph size in graph embeddings on prediction performance, the experimental results indicate a significant correlation between subgraph size and model performance.
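
To make the described architecture concrete, the sketch below shows one plausible way to combine a relational graph convolution with a gated recurrent unit and three task-specific output heads, as the abstract outlines. It is a minimal illustration assuming PyTorch and PyTorch Geometric; the class name GGCNSketch, the layer sizes, the mean pooling, and the head design are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an RGCN + GRU multi-task predictor for object-centric
# event-log subgraphs. Not the paper's code; dimensions and pooling are assumed.
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv, global_mean_pool


class GGCNSketch(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_relations, num_activities):
        super().__init__()
        # Relational graph convolutions: one weight matrix per edge (object) type,
        # capturing the multi-relational structure of object-centric event logs.
        self.rgcn1 = RGCNConv(in_dim, hidden_dim, num_relations)
        self.rgcn2 = RGCNConv(hidden_dim, hidden_dim, num_relations)
        # GRU over pooled subgraph embeddings to model temporal dependencies.
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        # Three task-specific heads for the multi-task setting.
        self.next_activity = nn.Linear(hidden_dim, num_activities)  # classification
        self.next_timestamp = nn.Linear(hidden_dim, 1)              # regression
        self.remaining_time = nn.Linear(hidden_dim, 1)              # regression

    def forward(self, x, edge_index, edge_type, batch):
        # x: node features of event subgraphs; edge_type selects the relation weights.
        h = torch.relu(self.rgcn1(x, edge_index, edge_type))
        h = torch.relu(self.rgcn2(h, edge_index, edge_type))
        g = global_mean_pool(h, batch)          # one embedding per subgraph
        seq, _ = self.gru(g.unsqueeze(1))       # each subgraph as a length-1 time step
        z = seq[:, -1]
        return (self.next_activity(z),
                self.next_timestamp(z).squeeze(-1),
                self.remaining_time(z).squeeze(-1))
```

In practice, such a model would be trained with a cross-entropy loss on the activity head and regression losses (e.g., MAE) on the two time heads, summed into a single multi-task objective; the abstract does not specify the authors' loss weighting.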

Keywords