Jisuanji Kexue yu Tansuo (Journal of Frontiers of Computer Science and Technology), Dec 2021

Research on Neural Network for Trial Difficulty Prediction

  • WANG Yue, WANG Pinghui, XU Nuo, CHEN Long, YANG Peng, WU Yong

DOI
https://doi.org/10.3778/j.issn.1673-9418.2008098
Journal volume & issue
Vol. 15, no. 12
pp. 2345–2352

Abstract


Trial difficulty prediction (TDP) is the task of automatically predicting the difficulty of a trial from the case text, and it has broad application prospects in judicial intelligent systems. In practice, TDP relies heavily on expert experience, which leads to inconsistent conclusions about a trial's difficulty. However, there is little related research on this problem. To address these issues, this paper formulates TDP as a text classification problem in natural language processing. Analysis shows that traditional text classification methods do not consider the structural uniqueness of, and the logical dependence among, trial elements in a complaint, which makes it difficult to predict trial difficulty accurately. To overcome these challenges, this paper studies indictments in detail, identifies the trial elements that make a case complex or simple, and presents an end-to-end model, MAT-TAN (mask-attention and topological association network). Specifically, this paper proposes a novel mask-attention network (MAT) that performs fine-grained analysis of the case description text in an indictment. The masking mechanism plays the role of an intelligent gatekeeper, focusing on the specific positions of the trial elements in the indictment; together with the self-attention mechanism, it extracts comprehensive and accurate features for each trial element. This paper further proposes a novel topological association network (TAN), which models the judicial logic dependencies between different elements and effectively integrates their features, thereby realizing TDP. Experimental results on real-world datasets demonstrate that MAT-TAN improves the macro-averaged F1 by up to 0.036 over the baselines, showing better performance in TDP.
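The mask-attention idea described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function name, pooling step, and use of raw embeddings as queries/keys/values are illustrative assumptions. A binary mask marks the token positions of one trial element in the indictment; attention scores outside the mask are suppressed so the pooled feature describes only that element.

```python
import numpy as np

def masked_self_attention(X, mask):
    """Sketch of mask-attention over one trial element (illustrative only).

    X    : (n_tokens, d) token embeddings of the indictment text.
    mask : (n_tokens,) binary array, 1 at the positions of the trial element.
    Returns a (d,) feature vector for that element.
    """
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                         # (n, n) scaled dot-product scores
    scores = np.where(mask[None, :] == 1, scores, -1e9)   # gatekeeper: hide non-element tokens
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)         # row-wise softmax over masked keys
    attended = weights @ X                                 # (n, d) per-token context vectors
    return attended[mask == 1].mean(axis=0)                # pool over the element's positions
```

Because non-element positions receive a score of -1e9, each attention row is a convex combination of the element's own token embeddings, so the pooled vector characterizes that single trial element; in MAT-TAN such per-element features would then be integrated by the topological association network.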

Keywords