IEEE Access (Jan 2023)

A Survey on Legal Judgment Prediction: Datasets, Metrics, Models and Challenges

  • Junyun Cui,
  • Xiaoyu Shen,
  • Shaochun Wen

DOI
https://doi.org/10.1109/ACCESS.2023.3317083
Journal volume & issue
Vol. 11
pp. 102050 – 102071

Abstract

Legal judgment prediction (LJP) applies Natural Language Processing (NLP) techniques to automatically predict judgment results from fact descriptions. This work addresses the growing interest in applying NLP techniques to the task of LJP. Despite the current performance gap between machines and humans, promising results have been achieved on a variety of benchmark datasets, owing to recent advances in NLP research and the availability of large-scale public datasets. To provide a comprehensive survey of existing LJP tasks, datasets, models, and evaluations, this study presents the following contributions: 1) an analysis of 43 LJP datasets constructed in 9 different languages, together with a classification method for LJP based on three different attributes; 2) a summary of 16 evaluation metrics, categorized into 4 different types, for evaluating the performance of LJP models on different outputs; 3) a review of 8 legal-domain pretrained models in 4 languages, highlighting four major research directions for LJP; 4) state-of-the-art results for 11 representative datasets from different court cases and an in-depth discussion of the open challenges in this area. This study aims to provide a comprehensive review for NLP researchers and legal professionals to understand the advances in LJP in recent years, and to facilitate further joint efforts towards improving the performance of LJP models.

Keywords