Scientific Reports (Jan 2025)
Explainable AI-based suicidal and non-suicidal ideations detection from social media text with enhanced ensemble technique
Abstract
This research presents a novel framework for distinguishing between suicidal and non-suicidal ideation in social media interactions using an ensemble technique. The prompt identification of sentiments on social networking platforms is crucial for timely intervention, serving as a key tactic in suicide prevention efforts. However, conventional AI models, designed primarily for classification, often obscure their decision-making processes. Our methodology combines an enhanced ensemble method with Explainable AI, leveraging a variety of machine learning algorithms to improve predictive accuracy. By using Explainable AI’s interpretability to analyze features, the model elucidates the reasoning behind its classifications, revealing hidden patterns associated with suicidal ideation. Experimental evaluations on several social media datasets demonstrate that our system detects suicidal content more accurately than state-of-the-art methods. Consequently, this study presents a more reliable and interpretable strategy (F1-score of 95.5% for suicidal and 99% for non-suicidal content) for monitoring and intervening in suicide-related online discussions.
Keywords