Title

Explainable and High-Performance Hate and Offensive Speech Detection

Authors

Marzieh Babaeianjelodar, Gurram Poorna Prudhvi, Stephen Lorenz, Keyu Chen, Sumona Mondal, Soumyabrata Dey, Navin Kumar

Abstract

The spread of information through social media platforms can create environments possibly hostile to vulnerable communities and silence certain groups in society. To mitigate such instances, several models have been developed to detect hate and offensive speech. Since detecting hate and offensive speech on social media platforms could incorrectly exclude individuals from those platforms, which can reduce trust, there is a need to create explainable and interpretable models. Thus, we build an explainable and interpretable high-performance model based on the XGBoost algorithm, trained on Twitter data. For unbalanced Twitter data, XGBoost outperformed the LSTM, AutoGluon, and ULMFiT models on hate speech detection, with an F1 score of 0.75 compared to 0.38, 0.37, and 0.38, respectively. When we down-sampled the data to three separate classes of approximately 5000 tweets each, XGBoost performed better than LSTM, AutoGluon, and ULMFiT, with F1 scores for hate speech detection of 0.79 vs. 0.69, 0.77, and 0.66, respectively. XGBoost also performed better than LSTM, AutoGluon, and ULMFiT on the down-sampled version for offensive speech detection, with an F1 score of 0.83 vs. 0.88, 0.82, and 0.79, respectively. We use Shapley Additive Explanations (SHAP) on our XGBoost models' outputs to make them explainable and interpretable, in contrast to LSTM, AutoGluon, and ULMFiT, which are black-box models.
