Model-driven Abusive Language Detection

Authors

LeBlanc, Hannah

Abstract

Abusive user-posted content has become a serious issue for many forms of online communication, especially on social media platforms. While the broader effects of abusive content on society and governments are still being determined, we know that comments containing abusive language can damage lives and put targets of abuse at risk. To address this, we have developed a model-driven approach to abusive language detection. Our approach is based on motivators that are likely to lead someone to post abusive content. Using these motivators as intermediate constructs of abusive language, we develop several models based on othering, subjectivity, moods, and emotions. This model-driven approach first considers the processes that lead a person to post an abusive comment and constructs a predictor from each of the models. The individual predictors are then combined into a stacked predictor that detects abusive language with an accuracy of 95% and an F1-score of 91. Beyond a strong abusive language detection technique, we explore how each intermediate construct contributes to our predictor and its role in abusive language.
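The stacking step described above, combining one predictor per intermediate construct into a final classifier, can be sketched roughly as follows. This is a minimal illustration using scikit-learn's `StackingClassifier` on synthetic data; the construct names, base-model choices, and features are placeholders, not the thesis's actual pipeline.

```python
# Hypothetical sketch of stacking construct-specific predictors
# (othering, subjectivity, mood, emotion) into one final classifier.
# Data and features here are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for construct-derived feature vectors.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One base predictor per intermediate construct (all logistic here
# for simplicity; the thesis likely uses construct-specific models).
base = [
    ("othering", LogisticRegression(max_iter=1000)),
    ("subjectivity", LogisticRegression(max_iter=1000)),
    ("mood", LogisticRegression(max_iter=1000)),
    ("emotion", LogisticRegression(max_iter=1000)),
]

# The meta-learner combines the base models' predictions
# into the final abusive / not-abusive decision.
stack = StackingClassifier(estimators=base,
                           final_estimator=LogisticRegression())
stack.fit(X_train, y_train)
score = stack.score(X_test, y_test)  # held-out accuracy in [0, 1]
```

The design point the abstract makes is that each base model captures a distinct motivator, and the meta-learner learns how much weight each construct deserves for the final decision.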

Keywords

Natural Language Processing, Text Classification, Abusive Language

Creative Commons license

Except where otherwise noted, this item's license is described as Attribution-NoDerivs 3.0 United States