Developing Linguistic Patterns to Mitigate Inherent Human Bias in Offensive Language Detection

Toygar Tanyel, Besher Alkurdi, Serkan Ayvaz*

*Corresponding author

Publication: Contribution to journal › Journal article › Research › peer review


Abstract

With the proliferation of social media, there has been a sharp increase in offensive content, particularly targeting vulnerable groups, exacerbating social problems such as hatred, racism, and sexism. Detecting offensive language use is crucial to prevent such content from being widely shared on social media. However, accurately detecting irony, implication, and various forms of hate speech on social media remains a challenge. Natural language-based deep learning models require extensive training with large, comprehensive, labeled datasets. Unfortunately, manually creating such datasets is both costly and error-prone. Additionally, the presence of human bias in offensive language datasets is a major concern for deep learning models. In this paper, we propose a linguistic data augmentation approach to reduce bias in labeling processes, which aims to mitigate the influence of human bias by leveraging the power of machines to improve the accuracy and fairness of labeling. This approach has the potential to improve offensive language classification tasks across multiple languages and reduce the prevalence of offensive content on social media.
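The abstract describes augmenting labeled data with linguistic variants to surface labeling bias. As a generic illustration only (this is not the paper's method; the substitution patterns and the `classify` model below are hypothetical stand-ins), one could generate lexical variants of a labeled example and flag items whose variants receive inconsistent labels:

```python
# Illustrative sketch, not the authors' approach: generate simple lexical
# variants of a text and measure how consistently a (toy) classifier labels
# them; low consistency may hint at bias in the original annotation.
from collections import Counter

# Hypothetical substitution patterns, assumed for illustration.
PATTERNS = {
    "idiots": ["fools", "people"],
    "stupid": ["silly", "misguided"],
}

def augment(text: str) -> list[str]:
    """Generate lexical variants of `text` via word substitution."""
    variants = []
    for word, subs in PATTERNS.items():
        if word in text:
            variants.extend(text.replace(word, s) for s in subs)
    return variants or [text]

def classify(text: str) -> str:
    """Toy stand-in for a trained offensive-language model."""
    return "offensive" if any(w in text for w in ("idiots", "stupid")) else "neutral"

def label_consistency(text: str) -> float:
    """Fraction of the text and its variants that share the majority label."""
    labels = [classify(v) for v in [text, *augment(text)]]
    majority_count = Counter(labels).most_common(1)[0][1]
    return majority_count / len(labels)

print(round(label_consistency("these idiots never learn"), 2))  # → 0.67
```

In a real pipeline the toy `classify` function would be replaced by a trained model, and examples with low consistency scores could be routed back for re-annotation rather than trusted as-is.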
Original language: English
Journal: Turkish Journal of Electrical Engineering and Computer Sciences
Volume: 32
Issue number: 6
Pages (from-to): 829-848
DOI
Status: Published - Nov 2024

Keywords

  • cs.CL
  • cs.AI

