Efficient Intelligent System for Cyberbullying Detection in English and Roman Urdu Social Media Posts
Keywords:
Online Toxicity, Hate Speech, Abusive Language, Machine Learning Algorithms, English, Roman Urdu, Toxic Comment Classification
Abstract
The internet has revolutionized communication, offering platforms such as social media, blogs, and comment sections for people to connect. However, these platforms have also seen a rise in abusive language, hate speech, and cyberbullying. This work explores training models to identify harmful comments across several classes, comparing three machine learning approaches: Naive Bayes, Logistic Regression, and Support Vector Machine (SVM). Comments are classified as toxic, offensive, disparaging, hateful, or healthy (non-toxic). SVM outperformed the other approaches, achieving 97.5% accuracy on English posts and 92.9% on Roman Urdu posts, and showed a strong capacity to identify hate speech and insults. This research is significant because it contributes to technologies that online platforms can employ to detect and remove harmful content, making the internet a safer and more secure place.
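To illustrate the kind of classifier comparison the abstract describes, the sketch below trains a minimal bag-of-words Naive Bayes model (one of the three approaches evaluated) on a tiny hand-made sample. The training comments and labels here are illustrative assumptions, not the paper's dataset, and the implementation is a simplified sketch rather than the authors' pipeline.

```python
# Minimal multinomial Naive Bayes sketch for toxic-comment classification.
# The toy training set is hypothetical; labels mirror the paper's classes.
from collections import Counter, defaultdict
import math

def tokenize(text):
    return text.lower().split()

def train_nb(samples):
    """samples: list of (text, label) pairs. Returns a model dict."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in samples:
        class_counts[label] += 1
        for tok in tokenize(text):
            word_counts[label][tok] += 1
            vocab.add(tok)
    return {"class_counts": class_counts, "word_counts": word_counts,
            "vocab": vocab, "total": sum(class_counts.values())}

def predict(model, text):
    best_label, best_score = None, float("-inf")
    v = len(model["vocab"])
    for label, c in model["class_counts"].items():
        # log prior + log likelihoods with Laplace (add-one) smoothing
        score = math.log(c / model["total"])
        total_words = sum(model["word_counts"][label].values())
        for tok in tokenize(text):
            score += math.log((model["word_counts"][label][tok] + 1)
                              / (total_words + v))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

train = [
    ("you are an idiot", "toxic"),
    ("i hate you and your kind", "hateful"),
    ("have a great day friend", "healthy"),
    ("thanks for sharing this helpful post", "healthy"),
]
model = train_nb(train)
print(predict(model, "you idiot"))  # -> toxic
```

In the study's setting, the same train/predict interface would be swapped out for Logistic Regression and SVM models over a much larger labeled corpus, with SVM reported as the strongest of the three.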
License
This is an Open Access article published by the Research Center of Computing & Biomedical Informatics (RCBI), Lahore, Pakistan, under the CC BY 4.0 International License.