Aim: This research aims to create a new hate-speech detection model by utilizing a hybridized method that captures complex contextual linkages within textual data. Hate speech remains a threat to the peaceful coexistence of people in society, especially on open social networks in the current age, presenting grave obstacles to online safety and to the promotion of inclusive environments.
Methods: This is achieved by combining the strengths of BERT's attention mechanism with a context analyzer. Careful data augmentation was carried out using back translation, implemented with the deep-translator library, increasing the dataset's diversity and size to guarantee a comprehensive and reliable dataset.
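As a minimal sketch of the back-translation step, the snippet below uses the deep-translator library's GoogleTranslator to paraphrase an English sentence through a pivot language; the pivot language and sample text are illustrative assumptions, not the authors' exact setup.

```python
# Hedged sketch of back-translation augmentation with deep-translator.
# The pivot language ("fr") and sample text are assumptions for illustration.
from deep_translator import GoogleTranslator

def back_translate(text: str, pivot: str = "fr") -> str:
    """Translate English text to a pivot language and back to obtain a paraphrase."""
    intermediate = GoogleTranslator(source="en", target=pivot).translate(text)
    return GoogleTranslator(source=pivot, target="en").translate(intermediate)

sample = "Hate speech threatens peaceful coexistence on social networks."
augmented = back_translate(sample)
print(augmented)  # paraphrased variant used to enlarge and diversify the dataset
```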
Results: With the BERT layer (one of the model's two layers) kept frozen, training achieved an overall accuracy of 0.99 at the 20th epoch when identifying the multi-label hate-speech classes, using the Adam optimizer and a softmax output. The trained model's evaluation metrics show promising performance: a macro precision of 0.79875, a macro recall of 0.71587, and a macro F1-score of 0.74825.
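The sketch below illustrates, under stated assumptions, how a frozen BERT layer can be paired with a trainable classification head, a softmax output, and the Adam optimizer; the backbone name, class count, and learning rate are assumptions and not the paper's reported configuration.

```python
# Illustrative sketch (not the authors' exact code) of a frozen-BERT classifier.
# "bert-base-uncased", num_classes=3, and lr=2e-5 are assumptions for demonstration.
import torch
import torch.nn as nn
from transformers import BertModel

class FrozenBertClassifier(nn.Module):
    def __init__(self, num_classes: int = 3, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        for param in self.bert.parameters():  # freeze the BERT layer
            param.requires_grad = False
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        logits = self.classifier(outputs.pooler_output)
        return torch.softmax(logits, dim=-1)  # class probabilities via softmax

model = FrozenBertClassifier()
optimizer = torch.optim.Adam(
    filter(lambda p: p.requires_grad, model.parameters()), lr=2e-5
)
```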
Conclusion: The hybridized BERT model enables a holistic understanding of harmful content, identifying not only explicit hate speech but also subtle sensitivities and underlying meanings.
Key words: Hate Speech, Context Analyzer, BERT-Attention Mechanism, Natural Language Processing (NLP), Detection Model