A BERT-based system for multi-topic labeling of Arabic content
Abstract
Text classification (or categorization) is one of the most common natural language processing (NLP) tasks. It simplifies the management of large volumes of textual data by assigning each text to one or more categories. The task becomes challenging when several labels can apply to the same text, i.e., multi-label classification, and it is even more challenging for Arabic text due to the complex morphology and structure of the Arabic language. In this paper, we address this issue by proposing a classification system for the Mowjaz Multi-Topic Labelling Task, whose objective is to classify Arabic articles according to the 10 topics predefined in Mowjaz. The proposed system is based on AraBERT, a pre-trained BERT model for the Arabic language. It first tokenizes and encodes the input articles with the AraBERT model; a fully connected neural network is then applied to the AraBERT output to assign topics to each article. Experiments on the Mowjaz dataset show an accuracy of 0.865 on the development set and 0.851 on the test set.
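The abstract describes a two-stage architecture: AraBERT produces a representation of the article, and a fully connected layer scores each of the 10 Mowjaz topics independently for multi-label output. The following minimal Python sketch illustrates that idea under stated assumptions; it is not the authors' implementation, and the model identifier ("aubmindlab/bert-base-arabert"), the sequence length, and the 0.5 decision threshold are placeholders chosen for illustration.

```python
# Sketch of an AraBERT encoder with a fully connected multi-label head.
# Assumptions: Hugging Face "transformers" + PyTorch, the model identifier
# "aubmindlab/bert-base-arabert", max_length=256, and a 0.5 threshold.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

NUM_TOPICS = 10  # the 10 topics predefined in Mowjaz

class MowjazClassifier(nn.Module):
    def __init__(self, model_name: str = "aubmindlab/bert-base-arabert"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden_size = self.encoder.config.hidden_size
        # Fully connected head mapping the article representation to topic logits.
        self.classifier = nn.Linear(hidden_size, NUM_TOPICS)

    def forward(self, input_ids, attention_mask):
        outputs = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # [CLS] token representation
        return self.classifier(cls_repr)            # one logit per topic

tokenizer = AutoTokenizer.from_pretrained("aubmindlab/bert-base-arabert")
model = MowjazClassifier()
model.eval()

# Tokenize a placeholder Arabic article and predict its topics.
batch = tokenizer(["نص المقال هنا"], padding=True, truncation=True,
                  max_length=256, return_tensors="pt")
with torch.no_grad():
    logits = model(batch["input_ids"], batch["attention_mask"])
probs = torch.sigmoid(logits)        # independent per-topic probabilities
predicted = (probs > 0.5).int()      # multi-label decision per topic
```

For training such a head on multi-label data, a binary cross-entropy objective (e.g. `nn.BCEWithLogitsLoss`) over the 10 topic logits is the usual choice, since each topic is predicted independently.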