Transformers have proven effective across a wide range of machine learning tasks. However, their "black box" nature often obscures their decision-making processes, particularly for Arabic, posing a barrier to broader adoption and trust. This study examines the interpretability of three Arabic transformer models fine-tuned for semantic search. Through a focused case study on retrieving information from the Holy Qur'an, we apply Explainable AI (XAI) techniques, namely LIME and SHAP, to shed light on how these models arrive at their predictions. The paper highlights the unique challenges posed by the Qur'anic text and demonstrates how XAI can substantially improve the transparency and interpretability of semantic search systems for such complex material. Our findings show that applying XAI techniques to Arabic transformer models on Qur'anic content not only demystifies the models' internal mechanics but also makes the insights they produce accessible to a broader audience. The contribution is twofold: it enriches XAI research in the context of Arabic semantic search, and it illustrates the utility of these techniques for deepening our understanding of intricate religious texts. By offering this nuanced approach to interpreting Arabic transformer models for semantic search, the study underscores the potential of XAI to bridge the gap between advanced machine learning technology and the needs of users exploring complex texts like the Holy Qur'an.
Our code is available at https://gist.github.com/a-mustafa/51fcacf30ecdf0c13ac91ad16fecfa89
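To make the setup concrete, the following is a minimal sketch, not the released code linked above, of how LIME can attribute a query-verse relevance score to individual words. It assumes the sentence-transformers and lime packages, and uses a public multilingual embedding model as a stand-in for the paper's fine-tuned Arabic models.

```python
# Minimal illustrative sketch: explaining a semantic-search relevance score with LIME.
# Assumption: a multilingual sentence-embedding model stands in for the paper's
# fine-tuned Arabic models, which are not named in the abstract.
import numpy as np
from lime.lime_text import LimeTextExplainer
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

query = "الصبر"  # example query: "patience"
query_emb = model.encode(query, convert_to_tensor=True)

def relevance(texts):
    """Cosine similarity between the fixed query and each perturbed verse.

    LIME perturbs the verse by dropping words, scores each perturbation with
    this function, and fits a local linear model that attributes the relevance
    score to individual words. Returns shape (n_texts, 1) so LIME treats the
    score as a single output.
    """
    embs = model.encode(list(texts), convert_to_tensor=True)
    return util.cos_sim(query_emb, embs).cpu().numpy().reshape(-1, 1)

verse = "واستعينوا بالصبر والصلاة وإنها لكبيرة إلا على الخاشعين"
explainer = LimeTextExplainer()
exp = explainer.explain_instance(
    verse, relevance, labels=(0,), num_features=6, num_samples=500
)
print(exp.as_list(label=0))  # word -> contribution to the relevance score
```

The same wrapper function can be passed to SHAP's text masker to obtain Shapley-value attributions instead of LIME's local linear weights.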
Keywords: Arabic NLP, Explainable Machine Learning, Semantic Search, Transformers