Evaluating the Performance of Large Language Models on Arabic Lexical Ambiguities: A Comparative Study with Traditional Machine Translation Systems
Abstract
The rapid advancement of natural language processing (NLP) has led to the development of large language models (LLMs) with impressive capabilities across a range of tasks, including machine translation. However, the effectiveness of these new systems in handling linguistic complexities, such as Arabic lexical ambiguity, remains underexplored. This study investigates whether LLMs can outperform traditional machine translation (MT) systems in translating Arabic lexical ambiguities, characterized by homonyms, heteronyms, and polysemes. The evaluation involves two prominent LLMs, OpenAI's GPT and Google's Gemini, and compares their performance with that of two traditional MT systems, Google Translate and SYSTRAN. The results indicate that GPT and Gemini offer substantial improvements in translation accuracy and intelligibility over the traditional MT systems, demonstrating the advanced capabilities of LLMs in handling the complexities of Arabic and marking a significant step forward in machine translation technology. The study highlights the potential of LLMs to overcome the limitations of traditional MT systems and provides a foundation for future research. The findings contribute to the ongoing development of more effective and accurate translation systems and underscore the importance of adopting advanced AI technologies in the field of machine translation.
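To make the evaluation setting concrete, the sketch below shows one way such a comparison could be probed programmatically. It is a minimal illustration, not the authors' actual protocol: the model name (gpt-4o), the prompt wording, and the two test sentences built around the homonym عين (ʿayn: "eye", "water spring", or "spy") are assumptions, since the abstract does not specify the test items or the scoring procedure.

```python
# Minimal sketch of probing an LLM's handling of an Arabic lexical ambiguity.
# Assumptions (not from the paper): model name, prompt wording, test items,
# and the crude keyword-based sense check.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each item pairs a sentence containing the homonym "عين" (ʿayn) with the
# English sense that its context selects.
TEST_ITEMS = [
    ("اشترى بيتاً قرب عين الماء.", "spring"),  # "He bought a house near the water spring."
    ("أصيبت عينه اليسرى في الحادث.", "eye"),   # "His left eye was injured in the accident."
]

def translate(sentence: str) -> str:
    """Ask the model for an English translation of one Arabic sentence."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model; the abstract only says "OpenAI's GPT"
        messages=[{
            "role": "user",
            "content": f"Translate this Arabic sentence into English: {sentence}",
        }],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    correct = 0
    for sentence, expected_sense in TEST_ITEMS:
        translation = translate(sentence)
        # Crude sense check: did the contextually correct English word appear?
        hit = expected_sense in translation.lower()
        correct += hit
        print(f"{sentence}\n -> {translation}\n    sense '{expected_sense}' resolved: {hit}\n")
    print(f"Disambiguation accuracy: {correct}/{len(TEST_ITEMS)}")
```

A fuller harness in the spirit of the study would send the same sentences to Gemini, Google Translate, and SYSTRAN and replace the keyword check with human judgments of accuracy and intelligibility.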
Full Text: PDF
DOI: https://doi.org/10.5430/wjel.v15n3p354
This work is licensed under a Creative Commons Attribution 4.0 International License.
World Journal of English Language
ISSN 1925-0703 (Print)  ISSN 1925-0711 (Online)