Comparing Human Translation And Google Translate: Enhancing Communication For Oral Health
Abstract
Computer software-based translation of texts from one language to another is assuming increasing importance across many fields. This study assessed the accuracy of Google Translate (GT) in translating the English SOHO-5 (E-SOHO-5) into Arabic compared with a human translator (HT). We evaluated the quality of the GT and professional HT translations by comparing each to a reference translation created by a multidisciplinary expert committee, using the BiLingual Evaluation Understudy (BLEU) metric. The translations produced by GT were also assessed and edited by the expert committee. Human translation consistently outperformed GT in BLEU scores for unigrams, trigrams, and tetragrams, whereas GT outperformed the HT for bigrams. The average BLEU score was 0.447 for the human translation and 0.441 for GT. Overall, GT exhibited lower accuracy than human translation; to achieve linguistic and cultural equivalence in research instruments, machine translation requires post-editing.
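To make the evaluation metric concrete, the following is a minimal, single-reference sketch of BLEU in plain Python: clipped n-gram precisions for orders 1 through 4 (the unigram-to-tetragram range mentioned above), combined by geometric mean and multiplied by a brevity penalty. This is an illustrative simplification, not the exact scoring pipeline used in the study, and the example sentences are invented.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def clipped_precision(candidate, reference, n):
    """Clipped n-gram match count and total candidate n-gram count."""
    cand_counts = Counter(ngrams(candidate, n))
    ref_counts = Counter(ngrams(reference, n))
    clipped = sum(min(count, ref_counts[g]) for g, count in cand_counts.items())
    return clipped, sum(cand_counts.values())

def bleu(candidate, reference, max_n=4):
    """Single-reference BLEU: geometric mean of modified n-gram
    precisions (orders 1..max_n) times a brevity penalty.
    No smoothing: any zero precision yields a score of 0."""
    log_precisions = []
    for n in range(1, max_n + 1):
        clipped, total = clipped_precision(candidate, reference, n)
        if total == 0 or clipped == 0:
            return 0.0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: 1 if the candidate is at least as long as the
    # reference, exp(1 - r/c) otherwise.
    c, r = len(candidate), len(reference)
    bp = 1.0 if c >= r else math.exp(1 - r / c)
    return bp * math.exp(sum(log_precisions) / max_n)

# Hypothetical example: an exact match scores 1.0; a near match scores below 1.
reference = "the cat sat on the mat".split()
candidate = "the cat sat on the mat today".split()
print(bleu(reference, reference))  # 1.0
print(bleu(candidate, reference))  # between 0 and 1
```

In a corpus-level comparison such as the one reported here, per-sentence scores (or pooled n-gram counts) would be averaged across all translated items before comparing GT against the human translator.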