Word Embedding for High Performance Cross-Language Plagiarism Detection Techniques





Keywords: Plagiarism, Cross-Language, FastText, Word2Vec, Doc2Vec, GloVe, Sen2Vec, BERT, SLSTM


Academic plagiarism has become a serious concern, as it retards scientific progress and violates intellectual property. In this context, we present a study aimed at detecting cross-language plagiarism based on Natural Language Processing (NLP), embedding techniques, and deep learning. Many systems have been developed to tackle this problem, and many rely on machine learning and deep learning methods. In this paper, we propose a Cross-Language Plagiarism Detection (CL-PD) method based on the Doc2Vec embedding technique and a Siamese Long Short-Term Memory (SLSTM) model. Embedding techniques help capture the text's contextual meaning and improve the CL-PD system's performance. To show the effectiveness of our method, we conducted a comparative study with other techniques such as GloVe, FastText, BERT, and Sen2Vec on a dataset combining PAN11, JRC-Acquis, Europarl, and Wikipedia. The experiments for the Spanish-English language pair show that Doc2Vec+SLSTM achieves the best results compared to other relevant models, with an accuracy of 99.81%, a precision of 99.75%, a recall of 99.88%, an F-score of 99.70%, and a very small loss in the test phase.
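The pairing of document embeddings with a Siamese network can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's implementation: the vectors below are hypothetical stand-ins for Doc2Vec embeddings of a suspicious and a source document, and the exp(-L1) similarity is the Manhattan output layer commonly used in Siamese LSTMs.

```python
import numpy as np

def siamese_similarity(h1, h2):
    """exp(-||h1 - h2||_1): Manhattan similarity often used as the
    output layer of a Siamese LSTM; returns a score in (0, 1],
    where 1.0 means the two representations are identical."""
    return float(np.exp(-np.sum(np.abs(h1 - h2))))

# Hypothetical document vectors standing in for Doc2Vec embeddings
# of a Spanish document and an English candidate source (illustrative values).
doc_es = np.array([0.12, -0.40, 0.33])
doc_en = np.array([0.10, -0.38, 0.30])

score = siamese_similarity(doc_es, doc_en)
is_plagiarism = score > 0.5  # hypothetical decision threshold
```

In the full pipeline described by the paper, each document in a Spanish-English pair would first be mapped to a fixed-length vector by Doc2Vec, and the Siamese branches would process the two representations before this similarity comparison.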




How to Cite

Bouaine, C., Benabbou, F., & Sadgali, I. (2023). Word Embedding for High Performance Cross-Language Plagiarism Detection Techniques. International Journal of Interactive Mobile Technologies (iJIM), 17(10), pp. 69–91. https://doi.org/10.3991/ijim.v17i10.38891