Word Embedding for High Performance Cross-Language Plagiarism Detection Techniques
DOI: https://doi.org/10.3991/ijim.v17i10.38891

Keywords: Plagiarism, Cross-Language, FastText, Word2Vec, Doc2Vec, GloVe, Sen2Vec, BERT, SLSTM

Abstract
Academic plagiarism has become a serious concern, as it retards scientific progress and violates intellectual property. In this context, we present a study aimed at detecting cross-language plagiarism based on Natural Language Processing (NLP), embedding techniques, and deep learning. Many systems have been developed to tackle this problem, most relying on machine learning and deep learning methods. In this paper, we propose a Cross-Language Plagiarism Detection (CL-PD) method based on the Doc2Vec embedding technique and a Siamese Long Short-Term Memory (SLSTM) model. Embedding techniques help capture the contextual meaning of text and improve the performance of the CL-PD system. To show the effectiveness of our method, we conducted a comparative study with other techniques, such as GloVe, FastText, BERT, and Sen2Vec, on a dataset combining PAN11, JRC-Acquis, Europarl, and Wikipedia. The experiments on the Spanish-English language pair show that Doc2Vec+SLSTM achieves the best results compared to other relevant models, with an accuracy of 99.81%, a precision of 99.75%, a recall of 99.88%, an F-score of 99.70%, and a very small loss in the test phase.
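The core idea of the Siamese setup described above can be sketched in a few lines: both documents pass through the *same* shared encoder, and a distance-based head scores the pair. This is a minimal illustration only, not the authors' implementation; the encoder here is a hypothetical random projection of bag-of-words counts standing in for Doc2Vec + LSTM, and the Manhattan-distance scoring head is a common Siamese choice assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared encoder standing in for Doc2Vec + LSTM: a fixed
# random projection of a bag-of-words vector. Both sides of the Siamese
# pair must use the SAME weights -- that is the key property.
VOCAB = ["plagiarism", "detection", "weather", "forecast", "documents"]
W = rng.normal(size=(len(VOCAB), 16))

def encode(text: str) -> np.ndarray:
    """Map a document to a fixed-length vector with the shared weights."""
    counts = np.array([text.split().count(w) for w in VOCAB], dtype=float)
    return counts @ W  # one shared projection for both inputs

def similarity(a: str, b: str) -> float:
    """Score a document pair with a Manhattan-distance Siamese head:
    exp(-||h1 - h2||_1) lies in (0, 1], and equals 1.0 for identical
    encodings."""
    h1, h2 = encode(a), encode(b)
    return float(np.exp(-np.sum(np.abs(h1 - h2))))

print(similarity("plagiarism detection", "plagiarism detection"))  # 1.0
print(similarity("plagiarism detection", "weather forecast"))      # close to 0
```

In the paper's pipeline, the shared encoder would be the Doc2Vec embedding followed by the SLSTM, trained so that plagiarized pairs score high and unrelated pairs score low.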
Copyright (c) 2023 Chaimaa Bouaine, Faouzia Benabbou, Imane Sadgali
This work is licensed under a Creative Commons Attribution 4.0 International License.