Big Data Cleaning Algorithms in Cloud Computing

Feng Zhang, Hui-Feng Xue, Dong-Sheng Xu, Yong-Heng Zhang, Fei You

Abstract


Big data cleaning is an important research issue in cloud computing. Existing data cleaning algorithms assume that all data can be loaded into main memory at once, which is infeasible for big data. To this end, a knowledge-base-driven data cleaning algorithm for cloud computing is proposed and implemented with Map-Reduce. The algorithm first extracts the atomic knowledge of the selected nodes, then analyzes the relations among the atomic knowledge items, removes duplicate objects, builds an atomic knowledge sequence ordered by weight, and finally cleans the data according to this sequence. Experimental results show that the algorithm is effective and feasible for big data in the cloud computing environment and offers good scalability.
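
The abstract only outlines the pipeline (extract atomic knowledge, analyze relations, remove duplicates, order by weight, clean by the resulting sequence). The sketch below is a minimal illustration of the duplicate-removal and weight-ordering steps expressed as a single Map-Reduce round, simulated locally in Python; the record layout, the key choice, and the function names (map_phase, shuffle, reduce_phase) are assumptions made for illustration, not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical record layout: (node_id, attribute, value, weight).
# This is an illustrative stand-in for the paper's "atomic knowledge" items.

def map_phase(records):
    """Map: emit (key, value) pairs, keying each atomic knowledge item by
    (node, attribute, value) so that duplicate objects collide on the same key."""
    for node_id, attribute, value, weight in records:
        yield (node_id, attribute, value), (weight, (node_id, attribute, value, weight))

def shuffle(pairs):
    """Shuffle: group mapped pairs by key (handled by the framework in Hadoop)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: keep one representative per key (duplicate removal), then order
    the surviving items by weight to form the cleaning sequence."""
    survivors = []
    for key, values in groups.items():
        weight, record = max(values)          # keep the highest-weight duplicate
        survivors.append((weight, record))
    survivors.sort(key=lambda wr: wr[0], reverse=True)
    return [record for _, record in survivors]

if __name__ == "__main__":
    sample = [
        ("n1", "name", "Alice", 0.9),
        ("n1", "name", "Alice", 0.4),   # duplicate of the item above
        ("n2", "age", "31", 0.7),
    ]
    sequence = reduce_phase(shuffle(map_phase(sample)))
    print(sequence)   # deduplicated items, ordered by descending weight
```

In a real Map-Reduce deployment the shuffle is performed by the framework and the reduce step runs in parallel per key, which is what allows the approach to process data that cannot be loaded into main memory at once.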

Keywords


big data, cleaning algorithms, cloud computing, data cleaning, Map-Reduce
