Map-Reduce Based High Performance Clustering On Large Scale Dataset Using Parallel Data Processing
The amount of data in our world has been exploding, and analyzing large data sets, so-called big data, is becoming a key basis of competition, fueling new waves of productivity growth, innovation, and consumer surplus. Big data refers to datasets that have grown too large to be captured, stored, and processed by traditional methods in a tolerable amount of time. Apache Hadoop is an open-source software framework for the storage and large-scale processing of datasets on clusters of commodity hardware. It includes the MapReduce programming framework, which makes it easy to write applications that process vast amounts of data (multi-terabyte datasets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Clustering analysis is an unsupervised learning task that classifies objects into groups such that objects within a group share similar features and differ from objects in other groups. This paper shows that a K-means clustering algorithm implemented on the MapReduce framework can achieve higher performance when handling large-scale automatic document classification in a multi-node environment.
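To illustrate how K-means decomposes into map and reduce phases, the following is a minimal single-process sketch in Python. It only simulates the MapReduce pattern in memory; the function names, toy data, and iteration count are illustrative assumptions, not the paper's actual Hadoop implementation.

```python
# Sketch: K-means expressed as MapReduce-style map and reduce phases.
# The map phase assigns each point to its nearest centroid; the reduce
# phase recomputes each centroid as the mean of its assigned points.
from collections import defaultdict
import math

def mapper(point, centroids):
    # Map phase: emit (nearest-centroid-index, point) for one data point.
    distances = [math.dist(point, c) for c in centroids]
    return distances.index(min(distances)), point

def reducer(cluster_points):
    # Reduce phase: average all points assigned to one centroid.
    n = len(cluster_points)
    return tuple(sum(coords) / n for coords in zip(*cluster_points))

def kmeans_mapreduce(points, centroids, iterations=10):
    for _ in range(iterations):
        groups = defaultdict(list)
        for p in points:                       # map + shuffle/group by key
            key, value = mapper(p, centroids)
            groups[key].append(value)
        centroids = [reducer(pts) for pts in groups.values()]  # reduce
    return sorted(centroids)

# Two well-separated toy clusters; converges in one iteration here.
points = [(1.0, 1.0), (1.2, 0.8), (8.0, 8.0), (8.2, 7.8)]
final = kmeans_mapreduce(points, [(0.0, 0.0), (10.0, 10.0)])
```

In a real Hadoop job the mapper and reducer would run on separate nodes, with the framework handling the shuffle of `(cluster_id, point)` pairs and the updated centroids redistributed between iterations.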