International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA
Volume 129 - Number 15
Year of Publication: 2015
Authors: Arpit Gupta, Rajiv Pandey, Komal Verma
DOI: 10.5120/ijca2015907156
Arpit Gupta, Rajiv Pandey, Komal Verma. Analysing Distributed Big Data through Hadoop Map Reduce. International Journal of Computer Applications 129, 15 (November 2015), 26-31. DOI=10.5120/ijca2015907156
This term paper focuses on how big data is analysed in a distributed environment through Hadoop Map Reduce. Big Data resembles “small data” but is far larger in volume, and must therefore be approached differently: storing it requires analysing the characteristics of the data, and processing it can be carried out with Hadoop Map Reduce. Map Reduce is a programming model for parallel processing on large clusters. Hadoop Map Reduce follows a set of design principles that address the challenges of cluster computing: it hides the complexity of distribution and fault handling from the programmer and minimizes the movement of data across the cluster by moving computation to where the data resides.
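To make the programming model in the abstract concrete, the following is a minimal sketch (not the authors' code, and not the Hadoop API itself) of the map, shuffle, and reduce phases applied to word counting, the canonical MapReduce example. The function names and the in-memory shuffle are illustrative assumptions; in Hadoop each phase would run distributed across cluster nodes.

```python
# Minimal illustration of the MapReduce model: map emits key-value pairs
# per input record, shuffle groups pairs by key (as Hadoop does between
# the map and reduce phases), and reduce aggregates each group.
from collections import defaultdict

def map_phase(records):
    # Map: emit (word, 1) for every word in every input record.
    for record in records:
        for word in record.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: collect all values emitted under the same key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

records = ["big data big clusters", "data clusters data"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts → {"big": 2, "data": 3, "clusters": 2}
```

Because map runs independently per record and reduce independently per key, both phases parallelize naturally, which is why the model scales to large clusters while hiding the distribution details from the programmer.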