MapReduce: Simplified Data Processing on Large Cluster

DOI: doi.org/10.21276/ijre.2018.5.5.4

CITATION: Dayalan, M. (2018). MapReduce: Simplified Data Processing on Large Cluster. International Journal of Research and Engineering, 5(5), 399-403. doi:10.21276/ijre.2018.5.5.4

Author(s): ¹Muthu Dayalan

Affiliation(s): ¹Senior Software Developer, Anna University, India

Abstract:

MapReduce is a data processing approach in which a single machine acts as a master, assigning map and reduce tasks to all the other machines in the cluster. Technically, it can be considered a programming model, together with an associated implementation, for processing and generating large data sets. The key concept behind MapReduce is that the programmer states the problem in terms of two basic functions, map and reduce. Scalability is handled within the system rather than by the programmer. By placing restrictions on the programming style, MapReduce provides several managed features such as fault tolerance, locality optimization, load balancing, and massive parallelization. The map function generates intermediate key/value pairs, which are then fed to the reduce workers through the underlying file system. The data received by the reduce workers is merged by key to produce output files for the user (Dean & Ghemawat, 2008). The programmer is therefore only required to master and write code for these two easy-to-understand functions.
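To make the two-function model concrete, the classic word-count example is sketched below as a minimal single-process Python simulation. It is illustrative only and not taken from the paper: the names map_fn, reduce_fn, and run_mapreduce are assumptions, and the grouping step shown in-memory here is what a real MapReduce system performs as a distributed shuffle across the cluster.

from collections import defaultdict

def map_fn(doc_name, text):
    """Map: emit an intermediate (word, 1) pair for every word in the input."""
    for word in text.split():
        yield (word, 1)

def reduce_fn(word, counts):
    """Reduce: merge all intermediate values that share the same key."""
    return (word, sum(counts))

def run_mapreduce(inputs):
    """Single-process stand-in for the framework: apply map to every input,
    group intermediate pairs by key (the shuffle), then apply reduce per key."""
    intermediate = defaultdict(list)
    for doc_name, text in inputs:
        for key, value in map_fn(doc_name, text):
            intermediate[key].append(value)
    return [reduce_fn(key, values) for key, values in sorted(intermediate.items())]

if __name__ == "__main__":
    docs = [("d1", "the quick brown fox"), ("d2", "the lazy dog the end")]
    print(run_mapreduce(docs))
    # [('brown', 1), ('dog', 1), ('end', 1), ('fox', 1),
    #  ('lazy', 1), ('quick', 1), ('the', 3)]

In a real deployment the master would partition the inputs among map workers and route each intermediate key to a reduce worker, but the programmer's contract is exactly the two functions shown.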
