
Hadoop MapReduce

MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a Hadoop cluster. It consists of:

  • A Map() function that performs filtering and sorting
  • A Reduce() function that performs a summary operation (a minimal word-count sketch follows this list)
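
To make the two phases concrete, below is a minimal sketch of the classic word-count job written against the Hadoop Java MapReduce API. The class names (WordCount, TokenizerMapper, IntSumReducer) and the use of command-line arguments for the HDFS input and output paths are illustrative assumptions, not something prescribed by the text above.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: sum the counts for each word (the "summary operation")
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // optional local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input path (illustrative)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output path (illustrative)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

In this sketch, the framework shuffles and sorts the mapper output so that each reducer receives all counts for a given word; the reducer then produces one (word, total) pair per key.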

This section explains the following in detail:

  • The MapReduce Process
  • Use Cases of MapReduce
  • Sample Cluster Configuration
  • Anatomy of a MapReduce Program
  • The Need for MapReduce
