How does the JobTracker assign tasks to TaskTrackers?

Each TaskTracker periodically sends heartbeat messages to the JobTracker to signal that it is alive. These messages also report the number of free map and reduce slots on that node, so the JobTracker knows where it can schedule new tasks.
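
The sketch below illustrates the kind of bookkeeping a heartbeat carries. The class and field names are hypothetical, loosely modeled on Hadoop 1.x's TaskTrackerStatus, not the actual wire protocol.

```java
// Hypothetical sketch of a heartbeat payload; names are illustrative,
// loosely modeled on Hadoop 1.x's TaskTrackerStatus.
public class HeartbeatDemo {

    // What a TaskTracker reports on each heartbeat (simplified).
    static class TaskTrackerStatus {
        final String trackerName;
        final int maxMapSlots, occupiedMapSlots;
        final int maxReduceSlots, occupiedReduceSlots;

        TaskTrackerStatus(String name, int maxMap, int usedMap,
                          int maxReduce, int usedReduce) {
            this.trackerName = name;
            this.maxMapSlots = maxMap;
            this.occupiedMapSlots = usedMap;
            this.maxReduceSlots = maxReduce;
            this.occupiedReduceSlots = usedReduce;
        }

        int freeMapSlots()    { return maxMapSlots - occupiedMapSlots; }
        int freeReduceSlots() { return maxReduceSlots - occupiedReduceSlots; }
    }

    public static void main(String[] args) {
        // A tracker with 4 map slots (1 busy) and 2 reduce slots (0 busy).
        TaskTrackerStatus status =
            new TaskTrackerStatus("tracker_node1:50060", 4, 1, 2, 0);

        // On receiving this, the JobTracker knows the node can take
        // up to 3 more map tasks and 2 reduce tasks.
        System.out.printf("%s: %d free map slots, %d free reduce slots%n",
            status.trackerName, status.freeMapSlots(), status.freeReduceSlots());
    }
}
```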

In Hadoop MapReduce, the JobTracker is responsible for assigning tasks to TaskTrackers. The process involves the following steps:

  1. Job Submission:
    • When a client submits a MapReduce job to the cluster, the JobTracker is notified and initializes the job (a minimal submission sketch follows this list).
  2. Job Splits:
    • The input data for the job is divided into input splits, and one Map task is created per split. Each split typically corresponds to one HDFS block (see the split-size sketch after this list).
  3. Task Assignment:
    • When a TaskTracker heartbeat reports a free slot, the JobTracker assigns it a Map task, preferring one whose input split is stored on that node (data locality; see the scheduling sketch after this list). Map tasks process the input data and produce intermediate key-value pairs.
  4. Intermediate Data Shuffling:
    • The intermediate data generated by Map tasks is partitioned by key, then shuffled and sorted before being passed to the Reduce tasks. This involves copying data between nodes in the cluster (the partitioning rule is sketched after this list).
  5. Task Completion:
    • Once the Map tasks are completed, the JobTracker assigns Reduce tasks to TaskTrackers. Reduce tasks process the shuffled and sorted data.
  6. Final Output:
    • The output of the Reduce tasks is the final result of the MapReduce job.
  7. Monitoring and Failure Handling:
    • The JobTracker monitors the progress of all tasks and handles failures. If a TaskTracker fails, stops sending heartbeats, or takes too long to complete a task, the JobTracker reassigns the task to another available TaskTracker (the relevant retry and timeout settings are sketched after this list).
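
To make step 1 concrete, here is a minimal job submission against the classic Hadoop 1.x org.apache.hadoop.mapred API, the one served by the JobTracker. The HDFS paths are placeholders; with no mapper or reducer set, Hadoop falls back to its identity implementations.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(SubmitJob.class);
        conf.setJobName("example-job");

        // Input and output locations in HDFS (placeholder paths).
        FileInputFormat.setInputPaths(conf, new Path("/data/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/data/output"));

        // Submits the job to the JobTracker and blocks until it finishes.
        JobClient.runJob(conf);
    }
}
```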
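
For step 2, the number of Map tasks falls out of the split size. This simplified sketch mirrors FileInputFormat's max(minSize, min(maxSize, blockSize)) rule, ignoring edge cases such as the trailing partial split and unsplittable compressed inputs.

```java
// Simplified split math: one split (and thus one map task) per HDFS block
// by default.
public class SplitMath {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long fileSize  = 1_000L * 1024 * 1024; // a 1000 MB input file
        long blockSize =   64L * 1024 * 1024;  // 64 MB, the Hadoop 1.x default

        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
        long numSplits = (fileSize + splitSize - 1) / splitSize; // ceiling division

        // 1000 MB / 64 MB -> 16 splits, hence 16 map tasks for this file.
        System.out.println(splitSize + " bytes per split, " + numSplits + " map tasks");
    }
}
```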
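
For step 3, the scheduling decision itself can be sketched as a locality preference. The types below are illustrative, not Hadoop's actual scheduler classes; they only show the idea of matching a tracker that has a free slot to a task whose split it already stores.

```java
import java.util.List;

// Illustrative scheduler logic (hypothetical types, not Hadoop's own):
// prefer a pending map task whose input split is replicated on the
// heartbeating node, since reading a local block avoids network I/O.
public class LocalityDemo {
    record MapTaskStub(String taskId, List<String> splitHosts) {}

    static MapTaskStub pickTask(List<MapTaskStub> pending, String trackerHost) {
        // First pass: a data-local task (split replica on the same host).
        for (MapTaskStub t : pending) {
            if (t.splitHosts().contains(trackerHost)) return t;
        }
        // Fallback: any pending task (real schedulers try rack-local first).
        return pending.isEmpty() ? null : pending.get(0);
    }

    public static void main(String[] args) {
        List<MapTaskStub> pending = List.of(
            new MapTaskStub("task_0001_m_000000", List.of("node1", "node2")),
            new MapTaskStub("task_0001_m_000001", List.of("node3", "node4")));

        // node3 heartbeats with a free map slot: it gets the task whose split it holds.
        System.out.println(pickTask(pending, "node3").taskId()); // task_0001_m_000001
    }
}
```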
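
For step 4, which Reduce task receives a given intermediate key is decided by the partitioner. The sketch below restates the default HashPartitioner rule in plain Java; the bit mask keeps the result non-negative for keys whose hash code is negative.

```java
// The default HashPartitioner rule, restated in plain Java.
public class PartitionDemo {
    static int getPartition(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        int reducers = 3;
        for (String key : new String[] {"apple", "banana", "cherry"}) {
            // Every map output with the same key lands in the same partition,
            // and therefore at the same reduce task, no matter which mapper
            // produced it; that is what makes the shuffle deterministic.
            System.out.println(key + " -> reducer " + getPartition(key, reducers));
        }
    }
}
```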
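
For step 7, the retry and timeout behavior is controlled by a handful of job properties. The Hadoop 1.x property names below are real; the values shown are the usual defaults, set explicitly here for illustration.

```java
import org.apache.hadoop.mapred.JobConf;

public class FailureTuning {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Re-run a failed task attempt up to 4 times before failing the job.
        conf.setInt("mapred.map.max.attempts", 4);
        conf.setInt("mapred.reduce.max.attempts", 4);

        // A task that reports no progress for 10 minutes is declared failed.
        conf.setInt("mapred.task.timeout", 600_000);

        // Speculative execution: launch a duplicate of a slow task on another
        // TaskTracker and keep whichever attempt finishes first.
        conf.setBoolean("mapred.map.tasks.speculative.execution", true);
    }
}
```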

It’s important to note that the JobTracker is a single point of failure in Hadoop 1.x. In Hadoop 2.x and later, YARN replaced it with a ResourceManager and per-application ApplicationMasters, which split the JobTracker’s responsibilities and provide better scalability and fault tolerance.