The logical division of data is called an Input Split, and the physical division of data is called an HDFS Block.
In Hadoop, Input Splits and HDFS (Hadoop Distributed File System) Blocks are two fundamental concepts related to data storage and processing. Here’s the difference between them:
- HDFS Block:
- Definition: HDFS divides a large file into smaller blocks, with a default size of 128 MB in Hadoop 2.x and later (configurable via the dfs.blocksize property). Each block is a contiguous chunk of data.
- Storage: These blocks are stored on the DataNodes in the Hadoop cluster.
- Fault Tolerance: Each block is replicated (three copies by default, controlled by the dfs.replication property) across different DataNodes. This ensures that if a DataNode or a block becomes unavailable, the data can still be retrieved from a replica on another node; the sketch after this list shows how to inspect a file's blocks and their replica hosts.
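To make the physical side concrete, here is a minimal sketch using the HDFS Java API; the file path is a placeholder, so point it at any file in your cluster. It lists each physical block of a file along with the DataNodes holding its replicas:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlocks {
    public static void main(String[] args) throws Exception {
        // Hypothetical path; replace with a real file in your cluster.
        Path file = new Path("/data/input/logs.txt");

        FileSystem fs = FileSystem.get(new Configuration());
        FileStatus status = fs.getFileStatus(file);

        // Each BlockLocation is one physical HDFS block plus the
        // DataNodes holding its replicas.
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d replicas=%s%n",
                    block.getOffset(), block.getLength(),
                    String.join(",", block.getHosts()));
        }
    }
}
```

Running this against a multi-block file prints one line per physical block, each listing up to three replica hosts under the default replication factor.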
- Input Split:
- Definition: An Input Split is a logical division of data for processing in a MapReduce job. Unlike an HDFS block, it does not correspond to physical storage; it defines the unit of work for parallel processing.
- Size: The size of an Input Split is determined by the InputFormat used in the job; for file-based formats it defaults to the HDFS block size (see the sketch after this list for how to tune the bounds).
- Map Tasks: Each Input Split is processed by its own Mapper task, so the number of Input Splits determines the number of Mappers in the job. Splits usually align with block boundaries, but a record that straddles two blocks is read in full by the Mapper whose split contains the record's start.
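Split sizing is controlled through the InputFormat. The sketch below (the input path is a placeholder, and the rest of the job setup is omitted for brevity) shows how FileInputFormat's split-size bounds can be set on a job; the effective split size is max(minSize, min(maxSize, blockSize)), which with the default bounds works out to the block size:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

public class SplitSizing {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "split-sizing-demo");
        job.setInputFormatClass(TextInputFormat.class);
        FileInputFormat.addInputPath(job, new Path("/data/input")); // placeholder path

        // FileInputFormat computes the split size as
        //   max(minSize, min(maxSize, blockSize))
        // so these two bounds, together with dfs.blocksize, decide
        // how many Mapper tasks the job gets.
        FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);  // 64 MB
        FileInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024); // 256 MB

        // Mapper, Reducer, and output configuration omitted for brevity.
    }
}
```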
Summary:
- HDFS Blocks are physical divisions of data stored in the Hadoop Distributed File System, and they represent the actual chunks of data on the storage nodes.
- Input Splits are logical divisions of data used for parallel processing in MapReduce jobs, and they determine the number of Map tasks in a Hadoop job.
In essence, HDFS Blocks deal with the physical storage of data, while Input Splits deal with the logical division of data for processing in a MapReduce job.
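As a quick worked example, a 1 GB file stored with 128 MB blocks occupies 8 blocks; under default settings each split matches one block, so the job runs 8 Mappers. The snippet below mirrors the split-size rule FileInputFormat applies (simplified: it ignores the small "slop" allowance for the final split):

```java
public class SplitMath {
    // Mirrors FileInputFormat's split-size rule:
    //   splitSize = max(minSize, min(maxSize, blockSize))
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
        return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
        long mb = 1024L * 1024;
        long fileSize = 1024 * mb;  // a 1 GB file
        long blockSize = 128 * mb;  // default dfs.blocksize in Hadoop 2.x+

        // With the default bounds (min = 1, max = Long.MAX_VALUE), the
        // split size equals the block size, so the file yields 8 splits
        // and therefore 8 Mapper tasks.
        long splitSize = computeSplitSize(blockSize, 1L, Long.MAX_VALUE);
        long numSplits = (long) Math.ceil((double) fileSize / splitSize);
        System.out.println("splitSize=" + splitSize + " numSplits=" + numSplits);
    }
}
```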