Reading and Writing Data from the Data-Node




 

Basic Architecture of the Hadoop Distributed File System:


Hadoop is defined as follows: “Apache Hadoop is an open-source software framework for storage and large-scale processing of data-sets on clusters of commodity hardware”. The large-scale storage is the responsibility of the Hadoop Distributed File System (HDFS). HDFS follows a master-slave architecture: the master is the name-node and the slaves are the data-nodes. The architecture of the Hadoop file system is shown in the diagram below.

Basic HDFS Architecture

The functions and components of HDFS are:


1) Client:

The client can be any Java program or system that the user employs to interact with HDFS. All commands for a read or a write are sent to the Hadoop framework through this client. The client can issue a read command to read data, a write command to write data into HDFS, and many other commands, such as compressing data, computing a checksum, or executing a MapReduce job.
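As an illustration, here is a minimal client sketch using Hadoop's public FileSystem API. The cluster URI and file path are hypothetical placeholders, not values from this article.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.net.URI;

public class HdfsClientSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to the name-node; the URI below is a placeholder.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // Metadata operations like this go to the name-node; actual block
        // data flows directly between the client and the data-nodes.
        Path file = new Path("/user/demo/sample.txt");
        System.out.println(file + " exists: " + fs.exists(file));

        fs.close();
    }
}
```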

2) Name-node:

The name-node is the master node in the HDFS architecture. It is the brain of the system: it stores all the metadata about the data. It allocates block numbers to new data that needs to be written into HDFS and records where that data has been written. When the client requests a read, the name-node searches its indexes for the block addresses of that data. A Hadoop system has two name-nodes: the primary name-node and a passive name-node, which is made active when the primary name-node goes down.

3) Data-node:

The data-nodes are the physical data storage nodes; the data itself is stored on them. The disks in these data-nodes are divided into blocks. The name-node allocates the data-node and the block when data is about to be written.

Written data has a default replication factor of 3. That means when data is written into HDFS, it is also stored on two further data-nodes that hold copies of the first. This is done so that if one of the data-nodes fails, the data is not lost: it can still be read from the replicas on the other data-nodes.
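The replication factor is a per-file setting. A small sketch, assuming the same hypothetical path as before, of how a client could control it through the standard "dfs.replication" configuration key and FileSystem.setReplication():

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // "dfs.replication" is the standard HDFS key; 3 is already the default.
        conf.setInt("dfs.replication", 3);

        FileSystem fs = FileSystem.get(conf);
        // The factor can also be changed per file after creation.
        fs.setReplication(new Path("/user/demo/sample.txt"), (short) 3);
        fs.close();
    }
}
```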

Aside from the replication pipeline, the data-nodes do not communicate with one another; instead, each sends its status (a heartbeat) to the name-node to show that it is still functioning. If a data-node stops sending heartbeat signals, the name-node recognizes that the data-node is down and automatically re-replicates its blocks onto other data-nodes from the surviving replicas.
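This failure-detection loop can be pictured with a conceptual sketch. The class below is illustrative only, not Hadoop's internal code; the timeout roughly mirrors HDFS's default dead-node interval of about ten minutes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class HeartbeatMonitor {
    // Roughly HDFS's default dead-node interval; illustrative only.
    private static final long DEAD_NODE_TIMEOUT_MS = 10 * 60 * 1000;

    private final Map<String, Long> lastHeartbeat = new ConcurrentHashMap<>();

    // Called whenever a data-node reports in.
    public void onHeartbeat(String dataNodeId) {
        lastHeartbeat.put(dataNodeId, System.currentTimeMillis());
    }

    // Periodic scan: any node whose heartbeat has lapsed is treated as
    // dead, and its blocks would be re-replicated from surviving replicas.
    public void checkForDeadNodes() {
        long now = System.currentTimeMillis();
        lastHeartbeat.forEach((node, last) -> {
            if (now - last > DEAD_NODE_TIMEOUT_MS) {
                System.out.println(node + " missed heartbeats; re-replicating its blocks");
            }
        });
    }
}
```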


THE PROCEDURE OF READING THE DATA FROM THE DATA-NODE:

 

Procedure of reading data

The client sends an open request to the name-node; the request contains the name of the file that the client wants to open. The name-node searches its indexes for that file and returns the addresses of the file's blocks. On receiving the block addresses, the client issues the read command to the data-nodes, creating an input stream through which to read the data. The data-nodes then stream the block data into that input stream.

As the first block of data is read from the file system, the address of the second block is relayed from the name-node to the client, which in turn passes it to the input stream so that the next block's data can be read. After the reading is completed, the client issues a close command that closes the input stream.
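A minimal sketch of this read path from the client's side, using Hadoop's public FileSystem API; the path is a hypothetical placeholder. The block-by-block mechanics described above are hidden behind the returned stream.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReadSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // open(): the name-node is consulted for the file's block addresses.
        try (FSDataInputStream in = fs.open(new Path("/user/demo/sample.txt"))) {
            byte[] buffer = new byte[4096];
            int n;
            // As the stream crosses a block boundary, the address of the
            // next block is fetched and the next data-node is read from.
            while ((n = in.read(buffer)) != -1) {
                System.out.write(buffer, 0, n);
            }
        } // Leaving the try block issues the close on the input stream.
        fs.close();
    }
}
```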


THE PROCEDURE OF WRITING THE DATA TO THE DATA-NODE:

 

Procedure of writing data


The client sends a create() request to the name-node. The name-node processes that request by performing a few checks:
  1. Whether a file of the same name already exists.
  2. Whether the client that sent the request has the authority to create the file.
  3. Whether the request satisfies the protocols defined by the administrator.
Once these checks pass, the name-node allocates the data-nodes and the block addresses according to the algorithm set up by the administrator.
 

The allocated addresses are sent back to the client, which then creates an output stream. The data is broken down into equal-sized chunks, which are arranged into a waiting queue. The output stream writes each chunk of data from the waiting queue into the block; while a chunk is being written into the data-node, it is moved into the ack queue.
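From the client's side this whole sequence is hidden behind the create() call, as the sketch below illustrates; the path is again a hypothetical placeholder.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // create(): the name-node runs its checks and allocates blocks.
        try (FSDataOutputStream out = fs.create(new Path("/user/demo/output.txt"))) {
            // Internally the stream splits these bytes into packets, queues
            // them, and moves each packet to the ack queue while in flight.
            out.write("hello hdfs".getBytes(StandardCharsets.UTF_8));
        } // close(): remaining packets are flushed and the lease is released.
        fs.close();
    }
}
```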

The HDFS client that opens a file for writing is granted a lease on the file; no other client can write to that file. The writing client periodically renews the lease by sending heartbeat signals to the name-node, and when the file is closed the lease is revoked. The lease duration is bounded by a soft limit and a hard limit. Until the soft limit expires, the client has exclusive access to the file; if the soft limit expires and the client has neither closed the file nor renewed the lease, another client can take over the lease.
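The two limits can be pictured with a small conceptual sketch. The one-minute soft limit and one-hour hard limit below match HDFS's documented defaults, but the class itself is illustrative, not Hadoop's internal code.

```java
public class LeaseSketch {
    static final long SOFT_LIMIT_MS = 60_000L;      // 1 minute (HDFS default)
    static final long HARD_LIMIT_MS = 60 * 60_000L; // 1 hour (HDFS default)

    long lastRenewalMs; // updated on each lease renewal from the writer

    // After the soft limit lapses with no renewal and no close,
    // another client may take over the lease.
    boolean anotherClientMayPreempt(long nowMs) {
        return nowMs - lastRenewalMs > SOFT_LIMIT_MS;
    }

    // After the hard limit, the name-node closes the file on the
    // writer's behalf and recovers the lease itself.
    boolean nameNodeRecoversLease(long nowMs) {
        return nowMs - lastRenewalMs > HARD_LIMIT_MS;
    }
}
```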

An HDFS file consists of blocks. When another block is needed to continue writing the file, the name-node generates a block with a unique block number and determines the data-nodes that will hold the block's replicas. The data-nodes form a pipeline through which the bytes flow, pushed as a sequence of packets.

After data has been written to HDFS, there is no guarantee that the data is visible to a new reader until the file is closed.
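If the writer needs the data to become visible earlier, FSDataOutputStream exposes hflush() and hsync() for that purpose. A small sketch, reusing a stream like the one from the write example above:

```java
import org.apache.hadoop.fs.FSDataOutputStream;

import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class HdfsFlushSketch {
    // Push buffered data out so that new readers can see it before close().
    static void writeAndExpose(FSDataOutputStream out) throws IOException {
        out.write("partial record".getBytes(StandardCharsets.UTF_8));
        out.hflush(); // visible to new readers, but not necessarily on disk yet
        out.hsync();  // stronger: flushed to disk on the data-nodes
    }
}
```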


In a cluster of thousands of nodes, node failures are to be expected. A replica stored on a DataNode may become corrupted because of faults in memory, disk, or network. HDFS therefore generates and stores checksums for each data block of an HDFS file.


Checksums are verified by the HDFS client while reading: the client recalculates the checksum to detect any corruption caused by the client, the DataNodes, or the network.

When a client creates an HDFS file, the client computes the checksum sequence for each block and sends it to a DataNode along with the data. A DataNode stores checksums in a metadata file separate from the block's data file.

When HDFS reads a file, each block's data and checksums are shipped to the client. The client computes the checksum for the received data and verifies that the newly computed checksums match the checksums it received. If they do not, the client notifies the NameNode of the corrupt replica and then fetches a different replica of the block from another DataNode.
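A conceptual sketch of that per-chunk verification on the client side is below. HDFS checksums every 512 bytes by default (CRC32C in recent versions); plain java.util.zip.CRC32 is used here only to keep the illustration self-contained.

```java
import java.util.zip.CRC32;

public class ChecksumSketch {
    static final int BYTES_PER_CHECKSUM = 512; // HDFS default chunk size

    // Compare the checksum shipped with a chunk against a fresh one.
    static boolean chunkIsHealthy(byte[] chunk, long shippedChecksum) {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, chunk.length);
        // On a mismatch, the client would report the corrupt replica to
        // the NameNode and read the block from another DataNode instead.
        return crc.getValue() == shippedChecksum;
    }
}
```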
If the replication factor is 3, three data-nodes are allocated to store the data, and the writes to their blocks are serialized: only once the data has been copied to the first data-node is it saved to the second, and after that to the third. After the data has been written to all three data-nodes, an acknowledgement is sent back to the output stream, and the data is then deleted from the ack queue.
