Running Hadoop with Lustre

This page describes how Hadoop performs with the Lustre file system when the Hadoop Distributed File System (HDFS) is replaced by Lustre.

Advantages of Using Hadoop with Lustre

Using Hadoop with Lustre offers several advantages over HDFS, and we have made a number of enhancements to improve how Hadoop runs on Lustre. The advantages include:

  • Lustre has its own network protocol, which is better suited to bulk data transfer than the HTTP protocol. Additionally, because Lustre is a true shared file system, every client sees the same file system image, so hard links can be used to avoid transferring data between nodes (see the sketch after this list).
  • Lustre is more easily extended and can be mounted as a normal POSIX file system.
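
Because every node sees the same Lustre namespace, a map task's output can be made available to a reduce task by creating a hard link in a shared directory instead of copying the data over HTTP. The sketch below illustrates only the underlying file-system operation, using standard Java NIO; the /mnt/lustre paths and directory layout are hypothetical and not part of Hadoop itself.

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.Paths;

  // Minimal sketch: on a shared Lustre mount, a reduce task can "fetch" a
  // map output by hard-linking it rather than transferring it over the
  // network. All paths below are assumptions for illustration only.
  public class SharedMapOutputLink {
      public static void main(String[] args) throws IOException {
          // Map output written by a map task on some node.
          Path mapOutput = Paths.get("/mnt/lustre/job_001/map_0007/part.out");

          // Location where the reduce task expects to find that output.
          Path reduceInput = Paths.get("/mnt/lustre/job_001/reduce_0002/map_0007.out");
          Files.createDirectories(reduceInput.getParent());

          // The hard link makes the same bytes visible under the reducer's
          // path without moving any data between nodes.
          Files.createLink(reduceInput, mapOutput);
      }
  }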

Disadvantages of Using Hadoop with HDFS

  • Hadoop sometimes generates a large amount of temporary or intermediate data during the Map/Reduce process. HDFS stores these files on the local disk, which places a considerable load on the operating system and on local disk I/O.
  • During the Map/Reduce process, the Reduce node uses the HTTP protocol to retrieve Map results from the Map node. HTTP is not a good choice for large data transfers because it is not designed for bulk data movement.
  • Hadoop is designed around Map/Reduce jobs, which makes it difficult to extend HDFS for use as a normal file system.
  • Hadoop handles small files inefficiently, so jobs involving many small files are time-consuming.

Test Comparisons Between Lustre and HDFS

This page provides suggestions on how to set up Lustre with Hadoop and how to use Lustre stripe information to help Hadoop schedule jobs.
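
As a starting point, Hadoop can be pointed at a mounted Lustre file system through its ordinary local (POSIX) file system support. The sketch below assumes Lustre is mounted at /mnt/lustre on every node and that the hadoop/system and hadoop/tmp directories exist on that mount; these names are hypothetical, and the same properties can equally be set in the Hadoop configuration files.

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  // Minimal sketch of configuring Hadoop to use a shared Lustre mount
  // instead of HDFS. Mount point and directory names are assumptions.
  public class LustreHadoopSetup {
      public static void main(String[] args) throws IOException {
          Configuration conf = new Configuration();

          // Use the POSIX (local) file system interface; because Lustre is
          // mounted at the same path on every node, "local" paths are shared.
          conf.set("fs.default.name", "file:///");

          // Keep Hadoop's shared and temporary directories on the Lustre
          // mount rather than on each node's local disk.
          conf.set("mapred.system.dir", "/mnt/lustre/hadoop/system");
          conf.set("hadoop.tmp.dir", "/mnt/lustre/hadoop/tmp");

          // Sanity check: the default file system should now be file:///
          // and the Lustre mount should be visible through it.
          FileSystem fs = FileSystem.get(conf);
          System.out.println("Default FS: " + fs.getUri());
          System.out.println("Lustre mount visible: " + fs.exists(new Path("/mnt/lustre")));
      }
  }

With the default file system set this way, job input and output paths can be given as ordinary absolute paths on the Lustre mount, and every node reads and writes the same files directly.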

Using Lustre with Hadoop