Running Hadoop with Lustre
This page describes how to integrate Apache Hadoop with Lustre. We have made several enhancements to improve the use of Hadoop with Lustre and have run performance tests comparing Lustre with HDFS when used with Hadoop.
Advantages of Using Hadoop with Lustre
Using Hadoop with Lustre provides several advantages, including:
- Lustre is a true parallel file system, which enables temporary or intermediate data to be stored in parallel across multiple nodes, reducing the load on any single node.
- Lustre has its own network protocol (LNET), which is more efficient for bulk data transfer than HTTP. Additionally, because Lustre is a shared file system, each client sees the same file system image, so hard links can be used to avoid transferring data between nodes (see the sketch after this list).
- Lustre is more easily extended and can be mounted as a normal POSIX file system.
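
The hard-link point above can be illustrated with a short sketch. This is not code from the Hadoop/Lustre integration itself; it is a minimal example assuming a shared Lustre mount at the hypothetical path /mnt/lustre that every node sees, in which a completed map task "publishes" its output into a reducer-visible directory by creating a hard link rather than copying the data or serving it over HTTP.

 import java.io.IOException;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 
 public class PublishMapOutput {
     public static void main(String[] args) throws IOException {
         // Hypothetical layout on a shared Lustre mount; all nodes see the same paths.
         Path mapOutput  = Paths.get("/mnt/lustre/job_0001/map_0003/part-00000");
         Path shuffleDir = Paths.get("/mnt/lustre/job_0001/shuffle/reduce_0000");
         Files.createDirectories(shuffleDir);
 
         // A hard link makes the map output visible under the reducer's directory
         // without copying any bytes or sending them over HTTP: both names refer
         // to the same file on the shared file system.
         Files.createLink(shuffleDir.resolve("map_0003.out"), mapOutput);
     }
 }

Because both paths live on the same Lustre file system, creating the link is a metadata-only operation, and the reducer can then read the map output in place.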
Disadvantages of Using Hadoop with HDFS
Using Hadoop with HDFS has several drawbacks, including:
- Hadoop sometimes generates a large amount of temporary or intermediate data during the Map/Reduce process. HDFS stores these files on the local disk, which places a considerable load on the operating system and local disk I/O.
- During the Map/Reduce process, the Reduce node uses the HTTP protocol to retrieve Map results from the Map node. HTTP is not a good choice for large data transfers: it adds protocol overhead and forces the map output to be copied over the network instead of being read in place from a shared file system.
- HDFS is designed around Map/Reduce jobs, which makes it difficult to extend or use as a normal file system.
- Hadoop is time-consuming when handling large numbers of small files.
Comparing the Performance of Lustre and HDFS
The paper Using Lustre with Apache Hadoop provides an overview of Hadoop and HDFS and describes how to set up Lustre with Hadoop. It also presents performance results for several categories of test cases, including application tests (word count, reading and outputting large non-splittable files), sub-phase tests that identify the time-consuming phases of specific applications, and benchmark I/O tests.
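
As a rough illustration of the kind of setup the paper describes, the sketch below points a Hadoop job configuration at a Lustre mount instead of HDFS. The mount point /mnt/lustre is a hypothetical example and the property values are only an assumption about how such a setup might look; fs.default.name and hadoop.tmp.dir are the standard Hadoop property names of that era, but the exact configuration used in the paper may differ.

 import java.io.IOException;
 
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FSDataOutputStream;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;
 
 public class LustreAsDefaultFs {
     public static void main(String[] args) throws IOException {
         Configuration conf = new Configuration();
 
         // Use the local (POSIX) file system driver rather than hdfs://, and keep
         // Hadoop's temporary data on the shared Lustre mount (/mnt/lustre is a
         // hypothetical mount point).
         conf.set("fs.default.name", "file:///");
         conf.set("hadoop.tmp.dir", "/mnt/lustre/hadoop-tmp");
 
         FileSystem fs = FileSystem.get(conf);
 
         // Every node sees the same namespace, so job input and output can live
         // directly under the shared mount.
         Path out = new Path("/mnt/lustre/demo/hello.txt");
         FSDataOutputStream stream = fs.create(out, true);
         stream.writeBytes("hello from Hadoop on Lustre\n");
         stream.close();
     }
 }

With the default file system set to file:///, Map/Reduce tasks read and write Lustre through the ordinary POSIX interface, which is the property the advantages listed above rely on.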