WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up-to-date information.

Lustre Center of Excellence at Oak Ridge National Laboratory

<small>''(Updated: Dec 2009)''</small>
 
__TOC__
==Current projects and their status==
The Lustre™ Center of Excellence (LCE) at ORNL advances the state of Lustre for use in large-scale HPC environments. As you'll see from the projects below, the ORNL LCE focuses on both the systems and applications aspects of ensuring that Lustre meets the needs of DOE and the HPC community in general.
===Applications IO Performance===
Mike Booth, one of the LCE personnel, is investigating applications IO performance on the Lustre systems at ORNL to identify ways to achieve more consistent and higher IO performance. He is also working on enhancing the ADIOS IO layer.
Feel free to [mailto:Michael.Booth@Sun.COM contact Mike] for more information and the status of his work.
  
 
==Events==
 
 
The LCE sponsors events to encourage HPC community involvement in analyzing IO and storage requirements and identifying ways for Lustre to address these requirements.
 
  
 
===LCE Summit - February 2008===
*[[Media:LCE_Summit_Summary_Draft_March_14_2008.pdf|''LCE Summit Meeting Summary'']]
*[[Media:LCESummitSlides.pdf|''LCE Summit Slides and Notes'']]
  
 
===LCE Application I/O Workshop - April 16, 2008===
*[[Media:April2008ApplicationIOWorkshop.pdf|''Application IO Workshop, Agenda and Notes'']]
*[[Media:Lustre_workshop_WangDi.pdf|''Lustre and Application IO - Wang Di's Slides'']]
*[[Media:Lustre_workshop_Oleg.pdf|''Lustre and Application IO - Oleg Drokin's Slides'']]
  
 
===Lustre Scalability Workshop - Feb 10 & 11, 2009, ORNL===
*[[Media:Notes_on_SW1_Notes.pdf|''Scalability Workshop Notes'']]
*[[Media:LustreScalabilityWP_Updated.pdf|''Scalability White Paper'']]
*[[Media:Shipman_Feb_lustre_scalability.pdf|''Galen Shipman's presentation'']]
*[[Media:Eric-Barton_-_Lustre-Multi_PF_Roadmap-090130.pdf|''Eric Barton's presentation'']]
  
 
===Lustre Scalability Workshop - May 19 & 20, 2009, ORNL===
The May Lustre Scalability Workshop at ORNL focused on long-term (2015) HPC IO and storage requirements and on presentations covering the IO objectives of the DOD HPCS program and how Lustre will achieve them.
*[[Media:Dawson_Lustre_Workshop_May_2009.pdf|''Lustre Scalability Workshop, Initial Gap Response'' - John Dawson]]
*[[Media:Carrier_2009-05-19_ORNL_LCE_HPCS.pdf|''HPCS IO'' - John Carrier]]
*[[Media:Newman_May_Lustre_Workshop.pdf|''What is HPCS and How Does it Impact IO'' - Henry Newman]]
*[[Media:Shipman_May_lustre_scalability_workshop.pdf|''2015 Parallel File System Requirements'' - Galen Shipman]]
*[[Media:Dilger_Lustre_HPCS_May_Workshop.pdf|''Lustre HPCS Design Overview'' - Andreas Dilger]]

===Scalability Workshop Follow Up===
*[[Media:Lustre_Scalability_Workshop.pdf|''Scalability Gap Response'']], dated October 2009, is the final version of Sun's response to the scalability gaps identified and discussed during the LCE Scalability Workshops in February and May of 2009.
  
 
==White Papers==
LCE personnel have written a variety of papers on high-performance IO and potential Lustre features. Links to these documents are below.
*[https://www.sun.com/offers/details/Peta-Scale_wp.xml ''Lustre Scalability - An Oak Ridge National Laboratory/Lustre Center of Excellence Paper'']
*[[Media:Lce_pop_submitted.pdf|''Improving I/O Performance in POP (Parallel Ocean Program)'']]
  
 
*Scalability Improvements
**[[Media:Lustre_Enhancement_Report_Interval_Trees.pdf|''Using Interval Tree to scale extent locks'']]
**[[Media:Lustre_Enhancement_Report_UUID_Hash_Tables.pdf|''Implement hash tables to scale export lookups'']]
**[[Media:A_Novel_Network_Request_Scheduler_for_a_Large_Scale_Storage_System.pdf|''A Novel Network Request Scheduler for a Large Scale Storage System'']]
  
 
*Lustre ADIO Driver Enhancements Whitepaper
**[[Media:Lustre_ADIO_Driver_Whitepaper_0926.pdf|''Lustre ADIO Driver Enhancements'']]
  
 
*FMS Application IO Analysis
**[[Media:FMS_Investigation_Report_%280915%29.pdf|''FMS Application IO Performance Analysis'']]
  
 
==Presentations==
See the ORNL slides and video from their presentation at LUG 2009.
 
==Press Articles==
*[http://www.nccs.gov/2009/06/30/ornl-hosts-lustre-part-ii/ ''ORNL Hosts Lustre Part II'']
  
 
==Lustre Internals Manual==
LCE and ORNL have written a Lustre filesystem internals document that describes the internal operation of Lustre.
*[[Media:Understanding_Lustre_Filesystem_Internals.pdf|''Understanding Lustre Filesystem Internals'']]

== Archives of Older Material ==
This material may be out of date and is preserved here for archive purposes.
*[[Media:Peta-Scale_wp.pdf|''2008 Paper on IO with the Lustre File System at ORNL'']]
  
 
<br>
For more information about computing at Oak Ridge, visit the [http://www.nccs.gov/ NCCS site].

If you have questions or comments on this page or the LCE projects, contact [mailto:John.Dawson@sun.com John Dawson].
 
