Lustre Publications


Note: Publications prior to 2003 are not listed here.

White Papers
Title | Description/Source | Date
Best Practices for Architecting a Lustre-Based Storage Environment | DDN | March 2008
Wide Area Filesystem Performance using Lustre on the TeraGrid | TeraGrid 2007 conference | June 2007
Guidelines for Efficient Parallel I/O on the Cray XT3/XT4 (PDF) | Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group | 2007
A Center-Wide File System using Lustre | Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group | 2006
XT7? Integrating and Operating a Conjoined XT3+XT4 System | Presented by ORNL at CUG 2007 / Cray User Group (presentation also available) | 2007
Selecting a cluster file system | CFS | Nov 2005
Lustre: Building a cluster file system for 1,000 node clusters | A technical presentation about our successes and mistakes during 2002-2003 / CFS | Summer 2003


Case Studies
  • Experiences with 10 Months HP SFS/Lustre in HPC Production
  • Performance Monitoring in a HP SFS Environment
  • Experiences with HP SFS/Lustre at SSCK

Presentations

Title | Description/Source | Date
Lustre cluster in production at GSI | Presented by Walter Schön / HEPiX Talks | May 2008
Final Report from File Systems Working Group | Presented by Andrei Maslennikov / HEPiX Talks | May 2008
Setting up a simple Lustre Filesystem | Presented by Stephan Wiesand / HEPiX Talks | May 2008
Storage Evaluations at BNL | Presented by Robert Petkus, BNL / HEPiX Talks (slides on HEPiX site) | May 2008
Lustre Experience at CEA/DIF | Presented by J-Ch Lafoucriere / HEPiX Talks (slides on HEPiX site) | April 23-27, 2008
XT7? Integrating and Operating a Conjoined XT3+XT4 System | Describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including a novel application of Lustre routing capabilities / Cray User Group (paper also available) | 2007
Using IOR to Analyze the I/O Performance | Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group | 2007
Filesystems on SSCK's HP XC6000 (Karlsruhe0503.pdf) | Introductory event at the computing center / Karlsruhe Lustre Talks | 2005
Experiences & Performance of SFS/Lustre Cluster File System in Production (Karlsruhe0510.pdf) | HP-CAST 4 in Krakow / Karlsruhe Lustre Talks | May 10, 2005
Lustre state and production installations | Presentation at a gelato.org meeting / CFS | May 2004
Lustre File System | Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS | Summer 2003
Lustre: Building a cluster file system for 1,000 node clusters | A technical presentation about our successes and mistakes during 2002-2003 / CFS | Summer 2003

CFS


Karlsruhe Lustre Talks

Ohio State University

  • Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre

ORNL

  • Exploiting Lustre File Joining for Effective Collective IO

SUN

Synopsys

  • Optimizing Storage and I/O For Distributed Processing On Enterprise & High Performance Compute (HPC) Systems For Mask Data Preparation Software (CATS)
    • Glenn Newell, Sr. IT Solutions Mgr.
    • Naji Bekhazi, Director of R&D, Mask Data Prep (CATS)
    • Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS)
    • 2007
    • Paper in PDF format

TeraGrid

  • Wide Area Filesystem Performance Using Lustre on the TeraGrid

University of Colorado, Boulder

  • Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment
    • Paper in PDF format
    • proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005)
    • The management issues mentioned in the last part of this paper have been addressed.
    • Paper at CU site (It is the same as the attachment to the LCI paper above.)

University of Minnesota

  • Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems