WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Lustre Publications
Note: Publications prior to 2003 are not listed here.
{| border=1 cellspacing=0
|+White Papers
|-
!Title
!Description/Source
!Date
|-
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']||DDN||March 2008
|-
|[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']||TeraGrid 2007 conference||June 2007
|-
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group||2007
|-
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sharp Oral, proceedings of CUG 2006 / Cray User Group||2006
|-
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)]||2007
|-
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||Nov 2005
|-
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003
|}
{| border=1 cellspacing=0
|+Case Studies
|-
!Title
!Description/Source
!Date
|-
|XXX||x||
|-
|XXX||x||
|}
{| border=1 cellspacing=0
|+Presentations
|-
!Title
!Description/Source
!Date
|-
|[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Walter Schön / HEPiX Talks||May 2008
|-
|[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Andrei Maslennikov / HEPiX Talks||May 2008
|-
|[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem''']||Presented by Stephan Wiesand / HEPiX Talks||May 2008
|-
|[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL''']||Presented by Robert Petkus, BNL / HEPiX Talks; performance comparison between ZFS, XFS and EXT3 on a Sun Thumper. Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&sessionId=40&resId=1&materialId=slides&confId=257 Storage Evaluations at BNL]||April 23-27, 2007
|-
|[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks. Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&sessionId=39&resId=0&materialId=slides&confId=257 Lustre Experience at CEA/DIF]||April 23-27, 2007
|-
|[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]||2007
|-
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']||Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group||2007
|-
|[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']||Introductory event at the computing centre / Karlsruhe Lustre Talks||2005
|-
|[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production''']||HP-CAST 4 in Kraków / Karlsruhe Lustre Talks||May 10, 2005
|-
|[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf Karlsruhe0506.pdf]||ISC 2005 in Heidelberg / Karlsruhe Lustre Talks||June 24, 2005
|-
|[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks||Nov 11, 2005
|-
|[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks||Nov 12, 2005
|-
|[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks||April 4, 2006
|-
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at the gelato.org meeting / CFS||May 2004
|-
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003
|}
  

Revision as of 10:04, 2 April 2009


== Ohio State University ==

* '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre'''

== ORNL ==

* '''Exploiting Lustre File Joining for Effective Collective IO'''

== SUN ==

== Synopsys ==

* '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)'''
** Glenn Newell, Sr. IT Solutions Mgr
** Naji Bekhazi, Director of R&D, Mask Data Prep (CATS)
** Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS)
** 2007
** Paper in PDF format

== TeraGrid ==

* '''Wide Area Filesystem Performance Using Lustre on the TeraGrid'''

== University of Colorado, Boulder ==

* '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment'''
** Paper in PDF format
** Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005)
** The management issues mentioned in the last part of this paper have been addressed.
** Paper at CU site (same as the attachment to the LCI paper above)

== University of Minnesota ==

* '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems'''