WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Lustre Publications

From Obsolete Lustre Wiki
Note: Publications prior to 2003 are not listed here.

__TOC__

You'll find informative sources here about Lustre™ technology and its applications, including videos and podcasts that describe Lustre at a high level, as well as more detailed white papers, BluePrints, and engineering presentations produced by a variety of Lustre experts and research organizations.

===Videos & Podcasts===
 
{| border=1 cellspacing=0
|+Videos & Podcasts from Sun
|-
!Title
!Description/Source
!Date
|-
|[http://www.rce-cast.com/index.php/Podcast/rce-14-lustre-cluster-filesystem.html '''RCE 14: Lustre Cluster File System''']||Research Computing and Engineering interview with Andreas Dilger, one of the principal file system architects of the Lustre file system.||2009
|-
|[http://link.brightcove.com/services/player/bcpid1640183659?bctid=8899392001 '''Linux HPC Software Stack''']||Sun Systems Engineer Larry McIntosh provides an overview of the Sun HPC software reference stack for Lustre.||2008
|-
|[http://channelsun.sun.com/video/lustre/1653611906 '''Lustre Overview by Peter Bojanic''']||Learn about the Lustre parallel file system, the newest addition to the Sun HPC portfolio, which is designed to meet the demands of the world's largest high-performance clusters.||December 7, 2007
|-
|[http://channelsun.sun.com/video/storage+cluster/8901697001 '''Sun Storage Cluster''']||Find out how Sun simplifies the deployment of Lustre-based storage.||January 9, 2009
|-
|[http://channelsun.sun.com/video/radio+hpc+-+episode+11/10049527001 '''Radio HPC - Episode 11''']||Tony Warner chats with Voltaire's Brian Forbes about their companies' partnership in the InfiniBand space, and Peter Bojanic clues us in on what's new with Sun's Lustre file system.||February 3, 2009
|}

===White Papers===

{| border=1 cellspacing=0
|+White Papers
|-
!Title
!Description/Source
!Date
|-
|[http://www.sun.com/software/products/lustre/docs/Lustre-networking.pdf '''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks''']||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks. The paper also describes how these features can be used to improve cluster storage management.||November 2008
|-
|[https://www.sun.com/offers/details/LustreFileSystem.xml '''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System''']||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction to Lustre capabilities that support high availability and rolling upgrades, a discussion of file storage in a Lustre file system, additional features, and information about how a Lustre file system compares to other shared file systems.||October 2008
|-
|[http://www.sun.com/offers/docs/open_petascale_computing.pdf '''Pathways to Open Petascale Computing''']||Derived from Sun's innovative design approach and experience with very large supercomputing deployments, the Sun Constellation System provides the world's first open petascale computing environment, built entirely with open and standard hardware and software technologies. Cluster architects can use the Sun Constellation System to design and rapidly deploy tightly integrated, efficient, and cost-effective supercomputing grids and clusters that scale predictably from a few teraflops to over a petaflop. With a fully modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently, depending on individual needs.||June 2008
|-
|[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL||February 2008
|-
|[[Media:Lustre_wan_tg07.pdf|'''Wide Area Filesystem Performance using Lustre on the TeraGrid''']]||TeraGrid 2007 conference / Indiana University||June 2007
|-
|[[Media:Yu_lustre.pdf|'''Exploiting Lustre File Joining for Effective Collective IO''']]||Proceedings of CCGrid'07 / ORNL||May 2007
|-
|[http://www.sun.com/blueprints/0507/820-2187.html '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara||May 2007
|-
|[[Media:Hpc_cats_wp.pdf|'''Optimizing Storage and I/O for Distributed Processing on Enterprise and High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']]||Glenn Newell, Sr. IT Solutions Manager; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007
|-
|[[Media:Larkin_paper.pdf|'''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''']]||Jeff Larkin and Mark Fahey, proceedings of CUG 2007 / Cray User Group||2007
|-
|[[Media:Canon_paper.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]||Presented by ORNL at CUG 2007 [http://wiki.lustre.org/images/f/fa/Canon_slides.pdf (presentation slides also available)] / Cray User Group||2007
|-
|[[Media:A_Center-Wide_FS_using_Lustre.pdf|'''A Center-Wide File System using Lustre''']]||Shane Canon and Sarp Oral, proceedings of CUG 2006 / Cray User Group||2006
|-
|[[Media:Cac06_lustre.pdf|'''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']]||Parallel and Distributed Processing Symposium (IPDPS 2006). A Lustre performance comparison using InfiniBand and Quadrics interconnects. The paper can also be [http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf downloaded at the OSU site]. / Ohio State University||2006
|-
|[[Media:MSST-2006-paper.pdf|'''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']]||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota||2006
|-
|[[Media:Our_Collateral_selecting-a-cfs.pdf|'''Selecting a Cluster File System''']]||CFS||November 2005
|-
|[[Media:LciPaper.pdf|'''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']]||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have since been addressed. The same paper is also [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf available at the CU site]. / University of Colorado, Boulder||2005
|}

===BluePrints===

{| border=1 cellspacing=0
|+BluePrints
|-
!Title
!Description/Source
!Date
|-
|[http://wikis.sun.com/display/BluePrints/Lustre+File+System+-+Demo+Quick+Start+Guide '''Lustre File System - Demo Quick Start Guide''']||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.||2009
|-
|[http://wikis.sun.com/display/BluePrints/Implementing+the+Lustre+File+System+with+Sun+Storage '''Implementing the Lustre File System with Sun Storage''']||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed, low-latency InfiniBand interconnects.||2009
|-
|[http://wikis.sun.com/display/BluePrints/Tokyo+Tech+Tsubame+Grid+Storage+Implementation '''Tokyo Tech Tsubame Grid Storage Implementation''']||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009
|-
|[http://wikis.sun.com/download/attachments/31395541/820-5304.pdf?version=1 '''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture''']||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008
|}

 
===Lustre User Presentations===

{| border=1 cellspacing=0
|+Presentations
|-
!Title
!Description/Source
!Date
|-
|[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Walter Schön / HEPiX Talks||May 2008
|-
|[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Andrei Maslennikov / HEPiX Talks||May 2008
|-
|[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem''']||Presented by Stephan Wiesand. Slides on the HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&sessionId=40&resId=1&materialId=slides&confId=257 Storage Evaluations at BNL] / HEPiX Talks||May 2008
|-
|[[Media:Storage_Evaluations%40BNL.pdf|'''Storage Evaluations at BNL''']]||Presented by Robert Petkus, BNL. Slides on the HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&sessionId=39&resId=0&materialId=slides&confId=257 Lustre Experience at CEA/DIF] / HEPiX Talks||May 2008
|-
|[[Media:DIF.pdf|'''Lustre Experience at CEA/DIF''']]||Presented by J-Ch Lafoucriere / HEPiX Talks||April 2008
|-
|[[Media:Canon_slides.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]||Describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including a novel application of Lustre routing capabilities. [http://wiki.lustre.org/images/b/b9/Canon_paper.pdf (Paper also available)] / Cray User Group||2007
|-
|[[Media:Using_IOR_to_Analyze_IO_Performance.pdf|'''Using IOR to Analyze I/O Performance''']]||Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group||2007
|-
|[[Media:Karlsruhe0512.pdf|'''Performance Monitoring in an HP SFS Environment''']]||HP-CCN in Seattle / Karlsruhe Lustre Talks||November 2005
|-
|[[Media:Karlsruhe0503.pdf|'''Filesystems on SSCK's HP XC6000''']]||Introductory presentation at the computing center (Einführungsveranstaltung im Rechenzentrum) / Karlsruhe Lustre Talks||2005
|-
|[[Media:Karlsruhe0506.pdf|'''ISC 2005 in Heidelberg''']]||Karlsruhe Lustre Talks||June 2005
|-
|[[Media:Karlsruhe0510.pdf|'''Experiences & Performance of SFS/Lustre Cluster File System in Production''']]||HP-CAST 4 in Krakau / Karlsruhe Lustre Talks||May 2005
|-
|[[Media:Ols2003.pdf|'''Lustre: Building a cluster file system for 1,000 node clusters''']]||A technical presentation about successes and mistakes during 2002-2003 / Cluster File Systems||Summer 2003
|}
 
 

Latest revision as of 17:57, 18 December 2009
