WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Lustre Publications

__TOC__
 
You'll find informative sources here about Lustre™ technology and its applications, including videos and podcasts describing Lustre at a high level, as well as more detailed white papers, BluePrints, and engineering presentations produced by a variety of Lustre experts and research organizations.




===Videos & Podcasts===

{| border=1 cellspacing=0
|+Videos & Podcasts from Sun
!Title
!Description/Source
!Date
|-
|[http://www.rce-cast.com/index.php/Podcast/rce-14-lustre-cluster-filesystem.html '''RCE 14: Lustre Cluster File System''']||Research Computing and Engineering interview with Andreas Dilger, one of the principal file system architects for the Lustre file system.||2009
|-
|[http://link.brightcove.com/services/player/bcpid1640183659?bctid=8899392001 '''Linux HPC Software Stack''']||Sun Systems Engineer Larry McIntosh provides an overview of Sun HPC software reference stack for Lustre.||2008
|-
|'''Lustre Overview by Peter Bojanic'''||Learn about the Lustre parallel file system, the newest addition to the Sun HPC portfolio, which is designed to meet the demands of the world's largest high performance clusters.||December 7, 2007
|-
|'''Sun Storage Cluster'''||Find out how Sun simplifies the deployment of Lustre-based storage.||January 9, 2009
|-
|'''Radio HPC - Episode 11'''||Tony Warner chats with Voltaire's Brian Forbes about their companies' partnership in the InfiniBand space, and Peter Bojanic clues us in on what's new with Sun's Lustre file system.||February 3, 2009
|-
|}

===White Papers===

{| border=1 cellspacing=0
|+White Papers
!Title
!Description/Source
!Date
|-
||[http://www.sun.com/software/products/lustre/docs/Lustre-networking.pdf '''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks''']||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks, and the paper describes how these features can be used to improve cluster storage management.||November 2008
|-
||[https://www.sun.com/offers/details/LustreFileSystem.xml '''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System''']||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction of Lustre capabilities that support high availability and rolling upgrades, discussion of file storage in a Lustre file system, additional features, and information about how a Lustre file system compares to other shared file systems.||October 2008
|-
||[http://www.sun.com/offers/docs/open_petascale_computing.pdf '''Pathways to Open Petascale Computing''']||Derived from Sun’s innovative design approach and experience with very large supercomputing deployments, the Sun Constellation System provides the world's first open petascale computing environment — one built entirely with open and standard hardware and software technologies. Cluster architects can use the Sun Constellation System to design and rapidly deploy tightly-integrated, efficient, and cost-effective supercomputing grids and clusters that scale predictably from a few teraflops to over a petaflop. With a totally modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently depending on individual needs.||June 2008
|-
||[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL ||February 2008
|-
||[[Media:Lustre_wan_tg07.pdf|'''Wide Area Filesystem Performance using Lustre on the TeraGrid''']]|| TeraGrid 2007 conference / Indiana University|| June 2007
|-
||[[Media:Yu_lustre.pdf|'''Exploiting Lustre File Joining for Effective Collective IO''']]||Proceedings of the CCGrid'07 / ORNL || May 2007
|-
||[http://www.sun.com/blueprints/0507/820-2187.html  '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara|| May 2007
|-
||[[Media:Hpc_cats_wp.pdf|'''Optimizing Storage and I/O For Distributed Processing On Enterprise & High Performance Compute (HPC) Systems For Mask Data Preparation Software (CATS)''']]||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007
|-
|[[Media:Larkin_paper.pdf|'''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''']]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007
|-
|[[Media:Canon_paper.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]||Presented by ORNL at CUG 2007 [http://wiki.lustre.org/images/f/fa/Canon_slides.pdf (Presentation also available.)] / Cray User Group ||2007
|-
|[[Media:A_Center-Wide_FS_using_Lustre.pdf|'''A Center-Wide File System using Lustre''']]||Shane Canon, Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006
|-
||[[Media:Cac06_lustre.pdf|'''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']]||Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. Lustre performance comparison when using InfiniBand and Quadrics interconnects. You can also [http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf download the paper at the OSU site]. / Ohio State University || 2006
|-
||[[Media:MSST-2006-paper.pdf|'''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']]||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota ||2006
|-
|[[Media:Our_Collateral_selecting-a-cfs.pdf|'''Selecting a cluster file system''']]||CFS||Nov 2005
|-
||[[Media:LciPaper.pdf|'''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']]||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have been addressed. The same paper is also [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf available at the CU site.] / University of Colorado, Boulder ||2005
|-
|}

===BluePrints===

{| border=1 cellspacing=0
|+BluePrints
!Title
!Description/Source
!Date
|-
|'''Lustre File System - Demo Quick Start Guide'''||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.||2009
|-
|'''Implementing the Lustre File System with Sun Storage'''||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed/low-latency InfiniBand interconnects.||2009
|-
|'''Tokyo Tech Tsubame Grid Storage Implementation'''||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009
|-
|'''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture'''||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008
|-
|}
 
===Lustre User Presentations===

{| border=1 cellspacing=0
|+Presentations
!Title
!Description/Source
!Date
|-
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Walter Schön / HEPiX Talks || May 2008
|-
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Andrei Maslennikov / HEPiX Talks || May 2008
|-
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Stephan Wiesand. Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&sessionId=40&resId=1&materialId=slides&confId=257 Storage Evaluations at BNL] / HEPiX Talks || May 2008
|-
||[[Media:Storage_Evaluations%40BNL.pdf|'''Storage Evaluations at BNL''']]||Presented by Robert Petkus - BNL. Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&sessionId=39&resId=0&materialId=slides&confId=257 Lustre Experience at CEA/DIF] / HEPiX Talks || May 2008
|-
||[[Media:DIF.pdf|'''Lustre Experience at CEA/DIF''']]||Presented by J-Ch Lafoucriere / HEPiX Talks || April 2008
|-
||[[Media:Canon_slides.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including a novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_paper.pdf (Paper also available)]|| 2007
|-
|[[Media:Using_IOR_to_Analyze_IO_Performance.pdf|'''Using IOR to Analyze the I/O Performance''']]|| Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007
|-
||[[Media:Karlsruhe0512.pdf|'''Performance Monitoring in a HP SFS Environment''']]||HP-CCN in Seattle  / Karlsruhe Lustre Talks || November 2005
|-
||[[Media:Karlsruhe0503.pdf|'''Filesystems on SSCK's HP XC6000''']]|| Einführungsveranstaltung im Rechenzentrum / Karlsruhe Lustre Talks || 2005
|-
||[[Media:Karlsruhe0506.pdf|'''ISC 2005 in Heidelberg''']]|| Karlsruhe Lustre Talks || June 2005
|-
||[[Media:Karlsruhe0510.pdf|'''Experiences & Performance of SFS/Lustre Cluster File System in Production''']] || HP-CAST 4 in Krakau  / Karlsruhe Lustre Talks || May 2005
|-
|[[Media:Ols2003.pdf|'''Lustre: Building a cluster file system for 1,000 node clusters''']]||A technical presentation about successes and mistakes during 2002-2003 / Cluster File Systems||Summer 2003
|-
|}
===Archive (Prior to 2003)===
<big><strong>From CFS</strong></big>
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']
** A technical presentation on Lustre.
** June 2002
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']
** A technical overview of Lustre from 2002.
** June 2002
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']
** September 2001
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']
** Lustre component overview.
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers
** June 2001
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']
** Sandia presentation on Lustre and Linux clustering
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']
** Powerpoint slides of an overview of cluster and OBD technology
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].
** July 2001
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective''']
** A comparative description of several distributed file systems.
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']
** A standards effort exists in the T10 OSD working group proposal.
** October 2000

Latest revision as of 16:57, 18 December 2009