WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is likely to be out of date. Please search the new wiki for more up-to-date information.

Lustre Publications

Latest revision as of 17:57, 18 December 2009

__TOC__

You'll find informative sources here about Lustre™ technology and its applications, including videos and podcasts that describe Lustre at a high level, as well as more detailed white papers, blueprints, and engineering presentations produced by a variety of Lustre experts and research organizations.


=== Videos & Podcasts ===

{| border=1 cellspacing=0
|+ Videos & Podcasts from Sun
! Title !! Description/Source !! Date
|-
||'''RCE 14: Lustre Cluster File System'''||Research Computing and Engineering interview with Andreas Dilger, one of the principal file system architects for the Lustre file system.||2009
|-
||'''Linux HPC Software Stack'''||Sun Systems Engineer Larry McIntosh provides an overview of the Sun HPC software reference stack for Lustre.||2008
|-
||'''Lustre Overview by Peter Bojanic'''||Learn about the Lustre parallel file system, the newest addition to the Sun HPC portfolio, which is designed to meet the demands of the world's largest high performance clusters.||December 7, 2007
|-
||'''Sun Storage Cluster'''||Find out how Sun simplifies the deployment of Lustre-based storage.||January 9, 2009
|-
||'''Radio HPC - Episode 11'''||Tony Warner chats with Voltaire's Brian Forbes about their companies' partnership in the InfiniBand space, and Peter Bojanic clues us in on what's new with Sun's Lustre file system.||February 3, 2009
|}

=== White Papers ===

{| border=1 cellspacing=0
! Title !! Description/Source !! Date
|-
||'''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks'''||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks, and the paper describes how these features can be used to improve cluster storage management.||November 2008
|-
||'''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System'''||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction to Lustre capabilities that support high availability and rolling upgrades, a discussion of file storage in a Lustre file system, additional features, and information about how a Lustre file system compares to other shared file systems.||October 2008
|-
||'''Pathways to Open Petascale Computing'''||Derived from Sun's innovative design approach and experience with very large supercomputing deployments, the Sun Constellation System provides the world's first open petascale computing environment — one built entirely with open and standard hardware and software technologies. Cluster architects can use the Sun Constellation System to design and rapidly deploy tightly-integrated, efficient, and cost-effective supercomputing grids and clusters that scale predictably from a few teraflops to over a petaflop. With a totally modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently depending on individual needs.||June 2008
|-
||[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL||February 2008
|-
||[[Media:Lustre_wan_tg07.pdf|'''Wide Area Filesystem Performance using Lustre on the TeraGrid''']]||TeraGrid 2007 conference / Indiana University||June 2007
|-
||[[Media:Yu_lustre.pdf|'''Exploiting Lustre File Joining for Effective Collective IO''']]||Proceedings of CCGrid'07 / ORNL||May 2007
|-
||[http://www.sun.com/blueprints/0507/820-2187.html '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara||May 2007
|-
||[[Media:Hpc_cats_wp.pdf|'''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']]||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007
|-
||[[Media:Larkin_paper.pdf|'''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''']]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group||2007
|-
||[[Media:Canon_paper.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]||Presented by ORNL at CUG 2007 [http://wiki.lustre.org/images/f/fa/Canon_slides.pdf (Presentation also available.)] / Cray User Group||2007
|-
||[[Media:A_Center-Wide_FS_using_Lustre.pdf|'''A Center-Wide File System using Lustre''']]||Shane Canon, Sarp Oral, proceedings of CUG 2006 / Cray User Group||2006
|-
||[[Media:Cac06_lustre.pdf|'''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']]||Parallel and Distributed Processing Symposium (IPDPS 2006). Lustre performance comparison when using InfiniBand and Quadrics interconnects. You can also [http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf download the paper at the OSU site]. / Ohio State University||2006
|-
||[[Media:MSST-2006-paper.pdf|'''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']]||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota||2006
|-
||[[Media:Our_Collateral_selecting-a-cfs.pdf|'''Selecting a cluster file system''']]||CFS||November 2005
|-
||[[Media:LciPaper.pdf|'''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']]||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have since been addressed. The same paper is also [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf available at the CU site]. / University of Colorado, Boulder||2005
|}

=== BluePrints ===

{| border=1 cellspacing=0
! Title !! Description/Source !! Date
|-
||'''Lustre File System - Demo Quick Start Guide'''||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.||2009
|-
||'''Implementing the Lustre File System with Sun Storage'''||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed/low-latency InfiniBand interconnects.||2009
|-
||'''Tokyo Tech Tsubame Grid Storage Implementation'''||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009
|-
||'''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture'''||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008
|}

=== Lustre User Presentations ===

{| border=1 cellspacing=0
! Title !! Description/Source !! Date
|-
||'''Lustre cluster in production at GSI'''||Presented by Walter Schön / HEPiX Talks||May 2008
|-
||'''Final Report from File Systems Working Group'''||Presented by Andrei Maslennikov / HEPiX Talks||May 2008
|-
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem''']||Presented by Stephan Wiesand. Slides on the HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&sessionId=40&resId=1&materialId=slides&confId=257 Storage Evaluations at BNL] / HEPiX Talks||May 2008
|-
||[[Media:Storage_Evaluations%40BNL.pdf|'''Storage Evaluations at BNL''']]||Presented by Robert Petkus - BNL. Slides on the HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&sessionId=39&resId=0&materialId=slides&confId=257 Lustre Experience at CEA/DIF] / HEPiX Talks||May 2008
|-
||[[Media:DIF.pdf|'''Lustre Experience at CEA/DIF''']]||Presented by J-Ch Lafoucriere / HEPiX Talks||April 2008
|-
||[[Media:Canon_slides.pdf|'''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']]||Describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including a novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_paper.pdf (Paper also available.)]||2007
|-
||[[Media:Using_IOR_to_Analyze_IO_Performance.pdf|'''Using IOR to Analyze the I/O Performance''']]||Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group||2007
|-
||[[Media:Karlsruhe0512.pdf|'''Performance Monitoring in a HP SFS Environment''']]||HP-CCN in Seattle / Karlsruhe Lustre Talks||November 2005
|-
||[[Media:Karlsruhe0503.pdf|'''Filesystems on SSCK's HP XC6000''']]||Introductory presentation at the computing center (Einführungsveranstaltung im Rechenzentrum) / Karlsruhe Lustre Talks||2005
|-
||[[Media:Karlsruhe0506.pdf|'''ISC 2005 in Heidelberg''']]||Karlsruhe Lustre Talks||June 2005
|-
||[[Media:Karlsruhe0510.pdf|'''Experiences & Performance of SFS/Lustre Cluster File System in Production''']]||HP-CAST 4 in Krakau / Karlsruhe Lustre Talks||May 2005
|-
||[[Media:Ols2003.pdf|'''Lustre: Building a cluster file system for 1,000 node clusters''']]||A technical presentation about successes and mistakes during 2002-2003 / Cluster File Systems||Summer 2003
|}