http://wiki.old.lustre.org/api.php?action=feedcontributions&user=Kjpriola&feedformat=atomObsolete Lustre Wiki - User contributions [en]2024-03-29T07:46:59ZUser contributionsMediaWiki 1.35.5http://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5652Lustre Publications2009-04-02T21:47:56Z<p>Kjpriola: </p>
<hr />
<div>===Videos & Podcasts===<br />
<br />
{| border=1 cellspacing=0<br />
|+Videos & Podcasts from Sun<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://link.brightcove.com/services/player/bcpid1640183659?bctid=8899392001 '''Linux HPC Software Stack''']||Sun Systems Engineer Larry McIntosh provides an overview of the Sun HPC software reference stack for Lustre.||2008<br />
|-<br />
|[http://channelsun.sun.com/video/lustre/1653611906 '''Lustre Overview by Peter Bojanic''']||Find out more about the Lustre parallel file system, the newest addition to the Sun HPC portfolio, designed to meet the demands of the world's largest high-performance clusters.||December 7, 2007<br />
|-<br />
|[http://channelsun.sun.com/video/storage+cluster/8901697001 '''Sun Storage Cluster''']||Find out how Sun simplifies the deployment of Lustre-based storage.||January 9, 2009<br />
|-<br />
|[http://channelsun.sun.com/video/radio+hpc+-+episode+11/10049527001 '''Radio HPC - Episode 11''']||Tony Warner chats with Voltaire's Brian Forbes about their companies' partnership in the InfiniBand space, and Peter Bojanic clues us in on what's new with Sun's Lustre file system.||February 3, 2009<br />
|-<br />
|}<br />
===White Papers===<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.sun.com/software/products/lustre/docs/Lustre-networking.pdf '''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks''']||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks; the paper also describes how these features can be used to improve cluster storage management.||November 2008<br />
|-<br />
|[http://www.sun.com/software/products/lustre/docs/lustrefilesystem_wp.pdf '''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System''']||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction to Lustre capabilities that support high availability and rolling upgrades, a discussion of file storage in a Lustre file system, additional features, and information about how a Lustre file system compares to other shared file systems.||October 2008<br />
|-<br />
|[http://www.sun.com/offers/docs/open_petascale_computing.pdf '''Pathways to Open Petascale Computing''']||Derived from Sun’s innovative design approach and experience with very large supercomputing deployments, the Sun Constellation System provides the world's first open petascale computing environment, built entirely with open and standard hardware and software technologies. Cluster architects can use the Sun Constellation System to design and rapidly deploy tightly integrated, efficient, and cost-effective supercomputing grids and clusters that scale predictably from a few teraflops to over a petaflop. With a totally modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently depending on individual needs.||June 2008<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL ||February 2008<br />
|-<br />
||[http://wiki.lustre.org/images/4/49/WP_BestPractices_Lustre_DDN_032108.pdf '''Best Practices for Architecting a Lustre-based Storage Environment''']|| A series of best practices to consider when deploying a highly-reliable, high-performance Lustre environment. Covered topics include storage infrastructure failover, maximizing computational capability by minimizing I/O overhead, ensuring predictable striped file performance, and protecting large, persistent data stores. / DataDirect Networks||2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
|[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007<br />
|-<br />
|[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance Using Lustre on the TeraGrid''']||TeraGrid 2007 Conference, Madison, WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)]||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group||2006<br />
|-<br />
|[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||IPDPS 2006 (Parallel and Distributed Processing Symposium). A Lustre performance comparison using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site]||2006<br />
|-<br />
|[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota||2006<br />
|-<br />
|[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks||April 4, 2006<br />
|-<br />
|[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']||Introductory event at the computing center / Karlsruhe Lustre Talks||2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production''']||HP-CAST 4 in Kraków / Karlsruhe Lustre Talks||May 10, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']||Karlsruhe Lustre Talks||June 24, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks||November 11, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks||November 12, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||November 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
===BluePrints===<br />
<br />
{| border=1 cellspacing=0<br />
|+BluePrints<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|[http://wikis.sun.com/display/BluePrints/Lustre+File+System+-+Demo+Quick+Start+Guide '''Lustre File System - Demo Quick Start Guide''']||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.|| 2009<br />
|-<br />
|[http://wikis.sun.com/display/BluePrints/Implementing+the+Lustre+File+System+with+Sun+Storage '''Implementing the Lustre File System with Sun Storage''']||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed, low-latency InfiniBand interconnects.||2009<br />
|-class="even"<br />
|[http://wikis.sun.com/display/BluePrints/Tokyo+Tech+Tsubame+Grid+Storage+Implementation '''Tokyo Tech Tsubame Grid Storage Implementation''']||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009<br />
|-<br />
|[http://wikis.sun.com/download/attachments/31395541/820-5304.pdf?version=1 '''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture''']||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008<br />
|-<br />
|}<br />
<br />
===Case Studies===<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
===Presentations===<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Walter Schön / HEPiX Talks||May 2008<br />
|-<br />
|[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Andrei Maslennikov / HEPiX Talks||May 2008<br />
|-<br />
|[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem''']||Presented by Stephan Wiesand / HEPiX Talks<br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL]||May 2008<br />
|-<br />
|[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL''']||Presented by Robert Petkus, BNL / HEPiX Talks<br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF]||May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
|[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]||2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']||Presented by Hongzhang Shan and John Shalf (NERSC) at CUG 2007 / Cray User Group||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
====Presentations from Lustre Engineers====<br />
<br />
{| border=1 cellspacing=0<br />
|+Lustre Launch in Beijing<br />
|-<br />
!Title<br />
!Description<br />
!Date<br />
|-class="even"<br />
|Development||||<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/RMG-process-0308.pdf '''RMG Processes''']||Andreas Dilger|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/8/8d/Eeb-launch-0308.pdf '''Lustre Development Strategy''']||Eric Barton|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/6/6c/Cmd-0308.pdf '''CMD'''] || Yury Umanets|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/f4/Mdt.pdf '''HEAD MDS'''] ||Nikita Danilov|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/d9/Uss-0308.pdf '''User Space Servers''']||Alex Zhuravlev|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/29/DMU-0308.pdf '''DMU''']||Ricardo Correia|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/22/Recovery_overview-0308.pdf '''Recovery''']||Mike Pershin|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/c/c6/LDLM-0308.pdf '''DLM''']||Oleg Drokin|| March 17, 2008<br />
|-<br />
|[http://wiki.lustre.org/images/0/02/Lnet-0308.pdf '''LNET''']||Isaac Huang|| March 17, 2008<br />
|-class="even"<br />
|Quality Engineering||||<br />
|-<br />
|[http://wiki.lustre.org/images/9/92/Jd.day1-0308.pdf '''Day 1'''] [http://wiki.lustre.org/images/2/25/Jd.day2-0308.pdf '''Day 2''']||JD|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/7/72/Lustre-release%26weekly-testing-030.pdf '''Lustre Release & weekly testing''']||Jian Yu|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/6/62/Buildsys-0308.pdf '''Build System''']||Yibin Wang|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/a/af/Head-testing-0308.pdf '''HEAD testing''']||Zheng Chen|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/c/cf/Latest.b1_6-0308.pdf '''b1.6 testing''']||Peng Ye|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Perf.testing-0308.pdf '''Performance testing''']||Jack Chen|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/6/60/Automation-0308.pdf '''Automation''']||Minh Diep|| March 17, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/9/9f/Cham_accep-sm.pdf '''Acc-small''']||Elena Gryaznova|| March 17, 2008<br />
|-<br />
|}<br />
<br />
<br />
===Archive (Prior to 2003)===<br />
<br />
<big><strong>From CFS</strong></big><br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** Powerpoint slides of an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort exists in the T10 OSD working group proposal.<br />
** October 2000</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=GetInvolved:Get_Involved&diff=5651GetInvolved:Get Involved2009-04-02T19:15:01Z<p>Kjpriola: </p>
<hr />
<div>Find out about what the Lustre Community is doing, and get involved.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Community</strong></big> <br />
<br />
Lustre user community events and resources.<br />
<br />
* Sign up for one or more [[Mailing_Lists|Lustre Mailing Lists]]<br />
* [[Lustre_User_Group|Lustre User Group 2009]] The Lustre community's premier event to learn and share knowledge about Lustre technology.<br />
** [[Lug_08|Lustre User Group 2008]] Agenda, presentations and panel discussions from the 2008 program (slide decks and videos available).<br />
** [[Lug_07|Lustre User Group 2007]] Agenda and presentations from the 2007 program (slide decks available). <br />
** [[Lug_06|Lustre User Group 2006]] Agenda and presentations from the 2006 program (slide decks available).<br />
* Developers, find out how to [[Contribution_Policy|contribute code or test]].<br />
* Read case studies and other [[Lustre_Publications|Lustre publications]], including Lustre engineering presentations.<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Community Development Projects</strong></big> <br />
Interesting projects from the Lustre user community that are available for public use.<br />
<br />
* [http://wiki.lustre.org/images/d/d9/Lustre-amanda.pdf Backup and Recovery: Amanda and Lustre] Use an Amanda client to back up a Lustre filesystem from Lustre clients. (PDF)<br />
* [http://sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool] Tracks activity of server nodes (MDS, OSS and portals routers) for one or more Lustre filesystems.<br />
* [http://www.bullopensource.org/lustre/ Bull - Open Source Tools for Lustre] Manage one or more Lustre filesystems from an administrative node.<br />
* [http://shine-wiki.async.eu/wiki/Home CEA Administration Tool for Lustre 1.6] Set up and manage a Lustre filesystem on a cluster.<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Lustre Centres of Excellence</strong></big> <br />
<br />
Find out about active Lustre Centres of Excellence (LCEs).<br />
<br />
* [http://ornl-lce.clusterfs.com/index.php?title=Main_Page Oak Ridge National Laboratory - Lustre Centre of Excellence]<br />
* [http://cea-lce.clusterfs.com/index.php?title=Main_Page CEA LCE]<br />
* [http://llnl-lce.clusterfs.com/index.php?title=Main_Page LLNL LCE]<br />
* [http://psc-lce.clusterfs.com/index.php?title=Main_Page Pittsburgh Supercomputing Center LCE]<br />
* [http://tsinghua-lce.clusterfs.com/index.php?title=Main_Page Tsinghua University LCE]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Learn:Learn&diff=5650Learn:Learn2009-04-02T19:13:09Z<p>Kjpriola: </p>
<hr />
<div>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Ideally suited for data-intensive applications that require the highest possible I/O performance, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Interoperability, Features and Roadmap</strong></big><br />
<br />
These resources detail Lustre's interoperability, features, and plans for future releases.<br />
<br />
* [[Lustre_Support_Matrix|Lustre Support Matrix]] lists supported networks and kernels for current Lustre releases. <br />
<br />
* [[Lustre_1.8|Lustre 1.8]] describes features and benefits offered by upgrading to this version. For the latest information on the Lustre 1.8 schedule, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_2.0|Lustre 2.0]] describes features being developed for this next-generation release of Lustre. For the latest information on the Lustre 2.0 schedule, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_Roadmap|Lustre Roadmap]] provides estimated code freeze and GA dates for upcoming releases, supported kernels, new features, and retirement dates for Lustre products.<br />
<br />
:Several Roadmap features have their own wiki pages.<br />
<br />
** [[Metadata_Clustering|Metadata Clustering]]<br />
** [[Windows_Native_Client|Windows Native Client]]<br />
*** [[Windows_Native_Client_Build_Guide|Windows Native Client Build Guide]]<br />
*** [[Windows_Native_Client_Questions|Windows Native Client Questions]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Publications</strong></big><br />
<br />
A number of papers, presentations and publications are available for Lustre. Use these resources to learn about the benefits offered by Lustre and plans for future development.<br />
<br />
* [[Lustre_Publications|Lustre white papers, case studies, engineering presentations]]<br />
** [[Lustre_Documentation|Lustre Documentation]]<br />
** [[Lustre_Launch|Lustre All-Hands Meeting 3/08]]<br />
** [[Lustre All-Hands Meeting 12/08]]<br />
* [http://manual.lustre.org/index.php?title=Main_Page#Lustre_Operations_Manual Lustre Operations Manual]<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Training and Internals</strong></big><br />
<br />
Lustre training is available from Sun Microsystems.<br />
<br />
* Lustre training, [http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-100)], is available from Sun Microsystems. <br />
<br />
* [[Lustre_Internals|Lustre Internals]] (formerly part of Lustre advanced training) is also available, covering complex, code-level transactions.<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Learn:Learn&diff=5649Learn:Learn2009-04-02T19:12:29Z<p>Kjpriola: </p>
<hr />
<div>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Ideally suited for data-intensive applications that require the highest possible I/O performance, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Interoperability, Features and Roadmap</strong></big><br />
<br />
These resources detail Lustre's interoperability, features, and plans for future releases.<br />
<br />
* [[Lustre_Support_Matrix|Lustre Support Matrix]] lists supported networks and kernels for current Lustre releases. <br />
<br />
* [[Lustre_1.8|Lustre 1.8]] describes features and benefits offered by upgrading to this version. For the latest information on the Lustre 1.8 schedule, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_2.0|Lustre 2.0]] describes features being developed for this next-generation release of Lustre. For the latest information on the Lustre 2.0 schedule, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_Roadmap|Lustre Roadmap]] provides estimated code freeze and GA dates for upcoming releases, supported kernels, new features, and retirement dates for Lustre products.<br />
<br />
:Several Roadmap features have their own wiki pages.<br />
<br />
** [[Metadata_Clustering|Metadata Clustering]]<br />
** [[Windows_Native_Client|Windows Native Client]]<br />
*** [[Windows_Native_Client_Build_Guide|Windows Native Client Build Guide]]<br />
*** [[Windows_Native_Client_Questions|Windows Native Client Questions]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Publications</strong></big><br />
<br />
A number of papers, presentations and publications are available for Lustre. Use these resources to learn about the benefits offered by Lustre and plans for future development.<br />
<br />
* [[Lustre_Publications|Lustre white papers, case studies, engineering presentations]]<br />
** [[Lustre_Documentation|Lustre Documentation]]<br />
** [[Lustre_Launch|Lustre All-Hands Meeting 3/08]]<br />
** [[ Lustre All-Hands Meeting 12/08]]<br />
* [http://manual.lustre.org/index.php?title=Main_Page#Lustre_Operations_Manual Lustre Operations Manual]<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Training and Internals</strong></big><br />
Lustre training is available from Sun Microsystems.<br />
<br />
* Lustre training, [http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-100)], is available from Sun Microsystems. <br />
<br />
* [[Lustre_Internals|Lustre Internals]] (formerly part of Lustre advanced training) are also available, covering complex, code-level transactions.<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5648Lustre Publications2009-04-02T19:08:03Z<p>Kjpriola: </p>
<hr />
<div>===Videos & Podcasts===<br />
<br />
{| border=1 cellspacing=0<br />
|+Videos & Podcasts from Sun<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://link.brightcove.com/services/player/bcpid1640183659?bctid=8899392001 '''Linux HPC Software Stack''']||Sun Systems Engineer Larry Mcintosh provides an overview of Sun HPC software reference stack for Lustre.||2008<br />
|-<br />
|[http://channelsun.sun.com/video/lustre/1653611906 '''Lustre Overview by Peter Bojanic''']||Find out more about Lustre parallel file system, the newest addition to the Sun HPC portfolio, which is designed to meet the demands of the worlds largest high performance clusters.||December 7, 2007<br />
|-<br />
|[http://channelsun.sun.com/video/storage+cluster/8901697001 '''Sun Storage Cluster''']||Find out how Sun simplifies the depolyment of Lustre based storage.|| January 9, 2009<br />
|-<br />
||[http://channelsun.sun.com/video/radio+hpc+-+episode+11/10049527001 '''Radio HPC - Episode 11''']||Tony Warner chats with Voltaire's Brian Forbes about their companies' partnership in the Infiniband space, and Peter Bojanic clues us in on what's new with Sun's Lustre file system. ||February 3, 2009<br />
|-<br />
|}<br />
===White Papers===<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/Lustre-networking.pdf '''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks''']||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks, and the paper describes how these features can be used to improve cluster storage management.|| November 2008<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/lustrefilesystem_wp.pdf '''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System''']||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction of Lustre capabilities that support high availability and rolling upgrades, discussion of file storage in a Lustre file system, additional features, and information about a how a Lustre file system compares to other shared file systems.||October 2008<br />
|-<br />
||[http://www.sun.com/offers/docs/open_petascale_computing.pdf '''Pathways to Open Petascale Computing''']||Derived from Sun’s innovative design approach and experience with very large , supercomputing deployments, the Sun Constellation System provides the world's first , open petascale computing environment — one built entirely with open and standard , hardware and software technologies. Cluster architects can use the Sun Constellation , System to design and rapidly deploy tightly-integrated, efficient, and cost-effective , supercomputing grids and clusters that scale predictably from a few teraflops to over a , petaflop. With a totally modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently depending on individual needs. ||June 2008<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL ||February 2008<br />
|-<br />
||[http://wiki.lustre.org/images/4/49/WP_BestPractices_Lustre_DDN_032108.pdf '''Best Practices for Architecting a Lustre-based Storage Environment''']|| A series of best practices to consider when deploying a highly-reliable, high-performance Lustre environment. Covered topics include storage infrastructure failover, maximizing computational capability by minimizing I/O overhead, ensuring predictable striped file performance, and protecting large, persistent data stores. / DataDirect Networks||2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance Using Lustre on the TeraGrid''']||TeraGrid 2007 Conference, Madison, WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium (IPDPS) 2006. Lustre performance comparison when using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota ||2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || April 4, 2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory event at the computing center / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||<br />
HP-CAST 4 in Krakau / Karlsruhe Lustre Talks ||May 10, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||June 24, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || November 11, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || November 12, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||Nov 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have since been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (the same document as the attachment above)<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
===BluePrints===<br />
<br />
{| border=1 cellspacing=0<br />
|+BluePrints<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|[http://wikis.sun.com/display/BluePrints/Lustre+File+System+-+Demo+Quick+Start+Guide '''Lustre File System - Demo Quick Start Guide''']||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.|| 2009<br />
|-<br />
|[http://wikis.sun.com/display/BluePrints/Implementing+the+Lustre+File+System+with+Sun+Storage '''Implementing the Lustre File System with Sun Storage''']||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed, low-latency InfiniBand interconnects.||2009<br />
|-class="even"<br />
||[http://wikis.sun.com/display/BluePrints/Tokyo+Tech+Tsubame+Grid+Storage+Implementation '''Tokyo Tech Tsubame Grid Storage Implementation''']||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009<br />
|-<br />
||[http://wikis.sun.com/download/attachments/31395541/820-5304.pdf?version=1 '''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture''']||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008<br />
|-<br />
|}<br />
<br />
===Case Studies===<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
===Presentations===<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by SCHÖN, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by MASLENNIKOV, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by WIESAND, Stephan / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
===Archive (Prior to 2003)===<br />
<br />
<big><strong>From CFS</strong></big><br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** Powerpoint slides of an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort exists in the T10 OSD working group proposal.<br />
** October 2000</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5646Lustre Publications2009-04-02T18:20:25Z<p>Kjpriola: </p>
<hr />
<div>===White Papers===<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/Lustre-networking.pdf '''Lustre File System Networking: High-Performance Features and Flexible Support for a Wide Array of Networks''']||Information about Lustre networking that can be used to plan cluster file system deployments for optimal performance and scalability. Covered topics include Lustre message passing, Lustre Network Drivers, and routing in Lustre networks, and the paper describes how these features can be used to improve cluster storage management.|| November 2008<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/lustrefilesystem_wp.pdf '''Lustre File System: High-Performance Storage Architecture and Scalable Cluster File System''']||Basic information about the Lustre file system. Covered topics include general characteristics and markets in which Lustre has a strong presence, a typical Lustre file system configuration, an overview of Lustre networking (LNET), an introduction to Lustre capabilities that support high availability and rolling upgrades, discussion of file storage in a Lustre file system, additional features, and information about how a Lustre file system compares to other shared file systems.||October 2008<br />
|-<br />
||[http://www.sun.com/offers/docs/open_petascale_computing.pdf '''Pathways to Open Petascale Computing''']||Derived from Sun’s innovative design approach and experience with very large supercomputing deployments, the Sun Constellation System provides the world's first open petascale computing environment — one built entirely with open and standard hardware and software technologies. Cluster architects can use the Sun Constellation System to design and rapidly deploy tightly integrated, efficient, and cost-effective supercomputing grids and clusters that scale predictably from a few teraflops to over a petaflop. With a totally modular approach, processors, memory, interconnect fabric, and storage can all be scaled independently depending on individual needs.||June 2008<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://www.sun.com/software/products/lustre/docs/Peta-Scale_wp.pdf '''Peta-Scale I/O with the Lustre File System''']||Describes low-level infrastructure in the Lustre file system that addresses scalability in very large clusters. Covered topics include scalable I/O, locking policies and algorithms to cope with scale, implications for recovery, and other scalability issues. / ORNL ||February 2008<br />
|-<br />
||[http://wiki.lustre.org/images/4/49/WP_BestPractices_Lustre_DDN_032108.pdf '''Best Practices for Architecting a Lustre-based Storage Environment''']|| A series of best practices to consider when deploying a highly-reliable, high-performance Lustre environment. Covered topics include storage infrastructure failover, maximizing computational capability by minimizing I/O overhead, ensuring predictable striped file performance, and protecting large, persistent data stores. / DataDirect Networks||2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance Using Lustre on the TeraGrid''']||TeraGrid 2007 Conference, Madison, WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. Lustre performance comparison when using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems''']||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota ||2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || April 4, 2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory event at the computing center / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||<br />
HP-CAST 4 in Krakow / Karlsruhe Lustre Talks ||May 10, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||June 24, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || November 11, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || November 12, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||November 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have since been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Same paper at the CU site]<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
===BluePrints===<br />
<br />
{| border=1 cellspacing=0<br />
|+BluePrints<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|[http://wikis.sun.com/display/BluePrints/Lustre+File+System+-+Demo+Quick+Start+Guide '''Lustre File System - Demo Quick Start Guide''']||A simple cookbook for non-Linux experts on how to set up a Linux-based Lustre file system using small servers, workstations, PCs, or other available hardware for demonstration purposes.|| 2009<br />
|-<br />
|[http://wikis.sun.com/display/BluePrints/Implementing+the+Lustre+File+System+with+Sun+Storage '''Implementing the Lustre File System with Sun Storage''']||Describes an implementation of the Sun Lustre file system as a scalable storage cluster using Sun Fire servers and high-speed, low-latency InfiniBand interconnects.||2009<br />
|-class="even"<br />
||[http://wikis.sun.com/display/BluePrints/Tokyo+Tech+Tsubame+Grid+Storage+Implementation '''Tokyo Tech Tsubame Grid Storage Implementation''']||This Sun BluePrints™ article describes the storage architecture of the Tokyo Tech TSUBAME grid, as well as the steps for installing and configuring the Lustre file system within the storage architecture.||2009<br />
|-<br />
||[http://wikis.sun.com/download/attachments/31395541/820-5304.pdf?version=1 '''Sun Storage and Archive Solution for HPC: Sun BluePrints Reference Architecture''']||To help customers address an almost bewildering set of architectural challenges, Sun has developed the Sun Storage and Archive Solution for HPC, a reference architecture that can be easily customized to meet specific application goals and business requirements.||May 2008<br />
|-<br />
|}<br />
<br />
===Case Studies===<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
===Presentations===<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Walter Schön / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Andrei Maslennikov / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Stephan Wiesand / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path toward Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
===Archive (Prior to 2003)===<br />
<br />
<big><strong>From CFS</strong></big><br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** PowerPoint slides giving an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort exists in the T10 OSD working group proposal.<br />
** October 2000</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5645Lustre Publications2009-04-02T17:52:54Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf paper '''Optimizing Storage and I/O For Distributed Processing On Enterprise & High Performance Compute(HPC)Systems For Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr.IT Solutions Mgr, Naji Bekhazi, Director of R&D,Mask Data Prep (CATS), Ray Morgan,Sr.Product Marketing Manager,Mask Data Prep(CATS)/ Synopsys||2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Wrea Filesystem Performance Using Lustre on the TeraGrid''']||Teragrid 2007 Conference, Madison,WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL on CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Presentation also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H.Sharp Oral, proceedings of CUG 2006 \ Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. Lustre performance comparison when using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf '''Coordinating Parallel Hierarchical Storage Management in Object-base Cluster File Systems''']||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota ||2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || 4.4.2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Einführungsveranstaltung im Rechenzentrum / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||<br />
HP-CAST 4 in Krakau / Karlsruhe Lustre Talks ||10.5.2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||24.6.2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || 11.11.2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || 12.11.2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||Nov 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by SCHöN, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by MASLENNIKOV, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by WIESAND, Stephan / HEPiX Talks <br />
Slides on HPPix site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPix site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Paper also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan,John Shalf (NERSC) on CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation on gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System ''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
====Publications from CFS Prior to 2003====<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** Powerpoint slides of an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort exists in the T10 OSD working group proposal.<br />
** October 2000</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5644Lustre Publications2009-04-02T17:41:05Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf paper '''Optimizing Storage and I/O For Distributed Processing On Enterprise & High Performance Compute(HPC)Systems For Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr.IT Solutions Mgr, Naji Bekhazi, Director of R&D,Mask Data Prep (CATS), Ray Morgan,Sr.Product Marketing Manager,Mask Data Prep(CATS)/ Synopsys||2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Wrea Filesystem Performance Using Lustre on the TeraGrid''']||Teragrid 2007 Conference, Madison,WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL on CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Presentation also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H.Sharp Oral, proceedings of CUG 2006 \ Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. Lustre performance comparison when using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf '''Coordinating Parallel Hierarchical Storage Management in Object-base Cluster File Systems''']||MSST2006, Conference on Mass Storage Systems and Technologies (May 2006) / University of Minnesota ||2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || 4.4.2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Einführungsveranstaltung im Rechenzentrum / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||<br />
HP-CAST 4 in Krakau / Karlsruhe Lustre Talks ||10.5.2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||24.6.2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || 11.11.2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || 12.11.2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||Nov 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by SCHöN, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by MASLENNIKOV, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by WIESAND, Stephan / HEPiX Talks <br />
Slides on HPPix site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPix site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Paper also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan,John Shalf (NERSC) on CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation on gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System ''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
== Publications from CFS Prior to 2003==<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** Powerpoint slides of an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort underway in the T10 OSD working group.<br />
** October 2000</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5643Lustre Publications2009-04-02T17:38:08Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf '''Tokyo Tech Tsubame Grid Storage Implementation''']||By Syuuichi Ihara / Sun <br />
[http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]|| May 2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)''']||Glenn Newell, Sr. IT Solutions Mgr; Naji Bekhazi, Director of R&D, Mask Data Prep (CATS); Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS) / Synopsys||2007<br />
|-<br />
||[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance Using Lustre on the TeraGrid''']||TeraGrid 2007 Conference, Madison, WI / TeraGrid||2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium (IPDPS 2006). A Lustre performance comparison using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || April 4, 2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory event at the computing center / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||HP-CAST 4 in Kraków / Karlsruhe Lustre Talks ||May 10, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||June 24, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || November 11, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || November 12, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||November 2005<br />
|-<br />
||[http://wiki.lustre.org/images/8/81/LciPaper.pdf '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment''']||Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005). The management issues mentioned in the last part of this paper have been addressed. [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
/ University of Colorado, Boulder ||2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|- class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Schön, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Maslennikov, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Wiesand, Stephan / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
== CFS ==<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** PowerPoint slides giving an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort underway in the T10 OSD working group.<br />
** October 2000<br />
<br />
<br />
== University of Minnesota ==<br />
* '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems'''<br />
** MSST2006, Conference on Mass Storage Systems and Technologies (May 2006)<br />
**[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf Paper in PDF format]</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5642Lustre Publications2009-04-02T17:26:49Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference / Indiana University|| June 2007<br />
|-<br />
||[http://wiki.lustre.org/images/d/db/Yu_lustre.pdf '''Exploiting Lustre File Joining for Effective Collective IO''']||Proceedings of the CCGrid'07 / ORNL || May 2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
||[http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre''']||Parallel and Distributed Processing Symposium (IPDPS 2006). A Lustre performance comparison using InfiniBand and Quadrics interconnects. / Ohio State University<br />
[http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site] || 2006<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || April 4, 2006<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory event at the computing center / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||HP-CAST 4 in Kraków / Karlsruhe Lustre Talks ||May 10, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||June 24, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || November 11, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || November 12, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||November 2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|- class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Schön, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Maslennikov, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Wiesand, Stephan / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
== CFS ==<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** PowerPoint slides giving an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort underway in the T10 OSD working group.<br />
** October 2000<br />
<br />
<br />
== SUN == <br />
* '''Tokyo Tech Tsubame Grid Storage Implementation'''<br />
** By Syuuichi Ihara, May 2007<br />
** [http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf Paper in pdf format]<br />
** [http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]<br />
<br />
== Synopsys ==<br />
<br />
* '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)'''<br />
** Glenn Newell, Sr. IT Solutions Mgr<br />
** Naji Bekhazi, Director of R&D, Mask Data Prep (CATS)<br />
** Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS)<br />
** 2007<br />
** [http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf Paper in PDF format]<br />
<br />
==TeraGrid==<br />
* '''Wide Area Filesystem Performance Using Lustre on the TeraGrid'''<br />
** TeraGrid 2007 Conference, Madison, WI<br />
** [http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf Paper in PDF format]<br />
<br />
== University of Colorado, Boulder ==<br />
* '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment'''<br />
** [http://wiki.lustre.org/images/8/81/LciPaper.pdf Paper in PDF format]<br />
** Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005)<br />
** The management issues mentioned in the last part of this paper have been addressed.<br />
** [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
<br />
== University of Minnesota ==<br />
* '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems'''<br />
** MSST2006, Conference on Mass Storage Systems and Technologies (May 2006)<br />
**[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf Paper in PDF format]</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5641Lustre Publications2009-04-02T17:16:11Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference || June 2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sarp Oral, proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||November 2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|- class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|-<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Schön, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Maslennikov, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Wiesand, Stephan / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze the I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory event at the computing center / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''] ||HP-CAST 4 in Kraków / Karlsruhe Lustre Talks ||May 10, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf '''ISC 2005 in Heidelberg''']|| Karlsruhe Lustre Talks ||June 24, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf '''Experiences with 10 Months HP SFS/Lustre in HPC Production''']||HP-CAST 5 in Seattle / Karlsruhe Lustre Talks || November 11, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf '''Performance Monitoring in a HP SFS Environment''']||HP-CCN in Seattle / Karlsruhe Lustre Talks || November 12, 2005<br />
|-<br />
||[http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf '''Experiences with HP SFS/Lustre at SSCK''']||SGPFS 5 in Stuttgart / Karlsruhe Lustre Talks || April 4, 2006<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
== CFS ==<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** PowerPoint slides giving an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort underway in the T10 OSD working group.<br />
** October 2000<br />
<br />
<br />
== Ohio State University == <br />
<br />
* '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre'''<br />
** Parallel and Distributed Processing Symposium, 2006. IPDPS 2006. <br />
** Lustre performance comparison when using InfiniBand and Quadrics interconnects<br />
** [http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf Paper in PDF format]<br />
** [http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site]<br />
<br />
== ORNL == <br />
<br />
* '''Exploiting Lustre File Joining for Effective Collective IO'''<br />
** [http://wiki.lustre.org/images/d/db/Yu_lustre.pdf Paper in pdf format]<br />
** Proceedings of the CCGrid'07, May 2007.<br />
<br />
== SUN == <br />
* '''Tokyo Tech Tsubame Grid Storage Implementation'''<br />
** By Syuuichi Ihara, May 2007<br />
** [http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf Paper in pdf format]<br />
** [http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]<br />
<br />
== Synopsys ==<br />
<br />
* '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)'''<br />
** Glenn Newell, Sr. IT Solutions Mgr<br />
** Naji Bekhazi, Director of R&D, Mask Data Prep (CATS)<br />
** Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS)<br />
** 2007<br />
** [http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf Paper in PDF format]<br />
<br />
==TeraGrid==<br />
* '''Wide Area Filesystem Performance Using Lustre on the TeraGrid'''<br />
** TeraGrid 2007 Conference, Madison, WI<br />
** [http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf Paper in PDF format]<br />
<br />
== University of Colorado, Boulder ==<br />
* '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment'''<br />
** [http://wiki.lustre.org/images/8/81/LciPaper.pdf Paper in PDF format]<br />
** Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005)<br />
** The management issues mentioned in the last part of this paper have been addressed.<br />
** [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (It is the same as the attachment to the LCI paper above.)<br />
<br />
== University of Minnesota ==<br />
* '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems'''<br />
** MSST 2006, Conference on Mass Storage Systems and Technologies (May 2006)<br />
**[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf Paper in PDF format]</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Lustre_Publications&diff=5640Lustre Publications2009-04-02T17:04:51Z<p>Kjpriola: </p>
<hr />
<div>Note: Publications prior to 2003 are not listed here.<br />
<br />
{| border=1 cellspacing=0<br />
|+White Papers<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
|[http://www.datadirectnet.com/resource-downloads/best-practices-for-architecting-a-lustre-based-storage-environment-registration '''Best Practices for Architecting a Lustre-Based Storage Environment''']|| DDN ||March 2008<br />
|-<br />
||[http://wiki.lustre.org/images/2/20/Lustre_wan_tg07.pdf '''Wide Area Filesystem Performance using Lustre on the TeraGrid''']|| TeraGrid 2007 conference || June 2007<br />
|-<br />
|[http://wiki.lustre.org/images/3/3f/Larkin_paper.pdf '''Guidelines for Efficient Parallel I/O on the Cray XT3/XT4''' (PDF)]||Jeff Larkin, Mark Fahey, proceedings of CUG 2007 / Cray User Group|| 2007 <br />
|-<br />
|[http://wiki.lustre.org/images/7/77/A_Center-Wide_FS_using_Lustre.pdf '''A Center-Wide File System using Lustre''']||Shane Canon, H. Sharp Oral, Proceedings of CUG 2006 / Cray User Group|| 2006<br />
|-<br />
|[http://wiki.lustre.org/images/b/b9/Canon_slides.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']||Presented by ORNL at CUG 2007 / Cray User Group [http://wiki.lustre.org/images/f/fa/Canon_paper.pdf (Paper also available)] ||2007<br />
|-<br />
|[http://wiki.lustre.org/images/9/95/Our_Collateral_selecting-a-cfs.pdf '''Selecting a cluster file system''']||CFS||Nov 2005<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Case Studies<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|- class="even"<br />
|XXX||x||<br />
|-<br />
|XXX||x||<br />
|}<br />
<br />
<br />
{| border=1 cellspacing=0<br />
|+Presentations<br />
|-<br />
!Title<br />
!Description/Source<br />
!Date<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=23&sessionId=10&materialId=slides&confId=27391 '''Lustre cluster in production at GSI''']||Presented by Schön, Walter / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/materialDisplay.py?contribId=28&sessionId=10&materialId=slides&confId=27391 '''Final Report from File Systems Working Group''']||Presented by Maslennikov, Andrei / HEPiX Talks || May 2008<br />
|-<br />
||[http://indico.cern.ch/contributionDisplay.py?contribId=17&sessionId=10&confId=27391 '''Setting up a simple Lustre Filesystem'''] ||Presented by Wiesand, Stephan / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=26&amp;sessionId=40&amp;resId=1&amp;materialId=slides&amp;confId=257 Storage Evaluations at BNL] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/d/da/Storage_Evaluations%40BNL.pdf '''Storage Evaluations at BNL'''] ||Presented by Robert Petkus - BNL / HEPiX Talks <br />
Slides on HEPiX site: [https://indico.desy.de/getFile.py/access?contribId=44&amp;sessionId=39&amp;resId=0&amp;materialId=slides&amp;confId=257 Lustre Experience at CEA/DIF] || May 2008<br />
|-<br />
||[http://wiki.lustre.org/images/5/58/DIF.pdf '''Lustre Experience at CEA/DIF''']||Presented by J-Ch Lafoucriere / HEPiX Talks || April 23-27, 2008<br />
|-<br />
||[http://wiki.lustre.org/images/f/fa/Canon_paper.pdf '''XT7? Integrating and Operating a Conjoined XT3+XT4 System''']|| This paper describes the processes and tools used to move production work from the pre-existing XT3 to the new system incorporating that same XT3, including novel application of Lustre routing capabilities. / Cray User Group [http://wiki.lustre.org/images/b/b9/Canon_slides.pdf (Slides also available)]|| 2007<br />
|-<br />
|[http://wiki.lustre.org/images/e/ef/Using_IOR_to_Analyze_IO_Performance.pdf '''Using IOR to Analyze I/O Performance''']|| Presented by Hongzhang Shan, John Shalf (NERSC) at CUG 2007 / Cray User Group ||2007<br />
|-<br />
||[http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf '''Filesystems on SSCK's HP XC6000''']|| Introductory presentation at the computing center (Einführungsveranstaltung im Rechenzentrum) / Karlsruhe Lustre Talks || 2005<br />
|-<br />
||[http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf '''Experiences & Performance of SFS/Lustre Cluster File System in Production''']||<br />
HP-CAST 4 in Krakau / Karlsruhe Lustre Talks || May 10, 2005<br />
|-<br />
|[http://wiki.lustre.org/images/a/a3/Gelato-2004-05.pdf '''Lustre state and production installations''']||Presentation at a gelato.org meeting / CFS||May 2004<br />
|-<br />
|[http://wiki.lustre.org/images/e/ea/Lustre-usg-2003.pdf '''Lustre File System''']||Presentation on the state of Lustre in mid-2003 and the path towards Lustre 1.0 / CFS||Summer 2003<br />
|-<br />
|[http://wiki.lustre.org/images/d/d2/Ols2003.pdf '''Lustre: Building a cluster file system for 1,000 node clusters''']||A technical presentation about our successes and mistakes during 2002-2003 / CFS||Summer 2003<br />
|-<br />
|}<br />
<br />
== CFS ==<br />
<br />
* [http://wiki.lustre.org/images/6/6f/T10-062002.pdf '''Lustre: Scalable Clustered Object Storage''']<br />
** A technical presentation on Lustre.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/b/b5/001_lustretechnical-fall2002.pdf '''Lustre - the inter-galactic cluster file system?''']<br />
** A technical overview of Lustre from 2002.<br />
** June 2002<br />
<br />
* [http://wiki.lustre.org/images/7/79/Intragalactic-2001.pdf '''Lustre Light: a simpler fully functional cluster file system''']<br />
** September 2001<br />
<br />
* [http://wiki.lustre.org/images/c/c9/LustreSystemAnatomy.pdf '''Lustre System Anatomy''']<br />
** Lustre component overview.<br />
<br />
* [http://wiki.lustre.org/images/a/af/Intergalactic-062001.pdf '''Lustre: the intergalactic file system for the international labs?''']<br />
** Presentation for Linux World and elsewhere on Lustre and Next-Generation Data Centers<br />
** June 2001<br />
<br />
* [http://wiki.lustre.org/images/4/44/Obdcluster.pdf '''The object-based storage cluster file systems and parallel I/O''']<br />
** Sandia presentation on Lustre and Linux clustering<br />
<br />
* [http://wiki.lustre.org/images/a/a2/Sdi-clusters.pdf '''Linux clustering and storage management''']<br />
** PowerPoint slides giving an overview of cluster and OBD technology<br />
<br />
* [http://wiki.lustre.org/images/8/81/Lustre-sow-dist.pdf '''Lustre Technical Project Summary''']<br />
** A Lustre roadmap presented to address the [http://wiki.lustre.org/images/7/70/SGSRFP.pdf Tri-Labs/DOD SGS File System RFP].<br />
** July 2001<br />
<br />
* [http://wiki.lustre.org/images/b/bd/Dfsprotocols.pdf '''File Systems for Clusters from a Protocol Perspective'''] <br />
** A comparative description of several distributed file systems.<br />
** Proc. Second Extreme Linux Topics Workshop, Monterey CA, June 1999.<br />
<br />
* [http://www.pdl.cs.cmu.edu/NASD '''CMU NASD project''']<br />
<br />
* [http://wiki.lustre.org/images/2/24/Osd-r03.pdf '''Working draft T10 OSD''']<br />
** A standards effort exists in the T10 OSD working group proposal.<br />
** October 2000<br />
<br />
<br />
== Karlsruhe Lustre Talks ==<br />
<br />
* http://www.rz.uni-karlsruhe.de/dienste/lustretalks.php<br />
* '''Filesystems on SSCK's HP XC6000'''<br />
** Introductory presentation at the computing center (Einführungsveranstaltung im Rechenzentrum, 2005): [http://wiki.lustre.org/images/7/7c/Karlsruhe0503.pdf Karlsruhe0503.pdf]<br />
* '''Experiences & Performance of SFS/Lustre Cluster File System in Production'''<br />
** HP-CAST 4 in Krakau (10.5.2005): [http://wiki.lustre.org/images/9/95/Karlsruhe0510.pdf Karlsruhe0510.pdf]<br />
* '''Experiences with HP SFS/Lustre in HPC Production'''<br />
** ISC 2005 in Heidelberg (24.6.2005): [http://wiki.lustre.org/images/5/5f/Karlsruhe0506.pdf Karlsruhe0506.pdf]<br />
* '''Experiences with 10 Months HP SFS/Lustre in HPC Production'''<br />
** HP-CAST 5 in Seattle (11.11.2005): [http://wiki.lustre.org/images/1/17/Karlsruhe0511.pdf Karlsruhe0511.pdf]<br />
* '''Performance Monitoring in a HP SFS Environment'''<br />
** HP-CCN in Seattle (12.11.2005): [http://wiki.lustre.org/images/a/aa/Karlsruhe0512.pdf Karlsruhe0512.pdf]<br />
* '''Experiences with HP SFS/Lustre at SSCK'''<br />
** SGPFS 5 in Stuttgart (4.4.2006): [http://wiki.lustre.org/images/0/0b/Karlsruhe0604.pdf Karlsruhe0604.pdf]<br />
<br />
== Ohio State University == <br />
<br />
* '''Benefits of High Speed Interconnects to Cluster File Systems: A Case Study with Lustre'''<br />
** IEEE International Parallel and Distributed Processing Symposium (IPDPS 2006)<br />
** Lustre performance comparison when using InfiniBand and Quadrics interconnects<br />
** [http://wiki.lustre.org/images/d/d8/Cac06_lustre.pdf Paper in PDF format]<br />
** [http://nowlab.cse.ohio-state.edu/publications/conf-papers/2006/yu-cac06.pdf Download paper at OSU site]<br />
<br />
== ORNL == <br />
<br />
* '''Exploiting Lustre File Joining for Effective Collective IO'''<br />
** [http://wiki.lustre.org/images/d/db/Yu_lustre.pdf Paper in PDF format]<br />
** Proceedings of CCGrid'07, May 2007.<br />
<br />
== SUN == <br />
* '''Tokyo Tech Tsubame Grid Storage Implementation'''<br />
** By Syuuichi Ihara, May 2007<br />
** [http://wiki.lustre.org/images/7/79/Thumper-BP-6.pdf Paper in PDF format]<br />
** [http://www.sun.com/blueprints/0507/820-2187.html Sun BluePrints Publications]<br />
<br />
== Synopsys ==<br />
<br />
* '''Optimizing Storage and I/O for Distributed Processing on Enterprise & High Performance Compute (HPC) Systems for Mask Data Preparation Software (CATS)'''<br />
** Glenn Newell, Sr. IT Solutions Manager<br />
** Naji Bekhazi, Director of R&D, Mask Data Prep (CATS)<br />
** Ray Morgan, Sr. Product Marketing Manager, Mask Data Prep (CATS)<br />
** 2007<br />
** [http://wiki.lustre.org/index.php?title=Image:Hpc_cats_wp.pdf Paper in PDF format]<br />
<br />
==TeraGrid==<br />
*'''Wide Area Filesystem Performance Using Lustre on the TeraGrid'''<br />
**TeraGrid 2007 Conference, Madison, WI<br />
**[http://wiki.lustre.org/index.php?title=Image:Lustre_wan_tg07.pdf Paper in PDF format]<br />
<br />
== University of Colorado, Boulder ==<br />
* '''Shared Parallel Filesystem in Heterogeneous Linux Multi-Cluster Environment'''<br />
** [http://wiki.lustre.org/images/8/81/LciPaper.pdf Paper in PDF format]<br />
** Proceedings of the 6th LCI International Conference on Linux Clusters: The HPC Revolution (2005)<br />
** The management issues mentioned in the last part of this paper have since been addressed.<br />
** [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf Paper at CU site] (This is the same paper as the PDF linked above.)<br />
<br />
== University of Minnesota ==<br />
* '''Coordinating Parallel Hierarchical Storage Management in Object-based Cluster File Systems'''<br />
** MSST 2006, Conference on Mass Storage Systems and Technologies (May 2006)<br />
**[http://wiki.lustre.org/images/f/fc/MSST-2006-paper.pdf Paper in PDF format]</div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Contribute:Contribute&diff=5609Contribute:Contribute2009-03-30T14:10:34Z<p>Kjpriola: </p>
<hr />
<div>Developers can help shape Lustre's direction by contributing code or by testing pre-release versions. <br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Code</strong></big><br />
<br />
Find out what you need to know to contribute to Lustre's open-source code.<br />
<br />
* Join the [[Mailing_Lists|Lustre Development Mailing List]] for developers.<br />
* Read the [[Contribution_Policy|Contribution Policy]] before testing or submitting code.<br />
* Use [[Open_CVS|Open CVS]] to download pre-release versions of Lustre for coding or testing.<br />
* [[Coding_Guidelines|Coding Guidelines]] help developers avoid problems during Lustre code merges.<br />
* [[Documenting_Code|Documenting Code]] using Doxygen.<br />
<br />
</div><br />
<br />
<div class="categoryRight"><br />
<br />
<big><strong>Developer Resources</strong></big><br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
* Lustre design documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* Lustre features under development<br />
** [[ZFS_Resources|ZFS Resources]]<br />
** [[Lustre_OSS/MDS_with_ZFS_DMU|Lustre OSS/MDS with ZFS DMU]]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
<big><strong>Test</strong></big><br />
<br />
Read helpful information for testing Lustre.<br />
<br />
* [[Acceptance_Small_(acc-sm)_Testing_on_Lustre|Acceptance Small Testing]]<br />
* [[Testing_Framework|Testing Framework]]<br />
* [[Buffalizing_Tests|Buffalizing Tests]]<br />
* [[Lustre_Test_Plans|Test Plans]]<br />
* [[POSIX_Testing|Posix Testing]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=GetInvolved:Get_Involved&diff=5608GetInvolved:Get Involved2009-03-30T14:09:41Z<p>Kjpriola: </p>
<hr />
<div>Find out what the Lustre community is doing, and get involved.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Community</strong></big> <br />
<br />
Get connected through Lustre community events.<br />
<br />
* Sign up for one of the [[Mailing_Lists|Lustre Mailing Lists]]<br />
* [[Lustre_User_Group|Lustre User Group 2009]]<br />
** [[Lug_08|Lustre User Group 2008]]<br />
** [[Lug_07|Lustre User Group 2007]]<br />
** [[Lug_06|Lustre User Group 2006]]<br />
* Developers, find out how to [[Contribution_Policy|contribute code or test]].<br />
* Read case studies and other [[Lustre_Publications|Lustre publications]], including Lustre engineering presentations.<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Community Development Projects</strong></big> <br />
Interesting projects from the Lustre user community that are available for your use.<br />
<br />
* [http://wiki.lustre.org/images/d/d9/Lustre-amanda.pdf Backup and Recovery: Amanda and Lustre] (PDF)<br />
* [http://sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool] <br />
* [http://www.bullopensource.org/lustre/ Bull - Open Source Tools for Lustre] <br />
* [http://shine-wiki.async.eu/wiki/Home CEA Administration Tool for Lustre 1.6] <br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
<big><strong>Lustre Centres of Excellence</strong></big> <br />
<br />
Find out about active LCEs.<br />
<br />
* [http://ornl-lce.clusterfs.com/index.php?title=Main_Page Oak Ridge National Laboratory - Lustre Centre of Excellence]<br />
* [http://cea-lce.clusterfs.com/index.php?title=Main_Page CEA Lustre Centre of Excellence]<br />
** [[LCE_CEA]]<br />
* [http://llnl-lce.clusterfs.com/index.php?title=Main_Page LLNL Lustre Centre of Excellence]<br />
* [http://psc-lce.clusterfs.com/index.php?title=Main_Page Pittsburgh Supercomputing Center Lustre Centre of Excellence]<br />
* [http://tsinghua-lce.clusterfs.com/index.php?title=Main_Page Tsinghua University Lustre Centre of Excellence]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Contribute:Contribute&diff=5607Contribute:Contribute2009-03-30T14:07:51Z<p>Kjpriola: </p>
<hr />
<div>Developers can help shape Lustre's direction by contributing code or by testing pre-release versions. <br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Code</strong></big><br />
Find out what you need to know to contribute to Lustre's open-source code.<br />
<br />
* Join the [[Mailing_Lists|Lustre Development Mailing List]] for developers.<br />
* Read the [[Contribution_Policy|Contribution Policy]] before testing or submitting code.<br />
* Use [[Open_CVS|Open CVS]] to download pre-release versions of Lustre for coding or testing.<br />
* [[Coding_Guidelines|Coding Guidelines]] help developers avoid problems during Lustre code merges.<br />
* [[Documenting_Code|Documenting Code]] using Doxygen.<br />
<br />
</div><br />
<br />
<div class="categoryRight"><br />
<br />
<big><strong>Developer Resources</strong></big><br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
* Lustre design documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* Lustre features under development<br />
** [[ZFS_Resources|ZFS Resources]]<br />
** [[Lustre_OSS/MDS_with_ZFS_DMU|Lustre OSS/MDS with ZFS DMU]]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
<big><strong>Test</strong></big><br />
Read helpful information for testing Lustre.<br />
<br />
* [[Acceptance_Small_(acc-sm)_Testing_on_Lustre|Acceptance Small Testing]]<br />
* [[Testing_Framework|Testing Framework]]<br />
* [[Buffalizing_Tests|Buffalizing Tests]]<br />
* [[Lustre_Test_Plans|Test Plans]]<br />
* [[POSIX_Testing|Posix Testing]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Learn:Learn&diff=5606Learn:Learn2009-03-30T14:06:10Z<p>Kjpriola: </p>
<hr />
<div>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Ideally suited for data-intensive applications that require the highest possible I/O performance, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Interoperability, Features and Roadmap</strong></big><br />
<br />
These resources detail Lustre's interoperability, features, and plans for future releases.<br />
<br />
* [[Lustre_Support_Matrix|Lustre Support Matrix]] lists supported networks and kernels for current Lustre releases. <br />
<br />
* [[Lustre_1.8|Lustre 1.8]] provides feature descriptions and lists the benefits offered by upgrading to this version. For the latest information on when Lustre 1.8 is expected to release, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_2.0|Lustre 2.0]] provides feature descriptions and lists the benefits offered by upgrading to this version. For the latest information on when Lustre 2.0 is expected to release, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_Roadmap|Lustre Roadmap]] provides estimated code freeze and GA dates for upcoming releases, supported kernels, new features, and retirement dates for Lustre products.<br />
<br />
:The Roadmap lists a number of new features under development; several have their own wiki pages.<br />
<br />
** [[Metadata_Clustering|Metadata Clustering]]<br />
** [[Windows_Native_Client|Windows Native Client]]<br />
*** [[Windows_Native_Client_Build_Guide|Windows Native Client Build Guide]]<br />
*** [[Windows_Native_Client_Questions|Windows Native Client Questions]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Publications</strong></big><br />
<br />
A number of papers, presentations and publications are available for Lustre. Use these resources to learn about the benefits offered by Lustre and plans for future development.<br />
<br />
* [[Publications|Lustre white papers, case studies, engineering presentations]]<br />
** [[Lustre_Documentation|Lustre Documentation]]<br />
** [[Lustre_Launch|Lustre All-Hands Meeting 3/08]]<br />
** [[Lustre All-Hands Meeting 12/08]]<br />
* [http://manual.lustre.org/index.php?title=Main_Page#Lustre_Operations_Manual Lustre Operations Manual]<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Training and Internals</strong></big><br />
Lustre training is available from Sun Microsystems.<br />
<br />
* Lustre training is offered through the Sun course [http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-100)]. <br />
<br />
* [[Lustre_Internals|Lustre Internals]] (formerly part of Lustre advanced training) is also available, covering complex, code-level transactions.<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Download:Download&diff=5605Download:Download2009-03-30T14:04:41Z<p>Kjpriola: </p>
<hr />
<div>Lustre™ is a scalable, secure, highly available cluster file system. It is designed, developed and maintained by Sun Microsystems, Inc. <br />
[[Learn|Learn More]]<br />
<br />
Official production releases and pre-release versions of Lustre software are available for download. Official releases offer new features and enhancements, and have undergone thorough test cycles. They are available at the Sun [http://www.sun.com/software/products/lustre/get.jsp download] site. Pre-release versions of Lustre are still being coded or are undergoing release testing. They are available for checkout from the Lustre source repository. <br />
<br />
If you are ready to get a production-level release of Lustre or ready to try a pre-release version, download it here.<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Official Releases</strong></big><br />
<br />
The latest official release of Lustre software is always available from Sun Microsystems, Inc., along with earlier production versions. To download an official release of Lustre, visit the Sun [http://www.sun.com/software/products/lustre/get.jsp download] site.<br />
<br />
Currently, all Lustre 1.6.x versions are available for download. To determine which Lustre release supports the features and environment you want, see the [[Lustre_Support_Matrix|Lustre Support Matrix]]. <br />
<br />
* <strong>Get [http://www.sun.com/software/products/lustre/get.jsp Lustre from Sun].</strong><br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Pre-Release Versions</strong></big><br />
<br />
Lustre is an open-source product, and we encourage you to help develop and test a more robust, feature-rich Lustre by trying out pre-release versions of the software. To obtain Lustre code from the source repository, you must have CVS (version control) installed.<br />
<br />
* <strong>Get [[Open_CVS|Lustre from CVS]].</strong><br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Main_Page&diff=5604Main Page2009-03-27T15:47:52Z<p>Kjpriola: </p>
<hr />
<div><div class="homeLeft"><br />
<h1>High-performance and Scalability</h1><br />
<br />
<p>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Installed in 8 of the world's 10 largest and most data-intensive computing environments, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br><br />
<strong>More on Lustre performance, service, and support at [http://www.sun.com/software/products/lustre/ sun.com/lustre].</strong></p><br />
<br />
<h4>What’s New</h4><br />
<br />
<strong>[[Lustre_1.8|Upcoming Release of Lustre 1.8]]</strong><br />
<p>Lustre 1.8 is in the final cycles of release testing and is expected to GA in April 2009. Lustre 1.8 will introduce several robust new features, including: <br />
<ul><br />
<li>[[Lustre_1.8#Adaptive_Timeouts|Adaptive Timeouts]]</li><br />
<li>[[Lustre_1.8#OSS_Read_Cache|OSS Read Cache]]</li><br />
<li>[[Lustre_1.8#OST_Pools|OST Pools]]</li><br />
<li>[[Lustre_1.8#Version-Based_Recovery|Version-based Recovery]]</li><br />
</ul><br />
</p><br />
<p>[[Lustre_1.8|Read more]] about 1.8 features and why you should upgrade.</p><br />
<br />
<strong>[[Lustre_User_Group|LUG 2009 - April 16-17, 2009]]</strong><br />
<p>The Lustre User Group is the premier event to learn about Lustre technology, acquire best practices, and share knowledge with other Lustre users. LUG 2009 is a once-a-year opportunity for users to get answers, advice, and suggestions regarding their specific Lustre implementations.<br />
<br />
Don't miss this opportunity to meet with Lustre developers and discuss upcoming enhancements and capabilities. For details on this year's program and to register, see [[Lustre_User_Group|LUG 2009]].<br />
</p><br />
</div><br />
<br />
<div class="homeRight"><br />
<ul><br />
<li class="dnld"><br />
<big>[[Download]]</big><br />
Get official releases or pre-release versions.<br />
<ul><br />
<li><strong>[http://www.sun.com/software/products/lustre/get.jsp Get Lustre]</strong></li><br />
<li>[[Open_CVS|Pre-Release Versions]]</li><br />
</ul><br />
</li><br />
<br />
<li class="lrn"><br />
<big>[[Learn]]</big><br />
Find out about Lustre interoperability, features and publications.<br />
<ul><br />
<li>[[Learn|About Lustre]]</li><br />
<li>[[Learn|Publications]]</li><br />
</ul><br />
</li><br />
<br />
<li class="use"><br />
<big>[[Use]]</big><br />
User resources to make Lustre perform at its best.<br />
<ul><br />
<li>[[Use|Install and Configure]]</li><br />
<li>[[Use|Administer]]</li><br />
</ul><br />
</li><br />
<br />
<li class="contribute"><br />
<big>[[Contribute]]</big><br />
Developer resources and tools to contribute to Lustre.<br />
<ul><br />
<li>[[Contribute|Code]]</li><br />
<li>[[Contribute|Test]]</li><br />
</ul><br />
</li><br />
<br />
<li class="participate"><br />
<big>[[Get Involved]]</big><br />
Find groups and community developer projects.<br />
<ul><br />
<li>[[Community]]</li><br />
<li>[[Lustre Centres of Excellence]]</li><br />
</ul><br />
</li><br />
</ul><br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Main_Page&diff=5603Main Page2009-03-27T15:42:27Z<p>Kjpriola: </p>
<hr />
<div><div class="homeLeft"><br />
<h1>High-performance and Scalability</h1><br />
<br />
<p>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Installed in 8 of the world's 10 largest and most data-intensive computing environments, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br><br />
<strong>Learn about Lustre features, downloads, and support at [http://www.sun.com/software/products/lustre/ sun.com/lustre].</strong></p><br />
<br />
<h4>What’s New</h4><br />
<br />
<strong>[[Lustre_1.8|Upcoming Release of Lustre 1.8]]</strong><br />
<p>Lustre 1.8 is in the final cycles of release testing and is expected to GA in April 2009. Lustre 1.8 will introduce several robust new features, including: <br />
<ul><br />
<li>[[Lustre_1.8#Adaptive_Timeouts|Adaptive Timeouts]]</li><br />
<li>[[Lustre_1.8#OSS_Read_Cache|OSS Read Cache]]</li><br />
<li>[[Lustre_1.8#OST_Pools|OST Pools]]</li><br />
<li>[[Lustre_1.8#Version-Based_Recovery|Version-based Recovery]]</li><br />
</ul><br />
</p><br />
<p>[[Lustre_1.8|Read more]] about 1.8 features and why you should upgrade.</p><br />
<br />
<strong>[[Lustre_User_Group|LUG 2009 - April 16-17, 2009]]</strong><br />
<p>The Lustre User Group is the premier event to learn about Lustre technology, acquire best practices, and share knowledge with other Lustre users. LUG 2009 is a once-a-year opportunity for users to get answers, advice, and suggestions regarding their specific Lustre implementations.<br />
<br />
Don't miss this opportunity to meet with Lustre developers and discuss upcoming enhancements and capabilities. For details on this year's program and to register, see [[Lustre_User_Group|LUG 2009]].<br />
</p><br />
</div><br />
<br />
<div class="homeRight"><br />
<ul><br />
<li class="dnld"><br />
<big>[[Download]]</big><br />
Get official releases or pre-release versions.<br />
<ul><br />
<li><strong>[http://www.sun.com/software/products/lustre/get.jsp Download Lustre]</strong></li><br />
<li>[[Open_CVS|Pre-Release Versions]]</li><br />
</ul><br />
</li><br />
<br />
<li class="lrn"><br />
<big>[[Learn]]</big><br />
Find out about Lustre interoperability, features and publications.<br />
<ul><br />
<li>[[Learn|About Lustre]]</li><br />
<li>[[Learn|Publications]]</li><br />
</ul><br />
</li><br />
<br />
<li class="use"><br />
<big>[[Use]]</big><br />
User resources to make Lustre perform at its best.<br />
<ul><br />
<li>[[Use|Install and Configure]]</li><br />
<li>[[Use|Administer]]</li><br />
</ul><br />
</li><br />
<br />
<li class="contribute"><br />
<big>[[Contribute]]</big><br />
Developer resources and tools to contribute to Lustre.<br />
<ul><br />
<li>[[Contribute|Code]]</li><br />
<li>[[Contribute|Test]]</li><br />
</ul><br />
</li><br />
<br />
<li class="participate"><br />
<big>[[Get Involved]]</big><br />
Find groups and community developer projects.<br />
<ul><br />
<li>[[Community]]</li><br />
<li>[[Lustre Centres of Excellence]]</li><br />
</ul><br />
</li><br />
</ul><br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Use:Use&diff=5602Use:Use2009-03-27T10:52:45Z<p>Kjpriola: </p>
<hr />
<div>Resources for system administrators and users on the operation, administration, and maintenance of Lustre.<br />
<br />
<br />
<div class="categoryLeft"><br />
<br />
<big><strong>Install and Configure</strong></big><br />
<br />
* [[Lustre_Quick_Start|Quick Start Guide]]<br />
* [[Lustre_Howto|Lustre How-To Guide]]<br />
* [[BuildLustre|Building Lustre]]<br />
* [[Debian_Install| Installing Lustre on Debian]]<br />
* [[Upgrade_To_16|Upgrading from 1.4.6 and Later to 1.6]]<br />
* [[Change_Log_1.6|Change Log 1.6]]<br />
* [[Change_Log_1.4|Change Log 1.4]]<br />
* [[Kernel_Patch_Management|Kernel Patch Management]]<br />
* [[Patchless_Client|Patchless Client]]<br />
* [[Mount_Conf|Mountconf]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Administer</strong></big><br />
<br />
* [[LibLustre_How-To_Guide|LibLustre How-To Guide]]<br />
* [[Kerb_Lustre|Kerberos (security)]]<br />
* [[Lustre_Failover|Lustre Failover]]<br />
* [[File_System_Backup|Filesystem Backup]]<br />
* [[Recovery_Overview|Recovery Overview]]<br />
* [[Fsck_Support|FSCK Support]]<br />
* [[PIOS-DMU|PIOS-DMU]]<br />
* [[Software_RAID|Software RAID]]<br />
* [[RAID5_Patches|RAID5 Patches]]<br />
* [[Striping_Guidelines|Striping Guidelines]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
<big><strong>Tune</strong></big><br />
<br />
* [[Lustre_Tuning|Lustre Tuning]]<br />
* [[Lustre_DDN_Tuning|Lustre DDN Tuning]]<br />
* [http://manual.lustre.org/manual/LustreManual16_HTML/LustreProc.html#50446383_pgfId-5529 Lustre Proc] The Lustre manual chapter on proc tunable parameters for Lustre and their usage. It describes several of the proc tunables, including those that affect the client's RPC behavior, and notes a planned substantial reorganization of proc entries.<br />
* [[Service_Monitors|Service Monitors]]<br />
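Most of these tunables are exposed as simple <tt>name=value</tt> pairs (under <tt>/proc</tt>, or via <tt>lctl get_param</tt>). A minimal sketch of collecting such output for monitoring; the parser is our own and the parameter names shown are illustrative examples, not guaranteed entries.<br />

```python
def parse_params(text):
    """Parse "name=value" lines, in the style of `lctl get_param`
    output, into a dict; blank or malformed lines are skipped."""
    params = {}
    for line in text.splitlines():
        name, sep, value = line.partition("=")
        if sep and name.strip():
            params[name.strip()] = value.strip()
    return params

# Example output captured from a client (names are illustrative):
sample = """\
osc.lustre-OST0000-osc.max_rpcs_in_flight=8
osc.lustre-OST0000-osc.max_dirty_mb=32
"""
print(parse_params(sample)["osc.lustre-OST0000-osc.max_rpcs_in_flight"])  # -> 8
```

A service monitor could poll such values periodically and alert when an RPC-related tunable drifts from the site's chosen setting.<br />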
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
<big><strong>Troubleshoot</strong></big><br />
<br />
* [[Lustre_Debugging|Debugging Lustre]]<br />
* [[Lustre_Messages|Lustre Messages]]<br />
* [[Lustre_RAS|Lustre RAS]]<br />
* [[Mailing_Lists|Mail List]]<br />
* [[Report_Bugs|Report Bugs]] <br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
<big><strong>User Resources</strong></big><br />
<br />
* [http://manual.lustre.org/index.php?title=Main_Page Operations Manual]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Learn:Learn&diff=5601Learn:Learn2009-03-27T10:20:42Z<p>Kjpriola: </p>
<hr />
<div>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Ideally suited for data-intensive applications that require the highest possible I/O performance, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br />
<div class="categoryLeft"><br />
<br />
=====Interoperability, Features and Roadmap=====<br />
<br />
These resources detail Lustre's interoperability, features, and plans for future releases.<br />
<br />
* [[Lustre_Support_Matrix|Lustre Support Matrix]] lists supported networks and kernels for current Lustre releases. <br />
<br />
* [[Lustre_1.8|Lustre 1.8]] provides feature descriptions and lists the benefits offered by upgrading to this version. For the latest information on when Lustre 1.8 is expected to be released, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_2.0|Lustre 2.0]] provides feature descriptions and lists the benefits offered by upgrading to this version. For the latest information on when Lustre 2.0 is expected to be released, see the [[Lustre_Roadmap|Lustre Roadmap]]. <br />
<br />
* [[Lustre_Roadmap|Lustre Roadmap]] provides estimated code freeze and GA dates for upcoming releases, supported kernels, new features, and retirement dates for Lustre products.<br />
<br />
:The Roadmap lists a number of new features under development; several have their own wiki pages.<br />
<br />
** [[Metadata_Clustering|Metadata Clustering]]<br />
** [[Windows_Native_Client|Windows Native Client]]<br />
*** [[Windows_Native_Client_Build_Guide|Windows Native Client Build Guide]]<br />
*** [[Windows_Native_Client_Questions|Windows Native Client Questions]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
=====Publications=====<br />
<br />
A number of papers, presentations, and other publications are available for Lustre. Use these resources to learn about the benefits offered by Lustre and plans for future development.<br />
<br />
* [[Publications|Lustre white papers, case studies, engineering presentations]]<br />
** [[Lustre_Documentation|Lustre Documentation]]<br />
** [[Lustre_Launch|Lustre All-Hands Meeting 3/08]]<br />
** [[Lustre All-Hands Meeting 12/08]]<br />
* [http://manual.lustre.org/index.php?title=Main_Page#Lustre_Operations_Manual Lustre Operations Manual]<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
=====Training and Internals =====<br />
Lustre training is available from Sun Microsystems.<br />
<br />
* [http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-100)] covers the administration of Lustre-based clusters. <br />
<br />
* [[Lustre_Internals|Lustre Internals]] materials (formerly part of Lustre advanced training) are also available, covering complex, code-level transactions.<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Main_Page&diff=5596Main Page2009-03-26T23:44:30Z<p>Kjpriola: </p>
<hr />
<div><div class="homeLeft"><br />
<h1>High-performance and Scalability</h1><br />
<br />
<p>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Installed in 8 of the 10 largest and most data-intensive computing environments in the world, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br><br />
<strong>Learn about Lustre features, downloads, and support at [http://www.sun.com/software/products/lustre/ sun.com/lustre].</strong></p><br />
<br />
<h4>What’s New</h4><br />
<br />
<strong>[[Lustre_1.8|Release of Lustre 1.8]]</strong><br />
<p>Lustre 1.8 is in the final cycles of release testing and is expected to GA in April 2009. Lustre 1.8 will introduce several robust new features, including: <br />
<ul><br />
<li>[[Lustre_1.8#Adaptive_Timeouts|Adaptive Timeouts]]</li><br />
<li>[[Lustre_1.8#OSS_Read_Cache|OSS Read Cache]]</li><br />
<li>[[Lustre_1.8#OST_Pools|OST Pools]]</li><br />
<li>[[Lustre_1.8#Version-Based_Recovery|Version-based Recovery]]</li><br />
</ul><br />
</p><br />
<p>[[Lustre_1.8|Read more]] about 1.8 features and why you should upgrade.</p><br />
<br />
<strong>[[Lustre_User_Group|LUG 2009 - April 16-17, 2009]]</strong><br />
<p>The Lustre User Group is the premier event to learn about Lustre technology, acquire best practices, and share knowledge with other Lustre users. LUG 2009 is a once-a-year opportunity for users to get answers, advice, and suggestions regarding their specific Lustre implementations.<br />
<br />
Don't miss this opportunity to meet with Lustre developers and discuss upcoming enhancements and capabilities. For details on this year's program and to register, see [[Lustre_User_Group|LUG 2009]].<br />
</p><br />
</div><br />
<br />
<div class="homeRight"><br />
<ul><br />
<li class="dnld"><br />
<big>[[Download]]</big><br />
Get official releases or pre-release versions.<br />
<ul><br />
<li>[http://www.sun.com/software/products/lustre/get.jsp Official Releases]</li><br />
<li>[[Open_CVS|Pre-Release Versions]]</li><br />
</ul><br />
</li><br />
<br />
<li class="lrn"><br />
<big>[[Learn]]</big><br />
Find out about Lustre releases and publications.<br />
<ul><br />
<li>[[Learn|About Lustre Releases]]</li><br />
<li>[[Learn|Publications]]</li><br />
</ul><br />
</li><br />
<br />
<li class="use"><br />
<big>[[Use]]</big><br />
User resources to make Lustre perform at its best.<br />
<ul><br />
<li>[[Use|Install and Configure]]</li><br />
<li>[[Use|Administer]]</li><br />
</ul><br />
</li><br />
<br />
<li class="contribute"><br />
<big>[[Contribute]]</big><br />
Developer resources and tools to contribute to Lustre.<br />
<ul><br />
<li>[[Contribute|Code]]</li><br />
<li>[[Contribute|Test]]</li><br />
</ul><br />
</li><br />
<br />
<li class="participate"><br />
<big>[[Get Involved]]</big><br />
Find groups and community developer projects.<br />
<ul><br />
<li>[[Community]]</li><br />
<li>[[Lustre Centres of Excellence]]</li><br />
</ul><br />
</li><br />
</ul><br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Main_Page&diff=5595Main Page2009-03-26T23:31:47Z<p>Kjpriola: </p>
<hr />
<div><div class="homeLeft"><br />
<h1>High-performance and Scalability</h1><br />
<br />
<p>The Lustre™ file system redefines I/O performance and scalability standards for the<br />
world’s largest and most complex computing environments. Installed in 8 of the 10 largest and most data-intensive computing environments in the world, Lustre is an object-based cluster file system that scales to tens of thousands of nodes and petabytes of storage with groundbreaking I/O and metadata throughput.<br />
<br><br />
<br><br />
<strong>Learn about Lustre features, downloads, support, and more at [http://www.sun.com/software/products/lustre/ sun.com/lustre].</strong></p><br />
<br />
<h2>What’s New</h2><br />
<br />
<h5>Release of Lustre 1.8</h5><br />
<p>Lustre 1.8 is in the final cycles of release testing and is expected to GA in April 2009. Lustre 1.8 will introduce several robust new features, including: <br />
<ul><br />
<li>[[Lustre_1.8#Adaptive_Timeouts|Adaptive Timeouts]]</li><br />
<li>[[Lustre_1.8#OSS_Read_Cache|OSS Read Cache]]</li><br />
<li>[[Lustre_1.8#OST_Pools|OST Pools]]</li><br />
<li>[[Lustre_1.8#Version-Based_Recovery|Version-based Recovery]]</li><br />
</ul><br />
</p><br />
<p>[[Lustre_1.8|Read more]] about 1.8 features and why you should upgrade.</p><br />
<br />
<br />
<h4>[[Lustre_User_Group|LUG 2009 - April 16-17, 2009]]</h4><br />
<p>The Lustre User Group is the premier event to learn about Lustre technology, acquire best practices, and share knowledge with other Lustre users. LUG 2009 is a once-a-year opportunity for users to get answers, advice, and suggestions regarding their specific Lustre implementations.<br />
<br />
Don't miss this opportunity to meet with Lustre developers and discuss upcoming enhancements and capabilities. For details on this year's program and to register, see [[Lustre_User_Group|LUG 2009]].<br />
</p><br />
</div><br />
<br />
<div class="homeRight"><br />
<ul><br />
<li class="dnld"><br />
<big>[[Download]]</big><br />
Get official releases or pre-release versions.<br />
<ul><br />
<li>[http://www.sun.com/software/products/lustre/get.jsp Official Releases]</li><br />
<li>[[Open_CVS|Pre-Release Versions]]</li><br />
</ul><br />
</li><br />
<br />
<li class="lrn"><br />
<big>[[Learn]]</big><br />
Find out about Lustre releases and publications.<br />
<ul><br />
<li>[[Learn|About Lustre Releases]]</li><br />
<li>[[Learn|Publications]]</li><br />
</ul><br />
</li><br />
<br />
<li class="use"><br />
<big>[[Use]]</big><br />
User resources to make Lustre perform at its best.<br />
<ul><br />
<li>[[Use|Install and Configure]]</li><br />
<li>[[Use|Administer]]</li><br />
</ul><br />
</li><br />
<br />
<li class="contribute"><br />
<big>[[Contribute]]</big><br />
Developer resources and tools to contribute to Lustre.<br />
<ul><br />
<li>[[Contribute|Code]]</li><br />
<li>[[Contribute|Test]]</li><br />
</ul><br />
</li><br />
<br />
<li class="participate"><br />
<big>[[Get Involved]]</big><br />
Find groups and community developer projects.<br />
<ul><br />
<li>[[Community]]</li><br />
<li>[[Lustre Centres of Excellence]]</li><br />
</ul><br />
</li><br />
</ul><br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=GetInvolved:Get_Involved&diff=5568GetInvolved:Get Involved2009-03-26T19:14:11Z<p>Kjpriola: </p>
<hr />
<div>Find out what the Lustre community is doing, and get involved.<br />
<br />
<div class="categoryLeft"><br />
<br />
=====Community===== <br />
<br />
Get connected through Lustre community events.<br />
<br />
* Sign up for one of the [[Mailing_Lists|Lustre Mailing Lists]]<br />
* [[Lustre_User_Group|Lustre User Group 2009]]<br />
** [[Lug_08|Lustre User Group 2008]]<br />
** [[Lug_07|Lustre User Group 2007]]<br />
** [[Lug_06|Lustre User Group 2006]]<br />
* Developers, find out how to [[Contribution_Policy|contribute code or test]].<br />
* Read case studies and other [[Lustre_Publications|Lustre publications]], including Lustre engineering presentations.<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
=====Community Development Projects===== <br />
Interesting projects from the community available for you to use.<br />
<br />
* [http://wiki.lustre.org/images/d/d9/Lustre-amanda.pdf Backup and Recovery: Amanda and Lustre] (PDF)<br />
* [http://sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool] <br />
* [http://www.bullopensource.org/lustre/ Bull - Open Source Tools for Lustre] <br />
* [http://shine-wiki.async.eu/wiki/Home CEA Administration Tool for Lustre 1.6] <br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Lustre Centres of Excellence===== <br />
Find out about active LCEs.<br />
<br />
* [http://ornl-lce.clusterfs.com/index.php?title=Main_Page Oak Ridge National Laboratory - Lustre Centre of Excellence]<br />
* [http://cea-lce.clusterfs.com/index.php?title=Main_Page CEA Lustre Centre of Excellence]<br />
** [[LCE_CEA]]<br />
* [http://llnl-lce.clusterfs.com/index.php?title=Main_Page LLNL Lustre Centre of Excellence]<br />
* [http://psc-lce.clusterfs.com/index.php?title=Main_Page Pittsburgh Supercomputing Center Lustre Centre of Excellence]<br />
* [http://tsinghua-lce.clusterfs.com/index.php?title=Main_Page Tsinghua University Lustre Centre of Excellence]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Contribute:Contribute&diff=5566Contribute:Contribute2009-03-26T19:13:15Z<p>Kjpriola: </p>
<hr />
<div>Developers can help set a new direction for Lustre by contributing code or by testing pre-release versions. <br />
<br />
<div class="categoryLeft"><br />
<br />
=====Code=====<br />
Find what you need to contribute code.<br />
<br />
* Join the [[Mailing_Lists|Lustre Development Mailing List]] for Developers<br />
* Read the [[Contribution_Policy|Contribution Policy]] before testing or submitting code.<br />
* Use [[Open_CVS|Open CVS]] to download pre-release versions of Lustre for coding or test.<br />
* [[Coding_Guidelines|Coding Guidelines]] help Developers avoid problems during Lustre code merges.<br />
* [[Documenting_Code|Documenting Code]] using Doxygen.<br />
<br />
</div><br />
<br />
<div class="categoryRight"><br />
<br />
=====Developer Resources=====<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
* Lustre Design Documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* [[ZFS_Resources|ZFS Resources]]<br />
* [[Lustre_OSS/MDS_with_ZFS_DMU|Lustre OSS/MDS with ZFS DMU]]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Test=====<br />
Read helpful information for testing Lustre.<br />
<br />
* [[Acceptance_Small_(acc-sm)_Testing_on_Lustre|Acceptance Small Testing]]<br />
* [[Testing_Framework|Testing Framework]]<br />
* [[Buffalizing_Tests|Buffalizing Tests]]<br />
* [[Lustre_Test_Plans|Test Plans]]<br />
* [[POSIX_Testing|Posix Testing]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Learn:Learn&diff=5564Learn:Learn2009-03-26T19:07:52Z<p>Kjpriola: </p>
<hr />
<div>Lustre is hardware-agnostic and open source.<br />
<br />
<div class="categoryLeft"><br />
<br />
=====Releases & Roadmap=====<br />
<br />
* [[Lustre_Support_Matrix|Lustre Support Matrix]]<br />
* [[Lustre_Roadmap|Roadmap]]<br />
** [[Metadata_Clustering|Metadata Clustering]]<br />
** [[Windows_Native_Client|Windows Native Client]]<br />
** [[Windows_Native_Client_Build_Guide|Windows Native Client Build Guide]]<br />
** [[Windows_Native_Client_Questions|Windows Native Client Questions]]<br />
* [[Lustre_1.8|Lustre 1.8]]<br />
* [[Lustre_2.0|Lustre 2.0]]<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
=====Publications=====<br />
<br />
* [[Publications|See white papers, case studies, engineering presentations]]<br />
** [[Lustre_Documentation|Lustre Documentation]]<br />
** [[Lustre_Launch|Lustre Launch 3/08]]<br />
** [[Lustre Launch 12/08]]<br />
* [http://manual.lustre.org/index.php?title=Main_Page#Lustre_Operations_Manual Operations Manual]<br />
* [http://arch.lustre.org/index.php?title=Main_Page Architecture]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Training=====<br />
Lustre training is available from Sun Microsystems.<br />
<br />
* [http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-100)]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=GetInvolved:Get_Involved&diff=5563GetInvolved:Get Involved2009-03-26T19:03:39Z<p>Kjpriola: </p>
<hr />
<div>Find out what the Lustre community is doing, and get involved.<br />
<br />
<div class="categoryLeft"><br />
<br />
=====Community===== <br />
<br />
Get connected through Lustre community events.<br />
<br />
* Sign up for one of the [[Mailing_Lists|Lustre Mailing Lists]]<br />
* [[Lustre_User_Group|Lustre User Group 2009]]<br />
** [[Lug_08|Lustre User Group 2008]]<br />
** [[Lug_07|Lustre User Group 2007]]<br />
** [[Lug_06|Lustre User Group 2006]]<br />
* Developers, find out how to [[Contribution_Policy|contribute code or test]].<br />
* Read case studies and other [[Lustre_Publications|Lustre publications]], including Lustre engineering presentations.<br />
<br />
</div><br />
<div class="categoryRight"><br />
<br />
=====Community Development Projects===== <br />
Interesting projects from the community available for you to use.<br />
<br />
* [http://wiki.lustre.org/images/d/d9/Lustre-amanda.pdf Backup and Recovery: Amanda and Lustre] (PDF)<br />
* [http://sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool] <br />
* [http://www.bullopensource.org/lustre/ Bull - Open Source Tools for Lustre] <br />
* [http://shine-wiki.async.eu/wiki/Home CEA Administration Tool for Lustre 1.6] <br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Lustre Centres of Excellence===== <br />
Find out about active LCEs.<br />
<br />
* [http://ornl-lce.clusterfs.com/index.php?title=Main_Page Oak Ridge National Laboratory - Lustre Centre of Excellence]<br />
* [http://cea-lce.clusterfs.com/index.php?title=Main_Page CEA Lustre Centre of Excellence]<br />
** [[LCE_CEA]]<br />
* [http://llnl-lce.clusterfs.com/index.php?title=Main_Page LLNL Lustre Centre of Excellence]<br />
* [http://psc-lce.clusterfs.com/index.php?title=Main_Page Pittsburgh Supercomputing Center Lustre Centre of Excellence]<br />
* [http://tsinghua-lce.clusterfs.com/index.php?title=Main_Page Tsinghua University Lustre Centre of Excellence]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Contribute:Contribute&diff=5561Contribute:Contribute2009-03-26T18:23:30Z<p>Kjpriola: </p>
<hr />
<div>Developers can help set a new direction for Lustre by contributing code or by testing pre-release versions. <br />
<br />
<div class="categoryLeft"><br />
<br />
=====Code=====<br />
Find what you need to contribute code.<br />
<br />
* Join the [[Mailing_Lists|Lustre Development Mailing List]] for Developers<br />
* Read the [[Contribution_Policy|Contribution Policy]] before testing or submitting code.<br />
* Use [[Open_CVS|Open CVS]] to download pre-release versions of Lustre for coding or test.<br />
* [[Coding_Guidelines|Coding Guidelines]] help Developers avoid problems during Lustre code merges.<br />
* [[Documenting_Code|Documenting Code]] using Doxygen.<br />
<br />
</div><br />
<br />
<div class="categoryRight"><br />
<br />
=====Developer Resources=====<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
* Lustre Design Documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* [[ZFS_Resources|ZFS Resources]]<br />
* [[Lustre_OSS/MDS_with_ZFS_DMU|Lustre OSS/MDS with ZFS DMU]]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Test=====<br />
Read helpful information for testing Lustre.<br />
<br />
* [[Acceptance_Small_(acc-sm)_Testing_on_Lustre|Acceptance Small Testing]]<br />
* [[Testing_Framework|Testing Framework]]<br />
* [[Buffalizing_Tests|Buffalizing Tests]]<br />
* [[Lustre_Test_Plans|Test Plans]]<br />
* [[POSIX_Testing|Posix Testing]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Contribute:Contribute&diff=5558Contribute:Contribute2009-03-26T18:17:32Z<p>Kjpriola: </p>
<hr />
<div>Developers can help set a new direction for Lustre by contributing code or by testing pre-release versions. <br />
<br />
<div class="categoryLeft"><br />
<br />
=====Code=====<br />
Find what you need to contribute code.<br />
<br />
* Join the [[Mailing_Lists|Lustre Development Mailing List]] for Developers<br />
* Read the [[Contribution_Policy|Contribution Policy]] before testing or submitting code.<br />
* Use [[Open_CVS|Open CVS]] to download pre-release versions of Lustre for coding or test.<br />
* [[Coding_Guidelines|Coding Guidelines]] help Developers avoid problems during Lustre code merges.<br />
* [[Documenting_Code|Documenting Code]] using Doxygen.<br />
<br />
</div><br />
<br />
<div class="categoryRight"><br />
<br />
=====Developer Resources=====<br />
* [http://arch.lustre.org/index.php?title=Main_Page Lustre Architecture]<br />
* Lustre Design Documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* [[ZFS_Resources|ZFS Resources]]<br />
* [[Lustre_OSS/MDS_with_ZFS_DMU|Lustre OSS/MDS with ZFS DMU]]<br />
* [[Lustre_FAQ|FAQ]]<br />
<br />
</div><br />
<div class="categoryLeft"><br />
<br />
=====Test=====<br />
Read helpful information for testing Lustre.<br />
<br />
* [[Acceptance_Small_(acc-sm)_Testing_on_Lustre|Acceptance Small Testing]]<br />
<br />
</div></div>Kjpriolahttp://wiki.old.lustre.org/index.php?title=Main_Page&diff=5500Main Page2009-03-24T15:58:30Z<p>Kjpriola: </p>
<hr />
<div>== What Is Lustre? ==<br />
<br />
Lustre is a scalable, secure, robust, highly-available cluster file system. It is designed, developed and maintained by Sun Microsystems, Inc.<br />
<br />
The central goal is the development of a next-generation cluster file system that can serve clusters with tens of thousands of nodes, provide petabytes of storage, and move hundreds of gigabytes per second, with state-of-the-art security and management infrastructure.<br />
<br />
Lustre runs on many of the largest Linux clusters in the world and is included by Sun's partners as a core component of their cluster offerings (examples include HP StorageWorks SFS and the Cray XT3/4/5 supercomputers). Today's users have also demonstrated that Lustre scales down as well as it scales up, running in production on clusters as small as 4 nodes and as large as 32,000 nodes with 200,000 processes.<br />
<br />
The latest version of Lustre is always available from Sun Microsystems, Inc. Public Open Source releases of Lustre are available under the GNU General Public License. These releases are found here, and are used in production supercomputing environments worldwide.<br />
<br />
To be informed of Lustre releases, subscribe to the [http://wiki.lustre.org/index.php?title=Mailing_Lists lustre-announce] mailing list.<br />
<br />
Lustre development would not have been possible without funding and guidance from many organizations, including several U.S. National Laboratories, early adopters, and product partners.<br />
<br />
[http://www.sun.com/software/products/lustre/get.jsp '''Download Lustre Now!''']<br />
<br />
== Lustre User Group == <br />
* [http://wiki.lustre.org/index.php?title=Lustre_User_Group Lustre User Group 2009]<br />
<br />
== Releases ==<br />
<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Roadmap Lustre Roadmap]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_1.8 Lustre 1.8]<br />
* Lustre 2.0 - coming soon!<br />
<br />
== User Resources == <br />
<br />
* [http://www.sun.com/software/products/lustre/get.jsp Lustre Downloads]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Quick_Start Lustre Quick Start]<br />
* [http://wiki.lustre.org/index.php?title=Mailing_Lists Mailing Lists]<br />
* [http://manual.lustre.org/index.php?title=Main_Page Lustre Documentation]<br />
* [http://wiki.lustre.org/index.php?title=Bug_Filing Filing Bugs]<br />
* [https://bugzilla.lustre.org/showdependencytree.cgi?id=2374 Lustre Knowledge Base]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_FAQ Lustre FAQ]<br />
<br />
== Advanced User Resources == <br />
<br />
*[http://wiki.lustre.org/index.php?title=BuildLustre How to build Lustre]<br />
* [http://wiki.lustre.org/index.php?title=Kerb_Lustre Kerberos]<br />
* [http://wiki.lustre.org/index.php?title=LustreTuning Lustre Tuning]<br />
* [http://manual.lustre.org/manual/LustreManual16_HTML/LustreProc.html#50446383_pgfId-5529 LustreProc] - The Lustre manual chapter on proc tunable parameters for Lustre and their usage. It describes several of the proc tunables, including those that affect the client's RPC behavior, and notes a planned reorganization of the proc entries.<br />
* [http://wiki.lustre.org/index.php?title=LibLustre_HowTo Liblustre HowTo]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Publications Lustre Publications] - Papers and presentations about Lustre<br />
<br />
== Developer Resources ==<br />
* [http://arch.lustre.org Lustre Architecture]<br />
* [http://wiki.lustre.org/index.php?title=Contribution_Policy Contribution Policy]<br />
* [http://lists.lustre.org/mailman/listinfo Developer Mailing List]<br />
* CVS usage<br />
** [http://wiki.lustre.org/index.php?title=Open_CVS CVS access to Lustre Source]<br />
** [http://wiki.lustre.org/index.php?title=Cvs_Branches CVS Branches] - How to manage branches with CVS.<br />
** [http://wiki.lustre.org/index.php?title=Cvs_Tips CVS Tips] - Helpful things to know while using Lustre CVS.<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Debugging Debugging Lustre] - A guide to debugging Lustre.<br />
* [http://wiki.lustre.org/index.php?title=Acceptance_Small_%28acc-sm%29_Testing_on_Lustre Acceptance Small Testing] - Using the acceptance small (acc-sm) test suite to test Lustre.<br />
* [http://wiki.lustre.org/index.php?title=ZFS_Resources ZFS Resources] - Learn about ZFS.<br />
* [http://wiki.lustre.org/index.php?title=Coding_Guidelines Coding Guidelines] - Developer guidelines to avoid problems during Lustre code merges.<br />
* [http://wiki.lustre.org/index.php?title=Documenting_Code Documenting Code with Doxygen]<br />
* Lustre design documents<br />
** [http://arch.lustre.org/index.php?title=LustreHLDs High-Level Designs]<br />
** [http://arch.lustre.org/index.php?title=LustreDLDs Detailed-Level Designs]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Launch Lustre Launch]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_Internals Lustre Internals Course]<br />
* [http://wiki.lustre.org/apidoc/index.html Lustre Interface Documentation]<br />
* [http://wiki.lustre.org/images/a/a4/2005-04-security.pdf Lustre Security]<br />
<br />
== Lustre Development Projects ==<br />
<br />
* [http://wiki.lustre.org/index.php?title=IOPerformanceProject I/O Performance]<br />
* [http://wiki.lustre.org/index.php?title=Lustre_OSS/MDS_with_ZFS_DMU Lustre OSS/MDS with ZFS DMU]<br />
<br />
== Community Development Projects ==<br />
* [http://wiki.lustre.org/index.php?title=Networking_Development Networking Development]<br />
* [http://wiki.lustre.org/index.php?title=Diskless_Booting Diskless Booting]<br />
* [http://wiki.lustre.org/index.php?title=Drbd_And_Lustre DRBD and Lustre]<br />
* [http://www.bullopensource.org/lustre Bull- Open Source tools for Lustre]<br />
* [http://www.sourceforge.net/projects/lmt LLNL- Lustre Monitoring Tool]<br />
* [http://wiki.lustre.org/images/d/d9/Lustre-amanda.pdf Backup: Amanda and Lustre]<br />
* [http://lustre-shine.sourceforge.net CEA administration tool for Lustre 1.6]<br />
<br />
== Lustre Centres of Excellence™ ==<br />
<br />
* [http://ornl-lce.clusterfs.com/ ORNL]<br />
* [http://cea-lce.clusterfs.com/ CEA]<br />
* [http://llnl-lce.clusterfs.com/ LLNL]<br />
* [http://psc-lce.clusterfs.com/ PSC]<br />
* [http://tsinghua-lce.clusterfs.com/ Tsinghua]</div>Kjpriola