WARNING: This is the ''old'' Lustre wiki, and it is in the process of being retired. The information found here is likely to be out of date. Please search the new wiki for more up-to-date information.

GetInvolved:Get Involved

Find out what the Lustre® community is doing and get involved.


<big><strong>Community Events and Resources</strong></big>


* Sign up for a [[Lustre_Mailing_Lists|Lustre mailing list]].
* Access presentations from [[Lustre Community Events, Conferences and Meetings|Lustre community events, conferences and meetings]].
* Find out how to [[Contribute:Contribute|contribute code or help with testing]].
* Read [[Lustre_Publications|publications about Lustre]], including videos and podcasts, white papers, and blueprints.
* Learn about some typical [[Lustre Customers]].
* Learn more about [http://www.xyratex.com/products/hpc-big-data ClusterStor], Xyratex's family of integrated Lustre storage systems.

<big><strong>Lustre® Centers of Excellence</strong></big>
 
Lustre Centers of Excellence (LCEs) work with the community to make significant code and testing contributions to Lustre. In alphabetical order, these centers are:
 
* [http://www.cea.fr CEA], [http://www-hpc.cea.fr/ HPC at CEA] and [[Media:AurelienDegremont.pdf|Lustre HSM Project at CEA]]
* [http://www.fz-juelich.de/jsc/juropa/configuration Juelich and the JuRoPA Program]
* [https://computing.llnl.gov/LCdocs/ioguide/index.jsp?show=s7 Lawrence Livermore National Laboratory]
* [[Media:JamesHoffman.pdf|Naval Research Laboratory]]
* [[Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory|Oak Ridge National Laboratory]]
* [[Media:SNL_LUG_2009.pdf|Sandia National Laboratories]]


<big><strong>Community Development Projects</strong></big>

Interesting projects from the Lustre user community that are available for public use.


* [http://sourceforge.net/projects/lustre-shine/ CEA's Shine Administration Tool for Lustre] Set up and manage a Lustre filesystem on a cluster.
* [[Media:Lustre-amanda.pdf|Backup and Recovery - Amanda and Lustre]] (PDF) Use an Amanda client to back up a Lustre filesystem from Lustre clients.
* [http://sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool] Tracks activity of server nodes (MDS, OSS, and portals routers) for one or more Lustre filesystems.
* [http://www.bullopensource.org/lustre/ Bull - Open Source Tools for Lustre] Manage one or more Lustre filesystems from an administrative node.


<big><strong>Third Party Contributions</strong></big>

Topics contributed by the Lustre user community.


* [[Using Pacemaker with Lustre]] describes how to configure and use Pacemaker with Lustre failover (a minimal configuration sketch follows this list).
* [[DRBD and Lustre]] describes the Distributed Replicated Block Device used for building high-availability clusters.
* [[Lustre FUSE]] describes how to use Lustre with the FUSE file system.
* [[Lustre DDN Tuning]] describes how to configure DDN storage arrays for use with Lustre.
* [[Large-Scale Tuning for Cray XT]] describes network tuning parameters for request pools in a large cluster of Cray XT3 Catamount nodes.
* [[Debian Install]] describes how to build and install Lustre on a machine running Debian Linux.
* [[NFS vs. Lustre]] describes some of the history and rationale behind Lustre and NFS, and compares and contrasts them.
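
For the Pacemaker item above, the following is a minimal, untested sketch of what a Pacemaker-managed Lustre failover resource can look like, using the generic ocf:heartbeat:Filesystem resource agent in crm shell syntax. The node names (oss1, oss2), resource names, device path, mount point, and timeout values are placeholders, not taken from this page; see [[Using Pacemaker with Lustre]] for a complete, tested configuration.

<pre>
# Sketch only (placeholder names): register one OST's backing device as a
# Pacemaker filesystem resource so it is mounted on whichever OSS owns it.
primitive res-ost0 ocf:heartbeat:Filesystem \
    params device="/dev/mapper/ost0" directory="/mnt/lustre/ost0" fstype="lustre" \
    op monitor interval="120s" timeout="60s" \
    op start timeout="300s" op stop timeout="300s"

# Prefer oss1 for this OST, but allow failover to oss2.
location loc-ost0-oss1 res-ost0 100: oss1
location loc-ost0-oss2 res-ost0 50: oss2
</pre>

The point of putting the OST mount under a cluster resource manager is that only one server mounts the target at a time, which is the basic requirement for safe Lustre failover.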
