<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.old.lustre.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sbarthel</id>
	<title>Obsolete Lustre Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.old.lustre.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sbarthel"/>
	<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Special:Contributions/Sbarthel"/>
	<updated>2026-05-08T02:30:54Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.7</generator>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=File:821-2076-10.pdf&amp;diff=12278</id>
		<title>File:821-2076-10.pdf</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=File:821-2076-10.pdf&amp;diff=12278"/>
		<updated>2011-01-20T19:31:01Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: uploaded a new version of &amp;quot;File:821-2076-10.pdf&amp;quot;: Lustre 2.0 Operations Manual (Rev 2), released January 2011&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lustre 2.0 manual&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=File:821-2076-10.pdf&amp;diff=12277</id>
		<title>File:821-2076-10.pdf</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=File:821-2076-10.pdf&amp;diff=12277"/>
		<updated>2011-01-20T19:17:45Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: uploaded a new version of &amp;quot;File:821-2076-10.pdf&amp;quot;: Lustre 2.0 Operations Manual (Rev 2), released January 2011&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lustre 2.0 manual&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Accessing_Lustre_Code&amp;diff=12276</id>
		<title>Accessing Lustre Code</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Accessing_Lustre_Code&amp;diff=12276"/>
		<updated>2011-01-20T19:10:19Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Naming conventions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;NOTICE:&#039;&#039;&#039;&#039;&#039;  The transition from CVS to Git took place on Monday, December 14.  For more information about the transition, see the [[Git Transition Notice]]. For details about how to migrate to Git, see [[Migrating to Git]].&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
We welcome and encourage contributions to the development and testing of a more robust, feature-rich Lustre™. You can obtain the latest bleeding-edge Lustre source code via anonymous Git access.&lt;br /&gt;
&lt;br /&gt;
 git clone git://git.lustre.org/prime/lustre.git &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; For more information about using Git, including tutorials and guides to help you get started, see the [http://git-scm.com/documentation Git documentation] page. &#039;&#039;For descriptions of the commands you are most likely to need, see the Commands section  at the bottom of this page.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
See [[Contribute]] for more information about developing, testing, and submitting a patch to the Lustre code.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If you have questions or experience problems, send email to the [mailto:lustre-wiki-feedback@sun.com Admins].&lt;br /&gt;
&lt;br /&gt;
For more information about Git, see the [http://git-scm.com/ Git home]&lt;br /&gt;
&lt;br /&gt;
=== Naming conventions ===&lt;br /&gt;
&lt;br /&gt;
Stable development branches are named b&#039;&#039;{major}&#039;&#039;_&#039;&#039;{minor}&#039;&#039; (for example, b1_6 and b1_8). Even-numbered minor releases are considered stable releases.  Odd-numbered minor releases correspond to alpha and beta releases and will sometimes be given v&#039;&#039;{major}&#039;&#039;_&#039;&#039;{minor}&#039;&#039;_&#039;&#039;{patch}&#039;&#039; tags to provide a point of reference for internal and external testing.  &lt;br /&gt;
&lt;br /&gt;
A release branch is created for an official release to isolate it from further development and is named b_release_&#039;&#039;{major}&#039;&#039;_&#039;&#039;{minor}&#039;&#039;_&#039;&#039;{patch}&#039;&#039; (for example, b_release_1_8_0).  A final release gets a tag in the form v&#039;&#039;{major}&#039;&#039;_&#039;&#039;{minor}&#039;&#039;_&#039;&#039;{patch}&#039;&#039; (for example, v1_8_0 or v1_6_7_1).&lt;br /&gt;
&lt;br /&gt;
Work for the next upcoming version is done on the &#039;&#039;master&#039;&#039; branch.&lt;br /&gt;
&lt;br /&gt;
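For example, after the anonymous clone above, you can check out a stable branch or a release tag following these naming conventions (the branch and tag names below are illustrative; list the available tags with &#039;&#039;git tag -l&#039;&#039;):&lt;br /&gt;
&lt;br /&gt;
 git checkout -b b1_8 origin/b1_8   # track the 1.8 stable branch &lt;br /&gt;
 git checkout v1_8_0                # inspect the v1_8_0 release tag &lt;br /&gt;
&lt;br /&gt;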
The Lustre [[Subsystem Map]] describes each of the subsystems in the Lustre code.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12275</id>
		<title>Architecture Descriptions</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12275"/>
		<updated>2011-01-20T19:09:28Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The architecture descriptions listed below provide information about Lustre architecture and design and are intended to help users better understand the conceptual framework of the Lustre file system. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;These documents reflect the state of design of a Lustre feature at a particular point in time. They may contain information that is incomplete or obsolete and may not reflect the current architecture, features and functionality of Lustre.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Adaptive Timeouts - Use Cases|&#039;&#039;Adaptive Timeouts - Use Cases&#039;&#039; ]] (Network RPC timeouts based on server and network loading)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Backup|&#039;&#039;Backup&#039;&#039; ]] (File system backup)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Caching OSS|&#039;&#039;Caching OSS&#039;&#039; ]] (Caching on object storage servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs|&#039;&#039;Changelogs&#039;&#039; ]] (Per-server logs of data or metadata changes)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs 1.6|&#039;&#039;Changelogs 1.6&#039;&#039; ]] (Used to facilitate efficient replication of large Lustre 1.6 filesystems)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Client Cleanup|&#039;&#039;Client Cleanup&#039;&#039; ]] (Use cases, business drivers, models to consider, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Clustered Metadata|&#039;&#039;Clustered Metadata&#039;&#039; ]] (Clustered metadata server capability)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Commit on Share|&#039;&#039;Commit on Share&#039;&#039; ]] (Better recoverability in an environment where clients miss the reconnect window)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - CROW|&#039;&#039;CROW&#039;&#039; ]]  (Create On Write optimizes create performance by deferring OSS object creation) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - CTDB with Lustre|&#039;&#039;CTDB with Lustre&#039;&#039; ]] (Cluster implementation of the TDB database with Lustre provides a solution for Windows pCIFS)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Cuts|&#039;&#039;Cuts&#039;&#039; ]] (Technique for recovering file system metadata stored on file server clusters)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - DMU OSD|&#039;&#039;DMU OSD&#039;&#039; ]] (An implementation of the Object Storage Device API for a Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - End-to-end Checksumming|&#039;&#039;End-to-end Checksumming&#039;&#039; ]] (Lustre network checksumming)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Epochs|&#039;&#039;Epochs&#039;&#039; ]] (Used to merge distributed data and meta-data updates in a redundant cluster configuration)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - External File Locking|&#039;&#039;External File Locking&#039;&#039; ]] (File range lock and whole-file lock capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - FIDs on OST|&#039;&#039;FIDs on OST&#039;&#039; ]] (File identifiers used to identify objects on an object storage target)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Fileset|&#039;&#039;Fileset&#039;&#039; ]] (An efficient representation of a group of file identifiers (FIDs))&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Flash Cache|&#039;&#039;Flash Cache&#039;&#039; ]] (Very fast read-only flash storage)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Free Space Management|&#039;&#039;Free Space Management&#039;&#039; ]] (Managing free space for stripe allocation)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - GNS|&#039;&#039;GNS&#039;&#039; ]] (Global namespace for a distributed file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM|&#039;&#039;HSM&#039;&#039; ]] (Hierarchical storage management)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM and Cache|&#039;&#039;HSM and Cache&#039;&#039; ]] (Reuse of components by Lustre features that involve migration of file system objects)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM Migration|&#039;&#039;HSM Migration&#039;&#039; ]] (Use cases and high-level architecture for migrating files between Lustre and an HSM system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability fids zfs|&#039;&#039;Interoperability fids zfs&#039;&#039; ]] (Client, server, network, storage interoperability during migration to clusters based on file identifiers and the ZFS file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability 1.6 1.8 2.0|&#039;&#039;Interoperability 1.6 / 1.8 / 2.0&#039;&#039; ]] (Interoperability definitions and QAS summary)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - IO system|&#039;&#039;IO system&#039;&#039; ]] (Client IO and server I/O request handling)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Libcfs|&#039;&#039;Libcfs&#039;&#039; ]] (Portable runtime environment for process management and debugging support)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Llog over OSD|&#039;&#039;Llog over OSD&#039;&#039; ]] (Re-implement llog API to use OSD device as backend device)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - LRE Images|&#039;&#039;LRE Images&#039;&#039; ]] (Provide development and training Lustre software environments based on supported environments for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Lustre Logging API|&#039;&#039;Lustre Logging API&#039;&#039; ]] (Requirements and detailed description)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS striping format|&#039;&#039;MDS striping format&#039;&#039; ]] (Striping extended attributes, striping formats, striping APIs)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS-on-DMU|&#039;&#039;MDS-on-DMU&#039;&#039; ]] (Metadata server on the ZFS Data Management Unit - use cases, features and functional behavior)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Metadata API|&#039;&#039;Metadata API&#039;&#039; ]] (A set of methods used by the Lustre file system driver to access and manipulate metadata)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (1)|&#039;&#039;Migration (1)&#039;&#039; ]] (Overview of development path for migration capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (2)|&#039;&#039;Migration (2)&#039;&#039; ]] (Use cases, quality attribute scenarios, and implementation details)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI IO and NetCDF|&#039;&#039;MPI IO and NetCDF&#039;&#039; ]] (Message Passing Interface I/O and network Common Data Form libraries - Lustre ADIO driver improvements and internal optimization)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI LND|&#039;&#039;MPI LND&#039;&#039; ]] (Link to paper &#039;&#039;Lustre Networking over MPI&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Multiple Interfaces For LNET|&#039;&#039;Multiple Interfaces For LNET&#039;&#039; ]] (Use cases and configuration management for Lustre networking)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Network Request Scheduler|&#039;&#039;Network Request Scheduler&#039;&#039; ]] (Requirements for network request scheduler to manage incoming RPC requests on a server)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - New Metadata API|&#039;&#039;New Metadata API&#039;&#039; ]] (Proposal and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Open by fid|&#039;&#039;Open by fid&#039;&#039; ]] (Returns a file descriptor based on a file ID - implementation choices, design description, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - OSS-on-DMU|&#039;&#039;OSS-on-DMU&#039;&#039; ]] (Object storage server on the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - PAG|&#039;&#039;PAG&#039;&#039; ]] (Process Authentication Groups - Use of Linux keyring, setuid in Lustre, Kerberos credential, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Pools of targets|&#039;&#039;Pools of targets&#039;&#039; ]] (Use cases, command line definitions of pools of OSTs, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Profiling Tools for IO|&#039;&#039;Profiling Tools for IO&#039;&#039; ]] (Profiling system based on Ganglia)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Proxy Cache|&#039;&#039;Proxy Cache&#039;&#039; ]] (Caching and aggregation used to reduce load on backend server and provide better throughput and latency to clients)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration|&#039;&#039;Punch and Extent Migration&#039;&#039; ]] (Prototypes for &#039;&#039;punch&#039;&#039; and &#039;&#039;migrate&#039;&#039; functionality)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration Requirements|&#039;&#039;Punch and Extent Migration Requirements&#039;&#039;]] (Punch functionality use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Recovery Failures|&#039;&#039;Recovery Failures&#039;&#039; ]] (Recovery terminology, architectures, and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Request Redirection|&#039;&#039;Request Redirection&#039;&#039; ]] (Allows target OST to redirect client requests to other servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Scalable Pinger|&#039;&#039;Scalable Pinger&#039;&#039; ]] (Provides peer health information to Lustre clients and servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Security|&#039;&#039;Security&#039;&#039; ]] (Detailed description of the security architecture for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Server Network Striping|&#039;&#039;Server Network Striping&#039;&#039; ]] (Description of Lustre-level striping of file data over multiple object servers with redundancy)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simple Space Balance Migration|&#039;&#039;Simple Space Balance Migration&#039;&#039; ]] (A subset of full data migration  limited to migrating files that are not currently in use) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simplified Interoperation|&#039;&#039;Simplified Interoperation&#039;&#039; ]] (Controlled server shutdown simplifies inter-operation on server upgrades)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Space Manager|&#039;&#039;Space Manager&#039;&#039; ]] (Manages file system free space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Sub Tree Locks|&#039;&#039;Sub Tree Locks&#039;&#039; ]] (A lock on a directory that protects a namespace (or part of a namespace) rooted at that directory)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level Access|&#039;&#039;User Level Access&#039;&#039; ]] (LNET userspace API driver that exports the LNET API to userspace)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level OSS|&#039;&#039;User Level OSS&#039;&#039; ]] (Functionality related to Lustre client or OSS failure)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Userspace Servers|&#039;&#039;Userspace Servers&#039;&#039; ]] (Requirements for capability to run a Lustre server in user space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Version Based Recovery|&#039;&#039;Version Based Recovery&#039;&#039; ]] (A recovery mechanism allowing clients to recover outside of a strict order or later in time - requirements, use cases, quality attribute scenarios, implementation details) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wide Striping|&#039;&#039;Wide Striping&#039;&#039; ]] (Mechanism to encode striping information compactly to efficiently support striping of files across many devices)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wire Level Protocol|&#039;&#039;Wire Level Protocol&#039;&#039; ]] (Wire formats used by Lustre - messages, wire and record packet structures, recovery protocol)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Write Back Cache|&#039;&#039;Write Back Cache&#039;&#039; ]] (Allows client meta-data operations to be delayed and batched)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS for Lustre|&#039;&#039;ZFS for Lustre&#039;&#039; ]] (Architecture and requirements related to Lustre servers using the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS large dnodes|&#039;&#039;ZFS large dnodes&#039;&#039; ]] (Increased dnode size to allow more data in the inode)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS TinyZAP|&#039;&#039;ZFS TinyZAP&#039;&#039; ]] (A compact ZFS Attribute Processor format that allows arbitrary values to be stored)&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12274</id>
		<title>Architecture Descriptions</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12274"/>
		<updated>2011-01-20T19:08:44Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The architecture descriptions listed below provide information about Lustre architecture and design and are intended to help users better understand the conceptual framework of the Lustre file system. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;These documents reflect the state of design of a Lustre feature at a particular point in time. They may contain information that is incomplete or obsolete and may not reflect the current architecture, features and functionality of Lustre.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Adaptive Timeouts - Use Cases|&#039;&#039;Adaptive Timeouts - Use Cases&#039;&#039; ]] (Network RPC timeouts based on server and network loading)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Backup|&#039;&#039;Backup&#039;&#039; ]] (File system backup)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Caching OSS|&#039;&#039;Caching OSS&#039;&#039; ]] (Caching on object storage servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs|&#039;&#039;Changelogs&#039;&#039; ]] (Per-server logs of data or metadata changes)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs 1.6|&#039;&#039;Changelogs 1.6&#039;&#039; ]] (Used to facilitate efficient replication of large Lustre 1.6 filesystems)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Client Cleanup|&#039;&#039;Client Cleanup&#039;&#039; ]] (Use cases, business drivers, models to consider, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Clustered Metadata|&#039;&#039;Clustered Metadata&#039;&#039; ]] (Clustered metadata server capability)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Commit on Share|&#039;&#039;Commit on Share&#039;&#039; ]] (Better recoverability in an environment where clients miss the reconnect window)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - CROW|&#039;&#039;CROW&#039;&#039; ]]  (Create On Write optimizes create performance by deferring OSS object creation) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - CTDB with Lustre|&#039;&#039;CTDB with Lustre&#039;&#039; ]] (Cluster implementation of the TDB database with Lustre provides a solution for Windows pCIFS)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Cuts|&#039;&#039;Cuts&#039;&#039; ]] (Technique for recovering file system metadata stored on file server clusters)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - DMU OSD|&#039;&#039;DMU OSD&#039;&#039; ]] (An implementation of the Object Storage Device API for a Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - End-to-end Checksumming|&#039;&#039;End-to-end Checksumming&#039;&#039; ]] (Lustre network checksumming)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Epochs|&#039;&#039;Epochs&#039;&#039; ]] (Used to merge distributed data and meta-data updates in a redundant cluster configuration)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - External File Locking|&#039;&#039;External File Locking&#039;&#039; ]] (File range lock and whole-file lock capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - FIDs on OST|&#039;&#039;FIDs on OST&#039;&#039; ]] (File identifiers used to identify objects on an object storage target)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Fileset|&#039;&#039;Fileset&#039;&#039; ]] (An efficient representation of a group of file identifiers (FIDs))&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Flash Cache|&#039;&#039;Flash Cache&#039;&#039; ]] (Very fast read-only flash storage)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Free Space Management|&#039;&#039;Free Space Management&#039;&#039; ]] (Managing free space for stripe allocation)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - GNS|&#039;&#039;GNS&#039;&#039; ]] (Global namespace for a distributed file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM|&#039;&#039;HSM&#039;&#039; ]] (Hierarchical storage management)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM and Cache|&#039;&#039;HSM and Cache&#039;&#039; ]] (Reuse of components by Lustre features that involve migration of file system objects)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM Migration|&#039;&#039;HSM Migration&#039;&#039; ]] (Use cases and high-level architecture for migrating files between Lustre and an HSM system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability fids zfs|&#039;&#039;Interoperability fids zfs&#039;&#039; ]] (Client, server, network, storage interoperability during migration to clusters based on file identifiers and the ZFS file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability 1.6 1.8 2.0|&#039;&#039;Interoperability 1.6 / 1.8 / 2.0&#039;&#039; ]] (Interoperability definitions and QAS summary)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - IO system|&#039;&#039;IO system&#039;&#039; ]] (Client IO and server I/O request handling)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Libcfs|&#039;&#039;Libcfs&#039;&#039; ]] (Portable runtime environment for process management and debugging support)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Llog over OSD|&#039;&#039;Llog over OSD&#039;&#039; ]] (Re-implement llog API to use OSD device as backend device)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - LRE Images|&#039;&#039;LRE Images&#039;&#039; ]] (Provide development and training Lustre software environments based on supported environments for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Lustre Logging API|&#039;&#039;Lustre Logging API&#039;&#039; ]] (Requirements and detailed description)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS striping format|&#039;&#039;MDS striping format&#039;&#039; ]] (Striping extended attributes, striping formats, striping APIs)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS-on-DMU|&#039;&#039;MDS-on-DMU&#039;&#039; ]] (Metadata server on the ZFS Data Management Unit - use cases, features and functional behavior)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Metadata API|&#039;&#039;Metadata API&#039;&#039; ]] (A set of methods used by the Lustre file system driver to access and manipulate metadata)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (1)|&#039;&#039;Migration (1)&#039;&#039; ]] (Overview of development path for migration capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (2)|&#039;&#039;Migration (2)&#039;&#039; ]] (Use cases, quality attribute scenarios, and implementation details)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI IO and NetCDF|&#039;&#039;MPI IO and NetCDF&#039;&#039; ]] (Message Passing Interface I/O and network Common Data Form libraries - Lustre ADIO driver improvements and internal optimization)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI LND|&#039;&#039;MPI LND&#039;&#039; ]] (Link to paper &#039;&#039;Lustre Networking over MPI&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Multiple Interfaces For LNET|&#039;&#039;Multiple Interfaces For LNET&#039;&#039; ]] (Use cases and configuration management for Lustre networking)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Network Request Scheduler|&#039;&#039;Network Request Scheduler&#039;&#039; ]] (Requirements for network request scheduler to manage incoming RPC requests on a server)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - New Metadata API|&#039;&#039;New Metadata API&#039;&#039; ]] (Proposal and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Open by fid|&#039;&#039;Open by fid&#039;&#039; ]] (Returns a file descriptor based on a file ID - implementation choices, design description, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - OSS-on-DMU|&#039;&#039;OSS-on-DMU&#039;&#039; ]] (Object storage server on the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - PAG|&#039;&#039;PAG&#039;&#039; ]] (Process Authentication Groups - Use of Linux keyring, setuid in Lustre, Kerberos credential, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Pools of targets|&#039;&#039;Pools of targets&#039;&#039; ]] (Use cases, command line definitions of pools of OSTs, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Profiling Tools for IO|&#039;&#039;Profiling Tools for IO&#039;&#039; ]] (Profiling system based on Ganglia)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Proxy Cache|&#039;&#039;Proxy Cache&#039;&#039; ]] (Caching and aggregation used to reduce load on backend server and provide better throughput and latency to clients)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration|&#039;&#039;Punch and Extent Migration&#039;&#039; ]] (Prototypes for &#039;&#039;punch&#039;&#039; and &#039;&#039;migrate&#039;&#039; functionality)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration Requirements|&#039;&#039;Punch and Extent Migration Requirements&#039;&#039;]] (Punch functionality use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Recovery Failures|&#039;&#039;Recovery Failures&#039;&#039; ]] (Recovery terminology, architectures, and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Request Redirection|&#039;&#039;Request Redirection&#039;&#039; ]] (Allows target OST to redirect client requests to other servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Scalable Pinger|&#039;&#039;Scalable Pinger&#039;&#039; ]] (Provides peer health information to Lustre clients and servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Security|&#039;&#039;Security&#039;&#039; ]] (Detailed description of the security architecture for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Server Network Striping|&#039;&#039;Server Network Striping&#039;&#039; ]] (Description of Lustre-level striping of file data over multiple object servers with redundancy)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simple Space Balance Migration|&#039;&#039;Simple Space Balance Migration&#039;&#039; ]] (A subset of full data migration  limited to migrating files that are not currently in use) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simplified Interoperation|&#039;&#039;Simplified Interoperation&#039;&#039; ]] (Controlled server shutdown simplifies inter-operation on server upgrades)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Space Manager|&#039;&#039;Space Manager&#039;&#039; ]] (Manages file system free space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Sub Tree Locks|&#039;&#039;Sub Tree Locks&#039;&#039; ]] (A lock on a directory that protects a namespace (or part of a namespace) rooted at that directory)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level Access|&#039;&#039;User Level Access&#039;&#039; ]] (LNET userspace API driver that exports the LNET API to userspace)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level OSS|&#039;&#039;User Level OSS&#039;&#039; ]] (Functionality related to Lustre client or OSS failure)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Userspace Servers|&#039;&#039;Userspace Servers&#039;&#039; ]] (Requirements for capability to run a Lustre server in user space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Version Based Recovery|&#039;&#039;Version Based Recovery&#039;&#039; ]] (A recovery mechanism allowing clients to recover outside of a strict order or later in time - requirements, use cases, quality attribute scenarios, implementation details) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wide Striping|&#039;&#039;Wide Striping&#039;&#039; ]] (Mechanism to encode striping information compactly to efficiently support striping of files across many devices)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wire Level Protocol|&#039;&#039;Wire Level Protocol&#039;&#039; ]] (Wire formats used by Lustre - messages, wire and record packet structures, recovery protocol)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Write Back Cache|&#039;&#039;Write Back Cache&#039;&#039; ]] (Allows client meta-data operations to be delayed and batched)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS for Lustre|&#039;&#039;ZFS for Lustre&#039;&#039; ]] (Architecture and requirements related to Lustre servers using the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS large dnodes|&#039;&#039;ZFS large dnodes&#039;&#039; ]] (Increased dnode size to allow more data in the inode)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS TinyZAP|&#039;&#039;ZFS TinyZAP&#039;&#039; ]] (A compact ZFS Attribute Processor format that allows arbitrary values to be stored)&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12273</id>
		<title>Architecture Descriptions</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Architecture_Descriptions&amp;diff=12273"/>
		<updated>2011-01-20T19:07:56Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The architecture descriptions listed below provide information about Lustre architecture and design and are intended to help users better understand the conceptual framework of the Lustre file system. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;These documents reflect the state of design of a Lustre feature at a particular point in time. They may contain information that is incomplete or obsolete and may not reflect the current architecture, features and functionality of Lustre.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Adaptive Timeouts - Use Cases|&#039;&#039;Adaptive Timeouts - Use Cases&#039;&#039; ]] (Network RPC timeouts based on server and network loading)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Backup|&#039;&#039;Backup&#039;&#039; ]] (File system backup)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Caching OSS|&#039;&#039;Caching OSS&#039;&#039; ]] (Caching on object storage servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs|&#039;&#039;Changelogs&#039;&#039; ]] (Per-server logs of data or metadata changes)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Changelogs 1.6|&#039;&#039;Changelogs 1.6&#039;&#039; ]] (Used to facilitate efficient replication of large Lustre 1.6 filesystems)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Client Cleanup|&#039;&#039;Client Cleanup&#039;&#039; ]] (Use cases, business drivers, models to consider, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Clustered Metadata|&#039;&#039;Clustered Metadata&#039;&#039; ]] (Clustered metadata server capability)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Commit on Share|&#039;&#039;Commit on Share&#039;&#039; ]] (Better recoverability in an environment where clients miss the reconnect window)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - CROW|&#039;&#039;CROW&#039;&#039; ]]  (Create On Write optimizes create performance by deferring OSS object creation) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - CTDB with Lustre|&#039;&#039;CTDB with Lustre&#039;&#039; ]] (Cluster implementation of the TDB database with Lustre provides a solution for Windows pCIFS)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Cuts|&#039;&#039;Cuts&#039;&#039; ]] (Technique for recovering file system metadata stored on file server clusters)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - DMU OSD|&#039;&#039;DMU OSD&#039;&#039; ]] (An implementation of the Object Storage Device API for a Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - End-to-end Checksumming|&#039;&#039;End-to-end Checksumming&#039;&#039; ]] (Lustre network checksumming)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Epochs|&#039;&#039;Epochs&#039;&#039; ]] (Used to merge distributed data and meta-data updates in a redundant cluster configuration)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - External File Locking|&#039;&#039;External File Locking&#039;&#039; ]] (File range lock and whole-file lock capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - FIDs on OST|&#039;&#039;FIDs on OST&#039;&#039; ]] (File identifiers used to identify objects on an object storage target)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Fileset|&#039;&#039;Fileset&#039;&#039; ]] (An efficient representation of a group of file identifiers (FIDs))&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Flash Cache|&#039;&#039;Flash Cache&#039;&#039; ]] (Very fast read-only flash storage)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Free Space Management|&#039;&#039;Free Space Management&#039;&#039; ]] (Managing free space for stripe allocation)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - GNS|&#039;&#039;GNS&#039;&#039; ]] (Global namespace for a distributed file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM|&#039;&#039;HSM&#039;&#039; ]] (Hierarchical storage management)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM and Cache|&#039;&#039;HSM and Cache&#039;&#039; ]] (Reuse of components by Lustre features that involve migration of file system objects)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - HSM Migration|&#039;&#039;HSM Migration&#039;&#039; ]] (Use cases and high-level architecture for migrating files between Lustre and an HSM system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability fids zfs|&#039;&#039;Interoperability fids zfs&#039;&#039; ]] (Client, server, network, storage interoperability during migration to clusters based on file identifiers and the ZFS file system)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Interoperability 1.6 1.8 2.0|&#039;&#039;Interoperability 1.6 1.8 2.0&#039;&#039; ]] (Interoperability definitions and QAS summary)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - IO system|&#039;&#039;IO system&#039;&#039; ]] (Client IO and server I/O request handling)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Libcfs|&#039;&#039;Libcfs&#039;&#039; ]] (Portable runtime environment for process management and debugging support)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Llog over OSD|&#039;&#039;Llog over OSD&#039;&#039; ]] (Re-implement llog API to use OSD device as backend device)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - LRE Images|&#039;&#039;LRE Images&#039;&#039; ]] (Provide development and training Lustre software environments based on supported environments for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Lustre Logging API|&#039;&#039;Lustre Logging API&#039;&#039; ]] (Requirements and detailed description)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS striping format|&#039;&#039;MDS striping format&#039;&#039; ]] (Striping extended attributes, striping formats, striping APIs)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MDS-on-DMU|&#039;&#039;MDS-on-DMU&#039;&#039; ]] (Metadata server on the ZFS Data Management Unit - use cases, features and functional behavior)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Metadata API|&#039;&#039;Metadata API&#039;&#039; ]] (A set of methods used by the Lustre file system driver to access and manipulate metadata)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (1)|&#039;&#039;Migration (1)&#039;&#039; ]] (Overview of development path for migration capabilities)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Migration (2)|&#039;&#039;Migration (2)&#039;&#039; ]] (Use cases, quality attribute scenarios, and implementation details)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI IO and NetCDF|&#039;&#039;MPI IO and NetCDF&#039;&#039; ]] (Message Passing Interface I/O and network Common Data Form libraries - Lustre ADIO driver improvements and internal optimization)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - MPI LND|&#039;&#039;MPI LND&#039;&#039; ]] (Link to paper &#039;&#039;Lustre Networking over MPI&#039;&#039;)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Multiple Interfaces For LNET|&#039;&#039;Multiple Interfaces For LNET&#039;&#039; ]] (Use cases and configuration management for Lustre networking)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Network Request Scheduler|&#039;&#039;Network Request Scheduler&#039;&#039; ]] (Requirements for network request scheduler to manage incoming RPC requests on a server)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - New Metadata API|&#039;&#039;New Metadata API&#039;&#039; ]] (Proposal and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Open by fid|&#039;&#039;Open by fid&#039;&#039; ]] (Returns a file descriptor based on a file ID - implementation choices, design description, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - OSS-on-DMU|&#039;&#039;OSS-on-DMU&#039;&#039; ]] (Object storage server on the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - PAG|&#039;&#039;PAG&#039;&#039; ]] (Process Authentication Groups - Use of Linux keyring, setuid in Lustre, Kerberos credential, use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Pools of targets|&#039;&#039;Pools of targets&#039;&#039; ]] (Use cases, command line definitions of pools of OSTs, implementation constraints)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Profiling Tools for IO|&#039;&#039;Profiling Tools for IO&#039;&#039; ]] (Profiling system based on Ganglia)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Proxy Cache|&#039;&#039;Proxy Cache&#039;&#039; ]] (Caching and aggregation used to reduce load on backend server and provide better throughput and latency to clients)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration|&#039;&#039;Punch and Extent Migration&#039;&#039; ]] (Prototypes for &#039;&#039;punch&#039;&#039; and &#039;&#039;migrate&#039;&#039; functionality)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Punch and Extent Migration Requirements|&#039;&#039;Punch and Extent Migration Requirements&#039;&#039;]] (Punch functionality use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Recovery Failures|&#039;&#039;Recovery Failures&#039;&#039; ]] (Recovery terminology, architectures, and use cases)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Request Redirection|&#039;&#039;Request Redirection&#039;&#039; ]] (Allows target OST to redirect client requests to other servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Scalable Pinger|&#039;&#039;Scalable Pinger&#039;&#039; ]] (Provides peer health information to Lustre clients and servers)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Security|&#039;&#039;Security&#039;&#039; ]] (Detailed description of the security architecture for Lustre)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Server Network Striping|&#039;&#039;Server Network Striping&#039;&#039; ]] (Description of Lustre-level striping of file data over multiple object servers with redundancy)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simple Space Balance Migration|&#039;&#039;Simple Space Balance Migration&#039;&#039; ]] (A subset of full data migration  limited to migrating files that are not currently in use) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Simplified Interoperation|&#039;&#039;Simplified Interoperation&#039;&#039; ]] (Controlled server shutdown simplifies inter-operation on server upgrades)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Space Manager|&#039;&#039;Space Manager&#039;&#039; ]] (Manages file system free space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Sub Tree Locks|&#039;&#039;Sub Tree Locks&#039;&#039; ]] (A lock on a directory that protects a namespace (or part of a namespace) rooted at that directory)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level Access|&#039;&#039;User Level Access&#039;&#039; ]] (LNET userspace API driver that exports the LNET API to userspace)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - User Level OSS|&#039;&#039;User Level OSS&#039;&#039; ]] (Functionality related to Lustre client or OSS failure)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Userspace Servers|&#039;&#039;Userspace Servers&#039;&#039; ]] (Requirements for capability to run a Lustre server in user space)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Version Based Recovery|&#039;&#039;Version Based Recovery&#039;&#039; ]] (A recovery mechanism allowing clients to recover outside of a strict order or later in time - requirements, use cases, quality attribute scenarios, implementation details) &lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wide Striping|&#039;&#039;Wide Striping&#039;&#039; ]] (Mechanism to encode striping information compactly to efficiently support striping of files across many devices)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Wire Level Protocol|&#039;&#039;Wire Level Protocol&#039;&#039; ]] (Wire formats used by Lustre - messages, wire and record packet structures, recovery protocol)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - Write Back Cache|&#039;&#039;Write Back Cache&#039;&#039; ]] (Allows client meta-data operations to be delayed and batched)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS for Lustre|&#039;&#039;ZFS for Lustre&#039;&#039; ]] (Architecture and requirements related to Lustre servers using the ZFS Data Management Unit)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS large dnodes|&#039;&#039;ZFS large dnodes&#039;&#039; ]] (Increased dnode size to allow more data in the inode)&lt;br /&gt;
&lt;br /&gt;
[[Architecture - ZFS TinyZAP|&#039;&#039;ZFS TinyZAP&#039;&#039; ]] (A compact ZFS Attribute Processor format that allows arbitrary values to be stored)&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Building_and_Installing_Lustre_from_Source_Code&amp;diff=12272</id>
		<title>Building and Installing Lustre from Source Code</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Building_and_Installing_Lustre_from_Source_Code&amp;diff=12272"/>
		<updated>2011-01-20T18:59:21Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Feb 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
If you need to build a customized Lustre™ server kernel or are using a Linux kernel that has not been tested with the version of Lustre you are installing, you may need to build and install Lustre from source code. This involves several steps:&lt;br /&gt;
* Patching the core kernel&lt;br /&gt;
* Configuring the kernel to work with Lustre&lt;br /&gt;
* Creating Lustre and kernel RPMs from source code. &lt;br /&gt;
&lt;br /&gt;
Please note that the Lustre/kernel configurations available at the [http://www.oracle.com/technetwork/indexes/downloads/sun-az-index-095901.html#L Lustre download site] have been extensively tested and verified with Lustre. The recommended method for installing Lustre servers is to use these prebuilt binary packages (RPMs). For more information about this installation method, see [[Installing Lustre from Downloaded RPMs]].&lt;br /&gt;
&lt;br /&gt;
For a list of available RPMs for Lustre 1.8.x and the latest version of Lustre 1.6 (1.6.7.2), see the [[Lustre_Release_Information#Lustre_Support_Matrix|Lustre Support Matrix]].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Caution:&#039;&#039;&#039;&#039;&#039;  Lustre contains kernel modifications which interact with storage devices and may introduce security issues and data loss if not installed, configured and administered correctly. Before installing Lustre, be cautious and back up &#039;&#039;ALL&#039;&#039; data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; When using third-party network hardware with Lustre, the third-party&lt;br /&gt;
modules (typically, the drivers) must be linked against the Linux kernel. The LNET&lt;br /&gt;
modules in Lustre also need these references. To meet these requirements, a specific&lt;br /&gt;
process must be followed to install and recompile Lustre. See [http://wiki.lustre.org/manual/LustreManual20_HTML/InstallingLustrefrSourceCode.html#50438210_pgfId-1299622 Section 29.4: &#039;&#039;Installing Lustre with a Third-Party Network Stack&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;] for an example showing how to install Lustre 1.6.6 using the Myricom MX 1.2.7 driver. The same process can be used for other third party network stacks.&lt;br /&gt;
&lt;br /&gt;
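As a rough sketch, building against a third-party network stack means pointing the Lustre &#039;&#039;configure&#039;&#039; script at both the patched kernel and the vendor driver tree. The option name below (&#039;&#039;--with-mx&#039;&#039;, for the Myricom MX example) is an assumption that varies by stack and Lustre version; run &#039;&#039;./configure --help&#039;&#039; to see the options your version supports.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd &amp;lt;path to lustre source tree&amp;gt;&lt;br /&gt;
$ ./configure --with-linux=&amp;lt;path to patched kernel tree&amp;gt; \&lt;br /&gt;
              --with-mx=&amp;lt;path to MX driver tree&amp;gt;&lt;br /&gt;
$ make rpms&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;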
== Patching the Kernel ==&lt;br /&gt;
If you are using non-standard hardware, plan to apply a Lustre patch, or have&lt;br /&gt;
another reason not to use packaged Lustre binaries, you have to apply several Lustre&lt;br /&gt;
patches to the core kernel and run the Lustre configure script against the kernel.&lt;br /&gt;
&lt;br /&gt;
===Introducing the Quilt Utility===&lt;br /&gt;
To simplify the process of applying Lustre patches to the kernel, we recommend that you use the Quilt utility.&lt;br /&gt;
&lt;br /&gt;
Quilt manages a stack of patches on a single source tree. A series file lists the patch files and the order in which they are applied. Each patch is applied incrementally, on top of the base tree and all preceding patches. You can:&lt;br /&gt;
* Apply patches from the stack (&#039;&#039;quilt push&#039;&#039;) &lt;br /&gt;
* Remove patches from the stack (&#039;&#039;quilt pop&#039;&#039;)&lt;br /&gt;
* Query the contents of the series file (&#039;&#039;quilt series&#039;&#039;), the contents of the stack (&#039;&#039;quilt applied&#039;&#039;, &#039;&#039;quilt previous&#039;&#039;, &#039;&#039;quilt top&#039;&#039;), and the patches that are not applied at a particular moment (&#039;&#039;quilt next&#039;&#039;, &#039;&#039;quilt unapplied&#039;&#039;). &lt;br /&gt;
* Edit and refresh (&#039;&#039;update&#039;&#039;) patches, revert inadvertent changes, fork or clone patches, and show diffs of the work before and after changes.&lt;br /&gt;
&lt;br /&gt;
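A typical Quilt session combines these commands. The following sketch uses standard Quilt commands; the patches they operate on depend on the series file in use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ quilt series        # list every patch in the series file&lt;br /&gt;
$ quilt push          # apply the next unapplied patch&lt;br /&gt;
$ quilt applied       # show the patches applied so far&lt;br /&gt;
$ quilt top           # show the patch on top of the stack&lt;br /&gt;
$ quilt pop -a        # remove all applied patches again&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;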
A variety of Quilt packages (RPMs, SRPMs and tarballs) are available from various&lt;br /&gt;
sources. Use the most recent version you can find. Quilt depends on several other&lt;br /&gt;
utilities, e.g., the &#039;&#039;coreutils&#039;&#039; RPM available only in RedHat 9. For other RedHat&lt;br /&gt;
kernels, you have to get the required packages to successfully install Quilt.&lt;br /&gt;
If you cannot locate a Quilt package or fulfill its dependencies, you can build Quilt&lt;br /&gt;
from a tarball, available at the [http://savannah.nongnu.org/projects/quilt Quilt project website].&lt;br /&gt;
&lt;br /&gt;
For additional information on using Quilt, including its commands, see the [http://www.suse.de/~agruen/quilt.pdf Introduction to Quilt] and the [http://linux.die.net/man/1/quilt quilt(1) man page].&lt;br /&gt;
&lt;br /&gt;
===Get the Lustre Source and Unpatched Kernel===&lt;br /&gt;
The Lustre Engineering Team has targeted several Linux kernels for use with Lustre servers (MDS/OSS) and provides a series of patches for each one. The Lustre patches are maintained in the &#039;&#039;kernel_patches&#039;&#039; directory bundled with the Lustre source code.  &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; Each patch series has been tailored to a specific kernel version, and may or may not apply cleanly to other versions of the kernel.&lt;br /&gt;
&lt;br /&gt;
To obtain the Lustre source and unpatched kernel:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Verify that all of the Lustre installation requirements have been met.&#039;&#039;&lt;br /&gt;
For more information on these prerequisites, see [[Preparing to Install Lustre]].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;Download the Lustre source code.&#039;&#039; On the [http://www.oracle.com/technetwork/indexes/downloads/sun-az-index-095901.html#L Lustre download site], select a version of Lustre to download and then select &#039;&#039;Source&#039;&#039; as the platform.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Download the unpatched kernel.&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
For convenience, Sun maintains an archive of unpatched kernel sources at [http://downloads.lustre.org/public/kernels/ http://downloads.lustre.org/public/kernels/]. &lt;br /&gt;
&lt;br /&gt;
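For example (the archive layout and exact file name may differ):&lt;br /&gt;
&lt;br /&gt;
 $ wget http://downloads.lustre.org/public/kernels/linux-2.6.18.tar.bz2 &lt;br /&gt;
&lt;br /&gt;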
4. &#039;&#039;To save time later, download&#039;&#039; e2fsprogs &#039;&#039;now.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
The source code for Sun&#039;s Lustre-enabled &#039;&#039;e2fsprogs&#039;&#039; distribution can be found at [http://downloads.lustre.org/public/tools/e2fsprogs/ http://downloads.lustre.org/public/tools/e2fsprogs/]&lt;br /&gt;
&lt;br /&gt;
===Patch the Kernel===&lt;br /&gt;
This procedure describes how to use Quilt to apply the Lustre patches to the kernel.&lt;br /&gt;
To illustrate the steps in this procedure, a RHEL 5 kernel is patched for Lustre&lt;br /&gt;
1.6.5.1.&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Unpack the Lustre source and kernel to separate source trees.&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
: a. &#039;&#039;Unpack the Lustre source.&#039;&#039; For this procedure, we assume that the resulting source tree is in &#039;&#039;/tmp/lustre-1.6.5.1&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
: b. &#039;&#039;Unpack the kernel.&#039;&#039; For this procedure, we assume that the resulting source tree (also known as the destination tree) is in &#039;&#039;/tmp/kernels/linux-2.6.18&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
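For example, the two trees could be unpacked as follows (the tarball names are illustrative):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ mkdir -p /tmp/kernels&lt;br /&gt;
$ tar xzf lustre-1.6.5.1.tar.gz -C /tmp&lt;br /&gt;
$ tar xjf linux-2.6.18.tar.bz2 -C /tmp/kernels&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;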
2. Select a config file for your kernel, located in the &#039;&#039;kernel_configs&#039;&#039; directory&lt;br /&gt;
(&#039;&#039;lustre/kernel_patches/kernel_configs&#039;&#039;). The &#039;&#039;kernel_configs&#039;&#039; directory contains the &#039;&#039;.config&#039;&#039; files, which are named to indicate the kernel and architecture with which they are associated. For example, the configuration file for the 2.6.18 kernel shipped with RHEL 5 (suitable for i686 SMP systems) is &#039;&#039;kernel-2.6.18-2.6-rhel5-i686-smp.config&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
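Using the trees from Step 1, the chosen config file can then be copied into place as a starting point (a sketch; adjust the paths to your layout):&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cp /tmp/lustre-1.6.5.1/lustre/kernel_patches/kernel_configs/kernel-2.6.18-2.6-rhel5-i686-smp.config \&lt;br /&gt;
     /tmp/kernels/linux-2.6.18/.config&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;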
3. Select the &#039;&#039;series&#039;&#039; file for your kernel, located in the &#039;&#039;series&#039;&#039; directory&lt;br /&gt;
(&#039;&#039;lustre/kernel_patches/series&#039;&#039;). The &#039;&#039;series&#039;&#039; file contains the patches that need to be applied to the kernel.&lt;br /&gt;
&lt;br /&gt;
4. Set up the necessary symlinks between the kernel patches and the Lustre&lt;br /&gt;
source. This example assumes that the Lustre source files are unpacked under &#039;&#039;/tmp/lustre-1.6.5.1&#039;&#039; and you have chosen the &#039;&#039;2.6-rhel5.series&#039;&#039; file. Run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd /tmp/kernels/linux-2.6.18&lt;br /&gt;
$ rm -f patches series&lt;br /&gt;
$ ln -s /tmp/lustre-1.6.5.1/lustre/kernel_patches/series/2.6-rhel5.series ./series&lt;br /&gt;
$ ln -s /tmp/lustre-1.6.5.1/lustre/kernel_patches/patches .&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. Use Quilt to apply the patches in the selected &#039;&#039;series&#039;&#039; file to the unpatched&lt;br /&gt;
kernel. Run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd /tmp/kernels/linux-2.6.18&lt;br /&gt;
$ quilt push -av&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The patched destination tree acts as a base Linux source tree for Lustre.&lt;br /&gt;
&lt;br /&gt;
===Create and Install the Lustre Packages===&lt;br /&gt;
&lt;br /&gt;
After patching the kernel, configure it to work with Lustre, create the Lustre&lt;br /&gt;
packages (RPMs) and install them.&lt;br /&gt;
&lt;br /&gt;
1. Configure the patched kernel to run with Lustre. Run:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd &amp;lt;path to kernel tree&amp;gt;&lt;br /&gt;
$ cp /boot/config-`uname -r` .config&lt;br /&gt;
$ make oldconfig || make menuconfig&lt;br /&gt;
$ make include/asm&lt;br /&gt;
$ make include/linux/version.h&lt;br /&gt;
$ make SUBDIRS=scripts&lt;br /&gt;
$ make include/linux/utsrelease.h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
2. Run the Lustre configure script against the patched kernel and create the&lt;br /&gt;
Lustre packages.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
$ cd &amp;lt;path to lustre source tree&amp;gt;&lt;br /&gt;
$ ./configure --with-linux=&amp;lt;path to kernel tree&amp;gt;&lt;br /&gt;
$ make rpms&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This creates a set of &#039;&#039;.rpms&#039;&#039; in &#039;&#039;/usr/src/redhat/RPMS/&amp;lt;arch&amp;gt;&#039;&#039; with an&lt;br /&gt;
appended date-stamp. The SuSE path is &#039;&#039;/usr/src/packages&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; You do not need to run the Lustre configure script against an unpatched&lt;br /&gt;
kernel.&lt;br /&gt;
&lt;br /&gt;
Example set of RPMs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
lustre-1.6.5.1-\&lt;br /&gt;
2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm&lt;br /&gt;
lustre-debuginfo-1.6.5.1-\&lt;br /&gt;
2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm&lt;br /&gt;
lustre-modules-1.6.5.1-\&lt;br /&gt;
2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm&lt;br /&gt;
lustre-source-1.6.5.1-\&lt;br /&gt;
2.6.18_53.xx.xx.el5_lustre.1.6.5.1.custom_20081021.i686.rpm&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If the steps to create the RPMs fail, contact Lustre Support by reporting a bug (see [[Reporting Bugs]]).&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; Lustre supports several features and packages that extend its core functionality. These features/packages can be enabled at build time by passing appropriate arguments to the &#039;&#039;configure&#039;&#039; command. For a list of supported features and packages, run &#039;&#039;./configure --help&#039;&#039; in the Lustre source tree. The &#039;&#039;configs/&#039;&#039; directory of the kernel source contains the config files matching each kernel version. Copy one to &#039;&#039;.config&#039;&#039; at the root of the kernel tree.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Create the kernel package.&#039;&#039; Navigate to the kernel source directory and run:&lt;br /&gt;
&lt;br /&gt;
 $ make rpm&lt;br /&gt;
&lt;br /&gt;
Example result:&lt;br /&gt;
&lt;br /&gt;
 kernel-2.6.95.0.3.EL_lustre.1.6.5.1custom-1.i686.rpm&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; Step 3 is only valid for RedHat and SuSE kernels. If you are using a stock Linux kernel, you need to get a script to create the kernel RPM.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;Install the Lustre packages.&#039;&#039; Some Lustre packages are installed on servers (MDS and OSSs), and others are installed on Lustre clients. For guidance on where to install specific packages, see [[Lustre Packages]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039;  Running the patched server kernel on the clients is optional. It is not necessary unless the clients will be used for multiple purposes, for example, to run as a client and an OST.&lt;br /&gt;
&lt;br /&gt;
Lustre packages should be installed in this order:&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;Install the kernel, modules and ldiskfs packages.&#039;&#039; Navigate to the directory where the RPMs are stored, and use the &#039;&#039;rpm -ivh&#039;&#039; command to install the kernel, modules and &#039;&#039;ldiskfs&#039;&#039; packages.&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;pre&amp;gt;$ rpm -ivh kernel-lustre-smp-&amp;lt;ver&amp;gt; \&lt;br /&gt;
kernel-ib-&amp;lt;ver&amp;gt; \&lt;br /&gt;
lustre-modules-&amp;lt;ver&amp;gt; \&lt;br /&gt;
lustre-ldiskfs-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: b. &#039;&#039;Install the utilities/userspace packages.&#039;&#039; Use the &#039;&#039;rpm -ivh&#039;&#039; command to install the utilities packages. For example:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;pre&amp;gt;$ rpm -ivh lustre-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
: c. &#039;&#039;Install the&#039;&#039; e2fsprogs &#039;&#039;package.&#039;&#039; Make sure the &#039;&#039;e2fsprogs&#039;&#039; package downloaded earlier is unpacked, and use the &#039;&#039;rpm -i&#039;&#039; command to install it. For example:&lt;br /&gt;
&lt;br /&gt;
: &amp;lt;pre&amp;gt;$ rpm -i e2fsprogs-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you want to add any optional packages to your Lustre file system, install them&lt;br /&gt;
now.&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;Verify that the boot loader (&#039;&#039;&#039;grub.conf&#039;&#039;&#039; or &#039;&#039;&#039;lilo.conf&#039;&#039;&#039;) has been updated to load the patched kernel.&#039;&#039;&lt;br /&gt;
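&lt;br /&gt;
As a quick check, the entry for the patched kernel in a legacy &#039;&#039;grub.conf&#039;&#039; generally looks like the sketch below; the kernel version and root device shown here are placeholders, not values produced by this procedure. Also verify that the &#039;&#039;default&#039;&#039; line selects this entry.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
title Linux (Lustre patched kernel)&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /vmlinuz-&amp;lt;patched kernel version&amp;gt; ro root=/dev/sda1&lt;br /&gt;
        initrd /initrd-&amp;lt;patched kernel version&amp;gt;.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;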
&lt;br /&gt;
6. &#039;&#039;Reboot the patched clients and the servers.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
: a. &#039;&#039;If you applied the patched kernel to any clients, reboot them.&#039;&#039; Unpatched clients do not need to be rebooted.&lt;br /&gt;
: b. &#039;&#039;Reboot the servers.&#039;&#039; Once all the machines have rebooted, the next steps are to configure Lustre Networking (LNET) and the Lustre file system. See [[Configuring the Lustre File System]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Change_Log_2.0&amp;diff=12271</id>
		<title>Change Log 2.0</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Change_Log_2.0&amp;diff=12271"/>
		<updated>2011-01-20T18:58:01Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Aug 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Welcome to  Lustre 2.0.0 =&lt;br /&gt;
&lt;br /&gt;
This release represents a departure from the previous release trains, which were closely related to one another. As such, there is no previous release against which to show changes; future 2.x releases will show the changes from this or subsequent releases. You can find more details in the following sources:&lt;br /&gt;
&lt;br /&gt;
[http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre 2.0 Operations Manual&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[http://wiki.lustre.org/images/6/60/821-2077-10.pdf &#039;&#039;Lustre 2.0.0 Release Notes&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Support for networks:&#039;&#039;&#039;&lt;br /&gt;
* socklnd   - any kernel supported by Lustre,&lt;br /&gt;
* qswlnd    - Qsnet kernel modules 5.20 and later,&lt;br /&gt;
* openiblnd - IbGold 1.8.2,&lt;br /&gt;
* o2iblnd   - OFED 1.1, 1.2.0, 1.2.5, 1.3, and 1.4.1,&lt;br /&gt;
* viblnd    - Voltaire ibhost 3.4.5 and later,&lt;br /&gt;
* ciblnd    - Topspin 3.2.0,&lt;br /&gt;
* iiblnd    - Infiniserv 3.3 + PathBits patch,&lt;br /&gt;
* gmlnd     - GM 2.1.22 and later,&lt;br /&gt;
* mxlnd     - MX 1.2.10 or later,&lt;br /&gt;
* ptllnd    - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Server support for kernels:&#039;&#039;&#039;&lt;br /&gt;
* 2.6.18-164.11.1.el5 (RHEL 5)&lt;br /&gt;
* 2.6.18-164.11.1.0.1.el5 (OEL 5)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Client support for unpatched kernels:&#039;&#039;&#039; see [http://wiki.lustre.org/index.php?title=Patchless_Client &amp;quot;Patchless Client&amp;quot;]&lt;br /&gt;
* 2.6.18-164.11.1.el5 (RHEL 5),&lt;br /&gt;
* 2.6.18-164.11.1.0.1.el5 (OEL 5),&lt;br /&gt;
* 2.6.16.60-0.42.8 (SLES 10),&lt;br /&gt;
* 2.6.27.19-5 (SLES 11),&lt;br /&gt;
* 2.6.29.4-167.fc11 (FC11)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended e2fsprogs version:&#039;&#039;&#039;&lt;br /&gt;
* 1.41.10-sun2&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_InfiniBand_Connectivity&amp;diff=12270</id>
		<title>Configuring InfiniBand Connectivity</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_InfiniBand_Connectivity&amp;diff=12270"/>
		<updated>2011-01-20T18:57:04Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Configuring LNET as an OFED Infiniband Network */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Dec 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
To configure Infiniband connectivity, follow the procedures appropriate for your system below.&lt;br /&gt;
&lt;br /&gt;
== Configuring LNET as an OFED Infiniband Network ==&lt;br /&gt;
&lt;br /&gt;
To configure LNET as an OFED InfiniBand network, enter:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
options lnet networks=&amp;quot;o2ib3(ib3)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
The node specified is on &#039;&#039;o2ib&#039;&#039; network &#039;&#039;3&#039;&#039; using HCA &#039;&#039;ib3&#039;&#039;.&lt;br /&gt;
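&lt;br /&gt;
LNET can also be configured to use more than one network at a time; for instance, a node with both an InfiniBand and an Ethernet interface (the interface names here are illustrative) might use:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
options lnet networks=&amp;quot;o2ib0(ib0),tcp0(eth0)&amp;quot;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;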
&lt;br /&gt;
For information about other LNET options, see [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfigurationFilesModuleParameters.html#50438293_pgfId-1293321 35.2.1: &#039;&#039;LNET Options&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
== Building and Configuring Infiniband Support for Lustre™ ==&lt;br /&gt;
&lt;br /&gt;
The distributed kernels do not yet include third-party InfiniBand modules. As a result,&lt;br /&gt;
our Lustre™ packages cannot include InfiniBand network drivers for Lustre. However,&lt;br /&gt;
we do distribute the source code. You will need to build your InfiniBand software&lt;br /&gt;
stack against the supplied kernel and then build new Lustre packages.&lt;br /&gt;
&lt;br /&gt;
====Voltaire====&lt;br /&gt;
&lt;br /&gt;
To build Lustre with Voltaire Infiniband sources, add the following as an argument to the configure script:&lt;br /&gt;
 --with-vib=&amp;lt;path-to-voltaire-sources&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To configure Lustre, use: &lt;br /&gt;
 --nettype vib --nid &amp;lt;IPoIB address&amp;gt;&lt;br /&gt;
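&lt;br /&gt;
For example, a hypothetical build invocation combining the Voltaire option with the kernel path from the build procedure above might look like:&lt;br /&gt;
 $ ./configure --with-linux=&amp;lt;path to kernel tree&amp;gt; --with-vib=&amp;lt;path-to-voltaire-sources&amp;gt;&lt;br /&gt;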
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; In Lustre v1.4.5, the Voltaire IB module (&#039;&#039;kvibnal&#039;&#039;) does not work on Altix systems, due to hardware differences in the Altix system.&lt;br /&gt;
&lt;br /&gt;
====OpenIB generation 1 / Mellanox Gold====&lt;br /&gt;
&lt;br /&gt;
To build Lustre with OpenIB Infiniband sources, add the following as an argument to the configure script:&lt;br /&gt;
 --with-openib=&amp;lt;path_to_openib_sources&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To configure Lustre, use:&lt;br /&gt;
 --nettype openib --nid &amp;lt;IPoIB address&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====Silverstorm====&lt;br /&gt;
&lt;br /&gt;
A Silverstorm driver for Lustre is available.&lt;br /&gt;
&lt;br /&gt;
To build Silverstorm with Lustre, configure Lustre with:&lt;br /&gt;
 --with-iib=&amp;lt;path to silverstorm sources&amp;gt;&lt;br /&gt;
&lt;br /&gt;
====OpenIB 1.0====&lt;br /&gt;
An OpenIB 1.0 driver for Lustre is available.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_Lustre_File_Striping&amp;diff=12269</id>
		<title>Configuring Lustre File Striping</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_Lustre_File_Striping&amp;diff=12269"/>
		<updated>2011-01-20T18:56:13Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Oct 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
One of the main factors leading to the high performance of Lustre™ file systems is the ability to stripe data over multiple OSTs. The stripe count can be set on a file system, directory, or file level.  An example showing the use of striping is provided below. &lt;br /&gt;
&lt;br /&gt;
For additional information, see [http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html#50438209_pgfId-5529 Chapter 18: &#039;&#039;Managing File Striping and Free Space&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
== Setting Up Striping ==&lt;br /&gt;
&lt;br /&gt;
To see the current stripe size, use the command &#039;&#039;lfs getstripe [file, dir, fs]&#039;&#039;. This command will produce output similar to the following:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@LustreClient01 lustre]# lfs getstripe /mnt/lustre&lt;br /&gt;
OBDS:&lt;br /&gt;
0: lustre-OST0000_UUID ACTIVE&lt;br /&gt;
1: lustre-OST0001_UUID ACTIVE&lt;br /&gt;
2: lustre-OST0002_UUID ACTIVE&lt;br /&gt;
3: lustre-OST0003_UUID ACTIVE&lt;br /&gt;
4: lustre-OST0004_UUID ACTIVE&lt;br /&gt;
5: lustre-OST0005_UUID ACTIVE&lt;br /&gt;
/mnt/lustre&lt;br /&gt;
(Default) stripe_count: 2 stripe_size: 4M stripe_offset: 0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the default stripe count is 2 (that is, data blocks are striped over two OSTs), the default stripe size is 4 MB (the stripe size can be set in K, M or G), and all writes start from the first OST.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; When setting a stripe pattern, the stripe offset is set before the stripe count.&lt;br /&gt;
&lt;br /&gt;
The command to set a new stripe pattern on the file system may look like this:&lt;br /&gt;
&lt;br /&gt;
 [root@LustreClient01 lustre]# lfs setstripe -s 4M -c 1 -i 0 /mnt/lustre&lt;br /&gt;
&lt;br /&gt;
This example command sets the stripe pattern of &#039;&#039;/mnt/lustre&#039;&#039; to 4 MB blocks, starting at OST0 and spanning one OST. If a new file is created with these settings, the following results are seen:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@LustreClient01 lustre]# dd if=/dev/zero of=/mnt/lustre/test1 bs=10M count=100&lt;br /&gt;
&lt;br /&gt;
[root@LustreClient01 lustre]# lfs df -h&lt;br /&gt;
UUID                  bytes     Used  Available   Use%   Mounted on&lt;br /&gt;
lustre-MDT0000_UUID    4.4G   214.5M       3.9G     4%   /mnt/lustre[MDT:0]&lt;br /&gt;
lustre-OST0000_UUID    2.0G     1.1G     830.1M    53%   /mnt/lustre[OST:0]&lt;br /&gt;
lustre-OST0001_UUID    2.0G    83.3M       1.8G     4%   /mnt/lustre[OST:1]&lt;br /&gt;
lustre-OST0002_UUID    2.0G    83.3M       1.8G     4%   /mnt/lustre[OST:2]&lt;br /&gt;
lustre-OST0003_UUID    2.0G    83.3M       1.8G     4%   /mnt/lustre[OST:3]&lt;br /&gt;
lustre-OST0004_UUID    2.0G    83.3M       1.8G     4%   /mnt/lustre[OST:4]&lt;br /&gt;
lustre-OST0005_UUID    2.0G    83.3M       1.8G     4%   /mnt/lustre[OST:5]&lt;br /&gt;
&lt;br /&gt;
filesystem summary:   11.8G     1.5G       9.7G    12%   /mnt/lustre&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In this example, the entire file was written to the first OST with a very uneven distribution of data blocks.&lt;br /&gt;
&lt;br /&gt;
Continuing with this example, the file is removed and the stripe count is changed to a value of &#039;&#039;-1&#039;&#039; to specify striping over all available OSTs:&lt;br /&gt;
&lt;br /&gt;
 [root@LustreClient01 lustre]# lfs setstripe -s 4M -c -1 -i -1 /mnt/lustre&lt;br /&gt;
&lt;br /&gt;
Now, when a file is created, the new stripe setting evenly distributes the data over all the available OSTs:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@LustreClient01 lustre]# dd if=/dev/zero of=/mnt/lustre/test1 bs=10M count=100&lt;br /&gt;
100+0 records in&lt;br /&gt;
100+0 records out&lt;br /&gt;
1048576000 bytes (1.0 GB) copied, 20.2589 seconds, 51.8 MB/s&lt;br /&gt;
&lt;br /&gt;
[root@LustreClient01 lustre]# lfs df -h&lt;br /&gt;
UUID                  bytes     Used  Available   Use%   Mounted on&lt;br /&gt;
lustre-MDT0000_UUID    4.4G   214.5M       3.9G     4%  /mnt/lustre[MDT:0]&lt;br /&gt;
lustre-OST0000_UUID    2.0G   251.3M       1.6G    12%  /mnt/lustre[OST:0]&lt;br /&gt;
lustre-OST0001_UUID    2.0G   251.3M       1.6G    12%  /mnt/lustre[OST:1]&lt;br /&gt;
lustre-OST0002_UUID    2.0G   251.3M       1.6G    12%  /mnt/lustre[OST:2]&lt;br /&gt;
lustre-OST0003_UUID    2.0G   251.3M       1.6G    12%  /mnt/lustre[OST:3]&lt;br /&gt;
lustre-OST0004_UUID    2.0G   247.3M       1.6G    12%  /mnt/lustre[OST:4]&lt;br /&gt;
lustre-OST0005_UUID    2.0G   247.3M       1.6G    12%  /mnt/lustre[OST:5]&lt;br /&gt;
&lt;br /&gt;
filesystem summary:   11.8G     1.5G       9.7G    12%  /mnt/lustre&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
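&lt;br /&gt;
Striping can also be set at the directory level; new files created in that directory then inherit the layout. A hypothetical example (the directory name and values are illustrative):&lt;br /&gt;
&lt;br /&gt;
 [root@LustreClient01 lustre]# lfs setstripe -s 1M -c 4 /mnt/lustre/dir1&lt;br /&gt;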
&lt;br /&gt;
== Displaying Stripe Information for a File ==&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;lfs getstripe&#039;&#039; command can be used to display information that shows over which OSTs a file is distributed. For example, the output from the following command (showing multiple &#039;&#039;obdidx&#039;&#039; entries) indicates that the file &#039;&#039;test1&#039;&#039; is striped over all six active OSTs in the configuration:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@LustreClient01 ~]# lfs getstripe /mnt/lustre/test1&lt;br /&gt;
OBDS:&lt;br /&gt;
0: lustre-OST0000_UUID ACTIVE&lt;br /&gt;
1: lustre-OST0001_UUID ACTIVE&lt;br /&gt;
2: lustre-OST0002_UUID ACTIVE&lt;br /&gt;
3: lustre-OST0003_UUID ACTIVE&lt;br /&gt;
4: lustre-OST0004_UUID ACTIVE&lt;br /&gt;
5: lustre-OST0005_UUID ACTIVE&lt;br /&gt;
/mnt/lustre/test1&lt;br /&gt;
     obdidx      objid     objid      group&lt;br /&gt;
          0          8       0x8          0&lt;br /&gt;
          1          4       0x4          0&lt;br /&gt;
          2          5       0x5          0&lt;br /&gt;
          3          5       0x5          0&lt;br /&gt;
          4          4       0x4          0&lt;br /&gt;
          5          2       0x2          0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
In contrast, the output from the following command, which lists just a single &#039;&#039;obdidx&#039;&#039; entry, indicates that the file &#039;&#039;test_2&#039;&#039; is contained on a single OST:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@LustreClient01 ~]# lfs getstripe /mnt/lustre/test_2&lt;br /&gt;
OBDS:&lt;br /&gt;
0: lustre-OST0000_UUID ACTIVE&lt;br /&gt;
1: lustre-OST0001_UUID ACTIVE&lt;br /&gt;
2: lustre-OST0002_UUID ACTIVE&lt;br /&gt;
3: lustre-OST0003_UUID ACTIVE&lt;br /&gt;
4: lustre-OST0004_UUID ACTIVE&lt;br /&gt;
5: lustre-OST0005_UUID ACTIVE&lt;br /&gt;
/mnt/lustre/test_2&lt;br /&gt;
   obdidx      objid     objid      group&lt;br /&gt;
        2          8       0x8          0&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_Lustre_for_Failover&amp;diff=12268</id>
		<title>Configuring Lustre for Failover</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_Lustre_for_Failover&amp;diff=12268"/>
		<updated>2011-01-20T18:55:18Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Feb 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you plan to enable failover server functionality with Lustre™ (either on an OSS or MDS), you must add high-availability (HA) software to your cluster software. You can use any HA software package with Lustre.&lt;br /&gt;
&lt;br /&gt;
For more information about failover in a Lustre file system, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingFailover.html#50438253_pgfId-1304327‎ Chapter 3: &#039;&#039;Understanding Failover in Lustre&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
For a user-contributed guide to configuring Lustre failover with Red Hat Cluster Manager, see [[Using Red Hat Cluster Manager with Lustre]]. &lt;br /&gt;
&lt;br /&gt;
For a user-contributed guide to configuring Lustre failover with Pacemaker, see [[Using Pacemaker with Lustre]]. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; Using Red Hat Cluster Manager or Pacemaker with Lustre is not officially supported at this time.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12267</id>
		<title>Configuring the Lustre File System</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12267"/>
		<updated>2011-01-20T18:53:06Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
This page describes how to configure a simple Lustre™ file system comprised of a combined MGS/MDT, an OST and a client. The administrative utilities provided with Lustre, however, can be used to set up systems with many different configurations. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; We recommend that you use dotted-quad (dot-decimal) notation for IP&lt;br /&gt;
addresses (IPv4) rather than host names. This aids in reading debug logs and helps&lt;br /&gt;
when debugging configurations with multiple interfaces.&lt;br /&gt;
&lt;br /&gt;
== Configuring the Lustre File System ==&lt;br /&gt;
This section contains a procedure for configuring the Lustre File System. For an example showing configuration of a Lustre installation comprising a combined MGS/MDT, an OST and a client see the [[Lustre Configuration Example]]. &lt;br /&gt;
&lt;br /&gt;
To configure Lustre Networking (LNET) and the Lustre file&lt;br /&gt;
system, complete these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Define the module options for Lustre networking (LNET)&#039;&#039; by adding this line&lt;br /&gt;
to the &#039;&#039;/etc/modprobe.conf&#039;&#039; file. The &#039;&#039;modprobe.conf&#039;&#039; file is a Linux file that specifies which parts of the kernel are loaded.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;options lnet networks=&amp;lt;network interfaces that LNET can use&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:This step restricts LNET to using only the specified network interfaces and prevents LNET from using all network interfaces.&lt;br /&gt;
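&lt;br /&gt;
:For example, on a node that should use only its first Ethernet interface, the line might read (the interface name is illustrative):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;options lnet networks=tcp0(eth0)&amp;lt;/pre&amp;gt;&lt;br /&gt;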
&lt;br /&gt;
:As an alternative to modifying the &#039;&#039;modprobe.conf&#039;&#039; file, you can modify the &#039;&#039;modprobe.local&#039;&#039; file or the configuration files in the &#039;&#039;modprobe.d&#039;&#039; directory. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; For details on configuring networking and LNET, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingLustreNetworking.html#50438191_pgfId-1289854 Chapter 2: &#039;&#039;Understanding Lustre Networking (LNET)&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;(Optional) Prepare the block devices to be used as OSTs or MDTs.&#039;&#039; Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringStorage.html#50438208_pgfId-1289851 Chapter 6: &#039;&#039;Configuring Storage on a Lustre File System&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Create a combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --fsname=&amp;lt;fsname&amp;gt; --mgs --mdt &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The default file system name (&#039;&#039;fsname&#039;&#039;) is &#039;&#039;lustre&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.&lt;br /&gt;
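&lt;br /&gt;
:For example, for a hypothetical file system named &#039;&#039;temp&#039;&#039; on block device &#039;&#039;/dev/sdb&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;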
&lt;br /&gt;
4. &#039;&#039;Mount the combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;Create the OST.&#039;&#039; On the OSS node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --ost --fsname=&amp;lt;fsname&amp;gt; --mgsnode=&amp;lt;NID&amp;gt; &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:You can have as many OSTs per OSS as the hardware or drivers allow.&lt;br /&gt;
&lt;br /&gt;
:You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.&lt;br /&gt;
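&lt;br /&gt;
:Continuing the hypothetical &#039;&#039;temp&#039;&#039; example, with the MGS at NID &#039;&#039;10.2.0.1@tcp0&#039;&#039; and the OST on &#039;&#039;/dev/sdc&#039;&#039;:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev/sdc&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;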
&lt;br /&gt;
6. &#039;&#039;Mount the OST.&#039;&#039; On the OSS node where the OST was created, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;To create additional OSTs, repeat Steps 5 and 6.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
7. &#039;&#039;Create the client (mount the file system on the client).&#039;&#039; On the client node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
mount -t lustre &amp;lt;MGS node&amp;gt;:/&amp;lt;fsname&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; To create additional clients, repeat Step 7.&lt;br /&gt;
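&lt;br /&gt;
:For the hypothetical &#039;&#039;temp&#039;&#039; file system above, mounted at &#039;&#039;/lustre&#039;&#039;, this would be:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre 10.2.0.1@tcp0:/temp /lustre&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;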
&lt;br /&gt;
8. &#039;&#039;Verify that the file system started and is working&#039;&#039; by running the UNIX&lt;br /&gt;
commands &#039;&#039;df&#039;&#039;, &#039;&#039;dd&#039;&#039; and &#039;&#039;ls&#039;&#039; on the client node.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;Run the &#039;&#039;df&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] df -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:b. &#039;&#039;Run the &#039;&#039;dd&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] cd /lustre&lt;br /&gt;
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:c. &#039;&#039;Run the &#039;&#039;ls&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /lustre] ls -lsah&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have a problem mounting the file system, check the syslogs for errors.&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Example ==&lt;br /&gt;
&lt;br /&gt;
For an example illustrating the configuration steps described in the previous section for&lt;br /&gt;
a Lustre installation comprising a combined MGS/MDT, an OST and a client, see [[Lustre Configuration Example]].&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Utilities  ==&lt;br /&gt;
&lt;br /&gt;
Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, see [[Lustre System Configuration Utilities]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12266</id>
		<title>Configuring the Lustre File System</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12266"/>
		<updated>2011-01-20T18:52:13Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
This page describes how to configure a simple Lustre™ file system comprised of a combined MGS/MDT, an OST and a client. The administrative utilities provided with Lustre, however, can be used to set up systems with many different configurations. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; We recommend that you use dotted-quad (dot-decimal) notation for IP&lt;br /&gt;
addresses (IPv4) rather than host names. This aids in reading debug logs and helps&lt;br /&gt;
when debugging configurations with multiple interfaces.&lt;br /&gt;
&lt;br /&gt;
== Configuring the Lustre File System ==&lt;br /&gt;
This section contains a procedure for configuring the Lustre File System. For an example showing configuration of a Lustre installation comprising a combined MGS/MDT, an OST and a client see the [[Lustre Configuration Example]]. &lt;br /&gt;
&lt;br /&gt;
To configure Lustre Networking (LNET) and the Lustre file&lt;br /&gt;
system, complete these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Define the module options for Lustre networking (LNET)&#039;&#039; by adding this line&lt;br /&gt;
to the &#039;&#039;/etc/modprobe.conf&#039;&#039; file. The &#039;&#039;modprobe.conf&#039;&#039; file is a Linux file that specifies which parts of the kernel are loaded.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;options lnet networks=&amp;lt;network interfaces that LNET can use&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:This step restricts LNET to using only the specified network interfaces and prevents LNET from using all network interfaces.&lt;br /&gt;
&lt;br /&gt;
:As an alternative to modifying the &#039;&#039;modprobe.conf&#039;&#039; file, you can modify the &#039;&#039;modprobe.local&#039;&#039; file or the configuration files in the &#039;&#039;modprobe.d&#039;&#039; directory. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; For details on configuring networking and LNET, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingLustreNetworking.html#50438191_pgfId-1289854 Chapter 2: &#039;&#039;Understanding Lustre Networking (LNET)&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;(Optional) Prepare the block devices to be used as OSTs or MDTs.&#039;&#039; Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringStorage.html#50438208_pgfId-1289851 Chapter 6: Configuring Storage on a Lustre File System] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Create a combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --fsname=&amp;lt;fsname&amp;gt; --mgs --mdt &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The default file system name (&#039;&#039;fsname&#039;&#039;) is &#039;&#039;lustre&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;Mount the combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;Create the OST.&#039;&#039; On the OSS node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --ost --fsname=&amp;lt;fsname&amp;gt; --mgsnode=&amp;lt;NID&amp;gt; &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:You can have as many OSTs per OSS as the hardware or drivers allow.&lt;br /&gt;
&lt;br /&gt;
:You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.&lt;br /&gt;
&lt;br /&gt;
6. &#039;&#039;Mount the OST.&#039;&#039; On the OSS node where the OST was created, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;To create additional OSTs, repeat Steps 5 and 6.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
7. &#039;&#039;Create the client (mount the file system on the client).&#039;&#039; On the client node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
mount -t lustre &amp;lt;MGS node&amp;gt;:/&amp;lt;fsname&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; To create additional clients, repeat Step 7.&lt;br /&gt;
&lt;br /&gt;
8. &#039;&#039;Verify that the file system started and is working&#039;&#039; by running the UNIX&lt;br /&gt;
commands &#039;&#039;df&#039;&#039;, &#039;&#039;dd&#039;&#039; and &#039;&#039;ls&#039;&#039; on the client node.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;Run the &#039;&#039;df&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] df -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:b. &#039;&#039;Run the &#039;&#039;dd&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] cd /lustre&lt;br /&gt;
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:c. &#039;&#039;Run the &#039;&#039;ls&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /lustre] ls -lsah&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have a problem mounting the file system, check the syslogs for errors.&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Example ==&lt;br /&gt;
&lt;br /&gt;
For an example illustrating the configuration steps described in the previous section for&lt;br /&gt;
a Lustre installation comprising a combined MGS/MDT, an OST and a client, see [[Lustre Configuration Example]].&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Utilities  ==&lt;br /&gt;
&lt;br /&gt;
Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, see [[Lustre System Configuration Utilities]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12265</id>
		<title>Configuring the Lustre File System</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12265"/>
		<updated>2011-01-20T18:51:25Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Configuring the Lustre File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
This page describes how to configure a simple Lustre™ file system comprised of a combined MGS/MDT, an OST and a client. The administrative utilities provided with Lustre, however, can be used to set up systems with many different configurations. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; We recommend that you use dotted-quad (dot-decimal) notation for IP&lt;br /&gt;
addresses (IPv4) rather than host names. This aids in reading debug logs and helps&lt;br /&gt;
when debugging configurations with multiple interfaces.&lt;br /&gt;
&lt;br /&gt;
== Configuring the Lustre File System ==&lt;br /&gt;
This section contains a procedure for configuring the Lustre File System. For an example showing configuration of a Lustre installation comprising a combined MGS/MDT, an OST and a client see the [[Lustre Configuration Example]]. &lt;br /&gt;
&lt;br /&gt;
To configure Lustre Networking (LNET) and the Lustre file&lt;br /&gt;
system, complete these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Define the module options for Lustre networking (LNET)&#039;&#039; by adding this line&lt;br /&gt;
to the &#039;&#039;/etc/modprobe.conf&#039;&#039; file. The &#039;&#039;modprobe.conf&#039;&#039; file is a Linux file that specifies which parts of the kernel are loaded.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;options lnet networks=&amp;lt;network interfaces that LNET can use&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:This step restricts LNET to using only the specified network interfaces and prevents LNET from using all network interfaces.&lt;br /&gt;
&lt;br /&gt;
:As an alternative to modifying the &#039;&#039;modprobe.conf&#039;&#039; file, you can modify the &#039;&#039;modprobe.local&#039;&#039; file or the configuration files in the &#039;&#039;modprobe.d&#039;&#039; directory. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; For details on configuring networking and LNET, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingLustreNetworking.html#50438191_pgfId-1289854 Chapter 2: &#039;&#039;Understanding Lustre Networking (LNET)&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;(Optional) Prepare the block devices to be used as OSTs or MDTs.&#039;&#039; Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringStorage.html#50438208_pgfId-1289851 Chapter 6: Configuring Storage on a Lustre File System] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html Lustre Operations Manual].&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Create a combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --fsname=&amp;lt;fsname&amp;gt; --mgs --mdt &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The default file system name (&#039;&#039;fsname&#039;&#039;) is &#039;&#039;lustre&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;Mount the combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;Create the OST.&#039;&#039; On the OSS node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --ost --fsname=&amp;lt;fsname&amp;gt; --mgsnode=&amp;lt;NID&amp;gt; &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:You can have as many OSTs per OSS as the hardware or drivers allow.&lt;br /&gt;
&lt;br /&gt;
:You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.&lt;br /&gt;
&lt;br /&gt;
6. &#039;&#039;Mount the OST.&#039;&#039; On the OSS node where the OST was created, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;To create additional OSTs, repeat Steps 5 and 6.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
7. &#039;&#039;Create the client (mount the file system on the client).&#039;&#039; On the client node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
mount -t lustre &amp;lt;MGS node&amp;gt;:/&amp;lt;fsname&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; To create additional clients, repeat Step 7.&lt;br /&gt;
&lt;br /&gt;
8. &#039;&#039;Verify that the file system started and is working&#039;&#039; by running the UNIX&lt;br /&gt;
commands &#039;&#039;df&#039;&#039;, &#039;&#039;dd&#039;&#039; and &#039;&#039;ls&#039;&#039; on the client node.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;Run the &#039;&#039;df&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] df -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:b. &#039;&#039;Run the &#039;&#039;dd&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] cd /lustre&lt;br /&gt;
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:c. &#039;&#039;Run the &#039;&#039;ls&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /lustre] ls -lsah&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have a problem mounting the file system, check the syslogs for errors.&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Example ==&lt;br /&gt;
&lt;br /&gt;
For an example illustrating the configuration steps described in the previous section for&lt;br /&gt;
a Lustre installation comprising a combined MGS/MDT, an OST and a client, see [[Lustre Configuration Example]].&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Utilities  ==&lt;br /&gt;
&lt;br /&gt;
Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, see [[Lustre System Configuration Utilities]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12264</id>
		<title>Configuring the Lustre File System</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Configuring_the_Lustre_File_System&amp;diff=12264"/>
		<updated>2011-01-20T18:47:53Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Configuring the Lustre File System */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
This page describes how to configure a simple Lustre™ file system comprised of a combined MGS/MDT, an OST and a client. The administrative utilities provided with Lustre, however, can be used to set up systems with many different configurations. &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; We recommend that you use dotted-quad (dot-decimal) notation for IP&lt;br /&gt;
addresses (IPv4) rather than host names. This aids in reading debug logs and helps&lt;br /&gt;
when debugging configurations with multiple interfaces.&lt;br /&gt;
&lt;br /&gt;
== Configuring the Lustre File System ==&lt;br /&gt;
This section contains a procedure for configuring the Lustre File System. For an example showing configuration of a Lustre installation comprising a combined MGS/MDT, an OST and a client see the [[Lustre Configuration Example]]. &lt;br /&gt;
&lt;br /&gt;
To configure Lustre Networking (LNET) and the Lustre file&lt;br /&gt;
system, complete these steps:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Define the module options for Lustre networking (LNET)&#039;&#039; by adding this line&lt;br /&gt;
to the &#039;&#039;/etc/modprobe.conf&#039;&#039; file. The &#039;&#039;modprobe.conf&#039;&#039; file is a Linux file that specifies which parts of the kernel are loaded.&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;options lnet networks=&amp;lt;network interfaces that LNET can use&amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:This step restricts LNET to using only the specified network interfaces and prevents LNET from using all network interfaces.&lt;br /&gt;
&lt;br /&gt;
:As an alternative to modifying the &#039;&#039;modprobe.conf&#039;&#039; file, you can modify the &#039;&#039;modprobe.local&#039;&#039; file or the configuration files in the &#039;&#039;modprobe.d&#039;&#039; directory. &lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; For details on configuring networking and LNET, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingLustreNetworking.html#50438191_pgfId-1289854 Chapter 2: Understanding Lustre Networking (LNET)] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html Lustre Operations Manual].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;(Optional) Prepare the block devices to be used as OSTs or MDTs.&#039;&#039; Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringStorage.html#50438208_pgfId-1289851 Chapter 6: Configuring Storage on a Lustre File System] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html Lustre Operations Manual].&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Create a combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --fsname=&amp;lt;fsname&amp;gt; --mgs --mdt &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The default file system name (&#039;&#039;fsname&#039;&#039;) is &#039;&#039;lustre&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;Mount the combined MGS/MDT file system on the block device.&#039;&#039; On the MDS&lt;br /&gt;
node, run:&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
5. &#039;&#039;Create the OST.&#039;&#039; On the OSS node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mkfs.lustre --ost --fsname=&amp;lt;fsname&amp;gt; --mgsnode=&amp;lt;NID&amp;gt; &amp;lt;block device name&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:You can have as many OSTs per OSS as the hardware or drivers allow.&lt;br /&gt;
&lt;br /&gt;
:You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.&lt;br /&gt;
&lt;br /&gt;
6. &#039;&#039;Mount the OST.&#039;&#039; On the OSS node where the OST was created, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;mount -t lustre &amp;lt;block device name&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; &#039;&#039;To create additional OSTs, repeat Steps 5 and 6.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
7. &#039;&#039;Create the client (mount the file system on the client).&#039;&#039; On the client node, run:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
mount -t lustre &amp;lt;MGS node&amp;gt;:/&amp;lt;fsname&amp;gt; &amp;lt;mount point&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; To create additional clients, repeat Step 7.&lt;br /&gt;
&lt;br /&gt;
8. &#039;&#039;Verify that the file system started and is working&#039;&#039; by running the UNIX&lt;br /&gt;
commands &#039;&#039;df&#039;&#039;, &#039;&#039;dd&#039;&#039; and &#039;&#039;ls&#039;&#039; on the client node.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;Run the &#039;&#039;df&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] df -h&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:b. &#039;&#039;Run the &#039;&#039;dd&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /] cd /lustre&lt;br /&gt;
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:c. &#039;&#039;Run the &#039;&#039;ls&#039;&#039; command.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
[root@client1 /lustre] ls -lsah&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you have a problem mounting the file system, check the syslogs for errors.&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Example ==&lt;br /&gt;
&lt;br /&gt;
For an example illustrating the configuration steps described in the previous section for&lt;br /&gt;
a Lustre installation comprising a combined MGS/MDT, an OST and a client, see [[Lustre Configuration Example]].&lt;br /&gt;
&lt;br /&gt;
== Lustre Configuration Utilities  ==&lt;br /&gt;
&lt;br /&gt;
Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, see [[Lustre System Configuration Utilities]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Download:Download&amp;diff=12263</id>
		<title>Download:Download</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Download:Download&amp;diff=12263"/>
		<updated>2011-01-20T18:44:47Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Lustre™ is a scalable, secure, highly-available cluster file system. It is designed, developed and maintained by Oracle Corporation. &lt;br /&gt;
[[Learn|Learn More]]&lt;br /&gt;
&lt;br /&gt;
Official production releases and pre-release versions of Lustre software are available for download. Official releases offer new features and enhancements, and have undergone thorough test cycles. They are available at the Oracle [https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=LUSTRE-184-G-F@CDS-CDS_SMI download] site. Pre-release versions of Lustre are still being coded or are undergoing release testing. They are available for checkout from the Lustre source repository. &lt;br /&gt;
&lt;br /&gt;
If you are ready to get a production-level release of Lustre or ready to try a pre-release version, download it here.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Official Releases&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The latest official release of Lustre software is always available from Oracle Corporation, along with earlier production versions. To download an official release of Lustre, visit the Oracle &amp;lt;ins&amp;gt;[https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=LUSTRE-184-G-F@CDS-CDS_SMI download]&amp;lt;/ins&amp;gt; site.&lt;br /&gt;
&lt;br /&gt;
Currently, all Lustre 1.8.x and 2.0.0 versions are available for download. To determine which Lustre release supports the features and environment you want, see the &amp;lt;ins&amp;gt;[[Lustre_Release_Information#Lustre_Support_Matrix|Lustre Support Matrix]]&amp;lt;/ins&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
A [https://www.sun.com/software/products/lustre/datasheet.pdf &amp;lt;ins&amp;gt;datasheet&amp;lt;/ins&amp;gt; for Lustre 1.8] is also available.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;strong&amp;gt;[https://cds.sun.com/is-bin/INTERSHOP.enfinity/WFS/CDS-CDS_SMI-Site/en_US/-/USD/ViewProductDetail-Start?ProductRef=LUSTRE-184-G-F@CDS-CDS_SMI Get Lustre from Oracle]&amp;lt;/strong&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryRight&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Pre-Release Versions&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As Lustre is an open-source product, we encourage contributions to develop and test Lustre by trying out pre-release versions of the software. To obtain Lustre code from the source repository, you must have the Git version control system installed.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;strong&amp;gt;[[Accessing_Lustre_Code|Get Lustre from Git]]&amp;lt;/strong&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Finding_a_Project&amp;diff=12262</id>
		<title>Finding a Project</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Finding_a_Project&amp;diff=12262"/>
		<updated>2011-01-20T18:43:00Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Nov 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
This page describes how to [[#Finding a Bug to Fix|find a bug to fix]], [[#Selecting a Project to Enhance Lustre|select a project to enhance Lustre]] or [[#Helping with Lustre Testing|help with Lustre testing]]. Lustre defects and features or to-do items are logged in the Bugzilla bug tracking system. &lt;br /&gt;
&lt;br /&gt;
You can also contact the [[Lustre_Mailing_Lists|Lustre Development mailing list]] (often referred to as [mailto:lustre-devel@lists.lustre.org lustre-devel]) to discuss ideas for projects that match your skills and interests. Note that Lustre encompasses a number of development areas, including user tools, documentation, disk filesystems, networking, kernel integration, etc., so you can almost always find a project that is interesting and challenging.&lt;br /&gt;
&lt;br /&gt;
Having a specific problem to fix requires an understanding of the flow of operations and how that maps to specific code.  It gives you a concrete goal that provides a context for investigating the code, rather than just reading vaguely through the vast Lustre code base.  &lt;br /&gt;
&lt;br /&gt;
Once you have selected a project, contact [mailto:lustre-devel@lists.lustre.org lustre-devel] to discuss the best approach to take and to keep others aware of what you are working on.  &lt;br /&gt;
&lt;br /&gt;
== Finding a Bug to Fix ==&lt;br /&gt;
Fixing bugs in Lustre is a good way to become familiar with the Lustre code if you&#039;ve not worked on it before. Some ways to find a bug you&#039;d like to work on are:&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Search [https://bugzilla.lustre.org/buglist.cgi?query_format=advanced&amp;amp;product=Lustre&amp;amp;keywords_type=anywords&amp;amp;keywords=easy+needs-test&amp;amp;bug_status=UNCONFIRMED&amp;amp;bug_status=NEW&amp;amp;bug_status=ASSIGNED&amp;amp;bug_status=REOPENED&amp;amp;order=Reuse+same+sort+as+last+time Bugzilla] for bugs tagged with the &amp;quot;easy&amp;quot; keyword.&#039;&#039; Some Lustre developers use this keyword to indicate that a bug could be fixed by someone without in-depth familiarity with the Lustre code.&lt;br /&gt;
* &#039;&#039;Search [https://bugzilla.lustre.org/query.cgi Bugzilla] for very old Lustre bugs.&#039;&#039;  These are typically non-critical bugs that are not dependent on a release timeline. They can vary widely in complexity.  In particular, doing an empty Bugzilla query and looking at the first 100 items (sorted by bug number) shows a lot of bugs that are either relatively hard to reproduce, not generally visible to users, or &amp;quot;nice-to-have&amp;quot; features that no customer has specifically prioritized to be fixed.&lt;br /&gt;
&lt;br /&gt;
== Selecting a Project to Enhance Lustre ==&lt;br /&gt;
If you&#039;d like to take on a project to enhance or add a new feature to Lustre, consider one of these options:&lt;br /&gt;
&lt;br /&gt;
* Pick a project from the [[Lustre Project List]]. For guidance in selecting or proceeding with a project, contact [mailto:lustre-devel@lists.lustre.org lustre-devel].&lt;br /&gt;
&lt;br /&gt;
* Ask for a project on [mailto:lustre-devel@lists.lustre.org lustre-devel].  This mailing list is read by many of the Lustre developers and is a good place for questions, ideas, and feedback.&lt;br /&gt;
&lt;br /&gt;
* &#039;&#039;Assist with keeping Lustre up-to-date with recent kernel changes.&#039;&#039;  Porting Lustre to newer kernel versions is an ongoing effort, given the large number of vendor and upstream kernel releases.  For some changes, a simple fix to the Lustre code will be required, while for others, a good understanding of the Linux kernel and how Lustre interfaces with it is needed.&lt;br /&gt;
* &#039;&#039;Propose a new feature that can be developed as a separate module on top of Lustre.&#039;&#039; Be sure to get feedback on your proposal by contacting [mailto:lustre-devel@lists.lustre.org lustre-devel] before you get started.&lt;br /&gt;
&lt;br /&gt;
== Helping with Lustre Testing ==&lt;br /&gt;
Testing Lustre under a variety of workloads is always valuable. The more unusual the I/O pattern used by a benchmark, application, or testing tool, the more likely it is to uncover something of interest.&lt;br /&gt;
&lt;br /&gt;
To find out how you can contribute to the testing of upcoming Lustre releases, see [[Lustre_Test_Plans|Lustre Test Plans]].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=GSS_/_Kerberos&amp;diff=12261</id>
		<title>GSS / Kerberos</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=GSS_/_Kerberos&amp;diff=12261"/>
		<updated>2011-01-20T18:41:13Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Secure MGC - MGS connection */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&#039;&#039;&#039;Note:&#039;&#039;&#039; Only the HEAD branch supports GSS/Kerberos functionality. It is subject to changes at any time, and backward compatibility is NOT guaranteed.&lt;br /&gt;
&lt;br /&gt;
= Kerberos Lustre Setup =&lt;br /&gt;
&lt;br /&gt;
== Security Flavor ==&lt;br /&gt;
A security flavor is a string that describes the kind of authentication and data transformation to be performed on a PTLRPC connection. It covers both RPC messages and bulk data.&lt;br /&gt;
&lt;br /&gt;
The supported flavors are described in the following table:&lt;br /&gt;
&lt;br /&gt;
{|border=1 cellspacing=0&lt;br /&gt;
|bgcolor=#E6E6E6| Base Flavor||bgcolor=#E6E6E6|Authentication||bgcolor=#E6E6E6|RPC Message Protection||bgcolor=#E6E6E6|Bulk Data Protection||bgcolor=#E6E6E6|Notes&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;null&#039;&#039;&#039;&#039;&#039;||N/A ||N/A ||N/A &#039;&#039;&#039;[*]&#039;&#039;&#039; ||Almost no performance overhead. The on-wire rpc format is compatible with old versions (1.4.x, 1.6.x, 1.8.x).&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;plain&#039;&#039;&#039;&#039;&#039;||N/A ||N/A ||checksum||(obsolete)&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;krb5n&#039;&#039;&#039;&#039;&#039;||GSS/Kerberos5 ||null||checksum (adler32)||No protection of rpc message, adler32 checksum protection of bulk data, light performance overhead.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;krb5a&#039;&#039;&#039;&#039;&#039;||GSS/Kerberos5 ||partial integrity (krb5)||checksum (adler32)||Only the header of the rpc message is integrity protected; adler32 checksum protection of bulk data; more performance overhead compared to krb5n.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;krb5i&#039;&#039;&#039;&#039;&#039;||GSS/Kerberos5 ||integrity (krb5)||integrity (krb5)||the transformation algorithm is determined by the actual Kerberos algorithms in use; heavy performance penalty.&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;krb5p&#039;&#039;&#039;&#039;&#039;||GSS/Kerberos5 ||privacy (krb5)||privacy (krb5)||the privacy protection algorithm is determined by the actual Kerberos algorithms in use; the heaviest performance penalty.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;[*]&#039;&#039;&#039; In Lustre 1.4 and 1.6 it is possible to enable bulk data checksumming to provide integrity checking using CRC32.  In 1.6.5 this is expected to be the default behaviour, using the Adler32 mechanism by default (lower CPU overhead than CRC32).&lt;br /&gt;
&lt;br /&gt;
In the future, we may want to support customized flavors to some extent; for example, allowing different flavors to be set for RPC messages and bulk data.&lt;br /&gt;
&lt;br /&gt;
== Kerberos Setup ==&lt;br /&gt;
=== Distribution ===&lt;br /&gt;
* Only MIT Kerberos 5 is supported, from version 1.3.x to the latest 1.6.x.&lt;br /&gt;
&lt;br /&gt;
=== Configuration ===&lt;br /&gt;
1. Configure client nodes:&lt;br /&gt;
*For each client node, create a lustre_root principal and generate keytab.&lt;br /&gt;
   kadmin&amp;gt; addprinc -randkey lustre_root/client_host.domain@REALM&lt;br /&gt;
   kadmin&amp;gt; ktadd -e aes128-cts:normal lustre_root/client_host.domain@REALM &lt;br /&gt;
*Install the keytab on the client node.&lt;br /&gt;
&lt;br /&gt;
2. Configure MDS node:&lt;br /&gt;
*For each MDS node, create a lustre_mds principal and generate keytab.&lt;br /&gt;
  kadmin&amp;gt; addprinc -randkey lustre_mds/mds_host.domain@REALM&lt;br /&gt;
  kadmin&amp;gt; ktadd -e aes128-cts:normal lustre_mds/mds_host.domain@REALM&lt;br /&gt;
*Install the keytab on the MDS node.&lt;br /&gt;
&lt;br /&gt;
3. Configure OSS node:&lt;br /&gt;
*For each OSS node, create a lustre_oss principal and generate keytab.&lt;br /&gt;
  kadmin&amp;gt; addprinc -randkey lustre_oss/oss_host.domain@REALM&lt;br /&gt;
  kadmin&amp;gt; ktadd -e aes128-cts:normal lustre_oss/oss_host.domain@REALM&lt;br /&gt;
*Install the keytab on the OSS node.&lt;br /&gt;
&lt;br /&gt;
NOTES:&lt;br /&gt;
*The &#039;&#039;host.domain&#039;&#039; should be the FQDN in your network; otherwise the server might not recognize the GSS request.&lt;br /&gt;
&lt;br /&gt;
*As an alternative to per-client keytabs, if you want to avoid assigning a unique keytab to each client node, you can create a general lustre_root principal and its keytab, and install the same keytab on as many client nodes as you want. &#039;&#039;&#039;But be aware that in this case one compromised client means all clients are insecure&#039;&#039;&#039;.&lt;br /&gt;
      kadmin&amp;gt; addprinc -randkey lustre_root@REALM&lt;br /&gt;
      kadmin&amp;gt; ktadd -e aes128-cts:normal lustre_root@REALM&lt;br /&gt;
&lt;br /&gt;
*To merge keytab files, use the tool &#039;&#039;&#039;&#039;&#039;ktutil&#039;&#039;&#039;&#039;&#039;; for more details, please refer to the ktutil manual.&lt;br /&gt;
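A minimal sketch of merging two keytab files with MIT ktutil (the file names here are examples only):&lt;br /&gt;
   ktutil:  rkt /etc/krb5.keytab&lt;br /&gt;
   ktutil:  rkt /tmp/lustre_root.keytab&lt;br /&gt;
   ktutil:  wkt /etc/krb5-merged.keytab&lt;br /&gt;
   ktutil:  quit&lt;br /&gt;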
&lt;br /&gt;
*Lustre supports the following &#039;&#039;enctypes&#039;&#039; for MIT Kerberos 5 version 1.4 or higher:&lt;br /&gt;
**&amp;lt;u&amp;gt;&#039;&#039;des-cbc-md5&#039;&#039;&amp;lt;/u&amp;gt;&lt;br /&gt;
**&amp;lt;u&amp;gt;&#039;&#039;des3-hmac-sha1&#039;&#039;&amp;lt;/u&amp;gt;&lt;br /&gt;
**&amp;lt;u&amp;gt;&#039;&#039;aes128-cts&#039;&#039;&amp;lt;/u&amp;gt;&lt;br /&gt;
**&amp;lt;u&amp;gt;&#039;&#039;aes256-cts&#039;&#039;&amp;lt;/u&amp;gt;&lt;br /&gt;
&lt;br /&gt;
*For MIT Kerberos 1.3.x, only &#039;&#039;des-cbc-md5&#039;&#039; works, because of a known issue between libgssapi and the Kerberos library.&lt;br /&gt;
&lt;br /&gt;
== Required packages ==&lt;br /&gt;
Every node should have the following packages installed:&lt;br /&gt;
* &#039;&#039;&#039;&#039;&#039;libgssapi&#039;&#039;&#039;&#039;&#039; version 0.10 or higher. Some newer Linux distributions already come with it. If not, build &amp;amp; install from source: http://www.citi.umich.edu/projects/nfsv4/linux/libgssapi/libgssapi-0.11.tar.gz&lt;br /&gt;
* &#039;&#039;&#039;&#039;&#039;keyutils&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
== Kernel &amp;amp; Environment ==&lt;br /&gt;
* System wide configuration:&lt;br /&gt;
On each node (MDT, OST, Client), the following line should be added to /etc/fstab so that it is automatically mounted:&lt;br /&gt;
   nfsd         /proc/fs/nfsd            nfsd            defaults   0 0 &lt;br /&gt;
On each MDT and Client node, add the following line to /etc/request-key.conf:&lt;br /&gt;
   create lgssc * * /usr/sbin/lgss_keyring %o %k %t %d %c %u %g %T %P %S&lt;br /&gt;
Note that you might need to replace &#039;&#039;&#039;/usr/sbin/lgss_keyring&#039;&#039;&#039; in the above line with the actual path to the lgss_keyring binary in your installation.&lt;br /&gt;
&lt;br /&gt;
* Networking:&lt;br /&gt;
If you are using a network which is &#039;&#039;&#039;NOT&#039;&#039;&#039; TCP or InfiniBand (e.g. Quadrics Elan, Myrinet, etc.), you need to configure a &#039;&#039;&#039;&#039;&#039;/etc/lustre/nid2hostname&#039;&#039;&#039;&#039;&#039; script on &#039;&#039;&#039;each&#039;&#039;&#039; server node (MDT &amp;amp; OST) to translate a NID into a hostname. The following is a sample for an Elan cluster:&lt;br /&gt;
&lt;br /&gt;
   #!/bin/bash&lt;br /&gt;
   set -x&lt;br /&gt;
   exec 2&amp;gt;/tmp/$(basename $0).debug&lt;br /&gt;
    &lt;br /&gt;
   # convert a NID for a LND to a hostname, for GSS for example&lt;br /&gt;
    &lt;br /&gt;
   # called with three arguments: lnd netid nid&lt;br /&gt;
   #   $lnd will be string &amp;quot;QSWLND&amp;quot;, &amp;quot;GMLND&amp;quot;, etc.&lt;br /&gt;
   #   $netid will be number in hex string format, like &amp;quot;0x16&amp;quot;, etc.&lt;br /&gt;
   #   $nid has the same format as $netid&lt;br /&gt;
   # output the corresponding hostname, or an error message led by a &#039;@&#039; for error logging.&lt;br /&gt;
    &lt;br /&gt;
   lnd=$1&lt;br /&gt;
   netid=$2&lt;br /&gt;
   nid=$3&lt;br /&gt;
     &lt;br /&gt;
   # uppercase the hex&lt;br /&gt;
   nid=$(echo $nid | tr &#039;[abcdef]&#039; &#039;[ABCDEF]&#039;)&lt;br /&gt;
   # and convert to decimal&lt;br /&gt;
   nid=$(echo -e &amp;quot;ibase=16\n${nid/#0x}&amp;quot; | bc)&lt;br /&gt;
   case $lnd in&lt;br /&gt;
        QSWLND)   # simply stick &amp;quot;mtn&amp;quot; on the front&lt;br /&gt;
                  echo &amp;quot;mtn$nid&amp;quot;&lt;br /&gt;
                  ;;&lt;br /&gt;
        *)        echo &amp;quot;@unknown LND: $lnd&amp;quot;&lt;br /&gt;
                  ;;&lt;br /&gt;
   esac&lt;br /&gt;
&lt;br /&gt;
== Build Lustre ==&lt;br /&gt;
Enable GSS during configuration:&lt;br /&gt;
&lt;br /&gt;
 ./configure --enable-gss --other-options&lt;br /&gt;
&lt;br /&gt;
== Running ==&lt;br /&gt;
=== GSS Daemons ===&lt;br /&gt;
Make sure the daemon process &#039;&#039;&#039;lsvcgssd&#039;&#039;&#039; is started on each OST and MDT node before starting Lustre. The command syntax is:&lt;br /&gt;
 lsvcgssd [-f] [-v]&lt;br /&gt;
* &#039;&#039;-f&#039;&#039;: run in the foreground instead of as a daemon, so error/warning messages go to the console instead of the system log.&lt;br /&gt;
* &#039;&#039;-v&#039;&#039;: increase verbosity by 1. The default is 0; the maximum is 4.&lt;br /&gt;
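For example, a sketch of running the daemon in the foreground at verbosity 2 (assuming &#039;&#039;-v&#039;&#039; may be repeated):&lt;br /&gt;
 oss&amp;gt; lsvcgssd -f -v -v&lt;br /&gt;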
&lt;br /&gt;
=== Setting Security Flavors ===&lt;br /&gt;
Note: If nothing is specified, all RPC connections will use &#039;&#039;&#039;&#039;&#039;null&#039;&#039;&#039;&#039;&#039; by default.&lt;br /&gt;
&lt;br /&gt;
The MGS keeps a persistent sptlrpc rule database; by specifying a set of rules, you can change the security flavors used between nodes. A rule has the form:&lt;br /&gt;
 &amp;lt;spec&amp;gt;=&amp;lt;flavor&amp;gt;&lt;br /&gt;
Rules can be manipulated on MGS node. To add a rule:&lt;br /&gt;
 mgs&amp;gt; lctl conf_param &amp;lt;spec&amp;gt;=&amp;lt;flavor&amp;gt;&lt;br /&gt;
If a rule with the same &amp;lt;spec&amp;gt; part already exists, it will be overwritten.&lt;br /&gt;
&lt;br /&gt;
To delete a rule:&lt;br /&gt;
 mgs&amp;gt; lctl conf_param -d &amp;lt;spec&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The current rule set can be obtained by:&lt;br /&gt;
 mgs&amp;gt; cat /proc/fs/lustre/mgs/&amp;lt;mgs-name&amp;gt;/live/&amp;lt;fs-name&amp;gt; | grep &amp;quot;srpc.flavor&amp;quot;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note&#039;&#039;&#039;:&lt;br /&gt;
* Rules are stored persistently on the MGS, so they still apply after a re-mount.&lt;br /&gt;
* It doesn&#039;t matter in which order you add a set of rules; Lustre keeps the rules in a fixed order of priority.&lt;br /&gt;
* After you change a rule, it usually takes the system up to a minute to apply the new rule to all nodes, depending on system load.&lt;br /&gt;
* Before you change a rule, make sure the affected nodes are ready for the new security flavor. For example, if you change the flavor from &#039;&#039;&#039;&#039;&#039;null&#039;&#039;&#039;&#039;&#039; to &#039;&#039;&#039;&#039;&#039;krb5p&#039;&#039;&#039;&#039;&#039; but the GSS/Kerberos environment is not properly configured on the affected nodes, those nodes might be evicted because they can&#039;t communicate with the others.&lt;br /&gt;
* You can also specify rules via on-disk device parameters, using mkfs.lustre or tunefs.lustre (see the sketch below). The syntax is the same, and the rule applies only to connections to that specific target (MDT/OST).&lt;br /&gt;
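&lt;br /&gt;
For example, a sketch of setting an on-disk rule with tunefs.lustre (the device name is an example only, and we assume the &amp;lt;target&amp;gt; prefix is omitted for a per-device parameter):&lt;br /&gt;
 oss&amp;gt; tunefs.lustre --param=&amp;quot;srpc.flavor.tcp0=krb5i&amp;quot; /dev/sdb&lt;br /&gt;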
&lt;br /&gt;
=== Rules Syntax &amp;amp; Examples ===&lt;br /&gt;
The general syntax is:&lt;br /&gt;
 &amp;lt;target&amp;gt;.srpc.flavor.&amp;lt;network&amp;gt;[.&amp;lt;direction&amp;gt;]=flavor&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;target&amp;gt;: can be a filesystem name or a specific MDT/OST device name. For example, &#039;&#039;lustre&#039;&#039;, &#039;&#039;lustre-MDT0000&#039;&#039;, &#039;&#039;lustre-OST0001&#039;&#039;, etc.&lt;br /&gt;
* &amp;lt;network&amp;gt;: LNET network name of the RPC initiator. For example, &#039;&#039;tcp0&#039;&#039;, &#039;&#039;elan1&#039;&#039;, &#039;&#039;o2ib0&#039;&#039;.&lt;br /&gt;
* &amp;lt;direction&amp;gt;: can be one of &#039;&#039;cli2mdt&#039;&#039;, &#039;&#039;cli2ost&#039;&#039;, &#039;&#039;mdt2mdt&#039;&#039;, &#039;&#039;mdt2ost&#039;&#039;. In most cases you don&#039;t need to specify the &amp;lt;direction&amp;gt; part.&lt;br /&gt;
&lt;br /&gt;
Examples:&lt;br /&gt;
* Apply &#039;&#039;krb5i&#039;&#039; on &#039;&#039;&#039;ALL&#039;&#039;&#039; connections:&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default=krb5i&lt;br /&gt;
&lt;br /&gt;
* Nodes in network &#039;&#039;tcp0&#039;&#039; use &#039;&#039;krb5p&#039;&#039;; all other nodes use &#039;&#039;null&#039;&#039;&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.tcp0=krb5p&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default=null&lt;br /&gt;
&lt;br /&gt;
* Nodes in network &#039;&#039;tcp0&#039;&#039; use &#039;&#039;krb5p&#039;&#039;; nodes in &#039;&#039;elan1&#039;&#039; use &#039;&#039;plain&#039;&#039;; among all other nodes, clients use &#039;&#039;krb5i&#039;&#039; to MDT/OST, MDTs use &#039;&#039;null&#039;&#039; to other MDTs, and MDTs use &#039;&#039;plain&#039;&#039; to OSTs.&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.tcp0=krb5p&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.elan1=plain&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default.cli2mdt=krb5i&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default.cli2ost=krb5i&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default.mdt2mdt=null&lt;br /&gt;
  mgs&amp;gt; lctl conf_param lustre.srpc.flavor.default.mdt2ost=plain&lt;br /&gt;
&lt;br /&gt;
=== Authenticate Normal Users ===&lt;br /&gt;
On client nodes, a non-root user needs to run &#039;&#039;&#039;&#039;&#039;kinit&#039;&#039;&#039;&#039;&#039; before accessing Lustre, just like other Kerberized applications.&lt;br /&gt;
* As required by Kerberos, the user&#039;s principal (&#039;&#039;username@REALM&#039;&#039;) must be added to the KDC.&lt;br /&gt;
* Client and MDT nodes should have the same user database, i.e. the same user names and uid/gid translation.&lt;br /&gt;
A user can destroy the established security contexts before logout with &amp;quot;lfs flushctx&amp;quot;:&lt;br /&gt;
&lt;br /&gt;
 lfs flushctx [-k]&lt;br /&gt;
&lt;br /&gt;
Here &amp;quot;-k&amp;quot; means also destroy the on-disk Kerberos credential cache (equivalent to &amp;quot;kdestroy&amp;quot;); otherwise only the established contexts in Lustre kernel memory are destroyed.&lt;br /&gt;
&lt;br /&gt;
== Secure MGC - MGS connection ==&lt;br /&gt;
Each node can specify which flavor to use to connect to the MGS, via the option &#039;&#039;&#039;&#039;&#039;mgssec=flavor&#039;&#039;&#039;&#039;&#039; when mounting a target device or client. By default, &#039;&#039;null&#039;&#039; is chosen. Once a flavor is chosen, it can&#039;t be changed until umount.&lt;br /&gt;
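&lt;br /&gt;
For example, a client mount requesting &#039;&#039;krb5i&#039;&#039; for the MGS connection might look like this sketch (the MGS NID, filesystem name, and mount point are examples only):&lt;br /&gt;
 client&amp;gt; mount -t lustre -o mgssec=krb5i mgsnode@tcp0:/testfs /mnt/testfs&lt;br /&gt;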
&lt;br /&gt;
Because each node has only one connection to the MGS, if there&#039;s more than one target device or client on a single node, all of the &amp;quot;mgssec=&amp;quot; specifications must be the same. Simply omitting the &amp;quot;mgssec=&amp;quot; option means &amp;quot;use the currently chosen flavor&amp;quot;.&lt;br /&gt;
&lt;br /&gt;
By default, the MGS accepts RPCs with any flavor, but the system administrator can configure the MGS to only accept a certain flavor from a certain network. The syntax is similar, but with the target given as the special &amp;quot;_mgs&amp;quot;:&lt;br /&gt;
 mgs&amp;gt; lctl conf_param _mgs.srpc.flavor.&amp;lt;network&amp;gt;=flavor&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039; Applying an inappropriate flavor may leave a node unable to communicate with the MGS until restart, so use this carefully.&lt;br /&gt;
&lt;br /&gt;
== Cross-Realms Authentication ==&lt;br /&gt;
Because idmap functionality is missing, we don&#039;t currently support cross-realm authentication.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=HPC_Software_Workshop_and_Seminars_-_Regensburg_Germany_2009&amp;diff=12260</id>
		<title>HPC Software Workshop and Seminars - Regensburg Germany 2009</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=HPC_Software_Workshop_and_Seminars_-_Regensburg_Germany_2009&amp;diff=12260"/>
		<updated>2011-01-20T18:39:53Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Lustre Advanced  Seminar and Open Storage Track Agenda and Slides */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The three-day [http://hpcworkshop.com HPC Software Workshop] offered three tracks with Sun and customer presentations around Grid Engine, Open Storage (including Lustre™ and SAM-QFS), and software Development Tools, including Sun Studio and Sun HPC ClusterTools.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Grid Engine&#039;&#039;&#039; track was the Grid Engine community meeting for 2009. We discussed our progress since the last year, how our community members are using Grid Engine today, and where we are planning to go in the future.&lt;br /&gt;
&lt;br /&gt;
* Lustre talks in the &#039;&#039;&#039;Open Storage&#039;&#039;&#039; track provided participants with the opportunity to learn new technical information, acquire best practices, and share knowledge about Lustre technology. Other topics in the &#039;&#039;&#039;Open Storage&#039;&#039;&#039; track included Sun Open Storage and other Sun open file systems, including ZFS and SAM-QFS.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Development Tools&#039;&#039;&#039; track covered the end-to-end process of developing and deploying applications and services in HPC and cloud computing environments, including Sun HPC ClusterTools and Sun Studio software. Other topics in this track included advanced language research for HPC and cloud computing technologies.&lt;br /&gt;
&lt;br /&gt;
Additionally, three advanced seminars were held as adjuncts to the HPC Software Workshop.&lt;br /&gt;
&lt;br /&gt;
* A special &#039;&#039;&#039;Lustre Advanced Technical Seminar&#039;&#039;&#039; was offered with senior Lustre engineers. They discussed several areas of the product they are working on, and took questions and provided answers about tuning, configuration, and debugging.&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;&#039;HPC Parallel Programming Seminar&#039;&#039;&#039; described aspects of parallel computing and provided an introduction to the basics of the MPI and OpenMP programming models.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Grid Engine Advanced Seminar&#039;&#039;&#039; was presented to Sun Grid Engine administrators wanting to learn about administration facilities, expert configurations and best practices for complex use case scenarios. The class was conducted by Sun Grid Engine experts and engineers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;big&amp;gt;Lustre Advanced  Seminar and Open Storage Track Agenda and Slides&amp;lt;/big&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The HPC Software Workshop featured an advanced Lustre seminar and numerous Open Storage Track sessions on select Lustre topics.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 7 - Monday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Lustre Advanced Technical Seminar covered topics of interest to Lustre users, and featured the following presentations from our senior-level developers:&lt;br /&gt;
&lt;br /&gt;
[[Media:Shpc-2009-lombardi-lustre-practice.pdf|&#039;&#039;Lustre in Practice&#039;&#039;]] - Johann Lombardi, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Shpc-2009-lombardi-lustre-protocol.pdf|&#039;&#039;Some Protocol Basics&#039;&#039;]] - Johann Lombardi, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Shpc-2009-adilger-workshop.pdf|&#039;&#039;Lustre Deep Dive&#039;&#039;]] - Andreas Dilger, Sun&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 8 - Tuesday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_HPC_Workshop_Lustre_Roadmap.pdf|&#039;&#039;Lustre Roadmap&#039;&#039;]] - Dan Ferber, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_lustre_automotive.pdf|&#039;&#039;Lustre in Automotive&#039;&#039;]] - Daniel Kobras, science + computing ag&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_shpc-2009-zfs.pdf|&#039;&#039;Lustre, ZFS &amp;amp; End-to-End Data Integrity&#039;&#039;]] - Andreas Dilger, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_CERNLustreEvaluations-SunHPCWorkshop08SEP2009.pdf|&#039;&#039;Lustre Investigations at CERN&#039;&#039;]] - Arne Wiebalck, CERN&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_shpc-2009-cmd.pdf|&#039;&#039;Lustre Clustered Metadata&#039;&#039;]] - Andreas Dilger, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Tuesday_lustre_and_physics_HPC09.pdf|&#039;&#039;Lustre, SGE &amp;amp; Physics Code&#039;&#039;]] - Rajmund Krivec, J. Stefan Institute&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 9 - Wednesday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_Snowbird_Regensburg_TKP.pdf|&#039;&#039;Sun Lustre Storage System&#039;&#039;]] - Torben Kling-Petersen, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_Lustre_FZJ.pdf|&#039;&#039;Lustre at FZ Juelich - Status and Goals&#039;&#039;]] - Otto Büchner, Forschungszentrum Juelich&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_HPC_Workshop_Lustre_Performance_Sep_2009_Rev2.pdf|&#039;&#039;Sample Lustre Performance Data&#039;&#039;]] - Dan Ferber, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_shpc-2009-benchmarking.pdf|&#039;&#039;Lustre Performance Tips and Tricks&#039;&#039;]] - Torben Kling-Petersen, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_SAM-QFS.HPC.PUB.pdf|&#039;&#039;Data Management with Shared QFS and Storage Archive Manager&#039;&#039;]] - Harriet Coverston, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Wednesday_Tools.pdf|&#039;&#039;Lustre Management &amp;amp; Monitoring Tools&#039;&#039;]] - Sven Trautman, Sun&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 10 - Thursday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[Media:Thursday_Quota.pdf|&#039;&#039;Lustre Quota: Current and Future&#039;&#039;]] - Johann Lombardi, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Thursday_HPC_Workshop_Lustre_Community_Sep_2009_Rev2.pdf|&#039;&#039;Lustre Community&#039;&#039;]] - Dan Ferber, Sun&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=HPC_Software_Workshop_and_Seminars_-_Regensburg_Germany_2009&amp;diff=12259</id>
		<title>HPC Software Workshop and Seminars - Regensburg Germany 2009</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=HPC_Software_Workshop_and_Seminars_-_Regensburg_Germany_2009&amp;diff=12259"/>
		<updated>2011-01-20T18:32:01Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Lustre Advanced  Seminar and Open Storage Track Agenda and Slides */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Sep 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The three-day [http://hpcworkshop.com HPC Software Workshop] offered three tracks with Sun and customer presentations around Grid Engine, Open Storage (including Lustre™ and SAM-QFS), and software Development Tools, including Sun Studio and Sun HPC ClusterTools.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Grid Engine&#039;&#039;&#039; track was the Grid Engine community meeting for 2009. We discussed our progress since the last year, how our community members are using Grid Engine today, and where we are planning to go in the future.&lt;br /&gt;
&lt;br /&gt;
* Lustre talks in the &#039;&#039;&#039;Open Storage&#039;&#039;&#039; track provided participants with the opportunity to learn new technical information, acquire best practices, and share knowledge about Lustre technology. Other topics in the &#039;&#039;&#039;Open Storage&#039;&#039;&#039; track included Sun Open Storage and other Sun open file systems, including ZFS and SAM-QFS.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Development Tools&#039;&#039;&#039; track covered the end-to-end process of developing and deploying applications and services in HPC and cloud computing environments, including Sun HPC ClusterTools and Sun Studio software. Other topics in this track included advanced language research for HPC and cloud computing technologies.&lt;br /&gt;
&lt;br /&gt;
Additionally, three advanced seminars were held as adjuncts to the HPC Software Workshop.&lt;br /&gt;
&lt;br /&gt;
* A special &#039;&#039;&#039;Lustre Advanced Technical Seminar&#039;&#039;&#039; was offered with senior Lustre engineers. They discussed several areas of the product they are working on, and took questions and provided answers about tuning, configuration, and debugging.&lt;br /&gt;
&lt;br /&gt;
* An &#039;&#039;&#039;HPC Parallel Programming Seminar&#039;&#039;&#039; described aspects of parallel computing and provided an introduction to the basics of the MPI and OpenMP programming models.&lt;br /&gt;
&lt;br /&gt;
* The &#039;&#039;&#039;Grid Engine Advanced Seminar&#039;&#039;&#039; was presented to Sun Grid Engine administrators wanting to learn about administration facilities, expert configurations and best practices for complex use case scenarios. The class was conducted by Sun Grid Engine experts and engineers.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
====&amp;lt;big&amp;gt;Lustre Advanced  Seminar and Open Storage Track Agenda and Slides&amp;lt;/big&amp;gt;====&lt;br /&gt;
&lt;br /&gt;
The HPC Software Workshop featured an advanced Lustre seminar and numerous Open Storage Track sessions on select Lustre topics.&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 7 - Monday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The Lustre Advanced Technical Seminar covered topics of interest to Lustre users, and featured the following presentations from our senior-level developers:&lt;br /&gt;
&lt;br /&gt;
[[Media:Shpc-2009-lombardi-lustre-practice.pdf|&#039;&#039;Lustre in Practice&#039;&#039;]] - Johann Lombardi, Sun&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre: Some Protocol Basics&#039;&#039;&#039;&#039;&#039; - Johann Lombardi, Sun [[Media:Shpc-2009-lombardi-lustre-protocol.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Deep Dive&#039;&#039;&#039;&#039;&#039; - Andreas Dilger, Sun [[Media:Shpc-2009-adilger-workshop.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 8 - Tuesday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Roadmap&#039;&#039;&#039;&#039;&#039; - Dan Ferber, Sun [[Media:Tuesday_HPC_Workshop_Lustre_Roadmap.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre in Automotive&#039;&#039;&#039;&#039;&#039; - Daniel Kobras, science + computing ag [[Media:Tuesday_lustre_automotive.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre, ZFS &amp;amp; End-to-End Data Integrity&#039;&#039;&#039;&#039;&#039; - Andreas Dilger, Sun [[Media:Tuesday_shpc-2009-zfs.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Investigations at CERN&#039;&#039;&#039;&#039;&#039; - Arne Wiebalck, CERN [[Media:Tuesday_CERNLustreEvaluations-SunHPCWorkshop08SEP2009.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Clustered Metadata&#039;&#039;&#039;&#039;&#039; - Andreas Dilger, Sun [[Media:Tuesday_shpc-2009-cmd.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre, SGE &amp;amp; Physics Codes&#039;&#039;&#039;&#039;&#039; - Rajmund Krivec, J. Stefan Institute [[Media:Tuesday_lustre_and_physics_HPC09.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 9 - Wednesday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Sun Lustre Storage System&#039;&#039;&#039;&#039;&#039; - Torben Kling-Petersen, Sun [[Media:Wednesday_Snowbird_Regensburg_TKP.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre at FZ Juelich - Status and Goals&#039;&#039;&#039;&#039;&#039; - Otto Büchner, Forschungszentrum Juelich [[Media:Wednesday_Lustre_FZJ.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Sample Lustre Performance Data&#039;&#039;&#039;&#039;&#039; - Dan Ferber, Sun [[Media:Wednesday_HPC_Workshop_Lustre_Performance_Sep_2009_Rev2.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Performance Tips and Tricks&#039;&#039;&#039;&#039;&#039; - Torben Kling-Petersen, Sun [[Media:Wednesday_shpc-2009-benchmarking.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Data Management with Shared QFS and Storage Archive Manager (SAM)&#039;&#039;&#039;&#039;&#039; - Harriet Coverston, Sun [[Media:Wednesday_SAM-QFS.HPC.PUB.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Management &amp;amp; Monitoring Tools&#039;&#039;&#039;&#039;&#039; - Sven Trautman, Sun [[Media:Wednesday_Tools.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;September 10 - Thursday&#039;&#039;&#039;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Quota: Current and Future&#039;&#039;&#039;&#039;&#039; - Johann Lombardi, Sun [[Media:Thursday_Quota.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Community&#039;&#039;&#039;&#039;&#039; - Dan Ferber, Sun [[Media:Thursday_HPC_Workshop_Lustre_Community_Sep_2009_Rev2.pdf|&#039;&#039;- Slides&#039;&#039;]]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Installing_Lustre_from_Downloaded_RPMs&amp;diff=12258</id>
		<title>Installing Lustre from Downloaded RPMs</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Installing_Lustre_from_Downloaded_RPMs&amp;diff=12258"/>
		<updated>2011-01-20T18:29:03Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This procedure describes how to install Lustre™ from the RPM packages. For additional information, see [http://wiki.lustre.org/manual/LustreManual20_HTML/InstallingLustre.html#50438261_pgfId-1292575 Section 8: &#039;&#039;Installing the Lustre Software&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; In all Lustre installations, the server kernel that runs on an MDS, MGS or OSS must be patched. However, running a patched kernel on a Lustre client is optional and only required if the client will be used for multiple purposes, such as running as both a client and an OST.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Caution:&#039;&#039;&#039;&#039;&#039; Lustre contains kernel modifications that interact with storage devices and may introduce security issues and data loss if not installed, configured or administered properly. Before installing Lustre, back up &#039;&#039;ALL&#039;&#039; data.&lt;br /&gt;
&lt;br /&gt;
Complete this procedure to install Lustre from RPMs:&lt;br /&gt;
&lt;br /&gt;
1. &#039;&#039;Verify that all of the Lustre installation requirements have been met.&#039;&#039; For more information on these prerequisites, see [[Preparing to Install Lustre]].&lt;br /&gt;
&lt;br /&gt;
2. &#039;&#039;Download the Lustre RPMs&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;On the [http://www.oracle.com/technetwork/indexes/downloads/sun-az-index-095901.html#L Lustre download site], select your platform.&#039;&#039; The files required to install Lustre (kernels, modules and utilities RPMs) will be listed.&lt;br /&gt;
:b. &#039;&#039;Download the required files&#039;&#039;. Use the Sun Download Manager (SDM) or download the files individually.&lt;br /&gt;
&lt;br /&gt;
3. &#039;&#039;Install the Lustre packages.&#039;&#039;&lt;br /&gt;
Some Lustre packages are installed on servers (MDS and OSSs), and others are installed on Lustre clients. Lustre packages must be installed in a specific order.&lt;br /&gt;
&lt;br /&gt;
:a. &#039;&#039;For each Lustre package, determine if it needs to be installed on servers and/or clients&#039;&#039;. See [[Lustre Packages]] to determine where to install a specific package. Depending on your platform, not all of the listed files need to be installed.&lt;br /&gt;
&lt;br /&gt;
::&#039;&#039;&#039;&#039;&#039;Caution:&#039;&#039;&#039;&#039;&#039; For a non-production Lustre environment or for testing, a Lustre client and server can run on the same machine. However, for best performance in a production environment, Lustre must be installed on dedicated systems. Performance and other issues can occur when an MDS or OSS and a client are running on the same machine. The MDS and MGS can run on the same machine.&lt;br /&gt;
&lt;br /&gt;
:b. &#039;&#039;Install the kernel, modules and &#039;&#039;ldiskfs&#039;&#039; packages.&#039;&#039; Use the &#039;&#039;rpm -ivh&#039;&#039; command to install the kernel, module and ldiskfs packages. For example:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rpm -ivh kernel-lustre-smp-&amp;lt;ver&amp;gt; kernel-ib-&amp;lt;ver&amp;gt; lustre-modules-&amp;lt;ver&amp;gt; lustre-ldiskfs-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:c. &#039;&#039;Install the utilities/userspace packages.&#039;&#039; Use the &#039;&#039;rpm -ivh&#039;&#039; command to install the utilities packages. For example:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rpm -ivh lustre-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:d. &#039;&#039;Install the&#039;&#039; e2fsprogs &#039;&#039;package.&#039;&#039; Use the &#039;&#039;rpm -i&#039;&#039; command to install the &#039;&#039;e2fsprogs&#039;&#039; package. For example:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rpm -i e2fsprogs-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:&#039;&#039;&#039;&#039;&#039;Note:&#039;&#039;&#039;&#039;&#039; If a version of &#039;&#039;e2fsprogs&#039;&#039; is already installed on your Linux system, install the Lustre-specific version of &#039;&#039;e2fsprogs&#039;&#039; using &#039;&#039;rpm -U&#039;&#039; to update the installed &#039;&#039;e2fsprogs&#039;&#039; package. For example:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
$ rpm -U e2fsprogs-&amp;lt;ver&amp;gt;&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
:The &#039;&#039;rpm&#039;&#039; command options &#039;&#039;--force&#039;&#039; or &#039;&#039;--nodeps&#039;&#039; are not required to install or update &#039;&#039;e2fsprogs&#039;&#039;. We specifically recommend that you not use these options.&lt;br /&gt;
&lt;br /&gt;
:e. &#039;&#039;(Optional) If you want to add any optional packages to your Lustre file system, install them now.&#039;&#039; Optional packages include file system creation and repair tools, debugging tools, test programs and scripts, Linux kernel and Lustre source code, and other packages. A complete list of optional packages for your platform is provided on the [http://www.oracle.com/technetwork/indexes/downloads/sun-az-index-095901.html#L Lustre download site].&lt;br /&gt;
&lt;br /&gt;
4. &#039;&#039;Verify that the boot loader &#039;&#039;(grub.conf &#039;&#039;or&#039;&#039; lilo.conf) &#039;&#039;has been updated to load the patched kernel.&#039;&#039;&lt;br /&gt;
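&lt;br /&gt;
:For example, a &#039;&#039;grub.conf&#039;&#039; entry for a patched kernel might look like this sketch (the version placeholder and root device are examples only; use the values for your system):&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;pre&amp;gt;&lt;br /&gt;
title Linux Lustre&lt;br /&gt;
        root (hd0,0)&lt;br /&gt;
        kernel /vmlinuz-&amp;lt;lustre-ver&amp;gt;smp ro root=/dev/sda2&lt;br /&gt;
        initrd /initrd-&amp;lt;lustre-ver&amp;gt;smp.img&lt;br /&gt;
&amp;lt;/pre&amp;gt;&lt;br /&gt;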
&lt;br /&gt;
5. &#039;&#039;Reboot the servers.&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
Proceed to [[Configuring the Lustre File System]] to configure Lustre Networking (LNET) and the Lustre file system.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Learn:Learn&amp;diff=12257</id>
		<title>Learn:Learn</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Learn:Learn&amp;diff=12257"/>
		<updated>2011-01-20T18:27:09Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Access resources such as information about current and upcoming Lustre features, a roadmap, publications about Lustre and training opportunities. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Interoperability, Features and Roadmap&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These resources detail Lustre releases and plans. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_Release_Information|Lustre Release Information]]&amp;lt;/ins&amp;gt; lists current Lustre releases and supported kernels and networks for these releases. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_1.8|Lustre 1.8]]&amp;lt;/ins&amp;gt; describes features and benefits offered by upgrading to Lustre 1.8. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_2.0|Lustre 2.0]]&amp;lt;/ins&amp;gt; describes features being developed for the next-generation release of Lustre. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryRight&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Publications&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use these resources to learn about Lustre.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_Community_Events, Conferences and Meetings|Lustre Presentations]]&amp;lt;/ins&amp;gt; by Lustre engineers at Lustre User Group meetings, conferences, Lustre all-hands meetings, and other Lustre-related events. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre Publications]]&amp;lt;/ins&amp;gt; from a variety of sources, including links to videos and podcasts, white papers and blueprints.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[http://wiki.lustre.org/manual/LustreManual20_HTML/index.html Lustre Operations Manual]&amp;lt;/ins&amp;gt;, the primary source for Lustre user documentation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Lustre Projects and Future Features&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To learn about active Lustre projects that span releases or features that are not in Lustre 1.8 or Lustre 2.0, see &amp;lt;ins&amp;gt;[[Lustre Projects]]&amp;lt;/ins&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryRight&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Lustre Internals&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* See the &amp;lt;ins&amp;gt;[http://wiki.lustre.org/lid/subsystem-map/subsystem-map.html Lustre Internals Documentation]&amp;lt;/ins&amp;gt; for detailed descriptions of the Lustre codebase in an easily accessible format.&lt;br /&gt;
&lt;br /&gt;
* The LID includes &amp;lt;ins&amp;gt;[http://wiki.lustre.org/lid/doxygen.api/modules.html Doxygen API information]&amp;lt;/ins&amp;gt; for various Lustre modules.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Training&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lustre training is available from Oracle University at locations in the &amp;lt;ins&amp;gt;[http://www.sun.com/training/catalog/courses/CL-400.xml United States]&amp;lt;/ins&amp;gt;, &amp;lt;ins&amp;gt;[http://uk.sun.com/training/catalog/courses/CL-400.xml Great Britain]&amp;lt;/ins&amp;gt;, &amp;lt;ins&amp;gt;[https://www.suntrainingcatalogue.com/eduserv/client/loadCourse.do?coId=de_DE_CL-400&amp;amp;l=de_DE Germany], and [http://au.sun.com/training/catalog/courses/CL-400.xml APAC countries]&amp;lt;/ins&amp;gt;.&lt;br /&gt;
* &amp;lt;ins&amp;gt;[http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-400)]&amp;lt;/ins&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Learn:Learn&amp;diff=12256</id>
		<title>Learn:Learn</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Learn:Learn&amp;diff=12256"/>
		<updated>2011-01-20T18:26:44Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Access resources such as information about current and upcoming Lustre features, a roadmap, publications about Lustre and training opportunities. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Interoperability, Features and Roadmap&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
These resources detail Lustre releases and plans. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_Release_Information|Lustre Release Information]]&amp;lt;/ins&amp;gt; lists current Lustre releases and supported kernels and networks for these releases. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_1.8|Lustre 1.8]]&amp;lt;/ins&amp;gt; describes features and benefits offered by upgrading to Lustre 1.8. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_2.0|Lustre 2.0]]&amp;lt;/ins&amp;gt; describes features being developed for the next-generation release of Lustre. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryRight&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Publications&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Use these resources to learn about Lustre.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre_Community_Events, Conferences and Meetings|Lustre Presentations]]&amp;lt;/ins&amp;gt; by Lustre engineers at Lustre User Group meetings, conferences, Lustre all-hands meetings, and other Lustre-related events. &lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[[Lustre Publications]]&amp;lt;/ins&amp;gt; from a variety of sources, including links to videos and podcasts, white papers and blueprints.&lt;br /&gt;
&lt;br /&gt;
* &amp;lt;ins&amp;gt;[http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;]&amp;lt;/ins&amp;gt;, the primary source for Lustre user documentation.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Lustre Projects and Future Features&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To learn about active Lustre projects that span releases or features that are not in Lustre 1.8 or Lustre 2.0, see &amp;lt;ins&amp;gt;[[Lustre Projects]]&amp;lt;/ins&amp;gt;. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryRight&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Lustre Internals&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
* See the &amp;lt;ins&amp;gt;[http://wiki.lustre.org/lid/subsystem-map/subsystem-map.html Lustre Internals Documentation]&amp;lt;/ins&amp;gt; for detailed descriptions of the Lustre codebase in an easily accessible format.&lt;br /&gt;
&lt;br /&gt;
* The LID includes &amp;lt;ins&amp;gt;[http://wiki.lustre.org/lid/doxygen.api/modules.html Doxygen API information]&amp;lt;/ins&amp;gt; for various Lustre modules.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;div class=&amp;quot;categoryLeft&amp;quot;&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&amp;lt;strong&amp;gt;Training&amp;lt;/strong&amp;gt;&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lustre training is available from Oracle University at locations in the &amp;lt;ins&amp;gt;[http://www.sun.com/training/catalog/courses/CL-400.xml United States]&amp;lt;/ins&amp;gt;, &amp;lt;ins&amp;gt;[http://uk.sun.com/training/catalog/courses/CL-400.xml Great Britain]&amp;lt;/ins&amp;gt;, &amp;lt;ins&amp;gt;[https://www.suntrainingcatalogue.com/eduserv/client/loadCourse.do?coId=de_DE_CL-400&amp;amp;l=de_DE Germany], and [http://au.sun.com/training/catalog/courses/CL-400.xml APAC countries]&amp;lt;/ins&amp;gt;.&lt;br /&gt;
* &amp;lt;ins&amp;gt;[http://www.sun.com/training/catalog/courses/CL-400.xml Administering Lustre Based Clusters (CL-400)]&amp;lt;/ins&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_1.8&amp;diff=12255</id>
		<title>Lustre 1.8</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_1.8&amp;diff=12255"/>
		<updated>2011-01-20T18:24:55Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Version-Based Recovery */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
Lustre™ 1.8.0.1 introduces several robust, new features and improved system functionality. This page provides feature descriptions and lists the benefits offered by upgrading to the Lustre 1.8 release branch. The change log and release notes are [[Change_Log_1.8|here]].&lt;br /&gt;
&lt;br /&gt;
==Adaptive Timeouts==&lt;br /&gt;
&lt;br /&gt;
The adaptive timeouts feature (enabled, by default) causes Lustre to use an adaptive mechanism to set RPC timeouts, so users no longer have to tune the obd_timeout value. RPC service time histories are tracked on all servers for each service, and estimates for future RPCs are reported back to clients. Clients use these service time estimates along with their own observations of the network delays to set future RPC timeout values. &lt;br /&gt;
&lt;br /&gt;
If server request processing slows down, its estimates increase and the clients allow more time for RPC completion before retrying. If RPCs queued up on the server approach their timeouts, the server sends early replies to the client, telling it to allow more time. Conversely, as the load on the server is reduced, the RPC timeout values decrease, allowing faster client detection of non-responsive servers and faster attempts to reconnect to a server&#039;s failover partner.&lt;br /&gt;
&lt;br /&gt;
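A sketch of inspecting the current timeout estimates for a service on a server, assuming the &#039;&#039;lctl get_param&#039;&#039; interface available in this release (the service pattern is an example):&lt;br /&gt;
 oss&amp;gt; lctl get_param -n ost.*.ost_io.timeouts&lt;br /&gt;
&lt;br /&gt;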
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 1.8 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Adaptive timeouts offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Simplified management for small and large clusters.&lt;br /&gt;
* Automatically adjusts RPC timeouts as network conditions and server load change.&lt;br /&gt;
* Reduces server recovery time in cases where the server load is low at the time of failure.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about adaptive timeouts, see:&lt;br /&gt;
&lt;br /&gt;
* [[Architecture - Adaptive_Timeouts_-_Use_Cases|Architecture page - Adaptive timeouts (use cases)]]&lt;br /&gt;
* [[Media:Adaptive-timeouts-hld.pdf|HLD - Adaptive RPC timeouts]]&lt;br /&gt;
&lt;br /&gt;
==Client Interoperability==&lt;br /&gt;
&lt;br /&gt;
The client interoperability feature enables Lustre 1.8 clients to work with a new network protocol that will be introduced in Lustre 2.0. This feature allows transparent client, server, network and storage interoperability when migrating from 1.6 architecture-based clusters to clusters with 2.0 architecture-based servers. Lustre 1.8.x clients will interoperate with Lustre 2.0 servers.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 1.8 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Client interoperability offers this benefit:&lt;br /&gt;
&lt;br /&gt;
* When Lustre 2.x is released, Lustre 1.8.x users will be able to upgrade to 2.x servers while the Lustre filesystem is up and running. This transparent upgrade feature will enable users to upgrade their servers to Lustre 2.x and reboot them without disturbing applications using the filesystem on clients. It will no longer be necessary to unmount clients from the filesystem to upgrade servers to the new software. After the 2.x upgrade, Lustre 2.x servers will interoperate with 1.8.x clients.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on client interoperability, see:&lt;br /&gt;
&lt;br /&gt;
* [[Architecture - Interoperability_fids_zfs|Architecture page - Interoperability FIDs and ZFS]]&lt;br /&gt;
* [[Media:Interop_disk_fidea.pdf|HLD - Interoperability at the Server Side]]&lt;br /&gt;
* [[Media:Sptlrpc_interop-hld.pdf|HLD - Sptlrpc interoperability]]&lt;br /&gt;
* [[Media:Interop-client-recov-dld.pdf|DLD - Interoperable Client Recovery]]&lt;br /&gt;
* [[Media:Sptlrpc_interop-dld.pdf|DLD - Sptlrpc interoperability]]&lt;br /&gt;
&lt;br /&gt;
==OSS Read Cache==&lt;br /&gt;
&lt;br /&gt;
The OSS read cache feature provides read-only caching of data on an OSS. It uses a regular Linux pagecache to store the data. OSS read cache improves Lustre performance when several clients access the same data set, and the data fits the OSS cache (which can occupy most of the available memory). The overhead of OSS read cache is very low on modern CPUs, and cache misses do not negatively impact performance compared to Lustre releases before OSS read cache was available.&lt;br /&gt;
&lt;br /&gt;
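A sketch of checking and toggling the cache on an OSS, assuming the &#039;&#039;read_cache_enable&#039;&#039; tunable exported by obdfilter:&lt;br /&gt;
 oss&amp;gt; lctl get_param obdfilter.*.read_cache_enable&lt;br /&gt;
 oss&amp;gt; lctl set_param obdfilter.*.read_cache_enable=0&lt;br /&gt;
&lt;br /&gt;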
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 1.8 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OSS read cache can improve Lustre performance, and offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Allows OSTs to cache read data more frequently&lt;br /&gt;
* Improves repeated reads to match network speeds instead of disk speeds&lt;br /&gt;
* Provides the building block for OST write cache (small write aggregation)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on OSS read cache, see:&lt;br /&gt;
&lt;br /&gt;
* [[Architecture - Caching_OSS|Architecture page - Caching OSS]]&lt;br /&gt;
&lt;br /&gt;
==OST Pools==&lt;br /&gt;
&lt;br /&gt;
The OST pools feature allows the administrator to name a group of OSTs for file striping purposes. For instance, a group of local OSTs could be defined for faster access; a group of higher-performance OSTs could be defined for specific applications; a group of non-RAID OSTs could be defined for scratch files; or groups of OSTs could be defined for particular users. &lt;br /&gt;
&lt;br /&gt;
Pools are defined by the system administrator, using regular Lustre tools (lctl). Pool usage is specified and stored along with other striping information (e.g., stripe count, stripe size) for directories or individual files (lfs setstripe or llapi_create_file()). Traditional automated OST selection optimizations (QOS) occur within a pool (e.g., free-space leveling within the pool). OSTs can be added to or removed from a pool at any time (and existing files always remain in place and available).&lt;br /&gt;
&lt;br /&gt;
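A minimal sketch of creating and using a pool (the filesystem name &#039;&#039;testfs&#039;&#039;, pool name &#039;&#039;local&#039;&#039;, OST range, and directory are examples only):&lt;br /&gt;
 mgs&amp;gt; lctl pool_new testfs.local&lt;br /&gt;
 mgs&amp;gt; lctl pool_add testfs.local testfs-OST[0000-0003]&lt;br /&gt;
 client&amp;gt; lfs setstripe --pool local /mnt/testfs/scratch&lt;br /&gt;
&lt;br /&gt;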
OST pools characteristics include:&lt;br /&gt;
&lt;br /&gt;
* An OST can be associated with multiple pools&lt;br /&gt;
* No ordering of OSTs is implied or defined within a pool&lt;br /&gt;
* OST membership in a pool can change over time&lt;br /&gt;
* A directory can default to a specific pool, and new files/subdirectories created therein will use that pool&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;NOTE:&#039;&#039;&#039; In its current implementation, the OST pools feature does not implement an automated policy or restrict users from creating files in any of the pools; it must be managed directly by administrator/user. It is a building block for policy-managed storage. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 1.8 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
OST pools offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Allows sets of OSTs to be managed via named groups&lt;br /&gt;
* Pools can separate heterogeneous OSTs within the same filesystem&lt;br /&gt;
** Fast vs. slow disks&lt;br /&gt;
** Local network vs. remote network (e.g. WAN)&lt;br /&gt;
** RAID 1 vs. RAID5 backing storage, etc.&lt;br /&gt;
** Specific OSTs for users/groups/applications (by directory)&lt;br /&gt;
* Easier disk usage policy implementation for administrators&lt;br /&gt;
* Hardware can be more closely optimized for particular usage patterns&lt;br /&gt;
* Human-readable stripe mappings&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on OST pools, see:&lt;br /&gt;
&lt;br /&gt;
* [[Architecture - Pools_of_targets|Architecture page - OST pools]]&lt;br /&gt;
* [[Media:OstPools-DLD.pdf|DLD - OST Pools]]&lt;br /&gt;
* [[Media:Ostpools-large-scale_testplan.pdf|Test plan - OST pools]]&lt;br /&gt;
&lt;br /&gt;
==Version-Based Recovery==&lt;br /&gt;
&lt;br /&gt;
Version-based Recovery (VBR) improves the robustness of client recovery operations and allows Lustre to recover, even if multiple clients fail at the same time as the server. With VBR, recovery is more flexible; not all clients are evicted if some miss recovery, and a missed client may try to recover after the server recovery window.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 1.8 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
VBR functionality in Lustre 1.8 allows more flexible recovery after a failure. Previous Lustre releases enforced a strict, in-order replay condition that required all clients to reconnect during the recovery period. If a client was missing and the recovery period timed out, then the remaining clients were evicted. With VBR, conditional out-of-order replay is allowed. VBR uses versions to detect conflicting transactions. If an object&#039;s version matches what is expected, the transaction is replayed. If there is a version mismatch, clients attempting to modify the object are stopped. Recovery continues even if some clients do not reconnect (the missed clients can try to recover later). With VBR, Lustre clients may successfully recover in a wider variety of failure scenarios.&lt;br /&gt;
&lt;br /&gt;
VBR offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Improves the robustness of client recovery operations&lt;br /&gt;
* Allows Lustre recovery to continue even if multiple clients fail at the same time as the server&lt;br /&gt;
* Provides a building block for disconnected client operations&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on VBR, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreRecovery.html#50438268_pgfId-1287769 Section 30.4: &#039;&#039;Version-based Recovery&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;br /&gt;
* [[Architecture - Version_Based_Recovery|Architecture page - VBR]]&lt;br /&gt;
* [[Media:20080612165106%21Version_base_recovery-hld.pdf|HLD - VBR]]&lt;br /&gt;
* [[Media:Version_recovery.pdf|DLD - VBR]]&lt;br /&gt;
* [[Media:VBR_phase2_large_scale_testplan.pdf|Test plan - VBR]]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Features&amp;diff=12254</id>
		<title>Lustre 2.0 Features</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Features&amp;diff=12254"/>
		<updated>2011-01-20T18:23:01Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Nov 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
The [[Lustre_2.0|Lustre 2.0]] release introduced several significant new features and improved system functionality. This page describes these features and lists the benefits of upgrading to the Lustre 2.0 release family.&lt;br /&gt;
&lt;br /&gt;
=Lustre 2.0.0=&lt;br /&gt;
&lt;br /&gt;
The initial [[Lustre 2.0]] release (known as 2.0.0) offers these features:&lt;br /&gt;
&lt;br /&gt;
===Changelogs===&lt;br /&gt;
&lt;br /&gt;
Changelogs record events that change the filesystem namespace or file metadata. Events such as file creation, deletion, renaming, and attribute changes are recorded with the target and parent file identifiers (FIDs), the name of the target, and a timestamp. These records can be used for a variety of purposes:&lt;br /&gt;
&lt;br /&gt;
* Record recent changes to feed into an archiving system.&lt;br /&gt;
* Use changelog entries to exactly replicate changes in a filesystem mirror.&lt;br /&gt;
* Set up &amp;quot;watch scripts&amp;quot; that take action on certain events or directories. Changelog records are persistent (on disk) until explicitly cleared by the user. They are guaranteed to accurately reflect on-disk changes in the event of a server failure.&lt;br /&gt;
* Maintain a rough audit trail (file/directory changes with timestamps, but no user information).&lt;br /&gt;
&lt;br /&gt;
These are sample changelog entries:&lt;br /&gt;
&lt;br /&gt;
 2 02MKDIR 4298396676 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics&lt;br /&gt;
 3 01CREAT 4298402264 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg&lt;br /&gt;
 4 06UNLNK 4298404466 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg&lt;br /&gt;
 5 07RMDIR 4298405394 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics &lt;br /&gt;
&lt;br /&gt;
The record types are:&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
!Record Type&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MARK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;internal recordkeeping&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;CREAT&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MKDIR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;directory creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;HLINK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;hardlink&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;SLINK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;softlink&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MKNOD&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;other file creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;UNLNK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file removal&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RMDIR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;directory removal&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RNMFM&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;rename, original&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RNMTO&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;rename, final&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;IOCTL&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;ioctl on file or directory&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;TRUNC&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file truncated&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;SATTR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;attribute change&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;XATTR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;extended attribute change&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;UNKNW&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;unknown op&amp;lt;/small&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
FID-to-full-pathname and pathname-to-FID functions are also included to map target and parent FIDs into the filesystem namespace.&lt;br /&gt;
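&lt;br /&gt;
As an illustrative sketch (the MDT name &#039;&#039;lustre-MDT0000&#039;&#039;, the mount point and the returned changelog user ID &#039;&#039;cl1&#039;&#039; are assumptions), a changelog consumer is registered on the MDS, records are dumped and cleared from a client, and a FID from the sample entries above is mapped back to a pathname:&lt;br /&gt;
&lt;br /&gt;
 mds# lctl --device lustre-MDT0000 changelog_register&lt;br /&gt;
 client# lfs changelog lustre-MDT0000&lt;br /&gt;
 client# lfs changelog_clear lustre-MDT0000 cl1 0&lt;br /&gt;
 client# lfs fid2path /mnt/lustre [0x200000405:0x15f9:0x0]&lt;br /&gt;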
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Changelogs offer these benefits:&lt;br /&gt;
&lt;br /&gt;
* File/directory change notification&lt;br /&gt;
* Event notification&lt;br /&gt;
* Filesystem replication&lt;br /&gt;
* File backup policy decisions&lt;br /&gt;
* Audit trail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about changelogs, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreMonitoring.html#50438273_pgfId-1296751 Section 12.1: Changelogs - Lustre 2.0 manual]&lt;br /&gt;
&lt;br /&gt;
===Commit on Share===&lt;br /&gt;
&lt;br /&gt;
The Commit on Share (COS) feature prevents missing clients from causing cascading evictions of other clients. If some clients miss the recovery window, remaining clients are not evicted.&lt;br /&gt;
&lt;br /&gt;
When an MDS starts up and enters recovery mode after a failover or service restart, clients begin to reconnect and replay their uncommitted transactions. If one or more clients miss the recovery window, this may cause other clients to abort their transactions or be evicted. The transactions of evicted clients cannot be applied and are aborted. This causes a cascade effect as transactions dependent on the aborted ones fail and so on. COS addresses this problem by eliminating dependent transactions. With no dependent, uncommitted transactions to apply, the clients replay their requests independently without the risk of being evicted.&lt;br /&gt;
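&lt;br /&gt;
As a hedged example (the &#039;&#039;commit_on_sharing&#039;&#039; parameter name follows the Lustre 2.0 manual; verify it on your release), COS can be inspected and toggled on the MDS:&lt;br /&gt;
&lt;br /&gt;
 mds# lctl get_param mdt.*.commit_on_sharing&lt;br /&gt;
 mds# lctl set_param mdt.*.commit_on_sharing=1&lt;br /&gt;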
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
COS offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Allows clients to recover regardless of whether other clients have failed&lt;br /&gt;
* Reduces recovery problems when multiple node failures occur &lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on COS, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreRecovery.html#50438268_pgfId-1292073 Section 30.5: Commit on Share - Lustre 2.0 manual]&lt;br /&gt;
* [[Media:COS_TestPlan.pdf|COS Test Plan]]&lt;br /&gt;
&lt;br /&gt;
===lustre_rsync===&lt;br /&gt;
&lt;br /&gt;
The lustre_rsync feature provides namespace and data replication to an external (remote) backup system without having to scan the file system for inode changes and modification times. Lustre metadata changelogs are used to record file system changes and determine which directory and file operations to execute on the replicated system. The lustre_rsync feature differs from existing backup/replication/synchronization systems because it avoids full file system scans, which can be unreasonably time-consuming for very large file systems. Also, the lustre_rsync process can be resumed from where it left off, so the replicated file system is fully synchronized when the operation completes. Lustre_rsync may also be run bi-directionally for distinct directories.&lt;br /&gt;
&lt;br /&gt;
The replicated system may be another Lustre file system or any other file system. The replica is an exact copy of the namespace of the original file system at a given point in time. However, the replicated file system is &#039;&#039;&#039;not&#039;&#039;&#039; a snapshot of the source file system in that its contents may differ from the original file system&#039;s contents. On the replicated file system, a file&#039;s contents will be the data in the file at the time the file transfer occurred. &lt;br /&gt;
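&lt;br /&gt;
A minimal sketch (the mount points, MDT name and registered changelog user &#039;&#039;cl1&#039;&#039; are assumptions): the first run replicates changes and records its progress in a status log; a later run can resume from the status log alone.&lt;br /&gt;
&lt;br /&gt;
 client# lustre_rsync --source=/mnt/lustre --target=/mnt/target --mdt=lustre-MDT0000 --user=cl1 --statuslog replicate.log&lt;br /&gt;
 client# lustre_rsync --statuslog replicate.log&lt;br /&gt;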
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lustre_rsync offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Namespace-coherent duplication of large file systems without scanning the complete file system&lt;br /&gt;
* Functionality is safe when run repeatedly or run after an aborted attempt&lt;br /&gt;
* Synchronization facility to switch the role of source and target file systems &lt;br /&gt;
* In the case of recovery, the feature provides for reverse replication&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on replication, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/SystemConfigurationUtilities_HTML.html#50438219_pgfId-1317225 Section 36.13: Lustre_rsync - Lustre 2.0 manual]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Features&amp;diff=12253</id>
		<title>Lustre 2.0 Features</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Features&amp;diff=12253"/>
		<updated>2011-01-20T18:19:23Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Nov 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
The [[Lustre_2.0|Lustre 2.0]] release introduced several significant new features and improved system functionality. This page describes these features and lists the benefits of upgrading to the Lustre 2.0 release family.&lt;br /&gt;
&lt;br /&gt;
=Lustre 2.0.0=&lt;br /&gt;
&lt;br /&gt;
The initial [[Lustre 2.0]] release (known as 2.0.0) offers these features:&lt;br /&gt;
&lt;br /&gt;
===Changelogs===&lt;br /&gt;
&lt;br /&gt;
Changelogs record events that change the filesystem namespace or file metadata. Events such as file creation, deletion, renaming, and attribute changes are recorded with the target and parent file identifiers (FIDs), the name of the target, and a timestamp. These records can be used for a variety of purposes:&lt;br /&gt;
&lt;br /&gt;
* Record recent changes to feed into an archiving system.&lt;br /&gt;
* Use changelog entries to exactly replicate changes in a filesystem mirror.&lt;br /&gt;
* Set up &amp;quot;watch scripts&amp;quot; that take action on certain events or directories. Changelog records are persistent (on disk) until explicitly cleared by the user. They are guaranteed to accurately reflect on-disk changes in the event of a server failure.&lt;br /&gt;
* Maintain a rough audit trail (file/directory changes with timestamps, but no user information).&lt;br /&gt;
&lt;br /&gt;
These are sample changelog entries:&lt;br /&gt;
&lt;br /&gt;
 2 02MKDIR 4298396676 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics&lt;br /&gt;
 3 01CREAT 4298402264 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg&lt;br /&gt;
 4 06UNLNK 4298404466 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg&lt;br /&gt;
 5 07RMDIR 4298405394 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics &lt;br /&gt;
&lt;br /&gt;
The record types are:&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
!Record Type&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MARK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;internal recordkeeping&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;CREAT&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MKDIR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;directory creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;HLINK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;hardlink&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;SLINK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;softlink&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;MKNOD&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;other file creation&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;UNLNK&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file removal&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RMDIR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;directory removal&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RNMFM&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;rename, original&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;RNMTO&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;rename, final&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;IOCTL&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;ioctl on file or directory&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;TRUNC&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;regular file truncated&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;SATTR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;attribute change&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;XATTR&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;extended attribute change&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;small&amp;gt;&amp;lt;strong&amp;gt;UNKNW&amp;lt;/strong&amp;gt;&amp;lt;/small&amp;gt;||&amp;lt;small&amp;gt;unknown op&amp;lt;/small&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
FID-to-full-pathname and pathname-to-FID functions are also included to map target and parent FIDs into the filesystem namespace.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Changelogs offer these benefits:&lt;br /&gt;
&lt;br /&gt;
* File/directory change notification&lt;br /&gt;
* Event notification&lt;br /&gt;
* Filesystem replication&lt;br /&gt;
* File backup policy decisions&lt;br /&gt;
* Audit trail&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information about changelogs, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreMonitoring.html#50618497_pgfId-1296751 Changelogs Topic - Lustre 2.0 manual]&lt;br /&gt;
&lt;br /&gt;
===Commit on Share===&lt;br /&gt;
&lt;br /&gt;
The Commit on Share (COS) feature prevents missing clients from causing cascading evictions of other clients. If some clients miss the recovery window, remaining clients are not evicted.&lt;br /&gt;
&lt;br /&gt;
When an MDS starts up and enters recovery mode after a failover or service restart, clients begin to reconnect and replay their uncommitted transactions. If one or more clients miss the recovery window, this may cause other clients to abort their transactions or be evicted. The transactions of evicted clients cannot be applied and are aborted. This causes a cascade effect as transactions dependent on the aborted ones fail and so on. COS addresses this problem by eliminating dependent transactions. With no dependent, uncommitted transactions to apply, the clients replay their requests independently without the risk of being evicted.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
COS offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Allows clients to recover regardless of whether other clients have failed&lt;br /&gt;
* Reduces recovery problems when multiple node failures occur &lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on COS, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreRecovery.html#50618492_pgfId-1292073 Commit on Share Topic - Lustre 2.0 manual]&lt;br /&gt;
* [[Media:COS_TestPlan.pdf|COS Test Plan]]&lt;br /&gt;
&lt;br /&gt;
===lustre_rsync===&lt;br /&gt;
&lt;br /&gt;
The lustre_rsync feature provides namespace and data replication to an external (remote) backup system without having to scan the file system for inode changes and modification times. Lustre metadata changelogs are used to record file system changes and determine which directory and file operations to execute on the replicated system. The lustre_rsync feature differs from existing backup/replication/synchronization systems because it avoids full file system scans, which can be unreasonably time-consuming for very large file systems. Also, the lustre_rsync process can be resumed from where it left off, so the replicated file system is fully synchronized when the operation completes. Lustre_rsync may also be run bi-directionally for distinct directories.&lt;br /&gt;
&lt;br /&gt;
The replicated system may be another Lustre file system or any other file system. The replica is an exact copy of the namespace of the original file system at a given point in time. However, the replicated file system is &#039;&#039;&#039;not&#039;&#039;&#039; a snapshot of the source file system in that its contents may differ from the original file system&#039;s contents. On the replicated file system, a file&#039;s contents will be the data in the file at the time the file transfer occurred. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Why should I upgrade to Lustre 2.0.0 to get it?&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Lustre_rsync offers these benefits:&lt;br /&gt;
&lt;br /&gt;
* Namespace-coherent duplication of large file systems without scanning the complete file system&lt;br /&gt;
* Functionality is safe when run repeatedly or run after an aborted attempt&lt;br /&gt;
* Synchronization facility to switch the role of source and target file systems &lt;br /&gt;
* In the case of recovery, the feature provides for reverse replication&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;Additional Resources&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For more information on replication, see:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/BackupAndRestore.html#50618423_pgfId-1293842 Lustre_rsync Topic - Lustre 2.0 manual]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Release_Milestone_Status&amp;diff=12252</id>
		<title>Lustre 2.0 Release Milestone Status</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_2.0_Release_Milestone_Status&amp;diff=12252"/>
		<updated>2011-01-20T18:18:37Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Lustre 2.0 Beta-1 April 15, 2010 */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Apr 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
For the upcoming Lustre™ 2.0 GA release, each alpha or beta milestone achieved is summarized below, including the focus of the milestone, testing results, a link to that release of the Lustre software, and links to additional documentation.  &lt;br /&gt;
&lt;br /&gt;
You can get to the latest milestone release from: http://downloads.lustre.org/public/lustre/v2.0/latest/&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Beta-1 April 15, 2010==&lt;br /&gt;
&lt;br /&gt;
The focus of the first Beta release (Beta-1) of 2.0 was to continue improving the stability of Lustre while landing fixes to HEAD and completing additional bug fixes. 116 total fixes were landed during this cycle.&lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 and OEL5/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available from lustre.org. &lt;br /&gt;
&lt;br /&gt;
The Beta-1 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/beta/Lustre_2.0_Beta1/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Beta-1 release:&lt;br /&gt;
* [[Media:Lustre2.0Beata1Summary.pdf|Lustre 2.0 Beta-1 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
* [[Media:HEADDailyTestingResults-Beta1.pdf|Lustre 2.0 Beta-1 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Beta-1 Milestone&lt;br /&gt;
* Lustre 2.0 Operations Manual includes: installation, configuration, administration, troubleshooting and reference information - [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html HTML] and [http://wiki.lustre.org/images/3/35/821-2076-10.pdf PDF].&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-7 February 10, 2010==&lt;br /&gt;
&lt;br /&gt;
The focus of the seventh Alpha release (Alpha-7) of 2.0 was to continue to improve stability of Lustre through landing of fixes to the main trunk of the Lustre code base. 157 total fixes were landed during this cycle.&lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 and OEL5/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available from lustre.org. &lt;br /&gt;
&lt;br /&gt;
The Alpha-7 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha7/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-7 release:&lt;br /&gt;
* [[Media:Lustre2.0Alpha7Summary-2.pdf|Lustre 2.0 Alpha-7 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
* [[Media:HeadDailyTesting-Alpha-7.pdf|Lustre 2.0 Alpha-7 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-7 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-6 December 2, 2009==&lt;br /&gt;
&lt;br /&gt;
The focus of the sixth Alpha release (Alpha-6) of 2.0 was to continue to improve the stability of Lustre while landing fixes to HEAD and completing additional bug fixes. 104 total fixes were landed during this cycle. &lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 servers with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available [[Media:HEADDailyTestingResults-Alpha6.pdf|here]]. &lt;br /&gt;
&lt;br /&gt;
The Alpha-6 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha6/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-6 release:&lt;br /&gt;
* [[Media:Lustre2.0Alpha6Summary.pdf|Lustre 2.0 Alpha-6 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
* [[Media:HEADDailyTestingResults-Alpha6.pdf|Lustre 2.0 Alpha-6 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-6 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-5 October 14, 2009==&lt;br /&gt;
&lt;br /&gt;
The focus of the fifth Alpha release (Alpha-5) of 2.0 was to continue to improve the stability of Lustre while landing fixes to HEAD and completing additional bug fixes. 155 total fixes were landed during this cycle. &lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available [[Media:HEADDailyTestingResults-Alpha5.pdf|here]].&lt;br /&gt;
&lt;br /&gt;
The Alpha-5 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha5/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-5 release:&lt;br /&gt;
*[[Media:Lustre2.0Alpha5Summary.pdf|Lustre 2.0 Alpha-5 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
*[[Media:HEADDailyTestingResults-Alpha5.pdf|Lustre 2.0 Alpha-5 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-5 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-4 July 1, 2009==&lt;br /&gt;
&lt;br /&gt;
The focus of the fourth Alpha release (Alpha-4) of 2.0 was to continue to improve stability of Lustre while landing fixes to HEAD and completing additional bug fixes. 120 total fixes were landed during this cycle. &lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 and SLES 10/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available [[Media:HEADDailyTestingResults-Alpha4.pdf|here]]. &lt;br /&gt;
&lt;br /&gt;
This Alpha-4 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha4/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-4 release:&lt;br /&gt;
*[[Media:Lustre2.0Alpha4Summary.pdf|Lustre 2.0 Alpha-4 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
*[[Media:HEADDailyTestingResults-Alpha4.pdf|Lustre 2.0 Alpha-4 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-4 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-3 June 10, 2009==&lt;br /&gt;
&lt;br /&gt;
The focus of the third Alpha release (Alpha-3) of 2.0 was to continue to improve stability of Lustre while landing queued fixes to HEAD and completing additional bug fixes. 95 total fixes were landed during this cycle. Notable check-ins during the period:&lt;br /&gt;
*Final patches for Size on MDS (SOM) preview were checked-in (Bug 1028)&lt;br /&gt;
*OFED 1.4.1 support was landed&lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 and SLES 10/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available [[Media:HEADDailyTestingResults-Alpha3.pdf|here]]. Additionally, the backlog of HEAD patches that had been queued for landing once HEAD stabilized was worked through, and those patches were landed. &lt;br /&gt;
&lt;br /&gt;
This Alpha-3 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha3/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-3 release:&lt;br /&gt;
*[[Media:Lustre2.0Alpha3Summary.pdf|Lustre 2.0 Alpha-3 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
*[[Media:HEADDailyTestingResults-Alpha3.pdf|Lustre 2.0 Alpha-3 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-3 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha-2 May 12, 2009==&lt;br /&gt;
&lt;br /&gt;
The focus of the second Alpha release (Alpha-2) of 2.0 was to continue improving the stability of Lustre while landing queued fixes to HEAD and completing additional bug fixes. 85 total fixes were landed during this cycle. &lt;br /&gt;
&lt;br /&gt;
This release was tested on RHEL5/x86_64 and SLES 10/x86_64 with both IB and TCP connectivity. Known failures are documented in the HEAD Daily Testing Document available [[Media:HEADDailyTestingResults-Alpha2.pdf|here]]. Additionally, IOR and simul runs were executed iteratively on a modest cluster (approximately 75 clients) and ran successfully for 18 hours with iSCSI-attached hardware in file-per-process mode. IOR and simul were also tested on a larger direct-attached cluster, where IOR ran successfully for 4 hours.&lt;br /&gt;
&lt;br /&gt;
This Alpha-2 release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha2/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-2 release:&lt;br /&gt;
*[[Media:Lustre2.0Alpha2Summary.pdf|Lustre 2.0 Alpha-2 Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
*[[Media:HEADDailyTestingResults-Alpha2.pdf|Lustre 2.0 Alpha-2 Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each daily testing run for the duration of the Alpha-2 Milestone&lt;br /&gt;
&lt;br /&gt;
==Lustre 2.0 Alpha April 16, 2009==&lt;br /&gt;
&lt;br /&gt;
The goal of the first Alpha release of Lustre 2.0 was to demonstrate basic stability of Lustre on a single platform/distro. For this release, RHEL5/x86_64 was selected because it is the most often downloaded distribution from lustre.org. &lt;br /&gt;
&lt;br /&gt;
To achieve our goal, daily &#039;&#039;acc-sm&#039;&#039; runs were executed on small test clusters, and bug-fixing priority was given to bugs that prevented clean &#039;&#039;acc-sm&#039;&#039; runs. Additionally, IOR and simul runs were executed on a modest cluster (approximately 75 clients) to verify basic stability. During this Alpha phase, only bug fixes advancing stability were landed; all other HEAD landing requests were held and deferred to a later landing date after Alpha. &lt;br /&gt;
&lt;br /&gt;
This first alpha release of Lustre 2.0 can be downloaded from http://downloads.lustre.org/public/lustre/v2.0/alpha/Lustre_2.0_Alpha/&lt;br /&gt;
&lt;br /&gt;
The following documentation is available related to the Alpha-1 release:&lt;br /&gt;
*[[Media:Lustre2.0AlphaSummary.pdf|Lustre 2.0 Alpha Summary]] includes: goals, timelines, fixes landed and milestone outcomes&lt;br /&gt;
*[[Media:Lustre-20-alpha-test-plan.pdf|Lustre 2.0 Alpha Test Plan]]&lt;br /&gt;
*[[Media:HEADDailyTestingResults.pdf|Daily HEAD Testing Results]] includes: test pass/fail status and bugs found in each testing run&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_All-Hands_Meeting_12/08&amp;diff=12251</id>
		<title>Lustre All-Hands Meeting 12/08</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_All-Hands_Meeting_12/08&amp;diff=12251"/>
		<updated>2011-01-20T18:17:03Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Once a year, the Lustre™ Engineering team gathers to discuss new features under development and testing efforts. This week-long event is known as the Lustre all-hands meeting. The Development presentations made at the December 2008 all-hands meeting are available here:&lt;br /&gt;
&lt;br /&gt;
* [[media:Simplified_InteropRecovery.pdf|&#039;&#039;Simplified Interoperability Recovery&#039;&#039;]] - Huang Hua&lt;br /&gt;
* [[media:RecoveryTalk_2009.pdf|&#039;&#039;Recovery Overview&#039;&#039;]] - Robert Read&lt;br /&gt;
* [[media:Quotas-TOI.pdf|&#039;&#039;Quotas-TOI&#039;&#039;]] - Yong Fan&lt;br /&gt;
* [[media:QualityInitiativeTalk.pdf|&#039;&#039;Quality Initiative Talk&#039;&#039;]] - Robert Read&lt;br /&gt;
* [[media:OST_Pools.pdf|&#039;&#039;OST Pools&#039;&#039;]] - Nathan Rutman&lt;br /&gt;
* [[media:OST_Migration_RAID1_SNS.pdf|&#039;&#039;OST Migration RAID1 SNS&#039;&#039;]] - Andreas Dilger&lt;br /&gt;
* [[media:NRS.pdf|&#039;&#039;Lustre NRS Simulation&#039;&#039;]] - Yingjin Qian, Wang Di&lt;br /&gt;
* [[media:LustreInterop_1_8.pdf|&#039;&#039;Lustre Interoperability 1.8&#039;&#039;]] - Huang Hua&lt;br /&gt;
* [[media:HDFS.pdf|&#039;&#039;HDFS&#039;&#039;]] - Wang Di&lt;br /&gt;
* [[media:GIT_Overview.pdf|&#039;&#039;GIT Overview&#039;&#039;]] - Robert Read&lt;br /&gt;
* [[media:COS-TOI.pdf|&#039;&#039;COS-TOI&#039;&#039;]] - Alexander Zarochentsev&lt;br /&gt;
* [[media:CLIO-TOI.pdf|&#039;&#039;CLIO-TOI&#039;&#039;]] - Nikita Danilov&lt;br /&gt;
* [[media:CLIO-TOI-notes.pdf|&#039;&#039;CLIO-TOI-notes&#039;&#039;]] - Nikita Danilov&lt;br /&gt;
* [[media:CLIO.pdf|&#039;&#039;CLIO&#039;&#039;]] - Nikita Danilov&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_All-Hands_Meeting_3/08&amp;diff=12250</id>
		<title>Lustre All-Hands Meeting 3/08</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_All-Hands_Meeting_3/08&amp;diff=12250"/>
		<updated>2011-01-20T18:15:33Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Once a year, the Lustre™ Engineering team gathers to discuss new features under development and testing efforts. This week-long event is known as the Lustre all-hands meeting. The Development and QE presentations made at the March 2008 all-hands meeting are available here:&lt;br /&gt;
&lt;br /&gt;
== Development ==&lt;br /&gt;
* [[Media:RMG-process-0308.pdf|&#039;&#039;RMG Processes&#039;&#039;]] - Andreas Dilger&lt;br /&gt;
* [[Media:Eeb-launch-0308.pdf|&#039;&#039;Lustre Development Strategy&#039;&#039;]] - Eric Barton&lt;br /&gt;
* [[Media:Mdt.pdf|&#039;&#039;HEAD MDS&#039;&#039;]] - Nikita Danilov&lt;br /&gt;
* [[Media:Lnet-0308.pdf|&#039;&#039;LNET&#039;&#039;]] - Issac Huang&lt;br /&gt;
&lt;br /&gt;
== QE ==&lt;br /&gt;
* [[Media:Jd.day1-0308.pdf|&#039;&#039;Day 1&#039;&#039;]] - [[Media:Jd.day2-0308.pdf|&#039;&#039;Day 2&#039;&#039;]] - JD&lt;br /&gt;
* [[Media:Lustre-release%26weekly-testing-030.pdf|&#039;&#039;Lustre Release &amp;amp; weekly testing&#039;&#039;]] - Jian Yu&lt;br /&gt;
* [[Media:Buildsys-0308.pdf|&#039;&#039;Build System&#039;&#039;]] - Yibin Wang&lt;br /&gt;
* [[Media:Head-testing-0308.pdf|&#039;&#039;HEAD testing&#039;&#039;]] - Zheng Chen&lt;br /&gt;
* [[Media:Latest.b1_6-0308.pdf|&#039;&#039;b1.6 testing&#039;&#039;]] - Peng Ye&lt;br /&gt;
* [[Media:Perf.testing-0308.pdf|&#039;&#039;Performance testing&#039;&#039;]] - Jack Chen&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12249</id>
		<title>Lustre Center of Excellence at Oak Ridge National Laboratory</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12249"/>
		<updated>2011-01-20T18:14:04Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Dec 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
The Lustre™ Center of Excellence (LCE) at ORNL advances the state of Lustre for use in large-scale HPC environments. As you&#039;ll see from the projects below, the ORNL LCE focuses on both the systems and the applications aspects of ensuring that Lustre meets the needs of the DOE and the HPC community in general.&lt;br /&gt;
&lt;br /&gt;
==Events==&lt;br /&gt;
&lt;br /&gt;
The LCE sponsors events to encourage HPC community involvement in analyzing IO and storage requirements and identifying ways for Lustre to address these requirements.&lt;br /&gt;
&lt;br /&gt;
===LCE Summit - February 2008===&lt;br /&gt;
*[[Media:LCE_Summit_Summary_Draft_March_14_2008.pdf|&#039;&#039;LCE Summit Meeting Summary&#039;&#039;]]&lt;br /&gt;
*[[Media:LCESummitSlides.pdf|&#039;&#039;LCE Summit Slides and Notes&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===LCE Application I/O Workshop - April 16, 2008===&lt;br /&gt;
*[[Media:April2008ApplicationIOWorkshop.pdf|&#039;&#039;Application IO Workshop, Agenda and Notes&#039;&#039;]]&lt;br /&gt;
*[[Media:Lustre_workshop_WangDi.pdf|&#039;&#039;Lustre and Application IO - Wang Di&#039;s Slides&#039;&#039;]]&lt;br /&gt;
*[[Media:Lustre_workshop_Oleg.pdf|&#039;&#039;Lustre and Application IO - Oleg Drokin&#039;s Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - Feb 10 &amp;amp; 11, 2009, ORNL===&lt;br /&gt;
*[[Media:Notes_on_SW1_Notes.pdf|&#039;&#039;Scalability Workshop Notes&#039;&#039;]]&lt;br /&gt;
*[[Media:LustreScalabilityWP_Updated.pdf|&#039;&#039;Scalability White Paper&#039;&#039;]]&lt;br /&gt;
*[[Media:Shipman_Feb_lustre_scalability.pdf|&#039;&#039;Galen Shipman&#039;s presentation&#039;&#039;]]&lt;br /&gt;
*[[Media:Eric-Barton_-_Lustre-Multi_PF_Roadmap-090130.pdf|&#039;&#039;Eric Barton&#039;s presentation&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - May 19 &amp;amp; 20, 2009, ORNL===&lt;br /&gt;
The May Lustre Scalability Workshop at ORNL focused on long-term (2015) HPC IO and storage requirements, with presentations on the IO objectives of the DOD HPCS program and how Lustre will achieve them.&lt;br /&gt;
*[[Media:Dawson_Lustre_Workshop_May_2009.pdf|&#039;&#039;Lustre Scalability Workshop, Initial Gap Response&#039;&#039; - John Dawson]]&lt;br /&gt;
*[[Media:Carrier_2009-05-19_ORNL_LCE_HPCS.pdf|&#039;&#039;HPCS IO&#039;&#039; - John Carrier]]&lt;br /&gt;
*[[Media:Newman_May_Lustre_Workshop.pdf|&#039;&#039;What is HPCS and How Does it Impact IO&#039;&#039; - Henry Newman]]&lt;br /&gt;
*[[Media:Shipman_May_lustre_scalability_workshop.pdf|&#039;&#039;2015 Parallel File System Requirements&#039;&#039; - Galen Shipman]]&lt;br /&gt;
*[[Media:Dilger_Lustre_HPCS_May_Workshop.pdf|&#039;&#039;Lustre HPCS Design Overview&#039;&#039; - Andreas Dilger]]&lt;br /&gt;
&lt;br /&gt;
===Scalability Workshop Follow Up===&lt;br /&gt;
*[[Media:Lustre_Scalability_Workshop.pdf|&#039;&#039;Scalability Gap Response&#039;&#039;]] dated October 2009 is the final version of Sun&#039;s response to the scalability gaps identified and discussed during the LCE Scalability Workshops in February and May of 2009.&lt;br /&gt;
&lt;br /&gt;
==White Papers==&lt;br /&gt;
&lt;br /&gt;
LCE personnel have written a variety of papers on High Performance IO and potential Lustre features. Links to these documents are below.&lt;br /&gt;
 &lt;br /&gt;
*[https://www.sun.com/offers/details/Peta-Scale_wp.xml &#039;&#039;Lustre Scalability - An Oak Ridge National Laboratory/Lustre Center of Excellence Paper&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
*[[Media:Lce_pop_submitted.pdf|&#039;&#039;Improving I/O Performance in POP (Parallel Ocean Program)&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
* Scalability Improvements&lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_Interval_Trees.pdf|&#039;&#039;Using Interval Tree to scale extent locks&#039;&#039;]]  &lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_UUID_Hash_Tables.pdf|&#039;&#039;Implement hash tables to scale export lookups&#039;&#039;]] &lt;br /&gt;
**[[Media:A_Novel_Network_Request_Scheduler_for_a_Large_Scale_Storage_System.pdf|&#039;&#039;A Novel Network Request Scheduler for a Large Scale Storage System&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
*Lustre ADIO Driver Enhancements Whitepaper&lt;br /&gt;
**[[Media:Lustre_ADIO_Driver_Whitepaper_0926.pdf|&#039;&#039;Lustre ADIO Driver Enhancements&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
*FMS Application IO Analysis&lt;br /&gt;
**[[Media:FMS_Investigation_Report_%280915%29.pdf|&#039;&#039;FMS Application IO Performance Analysis&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
==Presentations==&lt;br /&gt;
&lt;br /&gt;
See the ORNL [http://wiki.lustre.org/index.php/Lustre_User_Group slides and video] from their presentation at LUG 2009.&lt;br /&gt;
&lt;br /&gt;
==Press Articles==&lt;br /&gt;
&lt;br /&gt;
* [http://www.nccs.gov/2009/06/30/ornl-hosts-lustre-part-ii/ &#039;&#039;ORNL Hosts Lustre Part II&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
==Lustre Internals Manual==&lt;br /&gt;
LCE and ORNL have written a Lustre filesystem internals document that describes the internal operation of Lustre.&lt;br /&gt;
*[[Media:Understanding_Lustre_Filesystem_Internals.pdf|&#039;&#039;Understanding Lustre Filesystem Internals&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
== Archives of Older Material ==&lt;br /&gt;
&lt;br /&gt;
This material may be out of date, and is preserved here for archive purposes. &lt;br /&gt;
 &lt;br /&gt;
*[[Media:Peta-Scale_wp.pdf|&#039;&#039;2008 Paper on IO with the Lustre File System at ORNL&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For more information about computing at Oak Ridge, click [http://www.nccs.gov/ here].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12248</id>
		<title>Lustre Center of Excellence at Oak Ridge National Laboratory</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12248"/>
		<updated>2011-01-20T18:11:45Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Dec 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
The Lustre™ Center of Excellence (LCE) at ORNL advances the state of Lustre for use in large-scale HPC environments. As you&#039;ll see from the projects below, the ORNL LCE focuses on both the systems and the applications aspects of ensuring that Lustre meets the needs of the DOE and the HPC community in general.&lt;br /&gt;
&lt;br /&gt;
==Events==&lt;br /&gt;
&lt;br /&gt;
The LCE sponsors events to encourage HPC community involvement in analyzing IO and storage requirements and identifying ways for Lustre to address these requirements.&lt;br /&gt;
&lt;br /&gt;
===LCE Summit - February 2008===&lt;br /&gt;
*[[Media:LCE_Summit_Summary_Draft_March_14_2008.pdf|&#039;&#039;LCE Summit Meeting Summary&#039;&#039;]]&lt;br /&gt;
*[[Media:LCESummitSlides.pdf|&#039;&#039;LCE Summit Slides and Notes&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===LCE Application I/O Workshop - April 16, 2008===&lt;br /&gt;
*[[Media:April2008ApplicationIOWorkshop.pdf|&#039;&#039;Application IO Workshop, Agenda and Notes&#039;&#039;]]&lt;br /&gt;
*[[Media:Lustre_workshop_WangDi.pdf|&#039;&#039;Lustre and Application IO - Wang Di&#039;s Slides&#039;&#039;]]&lt;br /&gt;
*[[Media:Lustre_workshop_Oleg.pdf|&#039;&#039;Lustre and Application IO - Oleg Drokin&#039;s Slides&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - Feb 10 &amp;amp; 11, 2009, ORNL===&lt;br /&gt;
*[[Media:Notes_on_SW1_Notes.pdf|&#039;&#039;Scalability Workshop Notes&#039;&#039;]]&lt;br /&gt;
*[[Media:LustreScalabilityWP_Updated.pdf|&#039;&#039;Scalability White Paper&#039;&#039;]]&lt;br /&gt;
*[[Media:Shipman_Feb_lustre_scalability.pdf|&#039;&#039;Galen Shipman&#039;s presentation&#039;&#039;]]&lt;br /&gt;
*[[Media:Eric-Barton_-_Lustre-Multi_PF_Roadmap-090130.pdf|&#039;&#039;Eric Barton&#039;s presentation&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - May 19 &amp;amp; 20, 2009, ORNL===&lt;br /&gt;
The May Lustre Scalability Workshop at ORNL focused on long-term (2015) HPC IO and storage requirements, with presentations on the IO objectives of the DOD HPCS program and how Lustre will achieve them.&lt;br /&gt;
*[[Media:Dawson_Lustre_Workshop_May_2009.pdf|&#039;&#039;Lustre Scalability Workshop, Initial Gap Response&#039;&#039; - John Dawson]]&lt;br /&gt;
*[[Media:Carrier_2009-05-19_ORNL_LCE_HPCS.pdf|&#039;&#039;HPCS IO&#039;&#039; - John Carrier]]&lt;br /&gt;
*[[Media:Newman_May_Lustre_Workshop.pdf|&#039;&#039;What is HPCS and How Does it Impact IO&#039;&#039; - Henry Newman]]&lt;br /&gt;
*[[Media:Shipman_May_lustre_scalability_workshop.pdf|&#039;&#039;2015 Parallel File System Requirements&#039;&#039; - Galen Shipman]]&lt;br /&gt;
*[[Media:Dilger_Lustre_HPCS_May_Workshop.pdf|&#039;&#039;Lustre HPCS Design Overview&#039;&#039; - Andreas Dilger]]&lt;br /&gt;
&lt;br /&gt;
===Scalability Workshop Follow Up===&lt;br /&gt;
*[[Media:Lustre_Scalability_Workshop.pdf|&#039;&#039;Scalability Gap Response&#039;&#039;]] dated October 2009 is the final version of Sun&#039;s response to the scalability gaps identified and discussed during the LCE Scalability Workshops in February and May of 2009.&lt;br /&gt;
&lt;br /&gt;
==White Papers==&lt;br /&gt;
&lt;br /&gt;
LCE personnel have written a variety of papers on High Performance IO and potential Lustre features. Links to these documents are below.&lt;br /&gt;
 &lt;br /&gt;
*[https://www.sun.com/offers/details/Peta-Scale_wp.xml &#039;&#039;Lustre Scalability - An Oak Ridge National Laboratory/Lustre Center of Excellence Paper&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
*[[Media:Lce_pop_submitted.pdf|&#039;&#039;Improving I/O Performance in POP (Parallel Ocean Program)&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
* Scalability Improvements&lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_Interval_Trees.pdf|&#039;&#039;Using Interval Tree to scale extent locks&#039;&#039;]]  &lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_UUID_Hash_Tables.pdf|&#039;&#039;Implement hash tables to scale export lookups&#039;&#039;]] &lt;br /&gt;
**[[Media:A_Novel_Network_Request_Scheduler_for_a_Large_Scale_Storage_System.pdf|&#039;&#039;A Novel Network Request Scheduler for a Large Scale Storage System&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
*Lustre ADIO Driver Enhancements Whitepaper&lt;br /&gt;
**[[Media:Lustre_ADIO_Driver_Whitepaper_0926.pdf|&#039;&#039;Lustre ADIO Driver Enhancements&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
*FMS Application IO Analysis&lt;br /&gt;
**[[Media:FMS_Investigation_Report_%280915%29.pdf|&#039;&#039;FMS Application IO Performance Analysis&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
==Presentations==&lt;br /&gt;
&lt;br /&gt;
See the ORNL [http://wiki.lustre.org/index.php/Lustre_User_Group slides and video] from their presentation at LUG 2009.&lt;br /&gt;
&lt;br /&gt;
==Press Articles==&lt;br /&gt;
&lt;br /&gt;
* [http://www.nccs.gov/2009/06/30/ornl-hosts-lustre-part-ii/ ORNL Hosts Lustre Part II]&lt;br /&gt;
&lt;br /&gt;
==Lustre Internals Manual==&lt;br /&gt;
LCE and ORNL have written a Lustre filesystem internals document that describes the internal operation of Lustre.&lt;br /&gt;
*[[Media:Understanding_Lustre_Filesystem_Internals.pdf|Understanding Lustre Filesystem Internals]]&lt;br /&gt;
&lt;br /&gt;
== Archives of Older Material ==&lt;br /&gt;
&lt;br /&gt;
This material may be out of date, and is preserved here for archive purposes. &lt;br /&gt;
 &lt;br /&gt;
*[[Media:Peta-Scale_wp.pdf| 2008 Paper on IO with the Lustre File System at ORNL]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For more information about computing at Oak Ridge, click [http://www.nccs.gov/ here].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12247</id>
		<title>Lustre Center of Excellence at Oak Ridge National Laboratory</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Center_of_Excellence_at_Oak_Ridge_National_Laboratory&amp;diff=12247"/>
		<updated>2011-01-20T18:09:12Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Archives of Older Material */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Dec 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
__TOC__&lt;br /&gt;
The Lustre™ Center of Excellence (LCE) at ORNL advances the state of Lustre for use in large-scale HPC environments. As you&#039;ll see from the projects below, the ORNL LCE focuses on both the systems and the applications aspects of ensuring that Lustre meets the needs of the DOE and the HPC community in general.&lt;br /&gt;
&lt;br /&gt;
==Events==&lt;br /&gt;
&lt;br /&gt;
The LCE sponsors events to encourage HPC community involvement in analyzing IO and storage requirements and identifying ways for Lustre to address these requirements.&lt;br /&gt;
&lt;br /&gt;
===LCE Summit - February 2008===&lt;br /&gt;
*[[Media:LCE_Summit_Summary_Draft_March_14_2008.pdf|LCE Summit Meeting Summary]]&lt;br /&gt;
*[[Media:LCESummitSlides.pdf|LCE Summit Slides and Notes]]&lt;br /&gt;
&lt;br /&gt;
===LCE Application I/O Workshop - April 16, 2008===&lt;br /&gt;
*[[Media:April2008ApplicationIOWorkshop.pdf|Application IO Workshop, Agenda and Notes]]&lt;br /&gt;
*[[Media:Lustre_workshop_WangDi.pdf|Lustre and Application IO - Wang Di&#039;s Slides]]&lt;br /&gt;
*[[Media:Lustre_workshop_Oleg.pdf|Lustre and Application IO - Oleg Drokin&#039;s Slides]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - Feb 10 &amp;amp; 11, 2009, ORNL===&lt;br /&gt;
*[[Media:Notes_on_SW1_Notes.pdf|Scalability Workshop Notes]]&lt;br /&gt;
*[[Media:LustreScalabilityWP_Updated.pdf|Scalability White Paper]]&lt;br /&gt;
*[[Media:Shipman_Feb_lustre_scalability.pdf|Galen Shipman&#039;s presentation]]&lt;br /&gt;
*[[Media:Eric-Barton_-_Lustre-Multi_PF_Roadmap-090130.pdf|Eric Barton&#039;s presentation]]&lt;br /&gt;
&lt;br /&gt;
===Lustre Scalability Workshop - May 19 &amp;amp; 20, 2009, ORNL===&lt;br /&gt;
The May Lustre Scalability Workshop at ORNL focused on long-term (2015) HPC IO and storage requirements, with presentations on the IO objectives of the DOD HPCS program and how Lustre will achieve them.&lt;br /&gt;
*[[Media:Dawson_Lustre_Workshop_May_2009.pdf|Lustre Scalability Workshop, Initial Gap Response - John Dawson]]&lt;br /&gt;
*[[Media:Carrier_2009-05-19_ORNL_LCE_HPCS.pdf|HPCS IO - John Carrier]]&lt;br /&gt;
*[[Media:Newman_May_Lustre_Workshop.pdf|What is HPCS and How Does it Impact IO - Henry Newman]]&lt;br /&gt;
*[[Media:Shipman_May_lustre_scalability_workshop.pdf|2015 Parallel File System Requirements - Galen Shipman]]&lt;br /&gt;
*[[Media:Dilger_Lustre_HPCS_May_Workshop.pdf|Lustre HPCS Design Overview - Andreas Dilger]]&lt;br /&gt;
&lt;br /&gt;
===Scalability Workshop Follow Up===&lt;br /&gt;
*[[Media:Lustre_Scalability_Workshop.pdf|Scalability Gap Response]] dated October 2009 is the final version of Sun&#039;s response to the scalability gaps identified and discussed during the LCE Scalability Workshops in February and May of 2009.&lt;br /&gt;
&lt;br /&gt;
==White Papers==&lt;br /&gt;
&lt;br /&gt;
LCE personnel have written a variety of papers on High Performance IO and potential Lustre features. Links to these documents are below.&lt;br /&gt;
 &lt;br /&gt;
*[https://www.sun.com/offers/details/Peta-Scale_wp.xml Lustre Scalability - An Oak Ridge National Laboratory/Lustre Center of Excellence Paper]&lt;br /&gt;
&lt;br /&gt;
*[[Media:Lce_pop_submitted.pdf|Improving I/O Performance in POP (Parallel Ocean Program)]]&lt;br /&gt;
&lt;br /&gt;
* Scalability Improvements&lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_Interval_Trees.pdf|Using Interval Tree to scale extent locks]]  &lt;br /&gt;
**[[Media:Lustre_Enhancement_Report_UUID_Hash_Tables.pdf|Implement hash tables to scale export lookups]] &lt;br /&gt;
**[[Media:A_Novel_Network_Request_Scheduler_for_a_Large_Scale_Storage_System.pdf|A Novel Network Request Scheduler for a Large Scale Storage System]]&lt;br /&gt;
&lt;br /&gt;
*Lustre ADIO Driver Enhancements Whitepaper&lt;br /&gt;
**[[Media:Lustre_ADIO_Driver_Whitepaper_0926.pdf|Lustre ADIO Driver Enhancements]]&lt;br /&gt;
&lt;br /&gt;
*FMS Application IO Analysis&lt;br /&gt;
**[[Media:FMS_Investigation_Report_%280915%29.pdf|FMS Application IO Performance Analysis]]&lt;br /&gt;
&lt;br /&gt;
==Presentations==&lt;br /&gt;
&lt;br /&gt;
See the ORNL [http://wiki.lustre.org/index.php/Lustre_User_Group slides and video] from their presentation at LUG 2009.&lt;br /&gt;
&lt;br /&gt;
==Press Articles==&lt;br /&gt;
&lt;br /&gt;
* [http://www.nccs.gov/2009/06/30/ornl-hosts-lustre-part-ii/ ORNL Hosts Lustre Part II]&lt;br /&gt;
&lt;br /&gt;
==Lustre Internals Manual==&lt;br /&gt;
LCE and ORNL have written a Lustre filesystem internals document that describes the internal operation of Lustre.&lt;br /&gt;
*[[Media:Understanding_Lustre_Filesystem_Internals.pdf|Understanding Lustre Filesystem Internals]]&lt;br /&gt;
&lt;br /&gt;
== Archives of Older Material ==&lt;br /&gt;
&lt;br /&gt;
This material may be out of date, and is preserved here for archive purposes. &lt;br /&gt;
 &lt;br /&gt;
*[[Media:Peta-Scale_wp.pdf| 2008 Paper on IO with the Lustre File System at ORNL]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
For more information about computing at Oak Ridge, click [http://www.nccs.gov/ here].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Customers&amp;diff=12246</id>
		<title>Lustre Customers</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Customers&amp;diff=12246"/>
		<updated>2011-01-20T18:08:16Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Nov 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Below are listed a handful of typical Lustre™ customers. For a look at other customers, see contributions by participants in [[GetInvolved:Get_Involved|Lustre User Group]] meetings.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Media:DIF.pdf|CEA]]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://www.sun.com/customers/software/chevron.xml Chevron]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://www.sun.com/customers/software/framestore.xml Framestore]&lt;br /&gt;
&amp;lt;li&amp;gt;[[Media:StephenSimms.pdf|Indiana University]]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://www.nccs.gov/2009/06/30/ornl-hosts-lustre-part-ii/ Oak Ridge National Laboratory]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://hpc.pnl.gov/projects/active-storage/papers/lci-evanfelix.pdf Pacific Northwest National Laboratory]&lt;br /&gt;
&amp;lt;li&amp;gt;[[Media:SNL_LUG_2009.pdf|Sandia National Laboratory]]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://www.sun.com/customers/servers/tacc.xml Texas Advanced Computing Center]&lt;br /&gt;
&amp;lt;li&amp;gt;[http://www.sun.com/customers/servers/tokyo_tech1.xml Tokyo Institute of Technology]&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Design_Documents&amp;diff=12245</id>
		<title>Lustre Design Documents</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Design_Documents&amp;diff=12245"/>
		<updated>2011-01-20T18:06:24Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The Lustre™ team has prepared several design documents as part of the [[Lustre_HPCS_Activities|DARPA High Productivity Computing Systems (HPCS) program]]: &lt;br /&gt;
&lt;br /&gt;
*[[Media:JKD_Wiki_V1_2009_08_07_Lustre_HPCS_Overview.pdf|&#039;&#039;Lustre HPCS Design Overview&#039;&#039;]]&lt;br /&gt;
*[[Media:FSCK_Design-2009-06-15-09.pdf|&#039;&#039;Filesystem Integrity Check Design&#039;&#039;]]&lt;br /&gt;
*[[Media:End-to-End-Integrity-2009-06-15.pdf|&#039;&#039;End to End Data Integrity Design&#039;&#039;]]&lt;br /&gt;
*[[Media:Channel Bonding_06_15_09.pdf|&#039;&#039;LNET Channel Bonding Design&#039;&#039;]]&lt;br /&gt;
*[[Media:Rebuild_performance-2009-06-15.pdf|&#039;&#039;Rebuild Performance Design&#039;&#039;]]&lt;br /&gt;
*[[Media:HPCS_CMD_06_15_09.pdf|&#039;&#039;Clustered Metadata Design&#039;&#039;]]&lt;br /&gt;
&lt;br /&gt;
In November 2009, the senior Lustre engineers met in conjunction with the SC&#039;09 event in Portland, Oregon, to discuss designs for upcoming features and implementation status of current development projects. The following project reviews and design presentations took place at this meeting:&lt;br /&gt;
&lt;br /&gt;
* [[Media:SC09-kDMU-Code.pdf|&#039;&#039;Kernel DMU Project Review&#039;&#039;]] - Alex Zhuravlev&lt;br /&gt;
* [[Media:SC09-SOM-Code.pdf|&#039;&#039;Size On MDS Project Review&#039;&#039;]] - Vitaly Fertman&lt;br /&gt;
* [[Media:SC09-FID-on-OST.pdf|&#039;&#039;FIDs on OST Design Presentation&#039;&#039;]] - Mike Pershin&lt;br /&gt;
* [[Media:SC09-HSM-Code.pdf|&#039;&#039;HSM Project Review&#039;&#039;]] - Nathan Rutman&lt;br /&gt;
* [[Media:SC09-SMP-Scaling.pdf|&#039;&#039;Server SMP Scaling Project Review&#039;&#039;]] - Liang Zhen&lt;br /&gt;
* [[Media:SC09-Quota-Code.pdf|&#039;&#039;Quota Project Review&#039;&#039;]] - Johann Lombardi&lt;br /&gt;
* [[Media:SC09-Imperative-Recovery.pdf|&#039;&#039;Imperative Recovery Design Presentation&#039;&#039;]] - Nicolas &lt;br /&gt;
* [[Media:SC09-ZFS-End-to-End-Integrity.pdf|&#039;&#039;ZFS, Lustre, End-to-end Data Integrity BOF&#039;&#039;]] - Andreas Dilger&lt;br /&gt;
* [[Media:SC09-LOV-OSC.pdf|&#039;&#039;LOV/OSC/llog Redesign Presentation&#039;&#039;]] - Alex Zhuravlev&lt;br /&gt;
* [[Media:SC09-CMD-Code.pdf|&#039;&#039;Clustered Metadata Project Review&#039;&#039;]] - Tom Wang&lt;br /&gt;
* [[Media:SC09-Simplified-Interop.pdf|&#039;&#039;Simplified Interop (CSS) Design Presentation&#039;&#039;]] - Eric Mei&lt;br /&gt;
&lt;br /&gt;
The [[Lustre Design Document Archive]] contains older architecture and design documents.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Documentation&amp;diff=12244</id>
		<title>Lustre Documentation</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Documentation&amp;diff=12244"/>
		<updated>2011-01-20T18:04:12Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__ &lt;br /&gt;
The &#039;&#039;Lustre Operations Manual&#039;&#039; is the primary source of documentation for Lustre. It provides installation, configuration, tuning, monitoring, and troubleshooting information. &lt;br /&gt;
&lt;br /&gt;
Different versions of the &#039;&#039;Lustre Operations Manual&#039;&#039; support different versions of Lustre software. To determine which version of the &#039;&#039;Lustre Operations Manual&#039;&#039; supports the Lustre software you are using, refer to the table below. The latest version of the Operations Manual is available in both HTML and PDF formats. &lt;br /&gt;
 &lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=0 cellpadding=&amp;quot;10&amp;quot;&lt;br /&gt;
! &amp;lt;small&amp;gt;Manual Part No.&amp;lt;/small&amp;gt; !! &amp;lt;small&amp;gt;Supported Lustre Version&amp;lt;/small&amp;gt; !! &amp;lt;small&amp;gt;Release Date&amp;lt;/small&amp;gt; !! &amp;lt;small&amp;gt;HTML&amp;lt;/small&amp;gt; !! &amp;lt;small&amp;gt;PDF&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;small&amp;gt;821-2076-10&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;&#039;&#039;&#039;2.0&#039;&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;January 2011&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;[http://wiki.lustre.org/manual/LustreManual20_HTML/index.html Lustre Manual 2.0]&amp;lt;/small&amp;gt; &lt;br /&gt;
| &amp;lt;small&amp;gt;[[Media:821-2076-10.pdf|Lustre 2.0 Manual]]&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;small&amp;gt;821-2077-10&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;&#039;&#039;&#039;2.0&#039;&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;July 2010&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;n/a&amp;lt;/small&amp;gt; &lt;br /&gt;
| &amp;lt;small&amp;gt;[[Media:821-2077-10.pdf|Lustre 2.0 Release Notes]]&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;small&amp;gt;821-0035-12&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;&#039;&#039;&#039;1.8&#039;&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;December 2010&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;[http://wiki.lustre.org/manual/LustreManual18_HTML/index.html Lustre Manual 1.8]&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;[[Media:821-0035-12.pdf|Lustre 1.8 Manual]]&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
| &amp;lt;small&amp;gt;820-3681-12&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;&#039;&#039;&#039;1.6&#039;&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;June 2010&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;[http://wiki.lustre.org/manual/LustreManual16_HTML/index.html Lustre Manual 1.6]&amp;lt;/small&amp;gt;&lt;br /&gt;
| &amp;lt;small&amp;gt;[[Media:820-3681_v1_18.pdf|Lustre 1.6 Manual]]&amp;lt;/small&amp;gt;&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Fall_Workshop_10/2010&amp;diff=12243</id>
		<title>Lustre Fall Workshop 10/2010</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Fall_Workshop_10/2010&amp;diff=12243"/>
		<updated>2011-01-20T18:03:04Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;During the week of October 25-28, 2010, the Lustre team held a workshop in Beijing. The presentations given at the workshop are listed below.&lt;br /&gt;
&lt;br /&gt;
Feature Overviews:&lt;br /&gt;
*[[Media:Beijing-2010.2-ZFS_overview_3.1_Dilger.pdf|&#039;&#039;ZFS Overview&#039;&#039;]] - Andreas Dilger&lt;br /&gt;
*[[Media:HSM_api_Green.pdf|&#039;&#039;HSM Overview&#039;&#039;]] - Oleg Drokin&lt;br /&gt;
*[[Media:quota_on_osd.pdf|&#039;&#039;Quotas Overview&#039;&#039;]] - Alex Zhuravlev&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_HPCS_Activities&amp;diff=12242</id>
		<title>Lustre HPCS Activities</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_HPCS_Activities&amp;diff=12242"/>
		<updated>2011-01-20T18:02:04Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Aug 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The DARPA High Productivity Computing Systems (HPCS) project is a program created to ensure that US Government agencies continue to have access to the advanced high-performance computing technologies needed to fulfill their missions. The program objectives include achieving extremely high IO performance together with extreme file system scale and reliability. [http://www.cray.com Cray Inc.] has partnered with Sun to use Lustre™ to achieve these goals.&lt;br /&gt;
&lt;br /&gt;
General information on the HPCS program is available at:&lt;br /&gt;
&lt;br /&gt;
*http://www.highproductivity.org/&lt;br /&gt;
&lt;br /&gt;
*http://www.darpa.mil/ipto/programs/hpcs/hpcs.asp&lt;br /&gt;
&lt;br /&gt;
John Carrier of Cray, Inc. presented an overview of the HPCS IO project at the Lustre User Group meeting in April 2008. The slides and video of his presentation can be found here:&lt;br /&gt;
&lt;br /&gt;
*[[Media:LUG08_Cray_HPCS.pdf|&#039;&#039;DARPA HPCS Project Slides&#039;&#039;]] and [http://video.google.com/videoplay?docid=-5970693206965456534&amp;amp;hl=en &#039;&#039;Video&#039;&#039;] - John Carrier, Cray Inc.&lt;br /&gt;
&lt;br /&gt;
There were three presentations on the HPCS IO project at the Scalability Workshop held by the [[Lustre Center of Excellence at Oak Ridge National Laboratory]] in May 2009.&lt;br /&gt;
&lt;br /&gt;
*[[Media:Carrier_2009-05-19_ORNL_LCE_HPCS.pdf|&#039;&#039;HPCS IO&#039;&#039; - John Carrier, Cray Inc.]]&lt;br /&gt;
*[[Media:Newman_May_Lustre_Workshop.pdf|&#039;&#039;What is HPCS and How Does it Impact IO&#039;&#039; - Henry Newman, Instrumental Inc.]]&lt;br /&gt;
*[[Media:Dilger_Lustre_HPCS_May_Workshop.pdf|&#039;&#039;Lustre HPCS Design Overview&#039;&#039; - Andreas Dilger, Sun Microsystems]]&lt;br /&gt;
&lt;br /&gt;
The Lustre team has prepared several design documents as part of the HPCS program. These designs are:&lt;br /&gt;
&lt;br /&gt;
*[[Media:JKD_Wiki_V1_2009_08_07_Lustre_HPCS_Overview.pdf|&#039;&#039;Lustre HPCS Design Overview&#039;&#039;]]&lt;br /&gt;
*[[Media:FSCK_Design-2009-06-15-09.pdf|&#039;&#039;Filesystem Integrity Check Design&#039;&#039;]]&lt;br /&gt;
*[[Media:End-to-End-Integrity-2009-06-15.pdf|&#039;&#039;End to End Data Integrity Design&#039;&#039;]]&lt;br /&gt;
*[[Media:Channel Bonding_06_15_09.pdf|&#039;&#039;LNet Channel Bonding Design&#039;&#039;]]&lt;br /&gt;
*[[Media:Rebuild_performance-2009-06-15.pdf|&#039;&#039;Rebuild Performance Design&#039;&#039;]]&lt;br /&gt;
*[[Media:HPCS_CMD_06_15_09.pdf|&#039;&#039;Clustered Metadata Design&#039;&#039;]]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Packages&amp;diff=12240</id>
		<title>Lustre Packages</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Packages&amp;diff=12240"/>
		<updated>2011-01-20T17:58:11Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Feb 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The table below describes each Lustre™ package and indicates where each package should be installed.&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
|-&lt;br /&gt;
!width=&amp;quot;220&amp;quot;|Lustre Package&lt;br /&gt;
!width=&amp;quot;300&amp;quot;|Description&lt;br /&gt;
!width=&amp;quot;160&amp;quot;|Install on servers&lt;br /&gt;
!width=&amp;quot;160&amp;quot;|Install on clients&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;strong&amp;gt;&amp;lt;small&amp;gt;Lustre kernel RPMs&amp;lt;/small&amp;gt;&amp;lt;/strong&amp;gt;|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre-patched kernel package for RHEL 5 (i686, ia64 and x86_64) platform.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-smp-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre-patched kernel package for SuSE Server 10 (x86_64) platform.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-bigsmp-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre-patched kernel package for SuSE Server 10 (i686) platform.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-ib-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre OFED package. Install if the network interconnect is InfiniBand (IB).&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-default-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-default-base-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;kernel-lustre-default-extra-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre-patched kernel package for SuSE Server 11 (i686 and x86_64) platform.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;strong&amp;gt;&amp;lt;small&amp;gt;Lustre module RPMs&amp;lt;/small&amp;gt;&amp;lt;/strong&amp;gt;|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;lustre-modules-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre modules for the patched kernel.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;lustre-client-modules-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre modules for patchless clients.&amp;lt;/small&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;strong&amp;gt;&amp;lt;small&amp;gt;Lustre utilities&amp;lt;/small&amp;gt;&amp;lt;/strong&amp;gt;|| || ||&lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;lustre-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre utilities package. This includes userspace utilities to configure and run Lustre.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;lustre-ldiskfs-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre-patched backing file system kernel module package for the ext3 file system.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;e2fsprogs-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Utilities package used to maintain the ext3 backing file system.&amp;lt;/small&amp;gt;&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;|| &lt;br /&gt;
|-&lt;br /&gt;
|&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;lt;small&amp;gt;lustre-client-&amp;lt;ver&amp;gt;&amp;lt;/small&amp;gt;&lt;br /&gt;
|&amp;lt;small&amp;gt;Lustre utilities for patchless clients.&amp;lt;/small&amp;gt;&lt;br /&gt;
|&lt;br /&gt;
|align=&amp;quot;center&amp;quot;|&amp;lt;small&amp;gt;X&amp;lt;/small&amp;gt;&lt;br /&gt;
|}&lt;br /&gt;
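As a rough, illustrative sketch of how the table translates into an installation (the &amp;lt;ver&amp;gt; placeholders stand for actual package versions and must be adapted to your kernel and distribution; the exact package set varies by release):&lt;br /&gt;
&lt;br /&gt;
 # On a server node: patched kernel, Lustre modules, utilities, &lt;br /&gt;
 # ldiskfs backing file system module, and e2fsprogs &lt;br /&gt;
 rpm -ivh kernel-lustre-&amp;lt;ver&amp;gt;.rpm lustre-modules-&amp;lt;ver&amp;gt;.rpm lustre-&amp;lt;ver&amp;gt;.rpm lustre-ldiskfs-&amp;lt;ver&amp;gt;.rpm e2fsprogs-&amp;lt;ver&amp;gt;.rpm &lt;br /&gt;
 # On a patchless client: client modules and client utilities only &lt;br /&gt;
 rpm -ivh lustre-client-modules-&amp;lt;ver&amp;gt;.rpm lustre-client-&amp;lt;ver&amp;gt;.rpm &lt;br /&gt;
&lt;br /&gt;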
For packages used to patch a client kernel (optional), see [http://wiki.lustre.org/manual/LustreManual20_HTML/InstallingLustre.html#50438261_pgfId-1294006 Section 8.1: &#039;&#039;Preparing to Install the Lustre Software&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_Projects&amp;diff=12239</id>
		<title>Lustre Projects</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_Projects&amp;diff=12239"/>
		<updated>2011-01-20T17:57:03Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page lists Lustre™ projects that may span multiple releases, as well as features in development that have not yet been assigned to a release. You can find links to feature descriptions for current and future releases of Lustre on the [[Learn]] page.&lt;br /&gt;
&lt;br /&gt;
Projects: &lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Metadata Performance Project]]&lt;br /&gt;
&amp;lt;li&amp;gt;[[Running Hadoop with Lustre]]&lt;br /&gt;
&amp;lt;li&amp;gt;[[Lustre HPCS Activities]]&lt;br /&gt;
&amp;lt;li&amp;gt;[[ZFS and Lustre]]&lt;br /&gt;
&amp;lt;li&amp;gt;[[Windows Native Client]]&lt;br /&gt;
&amp;lt;/ul&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Features:&lt;br /&gt;
&amp;lt;ul&amp;gt;&lt;br /&gt;
&amp;lt;li&amp;gt;[[Clustered Metadata]]&lt;br /&gt;
&amp;lt;/ul&amp;gt;&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_System_Configuration_Utilities&amp;diff=12238</id>
		<title>Lustre System Configuration Utilities</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_System_Configuration_Utilities&amp;diff=12238"/>
		<updated>2011-01-20T17:47:20Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Jan 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A set of system configuration utilities is provided with Lustre. These utilities include:&lt;br /&gt;
&lt;br /&gt;
: &#039;&#039;&#039;mkfs.lustre&#039;&#039;&#039; - Used to format a disk for a Lustre service.&lt;br /&gt;
&lt;br /&gt;
: &#039;&#039;&#039;tunefs.lustre&#039;&#039;&#039; - Used to modify configuration information on a Lustre target disk.&lt;br /&gt;
&lt;br /&gt;
: &#039;&#039;&#039;lctl&#039;&#039;&#039; - Used to directly control Lustre via an &#039;&#039;ioctl&#039;&#039; interface, allowing various configuration, maintenance and debugging features to be accessed.&lt;br /&gt;
&lt;br /&gt;
: &#039;&#039;&#039;mount.lustre&#039;&#039;&#039; - Used to start a Lustre client or target service.&lt;br /&gt;
&lt;br /&gt;
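To make the roles of these utilities concrete, the following is a minimal, illustrative bring-up sequence for a file system with one combined MGS/MDT and one OST. The device names, the &#039;&#039;temp&#039;&#039; file system name, and the NID &#039;&#039;10.2.0.1@tcp0&#039;&#039; are placeholders; adapt them to your own configuration.&lt;br /&gt;
&lt;br /&gt;
 # Format a combined MGS/MDT and an OST for a file system called temp &lt;br /&gt;
 mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb &lt;br /&gt;
 mkfs.lustre --fsname=temp --ost --mgsnode=10.2.0.1@tcp0 /dev/sdc &lt;br /&gt;
 # Mounting a target device starts the corresponding Lustre service &lt;br /&gt;
 mount -t lustre /dev/sdb /mnt/mdt &lt;br /&gt;
 mount -t lustre /dev/sdc /mnt/ost0 &lt;br /&gt;
 # Mount the file system on a client &lt;br /&gt;
 mount -t lustre 10.2.0.1@tcp0:/temp /mnt/lustre &lt;br /&gt;
 # Display the configuration recorded on a target disk &lt;br /&gt;
 tunefs.lustre /dev/sdb &lt;br /&gt;
&lt;br /&gt;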
For examples using these utilities, see:&lt;br /&gt;
* [[Completing Basic Administrative Tasks]] - many of these procedures use one or more system configuration utilities. For example, in [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreOperations.html#50438194_pgfId-1307131 &#039;&#039;Running Multiple Lustre File Systems&#039;&#039;], &#039;&#039;mkfs.lustre&#039;&#039; is used to set up a Lustre installation with two file systems.&lt;br /&gt;
* [[Configuring the Lustre File System]] - &#039;&#039;mkfs.lustre&#039;&#039; and &#039;&#039;tunefs.lustre&#039;&#039; are used to configure a Lustre file system after installation.&lt;br /&gt;
* [[Managing OSTs]] and related subtopics - &#039;&#039;lctl&#039;&#039; is used to show the status of an OST, to deactivate and activate an OST, to set weighting for free space, and to create a new pool and add an OST; a brief command sketch follows below.&lt;br /&gt;
&lt;br /&gt;
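As a brief sketch of the &#039;&#039;lctl&#039;&#039; operations mentioned above (the device number and the &#039;&#039;temp.pool1&#039;&#039; pool name are illustrative only; the pool commands are run on the MGS node):&lt;br /&gt;
&lt;br /&gt;
 # List configured Lustre devices and their states &lt;br /&gt;
 lctl dl &lt;br /&gt;
 # Deactivate, then reactivate, an OST by its device number from lctl dl &lt;br /&gt;
 lctl --device 7 deactivate &lt;br /&gt;
 lctl --device 7 activate &lt;br /&gt;
 # Create an OST pool and add the first two OSTs to it &lt;br /&gt;
 lctl pool_new temp.pool1 &lt;br /&gt;
 lctl pool_add temp.pool1 OST[0-1] &lt;br /&gt;
&lt;br /&gt;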
For more information about the Lustre system configuration utilities, see [http://wiki.lustre.org/manual/LustreManual20_HTML/SystemConfigurationUtilities_HTML.html#50438219_pgfId-5529 Chapter 36: &#039;&#039;System Configuration Utilities&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;]. This chapter also describes other system configuration utilities, as well as utilities to manage large clusters, perform application profiling, and test and debug Lustre.&lt;br /&gt;
&lt;br /&gt;
For additional information and examples, see the following chapters in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;]:&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringLustre.html#50438267_pgfId-1290860 Chapter 10: &#039;&#039;Configuring Lustre&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringLustre.html#50438267_pgfId-1290956 Section 10.1.1: &#039;&#039;Simple Lustre Configuration Example&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/ConfiguringLustre.html#50438267_pgfId-1293213 Section 10.2: &#039;&#039;Additional Configuration Options&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreMonitoring.html#50438273_pgfId-5529 Chapter 12: &#039;&#039;Lustre Monitoring&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreOperations.html#50438194_pgfId-1289959 Chapter 13: &#039;&#039;Lustre Operations&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
* [http://wiki.lustre.org/manual/LustreManual20_HTML/UserUtilities_HTML.html#50438206_pgfId-5529 Chapter 32: &#039;&#039;User Utilities&#039;&#039;]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2006&amp;diff=12237</id>
		<title>Lustre User Group 2006</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2006&amp;diff=12237"/>
		<updated>2011-01-20T17:45:16Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
The fourth annual Lustre User Group Spring meeting was held April 18-20, 2006, on Hilton Head Island, SC.&lt;br /&gt;
&lt;br /&gt;
The Lustre User Group was founded by Pacific Northwest National Laboratory and Lawrence Livermore National Laboratory to promote the use of the open source Lustre file system for HPC and scalable I/O. The annual meeting is an opportunity for current and prospective Lustre users to share and develop best practices, provide direct feedback to the Lustre development team, and receive an update on our plans for the coming year.&lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
==LUG 2006 Conference Schedule==&lt;br /&gt;
The target attendees for this meeting are users or administrators of a Lustre file system, and partners/resellers with Lustre offerings. Developers or researchers working with Lustre are also welcome, but should note that there is little content specifically for developers.&lt;br /&gt;
&lt;br /&gt;
To allow for requested topics and submitted presentations, the agenda has several open slots. Please let us know what you would like to see in the program, and we&#039;ll do our best to accommodate the most popular requests.&lt;br /&gt;
&lt;br /&gt;
If you would like to make a presentation at the meeting, please send a short proposal to lustre2006-submissions@clusterfs.com. Because slots are limited, please restrict your presentation to topics of general interest such as best practices, lessons learned, research projects based on Lustre, or similar. Please do not submit sales presentations, and please submit before March 24.&lt;br /&gt;
&lt;br /&gt;
The last two half-day sessions will provide a structured, organized forum for identifying and clarifying Lustre&#039;s future requirements. The goals are to gather input from customers and partners, identify conflicting requirements, and prioritize the rest. This replaces the half-day group discussions of the previous two LUG meetings.&lt;br /&gt;
&lt;br /&gt;
While everyone is welcome to attend the first 1.5 days of the meeting, we must ask that organizations designate no more than two representatives for the Lustre Requirements Forum. These individuals have historically been the liaisons from strategic Lustre partners and users who can speak to their organizations&#039; requirements. To help us get an accurate count, please indicate on the registration form whether you will attend the Lustre Requirements Forum.&lt;br /&gt;
&lt;br /&gt;
===Tuesday April 18, 2006=== &lt;br /&gt;
&lt;br /&gt;
9:00-9:30 a.m.  Welcome &amp;amp; Introduction&lt;br /&gt;
Dr. Peter Braam - President &amp;amp; CEO &lt;br /&gt;
&lt;br /&gt;
9:30-10:00 a.m.  [[Media:Lug_06_cfs_business_overview.pdf|Business Overview]] &lt;br /&gt;
Jeffrey Denworth - Director of Sales &lt;br /&gt;
 &lt;br /&gt;
10:00-10:30 a.m.  [[Media:Lug-supportpresentation-060418.pdf|Support &amp;amp; QA Update]]&lt;br /&gt;
Peter Bojanic - Director of Engineering &lt;br /&gt;
 &lt;br /&gt;
10:30-11:00 a.m.  Break &lt;br /&gt;
&lt;br /&gt;
11:00 a.m.-12:00 p.m.  Lustre now and future:&lt;br /&gt;
architecture &amp;amp; roadmap &lt;br /&gt;
&lt;br /&gt;
12:00-1:00 p.m.  Lunch &lt;br /&gt;
&lt;br /&gt;
1:00-1:30 p.m.  Lustre now and future:&lt;br /&gt;
architecture &amp;amp; roadmap &lt;br /&gt;
&lt;br /&gt;
1:30-2:00 p.m.  Experiences with Lustre in a Seismic Processing Environment &lt;br /&gt;
Chevron Corporation&lt;br /&gt;
 &lt;br /&gt;
2:00-2:30 p.m.  [[Media:Lug06-osu.pdf|Evaluating Lustre on DDR &amp;amp; SDR InfiniBand]]&lt;br /&gt;
Dr. DK Panda - Ohio State University&lt;br /&gt;
 &lt;br /&gt;
3:00-3:30 p.m.  Break &lt;br /&gt;
&lt;br /&gt;
3:30-3:45 p.m. [[Media:Lug06-ncsa.pdf|Lustre - WAN Update]]&lt;br /&gt;
Chris Cribbs - NCSA &lt;br /&gt;
&lt;br /&gt;
3:45-4:10 p.m.  CEA TERA10 100GB/s + Lustre Deployment &lt;br /&gt;
Stephane Thiell - CEA DAM&lt;br /&gt;
 &lt;br /&gt;
4:10-4:30 p.m.  [[Media:Lug06-bull.pdf|Bull Lustre Management Tools for Large Clusters]]&lt;br /&gt;
Johann Lombardi - Bull&lt;br /&gt;
 &lt;br /&gt;
6:00-9:00 p.m. Lustre Users Reception&lt;br /&gt;
&lt;br /&gt;
===Wednesday April 19, 2006 ===&lt;br /&gt;
&lt;br /&gt;
9:00-9:30 a.m.  [[Media:Lug06-liquid.pdf|Converged Communications Systems Parallel File Systems and Copious IO]]&lt;br /&gt;
Jose Miguel Peleato - Liquid &lt;br /&gt;
&lt;br /&gt;
9:30-10:00 a.m.  [[Media:Lug06-hp.pdf|Key Issues Deploying Lustre at HP Customer Sites]]&lt;br /&gt;
Fergal Mc Carthy - HP&lt;br /&gt;
 &lt;br /&gt;
10:00-10:30 a.m.  Indiana University&#039;s Data Capacitor &lt;br /&gt;
Steve Simms - Indiana University &lt;br /&gt;
&lt;br /&gt;
10:30-11:00 a.m.  Break &lt;br /&gt;
&lt;br /&gt;
11:00-11:30 a.m.  Cray Experiences Deploying Terascale Lustre Environments&lt;br /&gt;
Peter Rigsbee - Cray &lt;br /&gt;
&lt;br /&gt;
11:30 a.m.-12:00 p.m.  [[Media:Lug06-ddn.pdf|Storage HW and Petascale I/O]]&lt;br /&gt;
Dave Fellinger - DataDirect Networks &lt;br /&gt;
&lt;br /&gt;
12:00-1:00 p.m.  Lunch &lt;br /&gt;
&lt;br /&gt;
1:00-5:00 p.m.  &lt;br /&gt;
&lt;br /&gt;
Lustre Requirements Forum&lt;br /&gt;
&lt;br /&gt;
-Requirements Survey&lt;br /&gt;
&lt;br /&gt;
-Prioritization&lt;br /&gt;
&lt;br /&gt;
-Refinement of Specific Requirements&lt;br /&gt;
&lt;br /&gt;
-Break&lt;br /&gt;
&lt;br /&gt;
-Implementation Scenarios&lt;br /&gt;
&lt;br /&gt;
-Review&lt;br /&gt;
&lt;br /&gt;
===Thursday April 20, 2006 ===&lt;br /&gt;
&lt;br /&gt;
9:00 a.m.-12:00 p.m.  &lt;br /&gt;
&lt;br /&gt;
Lustre Requirements Forum&lt;br /&gt;
&lt;br /&gt;
-Requirements Survey&lt;br /&gt;
&lt;br /&gt;
-Prioritization&lt;br /&gt;
&lt;br /&gt;
-Refinement of Specific Requirements&lt;br /&gt;
&lt;br /&gt;
-Break&lt;br /&gt;
&lt;br /&gt;
-Implementation Scenarios&lt;br /&gt;
&lt;br /&gt;
-Review &lt;br /&gt;
&lt;br /&gt;
--------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
==Registration==&lt;br /&gt;
Registration is open to all; however, attendance is limited, so please register early. Although the registration deadline was Friday, March 24, 2006, we will continue to accept registrations until the conference is full, but we can no longer guarantee accommodations.&lt;br /&gt;
&lt;br /&gt;
There is a conference fee of US$300 to cover the costs of hosting the meeting. Payments may be made by purchase order, credit card (all major cards accepted through PayPal), money order, or check.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
==Location, Travel, and Hotel==&lt;br /&gt;
The meeting will be held at [http://www.starwoodhotels.com/westin/property/overview/index.html?propertyID=1050 The Westin Resort, Hilton Head Island], 3 miles from the Hilton Head Island Airport (HHH) and about 45 miles from Savannah International Airport (SAV). We have negotiated a special group rate of $199/night, and reserved a block of rooms for our attendees. In addition, there is a reserved block of Government Per Diem rate rooms at $112/night.&lt;br /&gt;
Rooms must be booked before March 24, 2006 to take advantage of this group rate.&lt;br /&gt;
We are in the process of completing the reservation arrangements; please check back on February 14 for details about how to book these discounted rooms.&lt;br /&gt;
&lt;br /&gt;
-------------------------------------------------------------------------------&lt;br /&gt;
&lt;br /&gt;
==Contact==&lt;br /&gt;
To submit a presentation proposal, please send email to lustre2006-submissions@clusterfs.com.&lt;br /&gt;
For all other questions and comments related to the meeting, please send email to lustre2006@clusterfs.com.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
* LUG Requirements Forum - [[Media:LUG-Requirements-060420-final.pdf|LUG-Requirements-060420-final.pdf]] | [http://wiki.lustre.org/images/7/78/LUG-Requirements-060420-final.xls LUG-Requirements-060420-final.xls]&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2007&amp;diff=12236</id>
		<title>Lustre User Group 2007</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2007&amp;diff=12236"/>
		<updated>2011-01-20T17:44:53Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
The fifth annual Lustre User Group meeting was held in the stylish South Beach district of Miami Beach, Florida, April 23-25, 2007, at the beautiful [http://www.thepalmshotel.com Palms Hotel].&lt;br /&gt;
&lt;br /&gt;
The Lustre User Group Conference is the premier event for learning new technical information, acquiring best practices, and sharing knowledge about Lustre Technology. It&#039;s a once-a-year opportunity for users to get answers, advice, and suggestions regarding their specific Lustre implementations. Attendees will have access to experts and peers who will share their real-world experiences. You will also be able to meet with Lustre management and developers to discuss upcoming enhancements and capabilities. Register today while space is still available!&lt;br /&gt;
&lt;br /&gt;
== LUG 2007 Agenda ==&lt;br /&gt;
===Monday April 23, 2007 ===&lt;br /&gt;
9:00-9:30 a.m. &lt;br /&gt;
[[Media:Lug-opening-talk.pdf|Welcome &amp;amp; Introduction]]&lt;br /&gt;
Dr. Peter Braam - President &amp;amp; CEO &lt;br /&gt;
 &lt;br /&gt;
9:30-10:00 a.m.  Corporate Update &lt;br /&gt;
Kevin Canady - Vice President of Business Development &lt;br /&gt;
&lt;br /&gt;
10:00-10:30 a.m.  [[Media:Lug07-engineering.pdf|Engineering &amp;amp; QA Update]]&lt;br /&gt;
Peter Bojanic - Vice President of Engineering &lt;br /&gt;
 &lt;br /&gt;
10:30-11:00 a.m.  Break &lt;br /&gt;
&lt;br /&gt;
11:00 a.m.-12:00 p.m.  [[Media:Lug07-architecture.pdf|Lustre: Architecture &amp;amp; Roadmap]]&lt;br /&gt;
Dr. Peter Braam - President &amp;amp; CEO &lt;br /&gt;
 &lt;br /&gt;
12:00-1:00 p.m.  Lunch &lt;br /&gt;
&lt;br /&gt;
1:00-1:30 p.m.  Lustre v1.6 Features &amp;amp; Benefits (Part I) &lt;br /&gt;
Nathan Rutman - Principal Engineer &lt;br /&gt;
 &lt;br /&gt;
1:30-2:00 p.m.  Lustre v1.6 Features &amp;amp; Benefits (Part II) &lt;br /&gt;
Nathan Rutman - Principal Engineer &lt;br /&gt;
 &lt;br /&gt;
2:00-2:30 p.m.  [[Media:Lug07_ornl.pdf|Lustre Center of Excellence Reports]] &lt;br /&gt;
Shane Canon - Oak Ridge National Lab&lt;br /&gt;
 &lt;br /&gt;
3:00-3:30 p.m.  Break &lt;br /&gt;
&lt;br /&gt;
3:30-4:00 p.m. [[Media:Lug07_sun.pdf|HPC Installations]] &lt;br /&gt;
Larry McIntosh - Sun&lt;br /&gt;
 &lt;br /&gt;
4:00-4:30 p.m.  [[Media:Lustre_user_group_-_tacc_presentation.pdf|TACC Overview &amp;amp; Lustre Experiences]]&lt;br /&gt;
Karl Schulz - TACC&lt;br /&gt;
 &lt;br /&gt;
4:30-5:00 p.m.&lt;br /&gt;
[[Media:Sicortex.pdf|SiCortex and Lustre]] &lt;br /&gt;
&lt;br /&gt;
6:00-7:30 p.m. Lustre Users Reception&lt;br /&gt;
&lt;br /&gt;
===Tuesday April 24, 2007=== &lt;br /&gt;
&lt;br /&gt;
9:00-9:30 a.m.  [[Media:Cealug2007.pdf|Lustre Centre of Excellence Reports: HSM]]&lt;br /&gt;
Stephane Thiell - French Atomic Energy Commission (CEA) &lt;br /&gt;
&lt;br /&gt;
9:30-10:00 a.m.  Lustre Deployments Lessons Learned &lt;br /&gt;
Steve Simms - Indiana University&lt;br /&gt;
 &lt;br /&gt;
10:00-10:30 a.m.  Lustre Deployments Lessons Learned: Oil &amp;amp; Gas Industry &lt;br /&gt;
Keith Gray - BP &lt;br /&gt;
&lt;br /&gt;
10:30-11:00 a.m.  Break &lt;br /&gt;
&lt;br /&gt;
11:00-11:30 a.m.  [[Media:Ncar_lustre.pdf|Lustre Activities]]&lt;br /&gt;
Adam Boggs - National Center for Atmospheric Research (NCAR)&lt;br /&gt;
 &lt;br /&gt;
11:30 a.m.-12:00 p.m.  Lustre Tuning Tutorial&lt;br /&gt;
Nathan Rutman - Principal Engineer &lt;br /&gt;
&lt;br /&gt;
12:00-1:00 p.m.  Lunch &lt;br /&gt;
&lt;br /&gt;
1:00-2:00 p.m.  Lustre Troubleshooting Tutorial&lt;br /&gt;
Nathan Rutman - Principal Engineer &lt;br /&gt;
&lt;br /&gt;
2:00-2:30 p.m.  [[Media:Lug07-dresden.pdf|Center for Information and High Performance Computing]]&lt;br /&gt;
Guido Juckeland - Dresden Technical University &lt;br /&gt;
&lt;br /&gt;
3:00-3:30 p.m.  Break &lt;br /&gt;
&lt;br /&gt;
3:30-4:00 p.m.  [[Media:Ddn-lug07.pdf|Storage Architecture and Roadmap]]&lt;br /&gt;
Dave Fellinger - DDN &lt;br /&gt;
&lt;br /&gt;
4:00-4:30 p.m.  [[Media:Lug07-lustrenetworking.pdf|Lustre Networking: Features &amp;amp; Benefits]]&lt;br /&gt;
Dr. Peter Braam &lt;br /&gt;
&lt;br /&gt;
4:30-5:00 p.m.  [[Media:Markgary.pdf|Lustre Centre of Excellence Reports]]&lt;br /&gt;
Mark Gary - Lawrence Livermore National Lab&lt;br /&gt;
&lt;br /&gt;
===Wednesday April 25, 2007 ===&lt;br /&gt;
9:00-11:30 a.m. &lt;br /&gt;
Lustre Requirements Forum&lt;br /&gt;
&lt;br /&gt;
-Requirements Survey&lt;br /&gt;
&lt;br /&gt;
-Prioritization&lt;br /&gt;
&lt;br /&gt;
-Refinement of Specific Requirements&lt;br /&gt;
&lt;br /&gt;
-Break&lt;br /&gt;
&lt;br /&gt;
-Voting&lt;br /&gt;
&lt;br /&gt;
-Review&lt;br /&gt;
 &lt;br /&gt;
11:30 a.m.-12:00 p.m. &lt;br /&gt;
Wrap-up &amp;amp; Thanks&lt;br /&gt;
 &lt;br /&gt;
We look forward to seeing you at the Palms Hotel in Miami Beach! Call 1-800-550-0505 to book rooms at the Palms. Be sure to ask for the Lustre group rate.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2008&amp;diff=12235</id>
		<title>Lustre User Group 2008</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2008&amp;diff=12235"/>
		<updated>2011-01-20T17:44:27Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;__TOC__&lt;br /&gt;
Presentations and videos from the Lustre User Group (LUG) 2008 are available below.&lt;br /&gt;
&lt;br /&gt;
[http://picasaweb.google.com/overheardinpdx/LustreUserGroup2008 LUG2008 Photo Gallery] by Carl Wimmi&lt;br /&gt;
&lt;br /&gt;
==LUG 2008 AGENDA==&lt;br /&gt;
===APRIL 28 MONDAY===&lt;br /&gt;
&lt;br /&gt;
[[Media:Bojanic.pdf|Welcome &amp;amp; Introduction Slides]] and  [http://video.google.com/videoplay?docid=4679328954718003275&amp;amp;hl=en Video]&lt;br /&gt;
- Peter Bojanic, Director, Lustre Group, Sun&lt;br /&gt;
&lt;br /&gt;
Lustre™ Business Update (slides and video not made available)&lt;br /&gt;
- Kevin Canady, Director of Business Development, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Barton08.pdf|Lustre Engineering Update Slides]] and [http://video.google.com/videoplay?docid=-7161557916859061133&amp;amp;hl=en Video]&lt;br /&gt;
- Eric Barton, Lustre Lead Engineer, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Lug-2008-Braam.pdf|Sun Storage Perspective &amp;amp; Lustre Architecture Slides]] and [http://video.google.com/videoplay?docid=-3180589066635872586&amp;amp;hl=en Video]&lt;br /&gt;
- Peter Braam, VP of Lustre, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Lustre-RoadmapUpdate-20080428.pdf|Lustre Roadmap Update Slides]] and [http://video.google.com/videoplay?docid=-3461652841731942937&amp;amp;hl=en Video]&lt;br /&gt;
- Bryon Neitzel, Lustre, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:Stearman_LUG_2008.pdf|Lustre at LLNL  Slides]] and [http://video.google.com/videoplay?docid=-7652222883022083898&amp;amp;hl=en Video]&lt;br /&gt;
- Marc Stearman, LLNL&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG-talk-canon.pdf|A Global File System with Lustre &amp;amp; LNET Routers Slides]] and [http://video.google.com/videoplay?docid=-826148550313733004&amp;amp;hl=en Video]&lt;br /&gt;
- Shane Canon, ORNL&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08_Cray_HPCS.pdf|DARPA HPCS Project Slides]] and [http://video.google.com/videoplay?docid=-5970693206965456534&amp;amp;hl=en Video]&lt;br /&gt;
- John Carrier, Cray&lt;br /&gt;
&lt;br /&gt;
[[Media:Lustre_HSM_Lug08.pdf|ILM - Lustre HSM Slides]] and [http://video.google.com/videoplay?docid=-3801023942247542718&amp;amp;hl=en Video]&lt;br /&gt;
- Aurelien Degremont, CEA&lt;br /&gt;
&lt;br /&gt;
[[Media:Hedges_-LustreUsersGroup2008.pdf|Customer Support &amp;amp; I/O Applications from an LLNL Perspective Slides]] and [http://video.google.com/videoplay?docid=4651328086223399469&amp;amp;hl=en Video]&lt;br /&gt;
- Richard Hedges, LLNL&lt;br /&gt;
&lt;br /&gt;
===APRIL 29 TUESDAY===&lt;br /&gt;
Keynote Speaker&lt;br /&gt;
- Andy Bechtolsheim (slides and video not made available)&lt;br /&gt;
&lt;br /&gt;
[http://video.google.com/videoplay?docid=-1374897290924249244&amp;amp;hl=en FlexRex: HPC Community Portal]&lt;br /&gt;
- Rich Brueckner, HPC Group, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08-Lustre-Application-IO.pdf|Scientific Application Performance with Lustre Slides]] and [http://video.google.com/videoplay?docid=-7545630997322244657&amp;amp;hl=en Video]&lt;br /&gt;
- Tom Wang, Lustre Group, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08-Lustre-LNET-selftest.pdf|Lnet Selftest Slides]]  (no Video)&lt;br /&gt;
- Isaac Huang, Lustre Group, Sun&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08-IU-wan.pdf|BOF on Lustre over WAN - IU Slides]]  (no Video)&lt;br /&gt;
- Eric Barton, ORNL, TACC, IU&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG2008-NEARSC-Cardo.pdf|Lustre Tools Session - Backup &amp;amp; Quota Slides]] (no Video)&lt;br /&gt;
- Nicholas P. Cardo, LBNL&lt;br /&gt;
&lt;br /&gt;
[[Media:Shine_LUG2008.pdf|Lustre Tools Session - Shine, Administration Tool Slides]] and [http://video.google.com/videoplay?docid=569932018410904962&amp;amp;hl=en Video]&lt;br /&gt;
- Stephane Thiell, CEA&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08-DDN-sm.pdf|Lustre Partner: DataDirect Networks Slides]] and [http://video.google.com/videoplay?docid=-7929228264502851730&amp;amp;hl=en Video]&lt;br /&gt;
- Dave Fellinger, DDN&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Lustre Expertise Session (No Videos):&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
1. [[Media:LUG08-Lustre-uOSS.pdf|Userspace Server Architecture Slides]]&lt;br /&gt;
- Andreas Dilger&lt;br /&gt;
&lt;br /&gt;
2. [[Media:Lustre_on_ZFS-Ricardo.pdf|Lustre on ZFS Slides]]&lt;br /&gt;
- Ricardo Correia&lt;br /&gt;
&lt;br /&gt;
3. [[Media:LUG08-Lustre-NFS.pdf|NFS/pNFS export Slides]]&lt;br /&gt;
- Oleg Drokin&lt;br /&gt;
&lt;br /&gt;
4. [[Media:LUG2008-Lustre-cmd.pdf|Clustered Metadata (CMD) Slides]]&lt;br /&gt;
- Andreas Dilger&lt;br /&gt;
&lt;br /&gt;
5. [[Media:LUG08-head-mds-danilov.pdf|Metadata Server Stack Slides]]&lt;br /&gt;
- Nikita Danilov&lt;br /&gt;
&lt;br /&gt;
6. [[Media:LUG08-som-vitaly-fertman.pdf|Metadata Performance with Size on MDS Slides]]&lt;br /&gt;
- Vitaly Fertman&lt;br /&gt;
&lt;br /&gt;
===APRIL 30 WEDNESDAY===&lt;br /&gt;
&lt;br /&gt;
Joint CEA/NNSA BOF on PetaScale I/O Issues (no Video)&lt;br /&gt;
- Mark Gary, LLNL&lt;br /&gt;
&lt;br /&gt;
[[Media:LUG08-Linux_HPC_Software_Stack.pdf|Linux HPC Software Stack Slides]] and [http://video.google.com/videoplay?docid=-1289609202868980540&amp;amp;hl=en Video]&lt;br /&gt;
- Makia Minich, Lustre Group,  Sun&lt;br /&gt;
&lt;br /&gt;
[http://video.google.com/videoplay?docid=-2769491780923413644&amp;amp;hl=en Lustre Engineering Panel Q&amp;amp;A Video] (no Slides)&lt;br /&gt;
- Lustre Group&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2009&amp;diff=12234</id>
		<title>Lustre User Group 2009</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2009&amp;diff=12234"/>
		<updated>2011-01-20T17:44:02Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thanks to all of the attendees and presenters at the 2009 Lustre User Group (LUG). We had the highest attendance in LUG&#039;s seven-year history and an excellent program. &lt;br /&gt;
&lt;br /&gt;
This year&#039;s LUG agenda appears below with links to the presentations and videos.&lt;br /&gt;
&lt;br /&gt;
== LUG09 Agenda, Slides, and Videos ==&lt;br /&gt;
LUG09, a three-day event, featured a workshop and numerous presentations on select Lustre features, upcoming enhancements, site-specific experiences using Lustre, and much more.   &lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;April 15 - Wednesday&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Lustre Workshop&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
This full-day seminar covered topics of interest to Lustre users, and featured the following presentations from our senior-level developers:&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;LNET Mysteries&#039;&#039;&#039;&#039;&#039; - Isaac Huang&lt;br /&gt;
:[[Media:LNet_LUG_09.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/node/1179274162 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Building Lustre, Protocol Basics, and Debugging&#039;&#039;&#039;&#039;&#039; - Johann Lombardi&lt;br /&gt;
:[[Media:Lug_johann.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274164 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;IO and Tuning&#039;&#039;&#039;&#039;&#039; - Oleg Drokin&lt;br /&gt;
:[[Media:Oleg_Workshop_LUG_09.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274167 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre File System and Tuning&#039;&#039;&#039;&#039;&#039; - Andreas Dilger&lt;br /&gt;
:[[Media:Dilger_Workshop_LUG_09.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274169 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;April 16 - Thursday&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Lustre Presentations and Forums&#039;&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Welcome &amp;amp; Introduction&#039;&#039;&#039;&#039;&#039; - Peter Bojanic, Director, Lustre Group, Sun&lt;br /&gt;
:[[Media:PeterBojanic.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274073 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Scalability&#039;&#039;&#039;&#039;&#039; - Eric Barton, Lead Engineer, Sun&lt;br /&gt;
:[[Media:Barton_LUG_2009_Scalability.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274098 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre 1.8 Features&#039;&#039;&#039;&#039;&#039; - Nathan Rutman, Lustre, Sun&lt;br /&gt;
:[[Media:NathanRutman.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274124 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Quality Initiative&#039;&#039;&#039;&#039;&#039; - Robert Read, Lustre, Sun&lt;br /&gt;
:[[Media:RobertReadTalk1.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274140 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Making Movies with Lustre - Fun and Fantasy with Furry Creatures&#039;&#039;&#039;&#039;&#039; - John Leedham, Daire Byrne, James Rose, Framestore&lt;br /&gt;
:[[Media:JamesRose.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274187 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre at Harvard&#039;&#039;&#039;&#039;&#039; - Teresa Kaltz&lt;br /&gt;
:[[Media:TeresaKaltz.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274189 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre Recovery&#039;&#039;&#039;&#039;&#039; - Robert Read, Lustre, Sun&lt;br /&gt;
:[[Media:RobertReadTalk2.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274084 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre at Sandia&#039;&#039;&#039;&#039;&#039; - Steve Monk, Sandia&lt;br /&gt;
:[[Media:SNL_LUG_2009.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:Video not recorded for this session&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre at CEA: TERA-100&#039;&#039;&#039;&#039;&#039; - Stéphane Thiell, CEA&lt;br /&gt;
:[[Media:StephaneThiell.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274102 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Sun Lustre Storage Cluster: Lessons Learned and Best Practices&#039;&#039;&#039; - Joey Jablonski, Sun&lt;br /&gt;
:[[Media:JoeyJablonski.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274146 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Managing High Availability on a Shine-Equipped Lustre Cluster&#039;&#039;&#039; - Oliver Hargoaa, Bull&lt;br /&gt;
:[[Media:OliverHargoaa.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274195 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&amp;lt;big&amp;gt;April 17 - Friday&amp;lt;/big&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Lustre Presentations and Forums&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre on Hyperion&#039;&#039;&#039;&#039;&#039; - Marc Stearman, LLNL&lt;br /&gt;
:[[Media:LUGHyperion2009.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274196 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Meeting the I/O Demands of the World&#039;s Most Powerful Scientific Computing Complex&#039;&#039;&#039;&#039;&#039; - Galen Shipman, ORNL&lt;br /&gt;
:[[Media:Gshipman_lug_2009.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274139 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre 2.0 Overview&#039;&#039;&#039;&#039;&#039; - Andreas Dilger, Lustre, Sun&lt;br /&gt;
:[[Media:AndreasDilger.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274141 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;HSM: Lustre/HSM Project - Manage your Data&#039;&#039;&#039;&#039;&#039; - Aurélien Degrémont, CEA&lt;br /&gt;
:[[Media:AurelienDegremont.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274197 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre in a WAN Environment&#039;&#039;&#039;&#039;&#039; - James Hofmann and Dardo Kleiner, Naval Research Laboratory, David McMillen, System Fabric Works&lt;br /&gt;
:[[Media:JamesHoffman.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274198 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;DDN Storage and Lustre&#039;&#039;&#039;&#039;&#039; - Jeff Denworth, DataDirect Networks&lt;br /&gt;
:[[Media:DDN_LUG_Talk_-_Public.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274199 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Monitoring I/O Performance on Lustre&#039;&#039;&#039;&#039;&#039; - Andrew Uselton, NERSC&lt;br /&gt;
:[[Media:AndrewUselton.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274200 &#039;&#039;Video&#039;&#039;] &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;Lustre as the Core of a Data Centric Best Practice HPC Workflow&#039;&#039;&#039;&#039;&#039; - Bob Murphy, Sun&lt;br /&gt;
:[[Media:LUG_Best_Practice_HPC_Workflow.pdf|&#039;&#039;Slides&#039;&#039;]]&lt;br /&gt;
:[http://slx.sun.com/1179274158 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Lessons Learned: Lustre on the TeraGrid&#039;&#039;&#039; - Stephen Simms, Indiana University&lt;br /&gt;
:[[Media:StephenSimms.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274201 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Open Fabrics Update&#039;&#039;&#039; - Bill Boas, OpenFabrics Alliance&lt;br /&gt;
:[[Media:OpenFabrics_Alliance.pdf|&#039;&#039;Slides&#039;&#039;]] &lt;br /&gt;
:[http://slx.sun.com/1179274202 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Open Q&amp;amp;A with Lustre Senior Engineers&#039;&#039;&#039; - Lustre Group&lt;br /&gt;
:[http://slx.sun.com/1179274203 &#039;&#039;Video&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Historical LUGs===&lt;br /&gt;
&lt;br /&gt;
* [[Lustre User Group 2008]]&lt;br /&gt;
* [[Lustre User Group 2007]]&lt;br /&gt;
* [[Lustre User Group 2006]]&lt;br /&gt;
&lt;br /&gt;
===Contact Us===&lt;br /&gt;
&lt;br /&gt;
If you would like to receive Lustre User Group announcements, please subscribe to the [[Lustre Mailing_Lists|lustre-announce]] mailing list.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2010&amp;diff=12233</id>
		<title>Lustre User Group 2010</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Lustre_User_Group_2010&amp;diff=12233"/>
		<updated>2011-01-20T17:43:16Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Thanks to all of the attendees and presenters at the 2010 Lustre User Group (LUG). This year&#039;s event was hosted at the beautiful Seascape Resort in Monterey Bay, California, and featured a Lustre Advanced User seminar and two days of informative presentations on select Lustre features, upcoming enhancements, and site-specific experiences using Lustre.&lt;br /&gt;
&lt;br /&gt;
==Lustre Advanced User Seminar 2010 Agenda==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;Wednesday, April 14&#039;&#039;&#039;&amp;lt;/big&amp;gt; - Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;8:00 - 9:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Breakfast - Riviera Room, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;9:00 - 10:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG-2010-tricksRev.pdf|Lustre Tips and Tricks]] &#039;&#039;&#039;&#039;&#039; &lt;br /&gt;
:Andreas Dilger, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;10:30 - 11:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;11:00 - 12:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG_User_Seminar_Presentation_-_Hill.pdf|Administering Lustre at Scale, Lessons Learned at ORNL]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Jason Hill, ORNL&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;12:00 - 1:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Lunch - Riviera Room, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;1:00 - 2:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Lustre_hsm_seminar_lug10.pdf|A Look Inside HSM]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Aurélien Degrémont and Thomas Leibovici, CEA&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;2:30 - 3:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3:00 - 5:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:A_Deep_Dive_into_Lustre_Recovery_Mechanisms.pdf|A Deep Dive into Lustre Recovery Mechanisms]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Johann Lombardi, Oracle&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
==LUG 2010 Agenda==&lt;br /&gt;
&lt;br /&gt;
LUG 2010, a two-day event, featured a workshop and numerous presentations on select Lustre features, upcoming enhancements, site-specific experiences using Lustre, and much more.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;Thursday, April 15 - LUG Day 1&#039;&#039;&#039;&amp;lt;/big&amp;gt; - Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;8:00 - 9:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Breakfast - Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;9:00 - 9:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG_Keynote_Presentation-Bojanic-100415.pdf|LUG Kickoff]]&#039;&#039;&#039;&#039;&#039; &lt;br /&gt;
:Peter Bojanic, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;9:30 - 10:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Lustre_Development_Barton_LUG_2010.pdf|Lustre Development]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Eric Barton, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;10:00 - 10:30&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;10:30 - 11:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG10_Peter_Jones_1.8.x_Update.pdf|Lustre 1.8 Update]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Peter Jones, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;11:30 - 12:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Rread-lustre20.pdf|Lustre 2.0]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Robert Read, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;12:00 - 1:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Lunch - Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;1:00 - 1:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Buisson_NUMIOA_n_multirail.pdf|Getting the Best from Lustre in a NUMIOA and Multi-rail IB Environment]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Sebastien Buisson, Bull&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;1:30 - 2:15&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:SNL_LUG_2010_Sandia-rev2.pdf|What RedSky and Lustre Have Accomplished]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Steve Monk, Sandia&lt;br /&gt;
:Joe Mervini, Sandia&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;2:15 - 3:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Gshipman_LUG_2010.pdf|Lustre at the OLCF: Experiences and Path Forward]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Galen Shipman, ORNL&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;3:00 - 3:30&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3:30 - 4:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Lustre_user_group_2010.pdf|Comprehensive Lustre I/O Tracking with Vampir]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Michael Kluge, ZIH&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;4:00 - 4:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG2010-CLUMEQ.pdf|Lustre Deployment and Early Experiences]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Florent Parent, Clumeq&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;4:30 - 5:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG2010_Simms.pdf|Indiana University&#039;s Lustre WAN - Empowering Production Workflows on the TeraGrid]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Stephen Simms, Indiana University&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;5:00 - 5:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:CEALUG2010_v1.pdf|LCE: Lustre at CEA]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Stéphane Thiell, CEA&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;5:30 - 6:30&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Break&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;6:30&#039;&#039;&#039;&lt;br /&gt;
|Dinner Reception - Seascape Resort: To be announced at Thursday&#039;s event&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;big&amp;gt;&#039;&#039;&#039;Friday, April 16 - LUG Day 2&#039;&#039;&#039;&amp;lt;/big&amp;gt; - Seascape Grand Ballroom, 2nd Floor, Seascape Conference Center&lt;br /&gt;
&lt;br /&gt;
{| border=1 cellpadding=0&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;8:00 - 9:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot; width=574|&#039;&#039;&#039;Breakfast - Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;9:00 - 9:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:lustre_smp_scaling_LUG_2010.pdf|Lustre SMP Scaling Improvements]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Liang Zhen, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;9:30 - 10:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Lustre/HSM Binding&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:[[Media:hsm_lug10.aurelien.pdf|Aurelien Degremont, CEA]]&lt;br /&gt;
:[[Media:LUG2010.hsm.hua.2.pdf|Hua Huang, Oracle]]&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;10:00 - 10:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;Lustre Enabled WAN in Government, NRL&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Jeremy Filizetti&lt;br /&gt;
:[[Media:LUG2010_Filizetti_NRL.pdf|NRL]]&lt;br /&gt;
:[[Media:LUG2010_Filizetti_SMSi.pdf|SMSi]]&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;10:30 - 11:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;11:00 - 11:30&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:lug2010_BP.pdf|Hedging Our Filesystem Bet]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Kent Blancett, BP&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;11:30 - 12:00&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:jhammond-lug.pdf|Analysis and Recovery from Lustre Faults/Failures on Ranger]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:John Hammond, TACC&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;12:00 - 1:00&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Lunch - Riviera/Bayview Rooms, 3rd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;1:00 - 1:45&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:lug2010.13-palencia.pdf|Kerberized Lustre 2.0 over the WAN]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Josephine Palencia, PSC&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;1:45 - 2:15&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Cardo_LUG2010.pdf|Reaping the Benefits of MetaData]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Nic Cardo, NERSC&lt;br /&gt;
|-&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;2:15 - 2:45&#039;&#039;&#039;&lt;br /&gt;
|style=&amp;quot;background:#E0E0E0&amp;quot;|&#039;&#039;&#039;Coffee Break - Foyer, 2nd Floor, Seascape Conference Center&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;2:45 - 3:15&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:Lustre_MacOS.pdf|Porting Lustre to Operating Systems Other than Linux]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Ken Hornstein, NRL&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3:15 - 3:45&#039;&#039;&#039;&lt;br /&gt;
|&#039;&#039;&#039;&#039;&#039;[[Media:LUG 2010 Lustre Community.pdf|Lustre and Community Development]]&#039;&#039;&#039;&#039;&#039;&lt;br /&gt;
:Daniel Ferber, Oracle&lt;br /&gt;
|-&lt;br /&gt;
|&#039;&#039;&#039;3:45&#039;&#039;&#039;&lt;br /&gt;
|LUG 2010 Concludes&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Change_Log_2.0&amp;diff=12232</id>
		<title>Change Log 2.0</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Change_Log_2.0&amp;diff=12232"/>
		<updated>2011-01-20T17:42:04Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Aug 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
= Welcome to  Lustre 2.0.0 =&lt;br /&gt;
&lt;br /&gt;
This release represents a departure from the previous release trains, which were closely related to one another. As such, there is no previous release to show changes against; future 2.x releases will show changes from this or subsequent releases. You can find more details in the following sources:&lt;br /&gt;
&lt;br /&gt;
[http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre 2.0 Manual&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
[http://wiki.lustre.org/images/6/60/821-2077-10.pdf &#039;&#039;Lustre 2.0.0 Release Notes&#039;&#039;]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Support for networks:&#039;&#039;&#039;&lt;br /&gt;
* socklnd   - any kernel supported by Lustre,&lt;br /&gt;
* qswlnd    - Qsnet kernel modules 5.20 and later,&lt;br /&gt;
* openiblnd - IbGold 1.8.2,&lt;br /&gt;
* o2iblnd   - OFED 1.1, 1.2.0, 1.2.5, 1.3, and 1.4.1,&lt;br /&gt;
* viblnd    - Voltaire ibhost 3.4.5 and later,&lt;br /&gt;
* ciblnd    - Topspin 3.2.0,&lt;br /&gt;
* iiblnd    - Infiniserv 3.3 + PathBits patch,&lt;br /&gt;
* gmlnd     - GM 2.1.22 and later,&lt;br /&gt;
* mxlnd     - MX 1.2.10 or later,&lt;br /&gt;
* ptllnd    - Portals 3.3 / UNICOS/lc 1.5.x, 2.0.x&lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
&#039;&#039;&#039;Server support for kernels:&#039;&#039;&#039;&lt;br /&gt;
* 2.6.18-164.11.1.el5 (RHEL 5)&lt;br /&gt;
* 2.6.18-164.11.1.0.1.el5 (OEL 5)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Client support for unpatched kernels:&#039;&#039;&#039; see [http://wiki.lustre.org/index.php?title=Patchless_Client &amp;quot;Patchless Client&amp;quot;]&lt;br /&gt;
* 2.6.18-164.11.1.el5 (RHEL 5),&lt;br /&gt;
* 2.6.18-164.11.1.0.1.el5 (OEL 5)&lt;br /&gt;
* 2.6.16.60-0.42.8 (SLES 10),&lt;br /&gt;
* 2.6.27.19-5 (SLES11)&lt;br /&gt;
* 2.6.29.4-167.fc11 (FC11)&lt;br /&gt;
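&lt;br /&gt;
To check whether a client is already running one of these kernels, compare the running kernel against the list above; a minimal sketch:&lt;br /&gt;
&lt;br /&gt;
 uname -r                   # running kernel version &lt;br /&gt;
 rpm -qa | grep -i lustre   # Lustre packages already installed, if any &lt;br /&gt;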
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Recommended e2fsprogs version:&#039;&#039;&#039;&lt;br /&gt;
* 1.41.10-sun2&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Managing_Lustre_Failover&amp;diff=12231</id>
		<title>Managing Lustre Failover</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Managing_Lustre_Failover&amp;diff=12231"/>
		<updated>2011-01-20T17:40:25Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Feb 2010)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
For information about managing Lustre™ failover, see [http://wiki.lustre.org/manual/LustreManual20_HTML/UnderstandingFailover.html#50438253_pgfId-1304327 Chapter 3: &#039;&#039;Understanding Failover in Lustre&#039;&#039;] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html &#039;&#039;Lustre Operations Manual&#039;&#039;].&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=NFS_vs._Lustre&amp;diff=12230</id>
		<title>NFS vs. Lustre</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=NFS_vs._Lustre&amp;diff=12230"/>
		<updated>2011-01-20T17:39:20Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Oct 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt;&#039;&#039;DISCLAIMER - EXTERNAL CONTRIBUTOR CONTENT&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;small&amp;gt;&#039;&#039;This content was submitted by an external contributor. We provide this information as a resource for the Lustre™ open-source community, but we make no representation as to the accuracy, completeness or reliability of this information.&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
The following is based on a post written by Lee Ward and posted on the Lustre-discuss mailing list and a couple of corrections supplied by Daniel Kobras and Nicolas Williams have been added. Further expansion and correction is welcome.&lt;br /&gt;
&lt;br /&gt;
I&#039;ll begin by motivating both NFS and Lustre. Why do they exist? What &lt;br /&gt;
problems do they solve? &lt;br /&gt;
&lt;br /&gt;
==NFS==&lt;br /&gt;
&lt;br /&gt;
Way back in the day, ethernet and the concept of a workstation got &lt;br /&gt;
popular. There were many tools to copy files between machines but few &lt;br /&gt;
ways to share a name space, that is, to have the directory hierarchy and &lt;br /&gt;
its content directly accessible to an application on a foreign machine. This &lt;br /&gt;
made file sharing awkward. The model was to copy the file or files to &lt;br /&gt;
the workstation where the work was going to be done, do the work, and &lt;br /&gt;
copy the results back to some, hopefully, well maintained central &lt;br /&gt;
machine. &lt;br /&gt;
&lt;br /&gt;
There &#039;&#039;were&#039;&#039; solutions to this at the time. I recall an attractive &lt;br /&gt;
alternative called RFS (I believe) from the Bell Labs folks, via some &lt;br /&gt;
place in England if I&#039;m remembering right; it&#039;s been a looong time after &lt;br /&gt;
all. It had issues though. The nastiest issue for me was that if a &lt;br /&gt;
client went down the service side would freeze, at least partially. &lt;br /&gt;
Since this could happen willy-nilly, depending on the user&#039;s wishes and &lt;br /&gt;
how well the power button on his workstation was protected, together &lt;br /&gt;
with the power cord and ethernet connection, this freezing of service &lt;br /&gt;
for any amount of time was difficult to accept. This was so even in a &lt;br /&gt;
rather small collection of machines. &lt;br /&gt;
&lt;br /&gt;
The problem with RFS (?) and its cousins was that they were all &lt;br /&gt;
stateful. The service side depended on state that was held at the &lt;br /&gt;
client. If the client went down, the service side couldn&#039;t continue &lt;br /&gt;
without a whole lot of recovery, timeouts, etc. It was a very *annoying* &lt;br /&gt;
problem. &lt;br /&gt;
&lt;br /&gt;
In the latter half of the 1980s (am I remembering right?) SUN proposed &lt;br /&gt;
an open protocol called NFS. An implementation using this protocol could &lt;br /&gt;
do most everything RFS(?) could but it didn&#039;t suffer the service-side &lt;br /&gt;
hangs. It couldn&#039;t. It was stateless. If the client went down, the &lt;br /&gt;
server just didn&#039;t care. If the server went down, the client had the &lt;br /&gt;
opportunity to either give up on the local operation, usually with an &lt;br /&gt;
error returned, or wait. It was always up to the user and for client &lt;br /&gt;
failures the annoyance was limited to the user(s) on that client.&lt;br /&gt;
&lt;br /&gt;
SUN, also, wisely desired the protocol to be ubiquitous. They published &lt;br /&gt;
it. They wanted *everyone* to adopt it. More, they would help &lt;br /&gt;
competitors. SUN held interoperability bake-a-thons to help with this. &lt;br /&gt;
It looks like they succeeded, all around :) &lt;br /&gt;
&lt;br /&gt;
Let&#039;s sum up, then. The goals for NFS were: &lt;br /&gt;
&lt;br /&gt;
# Share a local file system name space across the network. &lt;br /&gt;
# Do it in a robust, resilient way. Pesky FS issues because some user kicked the cord out of his workstation were unacceptable. &lt;br /&gt;
# Make it ubiquitous. SUN was a workstation vendor. They sold servers but almost everyone had a VAX in their back pocket where they made the infrastructure investment. SUN needed the high-value machines to support this protocol. &lt;br /&gt;
&lt;br /&gt;
==Lustre==&lt;br /&gt;
&lt;br /&gt;
Lustre has a weird story and I&#039;m not going to go into all of it. The &lt;br /&gt;
shortest, relevant, part is that while there was at least one solution &lt;br /&gt;
that DOE/NNSA felt acceptable, GPFS, it was not available on anything &lt;br /&gt;
other than an IBM platform and because DOE/NNSA had a semi-formal policy &lt;br /&gt;
of buying from different vendors at each of the three labs we were kind &lt;br /&gt;
of stuck. Other file systems, existing and imminent, at the time were &lt;br /&gt;
examined but they were all distributed file systems and we needed IO &lt;br /&gt;
&#039;&#039;&#039;&#039;&#039;bandwidth&#039;&#039;&#039;&#039;&#039;. We needed lots and lots of bandwidth. &lt;br /&gt;
&lt;br /&gt;
We also needed that ubiquitous thing that SUN had as one of their goals. &lt;br /&gt;
We didn&#039;t want to pay millions of dollars for another GPFS. We felt that &lt;br /&gt;
would only be painting ourselves into a corner. Whatever we did, the &lt;br /&gt;
result &#039;&#039;had&#039;&#039; to be open. It also had to be attractive to smaller sites &lt;br /&gt;
as we wanted to turn loose of the thing at some point. If it was &lt;br /&gt;
attractive for smaller machines we felt we would win in the long term &lt;br /&gt;
as, eventually, the cost to further and maintain this thing was spread &lt;br /&gt;
across the community. &lt;br /&gt;
&lt;br /&gt;
As far as technical goals, I guess we just wanted GPFS, but open. More &lt;br /&gt;
though, we wanted it to survive in our platform roadmaps for at least a &lt;br /&gt;
decade. The actual technical requirements for the contract that DOE/NNSA &lt;br /&gt;
executed with HP (CFS was the sub-contractor responsible for &lt;br /&gt;
development) can be found here: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;http://www-cs-students.stanford.edu/~trj/SGS_PathForward_SOW.pdf&amp;gt; &lt;br /&gt;
&lt;br /&gt;
LLNL used to host this but it&#039;s no longer there? Oh well, hopefully this &lt;br /&gt;
link will be good for a while, at least. &lt;br /&gt;
&lt;br /&gt;
I&#039;m just going to jump to the end and sum the goals up: &lt;br /&gt;
&lt;br /&gt;
# It must do &#039;&#039;everything&#039;&#039; NFS can. We relaxed the stateless thing though, see the next item for why. &lt;br /&gt;
# It must support full POSIX semantics: last writer wins, POSIX locks, etc. &lt;br /&gt;
# It must support all of the transports we are interested in.&lt;br /&gt;
# It must be scalable, in that we can cheaply attach storage and both performance (reading *and* writing) and capacity within a single mounted file system increase in direct proportion.&lt;br /&gt;
# We wanted it to be easy, administratively. Our goal was that it be no harder than NFS to set up and maintain. We were involving too many folks with PhDs in the operation of our machines at the time. Before you yell &#039;&#039;&#039;FAIL&#039;&#039;&#039;, I&#039;ll say we did try. I&#039;ll also say we didn&#039;t make CFS responsible for this part of the task. Don&#039;t blame them overly much, OK?&lt;br /&gt;
# We recognized we were asking for a stateful system, so we wanted to mitigate that by having some focus on resiliency. These were big machines and clients died all the time.&lt;br /&gt;
# While not in the SOW, we structured the contract to accomplish some future form of wide acceptance. We wanted it to be ubiquitous. &lt;br /&gt;
&lt;br /&gt;
That&#039;s a lot of goals! For the technical ones, the main ones are all &lt;br /&gt;
pretty much structured to ask two things of what became Lustre. First, &lt;br /&gt;
give us everything NFS functionally does but go far beyond it in &lt;br /&gt;
performance. Second, give us everything NFS functionally does but make &lt;br /&gt;
it completely equivalent to a local file system, semantically. &lt;br /&gt;
&lt;br /&gt;
There&#039;s a little more we have to consider. NFS4 is a different beast &lt;br /&gt;
than NFS2 or NFS3. NFS{2,3} had some serious issues that became more &lt;br /&gt;
prominent as time went by. First, security: it had none. Folks had &lt;br /&gt;
bandaged on some different things to try to cure this but they weren&#039;t &lt;br /&gt;
standard across platforms. Second, it couldn&#039;t do the full POSIX &lt;br /&gt;
required semantics. That was attacked with the NFS lock protocols but it &lt;br /&gt;
was such an after-thought it will always remain problematic. Third, new &lt;br /&gt;
authorization possibilities introduced by Microsoft and then POSIX, &lt;br /&gt;
called ACLs, had no way of being accomplished. &lt;br /&gt;
&lt;br /&gt;
NFS4 addresses those by: &lt;br /&gt;
&lt;br /&gt;
# Introducing state. (Lots of resiliency mechanisms introduced to offset the downside of this, too.) NFS4 implementations are able to handle POSIX advisory locks, but unlike Lustre, they don&#039;t support full POSIX filesystem semantics. For example, NFS4 still follows the traditional NFS close-to-open cache consistency model whereas with Lustre, individual write()s are atomic and become immediately visible to all clients (see the sketch after this list). &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt; NFSv4 can&#039;t handle O_APPEND, and has those close-to-open semantics. Those are the two large departures from POSIX in NFSv4. &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt; NFSv4.1 also adds metadata/data separation and data distribution, much like Lustre, but with the same POSIX semantics departures mentioned above. Also, NFSv4.1&#039;s &amp;quot;pNFS&amp;quot; concept doesn&#039;t have room for &amp;quot;capabilities&amp;quot; (in the distributed filesystem sense, not in the Linux capabilities sense), which means that OSSs and MDSs have to communicate to get permissions to be enforced. There are also differences with respect to recovery, etcetera. &amp;lt;br /&amp;gt; &amp;lt;br /&amp;gt; One thing about NFS is that it&#039;s meant to be neutral w.r.t. the type of filesystem it shares. So NFSv4, for example, has features for dealing with filesystems that don&#039;t have a notion of persistent inode numbers. Lustre, by contrast, has its own on-disk format and therefore can&#039;t be used to share just any type of filesystem.&lt;br /&gt;
# Formalizing and offering standardized authentication headers. &lt;br /&gt;
# Introducing ACLs that map to equivalents in POSIX and Microsoft. &lt;br /&gt;
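&lt;br /&gt;
To make the cache-consistency difference concrete, here is a minimal shell sketch (the shared mount point /mnt/shared and the file name are hypothetical). Under NFS close-to-open semantics the reader is only guaranteed to see the data after the writer closes its descriptor; under Lustre the completed write() is already visible:&lt;br /&gt;
&lt;br /&gt;
 # on client A: open a descriptor and write, but do not close yet &lt;br /&gt;
 exec 3&amp;gt;&amp;gt;/mnt/shared/demo.txt &lt;br /&gt;
 echo &amp;quot;first line&amp;quot; &amp;gt;&amp;amp;3 &lt;br /&gt;
 # on client B: NFS close-to-open makes no guarantee this sees the data yet; &lt;br /&gt;
 # Lustre guarantees the completed write() is immediately visible &lt;br /&gt;
 cat /mnt/shared/demo.txt &lt;br /&gt;
 # on client A: close the descriptor; only now does NFS guarantee that a &lt;br /&gt;
 # subsequent open() on client B will see the data &lt;br /&gt;
 exec 3&amp;gt;&amp;amp;- &lt;br /&gt;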
&lt;br /&gt;
==Strengths and Weaknesses of the Two==&lt;br /&gt;
&lt;br /&gt;
NFS4 does most everything Lustre can, with one very important exception: &lt;br /&gt;
IO bandwidth. &lt;br /&gt;
&lt;br /&gt;
Both seem able to deliver metadata performance at roughly the same &lt;br /&gt;
speeds. File create, delete, and stat rates are about the same. NetApp &lt;br /&gt;
seems to have a partial enhancement. They bought the Spinnaker goodies &lt;br /&gt;
some time back and have deployed that technology, and redirection &lt;br /&gt;
too(?), within their servers. The good thing about that is two users in &lt;br /&gt;
different directories *could* leverage two servers, independently, and, &lt;br /&gt;
so, scale metadata performance. It&#039;s not guaranteed but at least there &lt;br /&gt;
is the possibility. If the two users are in the same directory, it&#039;s not &lt;br /&gt;
much different, though, I&#039;m thinking. Someone correct me if I&#039;m wrong? &lt;br /&gt;
&lt;br /&gt;
Both can offer full POSIX now. It&#039;s nasty in both cases but, yes, in &lt;br /&gt;
theory you can export mail directory hierarchies with locking. &lt;br /&gt;
&lt;br /&gt;
The NFS client and server are far easier to set up and maintain. The &lt;br /&gt;
tools to debug issues are advanced. While the Lustre folks have done &lt;br /&gt;
much to improve this area, NFS is just leaps and bounds ahead. It&#039;s &lt;br /&gt;
easier to deal with NFS than Lustre. Just far, far easier, still.&lt;br /&gt;
 &lt;br /&gt;
NFS is just built in to everything. My TV has it, for heck&#039;s sake. Lustre &lt;br /&gt;
is, seemingly, always an add-on. It&#039;s also a moving target. We&#039;re &lt;br /&gt;
constantly futzing with it, upgrading, and patching. Lustre might be &lt;br /&gt;
compilable most everywhere we care about but building it isn&#039;t trivial. &lt;br /&gt;
The supplied modules are great but, still, moving targets in that we &lt;br /&gt;
wait for SUN to catch up to the vendor-supplied changes that affect &lt;br /&gt;
Lustre. Given Lustre&#039;s size and interaction with other components in the &lt;br /&gt;
OS, that happens far more frequently than desired. NFS just plain wins &lt;br /&gt;
the ubiquity argument at present. &lt;br /&gt;
&lt;br /&gt;
NFS IO performance does *not* scale. It&#039;s still an in-band protocol. The &lt;br /&gt;
data is carried in the same message as the request and is, practically, &lt;br /&gt;
limited in size. Reads are more scalable than writes; a popular &lt;br /&gt;
file-segment can be satisfied from the cache on reads but develops &lt;br /&gt;
issues at some point. For writes, NFS3 and NFS4 help in that they &lt;br /&gt;
directly support write-behind so that a client doesn&#039;t have to wait for &lt;br /&gt;
data to go to disk, but it&#039;s just not enough. If one streams data &lt;br /&gt;
to/from the store, it can be larger than the cache. A client that &lt;br /&gt;
reads a file already made &amp;quot;hot&amp;quot;, but at a very different rate, just loses. &lt;br /&gt;
A client, writing, is always looking for free memory to buffer content. &lt;br /&gt;
Again, too many of these, simultaneously, and performance descends to &lt;br /&gt;
the native speed of the attached back-end store and that store can only &lt;br /&gt;
get so big. &lt;br /&gt;
&lt;br /&gt;
Lustre IO performance *does* scale. It uses a 3rd-party transfer. &lt;br /&gt;
Requests are made to the metadata server and IO moves directly between &lt;br /&gt;
the affected storage component(s) and the client. The more storage &lt;br /&gt;
components, the less possibility of contention between clients and the &lt;br /&gt;
more data can be accepted/supplied per unit time. &lt;br /&gt;
NFS4 has a proposed extension, called pNFS, to address this problem. It &lt;br /&gt;
just introduces the 3rd-party data transfers that Lustre enjoys. If and &lt;br /&gt;
when that is a standard, and is well supported by clients and vendors, &lt;br /&gt;
the really big technical difference will virtually disappear. It&#039;s been &lt;br /&gt;
a long time coming, though. It&#039;s still not there. Will it ever be, &lt;br /&gt;
really? &lt;br /&gt;
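&lt;br /&gt;
The way this scaling is exposed to users is file striping. A minimal sketch with the standard lfs utility (the file and mount-point names are placeholders): striping a new file across all available OSTs lets a single stream draw bandwidth from every storage server at once.&lt;br /&gt;
&lt;br /&gt;
 lfs setstripe -c -1 /mnt/lustre/bigfile   # stripe the new file across all OSTs &lt;br /&gt;
 lfs getstripe /mnt/lustre/bigfile         # show the resulting layout &lt;br /&gt;
 dd if=/dev/zero of=/mnt/lustre/bigfile bs=1M count=4096   # streaming write &lt;br /&gt;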
&lt;br /&gt;
The answer to the NFS vs. Lustre question comes down to the workload for &lt;br /&gt;
a given application then, since they do have overlap in their solution &lt;br /&gt;
space. If I were asked to look at a platform and recommend a solution I &lt;br /&gt;
would worry about IO bandwidth requirements. If the platform in question &lt;br /&gt;
were read-mostly and, practically, never needed sustained read or &lt;br /&gt;
write bandwidth, NFS would be an easy choice. I&#039;d even think hard about &lt;br /&gt;
NFS if the platform created many files but all were very small; today&#039;s &lt;br /&gt;
filers have very respectable IOPS rates. If it came down to IO &lt;br /&gt;
bandwidth, I&#039;m still on the parallel file system bandwagon. NFS just &lt;br /&gt;
can&#039;t deal with that at present and I do still have the folks, in house, &lt;br /&gt;
to manage the administrative burden. &lt;br /&gt;
&lt;br /&gt;
Done. That was useful for me. I think five years ago I might have opted &lt;br /&gt;
for Lustre in the &amp;quot;create many small files&amp;quot; case, where I would consider &lt;br /&gt;
NFS today, so re-examining the motivations, relative strengths, and &lt;br /&gt;
weaknesses of both was useful. As I said, I did this more as a &lt;br /&gt;
self-exercise than anything else but I hope you can find something &lt;br /&gt;
useful here, too.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Patchless_Client&amp;diff=12229</id>
		<title>Patchless Client</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Patchless_Client&amp;diff=12229"/>
		<updated>2011-01-20T17:38:02Z</updated>

		<summary type="html">&lt;p&gt;Sbarthel: /* Versions */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;small&amp;gt;&#039;&#039;(Updated: Oct 2009)&#039;&#039;&amp;lt;/small&amp;gt;&lt;br /&gt;
&lt;br /&gt;
As of Lustre™ 1.6.0, Lustre supports running the client modules on most current &amp;quot;stock&amp;quot; kernels without the need for patches to the client kernel.  Patches are still required to the Lustre &#039;&#039;server&#039;&#039; kernel, but since these nodes generally run only Lustre, this is not a major limitation. Pre-built &amp;quot;patchless&amp;quot; RPMs can be found at the [http://www.sun.com/software/products/lustre/get.jsp Lustre download site]. &lt;br /&gt;
&lt;br /&gt;
We strongly recommend that you use a pre-built RPM rather than building your own. However, if you need to run a kernel on the client that is not one of the supported kernels, it is possible to build from source for the kernels listed at the top of the Change Log for each release (see [[Lustre Release Information]]).&lt;br /&gt;
&lt;br /&gt;
The Lustre configure script will automatically detect the unpatched kernel and disable building the servers.&lt;br /&gt;
&lt;br /&gt;
 [lustre]$ ./configure --with-linux=/unpatched/kernel/source &lt;br /&gt;
&lt;br /&gt;
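Once configure completes, a typical sequence to build, install, and mount the patchless client might look like the following sketch (the MGS node name, network type, and file system name are placeholders; on RPM-based systems you may prefer the make rpms target):&lt;br /&gt;
&lt;br /&gt;
 [lustre]$ make &lt;br /&gt;
 [lustre]# make install                      # as root; or: make rpms &lt;br /&gt;
 [lustre]# modprobe lustre                   # load the client modules &lt;br /&gt;
 [lustre]# mount -t lustre mgsnode@tcp0:/fsname /mnt/lustre &lt;br /&gt;
&lt;br /&gt;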
=== Versions ===&lt;br /&gt;
See the [http://wiki.lustre.org/index.php/Lustre_Release_Information#Lustre_Test_Matrix Lustre Test Matrix] for a list of kernels that are known to work with patchless Lustre clients.  Note that Oracle does not test all of these kernel versions with each Lustre release, but it is expected that kernels between the oldest and newest listed versions work with a given Lustre release.&lt;br /&gt;
&lt;br /&gt;
=== Known Issues ===&lt;br /&gt;
&lt;br /&gt;
Many NFS-related bugs are also addressed by the patchless client fixes.&lt;/div&gt;</summary>
		<author><name>Sbarthel</name></author>
	</entry>
</feed>