<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://wiki.old.lustre.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Braam</id>
	<title>Obsolete Lustre Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://wiki.old.lustre.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Braam"/>
	<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Special:Contributions/Braam"/>
	<updated>2026-05-08T07:12:27Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.39.7</generator>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Main_Page&amp;diff=3866</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Main_Page&amp;diff=3866"/>
		<updated>2007-08-08T19:38:48Z</updated>

		<summary type="html">&lt;p&gt;Braam: /* Lustre Centres of Excellence™ */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is Lustre®? ==&lt;br /&gt;
&lt;br /&gt;
Lustre® is a scalable, secure, robust, highly available cluster file system. It is designed, developed, and maintained by [http://www.clusterfs.com Cluster File Systems, Inc.]&lt;br /&gt;
&lt;br /&gt;
The central goal is the development of a next-generation cluster file system which can serve clusters with tens of thousands of nodes, provide petabytes of storage, and move hundreds of GB/sec with state-of-the-art security and management infrastructure.&lt;br /&gt;
&lt;br /&gt;
Lustre runs on many of the largest Linux clusters in the world, and is included by CFS&#039;s partners as a core component of their cluster offering (examples include HP StorageWorks SFS, and the Cray XT3 and XD1 supercomputers). Today&#039;s users have also demonstrated that Lustre scales down as well as it scales up, and runs in production on clusters as small as 4 and as large as 25,000 nodes.&lt;br /&gt;
&lt;br /&gt;
The latest version of Lustre is always available from [http://www.clusterfs.com Cluster File Systems, Inc.] Public Open Source releases of Lustre are available under the GNU General Public License. These releases are [http://www.clusterfs.com/download.html found here], and are used in production supercomputing environments worldwide.&lt;br /&gt;
&lt;br /&gt;
To be informed of Lustre releases, subscribe to the [https://mail.clusterfs.com/mailman/listinfo/lustre-announce lustre-announce mailing list].&lt;br /&gt;
&lt;br /&gt;
Lustre development would not have been possible without funding and guidance from many organizations, including several U.S. National Laboratories, early adopters, and product partners.&lt;br /&gt;
&lt;br /&gt;
== User Resources ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.clusterfs.com/download.html Lustre Downloads]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Lustre_Quick_Start Lustre Quick Start] &lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Mailing_Lists Mailing Lists]&lt;br /&gt;
* [http://manual.lustre.org Lustre Operations Manual] &lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Bug_Filing Filing Bugs]&lt;br /&gt;
* [https://bugzilla.lustre.org/showdependencytree.cgi?id=2374 Lustre Knowledge Base]&lt;br /&gt;
&lt;br /&gt;
== Advanced User Resources ==&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=BuildLustre How to build Lustre]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Kerb_Lustre Kerberos]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=LustreTuning Lustre Tuning]&lt;br /&gt;
* [http://wiki.lustre.org/images/7/78/LustreManual.html#Chapter_III-2._LustreProc LustreProc] - A guide to the &#039;&#039;&#039;proc&#039;&#039;&#039; tunable parameters for Lustre and their usage. It describes several of the &#039;&#039;&#039;proc&#039;&#039;&#039; tunables, including those that affect the client&#039;s RPC behavior, and prepares for a substantial reorganization of &#039;&#039;&#039;proc&#039;&#039;&#039; entries.&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=LibLustre_HowTo Liblustre HowTo]&lt;br /&gt;
&lt;br /&gt;
== Lustre Centres of Excellence™ ==&lt;br /&gt;
&lt;br /&gt;
* [http://ornl-lce.clusterfs.com/index.php?title=Main_Page ORNL]&lt;br /&gt;
* [http://www.clusterfs-mwiki.com/cea-lce CEA]&lt;br /&gt;
* [http://www.clusterfs-mwiki.com/llnl-lce LLNL]&lt;br /&gt;
* [http://www.clusterfs-mwiki.com/psc-lce/index.php?title=Main_Page PSC]&lt;br /&gt;
* [http://www.clusterfs-mwiki.com/tsinghua-lce Tsinghua]&lt;br /&gt;
&lt;br /&gt;
== Developer Resources ==&lt;br /&gt;
&lt;br /&gt;
* [http://arch.lustre.org Lustre Architecture]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Contribution_Policy Contribution Policy]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Mailing_List Developer Mailing List]&lt;br /&gt;
* CVS usage&lt;br /&gt;
** [http://wiki.lustre.org/index.php?title=Cvs_Branches CVS Branches] - How to manage branches with CVS.&lt;br /&gt;
** [http://wiki.lustre.org/index.php?title=Cvs_Tips CVS Tips] - Helpful things to know while using Lustre CVS.&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Lustre_Debugging Debugging Lustre] - A guide to debugging Lustre.&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=ZFS_Resources ZFS Resources] - Learn about ZFS.&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Coding_Guidelines Coding Guidelines] - Developer guidelines to avoid problems during Lustre code merges.&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Portals_Ping_Client_Server Portals Ping Server Client] - Kernel modules used to test basic Portals message passing.&lt;br /&gt;
&lt;br /&gt;
== CFS Development Projects  ==&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=IOPerformanceProject I/O Performance]&lt;br /&gt;
&lt;br /&gt;
== Community Development Projects==&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Networking_Development Networking Development]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Diskless_Booting Diskless Booting]&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Drbd_And_Lustre DRBD and Lustre]&lt;br /&gt;
* [http://www.bullopensource.org/lustre Bull - Open Source tools for Lustre]&lt;br /&gt;
* [http://www.sourceforge.net/projects/lmt LLNL - Lustre Monitoring Tool]&lt;br /&gt;
&lt;br /&gt;
== Other Resources ==&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Lustre_Publications Lustre Publications] - Papers and presentations about Lustre&lt;br /&gt;
* [http://wiki.lustre.org/index.php?title=Lustre_User_Group Lustre User Group]&lt;br /&gt;
** LUG Requirements Forum - [http://wiki.lustre.org/images/7/78/LUG-Requirements-060420-final.pdf LUG-Requirements-060420-final.pdf] | [http://wiki.lustre.org/images/7/78/LUG-Requirements-060420-final.xls LUG-Requirements-060420-final.xls]&lt;br /&gt;
** [http://www.clusterfs.com/lug2007.html Lustre User Group 2007]&lt;br /&gt;
** [http://www.clusterfs.com/lug2006.html Lustre User Group 2006]&lt;/div&gt;</summary>
		<author><name>Braam</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Architecture_-_Punch_and_Extent_Migration&amp;diff=9694</id>
		<title>Architecture - Punch and Extent Migration</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Architecture_-_Punch_and_Extent_Migration&amp;diff=9694"/>
		<updated>2007-08-08T16:11:14Z</updated>

		<summary type="html">&lt;p&gt;Braam: Architecture Punch moved to Punch and Extent Migration&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Prototypes ==&lt;br /&gt;
&lt;br /&gt;
=== Punch ===&lt;br /&gt;
&lt;br /&gt;
 punch(inode, start, end, version, flags)&lt;br /&gt;
&lt;br /&gt;
=== Migrate ===&lt;br /&gt;
&lt;br /&gt;
 migrate(src_inode, start, end, tgt_inode, offset, version, flags)&lt;br /&gt;
&lt;br /&gt;
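As a hedged illustration of the semantics these prototypes suggest (an assumption for this sketch, not actual Lustre code): punch deallocates the byte range [start, end) of a file, and migrate moves that range into a target file at a given offset. The version and flags parameters are omitted. A minimal in-memory sketch:

```python
# Hypothetical sketch of the punch/migrate semantics suggested by the
# prototypes above; this is NOT Lustre code. A "file" is modeled as a
# dict mapping byte offset to byte value, so a deallocated range simply
# has no entries, analogous to holes in a sparse file.

def punch(inode, start, end):
    """Deallocate (hole-punch) the byte range [start, end) of inode."""
    for off in range(start, end):
        inode.pop(off, None)

def migrate(src_inode, start, end, tgt_inode, offset):
    """Copy [start, end) of src_inode to tgt_inode at offset, then
    punch the source range (move semantics are assumed here)."""
    for off in range(start, end):
        if off in src_inode:
            tgt_inode[offset + (off - start)] = src_inode[off]
    punch(src_inode, start, end)

src = {i: i for i in range(8)}   # a small "file" holding bytes 0..7
tgt = {}
migrate(src, 2, 5, tgt, 0)
print(sorted(src))               # the source range [2, 5) is now a hole
print(tgt)                       # bytes 2, 3, 4 landed at offsets 0, 1, 2
```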
== References ==&lt;br /&gt;
&lt;br /&gt;
http://arch.lustre.org/index.php?title=QAS_Punch&lt;br /&gt;
&lt;br /&gt;
[[Category:Architecture|Punch]]&lt;/div&gt;</summary>
		<author><name>Braam</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=Architecture_-_Multiple_Interfaces_For_LNET&amp;diff=9729</id>
		<title>Architecture - Multiple Interfaces For LNET</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=Architecture_-_Multiple_Interfaces_For_LNET&amp;diff=9729"/>
		<updated>2007-07-30T16:02:29Z</updated>

		<summary type="html">&lt;p&gt;Braam: /* = Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Use Cases ==&lt;br /&gt;
&lt;br /&gt;
A node may have one or more link sets: sets of nids that will be&lt;br /&gt;
used in an aggregated fashion.&lt;br /&gt;
&lt;br /&gt;
#  one-to-many, many-to-one, many-to-many, and rail situations need to be supported; precisely, link sets with K elements should be able to connect to link sets with L elements.&lt;br /&gt;
##  clients with one interface to servers with two&lt;br /&gt;
##  vice versa&lt;br /&gt;
##  rail situations&lt;br /&gt;
# a link set requires an aggregation descriptor:&lt;br /&gt;
## bandwidth aggregation behavior&lt;br /&gt;
## link level failover/failure recovery model&lt;br /&gt;
###    Some of these are optional or for future versions.  &lt;br /&gt;
###    These descriptors need to go into /etc/modprobe.conf&lt;br /&gt;
#  The MGS will be reached by passing multiple remote addresses describing a failover link set&lt;br /&gt;
#  Aggregation is desirable for links on a single or on multiple LNETs&lt;br /&gt;
#  Utilities like lctl ping &amp;lt;nid&amp;gt; can send packets to an individual nid of an interface and to an aggregated link set (a link set probably needs to be named with a nid).&lt;br /&gt;
#  Clients will connect to servers by naming the server link set. This requirement allows clients outside a firewall to connect to a server behind a firewall where the server has unreachable nids (like 192.168.1.*) which might have a different meaning near the client.&lt;br /&gt;
#  Lustre will see multiple nids only for failover, i.e., no new connection behavior&lt;br /&gt;
# The nids in a link set will be made available through the LNET management node (probably the MGS) to allow dynamic server addition.&lt;br /&gt;
#  Configuration will allow &amp;quot;real failover IP addresses&amp;quot; to be configured.&lt;br /&gt;
# Desirable implementation constraint: a linkset will be a nid on a Lustre LNET, and the routing mechanisms will be used to reach the nid and implement aggregation behavior (LNET bandwidth sharing, failure handling, etc.)&lt;br /&gt;
# It shall be possible to specify Lustre configurations for simultaneous use of different linksets on the same server targets.&lt;br /&gt;
# If modified nids are used, they shall be big enough to contain both linkset modifiers and IPv6 addresses&lt;br /&gt;
# modprobe.conf shall remain a cluster-wide file&lt;br /&gt;
&lt;br /&gt;
== Configuration Management ==&lt;br /&gt;
&lt;br /&gt;
=== Lustre configuration adaptations ===&lt;br /&gt;
&lt;br /&gt;
A Lustre configuration specification must be able to describe linksets&lt;br /&gt;
for each node that shall be used during Lustre setup.&lt;br /&gt;
&lt;br /&gt;
The ip2net configuration directive is extremely similar to what we&lt;br /&gt;
need here.  &lt;br /&gt;
&lt;br /&gt;
 options lnet &#039;ip2linkgroup=&amp;quot;eth-oss-vib-mds 192.168.0.[2-20]@tcp0:ethall;&lt;br /&gt;
 eth-oss-vib-mds 132.6.1.[2,3]@vib0; vib-all *@vib0&amp;quot;&#039;&lt;br /&gt;
&lt;br /&gt;
The ethall directive is a linkset nid modifier, as defined in the &#039;&#039;linkgroup indicators in NIDs&#039;&#039; section below.&lt;br /&gt;
&lt;br /&gt;
Specifying this as a modprobe.conf parameter is very desirable,&lt;br /&gt;
because every node would have linkgroup descriptors which it could use&lt;br /&gt;
to establish routes to aggregated linksets.&lt;br /&gt;
&lt;br /&gt;
The mount command can give a parameter:&lt;br /&gt;
&lt;br /&gt;
 mount -t lustre -o linkgroup=&amp;lt;name&amp;gt;  &amp;lt;mgs-nids-seq&amp;gt;:fsname /mnt/pt&lt;br /&gt;
&lt;br /&gt;
The MGS will map a linkgroup name to a linkset nid (using one of the&lt;br /&gt;
two alternatives below) for each server, to be used by nodes&lt;br /&gt;
connecting to this.  These linkset nids will be in the configuration&lt;br /&gt;
log and can be interpreted by LNET.&lt;br /&gt;
&lt;br /&gt;
This allows, for example, the MDS to connect to OSS&#039;s over IB while&lt;br /&gt;
clients connect to the OSS&#039;s over TCP.&lt;br /&gt;
&lt;br /&gt;
 mount -t lustre -o linkgroup=vib-all  /dev/mds-dev /mnt/pt&lt;br /&gt;
 mount -t lustre -o linkgroup=eth-oss-vib-mds  &amp;lt;mgs-nids-seq&amp;gt;:fsname /mnt/pt&lt;br /&gt;
&lt;br /&gt;
=== linkgroup indicators in NIDs ===&lt;br /&gt;
&lt;br /&gt;
Define linksets in /etc/modprobe.conf without a requirement to&lt;br /&gt;
define a unique nid, e.g.:&lt;br /&gt;
&lt;br /&gt;
 linkset= &amp;lt;linkset-name&amp;gt;[{&amp;lt;aggr params&amp;gt;}]( &amp;lt;iface list&amp;gt; ) &lt;br /&gt;
&lt;br /&gt;
 linkset=eth-all{failover,noloadbalance}(eth0 eth1)&lt;br /&gt;
&lt;br /&gt;
Extend the syntax of the nid from nid = &amp;lt;address&amp;gt;[@&amp;lt;network&amp;gt;] to&lt;br /&gt;
&lt;br /&gt;
 nid = &amp;lt;address&amp;gt;[@&amp;lt;network&amp;gt;][:&amp;lt;linkset name&amp;gt;]&lt;br /&gt;
&lt;br /&gt;
Now: &lt;br /&gt;
 192.168.1.5@tcp0:eth0&lt;br /&gt;
 192.168.1.5@tcp0:eth-all&lt;br /&gt;
become valid nids.&lt;br /&gt;
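A toy sketch of how the extended grammar above could be parsed (this is an illustration only, with assumed behavior; it is not the actual LNET parser):

```python
# Toy parser for the proposed extended nid syntax:
#   nid = address[@network][:linkset-name]
# Illustrative only, NOT LNET code. The address is taken as everything
# before the first '@', so addresses that themselves contain colons
# (e.g. IPv6, per the requirement above) still parse; the linkset
# suffix is only looked for after the network part.

def parse_nid(nid):
    network = linkset = None
    if '@' in nid:
        address, rest = nid.split('@', 1)
        if ':' in rest:
            network, linkset = rest.split(':', 1)
        else:
            network = rest
    else:
        address = nid
    return {'address': address, 'network': network, 'linkset': linkset}

print(parse_nid('192.168.1.5@tcp0:eth-all'))
print(parse_nid('192.168.1.5@tcp0'))
```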
&lt;br /&gt;
== ISSUES ==&lt;br /&gt;
&lt;br /&gt;
# Suppose servers are added with a previously unspecified network. In this case the MGS needs to learn this at addition time; in particular, the MGS would have to reparse its own /etc/modprobe.conf file or get information from the new servers.&lt;br /&gt;
# This requirements discussion does not address the naming of nodes, which might be an additional useful requirement.&lt;br /&gt;
&lt;br /&gt;
[[Category:Architecture|Multiple Network Interfaces]]&lt;br /&gt;
[[Category:QAS|Multiple Network Interfaces]]&lt;/div&gt;</summary>
		<author><name>Braam</name></author>
	</entry>
	<entry>
		<id>http://wiki.old.lustre.org/index.php?title=ZFS_Resources&amp;diff=3699</id>
		<title>ZFS Resources</title>
		<link rel="alternate" type="text/html" href="http://wiki.old.lustre.org/index.php?title=ZFS_Resources&amp;diff=3699"/>
		<updated>2007-07-01T16:13:25Z</updated>

		<summary type="html">&lt;p&gt;Braam: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page contains useful references to ZFS:&lt;br /&gt;
&lt;br /&gt;
== Training ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.sun.com/software/solaris/zfs_learning_center.jsp ZFS courses, fun to watch!]&lt;br /&gt;
&lt;br /&gt;
== Papers &amp;amp; Reference Material ==&lt;br /&gt;
&lt;br /&gt;
* [http://opensolaris.org/os/community/zfs/docs/  ZFS Docs at Sun]&lt;br /&gt;
** Presentation&lt;br /&gt;
** Administration guide&lt;br /&gt;
** Manual pages&lt;br /&gt;
** Disk format specification&lt;br /&gt;
* [http://www.sun.com/bigadmin/features/articles/zfs_part1.scalable.html Two introductory papers on ZFS]&lt;br /&gt;
&lt;br /&gt;
== Presentations ==&lt;br /&gt;
&lt;br /&gt;
* [http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf ZFS: The Last Word in File Systems]&lt;br /&gt;
&lt;br /&gt;
== Links on the Web ==&lt;br /&gt;
&lt;br /&gt;
* [http://blogs.sun.com/bonwick/category/ZFS Jeff Bonwick&#039;s ZFS blog entries] - Blog entries from one of ZFS&#039;s principal designers.&lt;br /&gt;
* [https://developer.berlios.de/projects/zfs-fuse/  DMU and ZFS FUSE on Linux]&lt;br /&gt;
* [http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide# ZFS Best Practices] - A guide with tuning and configuration details.&lt;br /&gt;
* [http://blogs.sun.com/jimlaurent/entry/zfs_resources ZFS Resources]&lt;br /&gt;
** Best practices&lt;br /&gt;
** Various articles with examples to learn from&lt;br /&gt;
** Tuning advice&lt;br /&gt;
* [http://blogs.sun.com/Peerapong/entry/zfs_by_examples ZFS by Examples]&lt;/div&gt;</summary>
		<author><name>Braam</name></author>
	</entry>
</feed>