WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Configuring the Lustre File System


This page describes how to configure a simple Lustre file system consisting of a combined MGS/MDT, an OST, and a client. However, the administrative utilities provided with Lustre can be used to set up systems with many different configurations.

Note: We recommend that you use dotted-quad (dot-decimal) notation for IP addresses (IPv4) rather than host names. This aids in reading debug logs and helps when debugging configurations with multiple interfaces.

Configuring the Lustre File System

Complete these steps to configure Lustre Networking (LNET) and the Lustre file system:

1. Define the module options for Lustre networking (LNET) by adding this line to the /etc/modprobe.conf file. The modprobe.conf file is a Linux configuration file that specifies options applied to kernel modules when they are loaded.

options lnet networks=<network interfaces that LNET can use>
This step restricts LNET to using only the specified network interfaces and prevents LNET from using all network interfaces.
As an alternative to modifying the modprobe.conf file, you can modify the modprobe.local file or the configuration files in the modprobe.d directory.
Note: For details on configuring networking and LNET, see the Configuring LNET section in the Lustre Operations Manual.
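For example, to restrict LNET to a single TCP interface, the line might read as follows (eth0 is an assumed interface name; substitute the interface on your node):

```
# Hypothetical /etc/modprobe.conf entry restricting LNET to eth0 over TCP
options lnet networks=tcp0(eth0)
```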

2. (Optional) Prepare the block devices to be used as OSTs or MDTs. Depending on the hardware used in the MDS and OSS nodes, you may want to set up a hardware or software RAID to increase the reliability of the Lustre system. For more details on how to set up a hardware or software RAID, see the documentation for your RAID controller or see the section “Lustre Software RAID Support” in the Lustre Operations Manual.

3. Create a combined MGS/MDT file system on the block device. On the MDS node, run:

mkfs.lustre --fsname=<fsname> --mgs --mdt <block device name>
The default file system name (fsname) is lustre.
Note: If you plan to generate multiple file systems, the MGS should be on its own dedicated block device.

4. Mount the combined MGS/MDT file system on the block device. On the MDS node, run:

mount -t lustre <block device name> <mount point>
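As a concrete illustration, Steps 3 and 4 might look like this on the MDS node. The file system name temp, block device /dev/sdb, and mount point /mnt/mdt are assumptions for the example, not required values:

```shell
# Hypothetical MDS-side setup: format, then mount the combined MGS/MDT.
# fsname "temp", device /dev/sdb, and mount point /mnt/mdt are assumed.
mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb
mkdir -p /mnt/mdt
mount -t lustre /dev/sdb /mnt/mdt
```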

5. Create the OST. On the OSS node, run:

mkfs.lustre --ost --fsname=<fsname> --mgsnode=<NID> <block device name>
You can have as many OSTs per OSS as the hardware or drivers allow.
You should use only one OST per block device. Optionally, you can create an OST which uses the raw block device and does not require partitioning.
Note: If the block device has more than 8 TB of storage, it must be partitioned due to the ext3 file system limitation. Lustre can support block devices with multiple partitions, but they are not recommended because bottlenecks may result.

6. Mount the OST. On the OSS node where the OST was created, run:

mount -t lustre <block device name> <mount point>
Note: To create additional OSTs, repeat Steps 5 and 6.
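Steps 5 and 6 together might look like this on an OSS node. The fsname temp, MGS NID 10.2.0.1@tcp0, device /dev/sdc, and mount point /mnt/ost0 are assumptions for the example:

```shell
# Hypothetical OSS-side setup: format one OST, then mount it.
# fsname "temp", MGS NID 10.2.0.1@tcp0, device /dev/sdc, and
# mount point /mnt/ost0 are assumed.
mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev/sdc
mkdir -p /mnt/ost0
mount -t lustre /dev/sdc /mnt/ost0
```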

7. Create the client (mount the file system on the client). On the client node, run:

mount -t lustre <MGS node>:/<fsname> <mount point>
Note: To create additional clients, repeat Step 7.
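A client mount following Step 7 might look like this, again with assumed values (MGS NID 10.2.0.1@tcp0, fsname temp, mount point /lustre):

```shell
# Hypothetical client mount; MGS NID 10.2.0.1@tcp0, fsname "temp",
# and mount point /lustre are assumed.
mkdir -p /lustre
mount -t lustre 10.2.0.1@tcp0:/temp /lustre
```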

8. Verify that the file system started and is working by running the UNIX commands df, dd and ls on the client node.

a. Run the df command.
[root@client1 /] df -h
b. Run the dd command.
[root@client1 /] cd /lustre
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2

c. Run the ls command.

[root@client1 /lustre] ls -lsah
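The three checks above can also be combined into one short script. This is a sketch: the mount point /lustre is an assumption, and the script falls back to a temporary directory so the commands themselves can be exercised outside a Lustre cluster:

```shell
#!/bin/sh
# Sketch of the Step 8 checks. MNT is the client mount point
# (assumed /lustre); falls back to a temporary directory so the
# commands can be exercised outside a Lustre cluster.
MNT="${MNT:-/lustre}"
[ -d "$MNT" ] || MNT="$(mktemp -d)"
df -h "$MNT"                                      # a. file system usage
dd if=/dev/zero of="$MNT/zero.dat" bs=4M count=2  # b. write 8 MiB of zeros
ls -lsah "$MNT/zero.dat"                          # c. list the new file
```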

If you have a problem mounting the file system, check the syslogs for errors.


Lustre Configuration Utilities

Once the Lustre file system is configured, it is ready for use. If additional configuration is necessary, several configuration utilities are available. For man pages and reference information, see these sections in the Lustre Operations Manual:

  • mkfs.lustre
  • tunefs.lustre
  • lctl
  • mount.lustre

The System Configuration Utilities chapter of the Lustre Operations Manual profiles utilities such as lustre_rmmod, e2scan, l_getgroups, llobdstat, llstat, lst, plot-llstat, routerstat, and ll_recover_lost_found_objs, as well as utilities to manage large clusters, perform application profiling, and test and debug Lustre.