Lustre Configuration Example
(Updated: Sep 2009)
This Lustre™ configuration example illustrates the steps needed to set up a simple Lustre installation comprising a combined MGS/MDT, an OST, and a client, where:
Variable | Setting | Variable | Setting |
---|---|---|---|
network type | TCP/IP | MGS node | 10.2.0.1@tcp0 |
block device | /dev/sdb | OSS 1 node | oss1 |
file system | temp | client node | client1 |
MDT mount point | /mnt/mdt | OST 1 | ost1 |
client mount point | /lustre | OST mount point | /mnt/ost1 |
1. Define the module options for Lustre networking (LNET) by adding this line to the /etc/modprobe.conf file:
options lnet networks=tcp
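Optionally, once the lnet module is loaded, the network identifiers (NIDs) configured by LNET can be listed with the lctl utility to confirm that this setting took effect (this assumes the Lustre utilities are installed on the node):
[root@mds /]# modprobe lnet
[root@mds /]# lctl network up
[root@mds /]# lctl list_nids
On the MGS/MDS node in this example, the output should include a NID such as 10.2.0.1@tcp.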
2. Create a combined MGS/MDT file system on the block device. On the MDS node, run:
[root@mds /]# mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb
This command generates this output:
Permanent disk data:
Target:     temp-MDTffff
Index:      unassigned
Lustre FS:  temp
Mount type: ldiskfs
Flags:      0x75
            (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mdt.group_upcall=/usr/sbin/l_getgroups

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
        target name  temp-MDTffff
        4k blocks    0
        options      -i 4096 -I 512 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-MDTffff -i 4096 -I 512 -q -O dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata
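Optionally, the permanent disk data written by mkfs.lustre (target name, flags, and parameters) can be reviewed later without modifying the device, for example:
[root@mds /]# tunefs.lustre --print /dev/sdb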
3. Mount the combined MGS/MDT file system on the block device. On the MDS node, run:
[root@mds /]# mount -t lustre /dev/sdb /mnt/mdt
This command generates this output:
Lustre: temp-MDT0000: new disk, initializing
Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_group_upcall()) temp-MDT0000: group upcall set to /usr/sbin/l_getgroups
Lustre: temp-MDT0000.mdt: set parameter group_upcall=/usr/sbin/l_getgroups
Lustre: Server temp-MDT0000 on device /dev/sdb has started
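Optionally, confirm that the MGS and MDT are running by listing the local Lustre devices on the MDS node:
[root@mds /]# lctl dl
Each configured device, including temp-MDT0000, should be reported in the UP state.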
4. Create the OST. On the OSS node, run:
[root@oss1 /]# mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev/sdb
The command generates this output:
Permanent disk data:
Target:     temp-OSTffff
Index:      unassigned
Lustre FS:  temp
Mount type: ldiskfs
Flags:      0x72
            (OST needs_index first_time update)
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=10.2.0.1@tcp

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
        target name  temp-OSTffff
        4k blocks    0
        options      -I 256 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-OSTffff -I 256 -q -O dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata
5. Mount the OST. On the OSS node, run:
[root@oss1 /]# mount -t lustre /dev/sdb /mnt/ost1
The command generates this output:
LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
Lustre: temp-OST0000: new disk, initializing
Lustre: Server temp-OST0000 on device /dev/sdb has started
Shortly afterwards, this output appears:
Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0
Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting orphans
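Optionally, confirm the server mount on the OSS node with the standard mount command:
[root@oss1 /]# mount -t lustre
This should list /dev/sdb mounted on /mnt/ost1 with type lustre.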
6. Mount the file system on the client. On the client node, run:
[root@client1 /]# mount -t lustre 10.2.0.1@tcp0:/temp /lustre
This command generates this output:
Lustre: Client temp-client has started
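Optionally, the per-target space usage can be checked from the client with the lfs utility, which reports the MDT and each OST individually (in contrast to the combined totals shown by the df command in the next step):
[root@client1 /]# lfs df -h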
7. Verify that the file system started and is working by running the UNIX commands df, dd and ls on the client node.
a. Run the df command:
- [root@client1 /] df -h
- This command generates output similar to:
- Filesystem Size Used Avail Use% Mounted on
- /dev/mapper/VolGroup00-LogVol00
- 7.2G 2.4G 4.5G 35% /
- /dev/sda1 99M 29M 65M 31% /boot
- tmpfs 62M 0 62M 0% /dev/shm
- 10.2.0.1@tcp0:/temp 30M 8.5M 20M 30% /lustre
b. Run the dd command:
- [root@client1 /] cd /lustre
- [root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2
- This command generates output similar to:
- 2+0 records in
- 2+0 records out
- 8388608 bytes (8.4 MB) copied, 0.159628 seconds, 52.6 MB/s
c. Run the ls command:
- [root@client1 /lustre] ls -lsah
- This command generates output similar to:
- total 8.0M
- 4.0K drwxr-xr-x 2 root root 4.0K Oct 16 15:27 .
- 8.0K drwxr-xr-x 25 root root 4.0K Oct 16 15:27 ..
- 8.0M -rw-r--r-- 1 root root 8.0M Oct 16 15:27 zero.dat
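Optionally, the layout of the newly created file can be displayed with lfs getstripe, which shows the OST object(s) holding its data:
[root@client1 /lustre] lfs getstripe /lustre/zero.dat
With the single OST in this example, the output should list one object on temp-OST0000 (obdidx 0).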