WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Lustre Configuration Example

From Obsolete Lustre Wiki
Revision as of 11:48, 29 September 2009

This Lustre configuration example illustrates the configuration steps for a simple Lustre installation comprising a combined MGS/MDT, an OST and a client, where:

<pre>
Variable            Setting     Variable       Setting
network type        TCP/IP      MGS node       10.2.0.1@tcp0
block device        /dev/sdb    OSS 1 node     oss1
file system         temp        client node    client1
MDT mount point     /mnt/mdt    OST 1          ost1
client mount point  /lustre
</pre>

1. Define the module options for Lustre networking (LNET) by adding this line to the /etc/modprobe.conf file:

options lnet networks=tcp
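
The line can be added idempotently from a script. The following is a sketch, not part of the original example: it writes to a stand-in path under /tmp so it can be tried safely, whereas on a real node CONF would point at /etc/modprobe.conf and the snippet would run as root.

```shell
# Sketch only: append the LNET option if it is not already present.
# CONF is a stand-in path for safe testing; on a real Lustre node it
# would be /etc/modprobe.conf.
CONF="${CONF:-/tmp/modprobe.conf.demo}"
touch "$CONF"
grep -qx 'options lnet networks=tcp' "$CONF" || \
    echo 'options lnet networks=tcp' >> "$CONF"
grep '^options lnet' "$CONF"
```

Because of the grep guard, running the snippet twice leaves only one copy of the line in place.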

2. Create a combined MGS/MDT file system on the block device. On the MDS node, run:

[root@mds /]# mkfs.lustre --fsname=temp --mgs --mdt /dev/sdb

This command generates this output:

<pre>
     Permanent disk data:
Target:       temp-MDTffff
Index:        unassigned
Lustre FS:    temp
Mount type:   ldiskfs
Flags:        0x75
     (MDT MGS needs_index first_time update )
Persistent mount opts: errors=remount-ro,iopen_nopriv,user_xattr
Parameters: mdt.group_upcall=/usr/sbin/l_getgroups

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
     target name   temp-MDTffff
     4k blocks     0
     options       -i 4096 -I 512 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-MDTffff -i 4096 -I 512 -q -O
dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata
</pre>
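
One detail worth noting in the options line above: `-i 4096` is the mke2fs bytes-per-inode ratio, set low here because an MDT stores only metadata and so needs a dense inode table. A back-of-the-envelope check of what that implies for the 16 MB demo device (a sketch for illustration; the real count comes out slightly lower because of filesystem overhead):

```shell
# Rough inode budget implied by -i 4096 on the 16 MB demo device:
# one inode per 4096 bytes of device space.
device_bytes=$((16 * 1024 * 1024))
bytes_per_inode=4096
echo $((device_bytes / bytes_per_inode))   # prints 4096
```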

3. Mount the combined MGS/MDT file system on the block device. On the MDS node, run:

[root@mds /]# mount -t lustre /dev/sdb /mnt/mdt

This command generates this output:

<pre>
Lustre: temp-MDT0000: new disk, initializing
Lustre: 3009:0:(lproc_mds.c:262:lprocfs_wr_group_upcall()) \
temp-MDT0000: group upcall set to /usr/sbin/l_getgroups
Lustre: temp-MDT0000.mdt: set parameter \
group_upcall=/usr/sbin/l_getgroups
Lustre: Server temp-MDT0000 on device /dev/sdb has started
</pre>

4. Create the OST. On the OSS node, run:

[root@oss1 /]# mkfs.lustre --ost --fsname=temp --mgsnode=10.2.0.1@tcp0 /dev/sdb

The command generates this output:

<pre>
     Permanent disk data:
Target:      temp-OSTffff
Index:       unassigned
Lustre FS:   temp
Mount type:  ldiskfs
Flags:       0x72
     (OST needs_index first_time update)
Persistent mount opts: errors=remount-ro,extents,mballoc
Parameters: mgsnode=10.2.0.1@tcp

checking for existing Lustre data: not found
device size = 16MB
2 6 18
formatting backing filesystem ldiskfs on /dev/sdb
     target name    temp-OSTffff
     4k blocks      0
     options        -I 256 -q -O dir_index,uninit_groups -F
mkfs_cmd = mkfs.ext2 -j -b 4096 -L temp-OSTffff -I 256 -q -O
dir_index,uninit_groups -F /dev/sdb
Writing CONFIGS/mountdata
</pre>

5. Mount the OST. On the OSS node, run:

[root@oss1 /]# mount -t lustre /dev/sdb /mnt/ost1

The command generates this output:

<pre>
LDISKFS-fs: file extents enabled
LDISKFS-fs: mballoc enabled
Lustre: temp-OST0000: new disk, initializing
Lustre: Server temp-OST0000 on device /dev/sdb has started
</pre>

Shortly afterwards, this output appears:

<pre>
Lustre: temp-OST0000: received MDS connection from 10.2.0.1@tcp0
Lustre: MDS temp-MDT0000: temp-OST0000_UUID now active, resetting orphans
</pre>

6. Create the client (mount the file system on the client). On the client node, run:

[root@client1 /] mount -t lustre 10.2.0.1@tcp0:/temp /lustre

This command generates this output:

<pre>
Lustre: Client temp-client has started
</pre>

7. Verify that the file system started and is working by running the UNIX commands ''df'', ''dd'' and ''ls'' on the client node.

a. Run the ''df'' command:

[root@client1 /] df -h

This command generates output similar to:

<pre>
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                      7.2G  2.4G  4.5G  35% /
/dev/sda1              99M   29M   65M  31% /boot
tmpfs                  62M     0   62M   0% /dev/shm
10.2.0.1@tcp0:/temp    30M  8.5M   20M  30% /lustre
</pre>
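
In a scripted health check it is handy to pull just the Lustre mount out of the df output. A small sketch (the sample text is the output shown above; on a live client you would pipe `df -h` itself into the awk filter):

```shell
# Sketch: filter df output down to the filesystem mounted on /lustre,
# printing its device (the MGS NID and fsname) and usage percentage.
df_sample='Filesystem            Size  Used Avail Use% Mounted on
10.2.0.1@tcp0:/temp    30M  8.5M   20M  30% /lustre'
echo "$df_sample" | awk '$NF == "/lustre" { print $1, $5 }'
# prints: 10.2.0.1@tcp0:/temp 30%
```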

b. Run the ''dd'' command:

[root@client1 /] cd /lustre
[root@client1 /lustre] dd if=/dev/zero of=/lustre/zero.dat bs=4M count=2

This command generates output similar to:

<pre>
2+0 records in
2+0 records out
8388608 bytes (8.4 MB) copied, 0.159628 seconds, 52.6 MB/s
</pre>
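
The throughput figure dd prints is simply bytes divided by elapsed time, expressed in decimal megabytes. Reproducing it from the numbers above (a sketch for illustration only):

```shell
# dd reported 8388608 bytes in 0.159628 s; it uses decimal MB (10^6 bytes),
# which is also why 8388608 bytes is shown as "8.4 MB".
bytes=8388608
seconds=0.159628
awk -v b="$bytes" -v s="$seconds" 'BEGIN { printf "%.1f MB/s\n", b / s / 1000000 }'
# prints: 52.6 MB/s
```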

c. Run the ''ls'' command:

[root@client1 /lustre] ls -lsah

This command generates output similar to:

<pre>
total 8.0M
4.0K drwxr-xr-x  2 root root 4.0K Oct 16 15:27 .
8.0K drwxr-xr-x 25 root root 4.0K Oct 16 15:27 ..
8.0M -rw-r--r--  1 root root 8.0M Oct 16 15:27 zero.dat
</pre>
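
The size check in step 7 can also be scripted: zero.dat should be exactly count × blocksize = 2 × 4 MiB = 8388608 bytes. A sketch using a stand-in path under /tmp (on the real client the file would be /lustre/zero.dat):

```shell
# Sketch: write a file the same way as in step 7b and confirm its size.
out=/tmp/zero.dat.demo      # stand-in for /lustre/zero.dat
dd if=/dev/zero of="$out" bs=4M count=2 2>/dev/null
size=$(wc -c < "$out")
echo "$size"                # prints 8388608
```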