WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Latest revision as of 12:19, 22 February 2010

(Updated: Dec 2009)

With Xen virtualization, a thin software layer known as the Xen hypervisor is inserted between the server's hardware and the operating system. This provides an abstraction layer that allows each physical server to run one or more "virtual servers", effectively decoupling the operating system and its applications from the underlying physical server.

You can deploy virtualization using one of two options: full virtualization or paravirtualization.

Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application (the guest OS or application is not aware of the virtualized environment and runs normally).

Paravirtualization requires modification of the guest operating systems that run on the virtual machines (these guest operating systems are aware that they are running on a virtual machine). The result is near-native performance.

You can deploy both paravirtualization and full virtualization across your virtualization infrastructure.

Installing a Xen Host and Creating a Xen Guest

To install a Xen host and create a Xen guest for use with Lustre™, follow the procedure below.

1. Use virt-install to provision the operating system as shown in the example below:

virt-install --paravirt --name=$NAME --ram=$MEM --vcpus=$NCPUS --file=$IMAGE \
  --file-size=$IMAGESIZE --nographics --noautoconsole \
  --location=nfs:10.8.0.175:/rhel5/cd \
  --extra-args="ks=nfs:10.8.0.175:/home/host1/xen-cfg/rhel5.cfg"

For more information about the virt-install command line tool, see the virt-install Linux man page.
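The variables in the command above are placeholders. A minimal sketch of illustrative values follows; all names, sizes, and paths here are assumptions to adapt for your site:

```shell
# Illustrative values for the virt-install placeholders (all assumptions):
NAME=xenguest1                              # libvirt domain name for the new guest
MEM=1024                                    # guest RAM in MB
NCPUS=2                                     # number of virtual CPUs
IMAGE=/var/lib/xen/images/xenguest1.img     # backing disk image file (example path)
IMAGESIZE=10                                # disk image size in GB
```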

2. Create a config image for the Xen guest. The example below for RHEL5 shows a configuration image in the form of a kickstart file generated by the Anaconda installation program. The kickstart image can be accessed from the host using NFS or HTTP.

# Kickstart file automatically generated by anaconda.

install
nfs --server 10.8.0.175  --dir /rhel5/cd
key --skip
lang en_US.UTF-8
keyboard us
text
network --device eth0 --bootproto dhcp
#rootpw --iscrypted $1$QQO.JRIi$kRVI0ntI.EWCvF9DpdNNp/
rootpw --iscrypted $1$wJCInuZ4$C.J3yKu/.a2Ce6cfo1kpV.
# firewall --enabled --port=22:tcp
firewall --disabled
authconfig --enableshadow --enablemd5
selinux --permissive
timezone --utc America/Los_Angeles
bootloader --location=mbr --driveorder=xvda
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
clearpart --linux --initlabel --drives=xvda
part /boot --fstype ext3 --size=100 --ondisk=xvda
part /     --fstype ext3 --size=0 --grow --ondisk=xvda
part swap   --size=1000
reboot

%packages --resolvedeps
@core
@base
@development-libs
@system-tools
@legacy-network-server
@legacy-software-development
@admin-tools
@development-tools
audit
kexec-tools
device-mapper-multipath
imake
-sysreport


%post
/usr/sbin/useradd -u 500 lustre
/usr/sbin/useradd -u 501 user1
/usr/sbin/useradd -u 60000 quota_usr
/usr/sbin/useradd -u 60001 quota_2usr
mkdir /data
echo "10.8.0.75:/rhel5/cd /data nfs defaults 0 0" >> /etc/fstab
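For the guest installer to reach the kickstart file and installation tree over NFS, the host must export the directories named in the virt-install command. A sketch of the corresponding /etc/exports entries (the client network range and options are assumptions):

```
/rhel5/cd            10.8.0.0/24(ro,no_root_squash)
/home/host1/xen-cfg  10.8.0.0/24(ro,no_root_squash)
```

After editing /etc/exports, re-export with exportfs -ra.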


3. Modify the kernel .config file to add the following entries for Xen:

CONFIG_X86_64_XEN=y
CONFIG_XEN_PCIDEV_FRONTEND=y
CONFIG_NETXEN_NIC=m
CONFIG_XEN=y
CONFIG_XEN_INTERFACE_VERSION=0x00030203
#
# XEN
#
CONFIG_XEN_PRIVILEGED_GUEST=y
# CONFIG_XEN_UNPRIVILEGED_GUEST is not set
CONFIG_XEN_PRIVCMD=y
CONFIG_XEN_XENBUS_DEV=y
CONFIG_XEN_BACKEND=y
CONFIG_XEN_BLKDEV_BACKEND=m
CONFIG_XEN_BLKDEV_TAP=m
CONFIG_XEN_NETDEV_BACKEND=m
# CONFIG_XEN_NETDEV_PIPELINED_TRANSMITTER is not set
CONFIG_XEN_NETDEV_LOOPBACK=m
CONFIG_XEN_PCIDEV_BACKEND=m
CONFIG_XEN_PCIDEV_BACKEND_VPCI=y
# CONFIG_XEN_PCIDEV_BACKEND_PASS is not set
# CONFIG_XEN_PCIDEV_BACKEND_SLOT is not set
# CONFIG_XEN_PCIDEV_BE_DEBUG is not set
# CONFIG_XEN_TPMDEV_BACKEND is not set
CONFIG_XEN_BLKDEV_FRONTEND=y
CONFIG_XEN_NETDEV_FRONTEND=y
CONFIG_XEN_FRAMEBUFFER=y
CONFIG_XEN_KEYBOARD=y
CONFIG_XEN_SCRUB_PAGES=y
# CONFIG_XEN_DISABLE_SERIAL is not set
CONFIG_XEN_SYSFS=y
CONFIG_XEN_COMPAT_030002_AND_LATER=y
# CONFIG_XEN_COMPAT_LATEST_ONLY is not set
CONFIG_XEN_COMPAT_030002=y
CONFIG_HAVE_ARCH_ALLOC_SKB=y
CONFIG_HAVE_ARCH_DEV_ALLOC_SKB=y
CONFIG_HAVE_IRQ_IGNORE_UNHANDLED=y
CONFIG_NO_IDLE_HZ=y
CONFIG_XEN_UTIL=y
CONFIG_XEN_BALLOON=y
CONFIG_XEN_DEVMEM=y
CONFIG_XEN_SKBUFF=y
CONFIG_XEN_REBOOT=y
CONFIG_XEN_SMPBOOT=y
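After editing, you can confirm that the required options made it into the kernel .config. The check_xen_config helper below is a sketch, not part of the original procedure; it reports any option that is not set to y or m:

```shell
# Verify that a kernel .config enables a list of required Xen options.
# Prints any option that is missing or not set to y/m; returns non-zero if so.
check_xen_config() {
    config_file=$1; shift
    missing=0
    for opt in "$@"; do
        if ! grep -q "^${opt}=[ym]" "$config_file"; then
            echo "missing: $opt"
            missing=1
        fi
    done
    return $missing
}

# Example (uncomment to run against your kernel tree's .config):
# check_xen_config .config CONFIG_XEN CONFIG_XEN_BLKDEV_FRONTEND CONFIG_XEN_NETDEV_FRONTEND
```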

4. Build a Lustre-patched Xen guest image by following the procedure Building Lustre Code.

5. In the initrd image, check that the xenblk module has been loaded to enable accessing block devices.

6. To start up the guest OS, use the xm create command:

xm create XenGuest1.cfg -c

The -c flag instructs Xen to attach a console to the guest system to display the output as the system boots. For more information about xm, refer to the xm Linux man page.

Note: If the network is unavailable after booting the guest, the xennet module may not have been loaded. Enter:

depmod -a
modprobe xennet

Cross-check the loaded modules with lsmod.

Troubleshooting and Debugging

Some troubleshooting tips are provided below.

Mounting Xen disk images

If you have problems mounting Xen disk images, use one of the two procedures below:

  • Handling Linux-based virtual hosts without an LVM
1. Find out which loop devices are already in use, attach the disk image to a free loop device, and map its partitions by entering:
losetup -a
losetup /dev/loop1 /data/images/rhel5.img
fdisk -l /dev/loop1
kpartx -a /dev/loop1
Because the image file uses the loop1 device, the created device files will have the names /dev/mapper/loop1p1, /dev/mapper/loop1p2, and so on.
2. Use these files to mount the file system on which the root of the virtualized operating system is installed.
3. When you have made all the necessary modifications to this file system, unmount everything properly by entering the following commands:
umount /mnt
kpartx -d /dev/loop1
losetup -d /dev/loop1
  • Handling logical volumes in Linux-based virtual hosts
1. Find out which loop devices are already in use, attach the disk image to a free loop device, and map its partitions by entering:
losetup -a
losetup /dev/loop1 /data/images/rhel5.img
fdisk -l /dev/loop1
kpartx -a /dev/loop1
2. Make sure that the partition is known by the LVM subsystem as a physical device. Knowing that the partition is of type 8e is not enough; you need to tell the LVM subsystem that it is available as a physical device that LVM can use. Use the following command to do this:
pvscan
3. You will be told that an LVM volume group has been found within the physical device. Initialize this volume group manually by using this command:
vgscan
4. To complete the reconfiguration of the LVM structure, initialize the logical volumes in the volume group manually using this command:
lvscan
5. Although you now have access to the logical volumes again, you'll see that all of the logical volumes are inactive. You need to fix this before the logical volumes can be mounted. To do this, change the status of the volume group by using the vgchange command. This example command changes the status of all volumes in the volume group vm1vg to active:
vgchange -a y vm1vg
The LVM logical volumes are now active and ready to be mounted.
6. Mount the logical volume. For example, to mount the logical volume with the name /dev/vm1vg/root, use the following command:
mount /dev/vm1vg/root /mnt
At this point you have full access to all of the files in the logical volume.
7. You can now make all of the changes that you need to make.
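When you are finished, tear down the mapping in reverse order, mirroring the non-LVM cleanup shown earlier. A sketch, run as root, with the volume group and loop device names taken from the example above:

```
umount /mnt                 # unmount the logical volume
vgchange -a n vm1vg         # deactivate the volume group
kpartx -d /dev/loop1        # remove the partition mappings
losetup -d /dev/loop1       # detach the loop device
```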

Enabling console logging for guests

To enable console logging for guests, set the following variables to 'yes' in /etc/init.d/xend:

XENCONSOLED_LOG_HYPERVISOR=yes
XENCONSOLED_LOG_GUESTS=yes
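With guest logging enabled, xenconsoled typically writes per-guest console logs under /var/log/xen/console; the exact path and file naming can vary by distribution, so the example below is an assumption:

```
# Follow the console log for a guest named XenGuest1 (path is an assumption):
tail -f /var/log/xen/console/guest-XenGuest1.log
```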
