Architecture - ZFS for Lustre

Note: The content on this page reflects the state of design of a Lustre feature at a particular point in time and may contain outdated information.

Background

Purpose of this page

The purpose of this page is to document the architecture and requirements for Lustre servers using the DMU. There are two parts:

  1. how the DMU should be used as a storage backend for the Lustre servers, and
  2. what features should be added to ZFS, because Lustre will require them from its server disk file systems as more storage management features are added to Lustre.

This page documents how such features might be added to ZFS, based on discussions with Bill Moore from Sun and internal discussions in the Lustre Group.

Using the DMU in user space

The ZFS DMU can run in user space and is used extensively by the ztest program. The Linux ZFS FUSE project has also used it. Bill Moore (Sun ZFS Architect) reports that in principle there are no obstacles to using the DMU in user space, but that some performance tuning is to be expected. All storage management facilities offered by the DMU, such as pool management, remain available.

The DMU offers benefits for Lustre but it is not a perfect marriage. The approach below is based on:

  1. low risk: use methods we know
  2. time to market: stick with methods we use under ldiskfs
  3. low controversy: start with something that ZFS can deliver without modifications
  4. few initial enhancements: there are a few ZFS enhancements that would be highly beneficial

Lustre Servers using the DMU

File system formats

The Lustre servers will interface with the DMU in such a way that the disk images created can be mounted as ZFS file systems, at least for the initial versions. These DMU-created ZFS file systems will be populated on the OSS identically to what is currently done in the user-level OSS layered on ZFS.

While this is not strictly necessary, it preserves transparency about where the data is stored.

ZFS has an exceptionally rich "fork" feature (similar to extended attributes), and this can be used to build, for example, object indexes in a way consistent with ZFS disk file system images.

User space

The DMU will be used in user space. It is important that the DMU is treated as a library (libzpool) and that, for license reasons, a strict separation is maintained between code changes to the DMU and code changes to Lustre.

Note that volume management and the associated utilities already work in user space as well. libzpool is perhaps a deceptive name, as it includes both DMU and pool features.
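
As a rough illustration of what a user-space DMU consumer might look like when linked against libzpool, consider the sketch below. The entry points shown (kernel_init, dmu_objset_hold, the dmu_tx calls, dmu_write) exist in the OpenSolaris ZFS code, but their exact signatures vary between releases, so treat the details as assumptions to be checked against the headers in use.

  /*
   * Sketch only: a user-space DMU consumer linked against libzpool.
   * The calls used here exist in the OpenSolaris ZFS code base, but
   * their signatures vary by release; verify against the headers you
   * actually build with.
   */
  #include <sys/zfs_context.h>   /* libzpool user-space compatibility layer */
  #include <sys/spa.h>
  #include <sys/dmu.h>
  #include <sys/dmu_tx.h>

  int
  write_one_block(const char *dsname, uint64_t object, const void *buf, int len)
  {
      objset_t *os;
      dmu_tx_t *tx;
      int error;

      kernel_init(FREAD | FWRITE);            /* bring up the user-space SPA/DMU */

      error = dmu_objset_hold(dsname, FTAG, &os);
      if (error)
          goto out;

      tx = dmu_tx_create(os);
      dmu_tx_hold_write(tx, object, 0, len);  /* declare the write to the tx */
      error = dmu_tx_assign(tx, TXG_WAIT);
      if (error) {
          dmu_tx_abort(tx);
      } else {
          dmu_write(os, object, 0, len, buf, tx);
          dmu_tx_commit(tx);
      }

      dmu_objset_rele(os, FTAG);
  out:
      kernel_fini();
      return (error);
  }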

OSS

  • Object index: we stick with a directory hierarchy under O. Because sequences will be important, the hierarchy will be /O/<seq>/<objid>. The last component of the pathname points to a ZFS regular file. Note that the current ZFS port also uses this layout.
  • Reference to MDS fid: each object requires a reference to the MDS fid to which it belongs. For this we need a relatively small extended attribute. We propose to use the 56 empty bytes in the inode in due course, but will initially store it as an extended attribute (see the sketch after this list). Should this contain a checksum to validate that the fid is valid?
  • Size-on-MDS, HSM: these also require extended attributes on the objects. The data will likely not fit into 56 bytes, and standard EAs are probably slow; nevertheless, we will use them for now.
  • Larger blocks: we believe that, for HPC applications, blocks larger than 128K may be desirable for performance reasons. Benchmarking is in progress.
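
As a concrete illustration of the object index naming and the proposed fid back-reference, a hypothetical sketch follows. The struct layout, field names, and the use of a CRC are assumptions made for this example, not an existing Lustre or ZFS disk format.

  /*
   * Illustration only: the /O/<seq>/<objid> object-index naming and a
   * hypothetical 56-byte record holding the back-reference to the MDS
   * fid plus a checksum guarding against stale or corrupt references.
   * Field names and sizes are assumptions, not an existing disk format.
   */
  #include <stdint.h>
  #include <stdio.h>

  struct ost_fid_ea {
      uint64_t f_seq;         /* MDS fid sequence   */
      uint32_t f_oid;         /* MDS fid object id  */
      uint32_t f_ver;         /* MDS fid version    */
      uint32_t f_crc;         /* checksum over the fields above */
      uint8_t  f_pad[36];     /* reserve the rest of the 56 bytes */
  };

  /* Build the object pathname under the O directory. */
  static void
  ost_object_path(char *buf, size_t len, uint64_t seq, uint64_t objid)
  {
      (void) snprintf(buf, len, "/O/%llu/%llu",
          (unsigned long long)seq, (unsigned long long)objid);
  }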

MDS

  • Fid-to-object mapping: we propose to use ZAP EAs associated with the /O directory for this purpose, perhaps one ZAP per sequence, as iteration is important. For robustness, each entry in this ZAP should probably include a checksum over some aspects of the object it references, so that false entries can be detected (see the sketch after this list).
  • Readdir: readdir must return the fid to the client. In the first implementation the fid will be put in an EA of the znode, but this leads to every znode being read during readdir. This will be slow, but it may well be by far the most common case of readdir use anyway. If this becomes an obstacle, two things can be done: one is to use a secondary name-to-fid ZAP in a fork of the directory; the other is to add the fid to the dirent structure.
  • Stripe information: this needs to go into an extended attribute and is expected to be quite slow. Using larger inodes with EAs embedded in the inode seems the right way to go, but these are two changes to ZFS, which can possibly be made before the first release.
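
A sketch of such a per-sequence ZAP mapping is shown below. zap_add() and zap_lookup() are existing DMU interfaces, but their exact signatures should be checked against the ZFS release in use; the "seq:oid" key format and the helper names are assumptions made for this example.

  /*
   * Sketch only: one ZAP per fid sequence, mapping "seq:oid" keys to the
   * dnode number of the object.  The key format is an assumption made
   * for this example.
   */
  #include <stdio.h>
  #include <sys/dmu.h>
  #include <sys/zap.h>

  static void
  fid_key(char *key, size_t len, uint64_t seq, uint64_t oid)
  {
      (void) snprintf(key, len, "%llx:%llx",
          (unsigned long long)seq, (unsigned long long)oid);
  }

  /* Insert a fid -> object-number entry; caller supplies an assigned tx. */
  static int
  fid_map_add(objset_t *os, uint64_t zapobj, uint64_t seq, uint64_t oid,
      uint64_t dnode_num, dmu_tx_t *tx)
  {
      char key[40];

      fid_key(key, sizeof (key), seq, oid);
      return (zap_add(os, zapobj, key, sizeof (uint64_t), 1, &dnode_num, tx));
  }

  /* Look up the object number for a fid. */
  static int
  fid_map_lookup(objset_t *os, uint64_t zapobj, uint64_t seq, uint64_t oid,
      uint64_t *dnode_num)
  {
      char key[40];

      fid_key(key, sizeof (key), seq, oid);
      return (zap_lookup(os, zapobj, key, sizeof (uint64_t), 1, dnode_num));
  }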

First Steps

  1. Get PIOS working with the DMU
  2. Get excellent performance out of the user space DMU
  3. Design and implement a DMU OSS backend
  4. Design and implement a DMU MDS backend
  5. Enhance ZFS with features required by Lustre

ZFS Features Required by Lustre

Larger dnodes with embedded EAs

To avoid the heavy indirection to other data structures currently seen with ZFS xattrs, larger dnodes in ZFS that can contain small EAs (large enough for most Lustre EAs) are very attractive.

See also: ZFS large dnodes and ZFS_TinyZAP
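
Purely as an illustration of the idea, a larger dnode might carve inline EA space out of an enlarged bonus area, roughly as sketched below; all sizes and names are assumptions, not the actual large-dnode design.

  /*
   * Hypothetical layout only: how a larger (here 1 KB) dnode could keep
   * small Lustre EAs inline instead of spilling to a separate xattr
   * object.  All names and sizes are illustrative assumptions.
   */
  #include <stdint.h>

  #define LARGE_DNODE_SIZE   1024    /* assumed; classic dnodes are 512 bytes */
  #define DNODE_FIXED_BYTES  192     /* illustrative fixed dnode header size  */
  #define INLINE_EA_SPACE    (LARGE_DNODE_SIZE - DNODE_FIXED_BYTES)

  struct inline_ea {                 /* one small EA packed into the dnode */
      uint8_t  ea_type;              /* e.g. fid back-reference, stripe info */
      uint16_t ea_len;               /* payload length in bytes */
      uint8_t  ea_data[];            /* payload follows immediately */
  };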

Larger block size

For HPC applications, a 128K block size is probably considerably too small. We will need to implement larger block sizes.

Read / Write priorities

ZFS has a simple table that controls read/write priorities. Given that writes mostly go to cache and are flushed by background daemons, while reads block applications, reads are often given higher priority, with limits to prevent starving writes. Henry Newman raised concerns that this policy is not necessarily ideal for the HPCS file system. Bill Moore explained that it is simple to change the policy through settings in a table.

Data on Separate Devices

Past parallel file systems and current efforts with MAID arrays have found significant advantages in file systems that store file data on block devices separate from those holding metadata. Some users of Lustre already place the journal on a separate device.

In ZFS this is relatively easy to arrange by introducing new classes of vdevs. The block allocator would choose a metadata-class vdev when allocating metadata and a file-data-class vdev when allocating file data. See Jeff Bonwick's blog entry about block allocation.
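
The following hypothetical sketch illustrates the class selection; the enum, struct, and function names are invented for this example and are not taken from the ZFS allocator.

  /*
   * Hypothetical illustration of per-class vdev selection: the allocator
   * would pick a metadata-class vdev for metadata blocks and a file-data
   * class vdev for file data.  All names here are invented for the example.
   */
  #include <stddef.h>

  typedef enum {
      CLASS_METADATA,           /* small, seek-sensitive blocks */
      CLASS_FILE_DATA           /* large streaming blocks       */
  } alloc_class_t;

  struct example_vdev {
      const char    *v_name;
      alloc_class_t  v_class;   /* class this vdev was added with */
  };

  /* Return the first vdev of the requested class, or NULL if none exists. */
  static struct example_vdev *
  pick_vdev(struct example_vdev *vdevs, int nvdevs, int is_metadata)
  {
      alloc_class_t want = is_metadata ? CLASS_METADATA : CLASS_FILE_DATA;
      int i;

      for (i = 0; i < nvdevs; i++)
          if (vdevs[i].v_class == want)
              return (&vdevs[i]);
      return (NULL);            /* fall back to the normal class in practice */
  }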

Migration of Allocation Data

When Lustre's server network striping (SNS) feature is introduced, write calls that overwrite existing data will have to save the old contents before overwriting them, for recovery purposes. The SNS architecture proposes to record the allocation data of the extent in a log file, from which it will be freed when the network stripe commits globally.

ZFS has a block pointer (BP) list structure that might be very useful for holding such allocation data. It comes with appropriate APIs to free such blocks. The BP list is held in a DMU object.
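
As a rough sketch of the idea, the allocation data could be appended as records to a per-target log object with ordinary DMU writes, with the BP list as the candidate structure to replace this in a real implementation. The record layout and helper below are assumptions for illustration only.

  /*
   * Sketch only: append a record describing an about-to-be-overwritten
   * extent to an undo log held in a DMU object.  The record layout and
   * the append-offset bookkeeping are assumptions; a real implementation
   * could use the ZFS BP list structure instead.
   */
  #include <sys/dmu.h>

  struct sns_undo_rec {
      uint64_t ur_object;       /* object being overwritten        */
      uint64_t ur_offset;       /* start of the overwritten range  */
      uint64_t ur_length;       /* length of the range             */
      uint64_t ur_txg;          /* txg in which the overwrite ran  */
  };

  /*
   * Caller declares the log write with dmu_tx_hold_write() and assigns
   * the tx before calling; the record is discarded once the network
   * stripe commits globally.
   */
  static void
  sns_log_overwrite(objset_t *os, uint64_t log_obj, uint64_t log_off,
      const struct sns_undo_rec *rec, dmu_tx_t *tx)
  {
      dmu_write(os, log_obj, log_off, sizeof (*rec), rec, tx);
  }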


ZFS extended attributes

ZFS has an extended attribute model that is very general and supports large extended attributes. See http://www.scit.wlv.ac.uk/cgi-bin/mansec?5+fsattr

One issue is that the ZFS xattr model provides no protection for xattrs stored on a file: with enough effort, a user would be able to corrupt the Lustre EA data, even if it is owned by root.
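
To make the model concrete, the Solaris-specific sketch below opens a named attribute (fork) of a file through the ordinary file API via attropen(); the attribute name used is only an example and not one Lustre actually creates.

  /*
   * Solaris-specific sketch: opening a named attribute ("fork") of a file
   * through the ordinary file API.  "lustre.fid" is an example attribute
   * name, not an attribute Lustre actually creates.
   */
  #include <sys/types.h>
  #include <sys/stat.h>
  #include <fcntl.h>
  #include <unistd.h>
  #include <stdio.h>

  int
  main(int argc, char **argv)
  {
      int fd;

      if (argc != 2) {
          (void) fprintf(stderr, "usage: %s <file>\n", argv[0]);
          return (1);
      }

      /*
       * attropen() opens an attribute file in the hidden attribute
       * directory of <file>; see fsattr(5).
       */
      fd = attropen(argv[1], "lustre.fid", O_RDWR | O_CREAT, 0644);
      if (fd < 0) {
          perror("attropen");
          return (1);
      }

      (void) write(fd, "example", 7);
      (void) close(fd);
      return (0);
  }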

Parity Declustering

Simple parity declustering patterns should be supported.