Architecture Descriptions

The architecture descriptions listed below document the architecture and design of Lustre and are intended to help users better understand the conceptual framework of the Lustre file system.

Note: These documents reflect the state of design of a Lustre feature at a particular point in time. They may contain information that is incomplete or obsolete and may not reflect the current architecture, features, and functionality of Lustre.

Adaptive Timeouts - Use Cases

Backup (file system backup)

Caching OSS (caching on object storage servers)

Changelogs (per-server logs of data or metadata changes)

Changelogs 1.6 (used to facilitate efficient replication of large Lustre 1.6 file systems)

Client Cleanup (use cases, business drivers, models to consider, and implementation constraints)

Clustered Metadata (clustered metadata server capability)

Commit on Share (better recoverability in an environment where clients miss the reconnect window)

CROW (CReate On Write, which optimizes create performance by deferring OSS object creation)

CTDB with Lustre (a clustered implementation of the TDB database that, with Lustre, provides a solution for Windows pCIFS)

Cuts (technique for recovering file system metadata stored on file server clusters)

DMU OSD (an implementation of the Object Storage Device API for a Data Management Unit)

DMU Zerocopy

End-to-end Checksumming (Lustre network checksumming)

Epochs (used to merge distributed data and metadata updates in a redundant cluster configuration)

External File Locking (file range lock and whole-file lock capabilities)

FIDs on OST (file identifiers used to identify objects on an object storage target)

Fileset (an efficient representation of a group of file identifiers (FIDs))

Flash Cache (very fast read-only flash storage)

Free Space Management (managing free space for stripe allocation)

GNS (global namespace for a distributed file system)

HSM (hierarchical storage management)

HSM and Cache (reuse of components by Lustre features that involve migration of file system objects)

HSM Migration (use cases and high-level architecture for migrating files between Lustre and an HSM system)

Interoperability fids zfs (client, server, network, and storage interoperability during migration to clusters based on file identifiers and the ZFS file system)

Interoperability 1.6 1.8 2.0 (interoperability definitions and QAS summary)

IO system

Libcfs

Llog over OSD

LRE Images

Lustre Logging API

MDS striping format

MDS-on-DMU

Metadata API

Migration (1)

Migration (2)

MPI IO and NetCDF

MPI LND

Multiple Interfaces For LNET

Network Request Scheduler

New Metadata API

Open by fid

OSS-on-DMU

PAG

Pools of targets

Profiling Tools for IO

Proxy Cache

Punch and Extent Migration

Punch and Extent Migration Requirements

Recovery Failures

Request Redirection

Scalable Pinger

Security

Server Network Striping

Simple Space Balance Migration

Simplified Interoperation

Space Manager

Sub Tree Locks

User Level Access

User Level OSS

Userspace Servers

Version Based Recovery

Wide Striping

Wire Level Protocol

Write Back Cache

ZFS for Lustre

ZFS large dnodes

ZFS TinyZAP