WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.
Architecture Descriptions
Revision as of 12:35, 21 January 2010
The architecture descriptions listed below provide information about Lustre architecture and design and are intended to help users better understand the conceptual framework of the Lustre file system.
Note: These documents reflect the state of design of a Lustre feature at a particular point in time. They may contain information that is incomplete or obsolete and may not reflect the current architecture, features, and functionality of Lustre.
Backup (file system backup)
Caching OSS (caching on object storage servers)
Changelogs (per-server logs of data or metadata changes)
Changelogs 1.6 (used to facilitate efficient replication of large Lustre 1.6 filesystems)
Client Cleanup (use cases, business drivers, models to consider, implementation constraints)
Clustered Metadata (clustered metadata server capability)
Commit on Share (better recoverability in an environment where clients miss the reconnect window)
CROW (CReate On Write optimizes create performance by deferring OSS object creation)
CTDB with Lustre (a clustered implementation of the TDB database with Lustre that provides a solution for Windows pCIFS)
Cuts (technique for recovering file system metadata stored on file server clusters)
DMU OSD (an implementation of the Object Storage Device API for a Data Management Unit)
End-to-End Checksumming (Lustre network checksumming)
Epochs (used to merge distributed data and meta-data updates in a redundant cluster configuration)
External File Locking (file range lock and whole-file lock capabilities)
Punch and Extent Migration Requirements