Architecture - HSM and Cache

Note: The content on this page reflects the state of design of a Lustre feature at a particular point in time and may contain outdated information.

Deployments

Multiple Lustre features have significant overlap because they involve migration of file system objects:

  • HSM
  • Space rebalancing
  • Data migration
  • Caches for Lustre proxy services
  • Server Network Striping rebuild
  • 3rd party IO

[Figure: HSM deployment diagram (HSM-deploy.jpg)]

The architecture under consideration targets reuse of components across these features.

Software Subsystem Decomposition

Migration

Migration enables an initiator to request that file data be read from one location and stored in another.

  1. 3rd Party IO - A node requests, through a Lustre client, that data be read or written through a 3rd party transport
    1. the node gives {R,W}, offset, length, a transport id, and transport data to a Lustre client
    2. the Lustre client decodes which ranges of the file reside on which OSS nodes and forwards the corresponding range descriptors, both for the file and for the objects, to those OSS nodes
    3. the OSS nodes prepare pages for the IO corresponding to the object ranges that need to be read or written
    4. the OSS calls 3pio_init(transport id, object range_desc, file_range_desc, transport_data, &3pio_fini, cb_data); a sketch of this interface follows the list. There are rules
    5. the 3rd party transport
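
The interface named in step 4 is not specified here, so the following is a minimal C sketch of what the hand-off from an OSS to a 3rd party transport might look like. Everything in it is an assumption made for illustration: the range-descriptor struct, the completion-callback typedef, and the function name itself (the wiki's "3pio_init" is not a legal C identifier, so "tpio_init" is used instead).

    /*
     * Hypothetical sketch of the step-4 hand-off; no Lustre structures
     * or interfaces are reproduced here.
     */
    #include <stdint.h>
    #include <stdio.h>

    /* Byte range within a file or a backend object (assumed layout). */
    struct tpio_range {
            uint64_t offset;
            uint64_t length;
    };

    /* "3pio_fini" in the text: invoked by the transport when the
     * transfer over the prepared pages has completed. */
    typedef void (*tpio_fini_t)(void *cb_data, int rc);

    /* Step 4: the OSS registers the prepared object/file ranges with
     * the transport identified by transport_id.  transport_data is
     * opaque to the OSS and is interpreted by the transport itself. */
    static int tpio_init(uint32_t transport_id,
                         const struct tpio_range *obj_range,
                         const struct tpio_range *file_range,
                         void *transport_data,
                         tpio_fini_t fini, void *cb_data)
    {
            (void)transport_data;
            printf("transport %u: object [%llu, +%llu) <-> file [%llu, +%llu)\n",
                   (unsigned)transport_id,
                   (unsigned long long)obj_range->offset,
                   (unsigned long long)obj_range->length,
                   (unsigned long long)file_range->offset,
                   (unsigned long long)file_range->length);
            /* A real transport would start the transfer and complete
             * asynchronously; this stub completes immediately. */
            fini(cb_data, 0);
            return 0;
    }

    static void done(void *cb_data, int rc)
    {
            printf("transfer '%s' finished, rc = %d\n",
                   (const char *)cb_data, rc);
    }

    int main(void)
    {
            struct tpio_range obj  = { .offset = 0,       .length = 1048576 };
            struct tpio_range file = { .offset = 1048576, .length = 1048576 };

            /* Hand a 1 MB object range to hypothetical transport 7. */
            return tpio_init(7, &obj, &file, NULL, done, "demo");
    }

The callback-based completion mirrors the &3pio_fini argument in the text: since a third party drives the transfer, the OSS cannot block on it, so completion must be delivered asynchronously.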