Architecture - HSM Migration

Note: The content on this page reflects the state of design of a Lustre feature at a particular point in time and may contain outdated information.

Purpose

This page describes use cases and a high-level architecture for migrating files between Lustre and an HSM system.

Definitions

Trigger
A process or event in the file system which causes a migration to take place (or be denied).
Coordinator
A service coordinating migration of data.
Agent
A service used by coordinators to move data or cancel such movement.
Mover
The userspace component of the agent which copies the file between Lustre and the HSM storage.
Copy tool
HSM-specific component of the mover. (May be the entire mover.)
I/O request
This term covers read requests, write requests, and metadata operations such as truncate or unlink.
Resident
A file whose working copy is in Lustre.
Release
A released file's data has been removed by Lustre after being copied to the HSM. The MDT retains metadata info for the file.
Archive
An archived file's data resides in the HSM. File data may or may not also reside in Lustre. The MDT retains metadata info for the file.
Restore
Copy a file from the HSM back into Lustre to make an Archived file Resident.
Prestage
An explicit call (from a user or the policy engine) to Restore a Released file.

Use cases

Summary

id                    | quality attribute        | summary
Restore               | availability             | If a file is accessed from the primary storage system (Lustre) but resides in the backend storage system (HSM), it can be relocated to the primary storage system and made available.
Archive               | availability, usability  | The system can copy files from the primary to the backend storage system.
access-during-archive | performance, usability   | When a migrating file is accessed, the migration may be aborted.
component-failure     | availability             | If a migration component fails, the migration is resumed or aborted depending on the migration state and the failing component.
unlink                | availability             | When a Lustre object is deleted, the MDTs or OSTs request the corresponding removal from the HSM, depending on policy.

Restore (aka cache-miss aka copyin)

Scenario: File data is copied from a HSM into Lustre transparently
Business Goals: Automatically provides filesystem access to all files in the HSM
Relevant QA's: Availability
Stimulus: A released file is accessed
Stimulus source: A client tries to open a released file, or an explicit call to prestage
Environment: Released and restored files in Lustre
Artifact: Released file
Response: Block the client open request. Start a file transfer from the HSM to Lustre using the dedicated copy tool. As soon as the data are fully available, reply to the client. Tag the file as resident.
Response measure: The file is 100% resident in Lustre and the open completes without error or timeout.
Questions: None.
Issues: None.

Archive (aka copyout)

Scenario: Some files are copied to the HSM from Lustre.
Business Goals: Provides a large capacity archiving system for Lustre transparently.
Relevant QA's: Availability, usability.
Stimulus: Explicit request
Stimulus source: Administrator/User or policy engine.
Environment: Candidate files from the policy.
Artifact: Files matching the policy.
Response: The coordinator, receiving the request, starts a transfer between Lustre and the HSM for selected files. A dedicated agent is called and will spawn the copy tool to manage the transfer. When the transfer is completed, the Lustre file is tagged as Archived and Not Dirty on the MDT.
Response measure: Policy compliance.
Questions: None.
Issues: None.

access-during-archive

Scenario: The file being archived is accessed during the archival process.
Business Goals: Optimize file accesses.
Relevant QA's: Usability.
Stimulus: File access on a migrating file.
Stimulus source: A process on any client opens a file or requests some specific actions.
Environment: File currently undergoing archival.
Artifact: File metadata.
Response: A file that is modified during archival must not be marked as up-to-date in the HSM until it has been copied completely and coherently.
Response measure: Policy engine must re-queue file for archival. MDT disallows release of the file.
Questions: None.
Issues: None.

component-failure

Scenario: A transfer component fails during a migration.
Business Goals: Interrupted jobs should be restarted. Filesystem and HSM must remain coherent.
Relevant QA's: Availability.
Stimulus: A peer component times out while exchanging data with a specific component.
Stimulus source: Any component failure (client, MDT, agent).
Environment: Lustre components used for file migration.
Artifact: A component that can no longer be reached.
Response: On client failure, the migration is finished anyway. On MDT/coordinator failure, a recovery mechanism using persistent state must be applied. On agent failure, the archive process is aborted and the coordinator will respawn it when necessary. On OST failure, the archive process is delayed and managed like a traditional I/O.
Response measure: The system is coherent, no data transfer process is hung.
Questions: None.
Issues: None.

unlink

Scenario: A client unlinks a file in Lustre.
Business Goals: Do not limit Lustre unlink speed to HSM speed.
Relevant QA's: Availability.
Stimulus: A client issues an unlink request on a file in Lustre.
Stimulus source: A Lustre client.
Environment: A Lustre filesystem with objects archived in the HSM.
Artifact: A file archived in the HSM.
Response: The file is unlinked normally in Lustre. For each Lustre object removed this way, an unlink request is sent to the coordinator for the corresponding removal. This should be asynchronous and may be delayed for a long time.
Response measure: No file object exists anymore in Lustre or the archive.
Questions: None.
Issues: None.

Components

Coordinator

  1. dispatches requests to agents; chooses agents
    1. restore <FIDlist>
    2. archive <FIDlist>
    3. unlink <FIDlist>
    4. abort_action <cookie>
  2. consolidates repeat requests
  3. re-queues requests to a new agent if an agent becomes unresponsive (aborts old request)
    1. agents send regular progress updates to coordinator (e.g. current extent)
    2. coordinator periodically checks for stuck threads
  4. coordinator requests are persistent
    1. all requests coming to the coordinator are kept in llog, cancelled when complete or aborted
  5. kernel-space service, MDT acts as initiator for copyin
  6. ioctl interface for all requests. Initiators are policy engine, administrator tool, or MDT for cache-miss.
  7. Location: a coordinator will be directly integrated with each MDT
    1. Agents will communicate via MDC
    2. Connection/reconnection already taken care of; no additional pinging, config
    3. Client mount option will indicate "agent", connect flag will inform MDT
    4. MDT already has intimate knowledge of HSM bits (see below) and needs to communicate with coordinator anyhow
    5. HSM comms can use a new portal and reuse MDT threads.
    6. Coordinators will handle the same namespace segment as each MDT under CMD
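
As an illustration of the request types and persistent state listed above, the sketch below shows one possible record layout for a coordinator request kept in its llog. All names and fields here are hypothetical, not the actual Lustre data structures.

  /* Hypothetical coordinator request record; illustrative only. */
  #include <stdint.h>

  enum hsm_request_type {
      HSM_REQ_RESTORE,          /* restore <FIDlist> */
      HSM_REQ_ARCHIVE,          /* archive <FIDlist> */
      HSM_REQ_UNLINK,           /* unlink <FIDlist> */
      HSM_REQ_ABORT,            /* abort_action <cookie> */
  };

  struct hsm_fid {              /* a Lustre FID, schematically */
      uint64_t seq;
      uint32_t oid;
      uint32_t ver;
  };

  /* One request as kept in the coordinator's llog until it completes or is
   * aborted, so that it survives an MDT/coordinator restart. */
  struct hsm_coordinator_request {
      enum hsm_request_type type;
      uint64_t              cookie;      /* referenced by abort_action */
      uint32_t              agent_id;    /* agent it was dispatched to */
      uint32_t              fid_count;
      struct hsm_fid        fids[];      /* the <FIDlist> */
  };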

MDT changes

  1. Per-file layout lock
    1. A new layout lock is created for every file. The lock contains a layout version number.
    2. Private writer lock is taken by the MDT when allocating/changing file layout (LOV EA).
      1. The lock is not released until the layout change is complete and the data exist in the new layout.
      2. The MDT will take group extent locks for the entire file. The group ID will be passed to the agent performing the data transfer.
      3. The current layout version is stored by the OSTs for each object in the layout.
    3. Shared reader locks are taken by anyone reading the layout (client opens, lfs getstripe) to get the layout version.
    4. Anyone taking a new extent lock anywhere in the file includes the layout version. The OST will grant an extent lock only if the layout version included in the RPC matches the object layout version.
  2. lov EA changes
    1. flags
      1. hsm_released: file is not resident on OSTs; only in HSM
      2. hsm_exists: some version of this fid exists in HSM; maybe partial or outdated
      3. hsm_dirty: file in HSM is out of date
      4. hsm_archived: a full copy of this file exists in HSM; if not hsm_dirty, then the HSM copy is current.
    2. The hsm_released flag is always manipulated under a write layout lock, the other flags are not.
  3. new ioctls for HSM control:
    1. HSM_REQUEST: policy engine or admin requests (archive, release, restore, remove, cancel) <FIDlist>
    2. HSM_STATE_GET: user requests HSM status information on a single file
    3. HSM_STATE_SET: user sets HSM policy flags for a single file (HSM_NORELEASE, HSM_NOARCHIVE)
    4. HSM_PROGRESS: copytool reports periodic state of a single request (current extent, error)
    5. HSM_TAPEFILE_ADD: add an existing archived file into the Lustre filesystem (only metadata is copied).
  4. changelogs:
    1. new events for HSM event completion
      1. restore_complete
      2. archive_complete
      3. unlink_complete
    2. per-event flags used by HSM
      1. setattr: data_changed (actually mtime_changed for V1)
      2. archive_complete: hsm_dirty
      3. all HSM events: hsm_failed
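
A minimal sketch of how the lov EA flags above could be encoded and combined follows; the bit values and the helper are assumptions for illustration, not the real on-disk format.

  /* Illustrative encoding of the HSM lov EA flags; values are assumed. */
  #include <stdbool.h>
  #include <stdint.h>

  #define HSM_RELEASED  0x01  /* data not resident on OSTs, only in HSM */
  #define HSM_EXISTS    0x02  /* some (maybe partial/outdated) HSM copy exists */
  #define HSM_DIRTY     0x04  /* HSM copy is out of date */
  #define HSM_ARCHIVED  0x08  /* a full copy exists in HSM */

  /* Release requires a full, current HSM copy; hsm_released itself is only
   * changed under a write layout lock (see above). */
  static bool hsm_can_release(uint32_t flags)
  {
      return (flags & HSM_ARCHIVED) && !(flags & HSM_DIRTY) &&
             !(flags & HSM_RELEASED);
  }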


Agent

An agent manages local HSM requests on a client.

  1. one agent per client max; most clients will not have agents
  2. consists of two parts
    1. kernel component receives messages from the coordinator (LNET comms)
      1. agents and coordinator piggyback comms on MDC/MDT: connections, recovery, etc.
      2. coordinator uses reverse imports to send RPCs to agents
    2. userspace process copies data between Lustre and HSM backend
      1. will use special fid directory for file access (.lustre/fid/XXXX)
      2. interfaces with hardware-specific copytool to access HSM files
  3. kernel process passes requests to userspace process via socket
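
The request handed from the agent's kernel component to its userspace mover over the socket might look like the sketch below; the field names and layout are assumptions.

  /* Hypothetical kernel-to-userspace agent request; layout is illustrative. */
  #include <stdint.h>

  struct agent_request {
      uint32_t type;           /* restore, archive, unlink, abort */
      uint32_t data_len;       /* length of the opaque payload below */
      uint64_t cookie;         /* matches coordinator request for progress/abort */
      uint64_t group_lock_id;  /* group extent lock taken by the MDT */
      uint64_t fid_seq;        /* FID, opened via .lustre/fid/<fid> */
      uint32_t fid_oid;
      uint32_t fid_ver;
      uint64_t extent_start;   /* requested extent, 0..EOF for the whole file */
      uint64_t extent_end;
      char     data[];         /* opaque policy-engine payload, if any */
  };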

Copytool

The copytool copies data between Lustre and the HSM backend, and deletes the HSM object when necessary.

  1. userspace; runs on a Lustre client with HSM I/O access
  2. opens objects by fid
  3. may manipulate HSM mode flags in an EA.
  4. uses ioctl calls on the (opened-by-fid) file to report progress to MDT. Note MDT must pass some messages on to Coordinator.
    1. updates progress regularly while waiting for HSM (e.g. every X seconds)
    2. reports error conditions
    3. reports current extent
  5. copytool is HSM-specific, since it must move data to the HSM archive
    1. version 1 will include tools for HPSS and SAM-QFS
    2. other, vendor-proprietary (binary) tools may be wrapped in order to include Lustre ioctl progress calls.
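
The progress reporting in item 4 could look roughly like the sketch below; the HSM_PROGRESS ioctl number and structure are assumptions based on the interface named under "MDT changes", not a defined API.

  /* Sketch of copytool progress reporting; ioctl number/struct are assumed. */
  #include <stdint.h>
  #include <sys/ioctl.h>

  struct hsm_progress {
      uint64_t cookie;         /* which request this reports on */
      uint64_t extent_start;   /* extent copied so far */
      uint64_t extent_end;
      int32_t  error;          /* 0, or an errno on failure */
  };

  #define HSM_PROGRESS _IOW('f', 230, struct hsm_progress)   /* assumed */

  /* Called on the file opened via .lustre/fid/<fid>, every few seconds while
   * waiting on the HSM and after each chunk is transferred. */
  static int report_progress(int fd, uint64_t cookie, uint64_t bytes_done,
                             int error)
  {
      struct hsm_progress hp = {
          .cookie       = cookie,
          .extent_start = 0,
          .extent_end   = bytes_done,
          .error        = error,
      };

      return ioctl(fd, HSM_PROGRESS, &hp);
  }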

Policy Engine

  1. makes policy decisions for archive, release (which files and when)
    1. policy engine will provide the functionality of the Space Manager and any other archive/release policies
    2. may be based on space available per filesystem, OST, or pool
    3. may be based on any filesystem or per-file attributes (last access time, file size, file type, etc)
    4. policy engine will therefore require access to various user-available info: changelogs, getstripe, lfs df, stat, lctl get_param, etc.
  2. normally uses changelogs and 'df' for input; is rarely allowed to scan the filesystem
    1. changelogs are available to superuser on Lustre clients
    2. filesystem scans are expensive; allowed only at initial HSM setup time or other rare events
  3. the policy engine runs as a userspace process; requests archive and release via file ioctl to coordinator (through MDT).
  4. policy engine may be packaged separately from Lustre
  5. the policy engine may use HSM-backend specific features (e.g. HPSS storage class) for policy optimizations, but these will be kept modularized so they are easily removed for other systems.
  6. API can pass an opaque arbitrary chunk of data (char array, size) from policy engine ioctl call through coordinator and agent to copytool.
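
A toy example of the kind of per-file policy predicate described above follows; the inputs, thresholds and the decision itself are illustrative only.

  /* Toy archive/release policy check; inputs and thresholds are examples. */
  #include <stdbool.h>
  #include <time.h>
  #include <sys/stat.h>

  /* Release the Lustre copy if the file is archived and clean, has not been
   * accessed for 'idle_days', and the OST/pool usage exceeds the watermark. */
  static bool should_release(const struct stat *st, bool archived, bool dirty,
                             double used_pct, double high_watermark,
                             int idle_days)
  {
      time_t now  = time(NULL);
      bool   idle = (now - st->st_atime) > (time_t)idle_days * 24 * 3600;

      return archived && !dirty && idle && used_pct > high_watermark;
  }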

Configuration

  1. policy engine has its own external configuration
  2. coordinator starts as part of MDT; tracks agent registrations as clients connect
    1. connect flag to indicate agent should run on this MDC
    2. mdt_set_info RPC for setting agent status using 'remount'


Scenarios

restore (aka cache-miss, copyin)

Version 1 ("simple"): "Migration on open" policy

Clients block at open for read and write. OSTs are not involved.

  1. Client layout-intent enqueues layout read lock on the MDT.
  2. MDT checks hsm_released bit; if released, the MDT takes PW lock on the layout
  3. MDT creates a new layout with a similar stripe pattern as the original, increasing the layout version, and allocating new objects on new OSTs with the new version.
    (We should try to respect specific layout settings (pool, stripecount, stripesize), but be flexible if e.g. pool doesn't exist anymore.
    Maybe we want to ignore stripe offset and/or specific OST allocations in order to rebalance.)
  4. MDT enqueues group write lock on extents 0-EOF
    Extents lock enqueue timeout must be very long while group lock is held (need proc tunable here)
  5. MDT releases PW layout lock
    Client open succeeds at this point, but r/w is blocked on extent locks
  6. MDT sends request to coordinator requesting restore of the file to .lustre/fid/XXXX with group lock id and extents 0-EOF. (Extents may be used in the future to (a) copy in part of a file, in low-disk-space situations; (b) copy in individual stripes simultaneously on multiple OSTs.)
  7. Coordinator distributes that request to an appropriate agent.
  8. Agent starts copytool
  9. Copytool opens .lustre/fid/<fid>
  10. Copytool takes group extents lock
  11. Copytool copies data from HSM, reporting progress via ioctl
  12. When finished, copytool reports progress of 0-EOF and closes the file, releasing group extents lock.
  13. MDT clears hsm_released bit
  14. MDT releases group extents lock
    This sends a completion AST to the original client, which now receives its extent lock.
  15. MDT adds FID <fid> HSM_copyin_complete record to changelog (flags: failed)
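
The ordering of the MDT-side steps above is summarized in the sketch below; every helper is a made-up stub standing in for Lustre internals, so this shows sequencing only, not real interfaces.

  /* Sequencing sketch of V1 restore on the MDT; all helpers are stubs. */
  #include <stdbool.h>
  #include <stdint.h>

  struct fid { uint64_t seq; uint32_t oid; uint32_t ver; };

  static bool hsm_released(const struct fid *f)         { (void)f; return true; }
  static void layout_lock_pw(const struct fid *f)       { (void)f; } /* step 2 */
  static void new_layout(const struct fid *f)           { (void)f; } /* step 3 */
  static uint64_t group_lock_0_eof(const struct fid *f) { (void)f; return 1; } /* step 4 */
  static void layout_lock_put(const struct fid *f)      { (void)f; } /* step 5 */
  static void coordinator_restore(const struct fid *f, uint64_t glock)
  { (void)f; (void)glock; }                                          /* step 6 */

  static void mdt_open_on_released_file(const struct fid *f)
  {
      if (!hsm_released(f))
          return;                  /* resident file: open proceeds normally */

      layout_lock_pw(f);           /* block layout readers */
      new_layout(f);               /* new objects, bumped layout version */
      uint64_t glock = group_lock_0_eof(f);
      layout_lock_put(f);          /* open succeeds; r/w blocks on extent locks */
      coordinator_restore(f, glock);
      /* steps 7-15: agent/copytool restore the data, then the MDT clears
       * hsm_released, drops the group lock and logs completion. */
  }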

HSM Copy-in schema

Version 2 ("complex"): "Migration on first I/O" policy

Clients are able to read/write the file data as soon as possible and the OSTs need to prevent access to the parts of the file which have not yet been restored.

Note on getattr: attributes can be returned from the MDT with no HSM involvement; the MDS holds the file size[*], and clients may get MDS attribute read locks, but not the layout lock.

  1. Client open intent enqueues layout read lock.
  2. MDT checks "purged" bit
  3. MDT creates a new layout with a similar stripe pattern as the original, allocating new objects on new OSTs with per-object "purged" bits set.
  4. MDT grants layout lock to client and open completes
  5. ?Should we pre-stage: MDT sends request to coordinator requesting copyin of the file to .lustre/fid/XXXX with extents 0-EOF.
  6. client enqueues extent lock on OST. Must wait forever.
  7. check OST object is marked fully/partly invalid
    1. object may have persistent invalid map of extent(s) that indicate which parts of object require copy-in
  8. access to invalid parts of object trigger copy-in upcall to coordinator for those extents
    1. coordinator consolidates repeat requests for the same range (e.g. if entire file has already been queued for copyin, ignore specific range requests??)
  9. ? group locks on invalid part of file block writes to missing data
  10. clients block waiting on extent locks for invalid parts of objects
    1. OST crash at this time will restart enqueue process during replay
  11. coordinator contacts agent(s) to retrieve FID N extents X-Y from HSM
  12. copytool writes to actual object to be restored with "clear invalid" flag (special write)
    1. writes by agent shrink invalid extent, periodically update on-disk invalid extent and release locks on that part of file (on commit?)
    2. note changing lock extents (lock conversion) is not currently possible but is a long-term Lustre performance improvement goal.
  13. client is granted extent lock when that part of file is copied in
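
One way the per-object invalid map from steps 7-12 could be represented is sketched below; the structure and helper are assumptions for illustration.

  /* Hypothetical per-object invalid-extent map kept by an OST. */
  #include <stdbool.h>
  #include <stdint.h>

  struct invalid_extent {
      uint64_t start;              /* byte range not yet restored */
      uint64_t end;
  };

  struct ost_object_hsm_state {
      uint32_t              layout_version;  /* matched against lock RPCs */
      uint32_t              nr_invalid;
      struct invalid_extent invalid[4];      /* persisted; shrinks as data is
                                              * copied in (step 12.1) */
  };

  /* An access to [start, end) triggers a copy-in upcall only if it overlaps
   * an invalid extent (step 8). */
  static bool needs_copyin(const struct ost_object_hsm_state *o,
                           uint64_t start, uint64_t end)
  {
      for (uint32_t i = 0; i < o->nr_invalid; i++)
          if (start < o->invalid[i].end && end > o->invalid[i].start)
              return true;
      return false;
  }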

copyout

  1. Policy engine (or administrator) decides to copy a file to HSM, executes HSMCopyOut ioctl on file
  2. ioctl caught by MDT, which passes request to Coordinator
  3. coordinator dispatches request to mover. Request includes file extents (for future purposes)
  4. normal extents read lock is taken by mover running on client
  5. mover sends "copyout begin" message to coordinator via ioctl on the file
    1. coordinator/MDT sets "hsm_exists" bit and clears "hsm_dirty" bit.
      "hsm_exists" bit is never cleared, and indicates a copy (maybe partial/out of date) exists in the HSM
  6. any writes to the file cause the MDT to set the "hsm_dirty" bit (may be lazy/delayed with mtime or filesize change updates on MDT for V1).
    1. file writes need not cancel copyout (settable via policy? Implementation in V2.)
  7. mover sends status updates to the coordinator via periodic ioctl calls on the file (e.g. % complete)
  8. mover sends "copyout done" message to coordinator via ioctl
  9. coordinator/MDT checks hsm_dirty bit.
    1. If not dirty, MDT sets "copyout_complete" bit.
    2. If dirty, coordinator dispatches another copyout request; goto step 3
  10. MDT adds FID X HSM_copyout_complete record to changelog
  11. Policy engine notes HSM_copyout_complete record from changelog (flags: failed, dirty)

(Note: file modifications after copyout is complete will leave both the copyout_complete and hsm_dirty bits set.)
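
A sketch of the end-of-copyout decision in steps 8-10 follows, reusing the assumed flag-bit style of the "MDT changes" sketch; the bit values and helpers are hypothetical.

  /* Sketch of the decision after "copyout done"; bits and stubs are assumed. */
  #include <stdint.h>

  #define HSM_DIRTY             0x04    /* same assumed bit as the earlier sketch */
  #define HSM_COPYOUT_COMPLETE  0x10

  static void dispatch_copyout(uint64_t fid)               { (void)fid; } /* stub */
  static void changelog_add_copyout_complete(uint64_t fid) { (void)fid; } /* stub */

  static void copyout_done(uint64_t fid, uint32_t *flags)
  {
      if (*flags & HSM_DIRTY) {          /* step 9.2: file changed meanwhile */
          dispatch_copyout(fid);         /* queue another copyout, goto step 3 */
          return;
      }
      *flags |= HSM_COPYOUT_COMPLETE;    /* step 9.1 */
      changelog_add_copyout_complete(fid);   /* step 10 */
  }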


HSM Copy-out schema

purge (aka punch)

V1: full file purge

  1. Policy engine (or administrator) decides to purge a file, executes HSMPurge ioctl on file
  2. ioctl handled by MDT
  3. MDT takes a write lock on the file layout lock
  4. MDT enqueues write locks on all extents of the file. After these are granted, no client has any dirty cache and no client can take new extent locks until the layout lock is released. The MDT then drops all extent locks.
  5. MDT verifies that hsm_dirty bit is clear and copyout_complete bit is set
    1. if not, the file cannot be purged, return EPERM
  6. MDT marks the LOV EA as "purged"
  7. MDT sends destroys to the OST objects, using destroy llog entries to guard against object leakage during OST failover
    1. the OSTs should eventually purge the objects during orphan recovery
  8. MDT drops layout lock.

V2: partial purge

Partial purging should allow graphical file browsers to retrieve file header info or icons stored at the beginning or end of a file. Note: determine exactly which parts of a file Windows Explorer reads to generate its icons.

  1. MDT sends purge range to first and last objects, and destroys to all intermediate objects, using llog entries for recovery.
  2. First and last OSTs record purge range
  3. When requesting copyin of the entire file (first access to the middle of a partially purged file), MDT destroys old partial objects before allocating new layout. (Or: we keep old first and last objects, just allocate new "middle object" striping - yuck.)

unlink

  1. A client issues an unlink for a file to the MDT.
  2. The MDT includes the "hsm_exists" bit in the changelog unlink entry
  3. The policy engine determines if the file should be removed from HSM
  4. Policy engine sends HSMunlink FID to coordinator via MDT ioctl
    1. ioctl will be on the directory .lustre/fid
      or perhaps on a new .lustre/dev/XXX where any lustre device may be listed, and act as stub files for handling ioctls.
  5. The coordinator sends a request to one of its agents for the corresponding removal.
  6. The agent spawns the HSM tool to do this removal.
  7. HSM tool reports completion via another MDT ioctl
  8. Coordinator cancels unlink request record
    1. In case of agent crash, unlink request will remain uncancelled and coordinator will eventually requeue
    2. In case of coordinator crash, agent ioctl will proceed after recovery
  9. Policy engine notes HSM_unlink_complete record from changelog (flags: failed)
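
A sketch of steps 2-4 from the policy engine's point of view follows; the changelog record layout, the flag bit and the ioctl are assumptions, not defined interfaces.

  /* Sketch of the policy engine turning an unlink changelog record into an
   * HSM removal request; record layout, flag and ioctl are assumed. */
  #include <stdint.h>
  #include <sys/ioctl.h>

  #define CLF_HSM_EXISTS  0x01                         /* "hsm_exists" in the record */
  #define HSM_UNLINK_IOC  _IOW('f', 231, uint64_t)     /* assumed ioctl */

  struct changelog_rec {                               /* schematic unlink record */
      uint32_t type;
      uint32_t flags;
      uint64_t fid;
  };

  /* fid_dir_fd is an fd on .lustre/fid (or a future .lustre/dev/XXX stub). */
  static int maybe_remove_from_hsm(int fid_dir_fd, const struct changelog_rec *rec)
  {
      if (!(rec->flags & CLF_HSM_EXISTS))
          return 0;                  /* nothing archived, nothing to remove */

      /* additional policy decisions (retention period, etc.) could go here */
      return ioctl(fid_dir_fd, HSM_UNLINK_IOC, &rec->fid);
  }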


abort

  1. abort dead agent
    The coordinator must send an abort signal to an agent to abort a copyout/copyin if it determines the migration is stuck or crashed. The coordinator can then re-queue the migration request elsewhere.
  2. dirty-while-copyout
    If a file is written to while it is being copied out, the copy in the HSM may be incoherent.
    1. We could send abort signal, but:
    2. If a filesystem has a single massive file that is used all the time, it will never get backed up if we abort.
    3. Not a problem if just appending to a file
    4. Most backup systems work this way with relatively little harm.
    5. V1: don't abort this case
    6. V2: abort in this case is a settable policy

Recovery

MDT crash

  1. MDT crashes and is restarted.
  2. The coordinator recreates its migration list by reading its llog.
  3. The client, when doing its recovery with the MDT, reconnects to the coordinator.
    1. Copytool eventually sends its periodic status update for migrating files (asynchronously from reconnect).
    2. As far as the copytools and agents are concerned, the MDT restart is invisible.

Note: The migration list is simply the list of unfinished migrations which may be read from the llog at any time (no need to keep it in memory all the time, if there are many open migration requests).

Logs should contain:

  1. fid, request type, agent_id (for aborts)
  2. if the list is not kept in memory: last_status_update_time, last_status.
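
Taken together, a persistent llog entry covering the fields listed above might look like this; it is purely illustrative.

  /* Illustrative llog record for an unfinished migration (see list above). */
  #include <stdint.h>

  struct hsm_llog_record {
      uint64_t fid_seq;                  /* fid */
      uint32_t fid_oid;
      uint32_t fid_ver;
      uint32_t request_type;             /* restore/archive/unlink */
      uint32_t agent_id;                 /* needed to abort the right agent */
      /* only needed if the migration list is not kept in memory: */
      uint64_t last_status_update_time;
      uint32_t last_status;              /* e.g. last reported extent or error */
      uint32_t padding;
  };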

Client crash

  1. Client stops communicating with MDT
  2. MDT evicts client
  3. Eviction triggers coordinator to re-dispatch immediately all of the migrations from that agent
  4. For copyin, it is desirable that any existing agent I/O is stopped
    1. Ghost client and copytool may still be alive and communicating with OSTs, but not MDT. Can't send abort.
    2. Taking file extent locks will only temporarily stop ghost.
    3. It's not so bad if new agent and ghost are racing trying to copyin the file at the same time.
      1. Regular extent locks prevent file corruption
      2. The file data being copied in is the same
      3. Ghost copyin may still be ongoing after the new copyin has finished, in which case the ghost may overwrite newly-modified data (data modified by regular clients after HSM/Lustre think the copyin is complete).

Copytool crash

Copytool crash is different from a client crash, since the client will not be evicted.

  1. Copytool crashes
  2. Coordinator notices no status updates
  3. Coordinator sends abort signal to old agent
  4. Coordinator re-dispatches migration
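
Step 2's "notices no status updates" could be a simple timeout check against the last progress report, as sketched below with assumed names and structures.

  /* Sketch of stuck-copytool detection in the coordinator; names are assumed. */
  #include <stdbool.h>
  #include <stdint.h>
  #include <time.h>

  struct active_migration {
      uint32_t agent_id;
      time_t   last_progress;            /* updated on each progress report */
  };

  /* If no progress report has arrived within 'timeout' seconds, abort the old
   * agent's request and re-dispatch the migration elsewhere. */
  static bool migration_is_stuck(const struct active_migration *m,
                                 time_t now, time_t timeout)
  {
      return (now - m->last_progress) > timeout;
  }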


Implementation constraints

  1. all single-file coherency issues are in kernel space (file locking, recovery)
  2. all policy decisions are in user space (using changelogs, df, etc)
  3. coordinator/mover communication will use LNET
  4. Version 1 HSM is a simplified implementation:
    1. integration with HPSS only
    2. depends on changelog for policy decisions
    3. restore on file open, not data read/write
  5. HSM tracks entire files, not stripe objects
  6. HSM namespace is flat, all files are addressed by FID only
  7. Coordinator and movers can be reused by (non-HSM) replication

HSM Migration components & interactions

Note: for V1, copyin initiators are on the MDT only (file open).

For further review/detail

  1. "complex" HSM roadmap
    1. partial access to files during restore
    2. partial purging for file type identification, image thumbnails, ??
    3. integration with other HSM backends (ADM, ??)
  2. How can layout locks be held in liblustre?


References

HSM implementation 15599: https://bugzilla.lustre.org/show_bug.cgi?id=15599
changelogs 15699: https://bugzilla.lustre.org/show_bug.cgi?id=15699