Architecture - Caching OSS

Note: The content on this page reflects the state of design of a Lustre feature at a particular point in time and may contain outdated information.

Summary
Caching OSS introduces a cache on the OSS (Object Storage Server) side.

Definitions

 * transno: transaction number, generated by the server.


 * committed transno: the greatest transno known to be committed. The current recovery model implies that all transactions with a transno less than the committed one are committed as well.
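The "all earlier transnos are committed too" property is what lets a client free its retained requests with a single watermark. A minimal sketch (hypothetical names, not Lustre source):

```python
# Hypothetical sketch: because commits are ordered, a single
# "last committed" transno from the server covers every earlier
# transaction, so the client can purge all retained requests at
# or below that watermark.

def purge_committed(retained, last_committed):
    """Drop retained requests whose transno is already committed.

    retained: dict mapping transno -> saved request payload
    last_committed: greatest transno the server reports committed
    """
    return {t: req for t, req in retained.items() if t > last_committed}

retained = {101: "write A", 102: "write B", 103: "write C"}
# Server reports transno 102 committed: 101 and 102 may be freed.
assert purge_committed(retained, 102) == {103: "write C"}
```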

Requirements

 * 1) support a writeback cache on the OST
 * 2) support cached reads on the OST
 * 3) follow the same recovery model (atomic updates, executed-once semantics, replay of changes from clients)
 * 4) no performance penalty in the majority of cases

Few requests in flight

 * QAS template

Server side

 * 1) transno generation for OST_WRITE
 * 2) flush in order of transno

Client side

 * 1) copy-on-write mechanism for regular writes
 * 2) copy-on-write mechanism for mmap'ed pages
 * 3) OST_WRITE replay and free upon commit
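The client-side rules interact: a retained OST_WRITE references live pages, so a later application write must copy the page first (copy-on-write) to keep the replay image intact, and retained requests are freed once the server's committed transno covers them. A hypothetical sketch, with pages modeled as dicts (not Lustre code):

```python
# Hypothetical sketch of the client side: copy-on-write for pages
# referenced by retained OST_WRITE requests, plus freeing retained
# requests upon commit.

class ReplayCache:
    def __init__(self):
        self.retained = {}   # transno -> list of page references

    def record_write(self, transno, pages):
        # Retain references to the written pages for possible replay.
        self.retained[transno] = pages

    def modify_page(self, page, new_data):
        # Copy-on-write: if a retained request still references this
        # page, snapshot it so replay sees the original contents.
        for pages in self.retained.values():
            for i, p in enumerate(pages):
                if p is page:
                    pages[i] = dict(page)   # private copy for replay
        page["data"] = new_data

    def commit(self, committed_transno):
        # Free retained requests once the server commits them.
        self.retained = {t: p for t, p in self.retained.items()
                         if t > committed_transno}

cache = ReplayCache()
page = {"data": "v1"}
cache.record_write(1, [page])
cache.modify_page(page, "v2")            # triggers copy-on-write
assert cache.retained[1][0]["data"] == "v1"   # replay image preserved
assert page["data"] == "v2"              # application sees new data
cache.commit(1)
assert cache.retained == {}              # freed upon commit
```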

Implementation Details
Should we consider a different recovery model for data? For metadata we use a model where state is reproduced by replaying all requests, and each request carries all required data (in other words, it does not refer to pages, inodes, dentries, etc.). We cannot follow exactly this model for data, as it would imply copying all data for each request. So we have to reference external data from retained requests, but external data is shared and can change by the moment of replay. Would it make sense to use a "state" model for data, where we reproduce only the current state rather than replaying all previous states?
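The difference between the two models can be made concrete for a byte range overwritten twice: the replay model resends both versions in transno order, while the "state" model resends only the current contents of each dirty extent. A hypothetical sketch (illustrative only):

```python
# Hypothetical sketch contrasting the two recovery models: request
# replay versus reproducing the current state only.

class Server:
    def __init__(self):
        self.store = {}          # extent -> data

    def apply(self, update):
        extent, data = update
        self.store[extent] = data

def recover_by_replay(server, retained):
    # Metadata-style model: re-execute every request in transno order.
    for transno in sorted(retained):
        server.apply(retained[transno])

def recover_by_state(server, dirty_extents):
    # State model: send only the current data of each dirty extent.
    for extent, data in dirty_extents.items():
        server.apply((extent, data))

a, b = Server(), Server()
recover_by_replay(a, {1: ("ext0", "old"), 2: ("ext0", "new")})
recover_by_state(b, {"ext0": "new"})
assert a.store == b.store == {"ext0": "new"}
```

Both recoveries reach the same final state, but the state model sent one update instead of two and never needed the overwritten intermediate version, which is exactly the data that copy-on-write would otherwise have to preserve.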

Can we support asynchronous truncate? If so, how does the client learn that a page was truncated, so that it does not try to read it from the server? How do we truncate upon lock cancel?

Should we take NRS into account? To what extent?