WARNING: This is the _old_ Lustre wiki, and it is in the process of being retired. The information found here is all likely to be out of date. Please search the new wiki for more up to date information.

Lustre Publications

Revision as of 02:52, 22 May 2007

== Caspur (CERN) ==
* attachment
* We are doing poorly on 64k random reads; Panasas does much better.

== HEPiX Talks ==
* [https://indico.desy.de/conferenceTimeTable.py?confId=257&showDate=all&showSession=all&detailLevel=contribution&viewMode=plain Spring HEPiX 2007]: April 23-27, 2007
* [https://indico.desy.de/getFile.py/access?contribId=26&sessionId=40&resId=1&materialId=slides&confId=257 Storage Evaluations at BNL]
** Robert Petkus - BNL
** Performance comparison between ZFS, XFS, and EXT3 (unfortunately not EXT4) on a Sun Thumper
** attachment:
* [https://indico.desy.de/getFile.py/access?contribId=44&sessionId=39&resId=0&materialId=slides&confId=257 Lustre Experience at CEA/DIF]
** J-Ch Lafoucriere
** attachment:

== Karlsruhe Lustre Talks ==
* http://www.rz.uni-karlsruhe.de/dienste/lustretalks.php
* Six talks in PDF:
** Introductory session at the computing center (2005):
** HP-CAST 4 in Kraków (May 10, 2005):
** ISC 2005 in Heidelberg (June 24, 2005):
** HP-CAST 5 in Seattle (November 11, 2005):
** HP-CCN in Seattle (November 12, 2005):
** SGPFS 5 in Stuttgart (April 4, 2006):

== Ohio State University ==
* Lustre performance comparison when using InfiniBand and Quadrics interconnects
* Download the paper at the OSU site: [1]
* attachment:

== University of Boulder, Colorado ==
* LCI paper: attachment:
** Note that when a single client gets exclusive access, GPFS wins.
** Single-client reads (we are very poor); remedy: tuning.
** Creates in a unique directory per client; remedy: memory write-back cache.
** The management issues mentioned in the last part of this paper are being addressed.
** [http://linuxclustersinstitute.org/Linux-HPC-Revolution/Archive/PDF05/17-Oberg_M.pdf] (same as the LCI paper attachment above)

== University of Minnesota ==
* attachment:MSST-2006-paper.pdf