
Guidelines for Setting Up a Cluster

The following tips, collected while working on clusters, can make debugging easier.

  • Set up shared home directories. A shared namespace is useful for bringing up Lustre builds, collecting logs, distributing configuration files, and so on. The most commonly shared namespace is /home.
  • Use pdsh. Using pdsh is an absolute requirement, with bonus points for being able to pdsh to all nodes from any node. (A brief usage sketch appears after this list.)
  • Regular naming. A node naming scheme of a short prefix combined with regularly incremented decimal node numbers (e.g., n0001, n0002, etc.) works well with an automated tool like pdsh. Also, machines tend to be used for different roles in a cluster over time, so hostnames based on roles in a Lustre file system (mds, ost, etc.) are not always practical. However, documenting how hostnames map to Lustre functions is useful.
  • Use serial consoles. As in any data center, serial consoles are essential, as they allow console output to be logged for later retrieval in case a problem occurs. Provide a useful front end like 'conman' or 'conserver', and make sure the front end can send breaks to the kernel's sysrq facility over the serial console.

In 2.6 kernels, reliable network-based consoles allow sending (nearly) all kernel messages to a remote system, even oops messages. In 2.6.5, this facility is called netconsole; in 2.6.9 and later, netdump supersedes netconsole. The netdump code also allows doing kernel crash dumps over the network to another host, which can be invaluable for debugging node-crashing problems.
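
As a rough sketch on a 2.6 kernel (the interface, IP address, and MAC address below are placeholders, and the exact netconsole parameter format should be checked against your kernel's documentation), wiring up sysrq and netconsole might look like this:

    # Allow serial-console breaks to reach the kernel's sysrq facility
    # (add kernel.sysrq = 1 to /etc/sysctl.conf to make this persistent).
    sysctl -w kernel.sysrq=1

    # Send kernel messages out eth0 to UDP port 6666 on the log host.
    modprobe netconsole netconsole=@/eth0,6666@192.168.1.10/00:11:22:33:44:55

    # On the log host, capture the stream with any UDP listener,
    # e.g. with traditional netcat:
    nc -l -u -p 6666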
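
Returning to pdsh and the regular naming scheme above, a typical session (node names and counts here are illustrative) might look like this:

    # Run a command on nodes n0001 through n0016; the compact range
    # syntax works because names share a prefix and zero-padded numbers.
    pdsh -w 'n[0001-0016]' uptime

    # Run a command on all configured nodes (-a) and coalesce
    # identical output with dshbak so differences stand out.
    pdsh -a uname -r | dshbak -c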

  • Collect syslogs in one place. It's convenient to be able to watch a single log for errors reported to syslog across the whole cluster. (A sketch of a central syslog setup appears after this list.)
  • Remote power management. If a machine wedges, it must be possible to reboot it without physically flipping a switch. Any number of vendors offer serially controlled power widgets; ones that work with 'powerman' are the most useful. This is a requirement for automated failover (STONITH); see the powerman sketch below.
  • Automated disaster recovery. It's nice to be able to reimage a node via netbooting and network software installs, though this is a low-frequency endeavour. (A netboot sketch follows this list.)
  • Boot quickly. (See the chkconfig sketch below.)
    1. Disable non-essential services at boot time.
    2. Minimize hardware checks the BIOS may do.
    3. Especially avoid things like Red Hat's kudzu, which can ask for user input before the boot proceeds.
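
For central syslog collection, a minimal sketch with the classic sysklogd of that era (the host name loghost is a placeholder):

    # On every node, forward all messages to the central host by
    # adding one line to /etc/syslog.conf, then restart syslog.
    printf '*.*\t@loghost\n' >> /etc/syslog.conf
    service syslog restart

    # On loghost, start syslogd with -r so it accepts remote messages
    # (on Red Hat systems, add -r to SYSLOGD_OPTIONS in
    # /etc/sysconfig/syslog), then watch the combined log:
    tail -f /var/log/messages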
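
A hypothetical powerman exchange for a wedged node (the node name is illustrative):

    # Check the node's current power state.
    powerman --query n0007

    # Hard power-cycle it; this is the same primitive an automated
    # STONITH failover script would invoke.
    powerman --cycle n0007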
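
For netboot reimaging, one common approach of that era is sketched below (the server name, file paths, and kickstart URL are placeholders): point the node's PXE configuration at an unattended installer.

    # /tftpboot/pxelinux.cfg/default on the DHCP/TFTP server;
    # booting this entry starts an automated reinstall.
    default reinstall
    label reinstall
        kernel vmlinuz
        append initrd=initrd.img ks=http://installhost/ks.cfg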
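
For faster boots on a Red Hat-style system, non-essential services (the list here is only illustrative) can be switched off with chkconfig:

    # Disable services that slow the boot or block on user input;
    # kudzu in particular can stop the boot to ask about new hardware.
    chkconfig kudzu off
    chkconfig cups off
    chkconfig sendmail off

    # Review what is still enabled for the default runlevels.
    chkconfig --list | grep ':on'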
