Debugging Lustre

Lustre™ is a complex system that requires a rich debugging environment to assist programmers with locating problems. Developers must be able to easily create new and meaningful debugging output as they add functionality and trace bugs. Troubleshooters need a mechanism to filter and display only data that is relevant to the problem at hand.

This topic describes a variety of diagnostic tools that can be used by developers and administrators to debug issues related to Lustre. Many of these are available in the Linux operating system, while others are provided with Lustre.

Tools Available for Debugging & Analysis

There are several diagnostic tools available to debug Lustre; some are provided by the operating system, while others were developed and made available by the Lustre project.

Lustre tools (components of the in-kernel debug mechanism):

  • debug logs: When a kernel module is first inserted, a circular debug buffer is allocated to hold a substantial amount of debugging information (megabytes or more). When the buffer fills up, it wraps around and discards the oldest information. Debug messages added specifically for Lustre can be written out to this kernel log.
  • debug daemon: The debug daemon provides the facility for unlimited logging of the CDEBUG logs in Lustre.
  • /proc/sys/lnet/debug: Contains a mask that can be used to limit the debugging information written out to the kernel debug logs.
  • lctl: This tool is provided with Lustre. It is very useful for filtering the kernel debug log and extracting useful information.
  • Lustre subsystem asserts: When an assertion fails, a log is written out to /tmp/lustre_log.<timestamp>.
  • lfs: A Lustre utility that can be used to access a Lustre file's extended attributes (among other things).
  • leak_finder.pl: An extremely useful program that helps locate memory leaks in the code.

External tools

  • strace: Allows Lustre users to trace a system call.
  • /var/log/messages: The file to which fatal or serious messages are printed by syslogd.
  • Crash dumps: On some kernels, a sysrq "c" is enabled which produces a crash dump. Lustre enhances this crash dump with a log dump (the last 64 KB of the log) to the console.
  • debugfs: An interactive ext2 filesystem debugger.



Lustre Debug Messages and Subsystems

All Lustre debug messages are divided into several subsystems and types to associate them with the Lustre module that generated the message. Each Lustre message is tagged with the subsystem in which it originated, the message type, and the location in the source code. All subsystems and debug types used in Lustre are listed below; for the most current list, see lnet/include/libcfs/libcfs.h.

  • Standard Subsystems:
    • mdc, mds, osc, ost, obdclass, obdfilter, llite, ptlrpc, portals, lnd, ldlm, lov
  • Debug Types:
    • trace: Entry/Exit markers.
    • dlmtrace: Locking-related information.
    • inode
    • super
    • ext2: Anything from ext2_debug.
    • malloc: Print malloc or free information.
    • cache: Cache-related information.
    • info: General information.
    • ioctl: IOCTL-related information.
    • blocks: Ext2 block allocation information.
    • net: Networking.
    • warning
    • buffs
    • other
    • dentry
    • portals: Entry/Exit markers.
    • page: Bulk page handling.
    • error: Error messages.
    • emerg
    • rpctrace: For distributed debugging.
    • ha: Information related to failover and recovery.

Format of Lustre Debug Messages

The CDEBUG and CERROR macros are used in Lustre to print debug and error messages. The CDEBUG macro uses portals_debug_msg() (portals/linux/oslib/debug.c) to print the message. The message format is shown below, along with an example:

subsystem : debug mask : smp_processor_id : sec.usec : stack size : pid : host pid (if UML) or zero : (file:line #:function()) debug message

800000: 000010: 0: 1081880847.677302: 1204: 2973: 31070: (as_dev.c:144:create_write_buffers()) kmalloced '*obj': 24 at a375571c (tot 17447717)
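
Reading the example line field by field (a decoding sketch based on the format above; the mask values themselves vary with the Lustre version):

   800000                                  subsystem mask
   000010                                  debug type mask
   0                                       smp_processor_id
   1081880847.677302                       seconds.microseconds
   1204                                    stack size
   2973                                    pid
   31070                                   host pid (UML), or zero
   (as_dev.c:144:create_write_buffers())   file:line #:function()
   kmalloced '*obj': 24 at a375571c ...    debug message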

Lustre debug daemon

The debug_daemon allows users to control the Lustre kernel debug daemon to dump the debug_kernel buffer to a user-specified file. This functionality is built using a kernel thread on top of debug_kernel. debug_kernel, another sub-command of lctl, continues to work in parallel with the debug_daemon command.

The debug_daemon is highly dependent on filesystem write speed. If the system is under heavy load, filesystem writes may not be fast enough to flush out the debug_buffer while CDEBUG continues to add messages to it. In that case, debug_daemon puts "DEBUG MARKER: Trace buffer full" into the debug_buffer to indicate that the buffer wrapped around and overwrote itself before debug_daemon could flush the data to a file.

Users can start and stop the daemon's dumping of the debug_buffer to a file, or temporarily pause it, using the debug_daemon sub-command of lctl.

Debug daemon commands

  • debug_daemon start [{file} {megabytes}] - Initiates the debug_daemon to start dumping the debug_buffer into a file. If no file is given, the system default shown in /proc/sys/lnet/debug_path is used; after Lustre boots, the default path is /tmp/lustre-log-$HOSTNAME. Users can instead specify a new file name for debug_daemon output; the new file name will then show up in /proc/sys/lnet/debug_path. The megabytes argument limits the file size: the daemon wraps around and dumps data to the beginning of the file when the output file exceeds the user-specified size limit. To decode the dumped file to ASCII, run:
#lctl debug_file {file}  > {newfile} 

The output is already internally sorted by lctl using quicksort.

  • debug_daemon stop - Completely shuts down the debug_daemon operation and flushes the file output. Otherwise, the debug_daemon is shut down as part of the Lustre filesystem shutdown process. Users can restart debug_daemon by issuing the start command after each stop.

EXAMPLE

Below is an example of using the debug_daemon to dump debug logs to a 40 MB file.

#~/utils/lctl

To start the daemon dumping the debug_buffer into a 40 MB /trace/log file:
lctl > debug_daemon start /trace/log 40
To completely shut down the daemon:
lctl > debug_daemon stop
To start another daemon with an unlimited file size:
lctl > debug_daemon start /tmp/unlimited

A “*** End of debug_daemon trace log ***” marker is added to the end of each output file.

Controlling the Kernel Debug Log

The amount of information printed to the kernel debug logs can be controlled by masks in /proc/sys/lnet/subsystem_debug and /proc/sys/lnet/debug. The subsystem_debug mask is for controlling the subsystems (e.g. obdfilter, net, portals, OSC, etc) and the debug mask is for controlling the debug types that would be written out to the log (e.g. info, error, trace, alloc, etc).

  • sysctl -w lnet.debug=0 - Turns off Lustre debugging completely
  • sysctl -w lnet.debug=-1 - Enables full debugging
  • sysctl -w lnet.debug=net - Turns on logging of messages related to network communications.
  • sysctl -w lnet.debug=+net - Turns on logging of messages related to network communications, in addition to existing debug flags.
  • sysctl -w lnet.debug=-net - Turns off logging of messages related to network communications, leaving debug flags alone.

The different options are listed in lnet/include/libcfs/libcfs.h.
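
As a minimal sketch of putting these masks to use (the names come from the subsystem and debug-type lists above; the exact set of valid names, and whether a quoted list is accepted, depends on the Lustre version):

# Show the current debug-type and subsystem masks
cat /proc/sys/lnet/debug
cat /proc/sys/lnet/subsystem_debug

# Log only lock and RPC tracing, then add malloc information as well
sysctl -w lnet.debug="dlmtrace rpctrace"
sysctl -w lnet.debug=+malloc

# Restrict logging to a single subsystem
sysctl -w lnet.subsystem_debug=ldlm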

The lctl Tool

Lustre source code has debug messages sprinkled throughout the code; these are very useful for troubleshooting. As described earlier, we have sub-divided the debug messages into a number of sub-systems and types. This allows us to filter and show only the messages we are interested in. The lctl tool is very useful to enable this filtering and manipulate the logs to extract the useful information from them. The lctl command can be used directly from the command-line, executing a single command, or it can be used interactively to execute multiple commands in a single invocation.

In this section, we describe the lctl commands which can be used effectively to obtain the required information from the logs. More detailed help on lctl can be found in the lctl reference section in the Lustre manual.

Note that the debug messages displayed with lctl are also subject to the kernel debug masks above! In other words, the filters are additive.

  1. Listing all the sub-systems/types. Use the following command to obtain a list of all the types and sub-systems:
    lctl debug_list <subs | types>
  2. Filtering. As mentioned before, the whole key to debugging such a system is limiting the debug log to what is really important. The filter command is the primary means of doing this:
    lctl filter <subsystem name | debug type>
    When lctl filters, it removes those lines from the displayed output only; this does not affect the contents of the debug log in the kernel's memory. As a result, you can print the log many times with different filtering levels without worrying about losing data.
  3. Enabling the required debug messages. To show debug messages belonging to a certain type or sub-system, use the show command:
    lctl show <subsystem name | debug type>
  4. Displaying the log. Once you have set up your filters, debug_kernel pulls the data from the kernel logs, filters it appropriately, and displays or saves it according to the options given below:
    lctl debug_kernel [output filename]
    If the debugging is being done on User Mode Linux (UML), it may be useful to save the logs on the host machine so that they can also be used at a later time.
  5. Filtering a log on disk. If you already have a debug log saved to disk (probably from a crash), debug_file will process it:
    lctl debug_file <input filename> [output filename]
  6. Marking the log. During the course of a debug session you may want to add markers or breaks to the log for many reasons:
    lctl mark [marker text]
    The marker text defaults to the current date and time. Three lines similar to these will be inserted into the debug log:
 ***************************************************************
      DEBUG MARKER: Tue Mar 5 16:06:44 EST 2002
 ***************************************************************
  7. Clearing the log. When you are ready to completely flush the kernel debug buffer, run the clear command:
    lctl clear
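
The same workflow can also be driven non-interactively, one sub-command per invocation (a sketch using only the sub-commands described above; the marker text and file name are arbitrary):

lctl mark "before test run"
lctl debug_kernel /tmp/lustre-debug.txt
lctl clear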


Printing to /var/log/messages

To dump debug messages to the console, set the corresponding debug mask in the printk flag:

sysctl -w lnet.printk=-1

This slows down the system dramatically. It is also possible to selectively enable or disable this for particular flags:

sysctl -w lnet.printk=+vfstrace
sysctl -w lnet.printk=-vfstrace

In newer versions of Lustre (1.4.12, 1.6.5) it is even possible to disable warning, error, and console messages, though it is strongly recommended to have something like lctl debug_daemon running to capture this data to a local filesystem for debugging purposes.

Tracing Lock Traffic

We have built a special debug type category for tracing lock traffic:

lctl> filter all_types
lctl> show dlmtrace
lctl> debug_kernel [filename]

Sample Run of lctl

bash-2.04# ./lctl
lctl > debug_kernel /tmp/lustre_logs/log_all
Debug log: 324 lines, 324 kept, 0 dropped.
lctl > filter trace
Disabling output of type "trace"
lctl > debug_kernel /tmp/lustre_logs/log_notrace
Debug log: 324 lines, 282 kept, 42 dropped.
lctl > show trace
Enabling output of type "trace"
lctl > filter portals
Disabling output from subsystem "portals"
lctl > debug_kernel /tmp/lustre_logs/log_noportals
Debug log: 324 lines, 258 kept, 66 dropped.

Adding Debugging to the Source Code

The debug infrastructure provides a number of macros which can be used in the Lustre source to aid debugging or report serious errors. All of these macros depend on having the DEBUG_SUBSYSTEM variable set at the top of the file to indicate what code module the debugging is part of:

#define DEBUG_SUBSYSTEM S_PORTALS
  • LBUG: A panic-style assertion in the kernel which causes Lustre to dump its circular log to the file /tmp/lustre-log, from where it can be retrieved after a reboot.
  • LASSERT: Validates that a given expression is true; otherwise it calls LBUG. The failed expression is printed on the console, though the values that make up the expression are not printed.
  • LASSERTF: Similar to LASSERT, but also allows a free-format message to be printed, like printf/printk.
  • CDEBUG: The basic and most commonly used debug macro; it takes just one more argument than a standard printf - the debug type. When the debug mask is set accordingly, this message is added to the debug log. Later, when a troubleshooter retrieves the log, they can also filter based on this type.
CDEBUG(D_INFO, "This is my debug message: the number is %d\n", number);
  • CERROR: Behaves similarly to CDEBUG, but prints the message both in the debug log and on the console. This is appropriate for serious errors or fatal conditions:
CERROR("Something very bad has happened, and the return code is %d.\n", rc);
  • ENTRY and EXIT: Take no arguments and simply add messages to aid in call tracing. One should attempt to cover all exit conditions when using these macros, to avoid confusion when the debug log reports that a function was entered but never exited.
  • LDLM_DEBUG and LDLM_DEBUG_NOLOCK: Used when tracing MDS and VFS operations for locking. The purpose of these macros is to build a thin trace that shows the protocol exchanges between nodes.
  • DEBUG_REQ: Prints a great deal of information about the given ptlrpc_request structure.
  • OBD_FAIL_CHECK: Allows insertion of failure points into the Lustre code; useful for generating regression tests that hit a very specific sequence of events. It works in conjunction with "sysctl -w lustre.fail_loc={fail_loc}", which sets the specific failure point that a given OBD_FAIL_CHECK tests for (see the example after this list).
  • OBD_FAIL_TIMEOUT: Like OBD_FAIL_CHECK, but if the given fail_loc is hit, it waits for the specified number of seconds. Useful for simulating hung, blocked, or busy processes, or network delays.
  • OBD_RACE: Like OBD_FAIL_CHECK, but the first process to hit it sleeps until a second process hits the same OBD_RACE, and then both continue. Useful for having multiple processes execute the same code concurrently to provoke locking races.
  • OBD_FAIL_ONCE: A flag set on a lustre.fail_loc breakpoint to cause the OBD_FAIL_CHECK condition to be hit only a single time. Otherwise, a fail_loc is permanent until cleared with "sysctl -w lustre.fail_loc=0".
  • OBD_FAIL_RAND: Makes the OBD_FAIL_CHECK fail randomly; on average, 1 in lustre.fail_val checks will fail.
  • OBD_FAIL_SKIP: Makes OBD_FAIL_CHECK succeed lustre.fail_val times, and then fail permanently, or once with OBD_FAIL_ONCE.
  • OBD_FAIL_SOME: Makes OBD_FAIL_CHECK fail lustre.fail_val times, and then succeed.
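
For example, a failure point could be armed and cleared from the shell like this (a sketch; the fail_loc value here is hypothetical and must match an OBD_FAIL_CHECK number in the code under test):

sysctl -w lustre.fail_loc=0x123   # arm the hypothetical failure point
# ... run the test that should hit the corresponding OBD_FAIL_CHECK ...
sysctl -w lustre.fail_loc=0       # clear the breakpoint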

Debugging in User Mode Linux (UML)

Developers sometimes use gdb in UML to debug Lustre. The Lustre startup puts the debug symbols for the newly loaded modules into /tmp/gdb-{hostname} on the host machine. These symbols can be loaded into gdb via the source command:

        symbol-file
        delete
        symbol-file /usr/src/lum/linux
        source /tmp/gdb-{hostname}
        b panic
        b stop

Use of strace for Troubleshooting

strace is a system call trace utility made available by the operating system. It intercepts all the system calls made by a process and records each call's name, arguments, and return value. This is a very useful tool, especially when trying to troubleshoot a failed system call.

strace can be invoked for a system call as shown below:

$strace <system-call> <args>

A traced command may fork child processes; in such cases, use the -f option of strace to trace the child processes as well:

$strace -f <system-call> <args>

We can redirect the strace output to a file that may be examined at a later time:

$strace -o <filename> <system-call> <args>

We can use the -ff option along with -o to cause the trace output to be saved in filename.pid, where pid is the process ID of the process being traced. Use "-ttt" to timestamp all lines in the strace output so they can be correlated with operations in the Lustre kernel debug log.
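
Combining these options, a typical invocation might look like the following (a sketch; <system-call> and <args> are placeholders as above):

$strace -f -ff -ttt -o /tmp/trace <system-call> <args>

This produces one timestamped trace file per traced process, named /tmp/trace.<pid>.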

If the debugging is done in User Mode Linux, it is a good idea to save the traces on the host machine by using the command shown below; here we assume that the hostfs is mounted on /r:

$strace -o /r/tmp/vi.strace <system-call> <args>

Looking at Disk Content

In Lustre, the information about the objects that make up a file is stored in the extended attributes of the file's inode. This includes a list of all the object IDs and their locations (that is, which OST device stores them). Lustre provides a tool called lfs that can be used to obtain this information for a given file name via the getstripe subcommand. A corresponding lfs setstripe command can be used to specify the striping attributes of a new file or directory, as sketched after the example below.

The lfs getstripe utility is written in C; it takes a Lustre filename as input and lists all the objects that form a part of the file. To obtain this information for the file /mnt/lustre/frog in the Lustre filesystem, we would run:

  $lfs getstripe /mnt/lustre/frog
   OBDs:
      0 : OSC_localhost_UUID
      1 : OSC_localhost_2_UUID
      2 : OSC_localhost_3_UUID
   obdidx        objid
     0            17
     1             4
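
A hedged sketch of the corresponding setstripe call, using the positional syntax of the lfs utilities of this era (stripe size in bytes, starting OST index, stripe count; check lfs help for the exact syntax of your version):

  $lfs setstripe /mnt/lustre/newfile 1048576 -1 2    # 1 MB stripes, default start OST, 2 stripes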

The debugfs tool is provided by the e2fsprogs package; it can be used for interactive debugging of an ext3/ldiskfs filesystem. It can be used either to just check status, or to modify information in the filesystem, perhaps to correct the filesystem state. In Lustre, all the objects that belong to a file are stored in an underlying ldiskfs filesystem on the OSTs. The filesystem uses the object IDs as the file names. So, once the object IDs are known, debugfs can be used to obtain the attributes of all the objects from the different OSTs. A sample run for the /mnt/lustre/frog file used in the example above is shown here:

   $ debugfs -c /tmp/ost1
   debugfs: cd O
   debugfs: cd 0               /* for files in group 0 */
   debugfs: cd d<objid % 32>
   debugfs: stat <objid>       /* for getattr on object */
   debugfs: quit
Suppose the object ID is 36; then follow the steps below:
   $ debugfs /tmp/ost1
   debugfs: cd O
   debugfs: cd 0
   debugfs: cd d4                /* 36 % 32 = 4 */
   debugfs: stat 36              /* getattr on object 36 */
   debugfs: dump 36 /tmp/obj.36  /* dump contents of object 36 */
   debugfs: quit

More help on debugfs can be obtained from the man pages.

Finding the Lustre UUID of an OST

To find the Lustre UUID of an obdfilter disk (in the event, perhaps, that you mix up the cables on your OST devices, or the SCSI bus numbering suddenly changes and your SCSI devices get new names), use debugfs to dump the last_rcvd file:

debugfs -R "dump last_rcvd /tmp/last_rcvd" <ost-device>
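
A minimal end-to-end sketch (the device name is hypothetical; the server UUID is stored near the beginning of last_rcvd, so strings will usually show it first):

debugfs -R "dump last_rcvd /tmp/last_rcvd" /dev/sdb1
strings /tmp/last_rcvd | head -1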

Tcpdump

Lustre provides a modified version of tcpdump that can be used to decode the complete Lustre message packet. [NOTE: It is currently out of date and unusable.] The tool has better-developed support for reading packets from clients to OSTs; the functionality to decode packets between clients and MDSs is not well developed yet.

Ptlrpc Request History

Each service always maintains a request history, which can be very useful for first-occurrence troubleshooting. The following email snippet describes the implementation.

Ptlrpc history works as follows...

  1. request_in_callback() adds the new request to the service's request history.
  2. When a request buffer becomes idle, it is added to the service's request buffer history list.
  3. Buffers are culled from the service's request buffer history if it grows above "req_buffer_history_max", and their requests are removed from the service's request history.

Request history is accessed/controlled via the following /proc files under the service directory...

  • req_buffer_history_len - number of request buffers currently in the history
  • req_buffer_history_max - maximum number of request buffers to keep
  • req_history - the request history

Note that requests in the history include "live" requests that are actually being handled. Each line in "req_history" looks like:

<seq>:<target NID>:<client ID>:<xid>:<length>:<phase> <svc specific>

  • seq: request sequence number
  • target NID: destination NID of the incoming request
  • client ID: client's PID and NID
  • xid: rq_xid
  • length: size of the request message
  • phase:
    • New (waiting to be handled or couldn't be unpacked)
    • Interpret (unpacked and being handled)
    • Complete (handled)
  • svc specific:
    • Service-specific request printout. Currently the only service that does this is the OST, which prints the opcode if the message has been unpacked successfully.
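
To inspect the history for a given service, read its req_history file under /proc; the exact path below is an assumption, since the /proc layout varies by Lustre version and service:

cat /proc/fs/lustre/ost/OSS/ost/req_history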

LWT Tracing

Eric Barton has also written a very lightweight tracing facility called LWT. It prints fixed-size records into a buffer and is much faster than LDEBUG.

The records that are dumped contain:

  • Current CPU
  • Process counter
  • Pointer to file
  • Pointer to line in the file
  • 4 void * pointers

An lctl command dumps the logs to files.

This tracing facility has been very successful in debugging difficult problems.

Finding memory leaks

Memory leaks occur when memory is allocated and not freed once it is no longer required. Such problems can be found using the leak_finder.pl program. Before running this program, debugging should be turned on fully to make sure all malloc and free entries are collected. This can be done as follows:

sysctl -w lnet.debug=+malloc

Dump the log into a user-specified log file using lctl, as shown in the previous section. Then run the leak finder on the newly created log dump:

perl leak_finder.pl <ascii-logname>

This provides output as shown below:

malloced   8bytes   at a3116744 called pathcopy
(lprocfs_status.c:lprocfs_add_vars:80)
freed      8bytes   at a3116744 called pathcopy 
(lprocfs_status.c:lprocfs_add_vars:80)

The tool will also give the following output to show the leaks found:

Leak:32bytes allocated at a23a8fc  (service.c:ptlrpc_init_svc:144,debug file line 241)