Lustre Debugging for Developers

Intro...

Adding Debugging to the Source Code

The debug infrastructure provides a number of macros which can be used in the Lustre™ source to aid debugging or report serious errors.

To use these macros, you need to set the DEBUG_SUBSYSTEM variable at the top of the file to indicate which code module the debugging is part of, as shown below:

#define DEBUG_SUBSYSTEM S_PORTALS

The available macros include:

  • LBUG - This is a panic-style assertion in the kernel which causes Lustre to dump its circular log to the file /tmp/lustre-log, from which it can be retrieved after a reboot.
  • LASSERT - Validates that a given expression is true; otherwise, calls LBUG. The failed expression is displayed on the console, though the values that make up the expression are not printed.
  • LASSERTF - Similar to LASSERT, but also allows a free-format message to be printed, like printf/printk.
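
For example, a minimal sketch (ptr and count are hypothetical variables):

LASSERT(ptr != NULL);
LASSERTF(count >= 0, "count went negative: %d\n", count);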
  • CDEBUG - The basic and most commonly used debug macro. CDEBUG takes the standard printf arguments along with one additional argument, the debug type. When the debug mask includes the corresponding debug type, the message is added to the debug log. Later, when the log is retrieved, it can also be filtered based on debug type.

For example:

CDEBUG(D_INFO, "This is my debug message: the number is %d\n", number);
  • CERROR - Behaves similarly to CDEBUG, but prints the message in the debug log and to the console. This is appropriate for serious errors or fatal conditions:

For example:

CERROR("Something very bad has happened, and the return code is %d.\n", rc);
  • ENTRY and EXIT - These macros take no arguments and simply add messages to aid in call tracing. One should attempt to cover all exit conditions when using them, to avoid confusion when the debug log reports that a function was entered but never exited.
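
A minimal sketch (my_func and do_work are hypothetical; note that each return path is paired with an EXIT):

static int my_func(int arg)
{
        int rc;

        ENTRY;
        rc = do_work(arg);
        if (rc != 0) {
                EXIT;
                return rc;
        }
        EXIT;
        return 0;
}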
  • LDLM_DEBUG and LDLM_DEBUG_NOLOCK - These macros are to be used when tracing MDS and VFS operations for locking. The purpose of these macros is to build a thin trace that shows the protocol exchanges between nodes.
  • DEBUG_REQ - Prints information about the given ptlrpc_request structure.
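
Hedged examples of both (lock, req, and rc are hypothetical variables; check the exact macro signatures against the Lustre source):

LDLM_DEBUG(lock, "completion handler called");
LDLM_DEBUG_NOLOCK("lock not found for cookie");
DEBUG_REQ(D_ERROR, req, "request failed: rc = %d", rc);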
  • OBD_FAIL_CHECK - Allows insertion of failure points into the Lustre code. This is useful for generating regression tests that hit a very specific sequence of events. It works in conjunction with sysctl -w lustre.fail_loc={fail_loc}, which sets the specific failure point that a given OBD_FAIL_CHECK will test for (see the sketch after this list).
  • OBD_FAIL_TIMEOUT - Like OBD_FAIL_CHECK, but if the given fail_loc is hit, it also waits for the specified number of seconds. Useful for simulating hung, blocked, or busy processes, or network delays.
  • OBD_RACE - Like OBD_FAIL_CHECK, except that the first process to hit it sleeps until a second process hits the same OBD_RACE; both then continue. Useful for having multiple processes execute the same code concurrently to provoke locking races.
  • OBD_FAIL_ONCE - A flag set on a lustre.fail_loc breakpoint that causes the OBD_FAIL_CHECK condition to be hit only once. Otherwise, a fail_loc is permanent until it is cleared with sysctl -w lustre.fail_loc=0.
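
A minimal sketch of a failure point (the value 0x123 and the surrounding code are invented for illustration):

if (OBD_FAIL_CHECK(0x123))
        return -EIO;    /* simulate an I/O error at this site */

The failure point is then armed from user space with:

sysctl -w lustre.fail_loc=0x123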

Ptlrpc Request History

Each service maintains a request history, which can be useful for first-occurrence troubleshooting.

Ptlrpc (Portal RPC) request history works as follows:

  1. request_in_callback() adds the new request to the service's request history.
  2. When a request buffer becomes idle, it is added to the service's request buffer history list.
  3. Buffers are culled from the service's request buffer history if it grows above req_buffer_history_max, and their requests are removed from the service's request history.


The request history is accessed and controlled using the following /proc files under the service directory:

req_buffer_history_len    Number of request buffers currently in the history
req_buffer_history_max    Maximum number of request buffers to keep
req_history               Request history
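
For example, to enlarge the history and then inspect it (the path is illustrative; the actual service directory depends on the service and node):

echo 128 > /proc/fs/lustre/ost/OSS/ost/req_buffer_history_max
cat /proc/fs/lustre/ost/OSS/ost/req_history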

Note that requests in the history include "live" requests that are actually being handled. Each line in req_history looks like:

<seq>:<target NID>:<client ID>:<xid>:<length>:<phase> <svc specific>

Where:

seq             Request sequence number
target NID      Destination NID of the incoming request
client ID       Client's PID and NID
xid             rq_xid
length          Size of the request message
phase           One of:
                  New        Waiting to be handled or couldn't be unpacked
                  Interpret  Unpacked and being handled
                  Complete   Handled
svc specific    Service-specific request printout. Currently the only service
                that does this is the OST, which prints the opcode if the
                message has been unpacked successfully.

LWT Tracing

Eric Barton has also written a very lightweight tracing facility called LWT. It logs fixed-size records into a buffer and is much faster than LDEBUG.

The records that are dumped contain:

  • Current CPU
  • Process counter
  • Pointer to file
  • Pointer to line in the file
  • 4 void * pointers
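
A rough sketch of what one record might look like, based only on the field list above (the struct and field names are assumptions, not the actual Lustre definitions):

struct lwt_record {
        int   cpu;        /* current CPU */
        void *pc;         /* process counter */
        void *file;       /* pointer to file */
        void *line;       /* pointer to line in the file */
        void *args[4];    /* 4 void * pointers */
};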

An lctl command dumps the logs to files.

This tracing facility has been very successful in debugging difficult problems.

Finding Memory Leaks

Memory leaks can occur when code allocates memory but forgets to free it once it is no longer required. Such problems can be found using the leak_finder.pl program. Before running this program, debugging should be fully enabled to ensure that all malloc and free entries are collected. This can be done as follows:

sysctl -w lnet.debug=+malloc

Dump the log into a user-specified log file using lctl, as shown in the previous section. Then run the leak finder on the newly created log dump:

perl leak_finder.pl <ascii-logname>
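
Putting the steps together (the log file name is illustrative; lctl dk dumps and converts the kernel debug log to ASCII):

sysctl -w lnet.debug=+malloc
lctl dk /tmp/lustre-debug.log
perl leak_finder.pl /tmp/lustre-debug.log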

This provides output as shown below:

malloced   8bytes   at a3116744 called pathcopy
(lprocfs_status.c:lprocfs_add_vars:80)
freed      8bytes   at a3116744 called pathcopy 
(lprocfs_status.c:lprocfs_add_vars:80)

The tool will also give the following output to show the leaks found:

Leak:32bytes allocated at a23a8fc  (service.c:ptlrpc_init_svc:144,debug file line 241)