Testing Lustre Code

(Updated: Dec 2009)

We recommend a "test early, test often" approach to testing.

  • If you are developing a new feature for Lustre™, designing tests to exercise the new feature early in the development process will allow you to test your code as you develop it.
  • If you are fixing a bug in Lustre, creating a regression test up front will ensure that you can reproduce the reported problem and then verify that it has been fixed. It will also save you the effort of testing the fix manually and then creating a separate regression test later to submit with your bug fix.

We provide several tools to help with testing Lustre code:

  • Acceptance test suite. The Lustre testing framework, from which a suite of acceptance tests called acceptance-small can be run, is described in the next section, Using the Lustre Testing Framework.
  • The simul parallel file system test tool from LLNL exercises file system operations simultaneously from many nodes and processes. For more information, see Simul Parallel File System Test Tool.
  • The Load Generator (loadgen) test program simulates large numbers of Lustre clients connecting and writing to an OST. For information about using loadgen, see Section 36.19.3: Testing / Debugging Utilities (http://wiki.lustre.org/manual/LustreManual20_HTML/SystemConfigurationUtilities_HTML.html#50438219_pgfId-1294862) in the Lustre Operations Manual (http://wiki.lustre.org/manual/LustreManual20_HTML/index.html).
  • The POSIX compliance test suite for testing the Lustre file system is described in POSIX Compliance Testing.

To find out more about testing for upcoming Lustre releases, see Lustre Test Plans.

Using the Lustre Testing Framework

Before you submit code, it must pass the acceptance-small test suite. We recommend running the test suite often so that you find out as soon as possible whether your code changes have introduced a regression.

The acceptance-small test suite is run using the script acceptance-small.sh, which is located in the lustre/tests directory of a compiled Lustre tree. For more details, see Acceptance Small (acc-sm) Testing on Lustre.

Note: When submitting patches, set the acc-sm passed flag in Bugzilla on the attachment for each individual branch that was tested, to indicate that the suite passed.

Creating a Test Case

The first step in fixing a bug is to create or find a test case that reproduces the bug. Then fix the bug, and finally verify that the fixed code passes the test.

Often, a defect is found in Lustre because no test script in the Lustre acceptance test suite exercises the failing code. Before starting work on a bug, first check whether an existing test script reproduces it. If not, you will need to create a new test and add it to the test suite as a scripted sub-test of one of these scripts (a sketch of a new sub-test follows the list):

sanity.sh Tests to verify operation under normal operating conditions
sanityN.sh Tests to verify operations from two clients under normal operating conditions
liblustre/tests/sanity Runs a test linked to a liblustre client library
recovery-small.sh Tests to verify RPC replay after communications failure (message loss)
replay-single.sh Tests to verify recovery after MDS failure
replay-dual.sh Tests to verify recovery from two clients after server failure
replay-ost-single.sh Tests to verify recovery after OST failure
lfscktest.sh Tests e2fsck and lfsck to detect and fix filesystem corruption
insanity.sh Tests multiple concurrent failure conditions
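
A new sub-test is added to the chosen script as a shell function registered with run_test. Here is a minimal sketch, assuming the conventions of lustre/tests/test-framework.sh ($DIR, $tfile, error(), and run_test() are provided by the framework; the test number 103 and the file operations are hypothetical placeholders):

test_103() {
    touch $DIR/$tfile || error "creating $tfile failed"
    # check whatever behavior the test targets, calling error() on failure
    rm $DIR/$tfile || error "removing $tfile failed"
}
run_test 103 "create and remove a file"

Once registered, the new sub-test can be run on its own with ONLY=103 sh sanity.sh (see Test Framework Options below).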

Bypassing Failures

If one or more tests in acceptance-small are regularly failing due to an issue not related to the bug you are fixing, you may want to bypass these tests so that you can test your bug fix. Complete these steps:

1. Check to see if a bug has been logged for the failure and file a new bug if one has not yet been opened.

2. Set environment variables to skip these specific tests until you or someone else fixes them. For example, to skip sanity.sh subtest 36g and 65, replay-single.sh subtest 42, and all of insanity.sh, set in your environment:

export SANITY_EXCEPT="36g 65"
export REPLAY_SINGLE_EXCEPT=42
export INSANITY=no

You can also skip these tests from the command line. For example, when running acceptance-small, enter:
SANITY_EXCEPT="36g 65" REPLAY_SINGLE_EXCEPT=42 INSANITY=no ./acceptance-small.sh

The test framework is very flexible and provides an easy "hands-off" way of running tests while you are doing other things like coding. By running the entire test suite regularly, you will soon know whether your code has introduced a new bug or revived an old one.

Note: Questions or problems with the test framework should be emailed to the lustre-discuss mailing list so that all Lustre users can benefit from the solution.

Test Framework Options

The examples below show how to run a full test or sub-tests from the acceptance-small suite.

  • Run all tests including the standard tests (sanity*, liblustre) with the default (local.sh) setup.
$ cd lustre/tests
$ sh acceptance-small.sh
  • Run only the recovery-small.sh, replay-single.sh, and conf-sanity.sh tests.
$ ACC_SM_ONLY="recovery-small replay-single conf-sanity" sh acceptance-small.sh
  • Run acceptance-small with a different configuration.
$ NAME="myth" sh acceptance-small.sh
  • Run only tests 1, 3, 4, 6, 9 in sanity.sh with a custom configuration example1.sh.
$ ONLY="1 3 4 6 9" NAME=example1 sh sanity.sh
  • Skip tests 1 ... 30 and run remaining tests in sanity.sh.
$ EXCEPT="`seq 1 30`" sh sanity.sh
  • Clean up after a test failure when using the example1.sh configuration (normally the system is left mounted for debugging after a failure).
$ NAME=example1 sh llmountcleanup.sh
  • Clean up replay-single.sh after a test failure (normally the system is left mounted for debugging after a failure).
$ ONLY=cleanup sh replay-single.sh

Note: The acceptance-small suite includes two configuration files: lustre/tests/cfg/local.sh is used as the default configuration file and lustre/tests/cfg/ncli.sh is used for environments with multiple clients.
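
If neither default configuration matches your setup, you can write your own. As a hedged sketch of what a custom configuration such as the example1.sh used above might contain (the overridden values are illustrative assumptions; cfg/local.sh defines the full set of supported variables, and cfg/ncli.sh uses this same source-and-override pattern):

# lustre/tests/cfg/example1.sh -- hypothetical custom configuration.
# Start from the defaults and override only what differs.
. $LUSTRE/tests/cfg/local.sh

OSTCOUNT=4          # test with four OSTs instead of the default
OSTSIZE=400000      # size of each OST device, in KB
MOUNT=/mnt/testfs   # client mount point

Invoking a test script with NAME=example1, as in the examples above, selects this file.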

Adding New Tests

You can easily add a test to one of the above scripts. Failures can be injected into the Lustre kernel code and monitored using:

  • OBD_FAIL_CHECK()
  • OBD_FAIL_RACE()
  • OBD_FAIL_TIMEOUT()

Or you can make runtime changes using lctl set_param fail_loc=....
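
For example, here is a minimal runtime sketch (the site number 0x503 is hypothetical; real values are defined alongside the OBD_FAIL_* macros in the Lustre source, and the 0x80000000 bit requests that the failure trigger only once):

# Arm a one-shot failure at a (hypothetical) OBD_FAIL_* site.
lctl set_param fail_loc=0x80000503
# Exercise the code path that contains the failure site.
touch /mnt/lustre/testfile
# Disarm failure injection.
lctl set_param fail_loc=0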