Tutorial - Basic Usage
Go to /testing/plans/commands and look at the files there. You will see a set of tools for performing different kinds of testing actions. These are like lego blocks that can be assembled to conduct different types of testing. We'll look at an assembly for doing nfsv4 testing in a bit, but first look through these scripts.
You'll see they're all fairly simple things - do one thing and do it well. Most of them are just wrappers around other things. For example, the runtest_ scripts wrap tests. They do things like set up environment variables, mount or unmount file systems, create directories, and so forth.
You'll also see scripts that do more fundamental things, like rebooting the system, building and installing kernels, starting up services, and so forth. Some of the tools you'll write will plug in alongside these. Think of it like you're inventing some new kinds of lego blocks that you can use for creating interesting tests later. You should decide what kinds of commands you'd like to have, but here are some ideas:
* Exhaust all memory on the SUT [after a given delay]
* Exhaust all space on a given partition [after a given delay]
* Turn off a network interface [after a given delay]
* Turn on a network interface [after a given delay]
* Delay all packets going out of a network interface
  - By random amount (default)
  - By fixed amount
  - With a normal distribution correlation
  - Other ideas (correlated to actual Internet data...?)
* Introduce packet loss on a network interface
  - Random
  - With correlation value
* Introduce packet duplication on a given network interface
* Introduce packet corruption on a given network interface
* Introduce packet re-ordering on a given network interface
* Introduce rate control on a given network interface
* Change queuing discipline for a given network interface
  - FIFO
  - GRED
  - CBQ
  - etc.
* Restore network interface to normal [after a given delay]
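Many of the network items above map directly onto Linux traffic control (tc) and its netem queuing discipline. To give a rough feel for the underlying commands - these are illustrative tc invocations, not existing scripts, and eth1 is just an example interface:

$ tc qdisc add dev eth1 root netem delay 100ms
$ tc qdisc change dev eth1 root netem delay 100ms 20ms 25%
$ tc qdisc change dev eth1 root netem loss 1.5% 25%
$ tc qdisc del dev eth1 root

The first adds a fixed 100ms delay to everything leaving eth1; the second changes that to 100ms plus or minus 20ms with 25% correlation; the third switches to 1.5% packet loss with 25% correlation; the last removes the discipline and restores the interface to normal.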
You'll want to allow a delay to be specified, via an environment variable, for when each command kicks in. By default, all the commands should take effect immediately, but if the corresponding environment variable is set, they should sleep for that amount of time first. E.g., $DISABLE_NETWORK_DELAY, $RESTORE_NETWORK_DELAY, etc.
In order to allow other things to progress while your script is delayed, you'll need to background both the sleep and the actual effect, grouped in a subshell. I.e., something like,
`( sleep $DISABLE_NETWORK_DELAY && /etc/init.d/net.eth1 stop ) &`
I haven't actually tried this, so you may have to experiment a bit to get something that works properly.
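For instance, a disable_network command might boil down to something like the following - an untested sketch, with the script name and interface made up:

$ cat disable_network
#!/bin/sh
# Take eth1 down, optionally after a delay (in seconds) given by
# $DISABLE_NETWORK_DELAY. Both the sleep and the effect run in a
# backgrounded subshell so the harness can move on to other commands.
(
    [ -n "$DISABLE_NETWORK_DELAY" ] && sleep "$DISABLE_NETWORK_DELAY"
    /etc/init.d/net.eth1 stop
) &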
Okay, next let's look at how these commands are put to use, starting with how some of the existing ones work.
Go to /testing/plans and look at the files there. These are "test plans"; they are essentially high level scripts that help to indicate a) what kinds of packages to run this test on, b) how to set up the SUT(s), and c) what instruments and tests to set up.
Start by looking at the cairo script; it's very simple:
$ cat cairo
wanted=cairo
tests=nfs02:build_install
It means, "When something new appears in /testing/packages/cairo, then have SUT nfs02 do ./autogen.sh && ./configure && make && make check && make install". The output from all this is captured in a log file in /testing/runs/$run_id.
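That suggests build_install is a thin wrapper around the standard autotools dance. As a sketch of what it might reduce to - the real script is in /testing/plans/commands, and $package_dir here is a made-up stand-in for wherever the harness unpacks the package:

$ cat build_install
#!/bin/sh
# Sketch: configure, build, test, and install the package under test,
# stopping at the first failure.
set -e
cd "$package_dir"
./autogen.sh
./configure
make
make check
make install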
Here is a slightly more sophisticated one:
$ cat hotplug-cpu
wanted=linux
tests=nfs02:create_patched_kernel,build_kernel,boot_kernel,lhcs_regression
This one waits for new things in /testing/packages/linux (this means, run on all new kernel releases, plus all -mm, -rc, and -git patches - lots of testing!) then it creates a patched kernel, builds and installs it, boots to it, and then runs a test suite called lhcs_regression.
Now let's design a more sophisticated one, for doing nfs testing. We want to run against the nfsv4 patches, which are in /testing/packages/nfsv4. We'll want to create a patched kernel, and run the tests on the test machine checked out to you:
wanted=nfsv4
tests=nfs05:create_patched_kernel,build_kernel,boot_kernel,runtest_newpynfs
Next, you'll want to add your machines back to the testing pool (make sure to back up anything you have on them first!):
$ sut checkin nfs04
$ sut checkin nfs05
Now you should be able to queue tests against them:
$ queue_package nfsv4/linux-2.6.17-rc2-CITI_NFS4_ALL-1.diff
Check that it's running with this command:
$ testrun status
You should see two tests marked running - one for the nfsv4 test plan, and one for your new one. Look for the one that mentions nfs05 and note its test ID. Now run this (replace $ID with the ID you saw):
$ testrun info $ID
You should see a listing of your commands and their state.
Okay, now cd into /testing/runs/$ID/ and look around. You can do this from nfs01 if you like (that way, if your SUT reboots, your ssh session won't be interrupted).
The commands being run are in the FINISHED, INCOMING, and RUNNING directories. The testrun info script above does little more than run a find on these directories and format the output nicely. You can also cd into these directories and see what's going on; for example, if you move some scripts from FINISHED back to INCOMING while the test is still running, it'll go back and redo those commands.
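In other words, from inside the run directory you can approximate testrun info with something like:

$ find FINISHED INCOMING RUNNING -type f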
You'll also see a 'pending', 'running' or 'finished' file in the run directory. This indicates whether the test run is running or not. When you run 'testrun cancel $ID', it does nothing more than create a 'finished' file here, and delete any 'running' or 'pending' file.
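So a cancel amounts to roughly this state-file shuffle:

$ cd /testing/runs/$ID
$ rm -f running pending
$ touch finished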
run_profile.txt is an interesting file; this stores all the metadata about the test run, such as the name of the package being tested, paths of things relevant to the test, and so on.
You will also see an nfs05.log file here. This is probably the most important file for you, as it captures the output as the machine processes commands. If anything goes wrong, chances are the error messages will show up here. I usually like to do a 'tail -f' of this log file while the test runs, to keep track of how it's doing. I like doing this from nfs01 so I can watch while the machine goes through reboots. ;-)
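For example, from nfs01:

$ tail -f /testing/runs/$ID/nfs05.log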
Finally, there is a test_output/ directory, created by runtest_*, if your testrun gets that far. In this directory will be the raw output from your test run. Now, you'll probably find that, sadly, this test plan doesn't work! We're trying to run a client/server test, but we didn't set up a server!
Let's fix that. Modify your test plan to look like this:
wanted=nfsv4
tests=nfs04:create_patched_kernel,build_kernel,boot_kernel,start_nfsv4_services,signal_ready,wait_until_finished,finish
tests=nfs05:create_patched_kernel,build_kernel,boot_kernel,wait_for_ready,runtest_newpynfs,finish
export server=nfs04
export server_portname=nfs04-2
Okay, starting to get more complex! Here's what's going on. We've specified that we're running stuff on two machines. At first, we do the same thing on both - compile the kernel and boot it. But then we have one machine start up nfs services while the other waits. Once the nfs services are up and ready, that machine signals that it's ready to go. This triggers the other to start running the newpynfs test. Once it's done, it flags that it's finished, and both machines return to the testing pool. You can read these scripts in /testing/plans/commands to see specifically what they do - they're all pretty simple things.
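For instance, the ready handshake could plausibly be as small as a flag file in the shared run directory. This is a guess at their shape, not a transcription of the real scripts - read the actual ones to be sure:

$ cat signal_ready
#!/bin/sh
# Sketch: announce that this side is ready (the flag-file path is made up).
touch /testing/runs/$run_id/ready

$ cat wait_for_ready
#!/bin/sh
# Sketch: poll until the other side has signaled readiness.
until [ -e /testing/runs/$run_id/ready ]; do
    sleep 5
done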
You'll also notice the two variables at the end marked 'export'. This is how you pass custom parameters from the test plan to the run_profile.txt file. In this case we're specifying that the server is the machine nfs04, and that we want it to use its nfs04-2 interface for the nfsv4 services.
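Assuming run_profile.txt stores these as shell-style name=value lines (check the real file to confirm), a command on the client side could pick them up with something like:

#!/bin/sh
# Sketch: read the plan's exports back out of the run profile and use them.
. /testing/runs/$run_id/run_profile.txt
mount -t nfs4 "$server_portname":/ /mnt/nfs4    # hypothetical use of the exports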
Now for the fun part - let's hook up some of your new tools into this, so you can do some network robustness simulation. I will just make up names for these new scripts and the environment variables; don't feel you must call them by these names, just adapt this example to suit:
wanted=nfsv4
tests=nfs04:create_patched_kernel,build_kernel,boot_kernel,start_nfsv4_services,network_pktdelay,network_off,network_on,network_pktloss,signal_ready,wait_until_finished,network_restore,finish
tests=nfs05:create_patched_kernel,build_kernel,boot_kernel,wait_for_ready,runtest_newpynfs,finish
export server=nfs04
export server_portname=nfs04-2
export NETWORK_PKTDELAY_DELAY=100 sec
export NETWORK_PKTDELAY_AMOUNT=random
export NETWORK_OFF_DELAY=130 sec
export NETWORK_ON_DELAY=140 sec
export NETWORK_PKTLOSS_DELAY=180 sec
export NETWORK_PKTLOSS_AMOUNT=1.5%
So the idea here is to run newpynfs, but introduce some network issues. Newpynfs is nice because it runs quickly and exercises a lot of nfsv4 functionality. It probably isn't the best for your real testing, but while you're developing your commands it's nice since it doesn't take too long to complete. Anyway, in this example test plan we let pynfs run normally for 100 sec, and then introduce a random packet delay. We let that go for half a minute, then interrupt the network entirely for 10 seconds. We bring it back, let it run for 40 sec, and then add a 1.5% packet loss for the remainder of the test. Finally, we restore the network at the end of the test.
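Tying the pieces together, one of these commands might look roughly like the following - an untested sketch combining the delay convention with netem. Note that the plan's values carry a ' sec' suffix, so either settle on bare numbers in your plans or strip the unit, as done here:

$ cat network_pktloss
#!/bin/sh
# Sketch: after $NETWORK_PKTLOSS_DELAY, introduce $NETWORK_PKTLOSS_AMOUNT
# packet loss on eth1. Assumes a netem qdisc is already installed (e.g.
# by network_pktdelay); use 'add' instead of 'change' otherwise.
(
    [ -n "$NETWORK_PKTLOSS_DELAY" ] && sleep "${NETWORK_PKTLOSS_DELAY%% *}"
    tc qdisc change dev eth1 root netem loss "$NETWORK_PKTLOSS_AMOUNT"
) &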
Once you have the basic process down and can script up test scenarios like this, you should then brainstorm several different test plans to test NFSv4 under different kinds of loads and network issues. Run different tests than newpynfs - such as iozone, fsstress, etc. Or perhaps set up some other kinds of workloads to run in the background, such as tarring/untarring files on a mount point, or performing ACL or kerberos operations (or both!). Then experiment with making your network error conditions show up in different ways; maybe even set your delays to random times, so that each time the test is run the network issues show up at a different point. Or schedule a bunch of problems to happen all at once, or at particularly sensitive times (like right when a mount operation is being issued).
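As a tiny example of the randomized-delay idea, a command could pick its own random kick-in time when no delay is set - bash-specific, because of $RANDOM:

#!/bin/bash
# Sketch: default to a random delay of up to 5 minutes if none was given.
: "${NETWORK_OFF_DELAY:=$(( RANDOM % 300 ))}"
sleep "$NETWORK_OFF_DELAY"
/etc/init.d/net.eth1 stop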