
News, tips, partners, and perspectives for the Oracle Linux operating system and upstream Linux kernel work

So, you are a Linux kernel programmer and you want to do some automated testing...

Guest Author

ktest: Automated Testing For Kernel Programmers

Oracle Linux kernel developer Daniel Jordan contributes this post on ktest, a tool for making kernel programmers' lives easier.

In October 2010, Steven Rostedt announced on the LKML that he was working on a script called ktest.pl to automate certain aspects of Linux kernel testing. The script is aimed at individual kernel programmers testing their patch series, and provides an alternative to the Autotest framework, which is powerful but quite involved for one person to set up.

This post will cover ktest's capabilities and requirements, and give concrete examples of how to use it in one specific environment, a single physical machine with a qemu VM run under virsh.

Why use it?

ktest can build a kernel on a host system, boot it on a target machine, and run a script on the target. It's up to you how far in the process ktest will go, so it's possible to build, or build and boot, or do all three steps.

The script can detect build failures, newly introduced warnings, runtime hangs and splats, and script failures. For instance, here's ktest flagging a new warning during a build:

CRITICAL FAILURE... [TEST 2] New warning found (not in
/home/penguin/src/ktest/warnings_file)
/home/penguin/kernel/worktrees/ktest-blog/mm/page_alloc.c:135:15:
warning: symbol 'totalcma_pages' was not declared. Should it be static?
See /home/penguin/src/ktest/log.ktest for more info.

It can test multiple kernel configurations, in case the code you're touching is sensitive to that, and multiple architectures, in case you want to automate cross-compilation.
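A cross-compiling build test can be sketched as a ktest config fragment like the one below. This is an illustration, not from the post: the arm64 target, toolchain prefix, and use of defconfig are assumptions, though the option names (TEST_TYPE, BUILD_TYPE, MAKE_CMD, BUILD_TARGET) come from ktest's sample.conf.

```
# Hypothetical build-only test cross-compiling for arm64.
TEST_START
TEST_TYPE    = build
BUILD_TYPE   = defconfig
MAKE_CMD     = make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu-
BUILD_TARGET = arch/arm64/boot/Image
```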

Where ktest is especially useful, though, is in its ability to do these things for each patch in a series, thereby freeing you from a significant amount of tedium. For your chosen configs, the series will be cleanly bisectable and won't trigger upstream build bots with easily avoided errors and warnings mid-series. (Those bots are nice for less common configs though.) Code reviewers' moods improve too because each patch will stand alone with all the necessary code.

ktest comes with numerous, battle-hardened configuration options allowing you to control, for example, compile options, kernel configuration, bootloader type, kernel install method, iterations, timeout lengths, power cycle command, and test success criteria.

Multiple tests can be specified at once, with default options optionally shared between them, allowing you to kick off a ktest session on different branches and kernel configs in one command.

In addition to the main build/boot/script testing, ktest can automate bisection in a couple of ways. First, it can do patch bisection to find either the first bad commit where something broke or the first good commit when something started working again (reverse bisection). Sometimes bugs aren't 100% reproducible, so ktest can try a given commit multiple times before moving on. You can configure it to skip commits that encounter problems unrelated to the one you're looking for but that prevent testing for it, an unfortunately common thing when bisecting.

Second, ktest can bisect between good and bad kernel configurations to find the option that's causing a problem. It starts with the good config, compares the options between the two configs, and changes half the differing options in the good config to match the bad, repeating the process until the culprit is found.
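A config bisection section might look like the following sketch; the paths are placeholders, and the option names come from ktest's sample.conf.

```
TEST_START
TEST_TYPE          = config_bisect
CONFIG_BISECT_TYPE = boot
# The known-good config to start from and the failing config to compare
# against (paths are placeholders).
CONFIG_BISECT_GOOD = /path/to/good.config
CONFIG_BISECT      = /path/to/bad.config
```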

Requirements

The idea is for this script to work on a small scale, so only two systems, host and target, are required. One or both may be virtual machines. ktest of course needs access to a Linux source tree and a directory to build the kernel(s) being tested. It always does out-of-tree builds.

ktest is run from the host system and drives the target system remotely, typically via passwordless ssh/scp if using a VM. For boot testing, the host needs to be able to powercycle the target and read its console.

Example usage with a virsh VM

Here's an example showing how to build, boot, and run a test script for each patch in a series on a virsh VM. If you only want to run build tests, you can skip the section on VM assumptions.

VM assumptions and configuration

The steps to install and configure a virsh VM are outside the scope of this post, but there are numerous resources that document this process. It's assumed that the VM is set up for passwordless ssh/scp access for the root user, uses the grub2 bootloader, and has a tool to pick the kernel to reboot into such as grub2-reboot (called grub-reboot on some Linux distributions).

The last ktest-specific bit of setup is to provide a custom, named boot entry in the GRUB menu. This is so ktest has a stable title that a tool can select regardless of where it appears among other entries. This step depends on whether your Linux distribution uses the Boot Loader Specification. Follow the "Without BLS" section if it doesn't, for instance if grub.cfg contains menu entries inline, and the "With BLS" section if it does, for instance if menu entries are defined as snippets in $BOOT/loader/entries where $BOOT is the path under which grub configuration resides.

Without BLS

ktest's sample.conf recommends adding the entry in /etc/grub.d/40_custom. You should follow the convention for an entry on your system, but to get an idea, the important parts of my 40_custom look like this:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'ktest-custom' <menuentry options> {
    <preceding menuentry commands>
    linux16 /vmlinuz-ktest <kernel arguments>
    initrd16 /initramfs-ktest.img
}

I copied an entry from the system's main grub config file into 40_custom, changed the menuentry title to 'ktest-custom', and updated the linux and initramfs paths. Later in the post, the ktest config file will set GRUB_MENU to 'ktest-custom' so ktest can pick the right kernel to boot into and set TARGET_IMAGE to /boot/vmlinuz-ktest so it installs to the path this GRUB menuentry expects.
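One step that's easy to forget: after editing 40_custom, grub.cfg has to be regenerated for the new entry to actually appear. Here's a hedged helper sketch; the command may be named grub-mkconfig and the output path varies by distribution, so both are assumptions.

```shell
# Hypothetical helper: regenerate grub.cfg after editing
# /etc/grub.d/40_custom and confirm the custom entry is present.
# Command name and config path vary by distribution.
refresh_grub() {
    cfg=$1; title=$2
    command -v grub2-mkconfig >/dev/null 2>&1 ||
        { echo "grub2-mkconfig not found" >&2; return 1; }
    grub2-mkconfig -o "$cfg" || return 1
    grep -q "menuentry '$title'" "$cfg" ||
        { echo "entry '$title' missing from $cfg" >&2; return 1; }
}

# Example: refresh_grub /boot/grub2/grub.cfg ktest-custom
```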

With BLS

On systems that use BLS, you can add a custom kernel entry with grubby:

grubby --add-kernel=/boot/vmlinuz-ktest --title="ktest-custom" \
       --initrd=/boot/initramfs-ktest.img --copy-default

The example configuration given later in the post assumes a system without BLS; on a BLS system, REBOOT_TYPE needs to be changed to grub2bls and the GRUB_FILE line removed. The GRUB_BLS_GET option may also need to be set explicitly if ktest's default (grubby --info=ALL) doesn't work.
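For reference, the GRUB-related portion of the example config would look something like this on a BLS system (a sketch based on the substitutions just described):

```
REBOOT_TYPE = grub2bls
GRUB_MENU   = ktest-custom
# GRUB_FILE is not used with grub2bls.  GRUB_BLS_GET defaults to
# "grubby --info=ALL" and can be overridden if that doesn't work.
```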

Obtaining and Running ktest

ktest.pl is readily available in mainline under the tools/testing/ktest directory. All configuration options are documented in the sample.conf file there.

I recommend running ktest.pl with the path to a configuration file as the first and only argument:

$LINUX_SRC/tools/testing/ktest/ktest.pl path/to/ktest.conf

Without arguments, it searches for ktest.conf in the current directory. If no such file exists, it prompts for the minimum required options and writes the resulting config to $PWD/ktest.conf, but the test types offered from the prompt are limited to building and booting a single patch.

Provided the config is correct, ktest takes care of the rest. Much of the work in using this tool, then, comes from writing the config, and luckily you have the next section to get you started.
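Since a run can take a long time, it can be worth failing fast on the two paths everything depends on. A minimal wrapper sketch, where the source-tree location in the example is an assumption:

```shell
# Hypothetical wrapper around ktest.pl: check that the config file and
# the ktest script exist before starting a (potentially long) run.
run_ktest() {
    src=$1; conf=$2
    [ -f "$conf" ] || { echo "config not found: $conf" >&2; return 1; }
    [ -f "$src/tools/testing/ktest/ktest.pl" ] ||
        { echo "ktest.pl not found under $src" >&2; return 1; }
    "$src/tools/testing/ktest/ktest.pl" "$conf"
}

# Example: run_ktest "$HOME/kernel/worktrees/master" path/to/ktest.conf
```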

Configuration

ktest comes with example configs in tools/testing/ktest/examples. These nicely cover a few different use cases and demonstrate how to modularize your configs and share settings between them.

This post, however, will use a slightly simpler, standalone config that should be enough to demonstrate some of ktest's main features. It builds, boots, and runs a script for each patch in a series, and there's optional checking for newly-introduced warnings. Its comments are designed to help first-time users get up and running in their environments, and it was successfully used with a ktest.pl from v5.11-rc2.

ktest.pl may have to be run as root with this config if your virsh VM was installed under the system libvirtd instance (qemu:///system), in which case virsh commands usually require root privileges.

Finally, before kicking off the tests, make sure the virsh VM is started and accessible via ssh but that the console is not in use so ktest can access it.

Here's the sample ktest.conf:

# The DEFAULTS tag marks the beginning of a set of default options that apply
# to all tests unless overridden in a specific test's options (or another
# DEFAULTS OVERRIDE section, see sample.conf for this option).  DEFAULTS is not
# required at the top of the file but is included anyway for the sake of
# learning.
DEFAULTS

# Used as both the hostname and virsh domain name for the VM.
MACHINE = virsh-vm

# Installing kernels on the target usually requires root privileges.
SSH_USER = root

# Some of these variables, such as the next three, are for convenience only.
# ktest doesn't define or require them but they're used in variables ktest does
# support to avoid repetition.
USR_HOME = /home/penguin
KTEST_DIR = ${USR_HOME}/src/ktest
BRANCH = master

# Tests a range of three commits.  Together with BRANCH above, these variables
# determine what code is tested.
START_COMMIT   = ${BRANCH}~2
END_COMMIT     = ${BRANCH}

# ktest won't autogenerate the config, so this should be set to a valid config.
CONFIG_FILE    = ${USR_HOME}/kernel/worktrees/${BRANCH}/out/x86/.config

# Path to the Linux source tree.
BUILD_DIR = ${USR_HOME}/kernel/worktrees/${BRANCH}

# Head off upstream build bots with extra checking (W=1 C=1).  By default, C=1
# invokes sparse during the build, but you can remove it if sparse isn't
# installed.
BUILD_OPTIONS  = W=1 C=1 -j8

# It sometimes happens that the kernel(s) being tested are too old for
# the system's default compilers.  In this case, using older tools can
# sometimes help.
#MAKE_CMD = CC=gcc-4.9 HOSTCC=gcc-4.9 make

OUTPUT_DIR = ${KTEST_DIR}/build-output
BUILD_TARGET = arch/x86/boot/bzImage
TARGET_IMAGE = /boot/vmlinuz-ktest
LOCALVERSION = -ktest

POWER_CYCLE = virsh destroy ${MACHINE}; sleep 5; virsh start ${MACHINE}
CONSOLE = virsh console ${MACHINE}

# virsh's console can't be killed with ktest's default signal, SIGHUP.
CLOSE_CONSOLE_SIGNAL = KILL

LOG_FILE = ${KTEST_DIR}/log.ktest

# By default, ktest does not clear the log when running new tests.  Uncomment
# to get this behavior.
#CLEAR_LOG = 1

# GRUB-related options.  If your system uses the Boot Loader Specification
# (BLS), change REBOOT_TYPE to grub2bls and remove the GRUB_FILE line.
REBOOT_TYPE = grub2
GRUB_FILE = /boot/grub/grub.cfg

# Some Linux distributions name the grub2-reboot command grub-reboot.
GRUB_REBOOT = grub2-reboot
GRUB_MENU = ktest-custom

# ktest checks that the VM is reachable by ssh before attempting to reboot
# and gives up after this many seconds.  Shorten to 5 from the default of 25.
CONNECT_TIMEOUT = 5

# Uncomment to check each patch in the series for newly-introduced warnings.
#
# WARNINGS_FILE holds a path to a file that ktest will use to store a list of
# warnings from the patch before the first commit in the series being tested,
# to form a baseline.  This file is populated in the make_warnings_file test
# below.
#
# ktest does an exact comparison between the preexisting warnings lines from the
# make_warnings_file build with those from the series builds.  If the series
# touches files that had preexisting warnings such that the line numbers from
# the messages are shifted, the series builds will fail with false positive
# new warnings.
#
# The test is disabled by default because of this.
#WARNINGS_FILE  = ${KTEST_DIR}/warnings_file

# This is the end of the default options, test-specific options follow.


# The first test creates a file containing baseline warning messages that can
# be used in the second test to detect newly-introduced warnings in the series.
# It runs only if the definition of WARNINGS_FILE above is uncommented.
TEST_START IF DEFINED WARNINGS_FILE

TEST_TYPE = make_warnings_file
BUILD_TYPE = useconfig:${CONFIG_FILE}
# Use the commit before START_COMMIT as a baseline to record existing warnings.
CHECKOUT = ${START_COMMIT}~1


# The second and final test does the main work of building, booting, and
# running a script at each commit in the series under test.
TEST_START

# A test that runs on each commit is called "patchcheck" in ktest.
TEST_TYPE = patchcheck

POST_INSTALL = ssh ${SSH_USER}@${MACHINE} /sbin/dracut -f /boot/initramfs-ktest.img ktest

# Not required.  It gives ktest a string to look for in the console output
# so it doesn't have to sleep for an arbitrary amount of time.
REBOOT_SUCCESS_LINE = login:

# Can set this to 'build' or 'boot' instead to only advance that far in the
# process.
PATCHCHECK_TYPE = test

# Can replace 'echo ktest success' with a path to a script on the target
# system.  ktest will flag a nonzero exit code from this command.
TEST = ssh ${SSH_USER}@${MACHINE} 'echo ktest success'

# Improve test times by doing incremental builds between commits.
BUILD_NOCLEAN = 1

MIN_CONFIG       = ${CONFIG_FILE}

# CHECKOUT ensures you're on the right branch before the test starts,
# especially if PATCHCHECK_START and PATCHCHECK_END use raw commit hashes
# that don't include branch names, but it's not required.
CHECKOUT         = ${START_COMMIT}

# Define the start and end of the series to test.
PATCHCHECK_START = ${START_COMMIT}
PATCHCHECK_END   = ${END_COMMIT}

ktest In Action

To give you an idea of what to expect, here's some output from a run using the above configuration. Output is written to both the terminal and the log file. ktest begins by announcing the variables it's read and showing the commits it's going to test from oldest to newest (the branch ktest-blog ends with f601c725a6ac):

STARTING AUTOMATED TESTS

DEFAULT OPTIONS:
...
BRANCH = ktest-blog
BUILD_DIR = ${USR_HOME}/kernel/worktrees/${BRANCH}
BUILD_NOCLEAN = 0
BUILD_OPTIONS = W=1 C=1 -j8
...

RUNNING TEST 1 of 1 with option patchcheck test
...
Going to test the following commits:
d69e037bcc4a padata: remove effective cpumasks from the instance
3f257191d31d padata: fold padata_alloc_possible() into padata_alloc()
f601c725a6ac padata: remove padata_parallel_queue

The log is quite verbose (25K+ lines from this run alone), listing many of the commands that ktest runs, which greatly eases getting it running smoothly in your environment (and I speak from experience). Here the kernel is configured and built at the first commit in the series:

Processing commit "d69e037bcc4a7e31fdd40ae416aa1bd768dd7d99 padata: remove effective cpumasks from the instance"

git checkout d69e037bcc4a7e31fdd40ae416aa1bd768dd7d99 ...
HEAD is now at d69e037bcc4a padata: remove effective cpumasks from the instance
[0 seconds] SUCCESS
cp /home/penguin/kernel/worktrees/ktest-blog/out/x86/.config /home/penguin/src/ktest/build-output/.config ... [0 seconds] SUCCESS
touch /home/penguin/src/ktest/build-output/.config ... [0 seconds] SUCCESS
Loading force configs from /home/penguin/kernel/worktrees/ktest-blog/out/x86/.config
Applying minimum configurations into /home/penguin/src/ktest/build-output/.config.new
mv /home/penguin/src/ktest/build-output/.config.new /home/penguin/src/ktest/build-output/.config ... [0 seconds] SUCCESS
make O=/home/penguin/src/ktest/build-output olddefconfig ... make[1]: Entering directory '/home/penguin/src/ktest/build-output'
  GEN     Makefile
  HOSTCC  scripts/kconfig/conf.o
  HOSTCC  scripts/kconfig/confdata.o
  HOSTCC  scripts/kconfig/expr.o
  ... thousands of lines of build and install output ...
  ...

Console output from the VM is shown as well, covering boot and shutdown for the commits being tested as well as the install process for the next kernel. ktest runs the test command (echo ktest success) provided in the config file after the VM is finished booting and the monitor settles down, then the VM is rebooted to prepare for the next patch in the series.

virsh-vm login: Successful boot found: break after 1 second
** Wait for monitor to settle down **
run test ssh root@virsh-vm 'echo ktest success'
ssh root@virsh-vm 'echo ktest success' ... ktest success
[0 seconds] SUCCESS
kill child process 248797
wait for child process 248797 to exit
Reboot and sleep 60 seconds
ssh root@virsh-vm echo check machine status ... timeout = 25
check machine status timeout = 25
[0 seconds] SUCCESS
ssh root@virsh-vm sync ... timeout = 10
[0 seconds] SUCCESS
** Wait for monitor to settle down **
ssh root@virsh-vm reboot ... Connection to virsh-vm closed by remote host.^M
[5 seconds] SUCCESS

The first commit's test finishes with some timing data:

Build time: 6 minutes 54 seconds
Install time: 17 seconds
Reboot time: 22 seconds

Each commit goes through the same process before the final outcome is reported:

*******************************************
*******************************************
KTEST RESULT: TEST 1 SUCCESS!!!!   **
*******************************************
*******************************************

Of course, if there are problems along the way (any of the aforementioned compile failures, splats, hangs, etc), ktest will stop and complain by default.

And that, kernel testers, is a short introduction to what ktest can do for you. Maybe you can find some ways to put it to work!
