15.2 Simple Tests using ‘parallel-tests’
The option ‘parallel-tests’ (see section Changing Automake's Behavior) enables a test suite driver that is mostly compatible with the simple test driver described in the previous section, but provides a few more features and slightly different semantics. It features concurrent execution of tests with make -j, allows specifying inter-test dependencies, lazy reruns of tests that have not completed in a prior run, summary and verbose output in ‘RST’ (reStructuredText) and ‘HTML’ format, and hard errors for exceptional failures. As with the simple test driver, TESTS_ENVIRONMENT, AM_COLOR_TESTS, XFAIL_TESTS, and the check_* variables are honored, and the environment variable srcdir is set during test execution.
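As a reminder, the option can be enabled either package-wide in ‘configure.ac’ or for a single directory in its ‘Makefile.am’; a minimal sketch:

```
# In configure.ac (enables the driver for the whole package):
AM_INIT_AUTOMAKE([parallel-tests])

# Or, in a single Makefile.am only:
AUTOMAKE_OPTIONS = parallel-tests
```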
This test driver is still experimental and may undergo changes in order to satisfy additional portability requirements.
The driver operates by defining a set of make rules to create a summary log file, TEST_SUITE_LOG, which defaults to ‘test-suite.log’ and requires a ‘.log’ suffix. This file depends upon log files created for each single test program listed in TESTS, which in turn contain all output produced by the corresponding tests.
Each log file is created when the corresponding test has completed.
The set of log files is listed in the read-only variable TEST_LOGS, and defaults to TESTS, with the executable extension if any (see section Support for executable extensions), as well as any suffix listed in TEST_EXTENSIONS, removed, and ‘.log’ appended. TEST_EXTENSIONS defaults to ‘.test’. Results are undefined if a test file name ends in several concatenated suffixes.
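The naming rule can be illustrated with plain shell; this only demonstrates the mapping, it is not the driver's actual implementation:

```shell
# Map test names to log file names: strip a registered suffix
# (here .test, the default in TEST_EXTENSIONS), then append .log.
# A name without a registered suffix simply gets .log appended.
for t in foo.test bar.test baz; do
  echo "${t%.test}.log"
done
```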
For tests that match an extension .ext listed in TEST_EXTENSIONS, you can provide a test driver using the variable ext_LOG_COMPILER (note the upper-case extension), pass options in AM_ext_LOG_FLAGS, and allow the user to pass options in ext_LOG_FLAGS. All tests with this extension will then be invoked with this driver. For all tests without a registered extension, the variables LOG_COMPILER, AM_LOG_FLAGS, and LOG_FLAGS may be used instead. For example,
    TESTS = foo.pl bar.py baz
    TEST_EXTENSIONS = .pl .py
    PL_LOG_COMPILER = $(PERL)
    AM_PL_LOG_FLAGS = -w
    PY_LOG_COMPILER = $(PYTHON)
    AM_PY_LOG_FLAGS = -v
    LOG_COMPILER = ./wrapper-script
    AM_LOG_FLAGS = -d
will invoke ‘$(PERL) -w foo.pl’, ‘$(PYTHON) -v bar.py’, and ‘./wrapper-script -d baz’ to produce ‘foo.log’, ‘bar.log’, and ‘baz.log’, respectively. The ‘TESTS_ENVIRONMENT’ variable is still expanded before the driver is invoked, but it should be reserved for the user.
As with the simple driver above, by default one status line is printed per completed test, and a short summary is printed after the suite has completed. However, the standard output and standard error of each test are redirected to a per-test log file, so that parallel execution does not produce intermingled output. The output from failed tests is collected in the ‘test-suite.log’ file. If the variable ‘VERBOSE’ is set, this file is output after the summary. With this driver, it is therefore best for tests to be verbose by default.
With make check-html, the log files may be converted from RST (reStructuredText, see http://docutils.sourceforge.net/rst.html) to HTML using ‘RST2HTML’, which defaults to rst2html or rst2html.py. The variable ‘TEST_SUITE_HTML’ contains the set of converted log files. The log and HTML files are removed upon make mostlyclean.
Even in the presence of expected failures (see XFAIL_TESTS), there may be conditions under which a test outcome needs attention. For example, with test-driven development, you may write tests for features that you have not implemented yet, and thus mark these tests as expected to fail. However, you may still be interested in exceptional conditions, for example, tests that fail due to a segmentation violation or another error that is independent of the feature awaiting implementation. Tests can exit with an exit status of 99 to signal such a hard error. Unless the variable DISABLE_HARD_ERRORS is set to a nonempty value, such tests will be counted as failed.
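The convention can be exercised from the shell; the snippet below merely demonstrates the exit status a test would use to signal a hard rather than an ordinary failure:

```shell
# An ordinary failure exits 1; a hard error (e.g. a broken test
# environment) exits 99 so the driver can report it specially.
sh -c 'exit 99'
echo "exit status: $?"
```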
By default, the test suite driver will run all tests, but there are several ways to limit the set of tests that are run:
- You can set the TESTS variable, much as with the simple test driver from the previous section. For example, you can use a command like this to run only a subset of the tests:

      env TESTS="foo.test bar.test" make -e check

- You can set the TEST_LOGS variable. By default, this variable is computed at make run time from the value of TESTS as described above. For example, you can use the following:

      set x subset*.log; shift
      env TEST_LOGS="foo.log $*" make -e check

- By default, the test driver removes all old per-test log files before it starts running tests to regenerate them. The variable RECHECK_LOGS contains the set of log files which are removed; it defaults to TEST_LOGS, which means all tests need to be rechecked. By overriding this variable, you can choose which tests need to be reconsidered. For example, you can lazily rerun only those tests which are outdated, i.e., older than their prerequisite test files, by setting this variable to the empty value:

      env RECHECK_LOGS= make -e check

- You can ensure that all tests which have failed or passed unexpectedly are rerun by running make recheck in the test directory. This convenience target sets RECHECK_LOGS appropriately before invoking the main test driver. The recheck-html target does the same as recheck but additionally converts the resulting log files to HTML format, like the check-html target.
In order to guarantee an ordering between tests even with make -jN, dependencies between the corresponding log files may be specified through the usual make dependencies. For example, the following snippet lets the test named ‘foo-execute.test’ depend upon completion of the test ‘foo-compile.test’:
    TESTS = foo-compile.test foo-execute.test
    foo-execute.log: foo-compile.log
Please note that this ordering ignores the results of required tests, thus the test ‘foo-execute.test’ is run even if the test ‘foo-compile.test’ failed or was skipped beforehand. Further, please note that specifying such dependencies currently works only for tests that end in one of the suffixes listed in TEST_EXTENSIONS. Tests without such specified dependencies may be run concurrently with parallel make -jN, so be sure they are prepared for concurrent execution.
The combination of lazy test execution and correct dependencies between tests and their sources may be exploited for efficient unit testing during development. To further speed up the edit-compile-test cycle, it may even be useful to specify compiled programs in EXTRA_PROGRAMS instead of with check_PROGRAMS, as the former allows intertwined compilation and test execution (but note that EXTRA_PROGRAMS are not cleaned automatically; see section The Uniform Naming Scheme).
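A sketch of such a setup follows; the program and file names here are hypothetical:

```
# EXTRA_PROGRAMS are not built by `make all', so the helper binary
# is compiled only when its test log actually needs regenerating.
EXTRA_PROGRAMS = unit_foo
unit_foo_SOURCES = unit-foo.c
TESTS = foo.test
# Rerun the test whenever the helper binary has been rebuilt:
foo.log: unit_foo$(EXEEXT)
```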
The variables TESTS and XFAIL_TESTS may contain conditional parts as well as configure substitutions. In the latter case, however, certain restrictions apply: substituted test names must end with a nonempty test suffix like ‘.test’, so that one of the inference rules generated by automake can apply. For literal test names, automake can generate per-target rules to avoid this limitation.
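For illustration, a Makefile.am fragment combining both forms might look as follows; the conditional name HAVE_PYTHON and the substituted variable extra_tests are hypothetical and would have to be defined in ‘configure.ac’:

```
TESTS = always.test
if HAVE_PYTHON
TESTS += python-glue.test
endif
# Substituted names must keep a registered suffix such as `.test':
TESTS += @extra_tests@
```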
Please note that it is currently not possible to use $(srcdir)/ or $(top_srcdir)/ in the TESTS variable. This technical limitation is necessary to avoid generating test logs in the source tree, and it has the unfortunate consequence that it is not possible to specify distributed tests that are themselves generated by means of explicit rules, in a way that is portable to all make implementations (see section ‘Make Target Lookup’ in The Autoconf Manual; the semantics of FreeBSD and OpenBSD make conflict with this). In case of doubt you may want to require GNU make, or work around the issue with inference rules to generate the tests.
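One such workaround is to generate each test script from a distributed source via an inference rule, so that no $(srcdir)/ prefix is needed in TESTS; a sketch, assuming the tests are written as ‘.sh’ scripts (recipe lines must begin with a tab):

```
TESTS = foo.test
EXTRA_DIST = foo.sh
CLEANFILES = $(TESTS)
SUFFIXES = .sh .test
.sh.test:
	cp $< $@
	chmod +x $@
```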