+2011-07-07 Stefano Lattarini <stefano.lattarini@gmail.com>
+
+ parallel-tests: new recognized test result 'ERROR'
+ * lib/am/check.am ($(TEST_SUITE_LOG)): Recognize a new test result
+ `ERROR'. Use it when encountering unreadable test logs (previously
+ a simple `FAIL' was used in this situation).
+ * lib/test-driver: Set the global test result to `ERROR' when the
+ test exit status is 99. When doing colorized output, color `ERROR'
+ results in magenta.
+ * doc/automake.texi (Log files generation and test results
+ recording): Update by also listing `ERROR' among the valid
+ `:test-result:' arguments.
+ * NEWS: Update.
+ * tests/trivial-test-driver: Update.
+ * tests/parallel-tests.test: Likewise.
+ * tests/parallel-tests-harderror.test: Likewise.
+ * tests/parallel-tests-no-spurious-summary.test: Likewise.
+ * tests/test-driver-global-log.test: Likewise.
+ * tests/test-driver-recheck.test: Likewise.
+ * tests/test-driver-custom-multitest-recheck.test: Likewise.
+ * tests/test-driver-custom-multitest-recheck2.test: Likewise.
+ * tests/test-driver-custom-multitest.test: Likewise.
+ * tests/test-driver-custom-no-html.test: Likewise.
+ * tests/test-driver-end-test-results.test: Likewise.
+ * tests/color.test: Likewise. Also make the checks stricter, and
+ run the tests from a VPATH build too.
+ * tests/color2.test: Likewise, and improve syncing with color.test.
+ * tests/parallel-tests-exit-statuses.test: New test.
+ * tests/parallel-tests-console-output.test: Likewise.
+ * tests/Makefile.am (TESTS): Update.
+
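For illustration, a minimal sketch of the user-visible effect of this change
(the `hard-error.test' name is hypothetical): a test script exiting with
status 99 is now reported by the parallel-tests harness as an ERROR, shown
in magenta when output is colorized, instead of a FAIL.

  $ cat hard-error.test
  #! /bin/sh
  # Signal a hard error, e.g. a failure to set up the test scenario.
  echo "cannot create temporary directory" >&2
  exit 99
  $ make check
  ...
  ERROR: hard-error.test
  ...
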
2011-07-07 Stefano Lattarini <stefano.lattarini@gmail.com>
parallel-tests: make parsing of test results safer
* Changes to Automake-generated testsuite harnesses:
+ - Test scripts that exit with status 99 to signal a "hard error" (e.g.,
+ an unexpected or internal error, or a failure to set up the test case
+ scenario) now have their outcome reported as an 'ERROR'. Previous
+ versions of automake reported such an outcome as a 'FAIL' (the only
+ difference from normal failures being that hard errors were counted
+ as failures even when the originating test was listed in
+ XFAIL_TESTS).
+
- The default testsuite driver offered by the 'parallel-tests' option is
now implemented (partly at least) with the help of automake-provided
auxiliary scripts (e.g., `test-driver'), instead of relying entirely
@c Keep this in sync with lib/am/check-am:$(TEST_SUITE_LOG).
The only recognized test results are currently @code{PASS}, @code{XFAIL},
-@code{SKIP}, @code{FAIL} and @code{XPASS}. These results, when declared
-with @code{:test-result:}, can be optionally followed by text holding
-the name and/or a brief description of the corresponding test; the
-@option{parallel-tests} harness will ignore such extra text when
+@code{SKIP}, @code{FAIL}, @code{XPASS} and @code{ERROR}. These results,
+when declared with @code{:test-result:}, can be optionally followed by
+text holding the name and/or a brief description of the corresponding
+test; the @option{parallel-tests} harness will ignore such extra text when
generating @file{test-suite.log} and preparing the testsuite summary.
Also, @code{:test-result:} can be used with a special ``pseudo-result''
@code{END}, that will instruct the testsuite harness to stop scanning
nlinit=`echo 'nl="'; echo '"'`; eval "$$nlinit"; unset nlinit; \
list='$(TEST_LOGS)'; \
list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
- results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+ results1=`for f in $$list; do test -r $$f || echo ERROR; done`; \
results2=''; \
exec 5<&0; \
for f in $$list2; do \
skip=`echo "$$results" | grep -c '^SKIP'`; \
xfail=`echo "$$results" | grep -c '^XFAIL'`; \
xpass=`echo "$$results" | grep -c '^XPASS'`; \
+ error=`echo "$$results" | grep -c '^ERROR'`; \
+ fail=`expr $$fail + $$error`; \
failures=`expr $$fail + $$xpass`; \
all=`expr $$all - $$skip`; \
if test "$$all" -eq 1; then tests=test; All=; \
## appended.
##
## In addition to the magic "exit 77 means SKIP" feature (which was
-## imported from automake), there is a magic "exit 99 means FAIL" feature
+## imported from automake), there is a magic "exit 99 means ERROR" feature
## which is useful if you need to issue a hard error no matter whether the
## test is XFAIL or not. You can disable this feature by setting the
## variable DISABLE_HARD_ERRORS to a nonempty value.
## Readable test logs.
list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
## Each unreadable test log counts as a failed test.
- results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+ results1=`for f in $$list; do test -r $$f || echo ERROR; done`; \
## Now we're going to extract the outcome of all the testcases from the
## test logs.
results2=''; \
skip=`echo "$$results" | grep -c '^SKIP'`; \
xfail=`echo "$$results" | grep -c '^XFAIL'`; \
xpass=`echo "$$results" | grep -c '^XPASS'`; \
+ error=`echo "$$results" | grep -c '^ERROR'`; \
+## FIXME: for the moment, we count errors as failures, otherwise the code
+## that displays the testsuite summary will become too complicated.
+ fail=`expr $$fail + $$error`; \
failures=`expr $$fail + $$xpass`; \
all=`expr $$all - $$skip`; \
if test "$$all" -eq 1; then tests=test; All=; \
grn='\e[0;32m' # Green.
lgn='\e[1;32m' # Light green.
blu='\e[1;34m' # Blue.
+ mgn='\e[0;35m' # Magenta.
std='\e[m' # No color.
else
- red= grn= lgn= blu= std=
+ red= grn= lgn= blu= mgn= std=
fi
tmpfile=$logfile-t
0:yes) col=$red; res=XPASS;;
0:*) col=$grn; res=PASS ;;
77:*) col=$blu; res=SKIP ;;
- 99:*) col=$red; res=FAIL ;;
+ 99:*) col=$mgn; res=ERROR;;
*:yes) col=$lgn; res=XFAIL;;
*:*) col=$red; res=FAIL ;;
esac
parallel-tests-empty-testlogs.test \
parallel-test-driver-install.test \
parallel-tests-no-spurious-summary.test \
+parallel-tests-exit-statuses.test \
+parallel-tests-console-output.test \
test-driver-end-test-results.test \
test-driver-custom-no-extra-driver.test \
test-driver-custom.test \
parallel-tests-empty-testlogs.test \
parallel-test-driver-install.test \
parallel-tests-no-spurious-summary.test \
+parallel-tests-exit-statuses.test \
+parallel-tests-console-output.test \
test-driver-end-test-results.test \
test-driver-custom-no-extra-driver.test \
test-driver-custom.test \
nlinit=`echo 'nl="'; echo '"'`; eval "$$nlinit"; unset nlinit; \
list='$(TEST_LOGS)'; \
list2=`for f in $$list; do test ! -r $$f || echo $$f; done`; \
- results1=`for f in $$list; do test -r $$f || echo FAIL; done`; \
+ results1=`for f in $$list; do test -r $$f || echo ERROR; done`; \
results2=''; \
exec 5<&0; \
for f in $$list2; do \
skip=`echo "$$results" | grep -c '^SKIP'`; \
xfail=`echo "$$results" | grep -c '^XFAIL'`; \
xpass=`echo "$$results" | grep -c '^XPASS'`; \
+ error=`echo "$$results" | grep -c '^ERROR'`; \
+ fail=`expr $$fail + $$error`; \
failures=`expr $$fail + $$xpass`; \
all=`expr $$all - $$skip`; \
if test "$$all" -eq 1; then tests=test; All=; \
TERM=ansi
export TERM
-red='\e[0;31m'
-grn='\e[0;32m'
-lgn='\e[1;32m'
-blu='\e[1;34m'
-std='\e[m'
+esc='\e'
+# Escape `[' for grep, below.
+red="$esc\[0;31m"
+grn="$esc\[0;32m"
+lgn="$esc\[1;32m"
+blu="$esc\[1;34m"
+mgn="$esc\[0;35m"
+std="$esc\[m"
# Check that grep can parse nonprinting characters.
# BSD 'grep' works from a pipe, but not a seekable file.
cat >Makefile.am <<'END'
AUTOMAKE_OPTIONS = color-tests
TESTS = $(check_SCRIPTS)
-check_SCRIPTS = pass fail skip xpass xfail
+check_SCRIPTS = pass fail skip xpass xfail error
XFAIL_TESTS = xpass xfail
END
exit 77
END
+cat >error <<END
+#! /bin/sh
+exit 99
+END
+
cp fail xfail
cp pass xpass
-chmod +x pass fail skip xpass xfail
+chmod +x pass fail skip xpass xfail error
$ACLOCAL
$AUTOCONF
$AUTOMAKE --add-missing
-./configure
-
test_color ()
{
# Not a useless use of cat; see above comments about grep.
- cat stdout | grep ": pass" | $FGREP "$grn"
- cat stdout | grep ": fail" | $FGREP "$red"
- cat stdout | grep ": xfail" | $FGREP "$lgn"
- cat stdout | grep ": xpass" | $FGREP "$red"
- cat stdout | grep ": skip" | $FGREP "$blu"
+ cat stdout | grep "^${grn}PASS${std}: .*pass"
+ cat stdout | grep "^${red}FAIL${std}: .*fail"
+ cat stdout | grep "^${blu}SKIP${std}: .*skip"
+ cat stdout | grep "^${lgn}XFAIL${std}: .*xfail"
+ cat stdout | grep "^${red}XPASS${std}: .*xpass"
+ # The old serial testsuite driver doesn't distinguish between failures
+ # and hard errors.
+ if test x"$parallel_tests" = x"yes"; then
+ cat stdout | grep "^${mgn}ERROR${std}: .*error"
+ else
+ cat stdout | grep "^${red}FAIL${std}: .*error"
+ fi
+ :
}
test_no_color ()
{
- # Not a useless use of cat; see above comments about grep.
- cat stdout | grep ": pass" | $FGREP "$grn" && Exit 1
- cat stdout | grep ": fail" | $FGREP "$red" && Exit 1
- cat stdout | grep ": xfail" | $FGREP "$lgn" && Exit 1
- cat stdout | grep ": xpass" | $FGREP "$red" && Exit 1
- cat stdout | grep ": skip" | $FGREP "$blu" && Exit 1
- :
+ # Some make implementations (e.g., Solaris make) print the whole
+ # failing recipe on standard output when a command fails; with those
+ # we must content ourselves with laxer checks, to avoid false
+ # positives.
+ # Keep this in sync with lib/am/check.am:$(am__color_tests).
+ if $FGREP '= Xalways || test -t 1 ' stdout; then
+ # Extra verbose make, resort to laxer checks.
+ (
+ set +e # In case some grepped regex below isn't matched.
+ # Not a useless use of cat; see above comments about grep.
+ cat stdout | grep "PASS.*:"
+ cat stdout | grep "FAIL.*:"
+ cat stdout | grep "SKIP.*:"
+ cat stdout | grep "XFAIL.*:"
+ cat stdout | grep "XPASS.*:"
+ cat stdout | grep "ERROR.*:"
+ # To check that the testsuite summary is not unduly colorized.
+ cat stdout | grep '===='
+ cat stdout | grep 'test.*expected'
+ cat stdout | grep 'test.*not run'
+ ) | grep "$esc" && Exit 1
+ : For shells with broken 'set -e'
+ else
+ cat stdout | grep "$esc" && Exit 1
+ : For shells with broken 'set -e'
+ fi
}
-AM_COLOR_TESTS=always $MAKE -e check >stdout && { cat stdout; Exit 1; }
-cat stdout
-test_color
+for vpath in false :; do
+
+ if $vpath; then
+ mkdir build
+ cd build
+ srcdir=..
+ else
+ srcdir=.
+ fi
+
+ $srcdir/configure
+
+ AM_COLOR_TESTS=always $MAKE -e check >stdout && { cat stdout; Exit 1; }
+ cat stdout
+ test_color
+
+ $MAKE -e check >stdout && { cat stdout; Exit 1; }
+ cat stdout
+ test_no_color
+
+ $MAKE distclean
+ cd $srcdir
+
+done
:
TERM=ansi
export TERM
-red='\e[0;31m'
-grn='\e[0;32m'
-lgn='\e[1;32m'
-blu='\e[1;34m'
-std='\e[m'
+esc='\e'
+# Escape `[' for grep, below.
+red="$esc\[0;31m"
+grn="$esc\[0;32m"
+lgn="$esc\[1;32m"
+blu="$esc\[1;34m"
+mgn="$esc\[0;35m"
+std="$esc\[m"
# Check that grep can parse nonprinting characters.
# BSD 'grep' works from a pipe, but not a seekable file.
cat >Makefile.am <<'END'
AUTOMAKE_OPTIONS = color-tests
TESTS = $(check_SCRIPTS)
-check_SCRIPTS = pass fail skip xpass xfail
+check_SCRIPTS = pass fail skip xpass xfail error
XFAIL_TESTS = xpass xfail
END
exit 77
END
+cat >error <<END
+#! /bin/sh
+exit 99
+END
+
cp fail xfail
cp pass xpass
-chmod +x pass fail skip xpass xfail
+chmod +x pass fail skip xpass xfail error
$ACLOCAL
-$AUTOMAKE -a
$AUTOCONF
-./configure
+$AUTOMAKE --add-missing
test_color ()
{
# Not a useless use of cat; see above comments about grep.
- cat stdout | grep ": pass" | $FGREP "$grn"
- cat stdout | grep ": fail" | $FGREP "$red"
- cat stdout | grep ": xfail" | $FGREP "$lgn"
- cat stdout | grep ": xpass" | $FGREP "$red"
- cat stdout | grep ": skip" | $FGREP "$blu"
+ cat stdout | grep "^${grn}PASS${std}: .*pass"
+ cat stdout | grep "^${red}FAIL${std}: .*fail"
+ cat stdout | grep "^${blu}SKIP${std}: .*skip"
+ cat stdout | grep "^${lgn}XFAIL${std}: .*xfail"
+ cat stdout | grep "^${red}XPASS${std}: .*xpass"
+ # The old serial testsuite driver doesn't distinguish between failures
+ # and hard errors.
+ if test x"$parallel_tests" = x"yes"; then
+ cat stdout | grep "^${mgn}ERROR${std}: .*error"
+ else
+ cat stdout | grep "^${red}FAIL${std}: .*error"
+ fi
+ :
}
test_no_color ()
{
- # Not a useless use of cat; see above comments about grep.
- cat stdout | grep ": pass" | $FGREP "$grn" && Exit 1
- cat stdout | grep ": fail" | $FGREP "$red" && Exit 1
- cat stdout | grep ": xfail" | $FGREP "$lgn" && Exit 1
- cat stdout | grep ": xpass" | $FGREP "$red" && Exit 1
- cat stdout | grep ": skip" | $FGREP "$blu" && Exit 1
- :
+ # Some make implementations (e.g., Solaris make) print the whole
+ # failing recipe on standard output when a command fails; with those
+ # we must content ourselves with laxer checks, to avoid false
+ # positives.
+ # Keep this in sync with lib/am/check.am:$(am__color_tests).
+ if $FGREP '= Xalways || test -t 1 ' stdout; then
+ # Extra verbose make, resort to laxer checks.
+ (
+ set +e # In case some grepped regex below isn't matched.
+ # Not a useless use of cat; see above comments about grep.
+ cat stdout | grep "PASS.*:"
+ cat stdout | grep "FAIL.*:"
+ cat stdout | grep "SKIP.*:"
+ cat stdout | grep "XFAIL.*:"
+ cat stdout | grep "XPASS.*:"
+ cat stdout | grep "ERROR.*:"
+ # To check that the testsuite summary is not unduly colorized.
+ cat stdout | grep '===='
+ cat stdout | grep 'test.*expected'
+ cat stdout | grep 'test.*not run'
+ ) | grep "$esc" && Exit 1
+ : For shells with broken 'set -e'
+ else
+ cat stdout | grep "$esc" && Exit 1
+ : For shells with broken 'set -e'
+ fi
}
cat >expect-make <<'END'
expect eof
END
-MAKE=$MAKE expect -f expect-make >stdout \
- || { cat stdout; Exit 1; }
-cat stdout
-test_color
+for vpath in false :; do
+
+ if $vpath; then
+ mkdir build
+ cd build
+ srcdir=..
+ else
+ srcdir=.
+ fi
+
+ $srcdir/configure
+
+ MAKE=$MAKE expect -f $srcdir/expect-make >stdout \
+ || { cat stdout; Exit 1; }
+ cat stdout
+ test_color
+
+ AM_COLOR_TESTS=no MAKE=$MAKE expect -f $srcdir/expect-make >stdout \
+ || { cat stdout; Exit 1; }
+ cat stdout
+ test_no_color
+
+ $MAKE distclean
+ cd $srcdir
-AM_COLOR_TESTS=no MAKE=$MAKE expect -f expect-make >stdout \
- || { cat stdout; Exit 1; }
-cat stdout
-test_no_color
+done
:
--- /dev/null
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# parallel-tests: checks on the console output reporting testsuite
+# progress.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+cat > Makefile.am << 'END'
+XFAIL_TESTS = sub/xpass.test xfail.test error.test
+TESTS = $(XFAIL_TESTS) fail.test pass.test a/b/skip.test sub/error2.test
+pass.log: fail.log
+error.log: pass.log
+sub/xpass.log: error.log
+sub/error2.log: xfail.log
+a/b/skip.log: sub/error2.log
+END
+
+cat > exp <<'END'
+FAIL: fail.test
+PASS: pass.test
+ERROR: error.test
+XPASS: sub/xpass.test
+XFAIL: xfail.test
+ERROR: sub/error2.test
+SKIP: a/b/skip.test
+END
+
+mkdir sub a a/b
+
+cat > pass.test << 'END'
+#!/bin/sh
+exit 0
+END
+cp pass.test sub/xpass.test
+
+cat > fail.test << 'END'
+#!/bin/sh
+exit 1
+END
+
+cat > xfail.test << 'END'
+#!/bin/sh
+# The sleep should ensure the expected execution order of the tests,
+# even when make runs them in parallel.
+# FIXME: quotes below required by maintainer-check.
+sleep '10'
+exit 1
+END
+
+cat > error.test << 'END'
+#!/bin/sh
+exit 99
+END
+cp error.test sub/error2.test
+
+cat > a/b/skip.test << 'END'
+#!/bin/sh
+exit 77
+END
+
+chmod a+x pass.test fail.test xfail.test sub/xpass.test \
+ a/b/skip.test error.test sub/error2.test
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE -a
+
+for vpath in : false; do
+ if $vpath; then
+ mkdir build
+ cd build
+ srcdir=..
+ else
+ srcdir=.
+ fi
+ $srcdir/configure
+ $MAKE check >stdout && { cat stdout; Exit 1; }
+ cat stdout
+ LC_ALL=C grep '^[A-Z][A-Z]*:' stdout > got
+ cat got
+ diff $srcdir/exp got
+ cd $srcdir
+done
+
+:
--- /dev/null
+#! /bin/sh
+# Copyright (C) 2011 Free Software Foundation, Inc.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2, or (at your option)
+# any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <http://www.gnu.org/licenses/>.
+
+# Check parallel-tests features: normal and special exit statuses
+# in the test scripts.
+
+parallel_tests=yes
+. ./defs || Exit 1
+
+cat >> configure.in << 'END'
+AC_OUTPUT
+END
+
+# $failure_statuses should be defined to the list of all integers between
+# 1 and 255 (inclusive), excluding 77 and 99.
+# Use `seq' if available, as it's faster than the fallback loop.
+failure_statuses=`seq 1 255 \
+ || { i=1; while test $i -le 255; do echo $i; i=\`expr $i + 1\`; done; }`
+failure_statuses=`
+ for i in $failure_statuses; do
+ test $i -eq 77 || test $i -eq 99 || echo $i
+ done | tr "$nl" ' '`
+# For debugging.
+echo "failure_statuses: $failure_statuses"
+# Sanity check.
+test `for st in $failure_statuses; do echo $st; done | wc -l` -eq 253 \
+ || fatal_ "initializing list of exit statuses for simple failures"
+
+cat > Makefile.am <<END
+LOG_COMPILER = ./do-exit
+fail_tests = $failure_statuses
+TESTS = 0 77 99 $failure_statuses
+\$(TESTS):
+END
+
+cat > do-exit <<'END'
+#!/bin/sh
+echo "$0: $1"
+case $1 in
+ [0-9]|[0-9][0-9]|[0-9][0-9][0-9]) st=$1;;
+ */[0-9]|*/[0-9][0-9]|*/[0-9][0-9][0-9]) st=`echo x"$1" | sed 's|.*/||'`;;
+ *) st=99;;
+esac
+exit $st
+END
+chmod a+x do-exit
+
+$ACLOCAL
+$AUTOCONF
+$AUTOMAKE -a
+
+{
+ echo PASS: 0
+ echo SKIP: 77
+ echo ERROR: 99
+ for st in $failure_statuses; do
+ echo "FAIL: $st"
+ done
+} | LC_ALL=C sort > exp-fail
+
+sed 's/^FAIL:/XFAIL:/' exp-fail | LC_ALL=C sort > exp-xfail-1
+sed '/^ERROR:/d' exp-xfail-1 > exp-xfail-2
+
+sort exp-fail
+sort exp-xfail-1
+sort exp-xfail-2
+
+./configure
+
+st=1
+$MAKE check >stdout && st=0
+cat stdout
+cat test-suite.log
+test $st -gt 0 || Exit 1
+LC_ALL=C grep '^[A-Z][A-Z]*:' stdout | LC_ALL=C sort > got-fail
+diff exp-fail got-fail
+
+st=1
+XFAIL_TESTS="$failure_statuses 99" $MAKE -e check >stdout && st=0
+cat stdout
+cat test-suite.log
+test $st -gt 0 || Exit 1
+LC_ALL=C grep '^[A-Z][A-Z]*:' stdout | LC_ALL=C sort > got-xfail-1
+diff exp-xfail-1 got-xfail-1
+
+st=0
+XFAIL_TESTS="$failure_statuses" TESTS="0 77 $failure_statuses" \
+ $MAKE -e check >stdout || st=$?
+cat stdout
+cat test-suite.log
+test $st -eq 0 || Exit 1
+LC_ALL=C grep '^[A-Z][A-Z]*:' stdout | LC_ALL=C sort > got-xfail-2
+diff exp-xfail-2 got-xfail-2
+
+:
$MAKE check DISABLE_HARD_ERRORS='' && Exit 1
cat test-suite.log
-grep '^FAIL: foo\.test .*exit.*99' test-suite.log
+grep '^ERROR: foo\.test .*exit.*99' test-suite.log
cd sub
# The `-e' is wanted here.
DISABLE_HARD_ERRORS='' $MAKE -e check && Exit 1
cat test-suite.log
-grep '^FAIL: bar\.test .*exit.*99' test-suite.log
+grep '^ERROR: bar\.test .*exit.*99' test-suite.log
cd ..
# Check the distributions.
./config.status Makefile
VERBOSE=yes $MAKE check && Exit 1
grep '^FAIL' test-suite.log && Exit 1
-grep '^FAIL: bar\.test .*exit.*99' sub/test-suite.log
+grep '^ERROR: bar\.test .*exit.*99' sub/test-suite.log
echo 'DISABLE_HARD_ERRORS = zardoz' >> sub/Makefile
VERBOSE=yes $MAKE check
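For illustration, a sketch of how the hard-error feature interacts with
DISABLE_HARD_ERRORS (based on the checks above; the exact variable value is
arbitrary, any nonempty value disables the feature):

  # Default: exit status 99 is a hard error, reported as ERROR and counted
  # as a failure even if the test is listed in XFAIL_TESTS.
  make check

  # Nonempty DISABLE_HARD_ERRORS: status 99 counts as an ordinary failure,
  # so a test listed in XFAIL_TESTS becomes an expected failure.
  make check DISABLE_HARD_ERRORS=yes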
#! /bin/sh
echo :test-result:XFAIL
echo :test-result: SKIP
+echo :test-result:ERROR
exit 0
END
cat > bar.test <<'END'
#! /bin/sh
+echo :test-result: ERROR
echo :test-result:FAIL
echo :test-result: XPASS
exit 0
$MAKE check >stdout && { cat stdout; Exit 1; }
cat stdout
-# There should be two errors: bar.test is a hard error.
-test `grep -c '^FAIL' stdout` -eq 2
+# There should be one failure and one hard error.
+test `grep -c '^FAIL:' stdout` -eq 1
+test `grep -c '^ERROR:' stdout` -eq 1
test -f mylog.log
-test `grep -c '^FAIL' mylog.log` -eq 2
+cat mylog.log
+test `grep -c '^FAIL:' mylog.log` -eq 1
+test `grep -c '^ERROR:' mylog.log` -eq 1
test -f baz.log
test -f bar.log
test -f foo.log
if test -f b.ok; then
echo PASS:
else
- echo FAIL:
+ echo ERROR:
fi
: > b.run
END
do_count ()
{
- pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR
+ pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR error=ERR
eval "$@"
- $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout || : # For debugging.
+ $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP|ERROR)' stdout || : # For debugging.
test `grep -c '^PASS:' stdout` -eq $pass
test `grep -c '^FAIL:' stdout` -eq $fail
test `grep -c '^XPASS:' stdout` -eq $xpass
test `grep -c '^XFAIL:' stdout` -eq $xfail
test `grep -c '^SKIP:' stdout` -eq $skip
+ test `grep -c '^ERROR:' stdout` -eq $error
}
for vpath in : false; do
test ! -r c.log
test ! -r d.run
test ! -r d.log
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
: Run the tests for the first time.
$MAKE check >stdout && { cat stdout; Exit 1; }
test -f b.run
test -f c.run
test -f d.run
- do_count pass=3 fail=3 xpass=1 xfail=1 skip=1
+ do_count pass=3 fail=2 xpass=1 xfail=1 skip=1 error=1
: Let us make b.test pass.
echo OK > b.ok
test -f b.run
test -f c.run
test -f d.run
- do_count pass=2 fail=2 xpass=1 xfail=1 skip=1
+ do_count pass=2 fail=2 xpass=1 xfail=1 skip=1 error=0
: Let us make the first part of c.test pass.
echo OK > c.pass
test ! -r b.run
test -f c.run
test -f d.run
- do_count pass=1 fail=1 xpass=1 xfail=1 skip=1
+ do_count pass=1 fail=1 xpass=1 xfail=1 skip=1 error=0
: Let us make also the second part of c.test pass.
echo KO > c.xfail
test ! -r b.run
test -f c.run
test -f d.run
- do_count pass=1 fail=1 xpass=0 xfail=2 skip=1
+ do_count pass=1 fail=1 xpass=0 xfail=2 skip=1 error=0
: Nothing changed, so only d.test should be run.
for i in 1 2; do
test ! -r b.run
test ! -r c.run
test -f d.run
- do_count pass=0 fail=1 xpass=0 xfail=0 skip=1
+ do_count pass=0 fail=1 xpass=0 xfail=0 skip=1 error=0
done
: Let us make d.test run more testcases, and experience _more_ failures.
echo XPASS: xp
echo FAIL: f 3
echo FAIL: f 4
+ echo ERROR: e 1
echo PASS: p 2
+ echo ERROR: e 2
END
do_recheck --fail
test ! -r a.run
test ! -r b.run
test ! -r c.run
test -f d.run
- do_count pass=2 fail=4 xpass=1 xfail=0 skip=2
+ do_count pass=2 fail=4 xpass=1 xfail=0 skip=2 error=2
: Let us finally make d.test pass.
echo : > d.extra
test ! -r b.run
test ! -r c.run
test -f d.run
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=1
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=1 error=0
: All tests have been successful or skipped, nothing should be re-run.
do_recheck --pass
test ! -r b.run
test ! -r c.run
test ! -r d.run
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
cd $srcdir
cat > c.test << 'END'
#! /bin/sh
-echo XPASS: xp
+if test -f c.err; then
+ echo ERROR: xxx
+elif test -f c.ok; then
+ echo PASS: ok
+else
+ echo XPASS: xp
+fi
: > c.run
END
do_count ()
{
- pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR
+ pass=ERR fail=ERR xpass=ERR xfail=ERR skip=ERR error=ERR
eval "$@"
- $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout || : # For debugging.
+ $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP|ERROR)' stdout || : # For debugging.
test `grep -c '^PASS:' stdout` -eq $pass
test `grep -c '^FAIL:' stdout` -eq $fail
test `grep -c '^XPASS:' stdout` -eq $xpass
test `grep -c '^XFAIL:' stdout` -eq $xfail
test `grep -c '^SKIP:' stdout` -eq $skip
+ test `grep -c '^ERROR:' stdout` -eq $error
}
for vpath in : false; do
test -f a.run
test -f b.run
test -f c.run
- do_count pass=2 fail=1 xpass=1 xfail=0 skip=1
+ do_count pass=2 fail=1 xpass=1 xfail=0 skip=1 error=0
rm -f *.run
for var in TESTS TEST_LOGS; do
env "$var=" $MAKE -e recheck >stdout || { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
test ! -r a.run
test ! -r b.run
test ! -r c.run
env TESTS=a.test $MAKE -e recheck >stdout \
|| { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
test ! -r a.run
test ! -r b.run
test ! -r c.run
test ! -r a.run
test -f b.run
test ! -r c.run
- do_count pass=0 fail=0 xpass=0 xfail=1 skip=1
+ do_count pass=0 fail=0 xpass=0 xfail=1 skip=1 error=0
rm -f *.run
TEST_LOGS=b.log $MAKE -e recheck >stdout \
|| { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
test ! -r a.run
test ! -r b.run
test ! -r c.run
TESTS='a.test b.test' $MAKE -e recheck >stdout \
|| { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=0 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
test ! -r a.run
test ! -r b.run
test ! -r c.run
- # An XPASS should count a failure.
+ : No need to re-run a.test anymore, but c.test should be rerun,
+ : as it contained an XPASS. And this time, make it fail with
+ : a hard error.
+ : > c.err
env TEST_LOGS='a.log c.log' $MAKE -e recheck >stdout \
&& { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=1 xfail=0 skip=0
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=1
test ! -r a.run
test ! -r b.run
test -f c.run
- rm -f *.run
- env TESTS='c.test b.test' $MAKE -e recheck >stdout \
- && { cat stdout; Exit 1; }
+
+ rm -f *.run *.err
+
+ : c.test contained a hard error the last time, so it should be re-run.
+ : This time, make it pass.
+ : > c.ok
+ env TESTS='c.test a.test' $MAKE -e recheck >stdout \
+ || { cat stdout; Exit 1; }
cat stdout
- do_count pass=0 fail=0 xpass=1 xfail=0 skip=0
+ do_count pass=1 fail=0 xpass=0 xfail=0 skip=0 error=0
test ! -r a.run
test ! -r b.run
test -f c.run
+ rm -f *.run *.err *.ok
+
+ : Nothing should be rerun anymore, as all tests have eventually
+ : been successful.
+ $MAKE recheck >stdout || { cat stdout; Exit 1; }
+ cat stdout
+ do_count pass=0 fail=0 xpass=0 xfail=0 skip=0 error=0
+ test ! -r a.run
+ test ! -r b.run
+ test ! -r c.run
+
cd $srcdir
done
pass-fail.t \
pass4-skip.t \
pass3-skip2-xfail.t \
- pass-xpass-fail-xfail-skip.t
+ pass-xpass-fail-xfail-skip-error.t
END
expected_pass=10
expected_skip=4
expected_xfail=2
expected_xpass=1
+expected_error=1
cat > pass.t << 'END'
echo %% pass %%
exit 127
END
-cat > pass-xpass-fail-xfail-skip.t << 'END'
+cat > pass-xpass-fail-xfail-skip-error.t << 'END'
echo PASS:
echo FAIL:
echo XFAIL:
echo XPASS:
echo SKIP:
-echo %% pass-xpass-fail-xfail-skip %%
+echo ERROR:
+echo %% pass-xpass-fail-xfail-skip-error %%
END
chmod a+x *.t
cat pass-fail.log
cat pass4-skip.log
cat pass3-skip2-xfail.log
- cat pass-xpass-fail-xfail-skip.log
+ cat pass-xpass-fail-xfail-skip-error.log
# For debugging.
- $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout
+ $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP|ERROR)' stdout
test `grep -c '^PASS:' stdout` -eq $expected_pass
test `grep -c '^FAIL:' stdout` -eq $expected_fail
test `grep -c '^XPASS:' stdout` -eq $expected_xpass
test `grep -c '^XFAIL:' stdout` -eq $expected_xfail
test `grep -c '^SKIP:' stdout` -eq $expected_skip
+ test `grep -c '^ERROR:' stdout` -eq $expected_error
- grep '^PASS: pass-xpass-fail-xfail-skip.t\, testcase 1' stdout
- grep '^FAIL: pass-xpass-fail-xfail-skip\.t, testcase 2' stdout
- grep '^XFAIL: pass-xpass-fail-xfail-skip\.t, testcase 3' stdout
- grep '^XPASS: pass-xpass-fail-xfail-skip\.t, testcase 4' stdout
- grep '^SKIP: pass-xpass-fail-xfail-skip\.t, testcase 5' stdout
+ tst=pass-xpass-fail-xfail-skip-error
+ grep "^PASS: $tst\.t, testcase 1" stdout
+ grep "^FAIL: $tst\.t, testcase 2" stdout
+ grep "^XFAIL: $tst\.t, testcase 3" stdout
+ grep "^XPASS: $tst\.t, testcase 4" stdout
+ grep "^SKIP: $tst\.t, testcase 5" stdout
+ grep "^ERROR: $tst\.t, testcase 6" stdout
# Check testsuite summary printed on console.
sed -e 's/[()]/ /g' -e 's/^/ /' stdout > t
- grep ' 6 of 18 ' t
+ grep ' 7 of 19 ' t
grep ' 1 unexpected pass' t
grep ' 4 test.* not run' t
grep '%% fail %%' test-suite.log
grep '%% fail2 %%' test-suite.log
grep '%% pass-fail %%' test-suite.log
- grep '%% pass-xpass-fail-xfail-skip %%' test-suite.log
+ grep '%% pass-xpass-fail-xfail-skip-error %%' test-suite.log
test `grep -c '%% ' test-suite.log` -eq 4
TESTS='pass.t pass3-skip2-xfail.t' $MAKE -e check >stdout \
cat test-suite.log
cat stdout
# For debugging.
- $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP)' stdout
+ $EGREP '(PASS|FAIL|XPASS|XFAIL|SKIP|ERROR)' stdout
test `grep -c '^PASS:' stdout` -eq 4
test `grep -c '^SKIP:' stdout` -eq 2
test `grep -c '^XFAIL:' stdout` -eq 1
- $EGREP '^(FAIL|XPASS)' stdout && Exit 1
+ $EGREP '^(FAIL|XPASS|ERROR)' stdout && Exit 1
cd $srcdir
# Sanity check: trying to produce HTML output should fail.
$MAKE check-html >output 2>&1 && { cat output; Exit 1; }
cat output
-$EGREP 'SEVERE|ERROR' output
+$FGREP SEVERE output
:
:test-result:PASS
:test-result:END
:test-result:FAIL
+:test-result:ERROR
END
cat > c.test <<END
echo SKIP: skip.test > skip.test
echo FAIL: fail.test > fail.test
echo XPASS: xpass.test > xpass.test
+echo ERROR: error.test > error.test
echo :test-result: PASS > fake-pass.test
echo "$tab $tab$tab" > empty.test
cat > Makefile.am << 'END'
TEST_LOG_DRIVER = ./dummy-driver
-TESTS = pass.test skip.test fail.test xfail.test xpass.test \
+TESTS = pass.test skip.test fail.test xfail.test xpass.test error.test \
fake-pass.test empty.test
END
grep '^SKIP: skip\.test$' test-suite.log
grep '^FAIL: fail.test$' test-suite.log
grep '^XPASS: xpass.test$' test-suite.log
+grep '^ERROR: error.test$' test-suite.log
grep '^:test-result: PASS$' test-suite.log
grep "^$tab $tab$tab$" test-suite.log
$EGREP 'not seen' test-suite.log && Exit 1
rechecked="$rechecked BAD-$R-1 BAD-$R-2 BAD-$R-3 BAD-$R-4"
done
-for R in FAIL XPASS UNKNOWN; do
+for R in FAIL XPASS ERROR UNKNOWN; do
echo $R: > $R-1
echo $R:foo > $R-2
echo $R: bar baz > $R-3
# test results per test script.
#
# The exit status of the wrapped script is ignored. Lines in its stdout
-# and stderr beginning with `PASS', `FAIL', `XFAIL', `XPASS' or `SKIP'
-# count as a test case result with the obviously-corresponding outcome.
-# Every other line is ignored for what concerns the testsuite outcome.
+# and stderr beginning with `PASS', `FAIL', `XFAIL', `XPASS', `SKIP' or
+# `ERROR' count as a test case result with the obviously-corresponding
+# outcome. Every other line is ignored as far as the testsuite outcome
+# is concerned.
#
# This script is used at least by the `driver-custom-multitest*.test'
# tests.
: > $tmp_res
while read line; do
case $line in
- PASS:*|FAIL:*|XPASS:*|XFAIL:*|SKIP:*)
+ PASS:*|FAIL:*|XPASS:*|XFAIL:*|SKIP:*|ERROR:*)
i=`expr $i + 1`
result=`LC_ALL=C expr "$line" : '\([A-Z]*\):.*'`
- case $result in FAIL|XPASS) st=1;; esac
+ case $result in FAIL|XPASS|ERROR) st=1;; esac
# Output testcase result to console.
echo "$result: $test_name, testcase $i"
# Register testcase outcome for the log file.