Commit v8.25-4-g62e7af0 introduced the issue, as it
broke out of the processing loop irrespective of the
value of new_file_flag, which was used to indicate
whether or not there was a finite number of filters.
For example, this ran forever (as it should):
$ yes | split --filter="head -c1 >/dev/null" -b 1000
However, this exited immediately, due to EPIPE being propagated
back through cwrite and the loop not considering that new
filters could be started:
$ yes | split --filter="head -c1 >/dev/null" -b 100000
Similarly, processing would exit early for a bounded number of
output files, resulting in no data being sent to all but the first:
$ truncate -s10T big.in
$ split --filter='head -c1 >$FILE' -n 2 big.in
$ echo $(stat -c%s x??)
1 0
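
To make the flawed control flow concrete, the following is a
minimal, stand-alone simulation of the loop-termination logic.
It is illustrative only, not the code from src/split.c: only the
names cwrite_ok, opened, max_files and new_file_flag are taken
from bytes_split; everything else is assumed for the sketch.
Each simulated filter accepts a single byte and then rejects
further input, as 'head -c1 >/dev/null' does:

  #include <stdbool.h>
  #include <stdio.h>

  int
  main (void)
  {
    int max_files = 3;          /* e.g. split -n 3 */
    int opened = 0;             /* filters started so far */
    bool new_file_flag = true;  /* start a new filter on next write */
    int accepted = 0;           /* bytes the current filter took */

    for (int byte = 0; byte < 30; byte++)  /* 30 bytes of "input" */
      {
        if (new_file_flag)
          accepted = 0;         /* fresh filter */

        /* Simulated cwrite: a filter takes one byte, then fails
           (the real cwrite would see EPIPE on the pipe).  */
        bool cwrite_ok = (accepted++ == 0);

        opened += new_file_flag;
        new_file_flag = false;

        /* The pre-fix condition was just 'if (!cwrite_ok)', which
           ends the loop as soon as the first filter stops accepting
           input, i.e., with opened == 1.  The fixed condition stops
           reading only once no further filters can be started.  */
        if (!cwrite_ok && opened == max_files)
          break;

        if (!cwrite_ok)
          new_file_flag = true; /* move on to the next filter */
      }

    printf ("filters started: %d of %d\n", opened, max_files);
    return 0;
  }

Run as written, this reports all 3 filters being started; reverting
the condition to 'if (!cwrite_ok)' makes it stop after the first,
which is the bug fixed below.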
I was alerted to this code by clang-analyzer, which
indicated dead assignments; that is often a sign of
code that hasn't considered all cases.
* src/split.c (bytes_split): Change the last condition in
the processing loop to also consider the number of files
before breaking out of the loop.
* tests/split/filter.sh: Add a test case.
* NEWS: Mention the bug fix.
* Noteworthy changes in release ?.? (????-??-??) [?]
+** Bug fixes
+
+ split no longer exits when invocations of a --filter return EPIPE.
+ [bug introduced in coreutils-8.26]
+
* Noteworthy changes in release 8.27 (2017-03-08) [stable]
{
/* If filter no longer accepting input, stop reading. */
n_read = to_read = 0;
+ eof = true;
break;
}
bp_out += w;
opened += new_file_flag;
to_write -= to_read;
new_file_flag = false;
- if (!cwrite_ok)
+ if (!cwrite_ok && opened == max_files)
{
/* If filter no longer accepting input, stop reading. */
n_read = 0;
timeout 10 sh -c 'split --filter="head -c1 >/dev/null" -n 1 zero.in' || fail=1
fi
+# Ensure that "endless" input _is_ processed for unbounded number of filters
+for buf in 1000 1000000; do
+ returns_ 124 timeout .5 sh -c \
+ "yes | split --filter='head -c1 >/dev/null' -b $buf" || fail=1
+done
+
Exit $fail