split: ensure input is processed when filters exit early
commit v8.25-4-g62e7af0 introduced the issue, as it
broke out of the processing loop irrespective of
the value of new_file_flag, which indicated whether
or not there was a finite number of filters.
For example, this ran forever (as it should):
$ yes | split --filter="head -c1 >/dev/null" -b 1000
However, this exited immediately due to EPIPE being propagated
back through cwrite() and the loop not considering that new
filters would be started:
$ yes | split --filter="head -c1 >/dev/null" -b 100000
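To see why the filter's early exit surfaces as a write failure,
here is a minimal standalone demo (not coreutils code, and
independent of split): once the only reader of a pipe has exited,
writing to it fails with EPIPE, which is the error cwrite reports
back to the loop.

  /* Standalone demo, not coreutils code: a write to a pipe whose
     reader has exited fails with EPIPE, just as split's writes do
     once a --filter like "head -c1" has finished.  */
  #include <errno.h>
  #include <signal.h>
  #include <stdio.h>
  #include <unistd.h>
  #include <sys/wait.h>

  int
  main (void)
  {
    int fd[2];
    signal (SIGPIPE, SIG_IGN);  /* get EPIPE instead of being killed */
    if (pipe (fd) != 0)
      return 1;
    pid_t pid = fork ();
    if (pid < 0)
      return 1;
    if (pid == 0)
      {
        /* Child stands in for a filter that exits early.  */
        close (fd[0]);
        close (fd[1]);
        _exit (0);
      }
    close (fd[0]);              /* parent keeps only the write end */
    waitpid (pid, NULL, 0);     /* the "filter" has now exited */
    if (write (fd[1], "y", 1) < 0 && errno == EPIPE)
      puts ("write: EPIPE");
    return 0;
  }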
Similarly, processing would exit early when a bounded number of
output files was requested, leaving all but the first empty:
$ truncate -s10T big.in
$ split --filter='head -c1 >$FILE' -n 2 big.in
$ echo $(stat -c%s x??)
1 0
I was alerted to this code by clang-analyzer, which flagged
dead assignments, often an indication of code that hasn't
considered all cases.
* src/split.c (bytes_split): Change the last condition in
the processing loop to also consider the number of files
before breaking out of the loop (see the sketch below).
* tests/split/filter.sh: Add a test case.
* NEWS: Mention the bug fix.
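For readers of the log without the diff to hand, the shape of the
change is roughly the following toy model; the names and structure
here are hypothetical simplifications, not the actual bytes_split
code:

  /* Toy model (hypothetical names, heavily simplified): a failed
     write ends processing only when no further output files will
     be created; otherwise input for the next file must still be
     consumed.  */
  #include <stdbool.h>
  #include <stdio.h>

  static bool
  toy_cwrite (void)
  {
    return false;  /* pretend every filter exits immediately (EPIPE) */
  }

  int
  main (void)
  {
    unsigned int max_files = 2;  /* e.g. split -n 2 */
    unsigned int opened = 0;

    while (opened < 10)          /* stand-in for "input remains" */
      {
        opened++;                /* start a new output file / filter */
        bool ok = toy_cwrite ();
        /* Before: if (! ok) break;  -- gave up after the first file.
           After: also require that no more files remain.  */
        if (! ok && max_files <= opened)
          break;
      }
    printf ("opened %u files\n", opened);  /* 2, not 1 */
    return 0;
  }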