IBM Z DFLTCC: Fix updating strm.adler with inflate()
inflate() does not update strm.adler with DFLTCC.
Add a missing assignment to dfltcc_inflate() to fix this.
Note that deflate() is not affected.
Also add a test to prevent regressions.
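A minimal sketch of the kind of check this implies, using the zlib-compatible API (illustrative, not the actual test that was added):

```
#include <assert.h>
#include <string.h>
#include "zlib.h"

/* Hedged sketch: after a zlib-wrapped stream is fully inflated, strm.adler
   should equal the Adler-32 of the decompressed data, DFLTCC or not. */
static void check_inflate_adler(const unsigned char *comp, size_t comp_len,
                                unsigned char *out, size_t out_len) {
    z_stream strm;
    int ret;

    memset(&strm, 0, sizeof(strm));
    ret = inflateInit(&strm);               /* zlib wrapper: checksum is Adler-32 */
    assert(ret == Z_OK);

    strm.next_in = (unsigned char *)comp;
    strm.avail_in = (unsigned int)comp_len;
    strm.next_out = out;
    strm.avail_out = (unsigned int)out_len;
    ret = inflate(&strm, Z_FINISH);
    assert(ret == Z_STREAM_END);

    assert(strm.adler == adler32(adler32(0, NULL, 0), out, (unsigned int)strm.total_out));
    inflateEnd(&strm);
}
```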
Mika Lindqvist [Sun, 11 Sep 2022 13:15:10 +0000 (16:15 +0300)]
[Compat] Don't use uint32_t for z_crc_t
* We don't include stdint.h, as it must be included before stdarg.h, and other headers might include stdarg.h before we do.
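A hedged sketch of one way to pick a 32-bit-capable type without <stdint.h> (not necessarily the exact change):

```
#include <limits.h>

/* Hedged sketch: choose a plain integer type that is at least 32 bits wide,
   using only <limits.h>, which carries no header-ordering constraints. */
#if UINT_MAX >= 0xffffffffUL
typedef unsigned int  z_crc_t;   /* unsigned int holds 32 bits here */
#else
typedef unsigned long z_crc_t;   /* unsigned long is always >= 32 bits */
#endif
```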
Dougall Johnson [Mon, 22 Aug 2022 00:57:39 +0000 (10:57 +1000)]
Inflate: Increase max root table sizes to 10 and 9
This increases the size of the `codes` array by 1920 bytes (33%), but
improves performance a little. Root table size is still limited by the
maximum code length in use, so tiny files typically see no change to
table-building time, as they don't use longer codes.
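The clamping that keeps small inputs unaffected looks roughly like this (a sketch of the logic, not the exact source):

```
/* Hedged sketch: the requested root table bits are clamped to the longest
   code length actually present, so streams that only use short codes build
   the same size tables as before. */
static unsigned effective_root_bits(unsigned requested_root, unsigned max_code_len) {
    return requested_root > max_code_len ? max_code_len : requested_root;
}
```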
Mika Lindqvist [Fri, 19 Aug 2022 12:00:21 +0000 (15:00 +0300)]
If the extra field was larger than the space the user provided with
inflateGetHeader(), and if multiple calls of inflate() delivered
the extra header data, then there could be a buffer overflow of the
provided space. This commit assures that provided space is not
exceeded.
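A hedged sketch of the bounded-copy pattern the fix implies, with field names taken from zlib's gz_header (not the literal patch):

```
#include <string.h>
#include "zlib.h"

/* Hedged sketch: copy only as much of the gzip extra field as fits in the
   user-provided buffer (head->extra, head->extra_max), even when the data
   arrives across several inflate() calls. `written` counts the extra-field
   bytes already delivered by earlier calls. */
static void copy_extra_chunk(gz_header *head, unsigned written,
                             const unsigned char *chunk, unsigned chunk_len) {
    if (head != NULL && head->extra != NULL && written < head->extra_max) {
        unsigned room = head->extra_max - written;
        memcpy(head->extra + written, chunk, chunk_len < room ? chunk_len : room);
    }
}
```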
Mosè Giordano [Wed, 24 Aug 2022 19:39:03 +0000 (20:39 +0100)]
Actually run `configure` CI on macOS with GCC
`gcc` is an alias for Apple Clang on macOS. See for example https://github.com/zlib-ng/zlib-ng/runs/7963904948?check_suite_focus=true
```
gcc -O2 -std=c11 -Wall -fPIC -DNDEBUG -DHAVE_POSIX_MEMALIGN -DWITH_GZFILEOP -fno-semantic-interposition -DHAVE_VISIBILITY_HIDDEN -DHAVE_VISIBILITY_INTERNAL -DHAVE_BUILTIN_CTZ -DHAVE_BUILTIN_CTZLL -DX86_FEATURES -DX86_AVX2 -DX86_AVX2_ADLER32 -DX86_AVX_CHUNKSET -DX86_AVX512 -DX86_AVX512_ADLER32 -DX86_MASK_INTRIN -DX86_AVX512VNNI -DX86_AVX512VNNI_ADLER32 -DX86_SSE41 -DX86_SSE42_CRC_HASH -DX86_SSE42_ADLER32 -DX86_SSE42_CRC_INTRIN -DX86_SSE2 -DX86_SSE2_CHUNKSET -DX86_SSSE3 -DX86_SSSE3_ADLER32 -DX86_PCLMULQDQ_CRC -DPIC -I/Users/runner/work/zlib-ng/zlib-ng -c -o adler32.lo /Users/runner/work/zlib-ng/zlib-ng/adler32.c
clang: warning: argument unused during compilation: '-fno-semantic-interposition' [-Wunused-command-line-argument]
```
In order to use the real GCC you have to call `gcc-9`, `gcc-10`, or `gcc-11`: https://github.com/actions/runner-images/blob/06dd4c14e4aa8c14febdd8d6cf123b8d770b4e4a/images/macos/macos-11-Readme.md#language-and-runtime.
Fixed "content already populated" error in CMake scripts. #1327
We should only need to use either FetchContent_MakeAvailable, or
FetchContent_GetProperties together with FetchContent_Populate, but not both
methods. We use the latter for compatibility with older CMake versions.
Viktor Szakats [Sun, 17 Jul 2022 19:33:01 +0000 (19:33 +0000)]
cmake: respect custom `RC` flags and delete `GCC_WINDRES`
Before this patch, `zlib.rc` was compiled using a manual command [1] when
using the MinGW (and MSYS/Cygwin) toolchains. This method ignores
`CMAKE_RC_FLAGS` and offers no other way to pass a custom flag, breaking
the build in cases where a custom `windres` option is required, e.g.
`--target=` or `-I` on some platforms and configurations, in particular
with `llvm-windres`.
This patch deletes the special case for these toolchains and lets CMake
compile the `.rc` file the default way used for all Windows targets.
I'm not entirely sure why this special case was added back in 2011. My
suspicion is the need to pass `-DGCC_WINDRES`. That could be resolved much
more simply by adding this line for the targets that require it:
set(CMAKE_RC_FLAGS "${CMAKE_RC_FLAGS} -DGCC_WINDRES")
However, the `.rc` line guarded by `GCC_WINDRES` works just fine with
`windres` these days. Moreover, that guarded line contains obsolete flags
from the 16-bit era, which have had no effect for a long time, as documented here:
<https://docs.microsoft.com/windows/win32/menurc/common-resource-attributes>
So, this patch deletes `GCC_WINDRES` from the project entirely.
* Moves cmake scripts for testing into test/cmake.
* Separates out related add_tests into separate cmake scripts.
* Moves building test binaries into CMakeLists.txt in test directory.
Fixed "function declaration without a prototype" warnings in tools.
tools/maketrees.c:101:29: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
static void gen_trees_header()
tools/makecrct.c:65:27: warning: a function declaration without a prototype is deprecated in all versions of C [-Wstrict-prototypes]
static void make_crc_table()
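The fix is to give the definitions an explicit void parameter list, for example:

```
/* An empty parameter list is an old-style, unprototyped declaration;
   spelling out (void) makes it a proper prototype. */
static void gen_trees_header(void);

static void gen_trees_header(void) {
    /* ... body unchanged ... */
}
```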
Ilya Leoshkevich [Mon, 27 Jun 2022 18:16:54 +0000 (20:16 +0200)]
Uninstall strawberryperl
strawberryperl installs /c/Strawberry/c/bin/libstdc++-6.dll, which is
incompatible with the mingw64 one. zlib-ng does not need perl, so
simply remove it.
Ilya Leoshkevich [Tue, 28 Jun 2022 08:51:39 +0000 (10:51 +0200)]
Bump _POSIX_C_SOURCE to 200809 for strdup()
Google Test uses strdup(), which makes building tests fail on a fresh
MSYS2 setup:
In file included from zlib-ng/_deps/googletest-src/googletest/include/gtest/internal/gtest-internal.h:40,
from zlib-ng/_deps/googletest-src/googletest/include/gtest/gtest.h:62,
from zlib-ng/test/test_compress.cc:17:
zlib-ng/_deps/googletest-src/googletest/include/gtest/internal/gtest-port.h: In function ‘char* testing::internal::posix::StrDup(const char*)’:
zlib-ng/_deps/googletest-src/googletest/include/gtest/internal/gtest-port.h:2046:47: error: ‘strdup’ was not declared in this scope; did you mean ‘StrDup’?
2046 | inline char* StrDup(const char* src) { return strdup(src); }
| ^~~~~~
| StrDup
Bump _POSIX_C_SOURCE to enable this function. An alternative solution
would be to define _POSIX_C_SOURCE in test/CMakeLists.txt, but having a
bigger value for zlib-ng itself should not hurt.
Include zbuild.h earlier in minideflate.c in order to make the new
setting take effect for this file.
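For illustration, the ordering constraint looks like this (a sketch, not the project's actual zbuild.h):

```
/* Hedged sketch: the feature-test macro must be defined before the first
   system header is included, otherwise strdup() stays undeclared. */
#define _POSIX_C_SOURCE 200809L
#include <string.h>

char *dup_string(const char *s) {
    return strdup(s);   /* declared only when _POSIX_C_SOURCE >= 200809L */
}
```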
Adam Stylinski [Thu, 2 Jun 2022 22:46:56 +0000 (18:46 -0400)]
Improve the swizzle of the memory magazine fed in a chunk copy for neon
Like the x86 variant, we can leverage the same tables to load a vector
register worth of values. This shows a vast improvement in places where
very large run length encodes can be found in the lz runs.
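A hedged sketch of the idea on AArch64 (illustrative only; the real code builds the index vector from precomputed permutation tables and guards the source load):

```
#include <arm_neon.h>
#include <stdint.h>

/* Hedged sketch: use a table-lookup (permute) instruction to replicate a
   short match pattern across a whole vector register, then store that
   register repeatedly instead of copying byte by byte. */
static uint8x16_t splat_pattern3(const unsigned char *from) {
    static const uint8_t idx[16] = {0,1,2, 0,1,2, 0,1,2, 0,1,2, 0,1,2, 0};
    uint8x16_t perm  = vld1q_u8(idx);
    uint8x16_t chunk = vld1q_u8((const uint8_t *)from); /* real code bounds-checks this load */
    return vqtbl1q_u8(chunk, perm);                     /* bytes 0..2 repeated across the lanes */
}
```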
Ilya Leoshkevich [Tue, 14 Jun 2022 09:19:29 +0000 (11:19 +0200)]
IBM Z DFLTCC: Test with MSan
* Add a CI job.
* Do not collect coverage: LLVM's gcov support (part of compiler-rt)
cannot be built with the MSan instrumentation, which means that
whenever it's called (in particular, in order to write the results to
a file at the end), there is a risk of false positives.
* Add __msan_unpoison() calls to DFLTCC inline assembly (see the sketch after this list).
* Make parameter block sizes symbolic constants.
* Move dfltcc() definition after struct dfltcc_param_v0 definition.
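A hedged sketch of how such calls are typically guarded (the macro name here is illustrative, not necessarily what the code uses):

```
/* MSan cannot see stores performed by inline assembly or by the DFLTCC
   instruction itself, so memory they write must be unpoisoned by hand. */
#if defined(__has_feature)
#  if __has_feature(memory_sanitizer)
#    include <sanitizer/msan_interface.h>
#    define DFLTCC_UNPOISON(ptr, size) __msan_unpoison((ptr), (size))
#  endif
#endif
#ifndef DFLTCC_UNPOISON
#  define DFLTCC_UNPOISON(ptr, size) ((void)0)   /* no-op without MSan */
#endif
```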
Add a test for concurrently modifying deflate() input
The test simulates what one of the QEMU live migration tests is doing:
increments each buffer byte by 1 while deflate()ing it.
The test tries to produce a race condition and therefore is
probabilistic. The longer it runs, the better are the chances to catch
an issue. The scenario in question is known to be broken on IBM Z
with DFLTCC, and there it is caught in 100ms most of the time. The
run time is therefore set to 1 second in order to balance usability and
reliability.
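A hedged sketch of the scenario only (the real test also validates the produced stream, which is omitted here); it uses the zlib-compatible API and illustrative names:

```
#include <pthread.h>
#include <stdatomic.h>
#include <string.h>
#include "zlib.h"

#define BUF_SIZE 65536
static unsigned char input[BUF_SIZE];
static unsigned char output[BUF_SIZE * 2];
static atomic_int stop_flag;

/* Keep incrementing every input byte until told to stop. */
static void *mutator(void *arg) {
    (void)arg;
    while (!atomic_load(&stop_flag))
        for (size_t i = 0; i < BUF_SIZE; i++)
            input[i]++;                      /* deliberate concurrent modification */
    return NULL;
}

static int run_once(void) {
    z_stream strm;
    pthread_t thread;
    int ret;

    memset(&strm, 0, sizeof(strm));
    if (deflateInit(&strm, Z_DEFAULT_COMPRESSION) != Z_OK)
        return -1;

    atomic_store(&stop_flag, 0);
    pthread_create(&thread, NULL, mutator, NULL);

    strm.next_in = input;
    strm.avail_in = BUF_SIZE;
    strm.next_out = output;
    strm.avail_out = sizeof(output);
    ret = deflate(&strm, Z_FINISH);          /* races with the mutator on purpose */

    atomic_store(&stop_flag, 1);
    pthread_join(thread, NULL);
    deflateEnd(&strm);
    return ret == Z_STREAM_END ? 0 : -1;
}
```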
Add a public compile definition for the zlib-ng API so that other projects that use CMake and link against the zlib project can easily determine whether to include "zlib-ng.h" or "zlib.h".
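For illustration only; `ZLIBNG_NATIVE_API` below is a hypothetical definition name, not necessarily the one the project exports:

```
/* Hedged sketch: consuming code can key the include off a compile definition
   propagated by the linked CMake target. ZLIBNG_NATIVE_API is illustrative. */
#if defined(ZLIBNG_NATIVE_API)
#  include "zlib-ng.h"    /* native API, zng_-prefixed symbols */
#else
#  include "zlib.h"       /* zlib-compatible API */
#endif
```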
Negative windowBits arguments are eventually turned positive in
deflateInit2_ and inflateInit2_ (more precisely in inflateReset2).
Such values are used to indicate that raw deflate/inflate should
be performed.
If a user supplies INT32_MIN for windowBits, the code will compute
-INT32_MIN, which does not fit into int32_t. In fact, this is
undefined behavior in C and should be avoided.
Clearly this is a user error, but given the careful validation of
input arguments a few lines later in deflateInit2_ I think this
might be of interest.
Proof of Concept:
- Compile zlib-ng with gcc -ftrapv or -fsanitize=undefined
- Compile and run this program:
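(The original PoC is not reproduced here; a minimal program along these lines, using the public deflateInit2() entry point, triggers the overflow:)

```
#include <stdint.h>
#include <string.h>
#include "zlib.h"

int main(void) {
    z_stream strm;
    memset(&strm, 0, sizeof(strm));
    /* windowBits = INT32_MIN: negating it inside deflateInit2_/inflateReset2
       overflows a signed 32-bit int, which -ftrapv or UBSan will report. */
    return deflateInit2(&strm, Z_DEFAULT_COMPRESSION, Z_DEFLATED,
                        INT32_MIN, 8, Z_DEFAULT_STRATEGY);
}
```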
If gzip support has been disabled during compilation, then also
consider gzip-relevant states invalid in deflateStateCheck.
The gzip state definitions can also be removed.
This change leads to a failure in test/example, and I am not sure
what the GZIP conditional is trying to achieve. All gzip-related
functions are still defined in zlib.h.
An alternative approach is to remove the GZIP define.
Fixed conversion warnings for wsize in slide_hash_c.
slide_hash.c(50,44): warning C4244: 'function': conversion from 'unsigned int' to 'uint16_t', possible loss of data
slide_hash.c(51,40): warning C4244: 'function': conversion from 'unsigned int' to 'uint16_t', possible loss of data
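A hedged sketch of the kind of explicit cast that silences C4244 (illustrative names, not the literal diff):

```
#include <stdint.h>

/* Hedged sketch: w_size never exceeds 32768, so narrowing it to the 16-bit
   hash entry type is lossless once the cast is spelled out explicitly. */
static uint16_t slide_entry(uint16_t entry, unsigned int w_size) {
    uint16_t wsize = (uint16_t)w_size;               /* explicit, lossless cast */
    return (uint16_t)(entry >= wsize ? entry - wsize : 0);
}
```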
Remove zng_gzgetc_ function from zlib-ng native API.
It exists in zlib for backwards compatibility, but it has never been
documented or advertised for use in zlib-ng's native API.
Fabrice Fontaine [Fri, 27 May 2022 21:25:21 +0000 (23:25 +0200)]
CMakeLists.txt: fix version in zlib.pc when building statically
When building statically (i.e. with BUILD_SHARED_LIBS=OFF),
ZLIB_FULL_VERSION is not set resulting in an empty version in zlib.pc
and the following build failure with transmission:
checking for ZLIB... configure: error: Package requirements (zlib >= 1.2.3) were not met:
Package dependency requirement 'zlib >= 1.2.3' could not be satisfied.
Package 'zlib' has version '', required version is '>= 1.2.3'
Mark Adler [Tue, 10 May 2022 15:11:32 +0000 (08:11 -0700)]
Correct incorrect inputs provided to the CRC functions.
The previous releases of zlib were not sensitive to incorrect CRC
inputs with bits set above the low 32. This commit restores that
behavior, so that applications with such bugs will continue to
operate as before.
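Conceptually, the restored behavior amounts to looking only at the low 32 bits of whatever the caller passes in (a hedged sketch, not the actual code):

```
#include <stdint.h>

/* Treat only the low 32 bits of a caller-supplied CRC as significant, so
   stray bits in the upper half of an unsigned long are ignored. */
static uint32_t crc_input_low32(unsigned long crc) {
    return (uint32_t)(crc & 0xffffffffUL);
}
```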
Speed up software CRC-32 computation by a factor of 1.5 to 3.
Use the interleaved method of Kadatch and Jenkins in order to make
use of pipelined instructions through multiple ALUs in a single
core. This also speeds up and simplifies the combination of CRCs,
and updates the functions to pre-calculate and use an operator for
CRC combination.
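For reference, CRC combination is exposed through zlib's public API; a small usage sketch:

```
#include "zlib.h"

/* Given crc1 = CRC-32 of part A and crc2 = CRC-32 of part B (length len2),
   crc32_combine() returns the CRC-32 of A followed by B without touching
   the data again. */
static unsigned long combine_parts(unsigned long crc1, unsigned long crc2,
                                   z_off_t len2) {
    return crc32_combine(crc1, crc2, len2);
}
```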
Adam Stylinski [Fri, 8 Apr 2022 17:24:21 +0000 (13:24 -0400)]
Adding avx512_vnni inline + copy elision
An interesting revelation while benchmarking all of this is that our
chunkmemset_avx seems to be slower in a lot of use cases than
chunkmemset_sse. That will be an interesting function to attempt to
optimize.
Right now though, we're basically beating Google for all PNG decode and
encode benchmarks. There are some variations of flags that can
basically have us trading blows, but we're about as much as 14% faster
than chromium's zlib patches.
While we're here, add a more direct benchmark of the folded copy method
versus the explicit copy + checksum.
Adam Stylinski [Fri, 8 Apr 2022 02:57:09 +0000 (22:57 -0400)]
Added inlined AVX512 adler checksum + copy
While we're here, also simplify the "fold" signature, as reducing the
number of rebases and horizontal sums did not prove to be meaningfully
faster (slower in many circumstances).
Adam Stylinski [Wed, 6 Apr 2022 19:38:20 +0000 (15:38 -0400)]
Add AVX2 inline copy + adler implementation
This was pretty much across the board wins for performance, but the wins
are very data dependent and it sort of depends on what copy runs look
like. On our less than realistic data in benchmark_zlib_apps, the
decode test saw some of the bigger gains, ranging anywhere from 6 to 11%
when compiled with AVX2 on a Cascade Lake CPU (and with only AVX2
enabled). The decode on realistic imagery enjoyed smaller gains,
somewhere between 2 and 4%.
Interestingly, there was one outlier on encode, at level 5. The best
theory for this is that the copy runs for that particular compression
level were such that glibc's ERMS-aware memmove implementation managed
to marginally outpace the copy during the checksum thanks to the rep movs
string sequence and clever microcoding on Intel's part. It's hard to say
for sure but the most standout difference between the two perf profiles
was more time spent in memmove (which is expected, as it's calling
memcpy instead of copying the bytes during the checksum).
There's the distinct possibility that the AVX2 checksums could be
marginally improved by one level of unrolling (like what's done in the
SSE3 implementation). The AVX512 implementations are certainly getting
gains from this but it's not appropriate to append this optimization in
this series of commits.
Adam Stylinski [Sun, 3 Apr 2022 16:18:12 +0000 (12:18 -0400)]
Adding an SSE42 optimized copy + adler checksum implementation
We are protecting its usage behind a lot of preprocessor macros, as the
other methods are not yet implemented and calling this version bypasses
the faster adler implementations implicitly.
When more versions are written for faster vectorizations, the functable
entries will be populated and preprocessor macros removed. This round,
the copy + checksum is not employing as many tricks as one would hope
with a "folded" checksum routine. The reason for this is the
particularly tricky case of dealing with unaligned buffers. The
implementations for targets without CPUs that pay a huge penalty for
unaligned loads will be able to be much faster.
Fancier methods that minimized rebasing, while having the potential to
be faster, ended up being slower because the compiler structured the
code in a way that ended up either spilling to the stack or trampolining
out of a loop and back in it instead of just jumping over the first load
and store.
Revisiting this for AVX512, where more registers are abundant and more
advanced loads exist, may be prudent.
Adam Stylinski [Fri, 1 Apr 2022 23:02:05 +0000 (19:02 -0400)]
Create adler32_fold_c* functions
These are very simple wrappers that do nothing clever but serve as a
shim interface for implementing versions which do cleverly track the
number of scalar sums performed so that we can minimize rebasing and
also have an efficient copy elision.
This serves as the baseline as each vectorization gets its own commit.
That way the PR will be bisectable.
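A hedged sketch of what such a baseline C shim can look like (illustrative name; the real function also carries the state the vector versions need):

```
#include <string.h>
#include "zlib.h"

/* Copy the input and update the Adler-32 checksum in one call; the
   vectorized variants fold both steps into a single pass over the data. */
static unsigned long adler32_copy_c(unsigned long adler, unsigned char *dst,
                                    const unsigned char *src, unsigned int len) {
    memcpy(dst, src, len);
    return adler32(adler, dst, len);
}
```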
Adam Stylinski [Sun, 10 Apr 2022 17:01:22 +0000 (13:01 -0400)]
Improved chunkset substantially where it's heavily used
For most realistic use cases, this doesn't make a ton of difference.
However, for things which are highly compressible and enjoy very large
run length encodes in the window, this is a huge win.
We leverage a permutation table to swizzle the contents of the memory
chunk into a vector register and then splat that over memory with a fast
copy loop.
In essence, where this helps, it helps a lot. Where it doesn't, it does
no measurable damage to the runtime.
This commit also simplifies a chunkcopy_safe call for determining a
distance. Using labs is enough to give the same behavior as before,
with the added benefit that no predication is required _and_, most
importantly, static analysis by GCC's string fortification can't throw a
fit, because it better conveys to the compiler that the input to
builtin_memcpy will always be in range.
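A hedged sketch of the idea (using llabs here for portability; illustrative names, not zlib-ng's actual chunkcopy_safe):

```
#include <stdlib.h>
#include <string.h>

/* Compute the distance as an absolute value so a single comparison decides
   whether the regions overlap; when they cannot, a plain memcpy is safe and
   the compiler can see the call stays in range. Overlapping LZ77 copies fall
   back to a byte loop that replicates the pattern, which memcpy would not do. */
static void copy_match(unsigned char *out, const unsigned char *from, size_t len) {
    size_t dist = (size_t)llabs((long long)(out - from));
    if (dist >= len) {
        memcpy(out, from, len);     /* regions cannot overlap: plain copy */
    } else {
        while (len--)               /* overlapping: replicate byte by byte */
            *out++ = *from++;
    }
}
```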