Replace small/large buffer tests with parameterized test_chunked
test_large_buffers reset d_stream.next_out on every inflate iteration, so the
decompressed output was never compared against the source. test_chunked keeps
the input, compressed, and decompressed buffers separate and checks them with
memcmp.
New avail_out values (3, 64, 128, 256, 259) exercise inflate_fast()'s safe-mode
MATCH-state bailout around the 258-byte maximum match length.
Bump Google Benchmark to v1.9.5
* Google Benchmark v1.9.4 fails to compile with recent versions of clang and Visual C++ if warnings are treated as errors
Adds benchmark_corpora.cc which dynamically discovers and benchmarks
all files from the zlib-ng/corpora repository (silesia, calgary,
canterbury, large, snappy, etc.).
Benchmarks are registered at startup using RegisterBenchmark. If the
corpora directory is not present, no benchmarks are registered.
Deflate is tested at levels 1, 6, and 9 per file. Inflate is tested
once per file using data pre-compressed at level 9.
Add --benchmark_cooldown flag to mitigate thermal throttling
Adds a --benchmark_cooldown=<seconds> flag that inserts a sleep between
benchmark families. This helps produce consistent results on systems
where sustained workloads cause thermal throttling and CPU frequency
scaling.
Uses a wrapping BenchmarkReporter that sleeps before forwarding results
to the default display reporter.
Add /delta workflow for per-PR binary size comparison
On a /delta PR comment the job builds the PR head and base with
RelWithDebInfo, splits the DWARF into sibling .debug companions, and
runs several tools against both stripped libraries:
- binutils size for text/data/bss totals plus a Δ row
- bloaty for sections, top 30 compile units, and top 30 symbols
- nm --defined-only --dynamic to diff the exported symbol set
- abidiff for C ABI changes (honouring test/abi/ignore)
- minigzip at levels 1-9 over silesia-small.tar and, on native
builds, the full silesia.tar
Results come back as a "## Delta Report" PR comment with a details
block per section, reporting both head and base SHAs so offset runs
are unambiguous.
Comment syntax is /delta [arch] [-N]. Arch defaults to x86_64 and
accepts aarch64, powerpc64le, riscv64, and s390x. -N selects the Nth
commit back from the PR head so a regression can be bisected without
force-pushing. Cross-compile builds reuse cmake/toolchain-*.cmake
and run the stripped binaries under qemu-user.
Gate Scalar and SSE chorba uniformly on CRC32_CHORBA_FALLBACK and
CRC32_CHORBA_SSE_FALLBACK across prototypes, dispatch, sources, tests
and benchmarks instead of spot-checking WITHOUT_CHORBA /
WITHOUT_CHORBA_SSE directly at each site.
Also move crc32_chorba_c.c into ZLIB_GENERIC_SRCS and align Makefile.in
to match so the CMake and autotools builds stay bit-identical.
The 'Ubuntu GCC No Chorba' matrix entry had been passing -DWITH_CHORBA=OFF
since its introduction in 9d4af458, but the actual CMake option is
named WITH_CRC32_CHORBA, so the entry never actually disabled Chorba.
The MSVC and GCC 32-bit polyfills for _mm_cvtsi64_si128 /
_mm_cvtsi128_si64 had identical bodies. Merge them into a single
block guarded by !__clang__ && ARCH_32BIT, with the MSVC-only
#include <intrin.h> nested inside.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Fix MSVC v142 miscompile of _mm_cvtsi64_si128 polyfill on 32-bit
MSVC v142 (Visual Studio 2019, and VS 2022 pre-17.11) miscompiles
_mm_set_epi64x(0, a) on 32-bit Windows by routing part of the synthesis
through a GPR, clobbering live register data and causing stack corruption
in the chorba SSE2/SSE4.1 CRC32 code paths.
Replace the _mm_set_epi64x(0, a) polyfill with _mm_loadl_epi64 which
compiles to a single MOVQ xmm,m64 that bypasses the buggy synthesis
path. Also convert the GCC 32-bit _mm_cvtsi64_si128 macro to a static
inline for consistency, and drop the redundant ARCH_X86 guard since
x86_intrins.h is only reachable from x86 code.
The "slow" variant of longest_match uses a 3-byte rolling hash to seed
its offset-search lookups after a match has been found. Rename the
template gate, the functable entry, and all arch-specific instantiations
from *_slow to *_roll to reflect what the variant actually uses, so a
separate integer-hash offset-search variant can coexist under its own
name.
Benchmarks the inflate fast path with constrained output
buffers ranging from 64 to 16384 bytes per call, reproducing
the libpng decompression pattern described in the "running
off a cliff" analysis.
Fix VPCLMULQDQ CRC32 build with partial AVX-512 baselines
The 512-bit path in crc32_pclmulqdq_tpl.h assumed AVX-512F was
enough, but some of the intrinsics it used actually require
AVX-512DQ. Pick the correct variants based on the available
features.
Add fallback defines to skip generic C code when native intrinsics exist
Each arch header now sets *_FALLBACK defines (ADLER32_FALLBACK,
CHUNKSET_FALLBACK, COMPARE256_FALLBACK, CRC32_BRAID_FALLBACK,
SLIDE_HASH_FALLBACK) when no native SIMD implementation exists.
Generic C source files, declarations, functable entries, tests,
and benchmarks are guarded by these defines.
The `vector` keyword requires -fzvector which is not available on all
GCC versions (e.g. EL10). Use __attribute__((vector_size(16))) typedefs
instead, matching the existing style in crc32_vx.c.
```
C:/build/git/zlib-ng/test/gh1235.c: In function 'main':
C:/build/git/zlib-ng/test/gh1235.c:34:43: error: passing argument 2 of 'compress2' from incompatible pointer type [-Wincompatible-pointer-types]
34 | if (PREFIX(compress2)(compressed, &bytes, plain, i, 1) != Z_OK) return -1;
| ^~~~~~
| |
| z_size_t * {aka unsigned int *}
In file included from C:/build/git/zlib-ng/zutil.h:15,
from C:/build/git/zlib-ng/test/gh1235.c:4:
../zlib.h:1261:69: note: expected 'long unsigned int *' but argument is of type 'z_size_t *' {aka 'unsigned int *'}
1261 | Z_EXTERN int Z_EXPORT compress2(unsigned char *dest, unsigned long *destLen, const unsigned char *source,
| ~~~~~~~~~~~~~~~^~~~~~~
```
- Add local variables match_len and strstart in insert_match to avoid extra lookups from the struct.
- Move the check for enough lookahead outside the function; this avoids a function call that would
  immediately return.
- Add a local variable match_len in emit_match to avoid extra lookups from the struct.
- Move the s->lookahead decrement to the top of the function; both branches do it and don't care
  when it happens.
Process 64 bytes per iteration using 8x uint64_t loads
with interleaved memcpy stores and __crc32d calls.
RPi5 benchmarks show 30-51% improvement over the
separate crc32 + memcpy baseline.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Use the OSB workflow as an initial test before queueing all the other
tests. This makes sure we don't spend a lot of CI time testing something
that won't even build.
When 56d3d985 was reverted in b85cfdf9, it restored dead
stores to match.strstart and match.match_length that
have no effect since match is passed by value. The
compiler already eliminated them; remove them from the source.
Use uintptr_t for ASan function signatures and macro variables
The ASan runtime ABI expects uptr (pointer-sized unsigned) for both
parameters of __asan_loadN/__asan_storeN. On LLP64 targets like
Windows x64, long is 32-bit while pointers are 64-bit, truncating
size values. Use uintptr_t to match the ABI correctly.
Create zsanitizer.h with all sanitizer detection, declaration
stubs, and instrument_read/write/read_write macros. Include it
only in the chunkset, inflate, and dfltcc files that perform
deliberate out-of-bounds reads for performance.
Add 256-bit VPCLMULQDQ CRC32 path for systems without AVX-512.
Split VPCLMULQDQ CRC32 into separate AVX2 and AVX-512 compilation
units. Compute fold-by-8 constants for the AVX2 path using
bitreverse(x^d mod G(x), 33) with d=992 and d=1056.
Add MSAN to AArch64.
Change tests so we run UBSAN on NEON/ARMv8 code; testing without
our optimizations is less important.
Fix the Windows ARM test-skipping check.
Define NMAX_ALIGNED32 as NMAX rounded down to a multiple of 32 (5536)
and use it in the NEON adler32 implementation to ensure that src stays
32-byte aligned throughout the main SIMD loop. NMAX (5552) is not a
multiple of 32, so previously the first iteration after the alignment
preamble could process a non-32-aligned number of bytes, causing src
to lose 32-byte alignment for all subsequent iterations.
The first iteration's budget is rounded down with ALIGN_DOWN after
subtracting align_diff, ensuring k is always a multiple of 32.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>