git.ipfire.org Git - thirdparty/zlib-ng.git/log
Nathan Moinvaziri [Tue, 13 Jan 2026 18:04:55 +0000 (10:04 -0800)]
Prefix macros with z in crc32_vpclmulqdq for clarity
Nathan Moinvaziri [Tue, 13 Jan 2026 16:43:26 +0000 (08:43 -0800)]
Use epi32 variants for older MSVC (v141/v140) to avoid cast warnings
Nathan Moinvaziri [Mon, 12 Jan 2026 01:16:36 +0000 (17:16 -0800)]
Fix cast truncates constant value warnings with ternarylogic on Win v141
Nathan Moinvaziri [Sun, 11 Jan 2026 22:53:45 +0000 (14:53 -0800)]
Use epi64 intrinsics for VPCLMULQDQ operations
PCLMULQDQ operates on 64-bit polynomial elements, so use epi64 intrinsics
throughout to provide accurate type information to the compiler.
Nathan Moinvaziri [Sun, 11 Jan 2026 20:05:46 +0000 (12:05 -0800)]
Use masked load/store in partial folding in crc32_vpclmulqdq.
Nathan Moinvaziri [Sun, 11 Jan 2026 20:03:26 +0000 (12:03 -0800)]
Combine final_fold function to remove extra len branch
Nathan Moinvaziri [Sun, 11 Jan 2026 19:32:44 +0000 (11:32 -0800)]
Eliminate extra vmovdqu instruction folding xmm into zmm.
Fixed by using _mm512_castsi128_si512() and removing redundant insert.
Nathan Moinvaziri [Sat, 3 Jan 2026 07:47:34 +0000 (23:47 -0800)]
Clean up variable names for readability in zmm path.
Nathan Moinvaziri [Sat, 3 Jan 2026 02:14:24 +0000 (18:14 -0800)]
Don't compile in Chorba for vpclmulqdq because it is never used
By the time the Chorba if statement is reached, len has already been reduced to < 256.
Nathan Moinvaziri [Sun, 11 Jan 2026 19:34:45 +0000 (11:34 -0800)]
Combine partial and final fold and reduce the number of operations
We do the partial fold after we have folded the crc32 state into a single
128-bit value.
Nathan Moinvaziri [Fri, 2 Jan 2026 23:03:33 +0000 (15:03 -0800)]
Generate shuffle masks in registers for partial_fold.
Faster than loading the table from memory.
Nathan Moinvaziri [Fri, 2 Jan 2026 22:47:53 +0000 (14:47 -0800)]
Use mm_blend_epi16 in crc32_(v)pclmulqdq final reduction
This is the preferred operation mentioned in
https://www.corsix.org/content/alternative-exposition-crc32_4k_pclmulqdq
Nathan Moinvaziri [Sun, 11 Jan 2026 20:17:34 +0000 (12:17 -0800)]
Use ternarylogic when available in crc32_vpclmulqdq.
Nathan Moinvaziri [Sat, 3 Jan 2026 02:26:16 +0000 (18:26 -0800)]
Hoist folding constants to function scope to avoid repeated loads
Nathan Moinvaziri [Sun, 11 Jan 2026 21:28:20 +0000 (13:28 -0800)]
Batch PCLMULQDQ operations to reduce latency
Nathan Moinvaziri [Fri, 2 Jan 2026 08:46:36 +0000 (00:46 -0800)]
Move remaining fold calls before load to hide latency
All fold calls are now consistent in this respect.
Nathan Moinvaziri [Fri, 2 Jan 2026 03:26:14 +0000 (19:26 -0800)]
Revert "Move fold calls closer to last change in xmm_crc# variables."
The fold calls were in a better spot before, being located after the loads to
reduce latency.
This reverts commit cda0827b6d522acdb2656114e2c4b7b18b6c1c20.
Nathan Moinvaziri [Wed, 31 Dec 2025 23:18:38 +0000 (15:18 -0800)]
Remove old comments about crc32 folding from crc32 benchmark.
Nathan Moinvaziri [Fri, 2 Jan 2026 06:49:56 +0000 (22:49 -0800)]
Remove unnecessary casts from crc32_(v)pclmulqdq.
Originally, some compilers and older versions of intrinsics libraries only
provided _mm_xor_ps (for __m128) and not _mm_xor_si128 (for __m128i).
Developers would cast integer vectors to float vectors to use the XOR
operation, then cast back. Modern compilers and intrinsics headers provide
_mm_xor_si128, making these casts unnecessary.
Nathan Moinvaziri [Tue, 13 Jan 2026 00:47:43 +0000 (16:47 -0800)]
Simplify CRC32 complement operations using bitwise NOT operator
Nathan Moinvaziri [Tue, 13 Jan 2026 00:44:48 +0000 (16:44 -0800)]
Fix space indentation formatting in crc32_zbc
Nathan Moinvaziri [Sun, 11 Jan 2026 21:11:08 +0000 (13:11 -0800)]
Add fallback for __has_builtin to prevent unmatched parenthesis warning
Occurs on MSVC.
Nathan Moinvaziri [Tue, 13 Jan 2026 17:01:11 +0000 (09:01 -0800)]
Add ARM __builtin_bitreverse16 fallback implementation for GCC.
Nathan Moinvaziri [Sun, 11 Jan 2026 00:22:43 +0000 (16:22 -0800)]
Remove compiler check for builtin_bitreverse16 since we check in code
And we have a generic fallback.
Nathan Moinvaziri [Sat, 10 Jan 2026 01:38:55 +0000 (17:38 -0800)]
__builtin_bitreverse16 CMake compiler check fails for GCC 13
Provide a final check for __builtin_bitreverse16 in code.
Vladislav Shchapov [Wed, 7 Jan 2026 19:30:19 +0000 (00:30 +0500)]
Use GCC's may_alias attribute for access to buffers in crc32_chorba
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Tue, 13 Jan 2026 14:32:53 +0000 (15:32 +0100)]
Add Z_UNREACHABLE compiler hint
Hans Kristian Rosbach [Tue, 13 Jan 2026 13:55:37 +0000 (14:55 +0100)]
Fix (impossible) infinite loop in gz_fetch() detected by GCC-14 static analyzer.
According to the comment, gz_fetch() also assumes that state->x.have == 0, so
let's add an Assert to that effect.
Hans Kristian Rosbach [Mon, 12 Jan 2026 19:52:47 +0000 (20:52 +0100)]
Update static analyzer from targeting GCC v10 to v14
Mika T. Lindqvist [Mon, 12 Jan 2026 01:42:13 +0000 (03:42 +0200)]
Fix symbol mangling so symbols in shared library are exported correctly
* We need to mangle symbols in the map file, otherwise none of the symbols are exported
* Fix gz_error name conflict with zlib-ng API
Nathan Moinvaziri [Mon, 12 Jan 2026 22:57:50 +0000 (14:57 -0800)]
Remove extra indirection calling into crc32_z functions.
This also prevents the double-checking of buf == NULL.
Nathan Moinvaziri [Mon, 12 Jan 2026 19:18:56 +0000 (11:18 -0800)]
Clean up buf == NULL handling on adler32 functions and test strings.
Nathan Moinvaziri [Sun, 11 Jan 2026 00:13:54 +0000 (16:13 -0800)]
Fixed UB in adler32_avx512_copy storemask when len is 0.
Nathan Moinvaziri [Thu, 8 Jan 2026 19:02:08 +0000 (11:02 -0800)]
Rename and reorder properties in hash_test.
Nathan Moinvaziri [Sat, 10 Jan 2026 18:29:05 +0000 (10:29 -0800)]
Merge adler32 and crc32 hash test strings.
Nathan Moinvaziri [Wed, 7 Jan 2026 08:39:25 +0000 (00:39 -0800)]
Add adler32_copy unit test
Nathan Moinvaziri [Wed, 7 Jan 2026 08:34:02 +0000 (00:34 -0800)]
Separate adler32 test strings into their own source header
Hans Kristian Rosbach [Sat, 10 Jan 2026 21:08:13 +0000 (22:08 +0100)]
Simplify the gzread.c name mangling workaround by splitting out just
the workaround into a separate file. This allows us to browse gzread.c
with code highlighting and it allows codecov to record coverage data.
Hans Kristian Rosbach [Sat, 10 Jan 2026 22:32:49 +0000 (23:32 +0100)]
Don't count tests/tools towards overall project coverage.
Set project coverage target to 80%.
Loosen project coverage reduction threshold to 10% to avoid failing coverage
tests when CI happens to run on hosts that do not support AVX-512.
Set component coverage reduction thresholds low, except for common and
arch_x86 that need higher limits due to the AVX-512 CI hosts.
Vladislav Shchapov [Fri, 9 Jan 2026 20:02:11 +0000 (01:02 +0500)]
Update to GoogleTest 1.16.0.
This requires a minimum of CMake 3.13 and C++14, which matches nicely with zlib-ng 2.3.x requirements.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 19:47:20 +0000 (00:47 +0500)]
Replace deprecated FetchContent_Populate with FetchContent_MakeAvailable
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 19:01:03 +0000 (00:01 +0500)]
Remove always TRUE or FALSE CMake version checks
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 18:55:13 +0000 (23:55 +0500)]
Set minimum and upper compatible CMake version
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Sat, 10 Jan 2026 20:31:06 +0000 (21:31 +0100)]
deflateinit was still checking for failed secondary allocations; this is
no longer necessary, as we only allocate a single buffer, which has already
been checked for failure before this point.
Vladislav Shchapov [Thu, 8 Jan 2026 19:27:55 +0000 (00:27 +0500)]
Explicitly define the __SSE__ and __SSE2__ macros, since starting with MSVS 2012 the default instruction set is SSE2
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Nathan Moinvaziri [Thu, 8 Jan 2026 19:13:00 +0000 (11:13 -0800)]
Cleanup preprocessor indents in fallback_builtins.
Nathan Moinvaziri [Thu, 8 Jan 2026 00:47:06 +0000 (16:47 -0800)]
Add missing compiler preprocessor defines for 32-bit architectures
Nathan Moinvaziri [Thu, 8 Jan 2026 00:50:11 +0000 (16:50 -0800)]
Add ARCH defines to code to make the ifdef logic easier
Nathan Moinvaziri [Thu, 8 Jan 2026 00:47:54 +0000 (16:47 -0800)]
Add ARCH_32BIT and ARCH_64BIT defines for better code clarity
Hans Kristian Rosbach [Sat, 10 Jan 2026 12:54:23 +0000 (13:54 +0100)]
Ignore benchmarks in codecov coverage reports.
We already avoid collecting coverage when running benchmarks because the
benchmarks do not perform most error checking; thus, even though they might
increase code coverage, they won't detect most bugs unless one actually
crashes the whole benchmark.
Hans Kristian Rosbach [Fri, 9 Jan 2026 14:17:58 +0000 (15:17 +0100)]
Add missing resets of compiler flags after completing each test,
avoids the next test inheriting the previous flags.
Hans Kristian Rosbach [Fri, 9 Jan 2026 19:58:17 +0000 (20:58 +0100)]
Added separate components.
Wait for CI completion before posting status report, avoids emailing an initial report with very low coverage based on pigz tests only.
Make report informational, low coverage will not be a CI failure.
Disable Github Annotations, these are deprecated due to API limits.
Hans Kristian Rosbach [Fri, 9 Jan 2026 20:45:05 +0000 (21:45 +0100)]
Disable downloading extra test corpora for WITH_SANITIZER builds,
those tests are much too slow, upwards of 1 hour or more.
Hans Kristian Rosbach [Fri, 9 Jan 2026 15:08:38 +0000 (16:08 +0100)]
Resolve merge conflicts in coverage data, instead of aborting.
Nathan Moinvaziri [Thu, 8 Jan 2026 18:31:35 +0000 (10:31 -0800)]
Fix possible loss of data warning in benchmark_inflate on MSVC 2026
benchmark_inflate.cc(131,51): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
Vladislav Shchapov [Wed, 31 Dec 2025 10:57:08 +0000 (15:57 +0500)]
Fix warning: 'sprintf' is deprecated
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Sun, 28 Dec 2025 18:05:31 +0000 (19:05 +0100)]
Rebalance benchmark_compress size ranges
Hans Kristian Rosbach [Sat, 27 Dec 2025 21:53:46 +0000 (22:53 +0100)]
Improve benchmark_compress and benchmark_uncompress.
- These now use the same generated data as benchmark_inflate.
- benchmark_uncompress now also uses level 9 for compression, so that
we also get 3-byte matches to uncompress.
- Improve error checking
- Unify code with benchmark_inflate
Hans Kristian Rosbach [Sat, 27 Dec 2025 21:51:22 +0000 (22:51 +0100)]
Add new benchmark inflate_nocrc. This lets us benchmark just the
inflate process more accurately. Also adds a new shared function for
generating highly compressible data that avoids very long matches.
Nathan Moinvaziri [Thu, 1 Jan 2026 03:50:10 +0000 (19:50 -0800)]
Use Z_FORCEINLINE for all adler32 or crc32 implementation functions
Nathan Moinvaziri [Sun, 4 Jan 2026 07:54:18 +0000 (23:54 -0800)]
Simplify crc32 pre/post conditioning for consistency
Nathan Moinvaziri [Sun, 4 Jan 2026 07:22:39 +0000 (23:22 -0800)]
Simplify alignment checks in crc32_loongarch64
Nathan Moinvaziri [Sun, 4 Jan 2026 07:09:13 +0000 (23:09 -0800)]
Simplify alignment checks in crc32_armv8_pmull_eor3
Nathan Moinvaziri [Sun, 4 Jan 2026 07:09:25 +0000 (23:09 -0800)]
Simplify alignment checks in crc32_armv8
Nathan Moinvaziri [Sun, 4 Jan 2026 04:46:57 +0000 (20:46 -0800)]
Remove unnecessary buf variables in crc32_armv8.
Nathan Moinvaziri [Sun, 4 Jan 2026 04:46:57 +0000 (20:46 -0800)]
Remove unnecessary buf variables in crc32_loongarch64.
Nathan Moinvaziri [Sun, 4 Jan 2026 07:52:27 +0000 (23:52 -0800)]
Add ALIGN_DIFF to perform alignment needed to next boundary
Dougall Johnson [Sun, 28 Dec 2025 23:41:02 +0000 (15:41 -0800)]
Consume bits before branches in inflate_fast.
Vladislav Shchapov [Sat, 27 Dec 2025 19:58:55 +0000 (00:58 +0500)]
Unroll some of the adler checksum for LASX
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Mika Lindqvist [Mon, 5 Jan 2026 00:08:42 +0000 (02:08 +0200)]
[CI] Add workflow with no AVX512VNNI
* This adds coverage for optimizations that have versions for both AVX512 and AVX512VNNI
Vladislav Shchapov [Sat, 20 Dec 2025 14:06:59 +0000 (19:06 +0500)]
Use bitrev instruction on LoongArch.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
dependabot[bot] [Thu, 1 Jan 2026 07:04:31 +0000 (07:04 +0000)]
Bump actions/upload-artifact from 5 to 6
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)
---
updated-dependencies:
- dependency-name: actions/upload-artifact
dependency-version: '6'
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] [Thu, 1 Jan 2026 07:04:21 +0000 (07:04 +0000)]
Bump actions/download-artifact from 6 to 7
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)
---
updated-dependencies:
- dependency-name: actions/download-artifact
dependency-version: '7'
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
Nathan Moinvaziri [Mon, 8 Dec 2025 02:44:30 +0000 (18:44 -0800)]
Check CPU info for fast PMULL support.
armv8_pmull_eor3 is beneficial only if the CPU has multiple PMULL
execution units.
Co-authored-by: Adam Stylinski <kungfujesus06@gmail.com>
Nathan Moinvaziri [Sun, 28 Dec 2025 22:47:44 +0000 (14:47 -0800)]
Integrate ARMv8 PMULL+EOR3 crc32 algorithm from Peter Cawley
https://github.com/corsix/fast-crc32
https://github.com/zlib-ng/zlib-ng/pull/2023#discussion_r2573303259
Co-authored-by: Peter Cawley <corsix@corsix.org>
Vladislav Shchapov [Thu, 25 Dec 2025 09:40:17 +0000 (14:40 +0500)]
LoongArch64 and e2k have 8-byte general-purpose registers.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Sat, 20 Dec 2025 22:38:50 +0000 (03:38 +0500)]
Simplify LoongArch64 assembler. GCC 16, LLVM 22 have LASX and LSX conversion intrinsics.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Sat, 20 Dec 2025 20:30:38 +0000 (01:30 +0500)]
Improve LoongArch64 toolchain file.
Use COMPILER_SUFFIX variable to set gcc name suffix.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Adam Stylinski [Fri, 12 Dec 2025 21:23:27 +0000 (16:23 -0500)]
Force purely aligned loads in inflate_table code length counting
At the expense of some extra stack space and eating about 4 more cache
lines, let's make these loads purely aligned. On potato CPUs such as the
Core 2, unaligned loads in a loop are not ideal. Additionally some SBC
based ARM chips (usually the little in big.little variants) suffer a
penalty for unaligned loads. This also paves the way for a trivial
altivec implementation, for which unaligned loads don't exist and need
to be synthesized with permutation vectors.
Dougall Johnson [Wed, 10 Dec 2025 03:06:06 +0000 (19:06 -0800)]
Optimize code length counting in inflate_table using intrinsics.
https://github.com/dougallj/zlib-dougallj/commit/f23fa25aa168ef782bab5e7cd6f9df50d7bb5eb2
https://godbolt.org/z/fojxrEo4T
Co-authored-by: Nathan Moinvaziri <nathan@nathanm.com>
Nathan Moinvaziri [Fri, 26 Dec 2025 16:50:44 +0000 (08:50 -0800)]
Add missing adler32_copy_power8 implementation
Nathan Moinvaziri [Thu, 18 Dec 2025 00:35:18 +0000 (16:35 -0800)]
Add missing adler32_copy_ssse3 implementation
Nathan Moinvaziri [Fri, 26 Dec 2025 16:56:41 +0000 (08:56 -0800)]
Add missing adler32_copy_vmx implementation
Nathan Moinvaziri [Thu, 18 Dec 2025 00:12:30 +0000 (16:12 -0800)]
Add comment to adler32_copy_avx512_vnni about lower vector width usage
Nathan Moinvaziri [Fri, 26 Dec 2025 16:39:04 +0000 (08:39 -0800)]
Add static inline/Z_FORCEINLINE to crc32_(v)pclmulqdq functions.
Nathan Moinvaziri [Fri, 26 Dec 2025 08:30:58 +0000 (00:30 -0800)]
Use tail optimization in final barrett reduction
Fold 4x128-bit into a single 128-bit value using k1/k2 constants, then reduce
128-bits to 32-bits.
https://www.corsix.org/content/alternative-exposition-crc32_4k_pclmulqdq
Nathan Moinvaziri [Fri, 26 Dec 2025 08:15:20 +0000 (00:15 -0800)]
Move COPY out of fold_16 inline with other fold_# functions.
Nathan Moinvaziri [Fri, 26 Dec 2025 07:47:14 +0000 (23:47 -0800)]
Move fold calls closer to last change in xmm_crc# variables.
Nathan Moinvaziri [Fri, 26 Dec 2025 07:14:21 +0000 (23:14 -0800)]
Handle initial crc only at the beginning of crc32_(v)pclmulqdq
Nathan Moinvaziri [Sun, 14 Dec 2025 08:57:37 +0000 (00:57 -0800)]
Fix initial crc value loading in crc32_(v)pclmulqdq
In the main function, alignment diff processing was getting in the way of
XORing the initial CRC, because it does not guarantee that at least 16 bytes
have been loaded.
In fold_16, the src data was modified by the initial crc XOR before being stored to dst.
Nathan Moinvaziri [Thu, 11 Dec 2025 07:21:47 +0000 (23:21 -0800)]
Rename crc32_fold_pclmulqdq_tpl.h to crc32_pclmulqdq_tpl.h
Nathan Moinvaziri [Thu, 11 Dec 2025 06:59:50 +0000 (22:59 -0800)]
Merged crc32_fold functions save, load, reset
Nathan Moinvaziri [Sun, 14 Dec 2025 18:32:02 +0000 (10:32 -0800)]
Move crc32_fold_s struct into x86 implementation.
Nathan Moinvaziri [Fri, 19 Dec 2025 00:37:34 +0000 (16:37 -0800)]
Update crc32_fold test and benchmarks for crc32_copy
Nathan Moinvaziri [Fri, 19 Dec 2025 00:17:18 +0000 (16:17 -0800)]
Refactor crc32_fold functions into single crc32_copy
Vladislav Shchapov [Sat, 27 Dec 2025 10:58:03 +0000 (15:58 +0500)]
Remove redundant instructions in 256 bit wide chunkset on LoongArch64
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Adam Stylinski [Tue, 23 Dec 2025 23:58:10 +0000 (18:58 -0500)]
Small optimization in 256 bit wide chunkset
It turns out Intel only parses the bottom 4 bits of the shuffle vector.
This makes it already a sufficient permutation vector and saves us a
small bit of latency.
Nathan Moinvaziri [Sat, 13 Dec 2025 01:50:15 +0000 (17:50 -0800)]
Use different bit accumulator type for x86 compiler optimization
Nathan Moinvaziri [Wed, 10 Dec 2025 21:34:31 +0000 (13:34 -0800)]
Fix bits var warning conversion from unsigned int to uint8_t in MSVC
Dougall Johnson [Wed, 3 Dec 2025 07:44:56 +0000 (23:44 -0800)]
Change code table access from pointer to value in inflate_fast.
+r doesn't appear to work on MIPS or RISC-V architectures
Co-authored-by: Nathan Moinvaziri <nathan@nathanm.com>