git.ipfire.org Git - thirdparty/zlib-ng.git/log
Nathan Moinvaziri [Fri, 2 Jan 2026 03:26:14 +0000 (19:26 -0800)]
Revert "Move fold calls closer to last change in xmm_crc# variables."
The fold calls were in a better spot before, being located after the loads to
reduce latency.
This reverts commit cda0827b6d522acdb2656114e2c4b7b18b6c1c20.
Nathan Moinvaziri [Wed, 31 Dec 2025 23:18:38 +0000 (15:18 -0800)]
Remove old comments about crc32 folding from crc32 benchmark.
Nathan Moinvaziri [Fri, 2 Jan 2026 06:49:56 +0000 (22:49 -0800)]
Remove unnecessary casts from crc32_(v)pclmulqdq.
Originally, some compilers and older versions of intrinsics libraries only
provided _mm_xor_ps (for __m128) and not _mm_xor_si128 (for __m128i).
Developers would cast integer vectors to float vectors to use the XOR
operation, then cast back. Modern compilers and intrinsics headers provide
_mm_xor_si128, making these casts unnecessary.
Nathan Moinvaziri [Tue, 13 Jan 2026 00:47:43 +0000 (16:47 -0800)]
Simplify CRC32 complement operations using bitwise NOT operator
Nathan Moinvaziri [Tue, 13 Jan 2026 00:44:48 +0000 (16:44 -0800)]
Fix space indentation formatting in crc32_zbc
Nathan Moinvaziri [Sun, 11 Jan 2026 21:11:08 +0000 (13:11 -0800)]
Add fallback for __has_builtin to prevent unmatched parenthesis warning
Occurs on MSVC.
Nathan Moinvaziri [Tue, 13 Jan 2026 17:01:11 +0000 (09:01 -0800)]
Add ARM __builtin_bitreverse16 fallback implementation for GCC.
Nathan Moinvaziri [Sun, 11 Jan 2026 00:22:43 +0000 (16:22 -0800)]
Remove compiler check for builtin_bitreverse16 since we check in code
We also have a generic fallback.
Nathan Moinvaziri [Sat, 10 Jan 2026 01:38:55 +0000 (17:38 -0800)]
__builtin_bitreverse16 CMake compiler check fails for GCC 13
Provide a final check for __builtin_bitreverse16 in code.
Vladislav Shchapov [Wed, 7 Jan 2026 19:30:19 +0000 (00:30 +0500)]
Use GCC's may_alias attribute for access to buffers in crc32_chorba
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Tue, 13 Jan 2026 14:32:53 +0000 (15:32 +0100)]
Add Z_UNREACHABLE compiler hint
Hans Kristian Rosbach [Tue, 13 Jan 2026 13:55:37 +0000 (14:55 +0100)]
Fix (impossible) infinite loop in gz_fetch() detected by GCC-14 static analyzer.
According to the comment, gz_fetch() also assumes that state->x.have == 0, so
let's add an Assert to that effect.
Hans Kristian Rosbach [Mon, 12 Jan 2026 19:52:47 +0000 (20:52 +0100)]
Update static analyzer from targeting GCC v10 to v14
Mika T. Lindqvist [Mon, 12 Jan 2026 01:42:13 +0000 (03:42 +0200)]
Fix symbol mangling so symbols in shared library are exported correctly
* We need to mangle symbols in the map file, otherwise none of the symbols are exported
* Fix gz_error name conflict with zlib-ng API
Nathan Moinvaziri [Mon, 12 Jan 2026 22:57:50 +0000 (14:57 -0800)]
Remove extra indirection calling into crc32_z functions.
This also prevents the double-checking of buf == NULL.
Nathan Moinvaziri [Mon, 12 Jan 2026 19:18:56 +0000 (11:18 -0800)]
Clean up buf == NULL handling on adler32 functions and test strings.
Nathan Moinvaziri [Sun, 11 Jan 2026 00:13:54 +0000 (16:13 -0800)]
Fixed UB in adler32_avx512_copy storemask when len is 0.
Nathan Moinvaziri [Thu, 8 Jan 2026 19:02:08 +0000 (11:02 -0800)]
Rename and reorder properties in hash_test.
Nathan Moinvaziri [Sat, 10 Jan 2026 18:29:05 +0000 (10:29 -0800)]
Merge adler32 and crc32 hash test strings.
Nathan Moinvaziri [Wed, 7 Jan 2026 08:39:25 +0000 (00:39 -0800)]
Add adler32_copy unit test
Nathan Moinvaziri [Wed, 7 Jan 2026 08:34:02 +0000 (00:34 -0800)]
Separate adler32 test strings into their own source header
Hans Kristian Rosbach [Sat, 10 Jan 2026 21:08:13 +0000 (22:08 +0100)]
Simplify the gzread.c name mangling workaround by splitting out just
the workaround into a separate file. This allows us to browse gzread.c
with code highlighting and it allows codecov to record coverage data.
Hans Kristian Rosbach [Sat, 10 Jan 2026 22:32:49 +0000 (23:32 +0100)]
Don't count tests/tools towards overall project coverage.
Set project coverage target to 80%.
Loosen project coverage reduction threshold to 10% to avoid failing coverage
tests when CI happens to run on hosts that do not support AVX-512.
Set component coverage reduction thresholds low, except for common and
arch_x86 that need higher limits due to the AVX-512 CI hosts.
Vladislav Shchapov [Fri, 9 Jan 2026 20:02:11 +0000 (01:02 +0500)]
Update to GoogleTest 1.16.0.
This requires minimum CMake 3.13 and C++14, this matches nicely with zlib-ng 2.3.x requirements.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 19:47:20 +0000 (00:47 +0500)]
Replace deprecated FetchContent_Populate with FetchContent_MakeAvailable
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 19:01:03 +0000 (00:01 +0500)]
Remove always TRUE or FALSE CMake version checks
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Fri, 9 Jan 2026 18:55:13 +0000 (23:55 +0500)]
Set minimum and upper compatible CMake version
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Sat, 10 Jan 2026 20:31:06 +0000 (21:31 +0100)]
deflateInit was still checking for failed secondary allocations. This is no
longer necessary, as we only allocate a single buffer, which has already been
checked for failure before this point.
Vladislav Shchapov [Thu, 8 Jan 2026 19:27:55 +0000 (00:27 +0500)]
Explicitly define the __SSE__ and __SSE2__ macros, since starting with Visual Studio 2012 the default instruction set is SSE2
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Nathan Moinvaziri [Thu, 8 Jan 2026 19:13:00 +0000 (11:13 -0800)]
Cleanup preprocessor indents in fallback_builtins.
Nathan Moinvaziri [Thu, 8 Jan 2026 00:47:06 +0000 (16:47 -0800)]
Add missing compiler preprocessor defines for 32-bit architectures
Nathan Moinvaziri [Thu, 8 Jan 2026 00:50:11 +0000 (16:50 -0800)]
Add ARCH defines to code to make the ifdef logic easier
Nathan Moinvaziri [Thu, 8 Jan 2026 00:47:54 +0000 (16:47 -0800)]
Add ARCH_32BIT and ARCH_64BIT defines for better code clarity
Hans Kristian Rosbach [Sat, 10 Jan 2026 12:54:23 +0000 (13:54 +0100)]
Ignore benchmarks in codecov coverage reports.
We already avoid collecting coverage when running benchmarks because the
benchmarks do not perform most error checking; even though they might
increase code coverage, they won't detect most bugs unless a bug actually
crashes the whole benchmark.
Hans Kristian Rosbach [Fri, 9 Jan 2026 14:17:58 +0000 (15:17 +0100)]
Add missing resets of compiler flags after completing each test,
avoids the next test inheriting the previous flags.
Hans Kristian Rosbach [Fri, 9 Jan 2026 19:58:17 +0000 (20:58 +0100)]
Added separate components.
Wait for CI completion before posting the status report; this avoids emailing an initial report with very low coverage based on the pigz tests only.
Make the report informational; low coverage will not be a CI failure.
Disable GitHub Annotations; these are deprecated due to API limits.
Hans Kristian Rosbach [Fri, 9 Jan 2026 20:45:05 +0000 (21:45 +0100)]
Disable downloading extra test corpora for WITH_SANITIZER builds,
those tests are much too slow, upwards of 1 hour or more.
Hans Kristian Rosbach [Fri, 9 Jan 2026 15:08:38 +0000 (16:08 +0100)]
Resolve merge conflicts in coverage data, instead of aborting.
Nathan Moinvaziri [Thu, 8 Jan 2026 18:31:35 +0000 (10:31 -0800)]
Fix possible loss of data warning in benchmark_inflate on MSVC 2026
benchmark_inflate.cc(131,51): warning C4267: '=': conversion from 'size_t' to 'uint32_t', possible loss of data
Vladislav Shchapov [Wed, 31 Dec 2025 10:57:08 +0000 (15:57 +0500)]
Fix warning: 'sprintf' is deprecated
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Hans Kristian Rosbach [Sun, 28 Dec 2025 18:05:31 +0000 (19:05 +0100)]
Rebalance benchmark_compress size ranges
Hans Kristian Rosbach [Sat, 27 Dec 2025 21:53:46 +0000 (22:53 +0100)]
Improve benchmark_compress and benchmark_uncompress.
- These now use the same generated data as benchmark_inflate.
- benchmark_uncompress now also uses level 9 for compression, so that
we also get 3-byte matches to uncompress.
- Improve error checking
- Unify code with benchmark_inflate
Hans Kristian Rosbach [Sat, 27 Dec 2025 21:51:22 +0000 (22:51 +0100)]
Add new benchmark inflate_nocrc. This lets us benchmark just the
inflate process more accurately. Also adds a new shared function for
generating highly compressible data that avoids very long matches.
Nathan Moinvaziri [Thu, 1 Jan 2026 03:50:10 +0000 (19:50 -0800)]
Use Z_FORCEINLINE for all adler32 or crc32 implementation functions
Nathan Moinvaziri [Sun, 4 Jan 2026 07:54:18 +0000 (23:54 -0800)]
Simplify crc32 pre/post conditioning for consistency
Nathan Moinvaziri [Sun, 4 Jan 2026 07:22:39 +0000 (23:22 -0800)]
Simplify alignment checks in crc32_loongarch64
Nathan Moinvaziri [Sun, 4 Jan 2026 07:09:13 +0000 (23:09 -0800)]
Simplify alignment checks in crc32_armv8_pmull_eor3
Nathan Moinvaziri [Sun, 4 Jan 2026 07:09:25 +0000 (23:09 -0800)]
Simplify alignment checks in crc32_armv8
Nathan Moinvaziri [Sun, 4 Jan 2026 04:46:57 +0000 (20:46 -0800)]
Remove unnecessary buf variables in crc32_armv8.
Nathan Moinvaziri [Sun, 4 Jan 2026 04:46:57 +0000 (20:46 -0800)]
Remove unnecessary buf variables in crc32_loongarch64.
Nathan Moinvaziri [Sun, 4 Jan 2026 07:52:27 +0000 (23:52 -0800)]
Add ALIGN_DIFF to perform alignment needed to next boundary
Dougall Johnson [Sun, 28 Dec 2025 23:41:02 +0000 (15:41 -0800)]
Consume bits before branches in inflate_fast.
Vladislav Shchapov [Sat, 27 Dec 2025 19:58:55 +0000 (00:58 +0500)]
Unroll some of the adler checksum for LASX
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Mika Lindqvist [Mon, 5 Jan 2026 00:08:42 +0000 (02:08 +0200)]
[CI] Add workflow with no AVX512VNNI
* This adds coverage with optimizations that have versions for both AVX512 and AVX512VNNI
Vladislav Shchapov [Sat, 20 Dec 2025 14:06:59 +0000 (19:06 +0500)]
Use bitrev instruction on LoongArch.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
dependabot[bot] [Thu, 1 Jan 2026 07:04:31 +0000 (07:04 +0000)]
Bump actions/upload-artifact from 5 to 6
Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v5...v6)
---
updated-dependencies:
- dependency-name: actions/upload-artifact
dependency-version: '6'
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] [Thu, 1 Jan 2026 07:04:21 +0000 (07:04 +0000)]
Bump actions/download-artifact from 6 to 7
Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 6 to 7.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v6...v7)
---
updated-dependencies:
- dependency-name: actions/download-artifact
dependency-version: '7'
dependency-type: direct:production
update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
Nathan Moinvaziri [Mon, 8 Dec 2025 02:44:30 +0000 (18:44 -0800)]
Check CPU info for fast PMULL support.
armv8_pmull_eor3 is beneficial only if the CPU has multiple PMULL
execution units.
Co-authored-by: Adam Stylinski <kungfujesus06@gmail.com>
Nathan Moinvaziri [Sun, 28 Dec 2025 22:47:44 +0000 (14:47 -0800)]
Integrate ARMv8 PMULL+EOR3 crc32 algorithm from Peter Cawley
https://github.com/corsix/fast-crc32
https://github.com/zlib-ng/zlib-ng/pull/2023#discussion_r2573303259
Co-authored-by: Peter Cawley <corsix@corsix.org>
Vladislav Shchapov [Thu, 25 Dec 2025 09:40:17 +0000 (14:40 +0500)]
LoongArch64 and e2k have 8-byte general-purpose registers.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Sat, 20 Dec 2025 22:38:50 +0000 (03:38 +0500)]
Simplify LoongArch64 assembler. GCC 16, LLVM 22 have LASX and LSX conversion intrinsics.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Vladislav Shchapov [Sat, 20 Dec 2025 20:30:38 +0000 (01:30 +0500)]
Improve LoongArch64 toolchain file.
Use COMPILER_SUFFIX variable to set gcc name suffix.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Adam Stylinski [Fri, 12 Dec 2025 21:23:27 +0000 (16:23 -0500)]
Force purely aligned loads in inflate_table code length counting
At the expense of some extra stack space and eating about 4 more cache
lines, let's make these loads purely aligned. On potato CPUs such as the
Core 2, unaligned loads in a loop are not ideal. Additionally some SBC
based ARM chips (usually the little in big.little variants) suffer a
penalty for unaligned loads. This also paves the way for a trivial
altivec implementation, for which unaligned loads don't exist and need
to be synthesized with permutation vectors.
Dougall Johnson [Wed, 10 Dec 2025 03:06:06 +0000 (19:06 -0800)]
Optimize code length counting in inflate_table using intrinsics.
https://github.com/dougallj/zlib-dougallj/commit/f23fa25aa168ef782bab5e7cd6f9df50d7bb5eb2
https://godbolt.org/z/fojxrEo4T
Co-authored-by: Nathan Moinvaziri <nathan@nathanm.com>
Nathan Moinvaziri [Fri, 26 Dec 2025 16:50:44 +0000 (08:50 -0800)]
Add missing adler32_copy_power8 implementation
Nathan Moinvaziri [Thu, 18 Dec 2025 00:35:18 +0000 (16:35 -0800)]
Add missing adler32_copy_ssse3 implementation
Nathan Moinvaziri [Fri, 26 Dec 2025 16:56:41 +0000 (08:56 -0800)]
Add missing adler32_copy_vmx implementation
Nathan Moinvaziri [Thu, 18 Dec 2025 00:12:30 +0000 (16:12 -0800)]
Add comment to adler32_copy_avx512_vnni about lower vector width usage
Nathan Moinvaziri [Fri, 26 Dec 2025 16:39:04 +0000 (08:39 -0800)]
Add static inline/Z_FORCEINLINE to crc32_(v)pclmulqdq functions.
Nathan Moinvaziri [Fri, 26 Dec 2025 08:30:58 +0000 (00:30 -0800)]
Use tail optimization in final barrett reduction
Fold 4x128-bit into a single 128-bit value using k1/k2 constants, then reduce
128-bits to 32-bits.
https://www.corsix.org/content/alternative-exposition-crc32_4k_pclmulqdq
Nathan Moinvaziri [Fri, 26 Dec 2025 08:15:20 +0000 (00:15 -0800)]
Move COPY out of fold_16 inline with other fold_# functions.
Nathan Moinvaziri [Fri, 26 Dec 2025 07:47:14 +0000 (23:47 -0800)]
Move fold calls closer to last change in xmm_crc# variables.
Nathan Moinvaziri [Fri, 26 Dec 2025 07:14:21 +0000 (23:14 -0800)]
Handle initial crc only at the beginning of crc32_(v)pclmulqdq
Nathan Moinvaziri [Sun, 14 Dec 2025 08:57:37 +0000 (00:57 -0800)]
Fix initial crc value loading in crc32_(v)pclmulqdq
In the main function, alignment diff processing was getting in the way of
XORing the initial CRC, because it does not guarantee that at least 16 bytes
have been loaded.
In fold_16, src data was modified by the initial CRC XOR before being stored to dst.
Nathan Moinvaziri [Thu, 11 Dec 2025 07:21:47 +0000 (23:21 -0800)]
Rename crc32_fold_pclmulqdq_tpl.h to crc32_pclmulqdq_tpl.h
Nathan Moinvaziri [Thu, 11 Dec 2025 06:59:50 +0000 (22:59 -0800)]
Merged crc32_fold functions save, load, reset
Nathan Moinvaziri [Sun, 14 Dec 2025 18:32:02 +0000 (10:32 -0800)]
Move crc32_fold_s struct into x86 implementation.
Nathan Moinvaziri [Fri, 19 Dec 2025 00:37:34 +0000 (16:37 -0800)]
Update crc32_fold test and benchmarks for crc32_copy
Nathan Moinvaziri [Fri, 19 Dec 2025 00:17:18 +0000 (16:17 -0800)]
Refactor crc32_fold functions into single crc32_copy
Vladislav Shchapov [Sat, 27 Dec 2025 10:58:03 +0000 (15:58 +0500)]
Remove redundant instructions in 256 bit wide chunkset on LoongArch64
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Adam Stylinski [Tue, 23 Dec 2025 23:58:10 +0000 (18:58 -0500)]
Small optimization in 256 bit wide chunkset
It turns out Intel only parses the bottom 4 bits of the shuffle vector.
This makes it already a sufficient permutation vector and saves us a
small bit of latency.
Nathan Moinvaziri [Sat, 13 Dec 2025 01:50:15 +0000 (17:50 -0800)]
Use different bit accumulator type for x86 compiler optimization
Nathan Moinvaziri [Wed, 10 Dec 2025 21:34:31 +0000 (13:34 -0800)]
Fix bits var warning conversion from unsigned int to uint8_t in MSVC
Dougall Johnson [Wed, 3 Dec 2025 07:44:56 +0000 (23:44 -0800)]
Change code table access from pointer to value in inflate_fast.
+r doesn't appear to work on MIPS or RISC-V architectures
Co-authored-by: Nathan Moinvaziri <nathan@nathanm.com>
Nathan Moinvaziri [Thu, 18 Dec 2025 00:05:55 +0000 (16:05 -0800)]
Apply consistent use of UNLIKELY across adler32 variants
Nathan Moinvaziri [Wed, 17 Dec 2025 02:00:11 +0000 (18:00 -0800)]
Clean up adler32 short length functions
Hans Kristian Rosbach [Fri, 5 Dec 2025 19:04:14 +0000 (20:04 +0100)]
Improve cmake/detect-arch.cmake to also provide bitness.
Rewrite checks in CMakelists.txt and cmake/detect-intrinsics.cmake
to utilize the new variables.
Hans Kristian Rosbach [Wed, 10 Dec 2025 19:27:46 +0000 (20:27 +0100)]
Reorder deflate.h variables to improve cache locality
Hans Kristian Rosbach [Thu, 11 Dec 2025 19:34:05 +0000 (20:34 +0100)]
Use uint32_t for hash_head in update_hash/insert_string
Hans Kristian Rosbach [Thu, 11 Dec 2025 16:24:59 +0000 (17:24 +0100)]
Use uint32_t for Pos in match_tpl.h
Hans Kristian Rosbach [Mon, 8 Dec 2025 13:30:05 +0000 (14:30 +0100)]
- Reorder variables in longest_match, reducing gaps.
- Make window-based pointers in match_tpl.h const, only the
pointers move, never the data.
Hans Kristian Rosbach [Mon, 8 Dec 2025 13:30:05 +0000 (14:30 +0100)]
Use pointer arithmetic to access window in deflate_quick/deflate_fast
Hans Kristian Rosbach [Mon, 8 Dec 2025 12:18:24 +0000 (13:18 +0100)]
- Add local window pointer to:
deflate_quick, deflate_fast, deflate_medium and fill_window.
- Add local strm pointer in fill_window.
- Fix missed change to use local lookahead variable in match_tpl
Hans Kristian Rosbach [Mon, 8 Dec 2025 12:09:42 +0000 (13:09 +0100)]
Deflate_state changes:
- Reduce opt_len/static_len sizes.
- Move matches/insert closer to their related variables.
These now fill an 8-byte hole in the struct on 64-bit platforms.
- Exclude compressed_len and bits_sent if ZLIB_DEBUG is
not enabled. Also move them to the end.
- Remove x86 MSVC-specific padding
Hans Kristian Rosbach [Mon, 8 Dec 2025 12:03:33 +0000 (13:03 +0100)]
- Minor inlining changes in trees_emit.h:
- Inline the small bi_windup function
- Don't attempt inlining for the big zng_emit_dist
- Don't check for too long match in deflate_quick, it cannot happen.
- Move GOTO_NEXT_CHAIN macro outside of LONGEST_MATCH function to
improve readability.
Vladislav Shchapov [Sat, 20 Dec 2025 14:31:01 +0000 (19:31 +0500)]
Fix warnings: unused parameter state, comparison of integer expressions of different signedness: size_t and int64_t.
Signed-off-by: Vladislav Shchapov <vladislav@shchapov.ru>
Mathias Heyer [Thu, 18 Dec 2025 01:14:41 +0000 (17:14 -0800)]
slide_hash_sse2 and slide_hash_avx2 are not dependent on HAVE_BUILTIN_CTZ
This patch matches x86_functions.h with behavior found in functable.c
It fixes builds where HAVE_BUILTIN_CTZ remained undefined.
Nathan Moinvaziri [Fri, 12 Dec 2025 01:28:12 +0000 (17:28 -0800)]
Change bi_reverse to use uint16_t code arg.
Nathan Moinvaziri [Sun, 7 Dec 2025 07:56:21 +0000 (23:56 -0800)]
Use __builtin_bitreverse16 in inflate_table
https://github.com/dougallj/zlib-dougallj/commit/f23fa25aa168ef782bab5e7cd6f9df50d7bb5eb2
Nathan Moinvaziri [Sat, 6 Dec 2025 15:55:07 +0000 (07:55 -0800)]
Use __builtin_bitreverse16 in bi_reverse if available.