Adam Stylinski [Tue, 11 Mar 2025 01:17:25 +0000 (21:17 -0400)]
SSE4.1 optimized chorba
This is ~25-30% faster than the SSE2 variant on a Core 2 Quad. The main reason
is that, while far fewer shifts are incurred, an entirely separate stack buffer
the size of the L1 cache on most CPUs has to be managed. This was one of the
main reasons the 32k specialized function was slower for the scalar
counterpart, despite auto vectorizing: the auto-vectorized loop was setting up
the stack buffer at unaligned offsets, which is detrimental to performance
pre-Nehalem. Additionally, we were losing a fair bit of time to the zero
initialization, which we now do more selectively.
There are a ton of loads and stores happening, and for sure we are bound
on the fill buffer + store forwarding. An SSE2 version of this code is
probably possible by simply replacing the shifts with unpacks against zero
and the palignrs with shufpds. I'm just not sure it would be all that worth
it, though. We are gating on SSE4.1 not because we are using a 4.1-specific
instruction, but because that marks when Wolfdale came out and palignr
became a lot faster.
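As a rough illustration of that substitution (a sketch, not code from this patch, and only valid when the alignment offsets are multiples of 8 bytes):
```c
#include <emmintrin.h>   /* SSE2 */
#include <tmmintrin.h>   /* SSSE3, for palignr */

/* Extract the middle 16 bytes of the 32-byte concatenation a:b (b low, a high),
   i.e. what palignr with an 8-byte shift produces. */
static inline __m128i align8_ssse3(__m128i a, __m128i b) {
    return _mm_alignr_epi8(a, b, 8);
}

static inline __m128i align8_sse2(__m128i a, __m128i b) {
    /* shufpd only shuffles at 64-bit granularity, so it can only stand in for
       palignr when the shift amount is a multiple of 8 bytes. */
    return _mm_castpd_si128(
        _mm_shuffle_pd(_mm_castsi128_pd(b), _mm_castsi128_pd(a), 1));
}
```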
Improve the speed of sub-16 byte matches by first using a 128-bit intrinsic;
after that, use only 512-bit intrinsics.
This requires us to overlap on the last run, but that is cheaper than
processing the tail with a 256-bit and then a 128-bit run.
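A minimal sketch of that layout, assuming a 256-byte compare window (illustrative names; the actual zlib-ng function differs):
```c
#include <immintrin.h>
#include <stdint.h>

/* Hypothetical compare256-style sketch: a 128-bit compare catches sub-16 byte
   matches cheaply, then 512-bit compares cover the rest, with the final block
   starting at offset 192 so it overlaps the previous one instead of needing
   a 256-bit and a 128-bit tail run. */
static uint32_t compare256_sketch(const uint8_t *src0, const uint8_t *src1) {
    __m128i a = _mm_loadu_si128((const __m128i *)src0);
    __m128i b = _mm_loadu_si128((const __m128i *)src1);
    uint32_t neq16 = (uint32_t)_mm_movemask_epi8(_mm_cmpeq_epi8(a, b)) ^ 0xFFFFu;
    if (neq16)
        return (uint32_t)__builtin_ctz(neq16);   /* match shorter than 16 bytes */

    static const uint32_t offs[4] = {16, 80, 144, 192};  /* last block overlaps by 16 */
    for (int i = 0; i < 4; i++) {
        __m512i x = _mm512_loadu_si512((const void *)(src0 + offs[i]));
        __m512i y = _mm512_loadu_si512((const void *)(src1 + offs[i]));
        __mmask64 neq = _mm512_cmpneq_epi8_mask(x, y);
        if (neq)
            return offs[i] + (uint32_t)__builtin_ctzll(neq);
    }
    return 256;
}
```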
Change benchmark steps to avoid hitting the chunk boundaries of one function
or the other as much; this gives fairer benchmarks.
Speed up benchmarks when run as part of gtest: since gtest does not check the
data for correctness, run each benchmark for only 1 iteration instead
of thousands or hundreds of thousands.
Add a separate CI step to crashtest benchmarks without collecting any coverage data.
Activate benchmarks on more architectures.
Disable some warnings to avoid errors in compiling google benchmark.
Remove separate benchmark CI job, now included in other jobs instead.
Reduce development burden by getting rid of NMake files that are manually
kept up to date. For continued NMake support please generate NMake project
files using CMake.
Pass POSIX_C_SOURCE for std::aligned_alloc try_compile checks
On FreeBSD 11, defining POSIX_C_SOURCE to a lower level has the effect of forcing the language level (__ISO_C_VISIBLE) below C11, even in the presence of -std=c11.
Since check_symbol_exists runs without setting POSIX_C_SOURCE, we would spuriously define HAVE_ALIGNED_ALLOC, while in the actual build it is not going to be defined.
Adam Stylinski [Sun, 16 Feb 2025 17:13:00 +0000 (12:13 -0500)]
Explicit SSE2 vectorization of Chorba CRC method
The version that's currently in the generic implementation for 32768
byte buffers leverages the stack. It manages to autovectorize but
unfortunately the trips to the stack hurt its performance for CPUs which
need this the most. This version is explicitly SIMD vectorized and
doesn't use trips to the stack. In my testing it's ~10% faster than the
"small" variant, and about 42% faster than the "32768" variant.
Icenowy Zheng [Mon, 24 Mar 2025 08:50:37 +0000 (16:50 +0800)]
riscv: chunkset_rvv: fix SIGSEGV in CHUNKCOPY
The chunkset_tpl comment allows a negative dist (out - from) as long as the
length is smaller than the absolute value of dist (i.e. the memory does not
overlap). However, this case is currently broken in the RVV override of
CHUNKCOPY -- it compares dist (a ptrdiff_t, a value that should be the same
size as size_t but signed) with the result of sizeof (a size_t), which
triggers the implicit conversion from signed to unsigned (thus losing
negative values).
Since the memory is guaranteed not to overlap when dist is negative, just use
a giant memcpy() call to copy everything.
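A minimal standalone sketch of that pitfall (illustrative names, not the zlib-ng code):
```c
#include <stddef.h>
#include <stdio.h>

int main(void) {
    char buf[64];
    char *out  = buf;
    char *from = buf + 32;
    ptrdiff_t dist = out - from;                 /* -32 */

    /* Buggy comparison: dist is converted to size_t, so -32 wraps to a huge
       unsigned value and the test meant to catch small distances never fires. */
    if ((size_t)dist < sizeof(void *) * 4)
        printf("not reached for negative dist\n");

    /* Signed comparison keeps the negative value intact. */
    if (dist < (ptrdiff_t)(sizeof(void *) * 4))
        printf("dist %td is negative or small\n", dist);

    return 0;
}
```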
Adam Stylinski [Tue, 25 Mar 2025 21:58:19 +0000 (17:58 -0400)]
Fix a bug on the 32k and greater chorba specializations
In testing a SIMD vectorization for this, I wrote a gtest that stumbled
onto the fact that this had a bug on big endian: the initial CRC needed to be
byte swapped before it was mixed in.
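A hedged sketch of the fix's shape (the actual zlib-ng code uses its own swap macros):
```c
#include <stdint.h>

/* Illustrative only: byte swap the initial CRC on big-endian targets before
   folding it into the buffer, which the 32k+ chorba paths were missing. */
static inline uint32_t crc_initial_fold(uint32_t crc) {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return __builtin_bswap32(crc);
#else
    return crc;
#endif
}
```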
Icenowy Zheng [Tue, 25 Mar 2025 08:23:31 +0000 (16:23 +0800)]
ci: drop RISC-V Clang test
The SiFive GitHub organization now deploys an IP allowlist that blocks
GitHub Actions, which makes this test always fail. In addition, this is
quite a different test from the other non-x86 tests.
Disable MSVC optimizations for AVX512 GET_CHUNK_MAG #1883
Older MSVC versions (VS 17.11.x) incorrectly optimize the GET_CHUNK_MAG code;
this appears to be resolved in VS 17.13.2. The compiler would optimize the
code in such a way that it caused a decompression failure.
It only happens when the /Os flag is set.
ports: Use memalign or _aligned_malloc when available. Fall back to malloc
Using "_WIN32" to decide
whether the MSVC extensions _aligned_malloc / _aligned_free are available
is a bug that breaks other compilers on Windows (OpenWatcom, for example).
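A rough sketch of that dispatch idea, assuming a HAVE_MEMALIGN-style configure macro (illustrative, not the actual zlib-ng port layer; a matching free wrapper is also needed, since _aligned_malloc requires _aligned_free):
```c
#include <stdlib.h>
#if defined(_MSC_VER) || defined(HAVE_MEMALIGN)
#  include <malloc.h>
#endif

/* Gate on the compiler, not on _WIN32, so non-MSVC Windows compilers such as
   OpenWatcom do not get routed to an extension they lack. */
static void *port_aligned_alloc(size_t align, size_t size) {
#if defined(_MSC_VER)
    return _aligned_malloc(size, align);   /* MSVC extension */
#elif defined(HAVE_MEMALIGN)
    return memalign(align, size);          /* glibc and older unices */
#else
    (void)align;
    return malloc(size);                   /* last resort: unaligned */
#endif
}
```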
Adam Stylinski [Sat, 30 Nov 2024 17:01:28 +0000 (12:01 -0500)]
Fold a copy into the adler32 function for UPDATEWINDOW for neon
A lot of alterations had to be made to keep this from being worse, and so
far it's not really better, either. I had to force inlining for the adler
routine, I had to remove the x4 load instruction because otherwise pipelining
stalled, and I had to use restrict pointers with a copy idiom so that GCC
would inline a copy routine for the tail.
Still, we see a small benefit in benchmarks, particularly when done
with sizes of our window or larger. There's also an added benefit that
this will fix #1824.
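For illustration, a restrict-qualified tail-copy idiom of the kind described (a sketch, not the actual patch):
```c
#include <stddef.h>
#include <stdint.h>

/* With both pointers marked restrict, GCC can prove no aliasing and inline or
   vectorize this small tail copy instead of calling out to memcpy. */
static inline void copy_tail(uint8_t *restrict dst,
                             const uint8_t *restrict src, size_t len) {
    while (len--)
        *dst++ = *src++;
}
```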
Clean up internal crc32 function handling.
Mark crc32_c and crc32_braid functions as internal, and remove prefix.
Reorder contents of generic_functions, and remove Z_INTERNAL hints from declarations.
Add test/benchmark output to indicate whether Chorba is used.
Clean up crc32_braid.
- Rename N and W to BRAID_N and BRAID_W
- Remove override capabilities for BRAID_N and BRAID_W
- Fix formatting in crc32_braid_tbl.h
- Make makecrct not rely on crc32_braid_p.h
Adam Stylinski [Mon, 3 Feb 2025 02:05:37 +0000 (21:05 -0500)]
Fix an unfortunate bug with Visual Studio 2015
Evidently this instruction, despite the intrinsic having a register operand,
is a memory-register instruction. There seems to be no alignment requirement
for the source operand. Because of this, when not optimizing, compilers do
the unaligned load and then dump back to the stack to do the broadcasting load.
In doing so, MSVC seems to dump to the stack with an aligned move at an
unaligned address, causing a segfault. GCC does not make this mistake, as
it stashes to an aligned address.
If we're on Visual Studio 2015, let's just do the longer 9-cycle sequence of a
128-bit load followed by a vinserti128. This _should_ fix issue #1861.
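A hedged sketch of that workaround's shape (the version cutoff and function name here are illustrative; the actual guard in zlib-ng may differ):
```c
#include <immintrin.h>

static inline __m256i load_bcast_128(const void *p) {
#if defined(_MSC_VER) && _MSC_VER < 1910   /* assumed cutoff: VS2015 and older */
    /* Safe two-step sequence: unaligned 128-bit load, then insert into the
       high lane (vinserti128). */
    __m128i lo = _mm_loadu_si128((const __m128i *)p);
    return _mm256_inserti128_si256(_mm256_castsi128_si256(lo), lo, 1);
#else
    /* The broadcast intrinsic takes a register operand, but vbroadcasti128
       itself only accepts a memory source, which is what tripped up MSVC. */
    return _mm256_broadcastsi128_si256(_mm_loadu_si128((const __m128i *)p));
#endif
}
```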
Eduard Stefes [Tue, 21 Jan 2025 09:48:07 +0000 (10:48 +0100)]
Disable CRC32-VX Extension for some Clang versions
We have to disable the CRC32-VX implementation for some Clang versions
(18 <= version < 19.1.2) that generate bad code for the IBM S390 VGFMA intrinsics.
Dmitry Kurtaev [Wed, 15 Jan 2025 17:28:44 +0000 (20:28 +0300)]
Workaround error G6E97C40B
Warning treated as an error with GCC from Ubuntu 24.04:
```
/home/runner/work/dotnet_riscv/dotnet_riscv/runtime/src/native/external/zlib-ng/arch/riscv/riscv_features.c(25,33): error G6E97C40B: suggest parentheses around ‘&&’ within ‘||’ [-Wparentheses] [/home/runner/work/dotnet_riscv/dotnet_riscv/runtime/src/native/libs/build-native.proj]
```
Sam James [Thu, 9 Jan 2025 11:36:40 +0000 (11:36 +0000)]
cmake: disable LTO for some configure checks
Some of zlib-ng's configure tests define a function expecting it to be compiled but
don't call that function, or don't use its return value. This is risky with
LTO where the whole thing may be optimised out, which has happened before:
* https://github.com/zlib-ng/zlib-ng/issues/1616
* https://github.com/zlib-ng/zlib-ng/pull/1622
* https://gitlab.kitware.com/cmake/cmake/-/issues/26103
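For illustration, a configure probe of the sort that can go wrong (a generic sketch, not one of zlib-ng's actual checks):
```c
/* If main() never calls probe() or discards its result, LTO may remove the
   probe's body entirely and the check passes or fails for the wrong reason. */
int probe(void) {
    return 1;
}

int main(void) {
    /* Returning the result keeps probe() reachable and its code generated. */
    return probe();
}
```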
Continued cleanup of old UNALIGNED_OK checks
- Remove obsolete checks
- Fix checks that are inconsistent
- Stop compiling compare256/longest_match variants that never get called
- Improve how the generic compare256 functions are handled.
- Allow overriding OPTIMAL_CMP
This simplifies the code and avoids having a lot of code in the compiled library that can never get executed.
Adam Stylinski [Sat, 21 Dec 2024 16:04:47 +0000 (11:04 -0500)]
Fix unaligned access in ACLE based crc32
This fixes a rightful complaint from the alignment sanitizer that we
alias memory in an unaligned fashion. A nice added bonus is that this
improves performance a tiny bit on the larger buffers, perhaps due to
loops that idiomatically decrement a count and increment a single buffer
pointer rather than the maze of conditional pointer reassignments.
While here, let's write a unit test just for this. Since this is the only
variant that accesses memory in a potentially unaligned fashion that doesn't
explicitly go byte by byte or use intrinsics that don't require alignment,
we'll enable it only for this function for now. Adding more tests later if
need be should be possible. For everything else not crc, we're relying on
ubsan to hopefully catch things by chance.
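As a sketch of the alignment-safe load idiom involved (not the patch itself):
```c
#include <stdint.h>
#include <string.h>

/* Load 8 bytes without asserting alignment: memcpy instead of casting the
   buffer pointer to uint64_t*, which the alignment sanitizer rightly flags.
   Compilers turn this into a single unaligned-capable load. */
static inline uint64_t load_u64(const unsigned char *p) {
    uint64_t v;
    memcpy(&v, p, sizeof(v));
    return v;
}
```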
Adam Stylinski [Sat, 21 Dec 2024 15:09:58 +0000 (10:09 -0500)]
Fix "RLE" compression with big endian architectures
This was missed in #1831. The RLE methods compare a string of bytes
directly with itself to directly derive a simple run length encoding.
They use similar but not identical methods to compare256. This needs
a similar endianness check at compile time to know which compare bit
count to use (leading or trailing).
Adam Stylinski [Fri, 20 Dec 2024 23:53:51 +0000 (18:53 -0500)]
Make big endians first class citizens again
No longer does the big iron of yore, which lacks SIMD optimized loads, need
to search strings a byte at a time like primitive machines of the VAX
era. This guard was mostly there because the string comparison was searched
with "count trailing zeros", which assumes an endianness. We can just
conditionally use leading zeros when on big endian and stop using the
extremely naive C implementation. This makes things a tad bit faster.
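For illustration, the compile-time choice between the two bit counts might look like this (a sketch, not the actual compare256 code):
```c
#include <stdint.h>

/* Index of the first differing byte between two 64-bit loads; assumes a != b.
   Little endian puts the first byte in the low bits (trailing zeros), big
   endian puts it in the high bits (leading zeros). */
static inline unsigned first_mismatch(uint64_t a, uint64_t b) {
    uint64_t diff = a ^ b;
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
    return (unsigned)(__builtin_clzll(diff) >> 3);
#else
    return (unsigned)(__builtin_ctzll(diff) >> 3);
#endif
}
```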
Icenowy Zheng [Sat, 14 Dec 2024 17:31:48 +0000 (01:31 +0800)]
adler32_rvv: Fix some overflow problems
There are currently some overflow problems in the adler32_rvv
implementation, which can lead to wrong results for some input. These
problems were easily exhibited when running `git fsck` with zlib-ng
substituting the system zlib on a big git repository.
These problems and their solutions are the following:
- When the input data is long enough, the v_buf32_accu can overflow too.
Add it to the modulo code that happens per ~NMAX bytes.
- When the vector data is reduced to scalar values, the resulting scalar
value (and the processed length) may cause the calculation of sum2
to overflow. Add mod BASE to all these reductions and to the initial
calculation of sum2.
- When the remaining data is less than vl bytes, the code falls back to a
scalar implementation; however, the sum2 and adler values are just
reduced from vectors and can be big enough that sum2 overflows
in the scalar code. Take them modulo BASE before the scalar code to prevent
such overflow (because vl is surely much smaller than NMAX).
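A small sketch of that last fix's shape, with BASE as in adler32 (illustrative, not the actual RVV code):
```c
#include <stddef.h>
#include <stdint.h>

#define BASE 65521U   /* largest prime smaller than 65536 */

/* After reducing the vector accumulators, clamp both sums mod BASE before
   handing the sub-vl tail to the scalar loop so sum2 cannot overflow there. */
static uint32_t adler_tail(uint32_t adler, uint32_t sum2,
                           const uint8_t *buf, size_t len) {
    adler %= BASE;
    sum2  %= BASE;
    while (len--) {          /* len < vl here, far below NMAX */
        adler += *buf++;
        sum2  += adler;
    }
    adler %= BASE;
    sum2  %= BASE;
    return adler | (sum2 << 16);
}
```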
Since we long ago made unaligned reads safe (by using memcpy or intrinsics),
it is time to remove the UNALIGNED_OK checks, which have since really only
been used to select the optimal comparison sizes for the arch.
Adam Stylinski [Sat, 30 Nov 2024 14:23:28 +0000 (09:23 -0500)]
Improve pipeling for AVX512 chunking
For reasons that aren't quite clear, using the masked writes here
did not pipeline very well; either setting up the mask stalled things
or masked moves have issues overlapping regular moves. Simply putting
the masked moves behind a branch that is rarely taken seemed to do the
trick in improving the ILP. While here, put masked loads behind the same
branch in case there was ever a hazard of overreading.
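As an illustration of the shape of that change (a sketch, not the actual chunkset code):
```c
#include <immintrin.h>
#include <stddef.h>
#include <stdint.h>

/* Keep the common full-width store on the unmasked fast path; only build a
   mask for the rare sub-64-byte remainder, so mask setup stays off the hot path. */
static inline void store_up_to_64(uint8_t *dst, __m512i v, size_t len) {
    if (len >= 64) {
        _mm512_storeu_si512((void *)dst, v);
    } else {
        __mmask64 m = ((__mmask64)1 << len) - 1;   /* len < 64 here */
        _mm512_mask_storeu_epi8((void *)dst, m, v);
    }
}
```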
Adam Stylinski [Thu, 28 Nov 2024 00:00:52 +0000 (19:00 -0500)]
Enable AVX2 functions to be built with BMI2 instructions
While these are technically different instructions, no such CPU exists
that has AVX2 that doesn't have BMI2. Enabling BMI2 allows us to
eliminate several flag stalls by having flagless versions of shifts, and
allows us to not clobber and move around GPRs so much in scalar code.
There's usually a sizeable benefit for enabling it. Since we're building
with BMI2 for AVX2 functions, let's also just make sure the CPU claims
to support it (just to cover our bases).
Adam Stylinski [Thu, 28 Nov 2024 19:05:32 +0000 (14:05 -0500)]
Fix native detection of CRC instruction
It's unclear whether Raspberry Pi OS's shipped GCC fails to properly detect
ACLE (/proc/cpuinfo claims to support AES), but in any case, the
preprocessor macro for that flag is not defined with -march=native on a
Raspberry Pi 5. Unfortunately, that means when built "WITH_NATIVE", we do
not get a fast CRC function. The CRC32 preprocessor macro _IS_ defined,
and the auto detection when built without NATIVE support does properly
get dispatched to. Since we only need the scalar CRC32 and not the polynomial
stuff anyhow, let's make it an || condition rather than an &&.
Pavel P [Wed, 27 Nov 2024 21:13:34 +0000 (23:13 +0200)]
Fix casting warning/error in test_compress_bound.cc
Fixes the following error when building with the MSVC compiler:
```
test_compress_bound.cc
D:\zlib-ng\test\test_compress_bound.cc(41,50): error C2220: the following warning is treated as an error
D:\zlib-ng\test\test_compress_bound.cc(41,50): warning C4267: 'argument': conversion from 'size_t' to 'unsigned long', possible loss of data
D:\zlib-ng\test\test_compress_bound.cc(43,68): warning C4267: 'argument': conversion from 'size_t' to 'unsigned long', possible loss of data
```
Adam Stylinski [Wed, 25 Sep 2024 21:56:36 +0000 (17:56 -0400)]
Make an AVX512 inflate fast with low cost masked writes
This takes advantage of the fact that, on AVX512 architectures, masked
moves are incredibly cheap. There are many places where we have to
fall back to the safe C implementation of chunkcopy_safe because of the
assumed overwriting that occurs. We're able to sidestep most of the branching
needed here by simply controlling the bounds of our writes with a mask.
Adam Stylinski [Thu, 12 Sep 2024 21:47:30 +0000 (17:47 -0400)]
Make chunkset_avx2 half chunk aware
This gives us appreciable gains on a number of fronts. The first is that
we're inlining a pretty hot function that was getting dispatched to
regularly. Another is that we're able to do a safe lagged copy of a
smaller distance, so CHUNKCOPY gets its teeth back here for
smaller sizes without having to dispatch to another function.
We're also now doing two overlapping writes at once and letting the CPU
do its store forwarding. This was an enhancement @dougallj had suggested
a while back.
Additionally, the "half chunk mag" here is fundamentally less
complicated because it doesn't require synthesizing cross-lane permutes
with a blend operation, so we can optimistically do that first if the
len is small enough that a full 32-byte chunk doesn't make any sense.
Adam Stylinski [Wed, 11 Sep 2024 22:34:54 +0000 (18:34 -0400)]
Simplify avx2 chunkset a bit
Put length 16 in the length-checking ladder and take care of it there,
since it's also a simple case to handle. We kind of went out of our way
to pretend 128-bit vectors didn't exist when using AVX2, but this can be
handled in a single instruction. Strangely, the intrinsic uses vector
register operands but the instruction itself assumes a memory operand
for the source. This also means we don't have to handle this case in our
"GET_CHUNK_MAG" function.