Speed up software CRC-32 computation by a factor of 1.5 to 3.
Use the interleaved method of Kadatch and Jenkins in order to make
use of pipelined instructions through multiple ALUs in a single
core. This also speeds up and simplifies the combination of CRCs,
and updates the functions to pre-calculate and use an operator for
CRC combination.
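For reference, combining CRCs with a pre-calculated operator reduces to one GF(2) matrix application plus an XOR. A minimal sketch of that idea (function names and storage layout here are illustrative, not the exact zlib/zlib-ng API):

    #include <stdint.h>

    /* Apply a precomputed GF(2) operator (a 32x32 bit matrix stored as 32
       uint32_t rows) to a CRC value. The operator depends only on the length
       of the second block, so it can be pre-calculated once and reused for
       every combine of that length. */
    static uint32_t gf2_matrix_times(const uint32_t op[32], uint32_t vec) {
        uint32_t sum = 0;
        for (int i = 0; i < 32 && vec != 0; i++, vec >>= 1) {
            if (vec & 1)
                sum ^= op[i];
        }
        return sum;
    }

    /* crc(A || B) from crc1 = crc(A), crc2 = crc(B), and the operator for
       len(B): shift crc1 past B in GF(2), then fold in crc2. */
    static uint32_t crc32_combine_op_sketch(uint32_t crc1, uint32_t crc2,
                                            const uint32_t op[32]) {
        return gf2_matrix_times(op, crc1) ^ crc2;
    }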
Adam Stylinski [Fri, 8 Apr 2022 17:24:21 +0000 (13:24 -0400)]
Adding avx512_vnni inline + copy elision
An interesting revelation while benchmarking all of this is that our
chunkmemset_avx seems to be slower than chunkmemset_sse in a lot of use
cases. That will be an interesting function to attempt to optimize.
Right now, though, we're basically beating Google on all PNG decode and
encode benchmarks. There are some variations of flags that can have us
trading blows, but we're as much as 14% faster than Chromium's zlib
patches.
While we're here, add a more direct benchmark of the folded copy method
versus the explicit copy + checksum.
Adam Stylinski [Fri, 8 Apr 2022 02:57:09 +0000 (22:57 -0400)]
Added inlined AVX512 adler checksum + copy
While we're here, also simplify the "fold" signature, as reducing the
number of rebases and horizontal sums did not prove meaningfully faster
(and was slower in many circumstances).
Adam Stylinski [Wed, 6 Apr 2022 19:38:20 +0000 (15:38 -0400)]
Add AVX2 inline copy + adler implementation
This was pretty much an across-the-board win for performance, but the
gains are very data dependent and hinge on what the copy runs look like.
On our less-than-realistic data in benchmark_zlib_apps, the decode test
saw some of the bigger gains, ranging anywhere from 6 to 11% when
compiled with AVX2 on a Cascade Lake CPU (and with only AVX2 enabled).
Decode on realistic imagery enjoyed smaller gains, somewhere between
2 and 4%.
Interestingly, there was one outlier on encode, at level 5. The best
theory for this is that the copy runs at that particular compression
level were such that glibc's ERMS-aware memmove implementation, using
the rep movs string sequence and some clever microcoding on Intel's
part, managed to marginally outpace the copy done during the checksum.
It's hard to say for sure, but the most standout difference between the
two perf profiles was more time spent in memmove (which is expected, as
that path calls memcpy instead of copying the bytes during the checksum).
There's the distinct possibility that the AVX2 checksums could be
marginally improved by one level of unrolling (like what's done in the
SSE3 implementation). The AVX512 implementations certainly gain from
this, but it's not appropriate to add that optimization in this series
of commits.
Adam Stylinski [Sun, 3 Apr 2022 16:18:12 +0000 (12:18 -0400)]
Adding an SSE42 optimized copy + adler checksum implementation
We are guarding its usage behind a number of preprocessor macros, as the
other methods are not yet implemented and calling this version would
implicitly bypass the faster adler implementations.
When more versions are written for faster vectorizations, the functable
entries will be populated and the preprocessor macros removed. For this
round, the copy + checksum does not employ as many tricks as one would
hope for in a "folded" checksum routine; the reason is the particularly
tricky case of dealing with unaligned buffers. Implementations targeting
CPUs that do not pay a huge penalty for unaligned loads will be much
faster.
Fancier methods that minimized rebasing, while having the potential to
be faster, ended up being slower because the compiler structured the
code in a way that either spilled to the stack or trampolined out of the
loop and back into it instead of simply jumping over the first load and
store.
Revisiting this for AVX512, where registers are more abundant and more
advanced loads exist, may be prudent.
Adam Stylinski [Fri, 1 Apr 2022 23:02:05 +0000 (19:02 -0400)]
Create adler32_fold_c* functions
These are very simple wrappers that do nothing clever themselves, but
they serve as a shim interface for implementing versions that do
cleverly track the number of scalar sums performed, so that we can
minimize rebasing and also get efficient copy elision.
This serves as the baseline as each vectorization gets its own commit.
That way the PR will be bisectable.
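As a rough picture of the contract these shims provide, here is a hedged scalar sketch of a checksum-while-copying routine (the name echoes the commit title; the body is illustrative, not necessarily zlib-ng's exact wrapper):

    #include <stddef.h>
    #include <stdint.h>

    #define ADLER_BASE 65521u   /* largest prime smaller than 2^16 */

    /* Compute adler32 over src while copying it to dst, so callers get copy
       elision: one pass over the data instead of checksum + memcpy. */
    static uint32_t adler32_fold_copy_sketch(uint32_t adler, uint8_t *dst,
                                             const uint8_t *src, size_t len) {
        uint32_t a = adler & 0xffff;
        uint32_t b = (adler >> 16) & 0xffff;

        for (size_t i = 0; i < len; i++) {
            dst[i] = src[i];
            a += src[i];
            b += a;
            /* Real implementations batch many bytes before reducing;
               reducing every byte keeps this sketch obviously correct. */
            a %= ADLER_BASE;
            b %= ADLER_BASE;
        }
        return (b << 16) | a;
    }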
Adam Stylinski [Sun, 10 Apr 2022 17:01:22 +0000 (13:01 -0400)]
Improved chunkset substantially where it's heavily used
For most realistic use cases, this doesn't make a ton of difference.
However, for things which are highly compressible and enjoy very large
run length encodes in the window, this is a huge win.
We leverage a permutation table to swizzle the contents of the memory
chunk into a vector register and then splat that over memory with a fast
copy loop.
In essence, where this helps, it helps a lot. Where it doesn't, it does
no measurable damage to the runtime.
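As a sketch of the permutation-table idea (SSE shown for brevity; the mask construction and loop bookkeeping here are illustrative, not zlib-ng's actual tables or loop):

    #include <stddef.h>
    #include <stdint.h>
    #include <tmmintrin.h>   /* SSSE3: _mm_shuffle_epi8 */

    /* For a short match distance dist (1..16), swizzle the dist-byte pattern
       into a register so it repeats across the whole lane, then splat it over
       the output with full-width stores. Assumes at least 16 readable bytes
       at `from` (true inside the sliding window). */
    static void chunkset_perm_sketch(uint8_t *out, const uint8_t *from,
                                     unsigned dist, size_t len) {
        uint8_t idx[16];
        for (int i = 0; i < 16; i++)
            idx[i] = (uint8_t)(i % dist);            /* 0,1,...,dist-1,0,1,... */

        __m128i perm = _mm_loadu_si128((const __m128i *)idx);
        __m128i rep  = _mm_shuffle_epi8(_mm_loadu_si128((const __m128i *)from), perm);

        unsigned adv = 16 - (16 % dist);             /* largest multiple of dist <= 16 */
        while (len >= 16) {
            _mm_storeu_si128((__m128i *)out, rep);   /* overlapping stores stay in phase */
            out += adv;
            len -= adv;
        }
        for (size_t i = 0; i < len; i++)             /* scalar tail, phase preserved */
            out[i] = from[i % dist];
    }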
This commit also simplifies a chunkcopy_safe call for determining a
distance. Using labs is enough to give the same behavior as before,
with the added benefit that no predication is required _and_, most
importantly, static analysis by GCC's string fortification can't throw a
fit because it conveys better to the compiler that the input into
builtin_memcpy will always be in range.
Adam Stylinski [Wed, 27 Apr 2022 01:53:30 +0000 (21:53 -0400)]
Fixed regression introduced by inlining CRC + copy
Pretty much every time updatewindow has been called, a checksum was
implicitly performed, except on s/390 or when (state->wrap & 4) == 0.
The inflateSetDictionary function instead calls this checksum separately
before invoking updatewindow and checks whether the result matches the
initial checksum (a property that comes from parsing the DICTID section
of the headers).
Instead, we can give updatewindow a "copy" parameter, which is the
state->wrap value being checked anyway, and move the check of the 3rd
bit to the caller rather than the callee.
Currently deflate and inflate both use a common state struct. There are
several variables in this struct that we don't need for inflate, and
more may be coming in the future. Therefore split them into two separate
structs. This in turn requires splitting the ZALLOC_STATE and
ZCOPY_STATE macros.
https://github.com/powturbo/TurboBench links zlib and zlib-ng into the
same binary, causing non-static symbol conflicts. Fix by using PREFIX()
for flush_pending(), bi_reverse(), inflate_ensure_window() and all of
the IBM Z symbols.
Note: do not use an explicit zng_, since one of the long-term goals is
to be able to link two versions of zlib-ng into the same binary for
benchmarking [1].
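The mechanism is roughly the following (hedged sketch; zlib-ng's actual macro lives in its build headers and the exact spelling may differ):

    /* Internal symbols are wrapped in PREFIX() so that, in a zng_-prefixed
       (non-compat) build, helpers like flush_pending() become
       zng_flush_pending() and cannot collide with stock zlib's symbols when
       both libraries are linked into one binary. */
    #ifdef ZLIB_COMPAT
    #  define PREFIX(x) x
    #else
    #  define PREFIX(x) zng_ ## x
    #endif

    /* e.g. a declaration written as PREFIX(inflate_ensure_window) expands to
       inflate_ensure_window or zng_inflate_ensure_window depending on the
       build flavor. */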
Mika Lindqvist [Tue, 12 Apr 2022 22:22:29 +0000 (01:22 +0300)]
Check that sys/auxv.h exists at configure time and add a preprocessor define for it.
* Protect the inclusion of sys/auxv.h in all relevant files with the new preprocessor define
* Test for the existence of both sys/auxv.h and getauxval() with both cmake and configure
Mika Lindqvist [Tue, 5 Apr 2022 21:04:45 +0000 (00:04 +0300)]
Add one extra byte to the return value of compressBound and deflateBound for small lengths, due to the shift returning 0.
* Treat 0-byte input as 1-byte input when calculating compressBound and deflateBound
Rename the memory alignment functions: they take the custom allocator as their first parameter, so having calloc and cfree (c = custom) in their names is confusing.
Adam Stylinski [Wed, 6 Apr 2022 22:15:57 +0000 (18:15 -0400)]
Fix the custom PNG image based benchmark
The height parameter was using a fixed macro, written at a time when the
test imagery was fully synthetic. Because of this, images smaller than
our in-memory generated imagery would artificially throw a CRC error.
Remove sanitizer support from configure since it is better supported in cmake. Anybody who still needs it can use cmake or manually set CFLAGS and LDFLAGS.
Adam Stylinski [Sun, 27 Mar 2022 23:20:08 +0000 (19:20 -0400)]
Use size_t types for len arithmetic, matching the signature
This suppresses a warning and keeps everything safely the same type.
While it's unlikely that the input to any of this will exceed the size
of an unsigned 32-bit integer, this approach is cleaner than casting and
should not result in a performance degradation.
Adam Stylinski [Sat, 12 Mar 2022 21:09:02 +0000 (16:09 -0500)]
Leverage inline CRC + copy
This brings back a bit of the performance that may have been sacrificed
by reverting the reorganized inflate window. Doing a copy at the same
time as a CRC is basically free.
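For intuition, a hedged scalar sketch of folding the copy into the CRC loop (bitwise CRC-32 shown so the example is self-contained; the real code is table- or instruction-driven):

    #include <stddef.h>
    #include <stdint.h>

    /* Copy src to dst while updating a reflected CRC-32 over the same bytes,
       so the data is only streamed through once. */
    static uint32_t crc32_fold_copy_sketch(uint32_t crc, uint8_t *dst,
                                           const uint8_t *src, size_t len) {
        crc = ~crc;
        for (size_t i = 0; i < len; i++) {
            dst[i] = src[i];                 /* the "free" copy */
            crc ^= src[i];
            for (int k = 0; k < 8; k++)      /* bit-at-a-time, polynomial 0xEDB88320 */
                crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
        }
        return ~crc;
    }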
Fixed signed comparison warning in zng_calloc_aligned.
zutil.c: In function ‘zng_calloc_aligned’:
zutil.c:133:20: warning: comparison of integer expressions of different signedness: ‘int32_t’ {aka ‘int’} and ‘long unsigned int’ [-Wsign-compare]
Fixed operator precedence warnings in slide_hash_sse2.
slide_hash_sse2.c(58,5): warning C4554: '&': check operator precedence for possible error; use parentheses to clarify precedence
slide_hash_sse2.c(59,5): warning C4554: '&': check operator precedence for possible error; use parentheses to clarify precedence
inflate_p.h(244,18): warning C4018: '>': signed/unsigned mismatch
inflate_p.h(234,38): warning C4244: 'initializing': conversion from '__int64' to 'int', possible loss of data
inffast.c
inflate_p.h(244,18): warning C4018: '>': signed/unsigned mismatch
inflate_p.h(234,38): warning C4244: 'initializing': conversion from '__int64' to 'int', possible loss of data
inflate.c
inflate_p.h(244,18): warning C4018: '>': signed/unsigned mismatch
inflate_p.h(234,38): warning C4244: 'initializing': conversion from '__int64' to 'int', possible loss of data
Adam Stylinski [Fri, 18 Mar 2022 23:18:10 +0000 (19:18 -0400)]
Fix an issue with the ubsan for overflow
While this didn't _actually_ cause any issues for us, technically the
_mm512_reduce_add_epi32() intrinsic returns a signed integer and
performs the very last summation in scalar GPRs as signed integers.
While the ALU still did the math properly (the negative representation
is the same addition in hardware, just interpreted differently), the
sanitizer caught a window of inputs here that is definitely outside the
range of a signed integer for this intermediate operation.
The solution, as silly as it may seem, is to implement our own 32-bit
horizontal sum function that does all of the work in vector registers.
This keeps things in the vector register domain and converts to a scalar
only at the very end, after the summation is complete. The sanitizer is
none the wiser and the result is still correct.
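A hedged sketch of such a horizontal sum, staying in vector registers until the final extraction (intrinsic choices are illustrative; AVX512F and AVX2 assumed):

    #include <immintrin.h>
    #include <stdint.h>

    /* Reduce the 16 x 32-bit lanes of a __m512i to one 32-bit value using
       only vector adds; the scalar conversion happens once at the end, so no
       intermediate signed scalar addition is exposed to the sanitizer. */
    static inline uint32_t hsum_epu32_sketch(__m512i v) {
        __m256i s8 = _mm256_add_epi32(_mm512_castsi512_si256(v),
                                      _mm512_extracti64x4_epi64(v, 1));
        __m128i s4 = _mm_add_epi32(_mm256_castsi256_si128(s8),
                                   _mm256_extracti128_si256(s8, 1));
        __m128i s2 = _mm_add_epi32(s4, _mm_shuffle_epi32(s4, _MM_SHUFFLE(1, 0, 3, 2)));
        __m128i s1 = _mm_add_epi32(s2, _mm_shuffle_epi32(s2, _MM_SHUFFLE(2, 3, 0, 1)));
        return (uint32_t)_mm_cvtsi128_si32(s1);
    }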
Adam Stylinski [Sun, 20 Mar 2022 15:44:32 +0000 (11:44 -0400)]
Rename adler32_sse41 to adler32_ssse3
As it turns out, the sum of absolute differences instruction _did_ exist
in SSSE3 all along. SSE41 introduced a stranger, less commonly used
variation of the sum of absolute differences instruction. Knowing this,
the old SSSE3 method can be axed entirely, and the SSE41 method can now
be used on CPUs that only have SSSE3.
Removing this extra functable entry shrinks the code and allows for a
simpler planned refactor later for the adler checksum and copy elision.
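For context, the building block in question is the plain sum-of-absolute-differences against a zero vector, which yields cheap per-half byte sums (a tiny hedged illustration, not the full adler kernel):

    #include <emmintrin.h>   /* _mm_sad_epu8 */
    #include <stdint.h>

    /* psadbw against zero turns a 16-byte register into two 64-bit partial
       sums of its bytes (low 8 and high 8), which the adler code then
       accumulates. */
    static inline uint32_t sum_bytes_sketch(__m128i bytes) {
        __m128i sad = _mm_sad_epu8(bytes, _mm_setzero_si128());
        return (uint32_t)_mm_cvtsi128_si32(sad) +
               (uint32_t)_mm_cvtsi128_si32(_mm_srli_si128(sad, 8));
    }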
Adam Stylinski [Fri, 18 Mar 2022 00:22:56 +0000 (20:22 -0400)]
Fix a latent issue with chunkmemset
It would seem that on some platforms, namely those which are
!UNALIGNED64_OK, there was a likelihood of chunkmemset_safe_c copying
all of the bytes before passing control flow to chunkcopy, a function
that is explicitly unsafe to call with a zero-length copy.
Adam Stylinski [Thu, 17 Mar 2022 02:52:44 +0000 (22:52 -0400)]
Fix UBSAN's cry afoul
Technically, we weren't actually doing this the way C legally wants us
to. The zmemcpy calls turn into NOPs at pretty much all optimization
levels above 0, and this gets us defined behavior that satisfies the
sanitizer, putting the optimized load from an arbitrarily aligned
address into the compiler's hands instead of ours.
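The idiom being relied on is the standard memcpy-based unaligned load, roughly:

    #include <stdint.h>
    #include <string.h>

    /* A UB-free load from a possibly unaligned pointer: express it as memcpy
       and let the compiler lower it to a single unaligned load at -O1 and
       above. */
    static inline uint64_t load_64_sketch(const void *p) {
        uint64_t v;
        memcpy(&v, p, sizeof(v));
        return v;
    }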
Mika Lindqvist [Sun, 13 Mar 2022 15:12:42 +0000 (17:12 +0200)]
Allow bypassing runtime feature check of TZCNT instructions.
* This avoids a conditional branch when it is known at build time that TZCNT instructions are always supported.
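Schematically, the bypass looks like the following (macro and variable names here are hypothetical, not zlib-ng's exact ones):

    /* When the build targets a CPU family known to have TZCNT/BMI, the
       runtime CPU-feature check collapses to a constant and the branch
       disappears; otherwise the cached CPUID result is consulted. */
    #if defined(__BMI__) || defined(NO_TZCNT_CHECK)      /* hypothetical gate */
    #  define TZCNT_AVAILABLE() 1
    #else
    #  define TZCNT_AVAILABLE() (x86_cpu_has_tzcnt)      /* hypothetical flag */
    #endif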