# at once one spatch process can take around 2.5 GiB of RAM, which can easily eat up all available RAM
# when paired together with parallel
#
- # 2) Make sure spatch can find our includes via -I <dir>, similarly as we do when compiling stuff
+ # 2) Make sure spatch can find our includes via -I <dir>, similarly as we do when compiling stuff.
+ # Also include the system include path, since we're not the kernel and we make use of the stdlib
+ # (and other libraries).
#
# 3) Make sure to include includes from includes (--recursive-includes), but use them only to get type
- # definitions (--include-headers-for-types) - otherwise we'd start formating them as well, which might be
- # unwanted, especially for includes we fetch verbatim from third-parties
+ # definitions (--include-headers-for-types) - otherwise we'd start formatting them as well, which might
+ # be unwanted, especially for includes we fetch verbatim from third parties
#
- # 4) Use cache, since generating the full AST is _very_ expensive, i.e. the uncached run takes 15 - 30
- # minutes (for one rule(!)), vs 30 - 90 seconds when the cache is populated. One major downside of the
- # cache is that it's quite big - ATTOW the cache takes around 15 GiB, but the performance boost is
- # definitely worth it
+ # 4) Explicitly undefine the SD_BOOT symbol, so Coccinelle ignores includes guarded by #if SD_BOOT
+ #
+ # 5) Use a cache, since generating the full AST is expensive. With the cache we generate it only once
+ # and then reuse the cached ASTs for the other rules, which cuts the time needed per rule by ~60%.
parallel --halt now,fail=1 --keep-order --noswap --max-args=10 \
spatch --cache-prefix "$CACHE_DIR" \
-I src \