accesses. Released objects are instantly freed using munmap() so that any
immediate subsequent access to the memory area crashes the process if the
area had not been reallocated yet. This mode can be enabled at build time
- by setting DEBUG_UAF. It tends to consume a lot of memory and not to scale
- at all with concurrent calls, that tends to make the system stall. The
- watchdog may even trigger on some slow allocations.
+ by setting DEBUG_UAF, or at run time by disabling pools and enabling UAF
+ with "-dMuaf". It tends to consume a lot of memory and not to scale at all
+ with concurrent calls, which tends to make the system stall. The watchdog
+ may even trigger on some slow allocations.
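To make the failure mode concrete, here is a minimal standalone sketch of
the mmap()/munmap() mechanism described above (illustrative only, not
HAProxy's actual allocator code):

    #include <sys/mman.h>

    int main(void)
    {
        /* give the object its own anonymous pages, as the UAF mode does */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;

        p[0] = 'x';       /* valid access while allocated */
        munmap(p, 4096);  /* release: the mapping disappears immediately */
        p[0] = 'y';       /* use-after-free: faults unless remapped */
        return 0;
    }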
There are no more provisions for running with a shared pool but no thread-local
cache: the shared pool's main goal is to compensate for the expensive calls
to the OS through mmap() and munmap(). The memory usage significantly inflates
and the performance degrades, but this makes it possible to detect a lot of
use-after-free conditions by crashing the program at the first abnormal
- access. This should not be used in production.
+ access. This should not be used in production. It corresponds to
+ boot-time option "-dMuaf". Caching is disabled but may be re-enabled
+ using "-dMcache".
DEBUG_POOL_INTEGRITY
When enabled, objects picked from the cache are checked for corruption
#endif
#if defined(DEBUG_MEMORY_POOLS)
POOL_DBG_TAG |
+#endif
+#if defined(DEBUG_UAF)
+ POOL_DBG_UAF |
#endif
0;
{ POOL_DBG_CALLER, "caller", "no-caller", "save caller information in cache" },
{ POOL_DBG_TAG, "tag", "no-tag", "add tag at end of allocated objects" },
{ POOL_DBG_POISON, "poison", "no-poison", "poison newly allocated objects" },
+ { POOL_DBG_UAF, "uaf", "no-uaf", "enable use-after-free checks (slow)" },
{ 0 /* end */ }
};
{
if (!pool->limit || pool->allocated < pool->limit) {
void *ptr;
-#ifdef DEBUG_UAF
- ptr = pool_alloc_area_uaf(pool->alloc_sz);
-#else
- ptr = pool_alloc_area(pool->alloc_sz);
-#endif
+
+ if (pool_debugging & POOL_DBG_UAF)
+ ptr = pool_alloc_area_uaf(pool->alloc_sz);
+ else
+ ptr = pool_alloc_area(pool->alloc_sz);
if (ptr) {
_HA_ATOMIC_INC(&pool->allocated);
return ptr;
*/
void pool_put_to_os(struct pool_head *pool, void *ptr)
{
-#ifdef DEBUG_UAF
- pool_free_area_uaf(ptr, pool->alloc_sz);
-#else
- pool_free_area(ptr, pool->alloc_sz);
-#endif
+ if (pool_debugging & POOL_DBG_UAF)
+ pool_free_area_uaf(ptr, pool->alloc_sz);
+ else
+ pool_free_area(ptr, pool->alloc_sz);
_HA_ATOMIC_DEC(&pool->allocated);
}
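For reference, the *_uaf() helpers called above can be pictured roughly as
follows; this is a simplified sketch assuming one anonymous mapping per
area, not the real implementation (which also handles size rounding and
accounting):

    #include <stddef.h>
    #include <sys/mman.h>

    static void *pool_alloc_area_uaf(size_t size)
    {
        /* back the area with its own pages so freeing can unmap them */
        void *ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        return (ptr == MAP_FAILED) ? NULL : ptr;
    }

    static void pool_free_area_uaf(void *area, size_t size)
    {
        /* instantly return the pages to the OS; any later access to
         * this range faults until something else gets mapped there.
         */
        munmap(area, size);
    }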
" Detect out-of-bound corruptions: -dMno-merge,tag\n"
" Detect post-free cache corruptions: -dMno-merge,cold-first,integrity,caller\n"
" Detect all cache corruptions: -dMno-merge,cold-first,integrity,tag,caller\n"
+ " Detect UAF (disables cache, very slow): -dMuaf\n"
+ " Detect post-cache UAF: -dMuaf,cache,no-merge,cold-first,integrity,tag,caller\n"
" Detect post-free cache corruptions: -dMno-merge,cold-first,integrity,caller\n",
*err);
return -1;
for (v = 0; dbg_options[v].flg; v++) {
if (isteq(feat, ist(dbg_options[v].set))) {
new_dbg |= dbg_options[v].flg;
+ /* UAF implicitly disables caching, but it's
+ * still possible to forcefully re-enable it.
+ */
+ if (dbg_options[v].flg == POOL_DBG_UAF)
+ new_dbg |= POOL_DBG_NO_CACHE;
break;
}
else if (isteq(feat, ist(dbg_options[v].clr))) {