Originally, the plan was to use tests/shell from git HEAD, but this
does not work well in practice because slight changes in the test
output break the checks, resulting in lots of [DUMP FAIL] errors.
It is easier to keep the test infrastructure self-contained in this
1.0.6.y branch.
However, backporting the tests/shell into 1.0.6.y turns out to be more
complicated than expected, so I decided to take the opposite approach, which is
to (brute) force a copy of tests/shell from 2a38f458f12bc032dac1b3ba63f95ca5a3c03fbd into this branch.
This also requires a number of follow-up partial reverts of tests/shell
updates to make them work with 1.0.6.y.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The bison parser enforces this implicitly by grammar rules.
Because subkeys have to be concatenated via "." notation, e.g.
"mark . ip saddr", all concatenation expressions always consist of at
least two elements.
But this doesn't apply to the json frontend which just uses an array:
it can be empty or only contain one element.
The included reproducer makes the eval stage set the "concatenation" flag
on the interval set. This prevents the conversion code that turns the
element values into ranges from running.
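The guard this calls for on the json side could look roughly like the
sketch below; the surrounding variables and error text are mine, only
json_array_size() (jansson) and json_error() (src/parser_json.c) are real
calls:

    /* Hypothetical check: a JSON "concat" array with fewer than two
     * elements must not be treated as a concatenation. */
    if (json_array_size(array) < 2) {
            json_error(ctx, "Concatenation with %zu element(s), need at least two",
                       json_array_size(array));
            return NULL;
    }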
Malformed input returns NULL when decoding left/right side of binop.
This causes a NULL dereference in expr_evaluate_binop; left/right must
point to a valid expression.
Fix this in the parser, otherwise we would have to sprinkle NULL checks
all over the evaluation code.
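A sketch of the parser-side check this describes; json_parse_expr(),
json_error() and expr_free() are the real helpers in src/parser_json.c,
but the variable names, message and exact placement are assumptions:

    left = json_parse_expr(ctx, jleft);
    right = json_parse_expr(ctx, jright);
    if (!left || !right) {
            /* reject here instead of NULL-checking in expr_evaluate_binop() */
            json_error(ctx, "could not decode binop %s, '%s'",
                       left ? "rhs" : "lhs", op);
            if (left)
                    expr_free(left);
            if (right)
                    expr_free(right);
            return NULL;
    }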
After fix, loading the bogon yields:
internal:0:0-0: Error: Malformed object (too many properties): '{}'.
internal:0:0-0: Error: could not decode binop rhs, '<<'.
internal:0:0-0: Error: Invalid mangle statement value
internal:0:0-0: Error: Parsing expr array at index 1 failed.
internal:0:0-0: Error: Parsing command array at index 3 failed.
Fixes: 0ac39384fd9e ("json: Accept more than two operands in binary expressions") Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
After this fix, the following errors will be shown:
Error: unqualified type invalid specified in map definition. Try "typeof expression" instead of "type datatype".
map m {
^
map m {
^
Error: map has no mapping data
Fixes: 343a51702656 ("src: store expr, not dtype to track data in sets") Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
typeof_expr allows for symbol, constant and bitwise expressions;
restrict it to selector expressions.
After this patch, input generated by the fuzzer is rejected upfront:
# nft -f test.nft
test.nft:3:53-53: Error: syntax error, unexpected number
typeof numgen inc mod 2 : ip daddr . 0
^
test.nft:2:12-13: Error: set definition does not specify key
map t2 {
^^
test.nft:8:65-67: Error: No such file or directory
meta l4proto tcp dnat ip to numgen inc mod 2 map @t2
^^^
test.nft:8:65-67: Error: No such file or directory
meta l4proto tcp dnat ip to numgen inc mod 2 map @t2
^^^
Revisit 4ab1e5e60779 ("src: allow use of 'verdict' in typeof
definitions") to handle verdict as a string; later a token can be added
to the scanner and enabled via flex start conditions.
Fixes: 14357cff40ed ("parser: add typeof keyword for declarations") Reported-by: Florian Westphal <fw@strlen.de> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
netlink_linearize.c has never supported more than 16 chained binops.
Adding more is possible but overwrites the stack in
netlink_gen_bitwise().
Add a recursion counter to catch this at eval stage.
It's not enough to just abort once the counter hits
NFT_MAX_EXPR_RECURSION, because there are valid test cases that exceed
this limit.
For example, evaluation of 1 | 2 will merge the constants, so even
if there are a dozen recursive eval calls this will not end up
with a large binop chain post-evaluation.
v2: allow more than 16 binops iff the evaluation function
did constant-merging.
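A sketch only of how the eval-stage counter can work; the recursion
field, the error text and the helper split are assumptions, only
NFT_MAX_EXPR_RECURSION and expr_error() are taken as given:

    static int expr_evaluate_binop(struct eval_ctx *ctx, struct expr **exprp)
    {
            int ret;

            if (ctx->recursion >= NFT_MAX_EXPR_RECURSION)
                    return expr_error(ctx->msgs, *exprp,
                                      "Binary operation limit %u reached",
                                      NFT_MAX_EXPR_RECURSION);

            ctx->recursion++;
            ret = __expr_evaluate_binop(ctx, exprp);    /* hypothetical helper
                                                         * doing the actual work */
            ctx->recursion--;

            /* if constants were merged during evaluation, the resulting chain
             * is shorter than the recursion depth suggests, so deeper nesting
             * than the limit can still linearize fine afterwards */
            return ret;
    }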
The included bogon input generates the following Sanitizer splat:
AddressSanitizer: dynamic-stack-buffer-overflow on address 0x7...
WRITE of size 2 at 0x7fffffffcbe4 thread T0
#0 0x0000003a68b8 in __asan_memset (src/nft+0x3a68b8) (BuildId: 3678ff51a5405c77e3e0492b9a985910efee73b8)
#1 0x0000004eb603 in __mpz_export_data src/gmputil.c:108:2
#2 0x0000004eb603 in netlink_export_pad src/netlink.c:256:2
#3 0x0000004eb603 in netlink_gen_range src/netlink.c:471:2
#4 0x0000004ea250 in __netlink_gen_data src/netlink.c:523:10
#5 0x0000004e8ee3 in alloc_nftnl_setelem src/netlink.c:205:3
#6 0x0000004d4541 in mnl_nft_setelem_batch src/mnl.c:1816:11
The problem is that the range end is emitted to the buffer at the *padded*
location (rounded up to the next register size), but buffer sizing is
based on the expression length, not the padded length.
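A standalone illustration of the sizing mismatch; the 4-byte register
size is real, the concrete 2-byte field is just an assumed example:

    #include <stdio.h>

    #define REG_SIZE 4      /* register size used for padding */

    static unsigned int padded_len(unsigned int len)
    {
            return (len + REG_SIZE - 1) / REG_SIZE * REG_SIZE;
    }

    int main(void)
    {
            unsigned int expr_len = 2;      /* e.g. a 16-bit field */

            /* the range end is written at the padded offset ... */
            printf("end written at offset %u\n", padded_len(expr_len));
            /* ... but the buffer was sized from the raw expression length */
            printf("buffer sized for %u+%u bytes\n", expr_len, expr_len);
            return 0;
    }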
Also extend the test script: capture stderr and, if we see an
AddressSanitizer warning, make the test fail.
Same bug as the one fixed in 600b84631410 ("netlink: fix stack buffer overflow with sub-reg sized prefixes"),
just in a different function.
Apply same fix: no dynamic array + add a range check.
The kernel's flowtable message might not contain the
NFTA_FLOWTABLE_HOOK_DEVS attribute. In that case, nftnl_flowtable_get()
will return NULL for the respective nftnl attribute.
Fixes: db0697ce7f602 ("src: support for flowtable listing") Signed-off-by: Phil Sutter <phil@nwl.cc> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
The code accidentally treats a missing NFTNL_FLOWTABLE_PRIO attribute as
a zero prio value, which may not be correct.
Fixes: db0697ce7f602 ("src: support for flowtable listing") Signed-off-by: Phil Sutter <phil@nwl.cc> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
The two commits mentioned below resolved null dereference crashes when
the policy or priority keyword, respectively, was missing from the
chain/flowtable specification.
The same issue exists in the json output path, so apply a similar fix there
and extend the existing test cases.
Fixes: 5b37479b42b3 ("nftables: don't crash in 'list ruleset' if policy is not set") Fixes: b40bebbcee36 ("rule: do not crash if to-be-printed flowtable lacks priority") Signed-off-by: Florian Westphal <fw@strlen.de> Acked-by: Phil Sutter <phil@nwl.cc>
-o/--optimize results in an EEXIST error when merging two rules that
lead to an ambiguous set/map, for instance:
table ip x {
chain v4icmp {}
chain v4icmpc {}
chain y {
ip protocol icmp jump v4icmp
ip protocol icmp goto v4icmpc
}
}
which is invalid because duplicate keys are not allowed in a
set/map. This is how it shows up when running a test:
Merging:
testcases/sets/dumps/sets_with_ifnames.nft:56:3-30: ip protocol icmp jump v4icmp
testcases/sets/dumps/sets_with_ifnames.nft:57:3-31: ip protocol icmp goto v4icmpc
into:
ip protocol vmap { icmp : jump v4icmp, icmp : goto v4icmpc }
internal:0:0-0: Error: Could not process rule: File exists
Add a new step that compares rules that are candidates to be merged, to
detect collisions in set/map keys and skip them in the final merging
step.
Check if the right-hand side of the relational is a bitmask, i.e.
relational
/ \
... or
/ \
value or
/ \
value value
then, if the left-hand side is a binop expression, compare both the left-
and right-hand sides (not only the left-hand side of this binop
expression) to check for redundant matches in consecutive rules, i.e.
relational
/ \
and ...
/ \
payload value
Before this patch, only the payload in the binop expression was compared.
This allows compacting several rules matching tcp flags into a set/map, e.g.
Included test will fail with:
/dev/stdin:8:38-52: Error: Transparent proxy support requires transport protocol match
meta l4proto @protos tproxy to :1088
^^^^^^^^^^^^^^^
Tolerate a set reference too. Because the set can be empty (or there
can be removals later), add a fake 0-rhs value.
This will make pctx_update assign proto_unknown as the transport protocol
in use. That's enough to avoid the 'requires transport protocol' error.
v2: restrict it to meta lhs for now (Pablo Neira Ayuso)
When building each component of the set element key, a late byteorder
switch is performed to ensure that all components in the interval are
represented in big endian, as required by the pipapo backend.
If the set element does not fit into the netlink message, the
byteorder switch happens twice, leading to the insertion of an element
with a bogus component in large sets.
Note that 16777216 is 0x1000000, which should instead be 0x00000001 to
represent "lo" as u32.
Fix this by switching the value in a temporary variable and using it to
set the set element key attribute in the netlink message.
Later, revisit this to perform the byteorder switch in the evaluation
step.
Add a tests/shell unit to cover this bug.
Closes: https://bugzilla.netfilter.org/show_bug.cgi?id=1792 Fixes: 8ac2f3b2fca3 ("src: Add support for concatenated set ranges") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
table ip x {
map y {
typeof ip saddr : meta mark
counter
flags interval,timeout
elements = { 1.1.1.1-1.1.1.10 timeout 10m : 20, 2.2.2.2-2.2.2.5 timeout 10m : 30}
}
}
Then, invoking the get element command:
# nft get element x y { 1.1.1.2 }
results in, before the fix (incomplete output):
table ip x {
map y {
type ipv4_addr : mark
flags interval,timeout
elements = { 1.1.1.1 counter packets 0 bytes 0 timeout 10m expires 1m24s160ms : 0x00000014 }
}
}
Note that it displays 1.1.1.1, instead of 1.1.1.1-1.1.1.10.
After this fix:
table ip x {
map y {
type ipv4_addr : mark
flags interval,timeout
elements = { 1.1.1.1-1.1.1.10 counter packets 0 bytes 0 timeout 10m expires 1m24s160ms : 0x00000014 }
}
}
Fixes: a43cc8d53096 ("src: support for get element command") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This uses the wrong length; it must re-use the length of the datatype,
not the string length.
The added test cases will fail without the fix due to erroneous
overlap detection, which in itself is due to incorrect sorting of
the elements.
Example error:
netlink: Error: interval overlaps with an existing one
add element inet testifsets simple_wild { "2-1" } failed.
table inet testifsets {
... elements = { "1-1", "abcdef*", "othername", "ppp0" }
... but clearly "2-1" doesn't overlap with any existing members.
The false detection is because the "abcdef*" wildcard gets sorted at the
beginning of the list, which in turn is because it is erroneously
initialised as a 64-bit number instead of 128 bits (16 bytes / IFNAMSIZ).
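A standalone illustration of the two lengths involved; IF_NAMESIZE
(IFNAMSIZ) is real, the exact buggy sizing (string length plus
terminator) is inferred from the "64bit" figure above:

    #include <net/if.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            const char *wildcard = "abcdef*";

            /* buggy: constant sized from the string itself */
            printf("string-based length  : %zu bits\n",
                   (strlen(wildcard) + 1) * 8);            /* 64 */
            /* correct: constant sized from the ifname datatype */
            printf("datatype-based length: %u bits\n",
                   (unsigned int)IF_NAMESIZE * 8);         /* 128 */
            return 0;
    }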
Use __datatype_set() to release the existing datatype before assigning
the new one, otherwise ASAN reports the following memleak:
Direct leak of 104 byte(s) in 1 object(s) allocated from:
#0 0x7fbc8a2b89cf in __interceptor_malloc ../../../../src/libsa
#1 0x7fbc898c96c2 in xmalloc src/utils.c:31
#2 0x7fbc8971a182 in datatype_clone src/datatype.c:1406
#3 0x7fbc89737c35 in expr_evaluate_unary src/evaluate.c:1366
#4 0x7fbc89758ae9 in expr_evaluate src/evaluate.c:3057
#5 0x7fbc89726bd9 in byteorder_conversion src/evaluate.c:243
#6 0x7fbc89739ff0 in expr_evaluate_bitwise src/evaluate.c:1491
#7 0x7fbc8973b4f8 in expr_evaluate_binop src/evaluate.c:1600
#8 0x7fbc89758b01 in expr_evaluate src/evaluate.c:3059
#9 0x7fbc8975ae0e in stmt_evaluate_arg src/evaluate.c:3198
#10 0x7fbc8975c51d in stmt_evaluate_payload src/evaluate.c:330
Fixes: faa6908fad60 ("evaluate: clone unary expression datatype to deal with dynamic datatype") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
if (dep->left->payload.base != PROTO_BASE_TRANSPORT_HDR)
is legal only after checking that ->left points to an
EXPR_PAYLOAD expression. The dependency store can also contain
EXPR_META; in that case we access a bogus part of the union.
The payload_may_dependency_kill_icmp helper can't handle a META
dep either, so return early.
Fixes: 533565244d88 ("payload: check icmp dependency before removing previous icmp expression") Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
Listing is not correct because these IPv6 extension headers still lack
the context to properly delinearize the listing, but at least this
does not crash anymore.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
The previous commit fixed erroneous handling of raw expressions when the
RHS sets a zero value.
Input: @ih,58,6 set 0 @ih,86,6 set 0 @ih,170,22 set 0
Output: @ih,48,16 set @ih,48,16 & 0xffc0 @ih,80,16 set \
@ih,80,16 & 0xfc0f @ih,160,32 set @ih,160,32 & 0xffc00000
After this patch, this will instead display:
@ih,58,6 set 0x0 @ih,86,6 set 0x0 @ih,170,22 set 0x0
payload_expr_trim_force() only works when the payload has no known
protocol (template) attached, i.e. it will be printed in raw payload syntax.
It performs sanity checks on @mask and then adjusts the payload expression
length and offset according to the mask.
Also add this check in __binop_postprocess() so we can also discard masks
when matching, e.g.
binop_postprocess() now returns whether it performed an action; if it
returns true, the arguments might have been freed, so callers must no
longer refer to any of the expressions attached to the binop.
Next patch adds test cases for this.
Signed-off-by: Florian Westphal <fw@strlen.de> Reviewed-by: Pablo Neira Ayuso <pablo@netfilter.org>
@ih,58,6 set 0 <- Zero 6 bits, starting with bit 58
Changes to the inner header mandate a checksum update, which only works
for even byte counts (except for the last byte in the payload).
Thus, we load 2 bytes at byte offset 6 (16 bits, bit offset 48).
Because we want to zero 6 bits, we need a mask that retains 10 bits and
clears 6: b1111111111000000 (the first 10 bits retain 48-57, the last 6
bits clear 58-63). The '0xc0ff' looks incorrect, but that's because the
debug output comes from libnftnl, which prints values in host byte order;
the value will be interpreted as big endian on the kernel side, so this
will do the right thing.
Next, same problem:
@ih,86,6 set 0 <- Zero 6 bits, starting with bit 86.
nft needs to round down to an even byte offset, 10, then retain the first
6 bits (80 + 6 == 86), then clear 6 bits (86-91), then keep 4 more as-is
(92-95).
So a mask of 0xfc0f (in big endian) is correct (b1111110000001111).
The last expression, @ih,170,22 set 0, asks to clear 22 bits starting with
bit 170; nft correctly rounds this down to a 32-bit read at offset 160.
The required mask keeps the first 10 bits, then clears 22
(b11111111110000000000000000000000). That mask is 0xffc00000, which
corresponds to the wrong-endian-printed value in the output above.
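The mask-to-offset/length arithmetic the delinearize fixup needs can be
checked standalone; the numbers below are the @ih,170,22 case from above,
and the trailing-zero trick is my shorthand, not the exact implementation:

    #include <stdio.h>

    int main(void)
    {
            unsigned int load_off = 160, load_len = 32;     /* rounded-down read */
            unsigned int mask = 0xffc00000;                 /* keep 10, clear 22 */
            unsigned int cleared = __builtin_ctz(mask);     /* trailing zero bits
                                                             * (gcc/clang builtin) */

            /* recover the original raw payload statement from the mask */
            printf("@ih,%u,%u set 0\n",
                   load_off + (load_len - cleared), cleared);
            return 0;
    }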
Now that we convinced ourselves that the input side is correct, fix up
netlink delinearize to undo the mask alterations if we can't find a
template to print a human-readable payload expression.
With this patch, we get this output:
@ih,48,16 set @ih,48,16 & 0xffc0 @ih,80,16 set @ih,80,16 & 0xfc0f @ih,160,32 set @ih,160,32 & 0xffc00000
... which isn't ideal. We should fix up the payload expression to display
the same output as the input, i.e. adjust payload->len and offset as per
the mask and discard the mask instead.
This patch consolidates ctx->stmt_len reset in stmt_evaluate() to avoid
this problem. Note that stmt_evaluate_meta() and stmt_evaluate_ct()
already reset it after the statement evaluation.
Moreover, a statement dependency can be generated while evaluating a meta
or ct statement. The payload statement dependency already manually stashes
this before calling stmt_evaluate(). Add a new stmt_dependency_evaluate()
function to stash the statement length context when evaluating a new
statement dependency, and use it for all of the existing statement
dependencies.
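A sketch of what the helper boils down to (assumed shape; the real code
in src/evaluate.c may differ in details):

    static int stmt_dependency_evaluate(struct eval_ctx *ctx, struct stmt *stmt)
    {
            uint32_t stmt_len = ctx->stmt_len;
            int ret;

            ctx->stmt_len = 0;              /* dependency must not inherit the
                                             * enclosing statement's length */
            ret = stmt_evaluate(ctx, stmt);
            ctx->stmt_len = stmt_len;       /* restore for the real statement */

            return ret;
    }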
Florian also says:
'meta mark set vlan id map { 1 : 0x00000001, 4095 : 0x00004095 }' will
crash. The reason is that the l2 dependency generated here is erroneously
expanded to a 32-bit one, so the evaluation path won't recognize it
as an L2 dependency. Therefore, pctx->stacked_ll_count is 0 and
__expr_evaluate_payload() crashes with a null dereference when
dereferencing pctx->stacked_ll[0].
nft-test.py gains a fugly hack to tolerate '!map typeof vlan id : meta mark'.
For more generic support we should find something more acceptable, e.g.
!map typeof( everything here is a key or data ) timeout ...
tests/py update and assert(pctx->stacked_ll_count) by Florian Westphal.
Fixes: edecd58755a8 ("evaluate: support shifts larger than the width of the left operand") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org> Signed-off-by: Florian Westphal <fw@strlen.de>
Allow specifying a numeric queue id as part of a map.
The parser side is easy, but the reverse direction (listing) is not.
'queue' is a statement; it doesn't have an expression.
Add a generic 'queue_type' datatype as a shim to the real basetype with
constant expressions. This is used only for udata build/parse: it stores
the "key" (the parser token, here "queue") as udata in the kernel and can
then restore the original key.
Add a dumpfile to validate parser & output.
JSON support is missing because JSON has allowed typeof only since quite
recently.
Implement this as a special "type" property value which is an object
with sole property "typeof". The latter's value is the JSON
representation of the expression in set->key, so for concatenated
typeofs it is a concat expression.
All this is a bit clumsy right now, but it works, and it should be
possible to tear it down a bit for more user-friendliness in a
compatible way by either replacing the concat expression with the array it
contains or even the whole "typeof" object: the parser would just
assume that any object (or objects in an array) in the "type" property
value is an expression to extract a type from.
There are two mechanisms to update the UDP checksum field:
1) _CSUM_TYPE and _CSUM_OFFSET which specify the type of checksum
(e.g. inet) and offset where it is located.
2) use NFT_PAYLOAD_L4CSUM_PSEUDOHDR flag to use layer 4 kernel
protocol parser.
The problem with 1) is that it is unconditional, that is, csum_type and
csum_offset cannot deal with a zero UDP checksum.
Use NFT_PAYLOAD_L4CSUM_PSEUDOHDR flag instead since it relies on the
layer 4 kernel parser which skips updating zero UDP checksum.
Extend test coverage for the UDP mangling with and without zero
checksum.
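For reference, a hedged sketch of what selecting mechanism 2) looks like
when building the payload expression with libnftnl;
NFT_PAYLOAD_L4CSUM_PSEUDOHDR is the UAPI flag, the libnftnl attribute
name is given from memory and may differ, and the surrounding function
is illustrative only:

    #include <libnftnl/expr.h>
    #include <linux/netfilter/nf_tables.h>

    static struct nftnl_expr *payload_with_l4csum_fixup(void)
    {
            struct nftnl_expr *e = nftnl_expr_alloc("payload");

            if (!e)
                    return NULL;
            /* ask the kernel's layer 4 parser to fix the checksum; it knows
             * to leave a zero UDP checksum alone */
            nftnl_expr_set_u32(e, NFTNL_EXPR_PAYLOAD_FLAGS,
                               NFT_PAYLOAD_L4CSUM_PSEUDOHDR);
            return e;
    }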
Fixes: e6c9174e13b2 ("proto: add checksum key information to struct proto_desc") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Leaving the invalid pointer value in place will cause a double-free when
users call nft_ctx_clear_vars() first, then nft_ctx_free(). Moreover,
nft_ctx_add_var() passes the pointer to mrealloc() and thus assumes it
to be either NULL or valid.
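A sketch of the fix implied here, with per-variable cleanup omitted and
field names assumed: after freeing, the pointer and counter are reset so
later calls see a consistent state.

    void nft_ctx_clear_vars(struct nft_ctx *ctx)
    {
            free(ctx->vars);
            ctx->vars = NULL;       /* otherwise nft_ctx_free() frees it again */
            ctx->num_vars = 0;      /* and nft_ctx_add_var() would mrealloc()
                                     * from a stale pointer */
    }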
position refers to the rule handle; it has cache requirements similar to
the replace rule command, so relax the cache requirements.
Commit e5382c0d08e3 ("src: Support intra-transaction rule references")
uses position.id for index support, which requires a full cache, but
only in that case.
Fixes: 01e5c6f0ed03 ("src: add cache level flags") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
No need for a full cache: this command relies on the rule handle, which is
not validated from userspace. Cache requirements are similar to those
of add/create/delete rule commands.
This speeds up incremental updates with large rulesets.
Extend tests/coverage for rule replacement.
Fixes: 01e5c6f0ed03 ("src: add cache level flags") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
nft_cache_evaluate() always takes a non-null filter; remove superfluous
checks when calculating cache requirements via flags.
Note that the filter is still optional in the netlink dump path, since this
can be called from the error path to provide hints.
Fixes: 08725a9dc14c ("cache: filter out rules by chain") Fixes: b3ed8fd8c9f3 ("cache: missing family in cache filtering") Fixes: 635ee1cad8aa ("cache: filter out sets and maps that are not requested") Fixes: 3f1d3912c3a6 ("cache: filter out tables that are not requested") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Only the family is set in the dump request; set the table and chain too,
otherwise rules for the given family are fetched for each existing
table.
Fixes: afbd102211dc ("src: do not use the nft_cache_filter object from mnl.c") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
babc6ee8773c ("cache: populate chains on demand from error path")
Flags describe the cache requirements for a given batch; accumulate the
flags that are inferred from the commands in this batch.
Fixes: 7df42800cf89 ("src: single cache_update() call to build cache before evaluation") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Unconditionally reset the filter for each command in the batch, this is safer.
Fixes: 3f1d3912c3a6 ("cache: filter out tables that are not requested") Tested-by: Eric Garver <eric@garver.life> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Due to a missing `NULL` check, there will be a segfault for invalid statements.
Fixes: 07958ec53830 ("json: add set statement list support") Signed-off-by: Sebastian Walz (sivizius) <sebastian.walz@secunet.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
It will return a pointer to an owned string; the caller must free it.
However, `json_error` just borrows the string to format it as `%s`, but
after printing the formatted error message, the pointer to the string is
lost and thus never freed.
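Assuming the owned string comes from jansson's json_dumps() (the commit
subject is not quoted here, so this is an assumption), the fix pattern is
simply to keep the pointer and free it after formatting; the message text
is illustrative:

    char *dump = json_dumps(root, 0);       /* caller owns this string */

    json_error(ctx, "malformed statement: %s", dump);
    free(dump);                             /* previously the pointer was lost */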
Fixes: 586ad210368b ("libnftables: Implement JSON parser") Signed-off-by: Sebastian Walz (sivizius) <sebastian.walz@secunet.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Unbreak restoring elements in sets with a rate limit, which fails with:
> /dev/stdin:3618:61-61: Error: limit burst must be > 0
> elements = { 1.2.3.4 limit rate over 1000 kbytes/second timeout 1s,
There is no need for burst != 0 in limit rate byte mode.
Add tests/shell too.
Fixes: 702eff5b5b74 ("src: allow burst 0 for byte ratelimit and use it as default") Fixes: 285baccfea46 ("src: disallow burst 0 in ratelimits") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
ruleset.nft:7:9-52: Error: Chain "input" already exists in table ip 'filter' with different declaration
type filter hook postrouting priority filter;
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
instead of reporting a misleading "unsupported chain type" error when
updating an existing chain with a different type/hook/priority.
Fixes: 573788e05363 ("src: improve error reporting for unsupported chain type") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
generates an implicit dependency from the inet family to match on meta
nfproto ip.
The length of this implicit expression is incorrectly adjusted to the
statement length, i.e. the relational to compare meta nfproto takes 4 bytes
instead of 1 byte. The evaluation of 'ip dscp' under the meta mark
statement triggers this implicit dependency, which should not consider
the context statement length since it is added before the statement
itself.
This problem shows up when listing the ruleset: in netlink_parse_cmp(),
left->len < right->len, so the implicit dependency is handled as
a concatenation. The actual bug is in the evaluation step, which
leads to incorrect bytecode.
Fixes: 3c64ea7995cb ("evaluate: honor statement length in integer evaluation") Fixes: edecd58755a8 ("evaluate: support shifts larger than the width of the left operand") Tested-by: Brian Davidson <davidson.brian@gmail.com> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
# nft --debug=netlink add rule ip x y 'ct mark set ip dscp & 0x0f << 1 | 0xff000000'
Error: Value 4278190080 exceeds valid range 0-63
add rule ip x y ct mark set ip dscp & 0x0f << 1 | 0xff000000
^^^^^^^^^^
Use the statement length as the maximum value in the mark statement
expression.
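A quick standalone check of the two ranges involved (values taken from
the error output above):

    #include <stdio.h>

    int main(void)
    {
            unsigned int dscp_bits = 6;     /* selector: ip dscp */
            unsigned int stmt_bits = 32;    /* statement: ct mark */

            printf("selector max : %u\n", (1u << dscp_bits) - 1);      /* 63 */
            printf("statement max: %u\n",
                   (unsigned int)((1ull << stmt_bits) - 1));   /* 4294967295 */
            printf("value        : %u\n", 0xff000000u);        /* 4278190080 */
            return 0;
    }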
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
8c75d3a16960 ("Reject invalid chain priority values in user space")
provides error reporting from the evaluation phase. Instead, this patch
infers the error after the kernel reports EOPNOTSUPP.
test.nft:3:28-40: Error: Chains of type "nat" must have a priority value above -200
type nat hook prerouting priority -300;
^^^^^^^^^^^^^
This patch also covers another common issue for users compiling their own
kernels: forgetting to enable CONFIG_NFT_NAT in their .config file.
Acked-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
/dev/stdin is a placeholder; read() from STDIN_FILENO is used to fetch
the standard input into a buffer.
Since 5c2b2b0a2ba7 ("src: error reporting with -f and read from stdin")
stdin is stored in a buffer to fix error reporting.
This patch requires: ("parser_json: use stdin buffer if available")
Fixes: 149b1c95d129 ("libnftables: refuse to open onput files other than named pipes or regular files") Acked-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Since 5c2b2b0a2ba7 ("src: error reporting with -f and read from stdin")
stdin is stored in a buffer; update json support to use it instead of
reading from /dev/stdin.
Some systems do not provide the /dev/stdin symlink to /proc/self/fd/0,
according to the reporter (who mentions Yocto Linux as an example).
Fixes: 935f82e7dd49 ("Support 'nft -f -' to read from stdin") Acked-by: Phil Sutter <phil@nwl.cc> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Set element deletion in maps (including catchall elements) does not work.
# nft delete element ip x m { \* }
BUG: invalid range expression type catch-all set element
nft: src/expression.c:1472: range_expr_value_low: Assertion `0' failed.
Aborted
Call interval_expr_key() to fetch expr->left in the mapping, but use the
expression that represents the mapping because it provides access to the
EXPR_F_REMOVE flags.
Moreover, assume the maximum value for the catchall expression by means of
expr->len to reuse the existing code that checks whether the element to be
deleted really exists.
Fixes: 3e8d934e4f72 ("intervals: support to partial deletion with automerge") Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>