Phil Sutter [Mon, 21 Oct 2019 16:51:14 +0000 (18:51 +0200)]
main: Fix for misleading error with negative chain priority
getopt_long() would try to parse the negative priority as an option and
return -1 as it is not known:
| # nft add chain x y { type filter hook input priority -30\; }
| nft: invalid option -- '3'
Fix this by prefixing optstring with a plus character. This instructs
getopt_long() not to permute the arguments but simply to stop at the first
non-option, leaving the rest for manual handling. In fact, this is just
what nft desires: mixing options with nft syntax leads to confusing
command lines anyway.
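A minimal, self-contained sketch of the '+' optstring behaviour described above; the long option table here is illustrative and not nft's real one:

#include <getopt.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
	/* The leading '+' stops getopt_long() at the first non-option
	 * argument instead of permuting argv, so a later "priority -30"
	 * is left alone for nft's own parser. */
	static const struct option options[] = {
		{ "file", required_argument, NULL, 'f' },
		{ NULL, 0, NULL, 0 },
	};
	int opt;

	while ((opt = getopt_long(argc, argv, "+f:", options, NULL)) != -1)
		printf("option: %c\n", opt);

	for (int i = optind; i < argc; i++)
		printf("argument: %s\n", argv[i]);
	return 0;
}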
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Mon, 21 Oct 2019 14:29:03 +0000 (16:29 +0200)]
tproxy: Add missing error checking when parsing from netlink
netlink_get_register() may return NULL and every other caller checks
that. Assuming this situation is not expected, just jump to the 'err'
label without queueing an explicit error message.
Fixes: 2be1d52644cf7 ("src: Add tproxy support")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
If --echo is passed, then the cache already contains the commands that
have been sent to the kernel. However, anonymous sets are an exception
since the cache needs to be updated in this case.
Remove the old cache logic from the monitor code; it has been replaced
by 01e5c6f0ed03 ("src: add cache level flags").
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Acked-by: Phil Sutter <phil@nwl.cc>
Eric Jallot [Thu, 17 Oct 2019 11:08:36 +0000 (13:08 +0200)]
flowtable: fix memleak in exit path
Add missing loop in table_free().
Free all objects in flowtable_free() and add checks for error recovery
in the parser (see commit 4be0a3f922a29).
Also, fix a memleak in the parser.
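As an illustration of the kind of loop that was missing, here is a self-contained sketch with hypothetical stand-in types (not nft's actual structures):

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a flowtable object: a strdup'd name plus an
 * array of separately allocated device names. */
struct ft_demo {
	char *table;
	char **dev_array;
	int dev_array_len;
};

static void ft_demo_free(struct ft_demo *ft)
{
	int i;

	/* Every array element was allocated on its own, so a loop is
	 * needed; freeing only the array itself leaks the strings. */
	for (i = 0; i < ft->dev_array_len; i++)
		free(ft->dev_array[i]);
	free(ft->dev_array);
	free(ft->table);
	free(ft);
}

int main(void)
{
	struct ft_demo *ft = calloc(1, sizeof(*ft));

	ft->table = strdup("raw");
	ft->dev_array = calloc(1, sizeof(char *));
	ft->dev_array[0] = strdup("eth0");
	ft->dev_array_len = 1;
	ft_demo_free(ft);
	return 0;
}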
This fixes the following memleak:
# valgrind --leak-check=full nft add flowtable inet raw f '{ hook ingress priority filter; devices = { eth0 }; }'
==15414== Memcheck, a memory error detector
==15414== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==15414== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==15414== Command: nft add flowtable inet raw f {\ hook\ ingress\ priority\ filter;\ devices\ =\ {\ eth0\ };\ }
==15414==
==15414==
==15414== HEAP SUMMARY:
==15414== in use at exit: 266 bytes in 4 blocks
==15414== total heap usage: 55 allocs, 51 frees, 208,105 bytes allocated
==15414==
==15414== 5 bytes in 1 blocks are definitely lost in loss record 2 of 4
==15414== at 0x4C29EA3: malloc (vg_replace_malloc.c:309)
==15414== by 0x5C64AA9: strdup (strdup.c:42)
==15414== by 0x4E705ED: xstrdup (utils.c:75)
==15414== by 0x4E93F01: nft_lex (scanner.l:648)
==15414== by 0x4E85C1C: nft_parse (parser_bison.c:5577)
==15414== by 0x4E75A07: nft_parse_bison_buffer (libnftables.c:375)
==15414== by 0x4E75A07: nft_run_cmd_from_buffer (libnftables.c:443)
==15414== by 0x40170F: main (main.c:326)
==15414==
==15414== 261 (128 direct, 133 indirect) bytes in 1 blocks are definitely lost in loss record 4 of 4
==15414== at 0x4C29EA3: malloc (vg_replace_malloc.c:309)
==15414== by 0x4E705AD: xmalloc (utils.c:36)
==15414== by 0x4E705AD: xzalloc (utils.c:65)
==15414== by 0x4E560B6: expr_alloc (expression.c:45)
==15414== by 0x4E56288: symbol_expr_alloc (expression.c:286)
==15414== by 0x4E8A601: nft_parse (parser_bison.y:1842)
==15414== by 0x4E75A07: nft_parse_bison_buffer (libnftables.c:375)
==15414== by 0x4E75A07: nft_run_cmd_from_buffer (libnftables.c:443)
==15414== by 0x40170F: main (main.c:326)
Fixes: 92911b362e906 ("src: add support to add flowtables")
Signed-off-by: Eric Jallot <ejallot@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Wed, 16 Oct 2019 22:20:59 +0000 (00:20 +0200)]
rule: Fix for single line ct timeout printing
Commit 43ae7a48ae3de ("rule: do not print semicolon in ct timeout")
removed an extra semicolon at the end of the line, but thereby broke
single-line output. The correct fix is to use opts->stmt_separator, which
holds either a newline or a semicolon depending on the output mode.
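A small, hypothetical sketch of the separator approach described above (nft's real code takes the separator from the output context):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative printer: choose the separator once depending on the
 * output mode, so both multi-line and single-line listings stay valid. */
static void print_timeouts(bool single_line)
{
	const char *sep = single_line ? "; " : "\n";

	printf("policy established: 120%s", sep);
	printf("policy close: 10%s", sep);
}

int main(void)
{
	print_timeouts(false);
	print_timeouts(true);
	printf("\n");
	return 0;
}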
Fixes: 43ae7a48ae3de ("rule: do not print semicolon in ct timeout")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Florian Westphal <fw@strlen.de>
Phil Sutter [Wed, 16 Oct 2019 21:46:10 +0000 (23:46 +0200)]
tests/monitor: Fix for changed ct timeout format
Commit a9b0c385a1d5e ("rule: print space between policy and timeout")
changed the spacing in ct timeout objects but missed adjusting the
related test case.
Fixes: a9b0c385a1d5e ("rule: print space between policy and timeout")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Florian Westphal <fw@strlen.de>
Phil Sutter [Tue, 15 Oct 2019 13:58:13 +0000 (15:58 +0200)]
mnl: Don't use nftnl_set_set()
The function is unsafe to use as it effectively bypasses data length
checks. Instead, use nftnl_set_set_str(), which at least asserts that a
const char pointer is passed.
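A minimal sketch of the safer call (build with -lnftnl); the attribute choices here are just examples:

#include <libnftnl/set.h>

int main(void)
{
	struct nftnl_set *s = nftnl_set_alloc();

	if (!s)
		return 1;
	/* Unlike the generic nftnl_set_set(), nftnl_set_set_str() forces
	 * the caller to hand over a const char *, so string attributes
	 * cannot silently receive arbitrary data of the wrong length. */
	nftnl_set_set_str(s, NFTNL_SET_TABLE, "filter");
	nftnl_set_set_str(s, NFTNL_SET_NAME, "blackhole");
	nftnl_set_free(s);
	return 0;
}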
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Eric Jallot [Tue, 8 Oct 2019 13:47:24 +0000 (15:47 +0200)]
obj: fix memleak in parser_bison.y
Each object (secmark, synproxy, quota, limit, counter) is dynamically allocated
by the parser and not freed at exit.
However, there is no need to use dynamic allocation here because struct obj
already provides the required storage. Update the grammar to ensure that
obj_alloc() is called before the object configuration is parsed.
This fixes the following memleak (secmark as example):
# valgrind --leak-check=full nft add secmark inet raw ssh \"system_u:object_r:ssh_server_packet_t:s0\"
==14643== Memcheck, a memory error detector
==14643== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==14643== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==14643== Command: nft add secmark inet raw ssh "system_u:object_r:ssh_server_packet_t:s0"
==14643==
==14643==
==14643== HEAP SUMMARY:
==14643== in use at exit: 256 bytes in 1 blocks
==14643== total heap usage: 41 allocs, 40 frees, 207,809 bytes allocated
==14643==
==14643== 256 bytes in 1 blocks are definitely lost in loss record 1 of 1
==14643== at 0x4C29EA3: malloc (vg_replace_malloc.c:309)
==14643== by 0x4E72074: xmalloc (utils.c:36)
==14643== by 0x4E72074: xzalloc (utils.c:65)
==14643== by 0x4E89A31: nft_parse (parser_bison.y:3706)
==14643== by 0x4E778E7: nft_parse_bison_buffer (libnftables.c:375)
==14643== by 0x4E778E7: nft_run_cmd_from_buffer (libnftables.c:443)
==14643== by 0x40170F: main (main.c:326)
Fixes: f44ab88b1088e ("src: add synproxy stateful object support")
Fixes: 3bc84e5c1fdd1 ("src: add support for setting secmark")
Fixes: c0697eabe832d ("src: add stateful object support for limit")
Fixes: 4d38878b39be4 ("src: add/create/delete stateful objects")
Signed-off-by: Eric Jallot <ejallot@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Eric Jallot [Mon, 30 Sep 2019 08:38:23 +0000 (10:38 +0200)]
src: obj: fix memleak in handle_free()
Using a limit object as an example:
# valgrind --leak-check=full nft list ruleset
==9937== Memcheck, a memory error detector
==9937== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==9937== Using Valgrind-3.14.0 and LibVEX; rerun with -h for copyright info
==9937== Command: nft list ruleset
==9937==
table inet raw {
limit lim1 {
rate 1/second
}
}
==9937==
==9937== HEAP SUMMARY:
==9937== in use at exit: 5 bytes in 1 blocks
==9937== total heap usage: 50 allocs, 49 frees, 212,065 bytes allocated
==9937==
==9937== 5 bytes in 1 blocks are definitely lost in loss record 1 of 1
==9937== at 0x4C29EA3: malloc (vg_replace_malloc.c:309)
==9937== by 0x5C65AA9: strdup (strdup.c:42)
==9937== by 0x4E720A3: xstrdup (utils.c:75)
==9937== by 0x4E660FF: netlink_delinearize_obj (netlink.c:972)
==9937== by 0x4E6641C: list_obj_cb (netlink.c:1064)
==9937== by 0x50E8993: nftnl_obj_list_foreach (object.c:494)
==9937== by 0x4E664EA: netlink_list_objs (netlink.c:1085)
==9937== by 0x4E4FE82: cache_init_objects (rule.c:188)
==9937== by 0x4E4FE82: cache_init (rule.c:221)
==9937== by 0x4E4FE82: cache_update (rule.c:271)
==9937== by 0x4E7716E: nft_evaluate (libnftables.c:406)
==9937== by 0x4E778F7: nft_run_cmd_from_buffer (libnftables.c:447)
==9937== by 0x40170F: main (main.c:326)
Fixes: 4756d92e517ae ("src: listing of stateful objects")
Signed-off-by: Eric Jallot <ejallot@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
libnftables: memleak when list of commands is empty
==9946== 200,807 (40 direct, 200,767 indirect) bytes in 1 blocks are definitely lost in loss record 4 of 4
==9946== at 0x4837B65: calloc (vg_replace_malloc.c:762)
==9946== by 0x4F28216: nftnl_batch_alloc (batch.c:66)
==9946== by 0x48A33E8: mnl_batch_init (mnl.c:164)
==9946== by 0x48A736F: nft_netlink.isra.0 (libnftables.c:29)
==9946== by 0x48A7D03: nft_run_cmd_from_filename (libnftables.c:508)
==9946== by 0x10A621: main (main.c:328)
Fixes: fc6d0f8b0cb1 ("libnftables: get rid of repeated initialization of netlink_ctx")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
SO_SNDBUF never fails; this socket option just provides a hint to the
kernel. SO_SNDBUFFORCE sets the buffer size to zero if the value goes
over INT_MAX. Userspace caches the buffer hint it sends to the kernel,
so it might end up out of sync if the kernel ignores the hint. Do not
make assumptions; fetch the send buffer size from the kernel via
getsockopt().
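A standalone sketch of the read-back idea using plain sockets (not the actual mnl.c change):

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
	int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
	int wanted = 1 << 20, actual = 0;
	socklen_t len = sizeof(actual);

	if (fd < 0)
		return 1;
	/* SO_SNDBUF is only a hint: the kernel may clamp or double the
	 * requested value, so read back the size actually in effect
	 * instead of caching the value that was sent. */
	setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &wanted, sizeof(wanted));
	getsockopt(fd, SOL_SOCKET, SO_SNDBUF, &actual, &len);
	printf("requested %d, kernel uses %d\n", wanted, actual);
	return 0;
}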
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Fri, 13 Sep 2019 15:20:04 +0000 (17:20 +0200)]
parser_bison: Fix 'exists' keyword on Big Endian
Size value passed to constant_expr_alloc() must correspond with actual
data size, otherwise wrong portion of data will be taken later when
serializing into netlink message.
Booleans require really just a bit, but make type of boolean_keys be
uint8_t (introducing new 'val8' name for it) and pass the data length
using sizeof() to avoid any magic numbers.
While being at it, fix len value in parser_json.c as well although it
worked before due to the value being rounded up to the next multiple of
8.
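A tiny standalone demonstration of why the declared length must match the variable's real size (generic C, not nft code):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	uint64_t wide = 1;	/* value kept in a too-wide variable */
	uint8_t byte = 1;	/* value kept with its real size */
	uint8_t payload[1];

	/* Serializing "one byte" from the wide variable copies whichever
	 * byte sits first in memory: 0x01 on little endian but 0x00 on
	 * big endian. Copying from a correctly sized variable is safe
	 * either way. */
	memcpy(payload, &wide, sizeof(payload));
	printf("from uint64_t: %u\n", payload[0]);

	memcpy(payload, &byte, sizeof(payload));
	printf("from uint8_t:  %u\n", payload[0]);
	return 0;
}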
Fixes: 9fd9baba43c8e ("Introduce boolean datatype and boolean expression")
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Florian Westphal <fw@strlen.de>
json: fix type mismatch on "ct expect" json exporting
The size field in the ct_expect struct should be parsed as a JSON integer and
not as a string. Also, the l3proto field is parsed as a string and not as an
integer. This was causing a segmentation fault when exporting "ct expect"
objects as JSON.
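A minimal jansson sketch of the distinction; the field names are illustrative and not nft's exact JSON schema:

#include <jansson.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* The numeric size belongs in a JSON integer ("i"), while l3proto
	 * is a protocol name and belongs in a JSON string ("s"); mixing
	 * the two up means printing a string through an integer slot. */
	json_t *obj = json_pack("{s:i, s:s, s:i}",
				"size", 12,
				"l3proto", "ip",
				"timeout", 300);
	char *out = json_dumps(obj, 0);

	puts(out);
	free(out);
	json_decref(obj);
	return 0;
}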
Fixes: 1dd08fcfa07a ("src: add ct expectations support")
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
==29699== Invalid read of size 8
==29699== at 0x507E140: ct_label_table_exit (ct.c:239)
==29699== by 0x5091877: nft_exit (libnftables.c:97)
==29699== by 0x5091877: nft_ctx_free (libnftables.c:297)
[...]
==29699== Address 0xb251008 is 136 bytes inside a block of size 352 free'd
==29699== at 0x4C2CDDB: free (vg_replace_malloc.c:530)
==29699== by 0x509186F: nft_ctx_free (libnftables.c:296)
[...]
==29699== Block was alloc'd at
==29699== at 0x4C2DBC5: calloc (vg_replace_malloc.c:711)
==29699== by 0x508C51D: xmalloc (utils.c:36)
==29699== by 0x508C51D: xzalloc (utils.c:65)
==29699== by 0x50916BE: nft_ctx_new (libnftables.c:151)
[...]
Release symbol tables before context object.
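A trivial sketch of the release ordering, with hypothetical names:

#include <stdlib.h>
#include <string.h>

struct demo_ctx {
	char *symbol_table;	/* stand-in for the per-context tables */
};

int main(void)
{
	struct demo_ctx *ctx = calloc(1, sizeof(*ctx));

	ctx->symbol_table = strdup("dynamic symbols");

	/* Members living inside the context must be released while the
	 * context is still valid; freeing ctx first and then touching
	 * ctx->symbol_table is exactly the use-after-free reported by
	 * valgrind above. */
	free(ctx->symbol_table);
	free(ctx);
	return 0;
}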
Fixes: 45cb29a2ada4 ("src: remove global symbol_table")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
netlink_delinearize: fix wrong conversion to "list" in ct mark
We only prefer the "list" representation for "ct event". For any other type
of "ct" expression, use the "or" representation so nft prints
"ct mark set ct mark | 0x00000001" instead of "ct mark set ct mark,0x00000001".
Link: https://bugzilla.netfilter.org/show_bug.cgi?id=1364
Fixes: cb8f81ac3079 ("netlink_delinearize: prefer ct event set foo,bar over 'set foo|bar'")
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Ander Juaristi [Thu, 29 Aug 2019 14:09:02 +0000 (16:09 +0200)]
meta: Introduce new conditions 'time', 'day' and 'hour'
These keywords introduce new checks for a timestamp, an absolute date (which
is converted to a timestamp), an hour of the day (which is converted to the
number of seconds since midnight) and a day of the week.
When converting an ISO date (e.g. 2019-06-06 17:00) to a timestamp,
we need to subtract from it the GMT offset in seconds, that is, the value
of the 'tm_gmtoff' field in the tm structure. This is because the kernel
doesn't know about time zones, and hence it manages different timestamps
than those advertised in userspace when running, for instance, date +%s.
The same conversion needs to be done when converting hours (e.g. 17:00) to
seconds since midnight.
The result needs to be computed modulo 86400 in case the GMT offset (the
difference in seconds from UTC) is negative.
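A rough, standalone sketch of the hour conversion just described, assuming glibc's tm_gmtoff extension; nft's actual implementation differs in detail:

#define _GNU_SOURCE		/* for tm.tm_gmtoff */
#include <stdio.h>
#include <time.h>

/* Convert a local wall-clock hour to seconds since midnight UTC, the
 * representation the kernel compares against. */
static unsigned int hour_to_kernel_seconds(int hh, int mm, int ss)
{
	time_t now = time(NULL);
	struct tm tm;
	long local = hh * 3600L + mm * 60L + ss;

	localtime_r(&now, &tm);
	/* Subtract the GMT offset; the double modulo keeps the result in
	 * 0..86399 whatever the sign of the offset. */
	return (unsigned int)(((local - tm.tm_gmtoff) % 86400 + 86400) % 86400);
}

int main(void)
{
	printf("17:00 local -> %u seconds since midnight UTC\n",
	       hour_to_kernel_seconds(17, 0, 0));
	return 0;
}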
We also introduce a new command line option (-t, --seconds) to show the actual
timestamps when printing the values, rather than the ISO dates, or the hour.
Some usage examples:
time < "2019-06-06 17:00" drop;
time < "2019-06-06 17:20:20" drop;
time < 12341234 drop;
day "Saturday" drop;
day 6 drop;
hour >= 17:00 drop;
hour >= "17:00:01" drop;
hour >= 63000 drop;
We need to convert an ISO date to a timestamp
without taking into account the time zone offset, since comparison will
be done in kernel space and there is no time zone information there.
Overwriting TZ is portable, but will cause problems when parsing a
ruleset that has 'time' and 'hour' rules. Parsing an 'hour' type must
not do time zone conversion, but that will be automatically done if TZ has
been overwritten to UTC.
Hence, we use timegm() to parse the 'time' type, even though it's not portable.
Overwriting TZ seems to be a much worse solution.
Finally, be aware that timestamps are converted to nanoseconds when
transferring to the kernel (as comparison is done with nanosecond
precision), and back to seconds when retrieving them for printing.
We swap left and right values in a range to properly handle
cross-day hour ranges (e.g. 23:15-03:22).
Signed-off-by: Ander Juaristi <a@juaristi.eus>
Reviewed-by: Florian Westphal <fw@strlen.de>
Ander Juaristi [Thu, 29 Aug 2019 14:09:01 +0000 (16:09 +0200)]
evaluate: New internal helper __expr_evaluate_range
This is used by the follow-up patch to evaluate a range without emitting
an error when the left value is larger than the right one.
This is done to handle time matching such as
23:00-01:00 -- expr_evaluate_range() will reject this, but
we want to be able to evaluate it and then handle it as a request
to match from 23:00 to 1am.
Signed-off-by: Ander Juaristi <a@juaristi.eus>
Signed-off-by: Florian Westphal <fw@strlen.de>
We don't need to check asciidoc output with xmllint because the XML
is generated by a tool, not written by a human. Moreover, xmllint can cause
problems because it will try to download the DTD, and that is problematic in
build systems with no network access.
Signed-off-by: Arturo Borrero Gonzalez <arturo@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Wed, 14 Aug 2019 11:45:19 +0000 (13:45 +0200)]
src: json: fix constant parsing on bigendian
json restore is broken on big endian because we erroneously
passed a uint8_t with a 64-bit size indicator.
On big endian, this causes all values to get shifted by 56 bits,
which then causes the eval step to bail because all values
are outside of the 8-bit 0-255 protocol range.
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 13 Aug 2019 20:12:46 +0000 (22:12 +0200)]
src: mnl: retry when we hit -ENOBUFS
tests/shell/testcases/transactions/0049huge_0
still fails with an ENOBUFS error after the endian fix done in the
previous patch. It's enough to increase the scale factor (4)
on s390x, but rather than continue with this "guess the proper
size" game, just increase the buffer size and retry up to 3 times.
This makes the above test work on s390x.
So, implement what Pablo suggested in the earlier commit:
We could also explore increasing the buffer and retrying if
mnl_nft_socket_sendmsg() hits ENOBUFS, if we ever hit this problem again.
v2: call setsockopt unconditionally, then increase on error.
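A simplified sketch of the retry idea (not the actual mnl.c change; the helper name and starting size are made up):

#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Illustrative helper only: retry a send up to three times, doubling
 * the socket send buffer whenever the kernel answers with ENOBUFS. */
ssize_t send_with_retry(int fd, const void *buf, size_t len)
{
	int bufsiz = 1 << 18;	/* made-up starting size */
	ssize_t ret = -1;
	int i;

	for (i = 0; i < 3; i++) {
		setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bufsiz, sizeof(bufsiz));
		ret = send(fd, buf, len, 0);
		if (ret >= 0 || errno != ENOBUFS)
			break;
		bufsiz *= 2;
	}
	return ret;
}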
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 13 Aug 2019 20:12:45 +0000 (22:12 +0200)]
src: parser: fix parsing of chain priority and policy on bigendian
tests/shell/testcases/flowtable/0001flowtable_0
tests/shell/testcases/nft-f/0008split_tables_0
fail the 'dump compare' on s390x.
The priority (10) turns into 0, and accept turns into drop.
The problem is that '$1' is a 64-bit value -- we then pass its address
and import an 'int', so we get the upper, all-zero bits.
Add a 32-bit integer type and use that.
v2: add a uint32_t type to the union; v1 used a temporary value instead.
Fixes: 627c451b2351 ("src: allow variables in the chain priority specification")
Fixes: dba4a9b4b5fe ("src: allow variable in chain policy")
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 13 Aug 2019 18:44:08 +0000 (20:44 +0200)]
src: mnl: fix setting rcvbuffer size
The kernel expects a socklen_t (int).
Using size_t causes the kernel to read the upper, all-zero bits.
This caused tests/shell/testcases/transactions/0049huge_0
to fail on s390x -- it uses 'echo' mode and will quickly
overrun the tiny buffer size set due to this bug.
M. Braun [Sun, 11 Aug 2019 10:16:03 +0000 (12:16 +0200)]
tests: add json test for vlan rule fix
This fixes
ERROR: did not find JSON equivalent for rule 'ether type vlan ip
protocol 1 accept'
when running
./nft-test.py -j bridge/vlan.t
Reported-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Michael Braun <michael-dev@fami-braun.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
add table ip foo
add chain ip foo bar { type filter hook input priority $prio; }
add chain ip foo ber { type filter hook input priority $prionum; }
add chain ip foo bor { type filter hook input priority $prioffset; }
Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Store symbol tables in the context object instead. Use the nft_ctx object to
store the dynamic symbol table. Pass it on to the parse_ctx object so it
can be accessed from the parse routines. This dynamic symbol table
is also accessible from the output_ctx object for print routines.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This object stores the dynamic symbol tables that are loaded from files.
Pass this object to the datatype parse functions. This new parameter is
not used yet; this is just a preparation patch.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
With the new cache system, nft raises a table error when flushing a chain
in a transaction.
# nft "flush chain ip nftlb filter-newfarm ; \
add rule ip nftlb filter-newfarm update \
@persist-newfarm { ip saddr : ct mark } ; \
flush chain ip nftlb nat-newfarm"
Error: No such file or directory
flush chain ip nftlb filter-newfarm ; add rule ip nftlb (...)
^^^^^
This patch sets the cache flag properly to handle this case.
Fixes: 01e5c6f0ed031 ("src: add cache level flags")
Signed-off-by: Laura Garcia Liebana <nevola@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Tue, 23 Jul 2019 12:30:39 +0000 (14:30 +0200)]
src: Call bison with -Wno-yacc to silence warnings
Bison 3.3 significantly increased warnings for POSIX incompatibilities;
it now complains about missing support for %name-prefix, %define,
%destructor and string literals. The latter applies to the parameter of
%name-prefix and all relevant %token statements.
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
Replace the last two as suggested but leave the first one in place as
that causes compilation errors in scanner.l - flex seems not to pick up
the changed internal symbol names.
Signed-off-by: Phil Sutter <phil@nwl.cc>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
cache: add NFT_CACHE_UPDATE and NFT_CACHE_FLUSHED flags
NFT_CACHE_FLUSHED tells cache_update() to skip the netlink dump to
populate the cache, since the existing ruleset is going to be flushed by
this batch.
NFT_CACHE_UPDATE tells rule_evaluate() to perform incremental updates to
the cache based on the existing batch; this is required by the rule
commands that use the index and the position selectors.
This patch removes cache_flush(), which is not required anymore. This
cache removal was happening too late, in the evaluation phase, after the
initial cache_update() invocation.
Be careful with NFT_CACHE_UPDATE: this flag needs to be left in place if
NFT_CACHE_FLUSHED is set.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Fedora 30 uses a very recent gcc (version 9.1.1 20190503 (Red Hat 9.1.1-1));
osf produces the following warnings.
The -Wformat-truncation warning was introduced in gcc version 7.1.
Also, remove an unneeded address check of "tmp + 1" in nf_osf_strchr().
nfnl_osf.c: In function ‘nfnl_osf_load_fingerprints’:
nfnl_osf.c:292:39: warning: ‘%s’ directive output may be truncated writing
up to 1023 bytes into a region of size 128 [-Wformat-truncation=]
292 | cnt = snprintf(obuf, sizeof(obuf), "%s,", pbeg);
| ^~
nfnl_osf.c:292:9: note: ‘snprintf’ output between 2 and 1025 bytes into a
destination of size 128
292 | cnt = snprintf(obuf, sizeof(obuf), "%s,", pbeg);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
nfnl_osf.c:302:46: warning: ‘%s’ directive output may be truncated writing
up to 1023 bytes into a region of size 32 [-Wformat-truncation=]
302 | cnt = snprintf(f.genre, sizeof(f.genre), "%s", pbeg);
| ^~
nfnl_osf.c:302:10: note: ‘snprintf’ output between 1 and 1024 bytes into a
destination of size 32
302 | cnt = snprintf(f.genre, sizeof(f.genre), "%s", pbeg);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
nfnl_osf.c:309:49: warning: ‘%s’ directive output may be truncated writing
up to 1023 bytes into a region of size 32 [-Wformat-truncation=]
309 | cnt = snprintf(f.version, sizeof(f.version), "%s", pbeg);
| ^~
nfnl_osf.c:309:9: note: ‘snprintf’ output between 1 and 1024 bytes into a
destination of size 32
309 | cnt = snprintf(f.version, sizeof(f.version), "%s", pbeg);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
nfnl_osf.c:317:47: warning: ‘%s’ directive output may be truncated writing
up to 1023 bytes into a region of size 32 [-Wformat-truncation=]
317 | snprintf(f.subtype, sizeof(f.subtype), "%s", pbeg);
| ^~
nfnl_osf.c:317:7: note: ‘snprintf’ output between 1 and 1024 bytes into a
destination of size 32
317 | snprintf(f.subtype, sizeof(f.subtype), "%s", pbeg);
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
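One common way to address such warnings is to check snprintf()'s return value for truncation; a generic sketch, not necessarily the actual nfnl_osf.c change:

#include <stdio.h>

int main(void)
{
	char genre[32];
	const char *pbeg = "a-possibly-very-long-fingerprint-field";
	int cnt;

	/* snprintf() never overflows the destination, but gcc 7.1+ warns
	 * that the source may not fit; checking the return value (or
	 * bounding the source) makes the truncation handling explicit. */
	cnt = snprintf(genre, sizeof(genre), "%s", pbeg);
	if (cnt < 0 || (size_t)cnt >= sizeof(genre))
		fprintf(stderr, "truncated to %zu bytes\n", sizeof(genre) - 1);
	return 0;
}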
This assertion is correct -- we can't linearize a prefix because the
kernel doesn't know what that is.
LHS prefixes get converted to a binary 'and' such as
'10.0.0.0 & 255.255.255.240'. For the RHS, we can do something similar
and convert them into a range.
snat to 10.0.0.0/28 will be converted into:
iifname "ens3" snat to 10.0.0.0-10.0.0.15
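For illustration, a standalone sketch of the prefix-to-range expansion (generic C, not nft's internal representation):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const char *prefix = "10.0.0.0";
	unsigned int plen = 28;
	struct in_addr in;
	uint32_t addr, mask;
	char first[INET_ADDRSTRLEN], last[INET_ADDRSTRLEN];

	inet_pton(AF_INET, prefix, &in);
	addr = ntohl(in.s_addr);
	mask = plen ? ~0u << (32 - plen) : 0;

	/* First address: prefix AND mask; last address: prefix OR the
	 * inverted mask -- exactly the range printed in the example. */
	in.s_addr = htonl(addr & mask);
	inet_ntop(AF_INET, &in, first, sizeof(first));
	in.s_addr = htonl(addr | ~mask);
	inet_ntop(AF_INET, &in, last, sizeof(last));
	printf("%s/%u -> %s-%s\n", prefix, plen, first, last);
	return 0;
}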
statement.c:930:11: error: ‘synproxy_stmt_json’ undeclared here (not in a function); did you mean ‘tproxy_stmt_json’?
.json = synproxy_stmt_json,
^~~~~~~~~~~~~~~~~~
tproxy_stmt_json
Fixes: 1188a69604c3 ("src: introduce SYNPROXY matching") Signed-off-by: Fernando Fernandez Mancera <ffmancera@riseup.net> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Forgot to add a json test case for the recently added ct ip addr in map case.
Fix up rawpayload.t for json; it needs to expect the new "th dport" when
listing.
Reported-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Florian Westphal <fw@strlen.de>
evaluate: missing basic evaluation of expectations
Basic ct expectation object evaluation. This fixes tests/py errors.
Error reporting is very sparse at this stage. I'm intentionally leaving
it as future work to store location objects for each field, so the user
gets a better indication of what is missing when configuring expectations.
# nft create table testD
# nft create chain testD test6
Error: No such file or directory
create chain testD test6
^^^^^
Handle the 'create' command just like 'add' and 'insert'. Check for object
types to dump the tables for more fine-grained listing, instead of dumping
the whole ruleset.
Fixes: 7df42800cf89 ("src: single cache_update() call to build cache before evaluation")
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
This is noticeable when displaying misspelling errors; however, there are
also a few spots not checking for the object map flag.
Before:
# nft flush set inet filter countermxx
Error: No such file or directory; did you mean set ‘countermap’ in table inet ‘filter’?
flush set inet filter countermxx
^^^^^^^^^^
After:
# nft flush set inet filter countermxx
Error: No such file or directory; did you mean map ‘countermap’ in table inet ‘filter’?
flush set inet filter countermxx
^^^^^^^^^^
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
proto: add pseudo th protocol to match d/sport in generic way
Problem: It's not possible to easily match both udp and tcp in a single
rule.
... input ip protocol { tcp,udp } dport 53
will not work, as bison expects "tcp dport" or "sctp dport", or any
other transport protocol name.
It's possible to match the sport and dport via raw payload expressions,
e.g.:
... input ip protocol { tcp,udp } @th,16,16 53
but it's not very readable.
Furthermore, it's not possible to use this for set definitions:
table inet filter {
set myset {
type ipv4_addr . inet_proto . inet_service
}
chain forward {
type filter hook forward priority filter; policy accept;
ip daddr . ip protocol . @th,0,16 @myset
}
}
# nft -f test
test:7:26-35: Error: can not use variable sized data types (integer) in concat expressions
During the netfilter workshop, Pablo suggested adding an alias to make raw
sport/dport matching more readable, and to make it use the inet_service
type automatically.
So, this change makes @th,0,16 work for the set definition case by
setting the data type to inet_service.
A new "th s|dport" syntax is provided as readable alternative:
ip protocol { tcp, udp } th dport 53
As "th" is an alias for the raw expression, no dependency is
generated -- its the users responsibility to add a suitable test to
select the l4 header types that should be matched.
Suggested-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Florian Westphal <fw@strlen.de>
Acked-by: Pablo Neira Ayuso <pablo@netfilter.org>
src/ct: provide fixed data length sizes for ip/ip6 keys
nft can load but not list this:
table inet filter {
chain input {
ct original ip daddr {1.2.3.4} accept
}
}
The problem is that the ct template length is 0, so we believe the right-hand
side is a concatenation because left->len < set->key->len is true.
nft then calls abort() during concatenation parsing.