If the cache does not contain this object that is defined in this batch,
add it to the cache. This allows for references to this new object in
the same batch.
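For illustration, a batch along these lines (table, counter and port are just placeholders) declares a named counter and references it from a rule in the same file; with the cache update, the reference resolves:

table inet filter {
    counter http_hits {
        packets 0 bytes 0
    }
    chain input {
        type filter hook input priority filter; policy accept;
        tcp dport 80 counter name "http_hits"
    }
}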
This patch also adds the missing handle_merge() to set the object name;
otherwise the object name is NULL and obj_cache_find() crashes.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
If the cache does not contain this flowtable that is defined in this
batch, then add it to the cache. This allows for references to this new
flowtable in the same batch.
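For illustration (device and names are made up), a batch of this shape declares a flowtable and references it from a rule in the same file:

table inet filter {
    flowtable ft {
        hook ingress priority 0; devices = { eth0 };
    }
    chain forward {
        type filter hook forward priority filter; policy accept;
        ip protocol tcp flow add @ft
    }
}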
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
If the cache does not contain the set that is defined in this batch, add
it to the cache. This allows for references to this new set in the same
batch.
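Again for illustration (names and address are placeholders), a set declared and referenced within the same batch:

table inet filter {
    set blocklist {
        type ipv4_addr
        elements = { 192.0.2.1 }
    }
    chain input {
        type filter hook input priority filter; policy accept;
        ip saddr @blocklist drop
    }
}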
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Actually, I am not expecting that many flowtables, so they will hardly
benefit from the hashtable; it is created to streamline this code with
tables, chains, sets and policy objects.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
You can identify chains through the unique handle in deletions. Update
this interface to take a string instead of the handle to prepare for
the introduction of chain lookups by 64-bit handle.
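For context, this is the kind of deletion the nft command line already exposes; a hypothetical session (handle numbers invented) looks like:

# nft -a list ruleset
table ip filter { # handle 1
    chain test { # handle 3
    }
}
# nft delete chain ip filter handle 3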
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
evaluate: check if nat statement map specifies a transport header expr
Importing the systemd nat table fails:
table ip io.systemd.nat {
    map map_port_ipport {
        type inet_proto . inet_service : ipv4_addr . inet_service
        elements = { tcp . 8088 : 192.168.162.117 . 80 }
    }
    chain prerouting {
        type nat hook prerouting priority dstnat + 1; policy accept;
        fib daddr type local dnat ip addr . port to meta l4proto . th dport map @map_port_ipport
    }
}
ruleset:9:48-59: Error: transport protocol mapping is only valid after transport protocol match
To resolve this (no transport header base specified), check if the
map itself contains a network base protocol expression.
This allows nft to import the ruleset.
The import still fails with the same error if 'inet_service' is removed
from the map, as it should.
Another process might race to add chains after chain_cache_init().
The generation check does not help since it comes after cache_init().
NLM_F_DUMP_INTR only guarantees consistency within one single netlink
dump operation, so it does not help either (cache population requires
several netlink dump commands).
Let's be safe and not assume that the chain exists in the cache when
populating the rule cache.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
- Chains that reside in the cache are stored in the new
tables->cache_chain and tables->cache_chain_ht. The hashtable chain
cache allows for fast chain lookups.
- Chains that are defined via command line / ruleset file reside in
tables->chains.
Note that chains in the cache (already in the kernel) are not placed in
table->chains.
By keeping separate lists, chains defined via command line / ruleset
file can be added to the cache.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 30 Mar 2021 23:26:19 +0000 (01:26 +0200)]
netlink: don't crash when set elements are not evaluated as expected
define foo = 2001:db8:123::/48
table inet filter {
    set foo {
        typeof ip6 saddr
        elements = $foo
    }
}
gives a crash. This now exits with:
stdin:1:14-30: Error: Unexpected initial set type prefix
define foo = 2001:db8:123::/48
^^^^^^^^^^^^^^^^^
For literals, the bison parser protects us, as it enforces
'elements = { 2001:... '.
For 'elements = $foo' we can't detect this at the parsing stage, as the
'$foo' symbol might as well evaluate to "{ 2001, ...}" (i.e. we can't do
a set element allocation).
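For comparison (this variant is not part of the original report; a single address is used here to keep the set non-interval), defining the variable as a set literal with braces is the form that evaluation accepts:

define foo = { 2001:db8:123::1 }

table inet filter {
    set foo {
        typeof ip6 saddr
        elements = $foo
    }
}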
Add an alternative way to print datatype values when no symbol table is
available. Use it to print the protocols available via getprotobynumber(),
which actually refers to /etc/protocols.
This is not very efficient: getprotobynumber() causes a series of
open()/close() calls on /etc/protocols, but this is called from a
non-critical path.
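The same database can be checked from the shell, e.g. (exact output formatting depends on the libc):

$ getent protocols 6
tcp                   6 TCP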
Closes: https://bugzilla.netfilter.org/show_bug.cgi?id=1503
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Simon Ruderich [Tue, 9 Mar 2021 10:53:30 +0000 (11:53 +0100)]
doc: use symbolic names for chain priorities
This replaces the numbers with the matching symbolic names, with one
exception: the NAT example used "priority 0" for the prerouting
priority. This is replaced by "dstnat", which has priority -100 and is
the new recommended priority.
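For example, the prerouting line in the NAT example changes roughly as follows ("dstnat" resolves to -100; surrounding lines trimmed):

before: type nat hook prerouting priority 0; policy accept;
after:  type nat hook prerouting priority dstnat; policy accept;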
Also use spaces instead of tabs for consistency in lines which require
updates.
Signed-off-by: Simon Ruderich <simon@ruderich.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Florian Westphal [Tue, 16 Mar 2021 23:40:34 +0000 (00:40 +0100)]
scanner: add support for scope nesting
Adding a COUNTER scope introduces parsing errors. Example:
add rule ... counter ip saddr 1.2.3.4
This is supposed to be
COUNTER IP SADDR SYMBOL
but it will be parsed as
COUNTER IP STRING SYMBOL
... and the rule fails with an unknown 'saddr'.
This is because the IP state change gets popped right after it was pushed.
The bison parser invokes the scanner_pop_start_cond() helper via the
'close_scope_counter' rule after it has processed the entire 'counter' rule.
But that happens *after* flex has executed the 'IP' rule.
IOW, the sequence of events is not the expected
"COUNTER close_scope_counter IP SADDR SYMBOL close_scope_ip", it is
"COUNTER IP close_scope_counter".
close_scope_counter pops the just-pushed SCANSTATE_IP and returns the
scanner to SCANSTATE_COUNTER, so the next input token (saddr) gets parsed
as a string, which is then rejected by bison.
To resolve this, defer the pop operation until the current state is done.
scanner_pop_start_cond() already gets the scope that has been completed
as an argument, so we can compare it to the active state.
If those are not the same, just defer the pop operation until bison
reports it is done with the active flex scope.
This leads to following sequence of events:
1. flex switches to SCANSTATE_COUNTER
2. flex switches to SCANSTATE_IP
3. bison calls scanner_pop_start_cond(SCANSTATE_COUNTER)
4. flex remains in SCANSTATE_IP, bison continues
5. bison calls scanner_pop_start_cond(SCANSTATE_IP) once the entire
ip rule has completed: this pops both IP and COUNTER.
Florian Westphal [Thu, 11 Mar 2021 13:23:02 +0000 (14:23 +0100)]
scanner: ct: move to own scope
This allows moving multiple ct-specific keywords out of the INITIAL scope.
The next few patches follow the same pattern:
1. add a scope_close_XXX rule
2. add a SCANSTATE_XXX and make flex switch to it when
encountering the XXX keyword
3. make bison leave SCANSTATE_XXX when it has seen the complete
expression.
nftables: xt: fix misprint in nft_xt_compatible_revision
The rev variable is used here instead of opt, obviously by mistake.
Please see iptables:nft_compatible_revision() for an example of how it
should be done.
This breaks revision compatibility checks completely when reading
compat-target rules from the nft utility. That's why nftables can't work
on "old" kernels which don't support new revisions. That's a problem for
containers.
E.g.: revisions 0 and 1 are supported but not 2:
https://git.sw.ru/projects/VZS/repos/vzkernel/browse/net/netfilter/xt_nat.c#111
Reproduction of the problem on Virtuozzo 7 kernel
3.10.0-1160.11.1.vz7.172.18 in a CentOS 8 container:
iptables-nft -t nat -N TEST
iptables-nft -t nat -A TEST -j DNAT --to-destination 172.19.0.2
nft list ruleset > nft.ruleset
nft -f - < nft.ruleset
#/dev/stdin:19:67-81: Error: Range has zero or negative size
# meta l4proto tcp tcp dport 81 counter packets 0 bytes 0 dnat to 3.0.0.0-0.0.0.0
# ^^^^^^^^^^^^^^^
But nft reads this as the rev 2 format (nf_nat_range2), which does not
have rangesize, and thus flags 3 is treated as the IP 3.0.0.0, which is
wrong and can't be restored later.
(It should probably be the same on the CentOS 7 kernel 3.10.0-1160.11.1.)
Fixes: fbc0768cb696 ("nftables: xt: don't use hard-coded AF_INET")
Signed-off-by: Pavel Tikhomirov <ptikhomirov@virtuozzo.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Thu, 4 Feb 2021 01:20:23 +0000 (02:20 +0100)]
mnl: Set NFTNL_SET_DATA_TYPE before dumping set elements
In combination with libnftnl's commit "set_elem: Fix printing of verdict
map elements", this adds the vmap target to netlink dumps. Adjust dumps
in tests/py accordingly.
Simon Ruderich [Sun, 7 Mar 2021 09:51:35 +0000 (10:51 +0100)]
doc: remove duplicate tables in synproxy example
The "outcome ruleset" is the same as the two tables in the example.
Don't duplicate this information, which just wastes space in the
documentation and can confuse the reader (it took me a while to realize
that the tables are the same).
In addition, use the same table name for both tables to make it clear
that they can be the same. They will be merged in the resulting ruleset.
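For illustration (names and chain contents are placeholders), two blocks sharing a table name in one file load into a single merged table:

table inet x {
    chain prerouting {
        type filter hook prerouting priority raw; policy accept;
    }
}
table inet x {
    chain input {
        type filter hook input priority filter; policy accept;
    }
}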
Signed-off-by: Simon Ruderich <simon@ruderich.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
nft_mnl_socket_reopen() was introduced to deal with the EINTR case.
By reopening the netlink socket, pending netlink messages that are part of
a stale netlink dump are implicitly dropped. This patch replaces the
nft_mnl_socket_reopen() strategy by pulling out all of the remaining
netlink messages to restart in a clean state.
This is implicitly fixing up a bug in the table ownership support, which
assumes that the netlink socket remains open until nft_ctx_free() is
invoked.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Add a new flag to allow a userspace process to own tables: tables that
have an owner can only be updated/destroyed by the owner. The table is
destroyed either when the owner process calls nft_ctx_free() or when the
owner process terminates (implicit table release).
The ruleset listing includes the program name that owns the table:
nft> list ruleset
table ip x { # progname nft
    flags owner
    chain y {
        type filter hook input priority filter; policy accept;
        counter packets 1 bytes 309
    }
}
The original code to pretty-print the netlink portID as a program name
has been extracted from the conntrack userspace utility.
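A hypothetical interactive session (table name made up; the exact syntax for setting the flag is assumed here) that creates an owned table, which disappears once this nft process exits:

# nft -i
nft> add table ip x { flags owner; }
nft> list ruleset
table ip x { # progname nft
    flags owner
}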
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Fixes: 719e44277f8e ("main: use one data-structure to initialize getopt_long(3) arguments and help.")
Cc: Jeremy Sowden <jeremy@azazel.net>
Signed-off-by: Štěpán Němec <snemec@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Phil Sutter [Wed, 17 Feb 2021 11:38:42 +0000 (12:38 +0100)]
monitor: Don't print newgen message with JSON output
If this should be printed at all, it must adhere to the output format
settings. In its current form it breaks JSON syntax, so skip it for
non-default output formats.
Fixes: cb7e02f44d6a6 ("src: enable json echo output when reading native syntax")
Signed-off-by: Phil Sutter <phil@nwl.cc>