# eBPF file containing a 'filter' function that will be inserted into the
# kernel and used as packet filter function
ebpf-filter-file: /usr/libexec/suricata/ebpf/vlan_filter.bpf
- use-mmap: yes
ring-size: 200000
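The file referenced above is compiled from a small C source. A minimal sketch
of such a 'filter' function could look as follows, loosely modeled on the
``vlan_filter.c`` shipped in Suricata's ``ebpf/`` directory (the VLAN id and
the keep/skip return values are illustrative assumptions, check the bundled
source for the exact convention)::

    #include <linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    /* Sketch: keep only VLAN 2 traffic on the capture socket.
     * Assumed convention: non-zero return keeps the packet, 0 skips it. */
    int SEC("filter") hashfilter(struct __sk_buff *skb)
    {
        __u16 vlan_id = skb->vlan_tci & 0x0fff;
        if (vlan_id == 2)
            return -1;
        return 0;
    }

    char __license[] SEC("license") = "GPL";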
You can then run Suricata normally; for example (the interface name and verbosity flag below are illustrative) ::
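
    suricata --af-packet=eth3 -vv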
# eBPF file containing a 'filter' function that will be inserted into the
# kernel and used as packet filter function
ebpf-filter-file: /usr/libexec/suricata/ebpf/bypass_filter.bpf
bypass: yes
- use-mmap: yes
ring-size: 200000
The constraints on eBPF code for a bypass-compliant filter are stricter than for regular filters. The
# eBPF file containing a 'loadbalancer' function that will be inserted into the
# kernel and used as load balancing function
ebpf-lb-file: /usr/libexec/suricata/ebpf/lb.bpf
- use-mmap: yes
ring-size: 200000
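The load balancing function is ordinary eBPF as well. A deliberately
simplified sketch follows; it returns the kernel flow hash, whereas the
bundled ``lb.c`` computes its own key from the packet headers so that
related packets can be grouped as needed (treat this as an illustration,
not the shipped code)::

    #include <linux/bpf.h>

    #define SEC(NAME) __attribute__((section(NAME), used))

    /* The kernel uses this return value, modulo the number of sockets in
     * the fanout group, to pick the capture thread for each packet. */
    int SEC("loadbalancer") lb(struct __sk_buff *skb)
    {
        return skb->hash;
    }

    char __license[] SEC("license") = "GPL";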
Setup XDP bypass
# if the eBPF filter implements a bypass function, you can set 'bypass' to
# yes and benefit from this feature
bypass: yes
- use-mmap: yes
ring-size: 200000
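To verify that packets are effectively bypassed, the flow counters can be
inspected over the unix socket, for instance with ``suricatasc`` (assuming
the unix socket is enabled and the ``ebpf-bypassed-stats`` command is
available in your version)::

    suricatasc -c ebpf-bypassed-stats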
# Uncomment the following if you are using hardware XDP with
# a card like Netronome (default value is yes)
xdp-mode: driver
xdp-filter-file: /usr/libexec/suricata/ebpf/xdp_lb.bpf
xdp-cpu-redirect: ["1-17"] # or ["all"] to load balance on all CPUs
- use-mmap: yes
ring-size: 200000
It is possible to use ``xdp_monitor`` to get information about the behavior of the CPU redirect. This program is available in the Linux source tree under the ``samples/bpf`` directory.
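It can typically be built and run from a kernel source checkout (location
and options vary across kernel versions)::

    cd samples/bpf
    make
    sudo ./xdp_monitor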
cluster-id: 99
cluster-type: cluster_qm
defrag: no
- use-mmap: yes
mmap-locked: yes
tpacket-v3: yes
ring-size: 100000
cluster-id: 99
cluster-type: cluster_qm
defrag: no
- use-mmap: yes
mmap-locked: yes
tpacket-v3: yes
ring-size: 100000
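With ``cluster_qm`` the load balancing is done per RSS queue of the NIC, so
the number of configured queues should match the number of capture threads.
On most NICs the queue count can be adjusted with ``ethtool`` (device name
and count below are examples)::

    ethtool -L eth0 combined 16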
cluster-id: 99
cluster-type: cluster_flow
defrag: no
- use-mmap: yes
mmap-locked: yes
tpacket-v3: yes
ring-size: 100000
....
- use-mmap: yes
tpacket-v3: yes
ring-size
cluster-id: 99
cluster-type: cluster_flow
defrag: yes
- use-mmap: yes
tpacket-v3: yes
This configuration uses the most recent recommended settings for the IDS
copy-mode: ips
copy-iface: eth1
buffer-size: 64535
- use-mmap: yes
- interface: eth1
threads: 1
cluster-id: 97
copy-mode: ips
copy-iface: eth0
buffer-size: 64535
- use-mmap: yes
This is a basic af-packet configuration using two interfaces. Interface
``eth0`` will copy all received packets to ``eth1`` because of the ``copy-*``
configuration settings.
There are some important points to consider when setting up this mode:
-- The implementation of this mode is dependent of the zero copy mode of
- AF_PACKET. Thus you need to set `use-mmap` to `yes` on both interface.
- The MTU on both interfaces has to be equal: the copy from one interface to
  the other is direct and packets bigger than the MTU will be dropped by the
  kernel (see the example after this list).
- Set different values of `cluster-id` on both interfaces to avoid conflict.
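For the MTU point above, both interfaces can be aligned explicitly with
``ip`` (interface names and the value are examples)::

    ip link set eth0 mtu 1500
    ip link set eth1 mtu 1500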
copy-mode: ips
copy-iface: eth1
buffer-size: 64535
- use-mmap: yes
- interface: eth1
threads: 16
cluster-id: 97
copy-mode: ips
copy-iface: eth0
buffer-size: 64535
- use-mmap: yes
The eBPF file ``/usr/libexec/suricata/ebpf/lb.bpf`` may not be present on disk.
See :ref:`ebpf-xdp` for more information.
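If the file is missing, Suricata was most likely built without eBPF support.
The bundled eBPF programs are only compiled and installed when the dedicated
configure flags are set (a clang toolchain is required)::

    ./configure --enable-ebpf --enable-ebpf-build
    make && make install-full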
}
}
- if (ConfGetChildValueBoolWithDefault(if_root, if_default, "use-mmap", &boolval) == 1) {
- if (!boolval) {
- SCLogWarning(
- "%s: \"use-mmap\" option is obsolete: mmap is always enabled", aconf->iface);
- }
- }
-
(void)ConfGetChildValueBoolWithDefault(if_root, if_default, "mmap-locked", &boolval);
if (boolval) {
SCLogConfig("%s: enabling locked memory for mmap", aconf->iface);
# In some fragmentation cases, the hash can not be computed. If "defrag" is set
# to yes, the kernel will do the needed defragmentation before sending the packets.
defrag: yes
- # To use the ring feature of AF_PACKET, set 'use-mmap' to yes
- #use-mmap: yes
# Lock memory map to avoid it being swapped. Be careful that over
# subscribing could lock your system
#mmap-locked: yes
- # Use tpacket_v3 capture mode, only active if use-mmap is true
+ # Use tpacket_v3 capture mode.
# Don't use it in IPS or TAP mode as it causes severe latency
#tpacket-v3: yes
# Ring size will be computed with respect to "max-pending-packets" and number
# of threads.
# Put default values here. These will be used for an interface that is not
# in the list above.
- interface: default
#threads: auto
- #use-mmap: no
#tpacket-v3: yes
# Linux high speed af-xdp capture support