Victor Julien [Wed, 9 Jul 2014 06:51:29 +0000 (08:51 +0200)]
packet recycle: do most clean up on packet reuse
Call PACKET_RELEASE_REFS from PacketPoolGetPacket() so that
we only access the large packet structure just before actually
using it. Should give better cache behaviour.
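A minimal sketch of the idea, assuming the Packet type and the PACKET_RELEASE_REFS macro from decode.h; the pool layout shown here is illustrative, not the actual PktPool:

    #include "decode.h"   /* assumed: provides Packet and PACKET_RELEASE_REFS */

    /* Sketch only: the pool below is a plain array stack; the real PktPool
     * also has a locked return stack and a fallback allocation path. */
    typedef struct SketchPool_ {
        Packet *stack[1024];
        int top;
    } SketchPool;

    static Packet *PacketPoolGetPacketSketch(SketchPool *pool)
    {
        if (pool->top == 0)
            return NULL;
        Packet *p = pool->stack[--pool->top];

        /* Drop flow/host references only now, right before the packet is
         * handed out, so the large Packet structure is touched when it is
         * about to be used anyway: better cache behaviour than doing this
         * work at return time. */
        PACKET_RELEASE_REFS(p);
        return p;
    }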
Ken Steele [Wed, 5 Feb 2014 23:00:19 +0000 (18:00 -0500)]
Fix GRE Source Routing Header definition
The Source Routing Header had routing defined as a char* for a field
of variable size. Since that field was not being used in the code, I
removed the pointer and added a comment.
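For illustration, a hedged sketch of what the resulting definition looks like; field names are approximate rather than copied verbatim from the decoder source:

    #include <stdint.h>

    /* Illustrative sketch of the GRE Source Route Entry header. */
    typedef struct GRESreHdrSketch_ {
        uint16_t af;          /* address family */
        uint8_t  sre_offset;
        uint8_t  sre_length;
        /* Variable-length routing information follows in the packet data.
         * It used to be declared as a char * here; since the decoder never
         * reads it, it is now only documented by this comment. */
    } __attribute__((__packed__)) GRESreHdrSketch;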
Ken Steele [Fri, 20 Dec 2013 16:52:12 +0000 (11:52 -0500)]
Add Packed attribute on Header structures
Structures that are used to cast packet data into fields need to be
packed so that the compiler doesn't add any padding between the fields.
On Tile-Gx this also avoids problems with unaligned loads, because the
compiler will insert code to handle possibly unaligned accesses.
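For example, a header struct that overlays raw packet bytes would be declared along these lines (a generic sketch, not one of the actual Suricata headers):

    #include <stdint.h>

    /* Generic sketch: packing makes the struct layout match the on-wire
     * layout byte for byte, with no compiler-inserted padding. On Tile-Gx
     * it also makes the compiler emit code that copes with unaligned
     * accesses to fields like 'len' and 'id'. */
    typedef struct ExampleHdr_ {
        uint8_t  type;
        uint8_t  flags;
        uint16_t len;   /* may sit at an unaligned offset in the buffer */
        uint32_t id;
    } __attribute__((__packed__)) ExampleHdr;

    /* Typical use: cast directly into the raw packet bytes, e.g.
     * const ExampleHdr *hdr = (const ExampleHdr *)pkt_data; */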
Victor Julien [Mon, 11 Aug 2014 12:14:59 +0000 (14:14 +0200)]
lua: improve configure checks
The base 'lua' library has different names on different OSes and even
Linux distros. Instead of selecting the proper one, we now simply try
them all. This way no OS/distro specific knowledge about the name is
needed.
Victor Julien [Wed, 16 Jul 2014 22:23:50 +0000 (00:23 +0200)]
stream: detect and filter out bad window updates
Reported in bug 1238 is an issue where stream reassembly can be
disrupted.
A packet that was in-window but otherwise unexpected set the window to
a very low value, causing the next *expected* packet to be considered
out of window. This led to missing data in the stream reassembly.
The packet was unexpected in various ways:
- it would ack unseen traffic
- its sequence number would not match the expected next_seq
- it set a very low window, while not being a proper window update
Detection, however, is greatly hampered by the fact that in case of
packet loss quite similar packets come in. Alerting in that case is
unwanted, and so is ignoring/skipping the packets.
The logic used in this patch is as follows. If:
- the packet is not a window update AND
- packet seq > next_seq AND
- packet ack > next_seq (packet acks unseen data) AND
- the packet shrinks the window by more than its own data size
THEN set an event and skip the packet in the stream engine.
So in the case of a segment with no data, any window shrinking is
rejected.
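A sketch of that check, with illustrative parameter names standing in for the TCP header fields and stream state the real code reads:

    #include <stdint.h>

    /* Sketch of the check; SEQ_GT copes with 32-bit sequence wrap-around. */
    #define SEQ_GT(a, b) ((int32_t)((a) - (b)) > 0)

    static int StreamTcpIsBadWindowUpdate(uint32_t pkt_seq, uint32_t pkt_ack,
                                          uint32_t pkt_win, uint32_t payload_len,
                                          uint32_t next_seq, uint32_t cur_win,
                                          int is_window_update)
    {
        if (!is_window_update &&
            SEQ_GT(pkt_seq, next_seq) &&        /* seq beyond expected next_seq */
            SEQ_GT(pkt_ack, next_seq) &&        /* acks data we have not seen */
            cur_win > pkt_win &&
            (cur_win - pkt_win) > payload_len)  /* shrinks the window by more
                                                   than its own payload size */
        {
            return 1;   /* set event, skip the packet in the stream engine */
        }
        return 0;
    }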
Victor Julien [Tue, 5 Aug 2014 15:28:17 +0000 (17:28 +0200)]
defrag: use 'struct timeval' for timeout tracking
Until now the timeout handling in defrag was done using a single
uint32_t that tracked seconds. This led to corner cases where defrag
trackers could be timed out a little too early.
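In essence the timeout check moves from whole seconds to a full timeval comparison, roughly like this (sketch with illustrative names):

    #include <sys/time.h>

    /* Sketch: with only seconds tracked, a tracker created at t=10.9s with a
     * 30s timeout already looked expired at t=40.1s (40 >= 10 + 30), i.e.
     * after only 29.2s. Comparing full timevals avoids that early expiry. */
    static int DefragTrackerTimedOutSketch(const struct timeval *deadline,
                                           const struct timeval *now)
    {
        /* expired only when 'now' is strictly past the deadline */
        return timercmp(now, deadline, >);
    }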
Victor Julien [Thu, 24 Jul 2014 11:39:10 +0000 (13:39 +0200)]
ipv6 defrag: fix unfragmentable exthdr handling
Fix or rather implement handling of unfragmentable exthdrs in ipv6.
The exthdr(s) appearing before the frag header were copied into the
reassembled packet correctly, however the stripping of the frag header
did not work correctly.
Example:
The common case is a frag header directly after the ipv6 header:
[ipv6 header]->[frag header]->[icmpv6 (fragment)]
With an unfragmentable extension header, e.g. a hop-by-hop header, in
front of the frag header, the reassembled result would be:
[ipv6 header]->[hop header]->[icmpv6]
However, here too the ipv6 header would have been updated to point
to what the frag header pointed at. So it would consider the hop header
as if it was an ICMPv6 header, or whatever the frag header pointed at.
The result is that packets would not be correctly parsed, and thus this
issue can lead to evasion.
This patch implements handling of the unfragmentable part. In the
first segment that is stored in the list for reassembly, it detects the
unfragmentable headers and updates the last unfragmentable header to
point to the layer after the frag header.
Also, the ipv6 header's 'next hdr' field is only updated if no
unfragmentable headers are present. If they are, the original value is
already correct.
Reported-By: Rafael Schaefer <rschaefer@ernw.de>
Bug #1244.
Eric Leblond [Tue, 29 Jul 2014 08:05:23 +0000 (10:05 +0200)]
unittests: don't register app layer test
Some tests are already registered via the function
AppLayerParserRegisterProtocolUnittests, so we don't need to register
them again during runmode initialization.
Ken Steele [Thu, 3 Jul 2014 16:42:12 +0000 (12:42 -0400)]
Reduce reallocation in AC Tile MPM creation.
Exponentially increase the memory allocated for new states while
states are being added, then at the end resize down to the actual final
size so that no space is wasted.
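The allocation pattern is the usual grow-by-doubling followed by a final shrink-to-fit; a generic sketch of it (not the actual mpm-ac-tile code):

    #include <stdlib.h>
    #include <string.h>

    /* Generic sketch: grow the state table exponentially while states are
     * added, then shrink it to the exact size once construction is done. */
    typedef struct StateTable_ {
        void  *states;
        size_t state_size;   /* bytes per state */
        size_t count;
        size_t capacity;
    } StateTable;

    static int StateTableAdd(StateTable *t)
    {
        if (t->count == t->capacity) {
            size_t new_cap = (t->capacity == 0) ? 64 : t->capacity * 2;
            void *tmp = realloc(t->states, new_cap * t->state_size);
            if (tmp == NULL)
                return -1;
            /* zero only the newly added region */
            memset((char *)tmp + t->capacity * t->state_size, 0,
                   (new_cap - t->capacity) * t->state_size);
            t->states = tmp;
            t->capacity = new_cap;
        }
        t->count++;
        return 0;
    }

    static void StateTableFinalize(StateTable *t)
    {
        /* resize down to the final size so no space is wasted */
        void *tmp = realloc(t->states, t->count * t->state_size);
        if (tmp != NULL)
            t->states = tmp;
        t->capacity = t->count;
    }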
Ken Steele [Tue, 29 Jul 2014 13:31:49 +0000 (09:31 -0400)]
Fix Packet Stacks for non-TLS Operating Systems
On non-TLS systems, check each time the Thread Local Storage is
requested whether it has been initialized for this thread, and if not,
initialize it. This prevents the worker threads from being left
uninitialized in the autofp run mode.
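On platforms without __thread support this kind of lazy initialization can be done with pthread keys; a sketch under that assumption, with illustrative names:

    #include <pthread.h>
    #include <stdlib.h>

    /* Sketch: lazily set up the per-thread storage when __thread TLS is not
     * available. Names and the storage size are illustrative. */
    static pthread_key_t pool_key;
    static pthread_once_t pool_once = PTHREAD_ONCE_INIT;

    static void PoolKeyCreate(void)
    {
        pthread_key_create(&pool_key, free);
    }

    static void *GetThreadStorage(void)
    {
        pthread_once(&pool_once, PoolKeyCreate);

        void *storage = pthread_getspecific(pool_key);
        if (storage == NULL) {
            /* first use in this thread, e.g. an autofp worker that never ran
             * an explicit init: initialize it now */
            storage = calloc(1, 4096 /* illustrative size */);
            pthread_setspecific(pool_key, storage);
        }
        return storage;
    }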
Handle misc tasks only in instance 1: defrag hash timeout handling,
host hash timeout handling and flow spare queue updating are done only
from the first instance.
Victor Julien [Wed, 16 Jul 2014 07:59:48 +0000 (09:59 +0200)]
threads: add management API
Currently management threads do their own thread setup and handling. This
patch introduces a new way of handling management threads.
Functionality that needs to run as a management thread can now register
itself as a regular 'thread module' (TmModule), where the 'Management'
callback is registered.
Victor Julien [Fri, 23 May 2014 12:54:05 +0000 (14:54 +0200)]
flow: add flow_end_flags field, add logging
The flow end flags field is filled by the flow manager or the flow
hash (in case of forced timeout of a flow) to record the timeout
conditions in the flow:
- emergency mode
- state
- reason (timed out or forced)
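Conceptually the field is a small set of bit flags along these lines; the flag names and values below are indicative only, not the actual definitions:

    /* Sketch: flow_end_flags as a small bit field recording how the flow
     * ended. Flag names and values here are indicative only. */
    #define FLOW_END_FLAG_EMERGENCY          0x01  /* engine was in emergency mode */
    #define FLOW_END_FLAG_TIMEOUT            0x02  /* reason: timed out */
    #define FLOW_END_FLAG_FORCED             0x04  /* reason: forced reuse */
    #define FLOW_END_FLAG_STATE_NEW          0x08  /* state when it ended */
    #define FLOW_END_FLAG_STATE_ESTABLISHED  0x10
    #define FLOW_END_FLAG_STATE_CLOSED       0x20

    /* e.g. flow manager timing out an established flow:
     * f->flow_end_flags |= FLOW_END_FLAG_TIMEOUT | FLOW_END_FLAG_STATE_ESTABLISHED; */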
Victor Julien [Thu, 22 May 2014 10:53:51 +0000 (12:53 +0200)]
flow-recycler: speed up flow-recycler shutdown
The thread was killed by the generic TmThreadKillThreads instead of
FlowKillFlowRecyclerThread. The latter wakes the thread up, so that
shutdown is quite a bit faster.
Victor Julien [Wed, 21 May 2014 12:29:15 +0000 (14:29 +0200)]
stream: track TCP flags per stream direction
For netflow logging, track TCP flags per stream direction. As the
struct had no more space left without expanding it, the flags and
wscale fields are now compressed.
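The space saving comes from packing the window scale (which fits in 4 bits) and the flags into bit fields, roughly like this illustrative layout (not the actual TcpStream definition):

    #include <stdint.h>

    /* Illustrative only: the window scale (0-14) fits in 4 bits, so it can
     * share a 16-bit word with the stream flags, leaving room for the new
     * per-direction TCP flags byte without growing the struct. */
    typedef struct StreamDirSketch_ {
        uint16_t flags:12;     /* stream state flags */
        uint16_t wscale:4;     /* TCP window scale for this direction */
        uint8_t  tcp_flags;    /* cumulative TCP flags seen, for netflow */
    } StreamDirSketch;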
Victor Julien [Fri, 9 May 2014 12:37:07 +0000 (14:37 +0200)]
flow: prepare flow forced reuse logging
Most flows are marked for clean up by the flow manager, which then
passes them to the recycler. The recycler logs and cleans up. However,
under resource stress conditions, the packet threads can recycle an
existing flow directly. In that case the recycler has no role to play,
as the flow is immediately reused.
For this reason, the packet threads need to be able to invoke the
flow logger directly.
The flow logging thread ctx will be stored in the DecodeThreadVars
structure. Therefore, this patch makes the DecodeThreadVars an argument
to FlowHandlePacket.
Alexander Gozman [Fri, 18 Jul 2014 09:38:03 +0000 (13:38 +0400)]
Fixed variable names in suricata.yaml.in.
Changed logging logic: it is now possible to enable different payload
dumping modes separately.
Fixed a bug in dumping a packet without stream segments.
Fixed indents.
Victor Julien [Fri, 25 Jul 2014 15:41:34 +0000 (17:41 +0200)]
Fix engine getting stuck because of optimizations
At -O1 and higher, both GCC and Clang would optimize the wait loop in
PacketPoolWait in the wrong way. Add a compiler barrier to prevent this
optimization issue.
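The barrier is the classic empty asm statement with a memory clobber; in a wait loop it works roughly like this sketch:

    /* Sketch: without the barrier, at -O1+ the compiler may read the flag
     * once, hoist the check out of the loop and spin on a stale value. */
    #define cc_barrier() __asm__ __volatile__("" ::: "memory")

    static int pool_has_packets;   /* illustrative; set by another thread */

    static void PacketPoolWaitSketch(void)
    {
        while (!pool_has_packets) {
            cc_barrier();   /* forces pool_has_packets to be re-read */
        }
    }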
Victor Julien [Fri, 25 Jul 2014 11:47:59 +0000 (13:47 +0200)]
Fix pcap packet acquisition methods
Fix the pcap packet acquisition methods passing 0 to pcap_dispatch.
Previously they passed the packet pool size, but the packet_q_len
variable had since been hardcoded to 0.
This patch sets packet_q_len to 64. If the packet pool is empty, we
fall back to direct allocation. As pcap_dispatch is only called when
the packet pool is not empty, we allocate at most 63 packets.
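With packet_q_len back at a non-zero value, the dispatch call is along these lines (sketch; error handling and the break-out condition are omitted):

    #include <pcap.h>

    /* Sketch: request up to packet_q_len packets per pcap_dispatch() call.
     * With packet_q_len = 64 and the "pool not empty" precondition, at most
     * 63 packets can need the direct-alloc fallback. */
    static int PcapReadLoopSketch(pcap_t *handle, pcap_handler cb, u_char *user)
    {
        const int packet_q_len = 64;

        for (;;) {
            int r = pcap_dispatch(handle, packet_q_len, cb, user);
            if (r < 0)
                return -1;   /* pcap error or pcap_breakloop() */
        }
    }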
Ken Steele [Mon, 30 Jun 2014 19:12:38 +0000 (15:12 -0400)]
For PktPool add local pending freed packets list.
Better handle the autofp case where one thread allocates the majority
of the packets and other threads free those packets.
Add a list of locally pending packets. The first freed packet goes on
the pending list; subsequent freed packets for the same Packet Pool are
added to this list until it reaches a fixed number of packets, at which
point the entire list is pushed onto the pool's return stack. If a
freed packet is not for the pending pool, it is freed immediately to
its pool's return stack, as before.
For the autofp case, since there is only one Packet Pool doing all the
allocation, every other thread will keep a list of pending packets for
that pool.
For the worker run mode, most packets are allocated and freed locally.
For the case where packets are being returned to a remote pool, a
pending list is kept for one of those other threads; all others are
returned as before.
The remote pool for which a pending list is kept changes each time the
pending list is returned: since the pending pool is cleared when its
list is returned, the next packet to be freed chooses the new pending
pool.
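A simplified sketch of the bookkeeping on the free path; the types, names and batch size are illustrative, not the actual implementation:

    #include <pthread.h>
    #include <stddef.h>

    #define MAX_PENDING 32   /* illustrative batch size */

    typedef struct SPacket_ {
        struct SPacket_ *next;
        struct SPool_   *pool;          /* owning ("home") pool */
    } SPacket;

    typedef struct SPool_ {
        pthread_mutex_t return_mutex;
        SPacket        *return_stack;   /* packets returned by other threads */
    } SPool;

    /* per freeing thread: the one remote pool currently being batched for */
    static __thread SPool   *pending_pool;
    static __thread SPacket *pending_head, *pending_tail;
    static __thread int      pending_count;

    static void PoolReturnLocked(SPool *pool, SPacket *head, SPacket *tail)
    {
        pthread_mutex_lock(&pool->return_mutex);
        tail->next = pool->return_stack;    /* push the whole chain at once */
        pool->return_stack = head;
        pthread_mutex_unlock(&pool->return_mutex);
    }

    static void PacketFreeSketch(SPacket *p)
    {
        SPool *owner = p->pool;

        if (pending_pool == NULL)
            pending_pool = owner;           /* first free picks the pending pool */

        if (owner == pending_pool) {
            p->next = pending_head;
            pending_head = p;
            if (pending_tail == NULL)
                pending_tail = p;
            if (++pending_count >= MAX_PENDING) {
                PoolReturnLocked(owner, pending_head, pending_tail);
                pending_head = pending_tail = NULL;
                pending_count = 0;
                pending_pool = NULL;        /* next free picks a new pending pool */
            }
        } else {
            PoolReturnLocked(owner, p, p);  /* not the pending pool: return now */
        }
    }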
Ken Steele [Fri, 28 Mar 2014 18:51:25 +0000 (14:51 -0400)]
Replace ringbuffer in Packet Pool with a stack for better cache locality
Using a stack for free Packet storage causes recently freed Packets to
be reused quickly, while their data is more likely to still be in
cache.
The new structure has a per-thread private stack for allocating Packets
which does not need any locking. Since Packets can be freed by any thread,
there is a second stack (return stack) for freeing packets by other threads.
The return stack is protected by a mutex. Packets are moved from the return
stack to the private stack when the private stack is empty.
Returning packets back to their "home" stack keeps the stacks from getting out
of balance.
The PacketPoolInit() function is now called by each thread that will be
allocating packets. Each thread allocates max_pending_packets, which is a
change from before, where that was the total number of packets across all
threads.
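The get path of the two-stack design, as a simplified sketch (the real PktPool has more fields plus a fallback allocation path):

    #include <pthread.h>
    #include <stddef.h>

    /* Sketch only: per-thread private stack plus a mutex-protected return
     * stack. Types and names are illustrative. */
    typedef struct QPacket_ {
        struct QPacket_ *next;
    } QPacket;

    typedef struct QPool_ {
        QPacket        *private_stack;   /* touched only by the owning thread */
        pthread_mutex_t return_mutex;
        QPacket        *return_stack;    /* filled by other threads */
    } QPool;

    static QPacket *PacketPoolGetSketch(QPool *pool)
    {
        QPacket *p = pool->private_stack;
        if (p == NULL) {
            /* private stack empty: grab the whole return stack in a single
             * locked operation, then continue lock-free */
            pthread_mutex_lock(&pool->return_mutex);
            pool->private_stack = pool->return_stack;
            pool->return_stack = NULL;
            pthread_mutex_unlock(&pool->return_mutex);
            p = pool->private_stack;
        }
        if (p != NULL)
            pool->private_stack = p->next;   /* pop; LIFO keeps data cache-warm */
        return p;                            /* NULL if both stacks were empty */
    }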