EngineAnalysisRules2 was in a strange location where it did not respect
the --engine-analysis flag. It has been moved to the same call location
as EngineAnalysisRules.
Victor Julien [Fri, 17 Aug 2018 15:53:16 +0000 (17:53 +0200)]
http: implement min size stream logic
Update HTTP parser to set the min inspect depth per transaction. This
allows signatures to have their fast_pattern in the HTTP body,
while still being able to inspect the raw stream reliably with it.
The inspect depth is set per transaction as it:
- depends on the per-personality config for the min inspect size
- is set to the size of the actual body if it is smaller
After the initial inspection is done, it is set to 0 which disables
the feature for the rest of the transaction.
This removes the rescanning flush logic in commit 7e004f52c60c5e4d7cd8f5ed09491291d18f42d2 and provides an alternative
fix for bug #2522. The old approach caused too much rescanning of
HTTP body data leading to a performance degradation.
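A minimal sketch of the per-transaction depth calculation described
above (function and field names here are placeholders, not the actual
libhtp/Suricata identifiers):
    /* sketch: per-tx minimum inspect depth, illustrative names only */
    static void SetTxMinInspectDepth(HtpTxUserData *tx_ud,
                                     uint32_t cfg_min_inspect_size,
                                     uint64_t body_size)
    {
        uint32_t depth = cfg_min_inspect_size;   /* per-personality setting */
        if (body_size < (uint64_t)depth)
            depth = (uint32_t)body_size;         /* body smaller than the minimum */
        tx_ud->min_inspect_depth = depth;        /* later reset to 0 to disable */
    }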
Victor Julien [Fri, 17 Aug 2018 08:41:51 +0000 (10:41 +0200)]
stream: introduce min inspect depth logic
Some rules need to inspect both raw stream data and higher level
buffers together. When this higher level buffer is a streaming
buffer itself, the risk of mismatch exists.
This patch allows an app-layer parser to set a 'min inspect depth'.
The value is used by the stream engine to keep at least this
depth worth of data, so that the detection engine can request
all of it for inspection.
For rules that have the SIG_FLAG_FLUSH flag set, data is inspected
not from offset raw_progress, but from raw_progress minus
min_inspect_depth.
At this time this is only used for sigs that have their fast_pattern
in an HTTP body and also have a raw stream match.
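A minimal sketch of the resulting offset calculation, assuming
raw_progress and min_inspect_depth are stream offsets (the surrounding
code is illustrative):
    /* sketch: for sigs with SIG_FLAG_FLUSH start inspection earlier */
    uint64_t inspect_offset = raw_progress;
    if ((s->flags & SIG_FLAG_FLUSH) && min_inspect_depth > 0) {
        inspect_offset = (inspect_offset > min_inspect_depth) ?
            inspect_offset - min_inspect_depth : 0;
    }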
Jason Ish [Thu, 13 Sep 2018 19:09:20 +0000 (13:09 -0600)]
defrag: remove fragments that have complete overlap
Instead of just marking fragments that have been completely
overlapped and won't be part of the assembled packet, remove
them from the fragment tree when detected.
Victor Julien [Mon, 27 Aug 2018 06:11:54 +0000 (08:11 +0200)]
streaming: use rbtree for stream blocks
Switch StreamBufferBlocks implementation to use RBTREE instead of
a list. This makes inserts/removals and lookups a lot cheaper if
the number of data gaps is large.
Use separate compare functions for inserts and regular lookups.
Inserts care about the offset, while lookups care about the block's
right edge as well.
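A sketch of what the two compare functions might look like, assuming
the BSD sys/tree.h RB macros and the StreamingBufferBlock fields
'offset' and 'len' (treat the snippet as illustrative):
    /* insert compare: order purely by starting offset */
    static int InsertCmp(const StreamingBufferBlock *a, const StreamingBufferBlock *b)
    {
        return (a->offset > b->offset) - (a->offset < b->offset);
    }
    /* lookup compare: a key offset inside [offset, offset+len) counts as
       a hit, so the block's right edge matters as well */
    static int LookupCmp(const StreamingBufferBlock *key, const StreamingBufferBlock *b)
    {
        if (key->offset < b->offset)
            return -1;
        if (key->offset >= b->offset + b->len)
            return 1;
        return 0;
    }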
Victor Julien [Sun, 26 Aug 2018 08:14:18 +0000 (10:14 +0200)]
stream/sack: turn SACK record list into rbtree
Convert to rbtree from linked list. These ranges, of which there can
be multiple per packet, are fully controlled by an attacker. The
attacker could craft a stream of packets in such a way that the list
would grow very large. This would make inserts/removals very expensive,
as would the list walk done during size calculation and pruning
operations.
The RBTREE makes inserts/removals much cheaper, at a slight overhead
for 'normal' operations and slightly higher per record memory use.
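In terms of data structures the change boils down to replacing the
list link in the SACK record with an rbtree entry, roughly (assuming
sys/tree.h; field names illustrative):
    #include <sys/tree.h>
    typedef struct StreamTcpSackRecord {
        uint32_t le;                        /* left edge of the SACKed range  */
        uint32_t re;                        /* right edge of the SACKed range */
        RB_ENTRY(StreamTcpSackRecord) rb;   /* replaces the old 'next' pointer */
    } StreamTcpSackRecord;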
Victor Julien [Mon, 27 Aug 2018 20:55:19 +0000 (22:55 +0200)]
stream/segments: speed up inserts
Don't try to do a 'fast path' by checking RB_MAX. RB_MAX walks the
tree, which means it can be quite expensive. This cost would be paid
for virtually every data segment. The actual insert that follows would
walk the tree again.
Instead, simply insert it. There is a slight cost from the unnecessary
overlap check, but this is much less than the tree walk in a full
tree.
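In sys/tree.h terms the change is roughly the following (sketch, not
the literal diff; TCPSEG and seg_tree are used as illustrative names):
    /* sketch: no RB_MAX 'fast path'; insert directly and let the duplicate
       return value signal an exact match */
    TcpSegment *dup = RB_INSERT(TCPSEG, &stream->seg_tree, seg);
    if (dup != NULL) {
        /* segment with an identical key already present: handled as an
           overlap, the new segment is not added a second time */
    }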
Victor Julien [Mon, 27 Aug 2018 10:26:11 +0000 (12:26 +0200)]
stream/segments: optimize overlap tree operations
Now that the RBTREE gives us a properly sorted segment tree, where
segments with an identical SEQ are ordered by payload_len from smallest
to largest, we can avoid walking backwards when checking for overlaps.
The direct RB_PREV either overlaps or it does not, and that verdict is
reliable for the rest of the tree.
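A sketch of the left-side overlap check this enables (SEQ_GT and
TCP_SEG_LEN follow Suricata's naming, but treat the snippet as
illustrative):
    /* sketch: with the tree ordered by seq, then length, only the direct
       predecessor can tell us whether anything to the left overlaps */
    TcpSegment *prev = RB_PREV(TCPSEG, &stream->seg_tree, seg);
    if (prev != NULL && SEQ_GT(prev->seq + TCP_SEG_LEN(prev), seg->seq)) {
        /* predecessor reaches into this segment: resolve the overlap */
    }
    /* if prev does not overlap, no earlier node can either */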
Victor Julien [Thu, 23 Aug 2018 15:27:08 +0000 (17:27 +0200)]
stream/segments: turn linked list into rbtree
To improve worst case performance turn the segments list into a rbtree.
This greatly improves inserts, lookups and removals if the number of
segments gets very large.
The tree is sorted by the segment sequence number as its primary key.
If 2 segments have the same seq, the payload_len (segment length) is
used, with the larger segment placed after the smaller one. Exact
matches (same seq and length) are not added to the tree.
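A compare function implementing that ordering could look roughly like
this (sketch; the real code also has to deal with sequence number
wraparound, hinted at here via SEQ_GT/SEQ_LT):
    static int TcpSegmentCmp(const TcpSegment *a, const TcpSegment *b)
    {
        if (SEQ_LT(a->seq, b->seq))
            return -1;
        if (SEQ_GT(a->seq, b->seq))
            return 1;
        /* same seq: smaller payload first; 0 means exact duplicate,
           which is not inserted into the tree */
        if (a->payload_len < b->payload_len)
            return -1;
        if (a->payload_len > b->payload_len)
            return 1;
        return 0;
    }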
Mats Klepsland [Tue, 28 Aug 2018 20:46:26 +0000 (22:46 +0200)]
lua: add function 'TlsGetVersion'
Add another function to get TLS version, since 'TlsGetCertInfo' only
works when a TLS session contains a clear text certificate, which is
not the case in TLSv1.3 or when a session is resumed.
Victor Julien [Wed, 15 Aug 2018 10:28:52 +0000 (12:28 +0200)]
detect: fix file_data detect issue with alert ip
Fix mpm progress being updated by irrelevant engines. Especially in
the case of file_data engines, a signature can contain multiple
versions of the same engine, registered for different 'progress'
values. This would lead to signatures being considered 'can't match'
even in cases where they clearly could still match.
Only consider those progress values that apply to the protocol in
use.
Victor Julien [Tue, 14 Aug 2018 08:17:37 +0000 (10:17 +0200)]
detect/filehash: try to open data file from rulefile dir
If the data file can't be found in the default location, which
normally is 'default-rule-path', try to see if it can be found
in the path of the rule file that references it.
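A hypothetical sketch of the fallback (paths and variable names made
up for illustration):
    /* sketch: try the default rule path first, then the directory of the
       rule file that referenced the hash list */
    char path[PATH_MAX];
    snprintf(path, sizeof(path), "%s/%s", default_rule_path, filename);
    FILE *fp = fopen(path, "r");
    if (fp == NULL && rule_file_dir != NULL) {
        snprintf(path, sizeof(path), "%s/%s", rule_file_dir, filename);
        fp = fopen(path, "r");
    }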
Victor Julien [Thu, 9 Aug 2018 22:06:24 +0000 (00:06 +0200)]
detect/prefilter: speed up setup
If the global detect.prefilter.default setting is not "auto", it is
wasteful to run each prefilter setup routine. This patch tracks which
of the engines have been explicitly enabled in the rules and only
runs those.
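Conceptually the setup loop becomes something like this (illustrative
names only, not the actual registration code):
    /* sketch: skip prefilter setup for keywords nobody explicitly enabled
       when the global default is not "auto" */
    for (int i = 0; i < num_keywords; i++) {
        if (!prefilter_default_auto && !prefilter_requested[i])
            continue;                   /* no rule asked for this engine */
        if (keyword_table[i].SetupPrefilter != NULL)
            keyword_table[i].SetupPrefilter(de_ctx);
    }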
Victor Julien [Thu, 9 Aug 2018 15:35:32 +0000 (17:35 +0200)]
detect/prefilter: fix prefilter when setting is 'mpm'
When prefilter is not enabled globally, it is still possible to
enable it per signature. This was broken however, as the setup
code would never be called.
This commit always calls the setup code and lets it sort out
which signatures (if any) to enable prefiltering for.
Victor Julien [Thu, 9 Aug 2018 15:33:19 +0000 (17:33 +0200)]
detect/prefilter: fix alias for fast_pattern
If prefilter is used on a content keyword, it acts as a simple
fast_pattern statement. This was broken because the SIG_FLAG_PREFILTER
flag bypasses MPM for a sig. This commit fixes that by not setting
the flag when prefilter should act as fast_pattern.
These settings default to 32k, as quite a few existing rules need this.
At the same time, the 'raw stream' inspection uses its own limits. By
default it inspects the data in blocks of about 2.5k. This could lead
to a situation where rules would not match.
Sid 1 would only be inspected when the POST body reached the 32k limit
or when it was complete. The observed case shows the POST body to be
18k. Sid 2 is inspected as soon as the 2.5k limit is reached, and then
again for each 2.5k increment. This moves the raw stream tracker
forward. So by the time sid 1 is inspected, some 18/19k into the
stream, the raw stream tracker has already moved forward by roughly
17.5k, which can lead to the stream match of sid 1 not matching.
Since the body match is at the start of the buffer, it makes sense
that the body and stream are inspected together.
The body inspection uses a tracker 'body_inspected', that keeps track
of how far into the body both MPM and per signature inspection has
moved.
This patch updates the logic in 2 ways:
1. it triggers earlier HTTP body inspection, which is matched to the
stream inspection. When the detection engine finds it has stream
data available for inspection, it passes the new 'STREAM_FLUSH'
flag to the HTTP body inspection code, which will then do an
early inspection even if it is still before the min inspect size.
2. to still somewhat adhere to the min inspect size, the body
tracker is not updated until the min inspect size is reached.
This will lead to some re-evaluation of the same body data.
If raw stream reassembly is disabled, this 'STREAM_FLUSH' flag is
never set, and the old behavior is used.
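A sketch of the decision in points 1 and 2 above (STREAM_FLUSH comes
from the commit text; the rest of the names are illustrative):
    /* sketch: inspect early on a stream flush, but only move the
       body_inspected tracker once the min inspect size is reached */
    if (body_len >= min_inspect_size || (flags & STREAM_FLUSH)) {
        /* run MPM and per-signature inspection on the body here */
        if (body_len >= min_inspect_size)
            body_inspected = body_len;
        /* otherwise leave the tracker alone and accept re-evaluating
           the same body data on a later call */
    }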
Victor Julien [Mon, 30 Jul 2018 08:26:21 +0000 (10:26 +0200)]
stream: improve TCP CLOSED handling
Trigger app layer reassembly in both directions as soon as we've set
the TCP state to closed.
In IDS mode, if a toserver packet would close the state, the app layer
would not get updated until the next toclient packet. However, in
detection, the raw stream inspection would already use all available
stream data in detection and move the 'raw stream progress' tracker
forward. When a later packet (or packets) updated the app layer and
inspection ran on the app layer, the stream progress had already
moved too far forward. This would lead to signatures that match
on both stream and app layer not matching.
By triggering the app layer reassembly as soon as the TCP state is
set to closed, the inspection has both the stream and the app layer
data available at the same time, so these rules can match.
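In pseudocode terms the change is roughly the following (UpdateAppLayer
is a made-up helper name; this is not the literal patch):
    /* sketch: once the session is TCP_CLOSED, update the app layer for
       both directions right away instead of waiting for the next packet
       from the other side */
    if (ssn->state == TCP_CLOSED) {
        UpdateAppLayer(ssn, &ssn->client, p);   /* toserver direction */
        UpdateAppLayer(ssn, &ssn->server, p);   /* toclient direction */
    }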
Victor Julien [Tue, 7 Aug 2018 11:28:55 +0000 (13:28 +0200)]
flow: flag packets as established for async
If a stream is async we see only one side of the traffic. This would
lead to the flow engine not flagging packets as 'established' even
if the flow state was in fact established. The flow was tagged as
such by the TCP engine.
This patch considers the flow state for setting the packet flag.
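A minimal sketch of the check (FLOW_PKT_ESTABLISHED and
FLOW_STATE_ESTABLISHED follow Suricata's naming; the surrounding code
is illustrative):
    /* sketch: also honor the flow state, so async (one-sided) flows that
       the TCP engine already marked established get the packet flag too */
    if (f->flow_state == FLOW_STATE_ESTABLISHED) {
        p->flowflags |= FLOW_PKT_ESTABLISHED;
    }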
Giuseppe Longo [Mon, 12 Mar 2018 11:41:35 +0000 (12:41 +0100)]
app-layer-htp: close file with TRUNCATE state
When a file in the TOSERVER direction is being stored and the libhtp
or stream depth limit is reached, it is closed by HTPCallbackRequest
without setting any flags, so the file state ends up as CLOSED
instead of TRUNCATED.
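The fix amounts to closing with a truncation flag along these lines
(sketch; FileCloseFile and FILE_TRUNCATED follow the file API naming,
but the snippet is illustrative):
    /* sketch: when the depth limit cut the file short, close it as
       truncated instead of as a normal, complete close */
    uint16_t flags = 0;
    if (depth_limit_reached)
        flags |= FILE_TRUNCATED;
    FileCloseFile(files, NULL, 0, flags);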