dpdk: replace TSC clock with GetTime (gettimeofday) function
Getting the clock through the Time Stamp Counter (TSC) can be precise and
fast, but only for a short duration of time.
The implementation varies across CPUs. The original idea is that the
counter is incremented with every tick, so dividing the delta of CPU ticks
by the CPU frequency gives the time that has passed.
However, the CPU clock/frequency can change over time, resulting in uneven
incrementation of the TSC. On some CPUs this is compensated by extra logic.
As a result, time obtained through this method might drift from the real
time.
This commit therefore substitutes TSC time retrieval with the standard
system call wrapped in the GetTime function - on Linux it is gettimeofday.
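A minimal sketch of the approach, assuming an illustrative GetTime-style
wrapper (the names below are not the actual Suricata API): elapsed time is
taken from the system clock via gettimeofday, so it cannot drift with CPU
frequency changes the way a raw TSC delta can.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/time.h>

    /* Illustrative GetTime-style wrapper: microseconds since the epoch,
     * taken from the system clock instead of the CPU's TSC. */
    static uint64_t GetTimeUsec(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return (uint64_t)tv.tv_sec * 1000000ULL + (uint64_t)tv.tv_usec;
    }

    int main(void)
    {
        uint64_t start = GetTimeUsec();
        /* ... work to be timed ... */
        uint64_t elapsed = GetTimeUsec() - start;
        printf("elapsed: %" PRIu64 " usec\n", elapsed);
        return 0;
    }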
Victor Julien [Tue, 25 Jun 2024 08:35:35 +0000 (10:35 +0200)]
smb/ntlmssp: improve version check
Don't assume the ntlmssp version field is always present if the flag is
set. Instead keep track of the offsets of the data of the various blobs
and see if there is space for the version.
filestore: do not try to store a file set to nostore
Ticket: 6390
This can happen with the keyword filestore:both,flow.
If one direction does not have a signature group with a filestore,
the file is set to nostore on opening, until a signature in
the other direction tries to set it to store.
Subsequent files will be stored in both directions as flow flags
are now set.
DETECT_FLOWBITS_CMD_NOALERT is misleading, as it gives the impression that
noalert is a flowbit-specific command that will be used and dealt with at
some point. In reality, as soon as noalert is found in the rule language,
the signature's noalert flag is set and control is returned. It never gets
added to the cmd of the flowbits object.
Victor Julien [Tue, 4 Jun 2024 12:43:22 +0000 (14:43 +0200)]
defrag: don't use completed tracker
When a Tracker is set up for an IPID, frags come in for it, and it is
reassembled and complete, the `DefragTracker::remove` flag is set. This
is meant to tell the hash cleanup code to recycle the tracker and to let
the lookup code skip the tracker during lookup.
A logic error led to the following scenario:
1. there are sufficient frag trackers to make sure the hash table is
filled with trackers
2. frags for a Packet with IPID X are processed correctly (X1)
3. frags for a new Packet that also has IPID X come in quickly after the
first (X2).
4. during the lookup, the frag for X2 hashes to a hash row that holds
more than one tracker
5. as the trackers in the hash row are evaluated, the lookup finds the
tracker for X1, but since the `remove` bit is not checked, it is returned
as the tracker for X2.
6. reassembly fails, as the tracker is already complete
The logic error is that the `remove` bit was only checked for the first
tracker in a hash row, leading to reuse of a closed tracker if there were
more trackers in the row.
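A simplified sketch of the corrected lookup, with hypothetical struct and
field names standing in for the real defrag code: the `remove` flag is
checked for every tracker in the hash row, not only for the first one.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical, simplified tracker; the real DefragTracker has more state. */
    struct Tracker {
        uint16_t ipid;
        bool remove;            /* set when the tracker is complete/recyclable */
        struct Tracker *next;   /* next tracker in the same hash row */
    };

    /* Walk the whole hash row and skip any tracker marked for removal,
     * so a completed tracker for the same IPID is never returned. */
    static struct Tracker *RowLookup(struct Tracker *row_head, uint16_t ipid)
    {
        for (struct Tracker *t = row_head; t != NULL; t = t->next) {
            if (t->remove)
                continue;       /* previously only checked for the first entry */
            if (t->ipid == ipid)
                return t;
        }
        return NULL;            /* caller sets up a new tracker */
    }

    int main(void)
    {
        struct Tracker other = { .ipid = 7,  .remove = false, .next = NULL };
        struct Tracker done  = { .ipid = 42, .remove = true,  .next = &other };
        /* The completed tracker for IPID 42 is skipped: lookup returns NULL,
         * so the caller sets up a fresh tracker for the new packet. */
        return RowLookup(&done, 42) == NULL ? 0 : 1;
    }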
Jason Ish [Thu, 16 May 2024 16:42:53 +0000 (10:42 -0600)]
rust: rename .cargo/config to .cargo/config.toml
Addresses this warning from the Rust compiler:
warning: `../rust/.cargo/config` is deprecated in favor of `config.toml`
note: if you need to support cargo 1.38 or earlier, you can symlink `config` to `config.toml`
(cherry picked from commit 8560564657735a4c22004d51db9775ca2f1d9645)
Eric Leblond [Wed, 8 Nov 2023 20:18:33 +0000 (21:18 +0100)]
profiling: add option to activate rules profiling at start
When replaying a pcap file, it is not possible to get rules
profiling because it has to be activated from the unix socket.
This patch adds a new option to activate profiling collection at start,
so a pcap run can get rules profiling information.
Eric Leblond [Sun, 15 Oct 2023 13:39:40 +0000 (15:39 +0200)]
eve: revert ethernet addresses when needed
EVE logging has a direction parameter that can cause the logging of an
application layer to be done in a direction that is not linked to the
packet. As a result, the source IP address could be assigned the MAC
address of the destination IP and vice versa.
This patch addresses this by propagating the direction to the ethernet
logging function and using it there to define the correct mapping.
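A hypothetical sketch of the direction-aware mapping (the types and the
function below are illustrative, not the actual EVE logger): when the log
direction is the reverse of the packet direction, the source and
destination MAC addresses are swapped so they stay consistent with the
logged IP addresses.

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical, simplified types; the real logger works on Suricata's
     * Packet and its JSON output objects. */
    struct EthHdr {
        uint8_t src[6];
        uint8_t dst[6];
    };

    enum LogDirection { LOG_DIR_PACKET, LOG_DIR_REVERSED };

    /* Pick the MAC addresses to log. If the log direction is the reverse of
     * the packet direction, swap source and destination so the MACs stay
     * consistent with the logged IP addresses. */
    static void LogEthernet(const struct EthHdr *eth, enum LogDirection dir,
                            uint8_t out_src[6], uint8_t out_dst[6])
    {
        if (dir == LOG_DIR_REVERSED) {
            memcpy(out_src, eth->dst, 6);
            memcpy(out_dst, eth->src, 6);
        } else {
            memcpy(out_src, eth->src, 6);
            memcpy(out_dst, eth->dst, 6);
        }
    }

    int main(void)
    {
        struct EthHdr eth = { .src = {1,2,3,4,5,6}, .dst = {7,8,9,10,11,12} };
        uint8_t s[6], d[6];
        LogEthernet(&eth, LOG_DIR_REVERSED, s, d);
        return s[0] == 7 ? 0 : 1;  /* reversed: logged src MAC is the packet's dst MAC */
    }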
Victor Julien [Wed, 29 May 2024 05:03:24 +0000 (07:03 +0200)]
threads: give threads more time to get ready
In certain conditions, it can take a long time for threads to start up.
For example in af-packet, setting up the socket, rings, etc has been
observed to take close to half a second per thread, and since the
threads go one by one in a preset order, this means the start up can
take a lot of time if there are many threads. The old logic just allowed
a hard-coded 60s. This was not always enough when the number of
threads was high.
This patch makes the wait time take the number of threads into account.
It adds a second of time budget to the base 60s for each thread.
So as an example, if a system has 112 af-packet threads, it would wait
172 seconds (60 + 112) for the threads to get ready.
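A small sketch of the time budget described above, with an illustrative
helper name: 60 seconds of base budget plus one extra second per thread.

    #include <stdio.h>

    /* Illustrative computation of the startup time budget described above:
     * a base of 60 seconds plus one extra second per thread. */
    static unsigned int ThreadWaitBudget(unsigned int num_threads)
    {
        return 60 + num_threads;
    }

    int main(void)
    {
        printf("112 af-packet threads -> %u seconds\n", ThreadWaitBudget(112)); /* 172 */
        return 0;
    }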
Victor Julien [Mon, 27 May 2024 15:12:09 +0000 (17:12 +0200)]
threads: optimize start up check
When starting a large number of threads, the loop was inefficient. It
would loop over the threads and if one wasn't yet ready it would sleep a
bit and then reevaluate all the threads. This reevaluation of threads
already checked was inefficient, and could lead to the time budget
running out.
This patch splits the check, and keeps track of the threads that have
already passed. This avoids the rescanning of already checked threads.
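A simplified sketch of the idea, using hypothetical types: threads that
have already been seen as ready are remembered, so later passes do not
re-evaluate them.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical thread view; the real code inspects per-thread state. */
    struct Thread {
        bool ready;
    };

    /* Return true once every thread is ready. Threads already seen as ready
     * are remembered in `checked`, so the readiness check runs at most once
     * per thread across all passes. */
    static bool AllThreadsReady(struct Thread *threads, bool *checked, size_t n)
    {
        size_t ready = 0;
        for (size_t i = 0; i < n; i++) {
            if (checked[i]) {
                ready++;
                continue;
            }
            if (threads[i].ready) {
                checked[i] = true;
                ready++;
            }
        }
        return ready == n;
    }

    int main(void)
    {
        struct Thread t[3] = { { true }, { false }, { true } };
        bool checked[3] = { false, false, false };
        (void)AllThreadsReady(t, checked, 3);   /* threads 0 and 2 pass here */
        t[1].ready = true;                      /* thread 1 becomes ready    */
        return AllThreadsReady(t, checked, 3) ? 0 : 1;
    }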
Shivani Bhardwaj [Wed, 28 Feb 2024 14:29:04 +0000 (19:59 +0530)]
detect/port: remove SigGroupHead* ops
The functions in detect-engine-port.c are only used during the initial
parsing of the ports from rules. Since there are no SGHs at that point,
remove the ops related to them too.
Shivani Bhardwaj [Mon, 25 Mar 2024 13:38:31 +0000 (19:08 +0530)]
detect/port: handle range and upper boundary ports
So far, if a port was found to be a single port that was earlier part of
a range, port + 1 was added to the list to honor the range it was a part
of. But this is incorrect if the port is 65535, or if the port turns out
to be part of a range when it was earlier a single port.
If a port point is single but later also part of a range, only the port
groups for the single points end up being created, and not the range.
Fix it by adding the port next to the current single one to the unique
points and marking it as a range port.
Victor Julien [Mon, 26 Feb 2024 16:08:21 +0000 (21:38 +0530)]
detect/port: use qsort instead of insert sort
Instead of using an in-place insertion sort on the linked list based on
two keys, convert the linked list to an array, sort it using qsort and
convert it back to a linked list. This turns out to be much faster.
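A sketch of the conversion under simplified assumptions (the PortItem type
and its two keys are illustrative stand-ins for the real DetectPort list):
copy the list into an array of pointers, qsort it on both keys, then
relink the list in sorted order.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical, simplified port element with the two sort keys. */
    typedef struct PortItem {
        uint16_t port;          /* primary key: range start */
        uint16_t port2;         /* secondary key: range end */
        struct PortItem *next;
    } PortItem;

    static int PortCompare(const void *a, const void *b)
    {
        const PortItem *pa = *(PortItem *const *)a;
        const PortItem *pb = *(PortItem *const *)b;
        if (pa->port != pb->port)
            return pa->port < pb->port ? -1 : 1;
        if (pa->port2 != pb->port2)
            return pa->port2 < pb->port2 ? -1 : 1;
        return 0;
    }

    /* Copy the list into an array of pointers, qsort it, then relink. */
    static PortItem *PortListSort(PortItem *head, size_t count)
    {
        if (head == NULL || count < 2)
            return head;
        PortItem **arr = malloc(count * sizeof(*arr));
        if (arr == NULL)
            return head;        /* fall back to the unsorted list */
        size_t i = 0;
        for (PortItem *p = head; p != NULL; p = p->next)
            arr[i++] = p;
        qsort(arr, count, sizeof(*arr), PortCompare);
        for (i = 0; i < count - 1; i++)
            arr[i]->next = arr[i + 1];
        arr[count - 1]->next = NULL;
        head = arr[0];
        free(arr);
        return head;
    }

    int main(void)
    {
        PortItem c = { 80, 80, NULL }, b = { 53, 53, &c }, a = { 443, 443, &b };
        PortItem *sorted = PortListSort(&a, 3);
        return sorted->port == 53 ? 0 : 1;
    }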
Shivani Bhardwaj [Wed, 21 Feb 2024 06:42:30 +0000 (12:12 +0530)]
detect/port: merge port ranges for same signatures
To avoid getting multiple entries in the final port list, and to make the
next step more efficient by reducing the number of items to traverse.
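An illustrative sketch of the merge step, using a hypothetical
representation where each small range carries an identifier for its set of
signature groups: adjacent ranges with the same groups are collapsed into
one.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical small range: [start, end] plus an identifier for the set
     * of signature groups attached to it. */
    struct SmallRange {
        uint16_t start;
        uint16_t end;
        uint32_t sgh_id;        /* stands in for "same signatures" comparison */
    };

    /* Merge adjacent ranges that carry the same signature groups, in place.
     * Returns the new number of ranges. Input must be sorted by start port. */
    static size_t MergeSameSgh(struct SmallRange *r, size_t n)
    {
        if (n == 0)
            return 0;
        size_t out = 0;
        for (size_t i = 1; i < n; i++) {
            if (r[i].sgh_id == r[out].sgh_id && r[i].start == r[out].end + 1) {
                r[out].end = r[i].end;          /* extend the previous range */
            } else {
                r[++out] = r[i];                /* keep as a separate range */
            }
        }
        return out + 1;
    }

    int main(void)
    {
        /* [3,7] and [8,8] carry the same groups, so they merge into [3,8]. */
        struct SmallRange r[] = { {1,2,1}, {3,7,3}, {8,8,3}, {9,94,2} };
        return MergeSameSgh(r, 4) == 3 ? 0 : 1;
    }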
Shivani Bhardwaj [Tue, 20 Feb 2024 16:22:38 +0000 (21:52 +0530)]
detect/port: create list of small port ranges
Using the unique port points, create a list of small port ranges that
contain the DetectPort objects and their designated SGHs, found by
checking the overlaps with the existing ports and copying the SGHs
accordingly.
Shivani Bhardwaj [Fri, 16 Feb 2024 09:18:46 +0000 (14:48 +0530)]
detect/port: create a tree of given ports
After all the SGHs have been appropriately copied to the designated
ports, create an interval tree out of them for a faster lookup when a
search for overlaps is made later.
Shivani Bhardwaj [Fri, 16 Feb 2024 08:57:52 +0000 (14:27 +0530)]
detect/port: find unique port points
In order to create the smallest possible port ranges, it is convenient
to first have a list of unique ports. Then, the work becomes simple. See
below:
Given, a port range P1 = [1, 8]; SGH1
and another, P2 = [3, 94]; SGH2
right now, the code follows a logic of recursively cutting port ranges
until the small ranges are created. But, with the help of unique port
points, we get: unique_port_points = [1, 3, 8, 94]
So, in a later stage, we can create the ranges as
[1, 2], [3, 7], [8, 8], [9, 94] and copy the designated SGHs where they
belong. Note that the intervals are closed, which means that a range is
inclusive of both of its points.
The final result becomes:
1. [1, 2]; SGH1
2. [3, 7]; SGH1 + SGH2
3. [8, 8]; SGH1 + SGH2
4. [9, 94]; SGH2
There would be 3 unique rule groups made for the case above, listed
below; a sketch of the unique point collection follows the list.
Group 1: [1, 2]
Group 2: [3, 7], [8, 8]
Group 3: [9, 94]
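A small sketch of the unique point collection under simplified assumptions
(plain arrays instead of the real DetectPort structures): collect the low
and high point of every given range, then sort and de-duplicate them; the
later stages cut the small ranges at these points.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    struct Range {
        uint16_t low;
        uint16_t high;
    };

    static int U16Compare(const void *a, const void *b)
    {
        uint16_t ua = *(const uint16_t *)a, ub = *(const uint16_t *)b;
        return (ua > ub) - (ua < ub);
    }

    /* Collect the low and high point of every range, sort them and drop
     * duplicates. `points` must have room for 2 * n entries; the number of
     * unique points is returned. */
    static size_t UniquePortPoints(const struct Range *r, size_t n, uint16_t *points)
    {
        size_t cnt = 0;
        for (size_t i = 0; i < n; i++) {
            points[cnt++] = r[i].low;
            points[cnt++] = r[i].high;
        }
        qsort(points, cnt, sizeof(points[0]), U16Compare);
        size_t unique = 0;
        for (size_t i = 0; i < cnt; i++) {
            if (unique == 0 || points[unique - 1] != points[i])
                points[unique++] = points[i];
        }
        return unique;
    }

    int main(void)
    {
        /* P1 = [1, 8] and P2 = [3, 94] give the points 1, 3, 8, 94. */
        struct Range ranges[] = { { 1, 8 }, { 3, 94 } };
        uint16_t points[4];
        size_t n = UniquePortPoints(ranges, 2, points);
        for (size_t i = 0; i < n; i++)
            printf("%u ", (unsigned)points[i]);
        printf("\n");
        return 0;
    }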
Warning was:
src/util-port-interval-tree.c:50:1: warning: Either the condition 'tmp!=NULL' is redundant or there is possible null pointer dereference: tmp. [nullPointerRedundantCheck]
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Assuming that condition 'tmp!=NULL' is not redundant
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Null pointer dereference
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: warning: Either the condition 'oleft!=NULL' is redundant or there is possible null pointer dereference: oleft. [nullPointerRedundantCheck]
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Assuming that condition 'oleft!=NULL' is not redundant
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Null pointer dereference
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: warning: Either the condition 'oright!=NULL' is redundant or there is possible null pointer dereference: oright. [nullPointerRedundantCheck]
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Assuming that condition 'oright!=NULL' is not redundant
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Null pointer dereference
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: warning: Either the condition 'left!=NULL' is redundant or there is possible null pointer dereference: left. [nullPointerRedundantCheck]
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Assuming that condition 'left!=NULL' is not redundant
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
src/util-port-interval-tree.c:50:1: note: Null pointer dereference
IRB_GENERATE(PI, SCPortIntervalNode, irb, SCPortIntervalCompareAndUpdate);
^
Shivani Bhardwaj [Fri, 16 Feb 2024 08:07:23 +0000 (13:37 +0530)]
util/interval-tree: add utility fns
Add new utility files to deal with the interval trees. These cover the
basic ops:
1. Creation/Destruction of the tree
2. Creation/Destruction of the nodes
It also adds support for finding overlaps for a given set of ports.
This function is used by the detection engine in Stage 2 of signature
preparation.
Shivani Bhardwaj [Mon, 29 Jan 2024 06:08:51 +0000 (11:38 +0530)]
interval-tree: add augmentation fns to the tree
An interval tree uses a red-black tree as its base data structure and
follows all the properties of a usual red-black tree. The additional
params are:
1. An interval such as [low, high] per node.
2. A max attribute per node. This attribute stores the maximum high
value of any subtree rooted at this node.
At any point in time, an inorder traversal of an interval tree should
give the port ranges sorted by the low key in ascending order.
This commit modifies the IRB_AUGMENT macro and its call sites to make
sure that on every insertion, the max attribute of the tree is properly
updated.
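A simplified sketch of the augmentation, not the actual IRB_* macros: the
max attribute is recomputed from the node's own high value and its
children's max values, and an overlap search can then skip any subtree
whose max is below the query's low bound.

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified interval tree node; the real tree embeds red-black
     * bookkeeping via the IRB_* macros. */
    struct Node {
        uint16_t low, high;     /* closed interval [low, high] */
        uint16_t max;           /* max high value in the subtree rooted here */
        struct Node *left, *right;
    };

    static uint16_t Max2(uint16_t a, uint16_t b) { return a > b ? a : b; }

    /* What the augmentation step has to keep true after every insertion
     * (and after rotations): max = max(high, left->max, right->max). */
    static void UpdateMax(struct Node *n)
    {
        n->max = n->high;
        if (n->left != NULL)
            n->max = Max2(n->max, n->left->max);
        if (n->right != NULL)
            n->max = Max2(n->max, n->right->max);
    }

    /* Overlap search: two closed intervals overlap if each starts before the
     * other ends. The max attribute lets the search skip a left subtree whose
     * max is below the query's low bound. */
    static const struct Node *FindOverlap(const struct Node *n,
                                          uint16_t low, uint16_t high)
    {
        while (n != NULL) {
            if (n->low <= high && low <= n->high)
                return n;
            if (n->left != NULL && n->left->max >= low)
                n = n->left;
            else
                n = n->right;
        }
        return NULL;
    }

    int main(void)
    {
        struct Node l = { 1, 2, 2, NULL, NULL };
        struct Node r = { 9, 94, 94, NULL, NULL };
        struct Node root = { 3, 8, 0, &l, &r };
        UpdateMax(&root);                     /* root.max becomes 94 */
        return FindOverlap(&root, 80, 80) == &r ? 0 : 1;
    }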
Victor Julien [Fri, 12 Jan 2024 07:03:06 +0000 (12:33 +0530)]
detect/engine: fix whitelisting check
In commit 4a00ae607, the whitelisting check was updated in a quest to
make better use of the conditional, but it made things worse: every range
would be whitelisted as long as it had any of the default whitelisted
ports, which is very common.
Philippe Antoine [Thu, 18 Apr 2024 09:54:34 +0000 (11:54 +0200)]
detect: use direction-based tx for app-layer logging
When we only have stream matches.
Ticket: 6846
This solves the case where another transaction was created
by parsing data in the other direction, before running the
detection.
For example:
1. get data in direction 1
2. acked data: parse it, but do not run detection in dir 1
3. other data in direction 2
4. other data acked: parse it and create a new tx,
then run detection for direction 1 with the data from the first packet
Callers of these allocators often use ``sc_errno`` to provide context
for the error. In the case of the above bug, they return ``sc_errno``,
but as it has not been set, it is still ``0``, i.e. ``SC_OK``.
This patch simply sets this variable to ensure there is context provided
upon error.
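An illustrative sketch of the pattern (the error constant and the wrapper
name are stand-ins, not the exact Suricata definitions): the allocator
sets ``sc_errno`` before returning NULL, so callers that return
``sc_errno`` on failure do not propagate ``SC_OK``.

    #include <stdlib.h>

    /* Illustrative error codes and error variable; the real definitions
     * live in Suricata's util-error.h, where sc_errno is per-thread. */
    enum { SC_OK = 0, SC_ENOMEM = 1 };
    static int sc_errno = SC_OK;

    /* Allocator wrapper: on failure, record why before returning NULL, so a
     * caller that does `return sc_errno;` does not accidentally return SC_OK. */
    static void *AllocWrapper(size_t size)
    {
        void *ptr = malloc(size);
        if (ptr == NULL) {
            sc_errno = SC_ENOMEM;   /* illustrative error constant */
            return NULL;
        }
        return ptr;
    }

    int main(void)
    {
        void *p = AllocWrapper(64);
        free(p);
        return sc_errno;
    }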
Victor Julien [Tue, 21 May 2024 12:13:11 +0000 (14:13 +0200)]
pcap-log: use correct pkthdr size for limit enforcement
The on-disk pcap pkthdr is 16 bytes. This was calculated using
`sizeof(struct pcap_pkthdr)`, which is 24 bytes on 64 bit Linux. On
macOS it is even worse, as a comment field grows the struct to 280
bytes.
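For reference, the on-disk per-packet record header of the classic pcap
file format is four 32-bit fields, 16 bytes in total, independent of the
platform's ``struct pcap_pkthdr``; a sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* On-disk pcap per-packet record header: four 32-bit fields, 16 bytes,
     * regardless of what sizeof(struct pcap_pkthdr) is on the platform. */
    struct PcapRecordHeader {
        uint32_t ts_sec;        /* timestamp seconds */
        uint32_t ts_usec;       /* timestamp microseconds */
        uint32_t incl_len;      /* number of packet bytes saved in the file */
        uint32_t orig_len;      /* original length of the packet on the wire */
    };

    int main(void)
    {
        /* Use this constant size for file size limit accounting. */
        printf("on-disk record header: %zu bytes\n", sizeof(struct PcapRecordHeader));
        return 0;
    }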
Victor Julien [Mon, 20 May 2024 20:09:06 +0000 (22:09 +0200)]
time: only consider packet threads
In offline mode, a timestamp is kept per thread, and the lowest
timestamp of the active threads is used. This was also considering the
non-packet threads, which could lead to the used timestamp being further
behind that needed. This would happen at the start of the program, as
the non-packet threads were set up the same way as the packet threads.
This patch no longer sets up the timestamp for non-packet threads, and
no longer considers non-packet threads during timestamp retrieval.
Jeff Lucovsky [Sat, 16 Mar 2024 12:58:11 +0000 (08:58 -0400)]
profiling/rules: Improve dynamic rule handling
Issue: 6861
Without this commit, disabling rule profiling via suricatasc's command
'ruleset-profile-stop' may crash because profiling_rules_entered becomes
negative.
This can happen because
- There can be multiple rules evaluated for a single packet
- Each rule is profiled individually.
- Starting profiling is gated by a configuration setting and rule
profiling being active
- Ending profiling is gated by the same configuration setting and
whether the packet was marked as profiling.
The crash can occur when a rule is being profiled and rule profiling
is then disabled after at least one rule was profiled for the packet
(which marks the packet as being profiled).
In this scenario, the value of profiling_rules_entered was not
incremented, so the BUG_ON in the end profiling macro trips because it
is 0.
The changes to fix the problem are:
- In the profiling end macro, gate the actions taken there by the same
configuration setting and use profiling_rules_entered (instead of
the per-packet profiling flag). Since the start and end macros are
tightly coupled, this will permit profiling to "finish" if started
(see the sketch after this list).
- Modify SCProfileRuleStart to only check the sampling values if the
packet hasn't been marked for profiling already. This change makes all
rules for a packet (once selected) be profiled (without this change,
sampling is applied to each *rule* that applies to the packet).
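A heavily simplified sketch of the balanced gating described above, using
illustrative functions and variables in place of the real macros: the
start side keeps profiling all rules of an already-marked packet, and the
end side is gated on the counter so it can never underflow.

    #include <assert.h>
    #include <stdbool.h>

    /* Illustrative state; the real code uses per-thread macros, a
     * configuration setting and a runtime toggle from the unix socket. */
    static bool profiling_rules_config_enabled = true;  /* set at startup */
    static bool profiling_rules_active = true;          /* runtime toggle */
    static int profiling_rules_entered = 0;

    struct Pkt { bool profiling; };

    static void ProfileRuleStart(struct Pkt *p)
    {
        if (!profiling_rules_config_enabled)
            return;
        /* Once a packet is selected for profiling, profile all its rules;
         * the runtime toggle/sampling is only consulted for new packets. */
        if (!p->profiling && !profiling_rules_active)
            return;
        p->profiling = true;
        profiling_rules_entered++;
    }

    static void ProfileRuleEnd(struct Pkt *p)
    {
        (void)p;
        if (!profiling_rules_config_enabled)
            return;
        /* Gate on the counter, not on the per-packet flag, so an end without
         * a matching start (profiling disabled mid-packet) is a no-op. */
        if (profiling_rules_entered > 0)
            profiling_rules_entered--;
    }

    int main(void)
    {
        struct Pkt p = { false };
        ProfileRuleStart(&p);             /* first rule: packet gets marked  */
        ProfileRuleEnd(&p);
        profiling_rules_active = false;   /* "ruleset-profile-stop"          */
        ProfileRuleStart(&p);             /* packet already marked: profiled */
        ProfileRuleEnd(&p);               /* counter-gated: never underflows */
        assert(profiling_rules_entered == 0);
        return 0;
    }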
Philippe Antoine [Fri, 17 May 2024 07:39:52 +0000 (09:39 +0200)]
http: fix nul deref on memcap reached
HttpRangeOpenFileAux may return NULL in different cases, including
when memcap is reached.
But its only caller did not check it before calling HttpRangeAppendData,
which would dereference the NULL value.
rust: return empty slice without using from_raw_parts
As this triggers, with rustc 1.78:
unsafe precondition(s) violated: slice::from_raw_parts requires
the pointer to be aligned and non-null,
and the total size of the slice not to exceed `isize::MAX`