Eric Leblond [Tue, 2 Feb 2016 22:44:24 +0000 (23:44 +0100)]
stream: fix depth reached detection
When a segment only partially fits within the stream depth, the stream
depth reached flag was not set, resulting in continued inspection of
the rest of the session.
By setting the stream depth reached flag when the segment partially
fits, we avoid re-entering this code and no longer take the code path
that left the flag unset.
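A minimal sketch of the corrected logic, using made-up names
(StreamDepthHandle, STREAM_DEPTH_REACHED) rather than Suricata's actual
stream structures:

    #include <stdint.h>

    #define STREAM_DEPTH_REACHED 0x01  /* hypothetical flag bit */

    struct Stream {
        uint64_t bytes_tracked; /* bytes accepted into the stream so far */
        uint8_t  flags;
    };

    /* Decide how many bytes of a new segment to accept under the depth
     * limit. The fix: flag the depth as reached on a partial fit too,
     * so later segments never re-enter this code path. */
    static uint32_t StreamDepthHandle(struct Stream *s, uint32_t seg_len,
                                      uint64_t depth)
    {
        if (depth == 0)                   /* 0 means unlimited */
            return seg_len;

        if (s->bytes_tracked + seg_len <= depth) {
            s->bytes_tracked += seg_len;  /* segment fits completely */
            return seg_len;
        }

        uint32_t accept = 0;
        if (s->bytes_tracked < depth)     /* segment fits only partially */
            accept = (uint32_t)(depth - s->bytes_tracked);

        s->bytes_tracked += accept;
        s->flags |= STREAM_DEPTH_REACHED; /* previously missed on partial fit */
        return accept;
    }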
Victor Julien [Sat, 14 May 2016 06:56:49 +0000 (08:56 +0200)]
flow-manager: optimize hash walking
Until now the flow manager would walk the entire flow hash table on an
interval. It would thus touch all flows, leading to a lot of memory
and cache pressure. In scenarios where the number of tracked flows runs
into the hundreds of thousands, and the memory used can run into many
hundreds of megabytes or even gigabytes, this would lead to serious
performance degradation.
This patch introduces a new approach. A timestamp per flow bucket
(hash row) is maintained by the flow manager. It holds the timestamp
of the earliest possible timeout of a flow in the list. The hash walk
skips rows with timestamps beyond the current time.
As the timestamp depends on the flows in the hash row's list, and on
the 'state' of each flow in the list, any addition of a flow or
changing of a flow's state invalidates the timestamp. The flow manager
then has to walk the list again to set a new timestamp.
A utility function FlowUpdateState is introduced to change Flow states,
taking care of the bucket timestamp invalidation while at it.
Empty flow buckets use a special value so that we don't have to take
the flow bucket lock to find out the bucket is empty.
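A sketch of the row-skipping mechanism; the names (FlowBucketSketch,
FB_TS_EMPTY) are illustrative, and the real patch ties this into
Suricata's flow hash and locking:

    #include <stdint.h>

    #define FB_TS_EMPTY   INT64_MAX /* sentinel: row holds no flows */
    #define FB_TS_INVALID 0         /* forces a re-scan on the next pass */

    typedef struct FlowBucketSketch_ {
        void   *head;    /* flow list; details omitted */
        int64_t next_ts; /* earliest possible flow timeout in this row */
    } FlowBucketSketch;

    /* Flow manager pass: decide per row whether it can be skipped.
     * Empty rows are recognised by the sentinel alone, so the bucket
     * lock is never taken for them. */
    static int FlowBucketNeedsCheck(const FlowBucketSketch *fb, int64_t now)
    {
        if (fb->next_ts == FB_TS_EMPTY)
            return 0;
        return fb->next_ts <= now; /* skip rows that cannot time out yet */
    }

    /* Called (e.g. from a FlowUpdateState-style helper) whenever a flow
     * is added to the row or changes state: the cached timestamp no
     * longer holds, so force a re-scan. */
    static void FlowBucketInvalidate(FlowBucketSketch *fb)
    {
        fb->next_ts = FB_TS_INVALID;
    }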
This patch also adds more performance counters:
flow_mgr.flows_checked | Total | 929
flow_mgr.flows_notimeout | Total | 391
flow_mgr.flows_timeout | Total | 538
flow_mgr.flows_removed | Total | 277
flow_mgr.flows_timeout_inuse | Total | 261
flow_mgr.rows_checked | Total | 1000000
flow_mgr.rows_skipped | Total | 998835
flow_mgr.rows_empty | Total | 290
flow_mgr.rows_maxlen | Total | 2
flow_mgr.flows_checked: number of flows checked for timeout in the
last pass
flow_mgr.flows_notimeout: number of flows out of flow_mgr.flows_checked
that didn't time out
flow_mgr.flows_timeout: number of flows out of flow_mgr.flows_checked
that did time out
flow_mgr.flows_removed: number of flows out of flow_mgr.flows_timeout
that were really removed
flow_mgr.flows_timeout_inuse: number of flows out of flow_mgr.flows_timeout
that were still in use or needed work
flow_mgr.rows_checked: hash table rows checked
flow_mgr.rows_skipped: hash table rows skipped because none of the flows
would time out anyway
The counters below relate only to rows that were not skipped.
flow_mgr.rows_empty: empty hash rows
flow_mgr.rows_maxlen: max number of flows per hash row. Best kept low,
so increase hash-size if needed.
flow_mgr.rows_busy: rows skipped because they were locked by another thread
Victor Julien [Fri, 13 May 2016 21:04:30 +0000 (23:04 +0200)]
flow: simplify timeout logic
Instead of a single big FlowProto array containing timeouts separately
for normal and emergency cases, plus the 'Free' pointer for the
protoctx, split up these arrays.
An array made of FlowProtoTimeout for just the normal timeouts, and a
mirror of it for emergency timeouts, are used through a pointer that
is set at init and swapped by the emergency logic. It's swapped back
when the emergency is over.
The free funcs are moved to their own array.
This simplifies the timeout lookup code and shrinks the data that is
commonly used.
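A sketch of the split-array idea under assumed names
(FlowProtoTimeoutSketch and the swap helpers); the real commit works on
Suricata's FlowProto structures:

    #include <stdint.h>

    enum { FLOW_PROTO_TCP, FLOW_PROTO_UDP, FLOW_PROTO_ICMP, FLOW_PROTO_MAX };

    /* Timeouts only; the free funcs live in a separate array. */
    typedef struct FlowProtoTimeoutSketch_ {
        uint32_t new_timeout;
        uint32_t est_timeout;
        uint32_t closed_timeout;
    } FlowProtoTimeoutSketch;

    static FlowProtoTimeoutSketch timeouts_normal[FLOW_PROTO_MAX];
    static FlowProtoTimeoutSketch timeouts_emerg[FLOW_PROTO_MAX];

    /* All timeout lookups go through this pointer, so entering and
     * leaving emergency mode is a single pointer swap. */
    static FlowProtoTimeoutSketch *flow_timeouts = timeouts_normal;

    static void FlowTimeoutsEmergencySketch(void) { flow_timeouts = timeouts_emerg; }
    static void FlowTimeoutsNormalSketch(void)    { flow_timeouts = timeouts_normal; }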
Victor Julien [Thu, 22 Sep 2016 10:38:54 +0000 (12:38 +0200)]
offloading: make disabling offloading configurable
Add a generic 'capture' section to the YAML:
    # general settings affecting packet capture
    capture:
      # disable NIC offloading. It's restored when Suricata exits.
      # Enabled by default.
      #disable-offloading: false
      #
      # disable checksum validation. Same as setting '-k none' on the
      # commandline.
      #checksum-validation: none
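A sketch of how a capture module might consult the new option;
ConfBoolOrDefault is a made-up stand-in for a YAML lookup (Suricata's
conf API has a ConfGetBool() helper for this):

    #include <stdbool.h>

    /* Made-up stand-in for querying the loaded YAML tree. */
    static bool ConfBoolOrDefault(const char *key, bool dflt)
    {
        (void)key; /* stub: a real build would look the key up */
        return dflt;
    }

    static bool CaptureOffloadingDisabled(void)
    {
        /* capture.disable-offloading defaults to true, i.e. offloading
         * is switched off unless the user opts out. */
        return ConfBoolOrDefault("capture.disable-offloading", true);
    }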
Moved and adapted code from detect-filemd5 to util-detect-file-hash,
generalised the code to work with SHA-1 and SHA-256, and added the
necessary flags and other constants.
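A sketch of the generalisation: one comparison path keyed on hash type
instead of MD5-only logic. Names and layout are illustrative, not the
util-detect-file-hash code itself:

    #include <stddef.h>
    #include <string.h>

    typedef enum { HASH_MD5, HASH_SHA1, HASH_SHA256 } HashType;

    static size_t HashDigestLen(HashType t)
    {
        switch (t) {
            case HASH_MD5:    return 16;
            case HASH_SHA1:   return 20;
            case HASH_SHA256: return 32;
        }
        return 0;
    }

    /* One match routine serves all three keywords: only the digest
     * length differs per hash type. */
    static int HashDigestMatch(HashType t, const unsigned char *a,
                               const unsigned char *b)
    {
        return memcmp(a, b, HashDigestLen(t)) == 0;
    }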
Tom DeCanio [Fri, 16 Sep 2016 12:24:50 +0000 (05:24 -0700)]
util-decode-mime: remove quote from boundary= string.
Remove the quote from the end of the boundary= string. It was throwing
off the MIME parser so that it wouldn't always catch MIME boundaries,
causing things like missed attachments.
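A sketch of the trim, with a small usage example; StripTrailingQuote is
an illustrative name, not the decoder's actual function:

    #include <stdio.h>
    #include <string.h>

    /* Drop a trailing '"' left on the value of a boundary= parameter,
     * so boundary matching sees the exact token. */
    static void StripTrailingQuote(char *val)
    {
        size_t len = strlen(val);
        if (len > 0 && val[len - 1] == '"')
            val[len - 1] = '\0';
    }

    int main(void)
    {
        char boundary[] = "----=_Part_0_123456\"";
        StripTrailingQuote(boundary);
        printf("%s\n", boundary); /* prints ----=_Part_0_123456 */
        return 0;
    }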
Eric Leblond [Tue, 31 May 2016 13:02:12 +0000 (15:02 +0200)]
unix-socket: add auto mode
When running in live mode, the new default 'auto' value of
unix-command.enabled causes unix-command to be activated. This allows
users of live capture to benefit from the feature, with no side effects
for users running in offline capture.
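A sketch of how the 'auto' value can resolve per run mode; the helper
name is made up:

    #include <stdbool.h>
    #include <string.h>

    /* 'auto' enables the unix socket only for live capture; explicit
     * 'yes'/'no' values keep their usual meaning. */
    static bool UnixCommandEnabled(const char *setting, bool live_mode)
    {
        if (strcmp(setting, "auto") == 0)
            return live_mode;
        return strcmp(setting, "yes") == 0;
    }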
Andreas Herz [Sat, 23 Jul 2016 19:59:12 +0000 (21:59 +0200)]
rule-reload: remember pending USR2 signals
We used to ignore additional USR2 signals while a rule reload was
running. The counter is now incremented with every additional USR2
signal, so they are no longer ignored, but it is still capped to
prevent huge overload or even overflow.
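A sketch of the counting handler; the cap value and names are
illustrative:

    #include <signal.h>

    #define PENDING_RELOADS_MAX 8 /* illustrative cap against overflow */

    static volatile sig_atomic_t pending_reloads = 0;

    /* Count every USR2 instead of dropping those that arrive while a
     * reload is in progress, but cap the counter so a signal flood
     * cannot overflow it. */
    static void SigUsr2Handler(int sig)
    {
        (void)sig;
        if (pending_reloads < PENDING_RELOADS_MAX)
            pending_reloads++;
    }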
Jason Ish [Mon, 19 Sep 2016 13:47:24 +0000 (07:47 -0600)]
defrag: use frag_pkt_too_large instead of frag_too_large
The rules were using the wrong decoder event type, which was only set
in the unlikely event of a complete overlap and really had nothing to
do with being too large.
Remove FRAG_TOO_LARGE as it's no longer being used; an overlap event
is already set in the case where this event would have been set.
Jason Ish [Mon, 6 Jun 2016 19:58:37 +0000 (13:58 -0600)]
logging: use a single entry point for all loggers
Introduces a new thread module, TMM_LOGGER, which is the root-most
logger.
It only handles loggers in the packet path; stats and flow logging are
not included.
The loggers form a hierarchy. At the top is the root logger, the main
entry point to logging. Under the root are parent loggers that serve
as entry points for specific types of loggers, such as packet loggers,
transaction loggers, etc. Each parent logger may have zero or more
loggers that actually handle the job of producing output to something
like a file.
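A sketch of the shape of the hierarchy; the types are illustrative, not
Suricata's actual logging structures:

    #include <stddef.h>

    typedef struct OutputLogger_ {
        void (*LogFunc)(const void *data); /* writes to a file etc. */
        struct OutputLogger_ *next;
    } OutputLogger;

    typedef struct ParentLogger_ {
        OutputLogger *children; /* zero or more concrete outputs */
        struct ParentLogger_ *next;
    } ParentLogger;

    static ParentLogger *root_loggers = NULL; /* the single entry point */

    /* One call from the packet path drives every registered logger. */
    static void RootLog(const void *data)
    {
        for (const ParentLogger *p = root_loggers; p != NULL; p = p->next)
            for (const OutputLogger *o = p->children; o != NULL; o = o->next)
                o->LogFunc(data);
    }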