From: Michael Altizer
Date: Thu, 6 Dec 2018 18:12:21 +0000 (-0500)
Subject: build: Generate and tag build 250
X-Git-Tag: 3.0.0-250
X-Git-Url: http://git.ipfire.org/cgi-bin/gitweb.cgi?a=commitdiff_plain;h=b492d7b94a61e5710510ce8cf15244f4a5bbca83;p=thirdparty%2Fsnort3.git

build: Generate and tag build 250
---

diff --git a/ChangeLog b/ChangeLog
index 29991b6c5..cdd78cc16 100644
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,58 @@
+18/12/06 - build 250
+
+-- actions: Fix incorrect order of IPS reject unreachable codes and add forward option
+-- active: added peg count for injects
+-- active, detection: active state is tied to specific packet, not thread
+-- appid: Don't build unit test components without ENABLE_UNIT_TESTS
+-- appid: Fix heap overflow issue for a fuzzed pcap
+-- build: accept generator names with spaces in configure_cmake.sh
+-- build: clean up additional warnings
+-- build: fix some cppcheck warnings
+-- build: fix some int format specifiers
+-- build: fix some int type conversion warnings
+-- build: reduce variable scope to address warnings
+-- detection: enable offloading non-pdu packets
+-- detection, stream: fixed assuming packets were offloaded when previous packets on flow have been offloaded
+-- file_api: choose whether to get file config from current config or staged one
+-- file: fail the reload if capture is enabled for the first time
+-- framework: Clone databus to new config during module reload
+-- loggers: Use thread safe strerror_r() instead of strerror()
+-- main: support resume(n) command
+-- managers: update action manager to support reload
+-- module_manager: Fix configuring module parameter defaults when modules have list parameters
+-- parameter: add max31, max32, and max53 for int upper bounds
+-- parameter: add maxSZ upper bound for int sizes
+-- parameter: build out validation unit tests
+-- parameter: clean up some signed/unsigned mismatches
+-- parameter: clean up upper bounds
+-- parameter: remove arbitrary one day limit on timers
+-- parameter: remove ineffective -1 from pcre_match_limit*
+-- parameter: reorganize for unit tests
+-- parameter: use bool instead of int for bools
+-- parameter: use consistent default port ranges
+-- perf_monitor: Actually allow building perf_monitor as a dynamic plugin
+-- perf_monitor: fix benign parameter errors
+-- perf_monitor: fixed fbs schema generation when not building with DEBUG
+-- protocols: add vlan_idx field to Packet struct and handle multiple vlan type ids; thanks to ymansour for reporting the issue
+-- regex worker: removed assert that didn't handle locks cleanly
+-- reputation: Fix iterations of layers for different nested_ip configs and show the blacklisted IP in events
+-- sip: Added sanity check for buffer boundary while parsing a sip message
+-- snort2lua: add code to output control = forward under the reject module
+-- snort2lua: Fix compiler warning for catching exceptions by value
+-- snort2lua: Fix pcre H and P option conversions for sip
+-- snort: add --help-limits to output max* values
+-- snort: Default to a snaplen of 1518
+-- snort: fix command line parameters to support setting in Lua; thanks to Meridoff for reporting the issue
+-- snort: remove obsolete and inadequate -W option; thanks to Jaime González for reporting the issue
+-- snort: terminate gracefully upon DAQ start failure; thanks to Jaime González for reporting the issue
+-- so rules: add robust stub parsing
+-- stream: fixed stream_base flow peg count sum_stats bug
+-- stream tcp: fixed applying post-inspection operations to wrong rebuilt packet
+-- stream tcp: fixed sequence overlap handling when working with empty seglist
+-- style: clean up comment to reduce spelling exceptions
+-- thread: No more breaks for pigs (union busting)
+-- tools: Install appid-detector-builder.sh with the other tools; thanks to Jonathan McDowell for reporting the issue
+
 18/11/07 - build 249

 -- appid: Fixing profiler data race and registration issues

diff --git a/doc/snort_manual.html b/doc/snort_manual.html
index 1fea7401b..1e8c8e35d 100644
--- a/doc/snort_manual.html
+++ b/doc/snort_manual.html
@@ -779,7 +779,7 @@ asciidoc.install(2);
 ,,_     -*> Snort++ <*-
-o"  )~   Version 3.0.0 (Build 248) from 2.9.11
+o"  )~   Version 3.0.0 (Build 250) from 2.9.11
  ''''    By Martin Roesch & The Snort Team
          http://snort.org/contact#team
          Copyright (C) 2014-2018 Cisco and/or its affiliates. All rights reserved.
@@ -2499,6 +2499,7 @@ all text mode outputs default to stdout
 --help-commands [<module prefix>] output matching commands
 --help-config [<module prefix>] output matching config options
 --help-counts [<module prefix>] output matching peg counts
+--help-limits print the int upper bounds denoted by max*
 --help-module <module> output description of given module
 --help-modules list all available modules with brief help
 --help-plugins list all available plugins with brief help
@@ -7022,7 +7023,7 @@ configuration for core processing.

  • -int active.attempts = 0: number of TCP packets sent per response (with varying sequence numbers) { 0:20 } +int active.attempts = 0: number of TCP packets sent per response (with varying sequence numbers) { 0:255 }

  • @@ -7037,7 +7038,7 @@ string active.dst_mac: use format 01:23:45:67:89:ab
  • -int active.max_responses = 0: maximum number of responses { 0: } +int active.max_responses = 0: maximum number of responses { 0:255 }

  • @@ -7046,6 +7047,14 @@ int active.min_interval = 255: minimum number of seconds betwee

+Peg counts:
+
  • +active.injects: total crafted packets injected (sum)

alerts

@@ -7066,12 +7075,12 @@ bool alerts.default_rule_state = true: enable or disable ips ru
  • -int alerts.detection_filter_memcap = 1048576: set available bytes of memory for detection_filters { 0: } +int alerts.detection_filter_memcap = 1048576: set available MB of memory for detection_filters { 0:max32 }

  • -int alerts.event_filter_memcap = 1048576: set available bytes of memory for event_filters { 0: } +int alerts.event_filter_memcap = 1048576: set available MB of memory for event_filters { 0:max32 }

  • @@ -7086,7 +7095,7 @@ string alerts.order = pass drop alert log: change the order of
  • -int alerts.rate_filter_memcap = 1048576: set available bytes of memory for rate_filters { 0: } +int alerts.rate_filter_memcap = 1048576: set available MB of memory for rate_filters { 0:max32 }

  • @@ -7115,7 +7124,7 @@ string alerts.tunnel_verdicts: let DAQ handle non-allow verdict
    • -int attribute_table.max_hosts = 1024: maximum number of hosts in attribute table { 32:207551 } +int attribute_table.max_hosts = 1024: maximum number of hosts in attribute table { 32:max53 }

    • @@ -7125,7 +7134,7 @@ int attribute_table.max_services_per_host = 8: maximum number o
    • -int attribute_table.max_metadata_services = 8: maximum number of services in rule metadata { 1:256 } +int attribute_table.max_metadata_services = 8: maximum number of services in rule { 1:255 }

    @@ -7144,7 +7153,7 @@ string classifications[].name: name used with classtype rule op
  • -int classifications[].priority = 1: default priority for class { 0: } +int classifications[].priority = 1: default priority for class { 0:max32 }

  • @@ -7183,7 +7192,7 @@ string daq.variables[].str: string parameter
  • -int daq.instances[].id: instance ID (required) { 0: } +int daq.instances[].id: instance ID (required) { 0:max32 }

  • @@ -7364,17 +7373,17 @@ bool daq.no_promisc = false: whether to put DAQ device into pro
    • -int detection.asn1 = 256: maximum decode nodes { 1: } +int detection.asn1 = 0: maximum decode nodes { 0:65535 }

    • -int detection.offload_limit = 99999: minimum sizeof PDU to offload fast pattern search (defaults to disabled) { 0: } +int detection.offload_limit = 99999: minimum sizeof PDU to offload fast pattern search (defaults to disabled) { 0:max32 }

    • -int detection.offload_threads = 0: maximum number of simultaneous offloads (defaults to disabled) { 0: } +int detection.offload_threads = 0: maximum number of simultaneous offloads (defaults to disabled) { 0:max32 }

    • @@ -7384,12 +7393,12 @@ bool detection.pcre_enable = true: disable pcre pattern matchin
    • -int detection.pcre_match_limit = 1500: limit pcre backtracking, -1 = max, 0 = off { -1:1000000 } +int detection.pcre_match_limit = 1500: limit pcre backtracking, 0 = off { 0:max32 }

    • -int detection.pcre_match_limit_recursion = 1500: limit pcre stack consumption, -1 = max, 0 = off { -1:10000 } +int detection.pcre_match_limit_recursion = 1500: limit pcre stack consumption, 0 = off { 0:max32 }

    • @@ -7399,7 +7408,7 @@ bool detection.enable_address_anomaly_checks = false: enable ch
    • -int detection.trace: mask for enabling debug traces in module +int detection.trace: mask for enabling debug traces in module { 0:max53 }

    @@ -7516,12 +7525,12 @@ int detection.trace: mask for enabling debug traces in module
    • -int event_filter[].gid = 1: rule generator ID { 0: } +int event_filter[].gid = 1: rule generator ID { 0:max32 }

    • -int event_filter[].sid = 1: rule signature ID { 0: } +int event_filter[].sid = 1: rule signature ID { 0:max32 }

    • @@ -7536,12 +7545,12 @@ enum event_filter[].track: filter only matching source or desti
    • -int event_filter[].count = 0: number of events in interval before tripping; -1 to disable { -1: } +int event_filter[].count = 0: number of events in interval before tripping; -1 to disable { -1:max31 }

    • -int event_filter[].seconds = 0: count interval { 0: } +int event_filter[].seconds = 0: count interval { 0:max32 }

    • @@ -7560,12 +7569,12 @@ string event_filter[].ip: restrict filter to these addresses ac
      • -int event_queue.max_queue = 8: maximum events to queue { 1: } +int event_queue.max_queue = 8: maximum events to queue { 1:max32 }

      • -int event_queue.log = 3: maximum events to log { 1: } +int event_queue.log = 3: maximum events to log { 1:max32 }

      • @@ -7631,7 +7640,7 @@ real high_availability.min_sync = 1.0: minimum interval between
        • -int host_cache[].size: size of host cache +int host_cache[].size: size of host cache { 1:max32 }

        @@ -7842,7 +7851,7 @@ string ips.uuid = 00000000-0000-0000-0000-000000000000: IPS pol
        • -int latency.packet.max_time = 500: set timeout for packet latency thresholding (usec) { 0: } +int latency.packet.max_time = 500: set timeout for packet latency thresholding (usec) { 0:max53 }

        • @@ -7857,7 +7866,7 @@ enum latency.packet.action = none: event action if packet times
        • -int latency.rule.max_time = 500: set timeout for rule evaluation (usec) { 0: } +int latency.rule.max_time = 500: set timeout for rule evaluation (usec) { 0:max53 }

        • @@ -7867,12 +7876,12 @@ bool latency.rule.suspend = false: temporarily suspend expensiv
        • -int latency.rule.suspend_threshold = 5: set threshold for number of timeouts before suspending a rule { 1: } +int latency.rule.suspend_threshold = 5: set threshold for number of timeouts before suspending a rule { 1:max32 }

        • -int latency.rule.max_suspend_time = 30000: set max time for suspending a rule (ms, 0 means permanently disable rule) { 0: } +int latency.rule.max_suspend_time = 30000: set max time for suspending a rule (ms, 0 means permanently disable rule) { 0:max32 }

        • @@ -7947,7 +7956,7 @@ enum latency.rule.action = none: event action for rule latency
          • -int memory.cap = 0: set the per-packet-thread cap on memory (bytes, 0 to disable) { 0: } +int memory.cap = 0: set the per-packet-thread cap on memory (bytes, 0 to disable) { 0:maxSZ }

          • @@ -7957,7 +7966,7 @@ bool memory.soft = false: always succeed in allocating memory,
          • -int memory.threshold = 0: set the per-packet-thread threshold for preemptive cleanup actions (percent, 0 to disable) { 0: } +int memory.threshold = 0: set the per-packet-thread threshold for preemptive cleanup actions (percent, 0 to disable) { 0:100 }

          @@ -8070,7 +8079,7 @@ bool output.show_year = false: include year in timestamp in the
        • -int output.tagged_packet_limit = 256: maximum number of packets tagged for non-packet metrics { 0: } +int output.tagged_packet_limit = 256: maximum number of packets tagged for non-packet metrics { 0:max32 }

        • @@ -8080,7 +8089,7 @@ bool output.verbose = false: be verbose (same as -v)
        • -bool output.wide_hex_dump = true: output 20 bytes per lines instead of 16 when dumping buffers +bool output.wide_hex_dump = false: output 20 bytes per lines instead of 16 when dumping buffers

        @@ -8136,12 +8145,12 @@ string packets.bpf_file: file with BPF to select traffic for Sn
      • -int packets.limit = 0: maximum number of packets to process before stopping (0 is unlimited) { 0: } +int packets.limit = 0: maximum number of packets to process before stopping (0 is unlimited) { 0:max53 }

      • -int packets.skip = 0: number of packets to skip before before processing { 0: } +int packets.skip = 0: number of packets to skip before before processing { 0:max53 }

      • @@ -8170,7 +8179,7 @@ string process.threads[].cpuset: pin the associated thread to t
      • -int process.threads[].thread = 0: set cpu affinity for the <cur_thread_num> thread that runs { 0: } +int process.threads[].thread = 0: set cpu affinity for the <cur_thread_num> thread that runs { 0:65535 }

      • @@ -8195,7 +8204,7 @@ string process.set_uid: set user ID (same as -u)
      • -string process.umask: set process umask (same as -m) +int process.umask: set process umask (same as -m) { 0x000:0x1FF }

      • @@ -8219,7 +8228,7 @@ bool profiler.modules.show = true: show module time profile sta
      • -int profiler.modules.count = 0: limit results to count items per level (0 = no limit) { 0: } +int profiler.modules.count = 0: limit results to count items per level (0 = no limit) { 0:max32 }

      • @@ -8229,7 +8238,7 @@ enum profiler.modules.sort = total_time: sort by given field {
      • -int profiler.modules.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1: } +int profiler.modules.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1:255 }

      • @@ -8239,7 +8248,7 @@ bool profiler.memory.show = true: show module memory profile st
      • -int profiler.memory.count = 0: limit results to count items per level (0 = no limit) { 0: } +int profiler.memory.count = 0: limit results to count items per level (0 = no limit) { 0:max32 }

      • @@ -8249,7 +8258,7 @@ enum profiler.memory.sort = total_used: sort by given field { n
      • -int profiler.memory.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1: } +int profiler.memory.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1:255 }

      • @@ -8259,7 +8268,7 @@ bool profiler.rules.show = true: show rule time profile stats
      • -int profiler.rules.count = 0: print results to given level (0 = all) { 0: } +int profiler.rules.count = 0: print results to given level (0 = all) { 0:max32 }

      • @@ -8278,12 +8287,12 @@ enum profiler.rules.sort = total_time: sort by given field { no
        • -int rate_filter[].gid = 1: rule generator ID { 0: } +int rate_filter[].gid = 1: rule generator ID { 0:max32 }

        • -int rate_filter[].sid = 1: rule signature ID { 0: } +int rate_filter[].sid = 1: rule signature ID { 0:max32 }

        • @@ -8293,12 +8302,12 @@ enum rate_filter[].track = by_src: filter only matching source
        • -int rate_filter[].count = 1: number of events in interval before tripping { 0: } +int rate_filter[].count = 1: number of events in interval before tripping { 0:max32 }

        • -int rate_filter[].seconds = 1: count interval { 0: } +int rate_filter[].seconds = 1: count interval { 0:max32 }

        • @@ -8308,7 +8317,7 @@ enum rate_filter[].new_action = alert: take this action on futu
        • -int rate_filter[].timeout = 1: count interval { 0: } +int rate_filter[].timeout = 1: count interval { 0:max32 }

        • @@ -8346,12 +8355,12 @@ string references[].url: where this reference is defined
          • -int rule_state[].gid = 0: rule generator ID { 0: } +int rule_state[].gid = 0: rule generator ID { 0:max32 }

          • -int rule_state[].sid = 0: rule signature ID { 0: } +int rule_state[].sid = 0: rule signature ID { 0:max32 }

          • @@ -8370,7 +8379,7 @@ bool rule_state[].enable = true: enable or disable rule in all
            • -int search_engine.bleedover_port_limit = 1024: maximum ports in rule before demotion to any-any port group { 1: } +int search_engine.bleedover_port_limit = 1024: maximum ports in rule before demotion to any-any port group { 1:max32 }

            • @@ -8410,7 +8419,7 @@ bool search_engine.debug_print_rule_groups_compiled = false: pr
            • -int search_engine.max_pattern_len = 0: truncate patterns when compiling into state machine (0 means no maximum) { 0: } +int search_engine.max_pattern_len = 0: truncate patterns when compiling into state machine (0 means no maximum) { 0:max32 }

            • @@ -8614,12 +8623,12 @@ implied snort.-M: log messages to syslog (not alerts)
            • -int snort.-m: <umask> set umask = <umask> { 0: } +int snort.-m: <umask> set the process file mode creation mask { 0x000:0x1FF }

            • -int snort.-n: <count> stop after count packets { 0: } +int snort.-n: <count> stop after count packets { 0:max53 }

            • @@ -8654,7 +8663,7 @@ string snort.-S: <x=v> set config variable x equal to val
            • -int snort.-s = 1514: <snap> (same as --snaplen); default is 1514 { 68:65535 } +int snort.-s = 1518: <snap> (same as --snaplen); default is 1518 { 68:65535 }

            • @@ -8689,11 +8698,6 @@ implied snort.-v: be verbose
  • -implied snort.-W: lists available interfaces

              implied snort.-X: dump the raw packet data starting at the link layer

            • @@ -8709,7 +8713,7 @@ implied snort.-y: include year in timestamp in the alert and lo
            • -int snort.-z = 1: <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 { 0: } +int snort.-z = 1: <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 { 0:max32 }

            • @@ -8814,6 +8818,11 @@ string snort.--help-counts: [<module prefix>] output matc
  • +implied snort.--help-limits: print the int upper bounds denoted by max*

              string snort.--help-module: <module> output description of given module

            • @@ -8894,7 +8903,7 @@ implied snort.--markup: output help in asciidoc compatible form
            • -int snort.--max-packet-threads = 1: <count> configure maximum number of packet threads (same as -z) { 0: } +int snort.--max-packet-threads = 1: <count> configure maximum number of packet threads (same as -z) { 0:max32 }

            • @@ -8919,11 +8928,6 @@ implied snort.--pause: wait for resume/quit command before proc
  • -int snort.--pause-after-n: <count> pause after count packets, to be used with single packet thread only { 1: }

              implied snort.--parsing-follows-files: parse relative paths from the perspective of the current configuration file

            • @@ -8949,7 +8953,7 @@ string snort.--pcap-filter: <filter> filter to apply when
            • -int snort.--pcap-loop: <count> read all pcaps <count> times; 0 will read until Snort is terminated { -1: } +int snort.--pcap-loop: <count> read all pcaps <count> times; 0 will read until Snort is terminated { 0:max32 }

            • @@ -8999,7 +9003,7 @@ implied snort.--rule-to-hex: output so rule header to stdout fo
            • -string snort.--rule-to-text = [SnortFoo]: output plain so rule header to stdout for text rule on stdin { 16 } +string snort.--rule-to-text: output plain so rule header to stdout for text rule on stdin (specify delimiter or [Snort_SO_Rule] will be used) { 16 }

            • @@ -9019,22 +9023,17 @@ implied snort.--shell: enable the interactive command line
  • -implied snort.--piglet: enable piglet test harness mode

              implied snort.--show-plugins: list module and plugin versions

            • -int snort.--skip: <n> skip 1st n packets { 0: } +int snort.--skip: <n> skip 1st n packets { 0:max53 }

            • -int snort.--snaplen = 1514: <snap> set snaplen of packet (same as -s) { 68:65535 } +int snort.--snaplen = 1518: <snap> set snaplen of packet (same as -s) { 68:65535 }

            • @@ -9064,11 +9063,6 @@ string snort.--tweaks: tune configuration
  • -string snort.--catch-test: comma separated list of cat unit test tags or all

              implied snort.--version: show version number (same as -V)

            • @@ -9124,7 +9118,7 @@ implied snort.--warn-vars: warn about variable definition and u
            • -int snort.--x2c: output ASCII char for given hex (see also --c2x) +int snort.--x2c: output ASCII char for given hex (see also --c2x) { 0x00:0xFF }

            • @@ -9139,7 +9133,7 @@ implied snort.--trace: turn on main loop debug trace
            • -int snort.trace: mask for enabling debug traces in module +int snort.trace: mask for enabling debug traces in module { 0:max53 }

            @@ -9197,7 +9191,7 @@ int snort.trace: mask for enabling debug traces in module
          • -snort.resume(): continue packet processing +snort.resume(pkt_num): continue packet processing. If number of packet is specified, will resume for n packets and pause

          • @@ -9274,12 +9268,12 @@ int snort.trace: mask for enabling debug traces in module
            • -int suppress[].gid = 0: rule generator ID { 0: } +int suppress[].gid = 0: rule generator ID { 0:max32 }

            • -int suppress[].sid = 0: rule signature ID { 0: } +int suppress[].sid = 0: rule signature ID { 0:max32 }

            • @@ -10050,7 +10044,7 @@ bool mpls.enable_mpls_overlapping_ip = false: enable if private
            • -int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1: } +int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1:255 }

            • @@ -10508,12 +10502,7 @@ protocols beyond basic decoding.

  • -int appid.first_decrypted_packet_debug = 0: the first packet of an already decrypted SSL flow (debug single session only) { 0: }

              -int appid.memcap = 0: disregard - not implemented { 0: } +int appid.memcap = 0: disregard - not implemented { 0:maxSZ }

            • @@ -10523,17 +10512,17 @@ bool appid.log_stats = false: enable logging of appid statistic
            • -int appid.app_stats_period = 300: time period for collecting and logging appid statistics { 0: } +int appid.app_stats_period = 300: time period for collecting and logging appid statistics { 0:max32 }

            • -int appid.app_stats_rollover_size = 20971520: max file size for appid stats before rolling over the log file { 0: } +int appid.app_stats_rollover_size = 20971520: max file size for appid stats before rolling over the log file { 0:max32 }

            • -int appid.app_stats_rollover_time = 86400: max time period for collection appid stats before rolling over the log file { 0: } +int appid.app_stats_rollover_time = 86400: max time period for collection appid stats before rolling over the log file { 0:max31 }

            • @@ -10543,7 +10532,7 @@ string appid.app_detector_dir: directory to load appid detector
            • -int appid.instance_id = 0: instance id - ignored { 0: } +int appid.instance_id = 0: instance id - ignored { 0:max32 }

            • @@ -10583,7 +10572,7 @@ bool appid.log_all_sessions = false: enable logging of all appi
            • -int appid.trace: mask for enabling debug traces in module +int appid.trace: mask for enabling debug traces in module { 0:max53 }

            @@ -10725,7 +10714,7 @@ mac arp_spoof.hosts[].mac: host mac address
            • -int binder[].when.ips_policy_id = 0: unique ID for selection of this config by external logic { 0: } +int binder[].when.ips_policy_id = 0: unique ID for selection of this config by external logic { 0:max32 }

            • @@ -10775,12 +10764,12 @@ bit_list binder[].when.dst_ports: list of destination ports { 6
            • -int binder[].when.src_zone: source zone { 0:2147483647 } +int binder[].when.src_zone: source zone { 0:max31 }

            • -int binder[].when.dst_zone: destination zone { 0:2147483647 } +int binder[].when.dst_zone: destination zone { 0:max31 }

            • @@ -10877,7 +10866,7 @@ select data_log.key = http_request_header_event : name of the e
            • -int data_log.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: } +int data_log.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:max32 }

            @@ -10937,52 +10926,52 @@ int data_log.limit = 0: set maximum size in MB before rollover
            • -bool dce_smb.disable_defrag = false: Disable DCE/RPC defragmentation +bool dce_smb.disable_defrag = false: disable DCE/RPC defragmentation

            • -int dce_smb.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 } +int dce_smb.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

            • -int dce_smb.reassemble_threshold = 0: Minimum bytes received before performing reassembly { 0:65535 } +int dce_smb.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 }

            • -enum dce_smb.smb_fingerprint_policy = none: Target based SMB policy to use { none | client | server | both } +enum dce_smb.smb_fingerprint_policy = none: target based SMB policy to use { none | client | server | both }

            • -enum dce_smb.policy = WinXP: Target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 } +enum dce_smb.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }

            • -int dce_smb.smb_max_chain = 3: SMB max chain size { 0:255 } +int dce_smb.smb_max_chain = 3: SMB max chain size { 0:255 }

            • -int dce_smb.smb_max_compound = 3: SMB max compound size { 0:255 } +int dce_smb.smb_max_compound = 3: SMB max compound size { 0:255 }

            • -multi dce_smb.valid_smb_versions = all: Valid SMB versions { v1 | v2 | all } +multi dce_smb.valid_smb_versions = all: valid SMB versions { v1 | v2 | all }

            • -enum dce_smb.smb_file_inspection = off: SMB file inspection { off | on | only } +enum dce_smb.smb_file_inspection = off: SMB file inspection { off | on | only }

            • -int dce_smb.smb_file_depth = 16384: SMB file depth for file data { -1: } +int dce_smb.smb_file_depth = 16384: SMB file depth for file data { -1:32767 }

            • @@ -10997,7 +10986,7 @@ bool dce_smb.smb_legacy_mode = false: inspect only SMBv1
            • -int dce_smb.trace: mask for enabling debug traces in module +int dce_smb.trace: mask for enabling debug traces in module { 0:max53 }

            @@ -11437,22 +11426,22 @@ int dce_smb.trace: mask for enabling debug traces in module
            • -bool dce_tcp.disable_defrag = false: Disable DCE/RPC defragmentation +bool dce_tcp.disable_defrag = false: disable DCE/RPC defragmentation

            • -int dce_tcp.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 } +int dce_tcp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

            • -int dce_tcp.reassemble_threshold = 0: Minimum bytes received before performing reassembly { 0:65535 } +int dce_tcp.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 }

            • -enum dce_tcp.policy = WinXP: Target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 } +enum dce_tcp.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }

            @@ -11697,17 +11686,17 @@ enum dce_tcp.policy = WinXP: Target based policy to use { Win2
            • -bool dce_udp.disable_defrag = false: Disable DCE/RPC defragmentation +bool dce_udp.disable_defrag = false: disable DCE/RPC defragmentation

            • -int dce_udp.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 } +int dce_udp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

            • -int dce_udp.trace: mask for enabling debug traces in module +int dce_udp.trace: mask for enabling debug traces in module { 0:max53 }

            @@ -12074,22 +12063,22 @@ int dpx.max = 0: maximum payload before alert { 0:65535 }
            • -int file_id.type_depth = 1460: stop type ID at this point { 0: } +int file_id.type_depth = 1460: stop type ID at this point { 0:max53 }

            • -int file_id.signature_depth = 10485760: stop signature at this point { 0: } +int file_id.signature_depth = 10485760: stop signature at this point { 0:max53 }

            • -int file_id.block_timeout = 86400: stop blocking after this many seconds { 0: } +int file_id.block_timeout = 86400: stop blocking after this many seconds { 0:max31 }

            • -int file_id.lookup_timeout = 2: give up on lookup after this many seconds { 0: } +int file_id.lookup_timeout = 2: give up on lookup after this many seconds { 0:max31 }

            • @@ -12099,27 +12088,27 @@ bool file_id.block_timeout_lookup = false: block if lookup time
            • -int file_id.capture_memcap = 100: memcap for file capture in megabytes { 0: } +int file_id.capture_memcap = 100: memcap for file capture in megabytes { 0:max53 }

            • -int file_id.capture_max_size = 1048576: stop file capture beyond this point { 0: } +int file_id.capture_max_size = 1048576: stop file capture beyond this point { 0:max53 }

            • -int file_id.capture_min_size = 0: stop file capture if file size less than this { 0: } +int file_id.capture_min_size = 0: stop file capture if file size less than this { 0:max53 }

            • -int file_id.capture_block_size = 32768: file capture block size in bytes { 8: } +int file_id.capture_block_size = 32768: file capture block size in bytes { 8:max53 }

            • -int file_id.max_files_cached = 65536: maximal number of files cached in memory { 8: } +int file_id.max_files_cached = 65536: maximal number of files cached in memory { 8:max53 }

            • @@ -12139,12 +12128,12 @@ bool file_id.enable_capture = false: enable file capture
            • -int file_id.show_data_depth = 100: print this many octets { 0: } +int file_id.show_data_depth = 100: print this many octets { 0:max53 }

            • -int file_id.file_rules[].rev = 0: rule revision { 0: } +int file_id.file_rules[].rev = 0: rule revision { 0:max32 }

            • @@ -12159,7 +12148,7 @@ string file_id.file_rules[].type: file type name
            • -int file_id.file_rules[].id = 0: file type id { 0: } +int file_id.file_rules[].id = 0: file type id { 0:max32 }

            • @@ -12184,12 +12173,12 @@ string file_id.file_rules[].magic[].content: file magic content
            • -int file_id.file_rules[].magic[].offset = 0: file magic offset { 0: } +int file_id.file_rules[].magic[].offset = 0: file magic offset { 0:max32 }

            • -int file_id.file_policy[].when.file_type_id = 0: unique ID for file type in file magic rule { 0: } +int file_id.file_policy[].when.file_type_id = 0: unique ID for file type in file magic rule { 0:max32 }

            • @@ -12234,7 +12223,7 @@ bool file_id.trace_stream = false: enable runtime dump of file
            • -int file_id.verdict_delay = 0: number of queries to return final verdict { 0: } +int file_id.verdict_delay = 0: number of queries to return final verdict { 0:max53 }
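The symbolic upper bounds used throughout these changes can be read as a sketch, assuming max31/max32 follow the bit widths their names imply and max53 is the largest integer a Lua double represents exactly; the 4294967295 default now shown for ftp_client.max_resp_len is consistent with max32 = 2^32 - 1, while the exact value of maxSZ depends on the platform's size_t:

```lua
-- Assumed numeric values of the new symbolic bounds (derived from the
-- names and the defaults shown in this diff, not from Snort source).
local max31 = 2^31 - 1  -- 2147483647, e.g. stream_*.session_timeout
local max32 = 2^32 - 1  -- 4294967295, e.g. ftp_client.max_resp_len
local max53 = 2^53 - 1  -- 9007199254740991, largest exact Lua number
-- maxSZ: platform size_t maximum, used for memcaps such as
-- port_scan.memcap and the alert_*.limit rollover sizes
```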

            @@ -12303,12 +12292,12 @@ addr ftp_client.bounce_to[].address = 1.0.0.0/32: allowed IP ad
          • -port ftp_client.bounce_to[].port = 20: allowed port { 1: } +port ftp_client.bounce_to[].port = 20: allowed port

          • -port ftp_client.bounce_to[].last_port: optional allowed range from port to last_port inclusive { 0: } +port ftp_client.bounce_to[].last_port: optional allowed range from port to last_port inclusive

          • @@ -12318,7 +12307,7 @@ bool ftp_client.ignore_telnet_erase_cmds = false: ignore erase
          • -int ftp_client.max_resp_len = -1: maximum FTP response accepted by client { -1: } +int ftp_client.max_resp_len = 4294967295: maximum FTP response accepted by client { 0:max32 }

          • @@ -12376,7 +12365,7 @@ string ftp_server.directory_cmds[].dir_cmd: directory command
          • -int ftp_server.directory_cmds[].rsp_code = 200: expected successful response code for command { 200: } +int ftp_server.directory_cmds[].rsp_code = 200: expected successful response code for command { 200:max32 }

          • @@ -12416,12 +12405,12 @@ string ftp_server.cmd_validity[].format: format specification
          • -int ftp_server.cmd_validity[].length = 0: specify non-default maximum for command { 0: } +int ftp_server.cmd_validity[].length = 0: specify non-default maximum for command { 0:max32 }

          • -int ftp_server.def_max_param_len = 100: default maximum length of commands handled by server; 0 is unlimited { 1: } +int ftp_server.def_max_param_len = 100: default maximum length of commands handled by server; 0 is unlimited { 1:max32 }

          • @@ -12561,7 +12550,7 @@ int gtp_inspect[].infos[].length = 0: information element type
          • -int gtp_inspect.trace: mask for enabling debug traces in module +int gtp_inspect.trace: mask for enabling debug traces in module { 0:max53 }

          @@ -12651,12 +12640,12 @@ int gtp_inspect.trace: mask for enabling debug traces in module
          • -int http_inspect.request_depth = -1: maximum request message body bytes to examine (-1 no limit) { -1: } +int http_inspect.request_depth = -1: maximum request message body bytes to examine (-1 no limit) { -1:max53 }

          • -int http_inspect.response_depth = -1: maximum response message body bytes to examine (-1 no limit) { -1: } +int http_inspect.response_depth = -1: maximum response message body bytes to examine (-1 no limit) { -1:max53 }

          • @@ -12754,36 +12743,6 @@ bool http_inspect.plus_to_space = true: replace + with <sp> bool http_inspect.simplify_path = true: reduce URI directory path to simplest form

          • -bool http_inspect.test_input = false: read HTTP messages from text file

          • -bool http_inspect.test_output = false: print out HTTP section data

          • -int http_inspect.print_amount = 1200: number of characters to print from a Field { 1:1000000 }

          • -bool http_inspect.print_hex = false: nonprinting characters printed in [HH] format instead of using an asterisk

          • -bool http_inspect.show_pegs = true: display peg counts with test output

          • -bool http_inspect.show_scan = false: display scanned segments

          Rules:

            @@ -14050,12 +14009,12 @@ string packet_capture.filter: bpf filter to use for packet dump
            • -bool perf_monitor.base = true: enable base statistics { nullptr } +bool perf_monitor.base = true: enable base statistics

            • -bool perf_monitor.cpu = false: enable cpu statistics { nullptr } +bool perf_monitor.cpu = false: enable cpu statistics

            • @@ -14070,22 +14029,22 @@ bool perf_monitor.flow_ip = false: enable statistics on host pa
            • -int perf_monitor.packets = 10000: minimum packets to report { 0: } +int perf_monitor.packets = 10000: minimum packets to report { 0:max32 }

            • -int perf_monitor.seconds = 60: report interval { 1: } +int perf_monitor.seconds = 60: report interval { 1:max32 }

            • -int perf_monitor.flow_ip_memcap = 52428800: maximum memory in bytes for flow tracking { 8200: } +int perf_monitor.flow_ip_memcap = 52428800: maximum memory in bytes for flow tracking { 8200:maxSZ }

            • -int perf_monitor.max_file_size = 1073741824: files will be rolled over if they exceed this size { 4096: } +int perf_monitor.max_file_size = 1073741824: files will be rolled over if they exceed this size { 4096:max53 }

            • @@ -14257,7 +14216,7 @@ int pop.uu_decode_depth = 1460: Unix-to-Unix decoding depth (-1
              • -int port_scan.memcap = 1048576: maximum tracker memory in bytes { 1: } +int port_scan.memcap = 1048576: maximum tracker memory in bytes { 1:maxSZ }

              • @@ -14297,282 +14256,282 @@ bool port_scan.include_midstream = false: list of CIDRs with op
              • -int port_scan.tcp_ports.scans = 100: scan attempts { 0: } +int port_scan.tcp_ports.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.tcp_ports.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_ports.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.tcp_ports.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_ports.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_decoy.scans = 100: scan attempts { 0: } +int port_scan.tcp_decoy.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.tcp_decoy.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.tcp_decoy.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_sweep.scans = 100: scan attempts { 0: } +int port_scan.tcp_sweep.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.tcp_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.tcp_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_dist.scans = 100: scan attempts { 0: } +int port_scan.tcp_dist.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.tcp_dist.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_dist.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.tcp_dist.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.udp_ports.scans = 100: scan attempts { 0: } +int port_scan.udp_ports.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.udp_ports.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_ports.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.udp_ports.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_ports.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.udp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.udp_decoy.scans = 100: scan attempts { 0: } +int port_scan.udp_decoy.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.udp_decoy.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.udp_decoy.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.udp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.udp_sweep.scans = 100: scan attempts { 0: } +int port_scan.udp_sweep.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.udp_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.udp_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.udp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.udp_dist.scans = 100: scan attempts { 0: } +int port_scan.udp_dist.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.udp_dist.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_dist.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.udp_dist.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.udp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.ip_proto.scans = 100: scan attempts { 0: } +int port_scan.ip_proto.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.ip_proto.rejects = 15: scan attempts with negative response { 0: } +int port_scan.ip_proto.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.ip_proto.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.ip_proto.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.ip_proto.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.ip_proto.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.ip_decoy.scans = 100: scan attempts { 0: } +int port_scan.ip_decoy.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.ip_decoy.rejects = 15: scan attempts with negative response { 0: } +int port_scan.ip_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.ip_decoy.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.ip_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.ip_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.ip_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.ip_sweep.scans = 100: scan attempts { 0: } +int port_scan.ip_sweep.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.ip_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.ip_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.ip_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.ip_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.ip_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.ip_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.ip_dist.scans = 100: scan attempts { 0: } +int port_scan.ip_dist.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.ip_dist.rejects = 15: scan attempts with negative response { 0: } +int port_scan.ip_dist.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.ip_dist.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.ip_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.ip_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.ip_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.icmp_sweep.scans = 100: scan attempts { 0: } +int port_scan.icmp_sweep.scans = 100: scan attempts { 0:65535 }

              • -int port_scan.icmp_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.icmp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

              • -int port_scan.icmp_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.icmp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

              • -int port_scan.icmp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.icmp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

              • -int port_scan.tcp_window = 0: detection interval for all TCP scans { 0: } +int port_scan.tcp_window = 0: detection interval for all TCP scans { 0:max32 }

              • -int port_scan.udp_window = 0: detection interval for all UDP scans { 0: } +int port_scan.udp_window = 0: detection interval for all UDP scans { 0:max32 }

              • -int port_scan.ip_window = 0: detection interval for all IP scans { 0: } +int port_scan.ip_window = 0: detection interval for all IP scans { 0:max32 }

              • -int port_scan.icmp_window = 0: detection interval for all ICMP scans { 0: } +int port_scan.icmp_window = 0: detection interval for all ICMP scans { 0:max32 }
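With the per-protocol thresholds above now capped at 65535 and the detection windows at max32, a port_scan override in snort.lua might look like this (a hedged sketch; the parameter names come from the diff above, the values are purely illustrative):

```lua
port_scan =
{
    -- per-protocol thresholds, now bounded { 0:65535 }
    tcp_ports = { scans = 200, rejects = 30, nets = 50, ports = 50 },

    -- detection intervals in seconds, now bounded { 0:max32 }
    tcp_window = 60,
    udp_window = 60,
}
```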

              @@ -14931,7 +14890,7 @@ int sip.max_content_len = 1024: maximum content length of the m
            • -int sip.max_dialogs = 4: maximum number of dialogs within one stream session { 1:4194303 } +int sip.max_dialogs = 4: maximum number of dialogs within one stream session { 1:max32 }

            • @@ -15281,7 +15240,7 @@ string smtp.alt_max_command_line_len[].command: command string
            • -int smtp.alt_max_command_line_len[].length = 0: specify non-default maximum for command { 0: } +int smtp.alt_max_command_line_len[].length = 0: specify non-default maximum for command { 0:max32 }

            • @@ -15761,7 +15720,7 @@ int ssl.max_heartbeat_length = 0: maximum length of heartbeat r
              • -int stream.footprint = 0: use zero for production, non-zero for testing at given size (for TCP and user) { 0: } +int stream.footprint = 0: use zero for production, non-zero for testing at given size (for TCP and user) { 0:max32 }

              • @@ -15771,97 +15730,97 @@ bool stream.ip_frags_only = false: don’t process non-frag
              • -int stream.ip_cache.max_sessions = 16384: maximum simultaneous sessions tracked before pruning { 2: } +int stream.ip_cache.max_sessions = 16384: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.ip_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.ip_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.ip_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.ip_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.icmp_cache.max_sessions = 65536: maximum simultaneous sessions tracked before pruning { 2: } +int stream.icmp_cache.max_sessions = 65536: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.icmp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.icmp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.icmp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.icmp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.tcp_cache.max_sessions = 262144: maximum simultaneous sessions tracked before pruning { 2: } +int stream.tcp_cache.max_sessions = 262144: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.tcp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.tcp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.tcp_cache.idle_timeout = 3600: maximum inactive time before retiring session tracker { 1: } +int stream.tcp_cache.idle_timeout = 3600: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.udp_cache.max_sessions = 131072: maximum simultaneous sessions tracked before pruning { 2: } +int stream.udp_cache.max_sessions = 131072: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.udp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.udp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.udp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.udp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.user_cache.max_sessions = 1024: maximum simultaneous sessions tracked before pruning { 2: } +int stream.user_cache.max_sessions = 1024: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.user_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.user_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.user_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.user_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.file_cache.max_sessions = 128: maximum simultaneous sessions tracked before pruning { 2: } +int stream.file_cache.max_sessions = 128: maximum simultaneous sessions tracked before pruning { 2:max32 }

              • -int stream.file_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.file_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

              • -int stream.file_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.file_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

              • -int stream.trace: mask for enabling debug traces in module +int stream.trace: mask for enabling debug traces in module { 0:max53 }
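Each protocol cache above shares the same three parameters; a minimal snort.lua sketch tightening the TCP cache within the new max32 bounds (illustrative values only):

```lua
stream =
{
    tcp_cache =
    {
        max_sessions = 131072,  -- { 2:max32 }
        pruning_timeout = 30,   -- { 1:max32 }
        idle_timeout = 1800,    -- { 1:max32 }
    },
}
```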

              @@ -16150,7 +16109,7 @@ bool stream_file.upload = false: indicate file transfer directi
              • -int stream_icmp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_icmp.session_timeout = 30: session tracking timeout { 1:max31 }

              @@ -16197,17 +16156,17 @@ int stream_icmp.session_timeout = 30: session tracking timeout
              • -int stream_ip.max_frags = 8192: maximum number of simultaneous fragments being tracked { 1: } +int stream_ip.max_frags = 8192: maximum number of simultaneous fragments being tracked { 1:max32 }

              • -int stream_ip.max_overlaps = 0: maximum allowed overlaps per datagram; 0 is unlimited { 0: } +int stream_ip.max_overlaps = 0: maximum allowed overlaps per datagram; 0 is unlimited { 0:max32 }

              • -int stream_ip.min_frag_length = 0: alert if fragment length is below this limit before or after trimming { 0: } +int stream_ip.min_frag_length = 0: alert if fragment length is below this limit before or after trimming { 0:65535 }

              • @@ -16222,12 +16181,12 @@ enum stream_ip.policy = linux: fragment reassembly policy { fir
              • -int stream_ip.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_ip.session_timeout = 30: session tracking timeout { 1:max31 }

              • -int stream_ip.trace: mask for enabling debug traces in module +int stream_ip.trace: mask for enabling debug traces in module { 0:max53 }

              @@ -16422,7 +16381,7 @@ int stream_ip.trace: mask for enabling debug traces in module
              • -int stream_tcp.flush_factor = 0: flush upon seeing a drop in segment size after given number of non-decreasing segments { 0: } +int stream_tcp.flush_factor = 0: flush upon seeing a drop in segment size after given number of non-decreasing segments { 0:65535 }

              • @@ -16432,7 +16391,7 @@ int stream_tcp.max_window = 0: maximum allowed TCP window { 0:1
              • -int stream_tcp.overlap_limit = 0: maximum number of allowed overlapping segments per session { 0:255 } +int stream_tcp.overlap_limit = 0: maximum number of allowed overlapping segments per session { 0:max32 }

              • @@ -16452,7 +16411,7 @@ bool stream_tcp.reassemble_async = true: queue data for reassem
              • -int stream_tcp.require_3whs = -1: don’t track midstream sessions after given seconds from start up; -1 tracks all { -1:86400 } +int stream_tcp.require_3whs = -1: don’t track midstream sessions after given seconds from start up; -1 tracks all { -1:max31 }

              • @@ -16462,12 +16421,12 @@ bool stream_tcp.show_rebuilt_packets = false: enable cmg like o
              • -int stream_tcp.queue_limit.max_bytes = 1048576: don’t queue more than given bytes per session and direction { 0: } +int stream_tcp.queue_limit.max_bytes = 1048576: don’t queue more than given bytes per session and direction { 0:max32 }

              • -int stream_tcp.queue_limit.max_segments = 2621: don’t queue more than given segments per session and direction { 0: } +int stream_tcp.queue_limit.max_segments = 2621: don’t queue more than given segments per session and direction { 0:max32 }

              • @@ -16482,7 +16441,7 @@ int stream_tcp.small_segments.maximum_size = 0: limit number of
              • -int stream_tcp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_tcp.session_timeout = 30: session tracking timeout { 1:max31 }

              @@ -16802,7 +16761,7 @@ int stream_tcp.session_timeout = 30: session tracking timeout {
              • -int stream_udp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_udp.session_timeout = 30: session tracking timeout { 1:max31 }

              @@ -16854,12 +16813,12 @@ int stream_udp.session_timeout = 30: session tracking timeout {
              • -int stream_user.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_user.session_timeout = 30: session tracking timeout { 1:max31 }

              • -int stream_user.trace: mask for enabling debug traces in module +int stream_user.trace: mask for enabling debug traces in module { 0:max53 }

              @@ -16873,7 +16832,7 @@ int stream_user.trace: mask for enabling debug traces in module
              • -int telnet.ayt_attack_thresh = -1: alert on this number of consecutive Telnet AYT commands { -1: } +int telnet.ayt_attack_thresh = -1: alert on this number of consecutive Telnet AYT commands { -1:max31 }

              • @@ -17070,7 +17029,7 @@ enum reject.reset: send TCP reset to one or both ends { source|
              • -enum reject.control: send ICMP unreachable(s) { network|host|port|all } +enum reject.control: send ICMP unreachable(s) { network|host|port|forward|all }
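The new forward unreachable code can be selected on a reject action, e.g. (a sketch assuming the Lua form snort2lua now emits for this option, per the "control = forward" changelog entry):

```lua
-- send an ICMP forward unreachable in response to blocked traffic
reject = { control = 'forward' }
```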

              @@ -17147,17 +17106,17 @@ implied asn1.print: dump decode data to console; always true
            • -int asn1.oversize_length: compares ASN.1 type lengths with the supplied argument { 0: } +int asn1.oversize_length: compares ASN.1 type lengths with the supplied argument { 0:max32 }

            • -int asn1.absolute_offset: absolute offset from the beginning of the packet { 0: } +int asn1.absolute_offset: absolute offset from the beginning of the packet { 0:65535 }

            • -int asn1.relative_offset: relative offset from the cursor +int asn1.relative_offset: relative offset from the cursor { -65535:65535 }

            @@ -17171,12 +17130,12 @@ int asn1.relative_offset: relative offset from the cursor
            • -int base64_decode.bytes: number of base64 encoded bytes to decode { 1: } +int base64_decode.bytes: number of base64 encoded bytes to decode { 1:max32 }

            • -int base64_decode.offset = 0: bytes past start of buffer to start decoding { 0: } +int base64_decode.offset = 0: bytes past start of buffer to start decoding { 0:max32 }

            • @@ -17539,12 +17498,12 @@ implied content.fast_pattern: use this content in the fast patt
            • -int content.fast_pattern_offset = 0: number of leading characters of this content the fast pattern matcher should exclude { 0: } +int content.fast_pattern_offset = 0: number of leading characters of this content the fast pattern matcher should exclude { 0:65535 }

            • -int content.fast_pattern_length: maximum number of characters from this content the fast pattern matcher should use { 1: } +int content.fast_pattern_length: maximum number of characters from this content the fast pattern matcher should use { 1:65535 }

            • @@ -17641,12 +17600,12 @@ enum detection_filter.track: track hits by source or destinatio
            • -int detection_filter.count: hits in interval before allowing the rule to fire { 1: } +int detection_filter.count: hits in interval before allowing the rule to fire { 1:max32 }

            • -int detection_filter.seconds: length of interval to count hits { 1: } +int detection_filter.seconds: length of interval to count hits { 1:max32 }

            @@ -17882,7 +17841,7 @@ interval fragoffset.~range: check if ip fragment offset is in g
            • -int gid.~: generator id { 1: } +int gid.~: generator id { 1:max32 }

            @@ -18608,7 +18567,7 @@ interval pkt_num.~range: check if packet number is in given ran
            • -int priority.~: relative severity level; 1 is highest priority { 1: } +int priority.~: relative severity level; 1 is highest priority { 1:max31 }

            @@ -18714,7 +18673,7 @@ string replace.~: byte code to replace with
            • -int rev.~: revision { 1: } +int rev.~: revision { 1:max32 }

            @@ -18728,7 +18687,7 @@ int rev.~: revision { 1: }
            • -int rpc.~app: application number +int rpc.~app: application number { 0:max32 }

            • @@ -18757,7 +18716,7 @@ string sd_pattern.~pattern: The pattern to search for
            • -int sd_pattern.threshold: number of matches before alerting { 1 } +int sd_pattern.threshold = 1: number of matches before alerting { 1:max32 }

            @@ -18889,7 +18848,7 @@ implied sha512.relative = false: offset from cursor instead of
            • -int sid.~: signature id { 1: } +int sid.~: signature id { 1:max32 }

            @@ -18929,7 +18888,7 @@ string sip_method.*method: sip method
            • -int sip_stat_code.*code: stat code { 1:999 } +int sip_stat_code.*code: status code { 1:999 }

            @@ -19142,17 +19101,17 @@ enum tag.~: log all packets in session or all packets to or fro
          • -int tag.packets: tag this many packets { 1: } +int tag.packets: tag this many packets { 1:max32 }

          • -int tag.seconds: tag for this many seconds { 1: } +int tag.seconds: tag for this many seconds { 1:max32 }

          • -int tag.bytes: tag for this many bytes { 1: } +int tag.bytes: tag for this many bytes { 1:max32 }

@@ -19283,7 +19242,7 @@ multi alert_csv.fields = timestamp pkt_num proto pkt_gen pkt_le
-int alert_csv.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_csv.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -19326,7 +19285,7 @@ bool alert_fast.packet = false: output packet dump with alert
-int alert_fast.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_fast.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -19345,7 +19304,7 @@ bool alert_full.file = false: output to alert_full.txt instead
-int alert_full.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_full.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -19369,7 +19328,7 @@ multi alert_json.fields = timestamp pkt_num proto pkt_gen pkt_l
-int alert_json.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_json.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -19393,12 +19352,12 @@ string alert_sfsocket.file: name of unix socket file
-int alert_sfsocket.rules[].gid = 1: rule generator ID { 1: }
+int alert_sfsocket.rules[].gid = 1: rule generator ID { 1:max32 }

-int alert_sfsocket.rules[].sid = 1: rule signature ID { 1: }
+int alert_sfsocket.rules[].sid = 1: rule signature ID { 1:max32 }

@@ -19471,12 +19430,12 @@ bool log_hext.raw = false: output all full packets if true, els
-int log_hext.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int log_hext.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

-int log_hext.width = 20: set line width (0 is unlimited) { 0: }
+int log_hext.width = 20: set line width (0 is unlimited) { 0:max32 }

@@ -19490,7 +19449,7 @@ int log_hext.width = 20: set line width (0 is unlimited) { 0: }
-int log_pcap.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int log_pcap.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -19509,7 +19468,7 @@ bool unified2.legacy_events = false: generate Snort 2.X style e
-int unified2.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int unified2.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -21027,12 +20986,6 @@ options into a Snort++ configuration file

---print-binding-order - Print sorting priority used when generating binder table
-
-
-
-

--print-differences Same as -d. output the differences, and only the differences, between the Snort and Snort++ configurations to the <out_file>

@@ -23366,12 +23319,12 @@ these libraries see the Getting Started section of the manual.

--m <umask> set umask = <umask> (0:)
+-m <umask> set the process file mode creation mask (0x000:0x1FF)

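The new `-m` bound above is expressed in hexadecimal; 0x1FF is octal 0777, the full nine permission bits. As a quick illustration of how a umask in that range behaves (generic POSIX semantics, not Snort-specific code):

```python
# The -m <umask> range shown above is hexadecimal: 0x000..0x1FF.
# 0x1FF equals octal 0777, i.e. all nine permission bits (rwxrwxrwx).
assert 0x1FF == 0o777 == 511

# A umask clears bits from the requested creation mode: with umask 0o022,
# a file requested with mode 0o666 is created as 0o644.
def apply_umask(mode: int, umask: int) -> int:
    return mode & ~umask

assert apply_umask(0o666, 0o022) == 0o644
```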
--n <count> stop after count packets (0:)
+-n <count> stop after count packets (0:max53)

@@ -23406,7 +23359,7 @@ these libraries see the Getting Started section of the manual.

--s <snap> (same as --snaplen); default is 1514 (68:65535)
+-s <snap> (same as --snaplen); default is 1518 (68:65535)

@@ -23441,11 +23394,6 @@ these libraries see the Getting Started section of the manual.

--W lists available interfaces
-
-
-
-

-X dump the raw packet data starting at the link layer

@@ -23461,7 +23409,7 @@ these libraries see the Getting Started section of the manual.

--z <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 (0:)
+-z <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 (0:max32)

@@ -23566,6 +23514,11 @@ these libraries see the Getting Started section of the manual.

+--help-limits print the int upper bounds denoted by max*
+
+
+
+

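`--help-limits` prints the values behind the max* names used throughout the updated ranges. The diff itself pins two of them (max31 replaces the literal 2147483647 in the binder zone bounds, and max32 replaces the old -1 ftp_client default with 4294967295); max53 and maxSZ are assumptions here, max53 presumably being the largest integer a double (and hence a Lua number) represents exactly, and maxSZ the platform SIZE_MAX. A sketch under those assumptions:

```python
# Reconstruction of the max* upper bounds referenced in the parameter
# ranges above. max31/max32 are corroborated by the literals this diff
# replaces; max53/maxSZ are assumed from their names, not from Snort source.
MAX31 = 2**31 - 1   # largest positive 32-bit signed int
MAX32 = 2**32 - 1   # largest 32-bit unsigned int
MAX53 = 2**53 - 1   # largest int exactly representable in an IEEE-754 double
# maxSZ would be SIZE_MAX for the build platform, e.g. 2**64 - 1 on LP64.

assert MAX31 == 2147483647
assert MAX32 == 4294967295
assert MAX53 == 9007199254740991
```

Capping counts at max53 fits a Lua-configured tool: Snort 3 configuration is Lua, whose numbers are doubles, so larger integers would lose precision.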
--help-module <module> output description of given module

@@ -23646,7 +23599,7 @@ these libraries see the Getting Started section of the manual.

---max-packet-threads <count> configure maximum number of packet threads (same as -z) (0:)
+--max-packet-threads <count> configure maximum number of packet threads (same as -z) (0:max32)

@@ -23671,11 +23624,6 @@ these libraries see the Getting Started section of the manual.

---pause-after-n <count> pause after count packets, to be used with single packet thread only (1:)
-
-
-
-

--parsing-follows-files parse relative paths from the perspective of the current configuration file

@@ -23701,7 +23649,7 @@ these libraries see the Getting Started section of the manual.

---pcap-loop <count> read all pcaps <count> times; 0 will read until Snort is terminated (-1:)
+--pcap-loop <count> read all pcaps <count> times; 0 will read until Snort is terminated (0:max32)

@@ -23751,7 +23699,7 @@ these libraries see the Getting Started section of the manual.

---rule-to-text output plain so rule header to stdout for text rule on stdin (16)
+--rule-to-text output plain so rule header to stdout for text rule on stdin (specify delimiter or [Snort_SO_Rule] will be used) (16)

@@ -23771,17 +23719,12 @@ these libraries see the Getting Started section of the manual.

---piglet enable piglet test harness mode
-
-
-
-

--show-plugins list module and plugin versions

---skip <n> skip 1st n packets (0:)
+--skip <n> skip 1st n packets (0:max53)

@@ -23816,11 +23759,6 @@ these libraries see the Getting Started section of the manual.

---catch-test comma separated list of cat unit test tags or all
-
-
-
-

--version show version number (same as -V)

@@ -23876,7 +23814,7 @@ these libraries see the Getting Started section of the manual.

---x2c output ASCII char for given hex (see also --c2x)
+--x2c output ASCII char for given hex (see also --c2x) (0x00:0xFF)

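The `--x2c`/`--c2x` pair now documents its (0x00:0xFF) byte range. A minimal model of the conversion semantics (an illustration, not Snort's implementation):

```python
# Illustration of the documented --x2c / --c2x semantics: --x2c maps a hex
# byte in 0x00..0xFF to its ASCII character, --c2x the reverse.
def x2c(hex_str: str) -> str:
    value = int(hex_str, 16)
    if not 0x00 <= value <= 0xFF:
        raise ValueError("outside the documented 0x00:0xFF range")
    return chr(value)

def c2x(ch: str) -> str:
    return f"0x{ord(ch):02X}"

assert x2c("0x41") == "A"
assert c2x("A") == "0x41"
```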
@@ -23901,7 +23839,7 @@ interval ack.~range: check if TCP ack value is value | min&
-int active.attempts = 0: number of TCP packets sent per response (with varying sequence numbers) { 0:20 }
+int active.attempts = 0: number of TCP packets sent per response (with varying sequence numbers) { 0:255 }

@@ -23916,7 +23854,7 @@ string active.dst_mac: use format 01:23:45:67:89:ab
-int active.max_responses = 0: maximum number of responses { 0: }
+int active.max_responses = 0: maximum number of responses { 0:255 }

@@ -23936,7 +23874,7 @@ bool alert_csv.file = false: output to alert_csv.txt instead of
-int alert_csv.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_csv.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -23956,7 +23894,7 @@ bool alert_fast.file = false: output to alert_fast.txt instead
-int alert_fast.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_fast.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -23971,7 +23909,7 @@ bool alert_full.file = false: output to alert_full.txt instead
-int alert_full.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_full.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -23986,7 +23924,7 @@ bool alert_json.file = false: output to alert_json.txt instead
-int alert_json.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int alert_json.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -24006,12 +23944,12 @@ bool alerts.default_rule_state = true: enable or disable ips ru
-int alerts.detection_filter_memcap = 1048576: set available bytes of memory for detection_filters { 0: }
+int alerts.detection_filter_memcap = 1048576: set available MB of memory for detection_filters { 0:max32 }

-int alerts.event_filter_memcap = 1048576: set available bytes of memory for event_filters { 0: }
+int alerts.event_filter_memcap = 1048576: set available MB of memory for event_filters { 0:max32 }

@@ -24021,12 +23959,12 @@ string alert_sfsocket.file: name of unix socket file
-int alert_sfsocket.rules[].gid = 1: rule generator ID { 1: }
+int alert_sfsocket.rules[].gid = 1: rule generator ID { 1:max32 }

-int alert_sfsocket.rules[].sid = 1: rule signature ID { 1: }
+int alert_sfsocket.rules[].sid = 1: rule signature ID { 1:max32 }

@@ -24041,7 +23979,7 @@ string alerts.order = pass drop alert log: change the order of
-int alerts.rate_filter_memcap = 1048576: set available bytes of memory for rate_filters { 0: }
+int alerts.rate_filter_memcap = 1048576: set available MB of memory for rate_filters { 0:max32 }

@@ -24081,17 +24019,17 @@ string appid.app_detector_dir: directory to load appid detector
-int appid.app_stats_period = 300: time period for collecting and logging appid statistics { 0: }
+int appid.app_stats_period = 300: time period for collecting and logging appid statistics { 0:max32 }

-int appid.app_stats_rollover_size = 20971520: max file size for appid stats before rolling over the log file { 0: }
+int appid.app_stats_rollover_size = 20971520: max file size for appid stats before rolling over the log file { 0:max32 }

-int appid.app_stats_rollover_time = 86400: max time period for collection appid stats before rolling over the log file { 0: }
+int appid.app_stats_rollover_time = 86400: max time period for collection appid stats before rolling over the log file { 0:max31 }

@@ -24106,12 +24044,7 @@ bool appid.dump_ports = false: enable dump of appid port inform
-int appid.first_decrypted_packet_debug = 0: the first packet of an already decrypted SSL flow (debug single session only) { 0: }
-
-
-
-

-int appid.instance_id = 0: instance id - ignored { 0: }
+int appid.instance_id = 0: instance id - ignored { 0:max32 }

@@ -24126,7 +24059,7 @@ bool appid.log_stats = false: enable logging of appid statistic
-int appid.memcap = 0: disregard - not implemented { 0: }
+int appid.memcap = 0: disregard - not implemented { 0:maxSZ }

@@ -24156,7 +24089,7 @@ bool appid.tp_appid_stats_enable: enable collection of stats an
-int appid.trace: mask for enabling debug traces in module
+int appid.trace: mask for enabling debug traces in module { 0:max53 }

@@ -24171,7 +24104,7 @@ mac arp_spoof.hosts[].mac: host mac address
-int asn1.absolute_offset: absolute offset from the beginning of the packet { 0: }
+int asn1.absolute_offset: absolute offset from the beginning of the packet { 0:65535 }

@@ -24186,7 +24119,7 @@ implied asn1.double_overflow: detects a double ASCII encoding t
-int asn1.oversize_length: compares ASN.1 type lengths with the supplied argument { 0: }
+int asn1.oversize_length: compares ASN.1 type lengths with the supplied argument { 0:max32 }

@@ -24196,17 +24129,17 @@ implied asn1.print: dump decode data to console; always true
-int asn1.relative_offset: relative offset from the cursor
+int asn1.relative_offset: relative offset from the cursor { -65535:65535 }

-int attribute_table.max_hosts = 1024: maximum number of hosts in attribute table { 32:207551 }
+int attribute_table.max_hosts = 1024: maximum number of hosts in attribute table { 32:max53 }

-int attribute_table.max_metadata_services = 8: maximum number of services in rule metadata { 1:256 }
+int attribute_table.max_metadata_services = 8: maximum number of services in rule { 1:255 }

@@ -24216,12 +24149,12 @@ int attribute_table.max_services_per_host = 8: maximum number o
-int base64_decode.bytes: number of base64 encoded bytes to decode { 1: }
+int base64_decode.bytes: number of base64 encoded bytes to decode { 1:max32 }

-int base64_decode.offset = 0: bytes past start of buffer to start decoding { 0: }
+int base64_decode.offset = 0: bytes past start of buffer to start decoding { 0:max32 }

@@ -24281,7 +24214,7 @@ bit_list binder[].when.dst_ports: list of destination ports { 6
-int binder[].when.dst_zone: destination zone { 0:2147483647 }
+int binder[].when.dst_zone: destination zone { 0:max31 }

@@ -24291,7 +24224,7 @@ bit_list binder[].when.ifaces: list of interface indices { 255
-int binder[].when.ips_policy_id = 0: unique ID for selection of this config by external logic { 0: }
+int binder[].when.ips_policy_id = 0: unique ID for selection of this config by external logic { 0:max32 }

@@ -24331,7 +24264,7 @@ bit_list binder[].when.src_ports: list of source ports { 65535
-int binder[].when.src_zone: source zone { 0:2147483647 }
+int binder[].when.src_zone: source zone { 0:max31 }

@@ -24616,7 +24549,7 @@ string classifications[].name: name used with classtype rule op
-int classifications[].priority = 1: default priority for class { 0: }
+int classifications[].priority = 1: default priority for class { 0:max32 }

@@ -24646,12 +24579,12 @@ string content.distance: var or number of bytes from cursor to
-int content.fast_pattern_length: maximum number of characters from this content the fast pattern matcher should use { 1: }
+int content.fast_pattern_length: maximum number of characters from this content the fast pattern matcher should use { 1:65535 }

-int content.fast_pattern_offset = 0: number of leading characters of this content the fast pattern matcher should exclude { 0: }
+int content.fast_pattern_offset = 0: number of leading characters of this content the fast pattern matcher should exclude { 0:65535 }

@@ -24686,7 +24619,7 @@ string daq.input_spec: input specification
-int daq.instances[].id: instance ID (required) { 0: }
+int daq.instances[].id: instance ID (required) { 0:max32 }

@@ -24731,7 +24664,7 @@ select data_log.key = http_request_header_event : name of the e
-int data_log.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int data_log.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:max32 }

@@ -24756,37 +24689,37 @@ string dce_opnum.~: match given dcerpc operation number, range
-bool dce_smb.disable_defrag = false: Disable DCE/RPC defragmentation
+bool dce_smb.disable_defrag = false: disable DCE/RPC defragmentation

-int dce_smb.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 }
+int dce_smb.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

-enum dce_smb.policy = WinXP: Target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }
+enum dce_smb.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }

-int dce_smb.reassemble_threshold = 0: Minimum bytes received before performing reassembly { 0:65535 }
+int dce_smb.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 }

-int dce_smb.smb_file_depth = 16384: SMB file depth for file data { -1: }
+int dce_smb.smb_file_depth = 16384: SMB file depth for file data { -1:32767 }

-enum dce_smb.smb_file_inspection = off: SMB file inspection { off | on | only }
+enum dce_smb.smb_file_inspection = off: SMB file inspection { off | on | only }

-enum dce_smb.smb_fingerprint_policy = none: Target based SMB policy to use { none | client | server | both }
+enum dce_smb.smb_fingerprint_policy = none: target based SMB policy to use { none | client | server | both }

@@ -24801,67 +24734,67 @@ bool dce_smb.smb_legacy_mode = false: inspect only SMBv1
-int dce_smb.smb_max_chain = 3: SMB max chain size { 0:255 }
+int dce_smb.smb_max_chain = 3: SMB max chain size { 0:255 }

-int dce_smb.smb_max_compound = 3: SMB max compound size { 0:255 }
+int dce_smb.smb_max_compound = 3: SMB max compound size { 0:255 }

-int dce_smb.trace: mask for enabling debug traces in module
+int dce_smb.trace: mask for enabling debug traces in module { 0:max53 }

-multi dce_smb.valid_smb_versions = all: Valid SMB versions { v1 | v2 | all }
+multi dce_smb.valid_smb_versions = all: valid SMB versions { v1 | v2 | all }

-bool dce_tcp.disable_defrag = false: Disable DCE/RPC defragmentation
+bool dce_tcp.disable_defrag = false: disable DCE/RPC defragmentation

-int dce_tcp.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 }
+int dce_tcp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

-enum dce_tcp.policy = WinXP: Target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }
+enum dce_tcp.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 }

-int dce_tcp.reassemble_threshold = 0: Minimum bytes received before performing reassembly { 0:65535 }
+int dce_tcp.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 }

-bool dce_udp.disable_defrag = false: Disable DCE/RPC defragmentation
+bool dce_udp.disable_defrag = false: disable DCE/RPC defragmentation

-int dce_udp.max_frag_len = 65535: Maximum fragment size for defragmentation { 1514:65535 }
+int dce_udp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 }

-int dce_udp.trace: mask for enabling debug traces in module
+int dce_udp.trace: mask for enabling debug traces in module { 0:max53 }

-int decode.trace: mask for enabling debug traces in module
+int decode.trace: mask for enabling debug traces in module { 0:max53 }

-int detection.asn1 = 256: maximum decode nodes { 1: }
+int detection.asn1 = 0: maximum decode nodes { 0:65535 }

@@ -24871,12 +24804,12 @@ bool detection.enable_address_anomaly_checks = false: enable ch
-int detection_filter.count: hits in interval before allowing the rule to fire { 1: }
+int detection_filter.count: hits in interval before allowing the rule to fire { 1:max32 }

-int detection_filter.seconds: length of interval to count hits { 1: }
+int detection_filter.seconds: length of interval to count hits { 1:max32 }

@@ -24886,12 +24819,12 @@ enum detection_filter.track: track hits by source or destinatio
-int detection.offload_limit = 99999: minimum sizeof PDU to offload fast pattern search (defaults to disabled) { 0: }
+int detection.offload_limit = 99999: minimum sizeof PDU to offload fast pattern search (defaults to disabled) { 0:max32 }

-int detection.offload_threads = 0: maximum number of simultaneous offloads (defaults to disabled) { 0: }
+int detection.offload_threads = 0: maximum number of simultaneous offloads (defaults to disabled) { 0:max32 }

@@ -24901,17 +24834,17 @@ bool detection.pcre_enable = true: disable pcre pattern matchin
-int detection.pcre_match_limit = 1500: limit pcre backtracking, -1 = max, 0 = off { -1:1000000 }
+int detection.pcre_match_limit = 1500: limit pcre backtracking, 0 = off { 0:max32 }

-int detection.pcre_match_limit_recursion = 1500: limit pcre stack consumption, -1 = max, 0 = off { -1:10000 }
+int detection.pcre_match_limit_recursion = 1500: limit pcre stack consumption, 0 = off { 0:max32 }

-int detection.trace: mask for enabling debug traces in module
+int detection.trace: mask for enabling debug traces in module { 0:max53 }

@@ -24971,12 +24904,12 @@ bool esp.decode_esp = false: enable for inspection of esp traff
-int event_filter[].count = 0: number of events in interval before tripping; -1 to disable { -1: }
+int event_filter[].count = 0: number of events in interval before tripping; -1 to disable { -1:max31 }

-int event_filter[].gid = 1: rule generator ID { 0: }
+int event_filter[].gid = 1: rule generator ID { 0:max32 }

@@ -24986,12 +24919,12 @@ string event_filter[].ip: restrict filter to these addresses ac
-int event_filter[].seconds = 0: count interval { 0: }
+int event_filter[].seconds = 0: count interval { 0:max32 }

-int event_filter[].sid = 1: rule signature ID { 0: }
+int event_filter[].sid = 1: rule signature ID { 0:max32 }

@@ -25006,12 +24939,12 @@ enum event_filter[].type: 1st count events | every count events
-int event_queue.log = 3: maximum events to log { 1: }
+int event_queue.log = 3: maximum events to log { 1:max32 }

-int event_queue.max_queue = 8: maximum events to queue { 1: }
+int event_queue.max_queue = 8: maximum events to queue { 1:max32 }

@@ -25046,7 +24979,7 @@ string file_connector.name: channel name
-int file_id.block_timeout = 86400: stop blocking after this many seconds { 0: }
+int file_id.block_timeout = 86400: stop blocking after this many seconds { 0:max31 }

@@ -25056,22 +24989,22 @@ bool file_id.block_timeout_lookup = false: block if lookup time
-int file_id.capture_block_size = 32768: file capture block size in bytes { 8: }
+int file_id.capture_block_size = 32768: file capture block size in bytes { 8:max53 }

-int file_id.capture_max_size = 1048576: stop file capture beyond this point { 0: }
+int file_id.capture_max_size = 1048576: stop file capture beyond this point { 0:max53 }

-int file_id.capture_memcap = 100: memcap for file capture in megabytes { 0: }
+int file_id.capture_memcap = 100: memcap for file capture in megabytes { 0:max53 }

-int file_id.capture_min_size = 0: stop file capture if file size less than this { 0: }
+int file_id.capture_min_size = 0: stop file capture if file size less than this { 0:max53 }

@@ -25111,7 +25044,7 @@ enum file_id.file_policy[].use.verdict = unknown: what to do wi
-int file_id.file_policy[].when.file_type_id = 0: unique ID for file type in file magic rule { 0: }
+int file_id.file_policy[].when.file_type_id = 0: unique ID for file type in file magic rule { 0:max32 }

@@ -25131,7 +25064,7 @@ string file_id.file_rules[].group: comma separated list of grou
-int file_id.file_rules[].id = 0: file type id { 0: }
+int file_id.file_rules[].id = 0: file type id { 0:max32 }

@@ -25141,7 +25074,7 @@ string file_id.file_rules[].magic[].content: file magic content
-int file_id.file_rules[].magic[].offset = 0: file magic offset { 0: }
+int file_id.file_rules[].magic[].offset = 0: file magic offset { 0:max32 }

@@ -25151,7 +25084,7 @@ string file_id.file_rules[].msg: information about the file typ
-int file_id.file_rules[].rev = 0: rule revision { 0: }
+int file_id.file_rules[].rev = 0: rule revision { 0:max32 }

@@ -25166,22 +25099,22 @@ string file_id.file_rules[].version: file type version
-int file_id.lookup_timeout = 2: give up on lookup after this many seconds { 0: }
+int file_id.lookup_timeout = 2: give up on lookup after this many seconds { 0:max31 }

-int file_id.max_files_cached = 65536: maximal number of files cached in memory { 8: }
+int file_id.max_files_cached = 65536: maximal number of files cached in memory { 8:max53 }

-int file_id.show_data_depth = 100: print this many octets { 0: }
+int file_id.show_data_depth = 100: print this many octets { 0:max53 }

-int file_id.signature_depth = 10485760: stop signature at this point { 0: }
+int file_id.signature_depth = 10485760: stop signature at this point { 0:max53 }

@@ -25201,12 +25134,12 @@ bool file_id.trace_type = false: enable runtime dump of type in
-int file_id.type_depth = 1460: stop type ID at this point { 0: }
+int file_id.type_depth = 1460: stop type ID at this point { 0:max53 }

-int file_id.verdict_delay = 0: number of queries to return final verdict { 0: }
+int file_id.verdict_delay = 0: number of queries to return final verdict { 0:max53 }

@@ -25326,12 +25259,12 @@ addr ftp_client.bounce_to[].address = 1.0.0.0/32: allowed IP ad
-port ftp_client.bounce_to[].last_port: optional allowed range from port to last_port inclusive { 0: }
+port ftp_client.bounce_to[].last_port: optional allowed range from port to last_port inclusive

-port ftp_client.bounce_to[].port = 20: allowed port { 1: }
+port ftp_client.bounce_to[].port = 20: allowed port

@@ -25341,7 +25274,7 @@ bool ftp_client.ignore_telnet_erase_cmds = false: ignore erase
-int ftp_client.max_resp_len = -1: maximum FTP response accepted by client { -1: }
+int ftp_client.max_resp_len = 4294967295: maximum FTP response accepted by client { 0:max32 }

@@ -25371,7 +25304,7 @@ string ftp_server.cmd_validity[].format: format specification
-int ftp_server.cmd_validity[].length = 0: specify non-default maximum for command { 0: }
+int ftp_server.cmd_validity[].length = 0: specify non-default maximum for command { 0:max32 }

@@ -25391,7 +25324,7 @@ string ftp_server.data_xfer_cmds: check the formatting of the g
-int ftp_server.def_max_param_len = 100: default maximum length of commands handled by server; 0 is unlimited { 1: }
+int ftp_server.def_max_param_len = 100: default maximum length of commands handled by server; 0 is unlimited { 1:max32 }

@@ -25401,7 +25334,7 @@ string ftp_server.directory_cmds[].dir_cmd: directory command
-int ftp_server.directory_cmds[].rsp_code = 200: expected successful response code for command { 200: }
+int ftp_server.directory_cmds[].rsp_code = 200: expected successful response code for command { 200:max32 }

@@ -25456,7 +25389,7 @@ bool ftp_server.telnet_cmds = false: detect Telnet escape seque
-int gid.~: generator id { 1: }
+int gid.~: generator id { 1:max32 }

@@ -25491,7 +25424,7 @@ int gtp_inspect[].messages[].type = 0: message type code { 0:25
-int gtp_inspect.trace: mask for enabling debug traces in module
+int gtp_inspect.trace: mask for enabling debug traces in module { 0:max53 }

@@ -25536,7 +25469,7 @@ bit_list high_availability.ports: side channel message port lis
-int host_cache[].size: size of host cache
+int host_cache[].size: size of host cache { 1:max32 }

@@ -25711,32 +25644,12 @@ bool http_inspect.plus_to_space = true: replace + with <sp&g
-int http_inspect.print_amount = 1200: number of characters to print from a Field { 1:1000000 }
-
-
-
-

-bool http_inspect.print_hex = false: nonprinting characters printed in [HH] format instead of using an asterisk
+int http_inspect.request_depth = -1: maximum request message body bytes to examine (-1 no limit) { -1:max53 }

-int http_inspect.request_depth = -1: maximum request message body bytes to examine (-1 no limit) { -1: }
-
-
-
-

-int http_inspect.response_depth = -1: maximum response message body bytes to examine (-1 no limit) { -1: }
-
-
-
-

-bool http_inspect.show_pegs = true: display peg counts with test output
-
-
-
-

-bool http_inspect.show_scan = false: display scanned segments
+int http_inspect.response_depth = -1: maximum response message body bytes to examine (-1 no limit) { -1:max53 }

@@ -25746,16 +25659,6 @@ bool http_inspect.simplify_path = true: reduce URI directory pa
-bool http_inspect.test_input = false: read HTTP messages from text file
-
-
-
-

-bool http_inspect.test_output = false: print out HTTP section data
-
-
-
-

bool http_inspect.unzip = true: decompress gzip and deflate message bodies

@@ -26111,7 +26014,7 @@ bool latency.packet.fastpath = false: fastpath expensive packet
-int latency.packet.max_time = 500: set timeout for packet latency thresholding (usec) { 0: }
+int latency.packet.max_time = 500: set timeout for packet latency thresholding (usec) { 0:max53 }

@@ -26121,12 +26024,12 @@ enum latency.rule.action = none: event action for rule latency
-int latency.rule.max_suspend_time = 30000: set max time for suspending a rule (ms, 0 means permanently disable rule) { 0: }
+int latency.rule.max_suspend_time = 30000: set max time for suspending a rule (ms, 0 means permanently disable rule) { 0:max32 }

-int latency.rule.max_time = 500: set timeout for rule evaluation (usec) { 0: }
+int latency.rule.max_time = 500: set timeout for rule evaluation (usec) { 0:max53 }

@@ -26136,7 +26039,7 @@ bool latency.rule.suspend = false: temporarily suspend expensiv
-int latency.rule.suspend_threshold = 5: set threshold for number of timeouts before suspending a rule { 1: }
+int latency.rule.suspend_threshold = 5: set threshold for number of timeouts before suspending a rule { 1:max32 }

@@ -26156,7 +26059,7 @@ bool log_hext.file = false: output to log_hext.txt instead of s
-int log_hext.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int log_hext.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -26166,12 +26069,12 @@ bool log_hext.raw = false: output all full packets if true, els
-int log_hext.width = 20: set line width (0 is unlimited) { 0: }
+int log_hext.width = 20: set line width (0 is unlimited) { 0:max32 }

-int log_pcap.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: }
+int log_pcap.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

@@ -26196,7 +26099,7 @@ implied md5.relative = false: offset from cursor instead of sta
-int memory.cap = 0: set the per-packet-thread cap on memory (bytes, 0 to disable) { 0: }
+int memory.cap = 0: set the per-packet-thread cap on memory (bytes, 0 to disable) { 0:maxSZ }

@@ -26206,7 +26109,7 @@ bool memory.soft = false: always succeed in allocating memory,
-int memory.threshold = 0: set the per-packet-thread threshold for preemptive cleanup actions (percent, 0 to disable) { 0: }
+int memory.threshold = 0: set the per-packet-thread threshold for preemptive cleanup actions (percent, 0 to disable) { 0:100 }

@@ -26236,7 +26139,7 @@ bool mpls.enable_mpls_overlapping_ip = false: enable if private
-int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1: }
+int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1:255 }

@@ -26476,7 +26379,7 @@ bool output.show_year = false: include year in timestamp in the
-int output.tagged_packet_limit = 256: maximum number of packets tagged for non-packet metrics { 0: }
+int output.tagged_packet_limit = 256: maximum number of packets tagged for non-packet metrics { 0:max32 }

@@ -26486,7 +26389,7 @@ bool output.verbose = false: be verbose (same as -v)
-bool output.wide_hex_dump = true: output 20 bytes per lines instead of 16 when dumping buffers
+bool output.wide_hex_dump = false: output 20 bytes per lines instead of 16 when dumping buffers

@@ -26511,12 +26414,12 @@ string packets.bpf_file: file with BPF to select traffic for Sn
-int packets.limit = 0: maximum number of packets to process before stopping (0 is unlimited) { 0: }
+int packets.limit = 0: maximum number of packets to process before stopping (0 is unlimited) { 0:max53 }

-int packets.skip = 0: number of packets to skip before before processing { 0: }
+int packets.skip = 0: number of packets to skip before before processing { 0:max53 }

@@ -26541,12 +26444,12 @@ string pcre.~re: Snort regular expression
-bool perf_monitor.base = true: enable base statistics { nullptr }
+bool perf_monitor.base = true: enable base statistics

-bool perf_monitor.cpu = false: enable cpu statistics { nullptr }
+bool perf_monitor.cpu = false: enable cpu statistics

@@ -26561,7 +26464,7 @@ bool perf_monitor.flow_ip = false: enable statistics on host pa
-int perf_monitor.flow_ip_memcap = 52428800: maximum memory in bytes for flow tracking { 8200: }
+int perf_monitor.flow_ip_memcap = 52428800: maximum memory in bytes for flow tracking { 8200:maxSZ }

@@ -26576,7 +26479,7 @@ enum perf_monitor.format = csv: output format for stats { csv |
-int perf_monitor.max_file_size = 1073741824: files will be rolled over if they exceed this size { 4096: }
+int perf_monitor.max_file_size = 1073741824: files will be rolled over if they exceed this size { 4096:max53 }

@@ -26596,12 +26499,12 @@ enum perf_monitor.output = file: output location for stats { fi
-int perf_monitor.packets = 10000: minimum packets to report { 0: }
+int perf_monitor.packets = 10000: minimum packets to report { 0:max32 }

-int perf_monitor.seconds = 60: report interval { 1: }
+int perf_monitor.seconds = 60: report interval { 1:max32 }

@@ -26641,27 +26544,27 @@ bool port_scan.alert_all = false: alert on all events over thre
-int port_scan.icmp_sweep.nets = 25: number of times address changed from prior attempt { 0: }
+int port_scan.icmp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

-int port_scan.icmp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: }
+int port_scan.icmp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

-int port_scan.icmp_sweep.rejects = 15: scan attempts with negative response { 0: }
+int port_scan.icmp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

-int port_scan.icmp_sweep.scans = 100: scan attempts { 0: }
+int port_scan.icmp_sweep.scans = 100: scan attempts { 0:65535 }

-int port_scan.icmp_window = 0: detection interval for all ICMP scans { 0: }
+int port_scan.icmp_window = 0: detection interval for all ICMP scans { 0:max32 }

@@ -26681,92 +26584,92 @@ bool port_scan.include_midstream = false: list of CIDRs with op
-int port_scan.ip_decoy.nets = 25: number of times address changed from prior attempt { 0: }
+int port_scan.ip_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

-int port_scan.ip_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: }
+int port_scan.ip_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

-int port_scan.ip_decoy.rejects = 15: scan attempts with negative response { 0: }
+int port_scan.ip_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

-int port_scan.ip_decoy.scans = 100: scan attempts { 0: }
+int port_scan.ip_decoy.scans = 100: scan attempts { 0:65535 }

-int port_scan.ip_dist.nets = 25: number of times address changed from prior attempt { 0: }
+int port_scan.ip_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

-int port_scan.ip_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: }
+int port_scan.ip_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

-int port_scan.ip_dist.rejects = 15: scan attempts with negative response { 0: }
+int port_scan.ip_dist.rejects = 15: scan attempts with negative response { 0:65535 }

-int port_scan.ip_dist.scans = 100: scan attempts { 0: }
+int port_scan.ip_dist.scans = 100: scan attempts { 0:65535 }

-int port_scan.ip_proto.nets = 25: number of times address changed from prior attempt { 0: }
+int port_scan.ip_proto.nets = 25: number of times address changed from prior attempt { 0:65535 }

-int port_scan.ip_proto.ports = 25: number of times port (or proto) changed from prior attempt { 0: }
+int port_scan.ip_proto.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

-int port_scan.ip_proto.rejects = 15: scan attempts with negative response { 0: }
+int port_scan.ip_proto.rejects = 15: scan attempts with negative response { 0:65535 }

-int port_scan.ip_proto.scans = 100: scan attempts { 0: }
+int port_scan.ip_proto.scans = 100: scan attempts { 0:65535 }

-int port_scan.ip_sweep.nets = 25: number of times address changed from prior attempt { 0: }
+int port_scan.ip_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

-int port_scan.ip_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: }
+int port_scan.ip_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.ip_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.ip_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.ip_sweep.scans = 100: scan attempts { 0: } +int port_scan.ip_sweep.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.ip_window = 0: detection interval for all IP scans { 0: } +int port_scan.ip_window = 0: detection interval for all IP scans { 0:max32 }

  • -int port_scan.memcap = 1048576: maximum tracker memory in bytes { 1: } +int port_scan.memcap = 1048576: maximum tracker memory in bytes { 1:maxSZ }

  • @@ -26781,172 +26684,172 @@ multi port_scan.scan_types = all: choose type of scans to look
  • -int port_scan.tcp_decoy.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_decoy.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.tcp_decoy.scans = 100: scan attempts { 0: } +int port_scan.tcp_decoy.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.tcp_dist.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_dist.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_dist.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.tcp_dist.scans = 100: scan attempts { 0: } +int port_scan.tcp_dist.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.tcp_ports.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_ports.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_ports.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_ports.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.tcp_ports.scans = 100: scan attempts { 0: } +int port_scan.tcp_ports.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.tcp_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.tcp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.tcp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.tcp_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.tcp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.tcp_sweep.scans = 100: scan attempts { 0: } +int port_scan.tcp_sweep.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.tcp_window = 0: detection interval for all TCP scans { 0: } +int port_scan.tcp_window = 0: detection interval for all TCP scans { 0:max32 }

  • -int port_scan.udp_decoy.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_decoy.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.udp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_decoy.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.udp_decoy.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_decoy.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.udp_decoy.scans = 100: scan attempts { 0: } +int port_scan.udp_decoy.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.udp_dist.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_dist.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.udp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_dist.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.udp_dist.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_dist.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.udp_dist.scans = 100: scan attempts { 0: } +int port_scan.udp_dist.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.udp_ports.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_ports.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.udp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_ports.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.udp_ports.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_ports.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.udp_ports.scans = 100: scan attempts { 0: } +int port_scan.udp_ports.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.udp_sweep.nets = 25: number of times address changed from prior attempt { 0: } +int port_scan.udp_sweep.nets = 25: number of times address changed from prior attempt { 0:65535 }

  • -int port_scan.udp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0: } +int port_scan.udp_sweep.ports = 25: number of times port (or proto) changed from prior attempt { 0:65535 }

  • -int port_scan.udp_sweep.rejects = 15: scan attempts with negative response { 0: } +int port_scan.udp_sweep.rejects = 15: scan attempts with negative response { 0:65535 }

  • -int port_scan.udp_sweep.scans = 100: scan attempts { 0: } +int port_scan.udp_sweep.scans = 100: scan attempts { 0:65535 }

  • -int port_scan.udp_window = 0: detection interval for all UDP scans { 0: } +int port_scan.udp_window = 0: detection interval for all UDP scans { 0:max32 }

  • @@ -26956,7 +26859,7 @@ string port_scan.watch_ip: list of CIDRs with optional ports to
  • -int priority.~: relative severity level; 1 is highest priority { 1: } +int priority.~: relative severity level; 1 is highest priority { 1:max31 }

  • @@ -26991,12 +26894,12 @@ string process.threads[].cpuset: pin the associated thread to t
  • -int process.threads[].thread = 0: set cpu affinity for the <cur_thread_num> thread that runs { 0: } +int process.threads[].thread = 0: set cpu affinity for the <cur_thread_num> thread that runs { 0:65535 }

  • -string process.umask: set process umask (same as -m) +int process.umask: set process umask (same as -m) { 0x000:0x1FF }
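
Treating umask as an int bounded by { 0x000:0x1FF } makes sense because 0x1FF spans exactly the nine POSIX permission bits; a minimal sketch (illustrative, not taken from Snort's source):

```python
import os

# 0x1FF == 0o777 == 511: the nine rwx permission bits for user, group,
# and other, so any valid file mode creation mask fits in { 0x000:0x1FF }.
assert 0x1FF == 0o777 == 511

# e.g. a mask of 0o022 clears the group/other write bits on new files
previous = os.umask(0o022)
os.umask(previous)  # restore the prior mask
```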

  • @@ -27006,12 +26909,12 @@ bool process.utc = false: use UTC instead of local time for tim
  • -int profiler.memory.count = 0: limit results to count items per level (0 = no limit) { 0: } +int profiler.memory.count = 0: limit results to count items per level (0 = no limit) { 0:max32 }

  • -int profiler.memory.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1: } +int profiler.memory.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1:255 }

  • @@ -27026,12 +26929,12 @@ enum profiler.memory.sort = total_used: sort by given field { n
  • -int profiler.modules.count = 0: limit results to count items per level (0 = no limit) { 0: } +int profiler.modules.count = 0: limit results to count items per level (0 = no limit) { 0:max32 }

  • -int profiler.modules.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1: } +int profiler.modules.max_depth = -1: limit depth to max_depth (-1 = no limit) { -1:255 }

  • @@ -27046,7 +26949,7 @@ enum profiler.modules.sort = total_time: sort by given field {
  • -int profiler.rules.count = 0: print results to given level (0 = all) { 0: } +int profiler.rules.count = 0: print results to given level (0 = all) { 0:max32 }

  • @@ -27066,12 +26969,12 @@ string rate_filter[].apply_to: restrict filter to these address
  • -int rate_filter[].count = 1: number of events in interval before tripping { 0: } +int rate_filter[].count = 1: number of events in interval before tripping { 0:max32 }

  • -int rate_filter[].gid = 1: rule generator ID { 0: } +int rate_filter[].gid = 1: rule generator ID { 0:max32 }

  • @@ -27081,17 +26984,17 @@ enum rate_filter[].new_action = alert: take this action on futu
  • -int rate_filter[].seconds = 1: count interval { 0: } +int rate_filter[].seconds = 1: count interval { 0:max32 }

  • -int rate_filter[].sid = 1: rule signature ID { 0: } +int rate_filter[].sid = 1: rule signature ID { 0:max32 }

  • -int rate_filter[].timeout = 1: count interval { 0: } +int rate_filter[].timeout = 1: count interval { 0:max32 }

  • @@ -27166,7 +27069,7 @@ bool reg_test.test_daq_retry = true: test daq packet retry feat
  • -enum reject.control: send ICMP unreachable(s) { network|host|port|all } +enum reject.control: send ICMP unreachable(s) { network|host|port|forward|all }

  • @@ -27226,7 +27129,7 @@ enum reputation.white = unblack: specify the meaning of whiteli
  • -int rev.~: revision { 1: } +int rev.~: revision { 1:max32 }

  • @@ -27236,7 +27139,7 @@ bool rewrite.disable_replace = false: disable replace of packet
  • -int rpc.~app: application number +int rpc.~app: application number { 0:max32 }

  • @@ -27256,12 +27159,12 @@ bool rule_state[].enable = true: enable or disable rule in all
  • -int rule_state[].gid = 0: rule generator ID { 0: } +int rule_state[].gid = 0: rule generator ID { 0:max32 }

  • -int rule_state[].sid = 0: rule signature ID { 0: } +int rule_state[].sid = 0: rule signature ID { 0:max32 }

  • @@ -27271,12 +27174,12 @@ string sd_pattern.~pattern: The pattern to search for
  • -int sd_pattern.threshold: number of matches before alerting { 1 } +int sd_pattern.threshold = 1: number of matches before alerting { 1:max32 }

  • -int search_engine.bleedover_port_limit = 1024: maximum ports in rule before demotion to any-any port group { 1: } +int search_engine.bleedover_port_limit = 1024: maximum ports in rule before demotion to any-any port group { 1:max32 }

  • @@ -27321,7 +27224,7 @@ bool search_engine.enable_single_rule_group = false: put all ru
  • -int search_engine.max_pattern_len = 0: truncate patterns when compiling into state machine (0 means no maximum) { 0: } +int search_engine.max_pattern_len = 0: truncate patterns when compiling into state machine (0 means no maximum) { 0:max32 }

  • @@ -27421,7 +27324,7 @@ bit_list side_channel.ports: side channel message port list { 6
  • -int sid.~: signature id { 1: } +int sid.~: signature id { 1:max32 }

  • @@ -27446,7 +27349,7 @@ int sip.max_content_len = 1024: maximum content length of the m
  • -int sip.max_dialogs = 4: maximum number of dialogs within one stream session { 1:4194303 } +int sip.max_dialogs = 4: maximum number of dialogs within one stream session { 1:max32 }

  • @@ -27486,7 +27389,7 @@ string sip.methods = invite cancel ack bye register options: l
  • -int sip_stat_code.*code: stat code { 1:999 } +int sip_stat_code.*code: status code { 1:999 }

  • @@ -27496,7 +27399,7 @@ string smtp.alt_max_command_line_len[].command: command string
  • -int smtp.alt_max_command_line_len[].length = 0: specify non-default maximum for command { 0: } +int smtp.alt_max_command_line_len[].length = 0: specify non-default maximum for command { 0:max32 }

  • @@ -27641,11 +27544,6 @@ string snort.--c2x: output hex for given char (see also --x2c)
  • -string snort.--catch-test: comma separated list of cat unit test tags or all -

    -
  • -
  • -

    string snort.-c: <conf> use this configuration

  • @@ -27766,6 +27664,11 @@ string snort.--help-counts: [<module prefix>] output matc
  • +implied snort.--help-limits: print the int upper bounds denoted by max* +

    +
  • +
  • +

    implied snort.--help: list command line options
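
The new max* bounds can be read as the natural limits of the underlying representations; a sketch of the assumed derivation (these meanings are conventional, not quoted from Snort's source — `snort --help-limits` prints the authoritative values):

```python
# Assumed derivation of the max* upper bounds used in parameter ranges:
max31 = 2**31 - 1  # 2147483647: largest signed 32-bit value
max32 = 2**32 - 1  # 4294967295: largest unsigned 32-bit value
max53 = 2**53      # 9007199254740992: end of the contiguous integer range
                   # exactly representable by an IEEE-754 double (Lua numbers)

print(max31, max32, max53)
```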

  • @@ -27881,7 +27784,7 @@ implied snort.--markup: output help in asciidoc compatible form
  • -int snort.--max-packet-threads = 1: <count> configure maximum number of packet threads (same as -z) { 0: } +int snort.--max-packet-threads = 1: <count> configure maximum number of packet threads (same as -z) { 0:max32 }

  • @@ -27896,12 +27799,12 @@ implied snort.-M: log messages to syslog (not alerts)
  • -int snort.-m: <umask> set umask = <umask> { 0: } +int snort.-m: <umask> set the process file mode creation mask { 0x000:0x1FF }

  • -int snort.-n: <count> stop after count packets { 0: } +int snort.-n: <count> stop after count packets { 0:max53 }

  • @@ -27931,11 +27834,6 @@ implied snort.--parsing-follows-files: parse relative paths fro
  • -int snort.--pause-after-n: <count> pause after count packets, to be used with single packet thread only { 1: } -

    -
  • -
  • -

    implied snort.--pause: wait for resume/quit command before processing packets/terminating

  • @@ -27961,7 +27859,7 @@ string snort.--pcap-list: <list> a space separated list o
  • -int snort.--pcap-loop: <count> read all pcaps <count> times; 0 will read until Snort is terminated { -1: } +int snort.--pcap-loop: <count> read all pcaps <count> times; 0 will read until Snort is terminated { 0:max32 }

  • @@ -27986,11 +27884,6 @@ implied snort.--pedantic: warnings are fatal
  • -implied snort.--piglet: enable piglet test harness mode -

    -
  • -
  • -

    string snort.--plugin-path: <path> where to find plugins

  • @@ -28036,7 +27929,7 @@ implied snort.--rule-to-hex: output so rule header to stdout fo
  • -string snort.--rule-to-text = [SnortFoo]: output plain so rule header to stdout for text rule on stdin { 16 } +string snort.--rule-to-text: output plain so rule header to stdout for text rule on stdin (specify delimiter or [Snort_SO_Rule] will be used) { 16 }

  • @@ -28046,7 +27939,7 @@ string snort.--run-prefix: <pfx> prepend this to each out
  • -int snort.-s = 1514: <snap> (same as --snaplen); default is 1514 { 68:65535 } +int snort.-s = 1518: <snap> (same as --snaplen); default is 1518 { 68:65535 }

  • @@ -28066,12 +27959,12 @@ implied snort.--show-plugins: list module and plugin versions
  • -int snort.--skip: <n> skip 1st n packets { 0: } +int snort.--skip: <n> skip 1st n packets { 0:max53 }

  • -int snort.--snaplen = 1514: <snap> set snaplen of packet (same as -s) { 68:65535 } +int snort.--snaplen = 1518: <snap> set snaplen of packet (same as -s) { 68:65535 }
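
The new 1518-byte default presumably covers a full-MTU Ethernet frame carrying one 802.1Q tag; the breakdown below is an illustrative assumption, not a statement from the commit:

```python
ETH_HEADER = 14  # destination MAC + source MAC + EtherType
DOT1Q_TAG = 4    # one 802.1Q VLAN tag
IP_MTU = 1500    # standard Ethernet payload
assert ETH_HEADER + DOT1Q_TAG + IP_MTU == 1518
```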

  • @@ -28096,7 +27989,7 @@ string snort.-t: <dir> chroots process to <dir> aft
  • -int snort.trace: mask for enabling debug traces in module +int snort.trace: mask for enabling debug traces in module { 0:max53 }

  • @@ -28201,12 +28094,7 @@ implied snort.--warn-vars: warn about variable definition and u
  • -implied snort.-W: lists available interfaces -

    -
  • -
  • -

    -int snort.--x2c: output ASCII char for given hex (see also --c2x) +int snort.--x2c: output ASCII char for given hex (see also --c2x) { 0x00:0xFF }
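
The added { 0x00:0xFF } bound restricts --x2c input to a single byte; the --c2x/--x2c pair amounts to a char/hex conversion, sketched here in Python (the exact CLI output format is assumed):

```python
# hex -> ASCII char, what --x2c 0x41 is expected to report (assumed)
assert chr(0x41) == "A"
# ASCII char -> hex, the --c2x direction
assert format(ord("A"), "02X") == "41"
```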

  • @@ -28231,7 +28119,7 @@ implied snort.-y: include year in timestamp in the alert and lo
  • -int snort.-z = 1: <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 { 0: } +int snort.-z = 1: <count> maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported by the system; default is 1 { 0:max32 }

  • @@ -28371,17 +28259,17 @@ implied ssl_version.tls1.2: check for tls1.2
  • -int stream.file_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.file_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.file_cache.max_sessions = 128: maximum simultaneous sessions tracked before pruning { 2: } +int stream.file_cache.max_sessions = 128: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.file_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.file_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • @@ -28391,42 +28279,42 @@ bool stream_file.upload = false: indicate file transfer directi
  • -int stream.footprint = 0: use zero for production, non-zero for testing at given size (for TCP and user) { 0: } +int stream.footprint = 0: use zero for production, non-zero for testing at given size (for TCP and user) { 0:max32 }

  • -int stream.icmp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.icmp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.icmp_cache.max_sessions = 65536: maximum simultaneous sessions tracked before pruning { 2: } +int stream.icmp_cache.max_sessions = 65536: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.icmp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.icmp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • -int stream_icmp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_icmp.session_timeout = 30: session tracking timeout { 1:max31 }

  • -int stream.ip_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.ip_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.ip_cache.max_sessions = 16384: maximum simultaneous sessions tracked before pruning { 2: } +int stream.ip_cache.max_sessions = 16384: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.ip_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.ip_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • @@ -28436,17 +28324,17 @@ bool stream.ip_frags_only = false: don't process non-frag
  • -int stream_ip.max_frags = 8192: maximum number of simultaneous fragments being tracked { 1: } +int stream_ip.max_frags = 8192: maximum number of simultaneous fragments being tracked { 1:max32 }

  • -int stream_ip.max_overlaps = 0: maximum allowed overlaps per datagram; 0 is unlimited { 0: } +int stream_ip.max_overlaps = 0: maximum allowed overlaps per datagram; 0 is unlimited { 0:max32 }

  • -int stream_ip.min_frag_length = 0: alert if fragment length is below this limit before or after trimming { 0: } +int stream_ip.min_frag_length = 0: alert if fragment length is below this limit before or after trimming { 0:65535 }

  • @@ -28461,12 +28349,12 @@ enum stream_ip.policy = linux: fragment reassembly policy { fir
  • -int stream_ip.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_ip.session_timeout = 30: session tracking timeout { 1:max31 }

  • -int stream_ip.trace: mask for enabling debug traces in module +int stream_ip.trace: mask for enabling debug traces in module { 0:max53 }

  • @@ -28501,22 +28389,22 @@ interval stream_size.~range: check if the stream size is in the
  • -int stream.tcp_cache.idle_timeout = 3600: maximum inactive time before retiring session tracker { 1: } +int stream.tcp_cache.idle_timeout = 3600: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.tcp_cache.max_sessions = 262144: maximum simultaneous sessions tracked before pruning { 2: } +int stream.tcp_cache.max_sessions = 262144: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.tcp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.tcp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • -int stream_tcp.flush_factor = 0: flush upon seeing a drop in segment size after given number of non-decreasing segments { 0: } +int stream_tcp.flush_factor = 0: flush upon seeing a drop in segment size after given number of non-decreasing segments { 0:65535 }

  • @@ -28531,7 +28419,7 @@ int stream_tcp.max_window = 0: maximum allowed TCP window { 0:1
  • -int stream_tcp.overlap_limit = 0: maximum number of allowed overlapping segments per session { 0:255 } +int stream_tcp.overlap_limit = 0: maximum number of allowed overlapping segments per session { 0:max32 }

  • @@ -28541,12 +28429,12 @@ enum stream_tcp.policy = bsd: determines operating system chara
  • -int stream_tcp.queue_limit.max_bytes = 1048576: don't queue more than given bytes per session and direction { 0: } +int stream_tcp.queue_limit.max_bytes = 1048576: don't queue more than given bytes per session and direction { 0:max32 }

  • -int stream_tcp.queue_limit.max_segments = 2621: don't queue more than given segments per session and direction { 0: } +int stream_tcp.queue_limit.max_segments = 2621: don't queue more than given segments per session and direction { 0:max32 }

  • @@ -28556,12 +28444,12 @@ bool stream_tcp.reassemble_async = true: queue data for reassem
  • -int stream_tcp.require_3whs = -1: don't track midstream sessions after given seconds from start up; -1 tracks all { -1:86400 } +int stream_tcp.require_3whs = -1: don't track midstream sessions after given seconds from start up; -1 tracks all { -1:max31 }

  • -int stream_tcp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_tcp.session_timeout = 30: session tracking timeout { 1:max31 }

  • @@ -28581,57 +28469,57 @@ int stream_tcp.small_segments.maximum_size = 0: limit number of
  • -int stream.trace: mask for enabling debug traces in module +int stream.trace: mask for enabling debug traces in module { 0:max53 }

  • -int stream.udp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.udp_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.udp_cache.max_sessions = 131072: maximum simultaneous sessions tracked before pruning { 2: } +int stream.udp_cache.max_sessions = 131072: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.udp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.udp_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • -int stream_udp.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_udp.session_timeout = 30: session tracking timeout { 1:max31 }

  • -int stream.user_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1: } +int stream.user_cache.idle_timeout = 180: maximum inactive time before retiring session tracker { 1:max32 }

  • -int stream.user_cache.max_sessions = 1024: maximum simultaneous sessions tracked before pruning { 2: } +int stream.user_cache.max_sessions = 1024: maximum simultaneous sessions tracked before pruning { 2:max32 }

  • -int stream.user_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1: } +int stream.user_cache.pruning_timeout = 30: minimum inactive time before being eligible for pruning { 1:max32 }

  • -int stream_user.session_timeout = 30: session tracking timeout { 1:86400 } +int stream_user.session_timeout = 30: session tracking timeout { 1:max31 }

  • -int stream_user.trace: mask for enabling debug traces in module +int stream_user.trace: mask for enabling debug traces in module { 0:max53 }

  • -int suppress[].gid = 0: rule generator ID { 0: } +int suppress[].gid = 0: rule generator ID { 0:max32 }

  • @@ -28641,7 +28529,7 @@ string suppress[].ip: restrict suppression to these addresses a
  • -int suppress[].sid = 0: rule signature ID { 0: } +int suppress[].sid = 0: rule signature ID { 0:max32 }

  • @@ -28651,7 +28539,7 @@ enum suppress[].track: suppress only matching source or destina
  • -int tag.bytes: tag for this many bytes { 1: } +int tag.bytes: tag for this many bytes { 1:max32 }

  • @@ -28661,12 +28549,12 @@ enum tag.~: log all packets in session or all packets to or fro
  • -int tag.packets: tag this many packets { 1: } +int tag.packets: tag this many packets { 1:max32 }

  • -int tag.seconds: tag for this many seconds { 1: } +int tag.seconds: tag for this many seconds { 1:max32 }

  • @@ -28696,7 +28584,7 @@ enum tcp_connector.setup: stream establishment { call | answer
  • -int telnet.ayt_attack_thresh = -1: alert on this number of consecutive Telnet AYT commands { -1: } +int telnet.ayt_attack_thresh = -1: alert on this number of consecutive Telnet AYT commands { -1:max31 }

  • @@ -28746,7 +28634,7 @@ bool unified2.legacy_events = false: generate Snort 2.X style e
  • -int unified2.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0: } +int unified2.limit = 0: set maximum size in MB before rollover (0 is unlimited) { 0:maxSZ }

  • @@ -28831,6 +28719,11 @@ interval wscale.~range: check if TCP window scale is in given r
    • +active.injects: total crafted packets injected (sum) +

      +
    • +
    • +

      appid.appid_unknown: count of sessions where appid could not be determined (sum)

    • @@ -34446,7 +34339,7 @@ interval wscale.~range: check if TCP window scale is in given r
    • -snort.resume(): continue packet processing +snort.resume(pkt_num): continue packet processing. If number of packet is specified, will resume for n packets and pause

    • @@ -36982,46 +36875,6 @@ deleted -> unified2: 'filename'
    • -piglet::pp_codec: Codec piglet -

      -
    • -
    • -

      -piglet::pp_inspector: Inspector piglet -

      -
    • -
    • -

      -piglet::pp_ips_action: Ips action piglet -

      -
    • -
    • -

      -piglet::pp_ips_option: Ips option piglet -

      -
    • -
    • -

      -piglet::pp_logger: Logger piglet -

      -
    • -
    • -

      -piglet::pp_search_engine: Search engine piglet -

      -
    • -
    • -

      -piglet::pp_so_rule: SO rule piglet -

      -
    • -
    • -

      -piglet::pp_test: Test piglet -

      -
    • -
    • -

      search_engine::ac_banded: Aho-Corasick Banded (high memory, moderate performance)

    • @@ -37699,7 +37552,7 @@ Note that on OpenBSD, divert sockets don't work with bridges!

diff --git a/doc/snort_manual.pdf b/doc/snort_manual.pdf
index 9da387e20..e92e73454 100644
Binary files a/doc/snort_manual.pdf and b/doc/snort_manual.pdf differ
diff --git a/doc/snort_manual.text b/doc/snort_manual.text
index 2ecdac088..91781e323 100644
--- a/doc/snort_manual.text
+++ b/doc/snort_manual.text
@@ -384,7 +384,7 @@ Table of Contents
 Snorty
 ,,_ -*> Snort++ <*-
-o" )~ Version 3.0.0 (Build 248) from 2.9.11
+o" )~ Version 3.0.0 (Build 250) from 2.9.11
 '''' By Martin Roesch & The Snort Team
 http://snort.org/contact#team
 Copyright (C) 2014-2018 Cisco and/or its affiliates. All rights reserved.
@@ -1455,6 +1455,7 @@ Snort has several options to get more help:
 --help-commands [<module prefix>] output matching commands
 --help-config [<module prefix>] output matching config options
 --help-counts [<module prefix>] output matching peg counts
+--help-limits print the int upper bounds denoted by max*
 --help-module <module> output description of given module
 --help-modules list all available modules with brief help
 --help-plugins list all available plugins with brief help
@@ -5390,14 +5391,19 @@ Usage: global
 Configuration:
 
 * int active.attempts = 0: number of TCP packets sent per response
-  (with varying sequence numbers) { 0:20 }
+  (with varying sequence numbers) { 0:255 }
 * string active.device: use ip for network layer responses or eth0
   etc for link layer
 * string active.dst_mac: use format 01:23:45:67:89:ab
-* int active.max_responses = 0: maximum number of responses { 0: }
+* int active.max_responses = 0: maximum number of responses { 0:255
+  }
 * int active.min_interval = 255: minimum number of seconds between
   responses { 1:255 }
+Peg counts:
+
+* active.injects: total crafted packets injected (sum)
+
 6.2. alerts
@@ -5415,16 +5421,16 @@ Configuration:
   in alert info (fast, full, or syslog only)
 * bool alerts.default_rule_state = true: enable or disable ips rules
-* int alerts.detection_filter_memcap = 1048576: set available bytes
-  of memory for detection_filters { 0: }
-* int alerts.event_filter_memcap = 1048576: set available bytes of
-  memory for event_filters { 0: }
+* int alerts.detection_filter_memcap = 1048576: set available MB of
+  memory for detection_filters { 0:max32 }
+* int alerts.event_filter_memcap = 1048576: set available MB of
+  memory for event_filters { 0:max32 }
 * bool alerts.log_references = false: include rule references in
   alert info (full only)
 * string alerts.order = pass drop alert log: change the order of
   rule action application
-* int alerts.rate_filter_memcap = 1048576: set available bytes of
-  memory for rate_filters { 0: }
+* int alerts.rate_filter_memcap = 1048576: set available MB of
+  memory for rate_filters { 0:max32 }
 * string alerts.reference_net: set the CIDR for homenet (for use
   with -l or -B, does NOT change $HOME_NET in IDS mode)
 * bool alerts.stateful = false: don't alert w/o established session
@@ -5446,11 +5452,11 @@ Usage: global
 Configuration:
 
 * int attribute_table.max_hosts = 1024: maximum number of hosts in
-  attribute table { 32:207551 }
+  attribute table { 32:max53 }
 * int attribute_table.max_services_per_host = 8: maximum number of
   services per host entry in attribute table { 1:65535 }
 * int attribute_table.max_metadata_services = 8: maximum number of
-  services in rule metadata { 1:256 }
+  services in rule { 1:255 }
 6.4. classifications
@@ -5468,7 +5474,7 @@ Configuration:
 * string classifications[].name: name used with classtype rule option
 * int classifications[].priority = 1: default priority for class {
-  0: }
+  0:max32 }
 * string classifications[].text: description of class
@@ -5488,7 +5494,7 @@ Configuration:
 * string daq.input_spec: input specification
 * string daq.module: DAQ module to use
 * string daq.variables[].str: string parameter
-* int daq.instances[].id: instance ID (required) { 0: }
+* int daq.instances[].id: instance ID (required) { 0:max32 }
 * string daq.instances[].input_spec: input specification
 * string daq.instances[].variables[].str: string parameter
 * int daq.snaplen: set snap length (same as -s) { 0:65535 }
@@ -5556,19 +5562,20 @@ Usage: global
 Configuration:
 
-* int detection.asn1 = 256: maximum decode nodes { 1: }
+* int detection.asn1 = 0: maximum decode nodes { 0:65535 }
 * int detection.offload_limit = 99999: minimum sizeof PDU to
-  offload fast pattern search (defaults to disabled) { 0: }
+  offload fast pattern search (defaults to disabled) { 0:max32 }
 * int detection.offload_threads = 0: maximum number of simultaneous
-  offloads (defaults to disabled) { 0: }
+  offloads (defaults to disabled) { 0:max32 }
 * bool detection.pcre_enable = true: disable pcre pattern matching
-* int detection.pcre_match_limit = 1500: limit pcre backtracking,
-  -1 = max, 0 = off { -1:1000000 }
+* int detection.pcre_match_limit = 1500: limit pcre backtracking, 0
+  = off { 0:max32 }
 * int detection.pcre_match_limit_recursion = 1500: limit pcre stack
-  consumption, -1 = max, 0 = off { -1:10000 }
+  consumption, 0 = off { 0:max32 }
 * bool detection.enable_address_anomaly_checks = false: enable
   check and alerting of address anomalies
-* int detection.trace: mask for enabling debug traces in module
+* int detection.trace: mask for enabling debug traces in module {
+  0:max53 }
 
 Peg counts:
@@ -5615,15 +5622,15 @@ Usage: context
 Configuration:
 
-* int event_filter[].gid = 1: rule generator ID { 0: }
-* int event_filter[].sid = 1: rule signature ID { 0: }
+* int event_filter[].gid = 1: rule generator ID { 0:max32 }
+* int event_filter[].sid = 1: rule signature ID { 0:max32 }
 * enum event_filter[].type: 1st count events | every count events |
   once after count events { limit | threshold | both }
 * enum event_filter[].track: filter only matching source or
   destination addresses { by_src | by_dst }
 * int event_filter[].count = 0: number of events in interval before
-  tripping; -1 to disable { -1: }
-* int event_filter[].seconds = 0: count interval { 0: }
+  tripping; -1 to disable { -1:max31 }
+* int event_filter[].seconds = 0: count interval { 0:max32 }
 * string event_filter[].ip: restrict filter to these addresses
   according to track
@@ -5640,8 +5647,9 @@ Usage: context
 Configuration:
 
-* int event_queue.max_queue = 8: maximum events to queue { 1: }
-* int event_queue.log = 3: maximum events to log { 1: }
+* int event_queue.max_queue = 8: maximum events to queue { 1:max32
+  }
+* int event_queue.log = 3: maximum events to log { 1:max32 }
 * enum event_queue.order_events = content_length: criteria for
   ordering incoming events { priority|content_length }
 * bool event_queue.process_all_events = false: process just first
@@ -5687,7 +5695,7 @@ Usage: global
 Configuration:
 
-* int host_cache[].size: size of host cache
+* int host_cache[].size: size of host cache { 1:max32 }
 
 Peg counts:
@@ -5813,19 +5821,20 @@ Usage: context
 Configuration:
 
 * int latency.packet.max_time = 500: set timeout for packet latency
-  thresholding (usec) { 0: }
+  thresholding (usec) { 0:max53 }
 * bool latency.packet.fastpath = false: fastpath expensive packets
   (max_time exceeded)
 * enum latency.packet.action = none: event action if packet times
   out and is fastpathed { none | alert | log | alert_and_log }
 * int latency.rule.max_time = 500: set timeout for rule evaluation
-  (usec) { 0: }
+  (usec) { 0:max53 }
 * bool latency.rule.suspend = false: temporarily suspend
expensive rules * int latency.rule.suspend_threshold = 5: set threshold for number - of timeouts before suspending a rule { 1: } + of timeouts before suspending a rule { 1:max32 } * int latency.rule.max_suspend_time = 30000: set max time for - suspending a rule (ms, 0 means permanently disable rule) { 0: } + suspending a rule (ms, 0 means permanently disable rule) { + 0:max32 } * enum latency.rule.action = none: event action for rule latency enable and suspend events { none | alert | log | alert_and_log } @@ -5859,11 +5868,11 @@ Usage: global Configuration: * int memory.cap = 0: set the per-packet-thread cap on memory - (bytes, 0 to disable) { 0: } + (bytes, 0 to disable) { 0:maxSZ } * bool memory.soft = false: always succeed in allocating memory, even if above the cap * int memory.threshold = 0: set the per-packet-thread threshold for - preemptive cleanup actions (percent, 0 to disable) { 0: } + preemptive cleanup actions (percent, 0 to disable) { 0:100 } 6.18. network @@ -5931,9 +5940,9 @@ Configuration: * bool output.show_year = false: include year in timestamp in the alert and log files (same as -y) * int output.tagged_packet_limit = 256: maximum number of packets - tagged for non-packet metrics { 0: } + tagged for non-packet metrics { 0:max32 } * bool output.verbose = false: be verbose (same as -v) - * bool output.wide_hex_dump = true: output 20 bytes per lines + * bool output.wide_hex_dump = false: output 20 bytes per lines instead of 16 when dumping buffers @@ -5978,9 +5987,9 @@ Configuration: * string packets.bpf_file: file with BPF to select traffic for Snort * int packets.limit = 0: maximum number of packets to process - before stopping (0 is unlimited) { 0: } + before stopping (0 is unlimited) { 0:max53 } * int packets.skip = 0: number of packets to skip before before - processing { 0: } + processing { 0:max53 } * bool packets.vlan_agnostic = false: determines whether VLAN info is used to track fragments and connections @@ -6001,12 +6010,12 @@ 
Configuration: * string process.threads[].cpuset: pin the associated thread to this cpuset * int process.threads[].thread = 0: set cpu affinity for the - thread that runs { 0: } + thread that runs { 0:65535 } * bool process.daemon = false: fork as a daemon (same as -D) * bool process.dirty_pig = false: shutdown without internal cleanup * string process.set_gid: set group ID (same as -g) * string process.set_uid: set user ID (same as -u) - * string process.umask: set process umask (same as -m) + * int process.umask: set process umask (same as -m) { 0x000:0x1FF } * bool process.utc = false: use UTC instead of local time for timestamps @@ -6025,22 +6034,22 @@ Configuration: * bool profiler.modules.show = true: show module time profile stats * int profiler.modules.count = 0: limit results to count items per - level (0 = no limit) { 0: } + level (0 = no limit) { 0:max32 } * enum profiler.modules.sort = total_time: sort by given field { none | checks | avg_check | total_time } * int profiler.modules.max_depth = -1: limit depth to max_depth (-1 - = no limit) { -1: } + = no limit) { -1:255 } * bool profiler.memory.show = true: show module memory profile stats * int profiler.memory.count = 0: limit results to count items per - level (0 = no limit) { 0: } + level (0 = no limit) { 0:max32 } * enum profiler.memory.sort = total_used: sort by given field { none | allocations | total_used | avg_allocation } * int profiler.memory.max_depth = -1: limit depth to max_depth (-1 - = no limit) { -1: } + = no limit) { -1:255 } * bool profiler.rules.show = true: show rule time profile stats * int profiler.rules.count = 0: print results to given level (0 = - all) { 0: } + all) { 0:max32 } * enum profiler.rules.sort = total_time: sort by given field { none | checks | avg_check | total_time | matches | no_matches | avg_match | avg_no_match } @@ -6058,16 +6067,16 @@ Usage: detect Configuration: - * int rate_filter[].gid = 1: rule generator ID { 0: } - * int rate_filter[].sid = 1: rule 
signature ID { 0: } + * int rate_filter[].gid = 1: rule generator ID { 0:max32 } + * int rate_filter[].sid = 1: rule signature ID { 0:max32 } * enum rate_filter[].track = by_src: filter only matching source or destination addresses { by_src | by_dst | by_rule } * int rate_filter[].count = 1: number of events in interval before - tripping { 0: } - * int rate_filter[].seconds = 1: count interval { 0: } + tripping { 0:max32 } + * int rate_filter[].seconds = 1: count interval { 0:max32 } * enum rate_filter[].new_action = alert: take this action on future hits until timeout { log | pass | alert | drop | block | reset } - * int rate_filter[].timeout = 1: count interval { 0: } + * int rate_filter[].timeout = 1: count interval { 0:max32 } * string rate_filter[].apply_to: restrict filter to these addresses according to track @@ -6100,8 +6109,8 @@ Usage: detect Configuration: - * int rule_state[].gid = 0: rule generator ID { 0: } - * int rule_state[].sid = 0: rule signature ID { 0: } + * int rule_state[].gid = 0: rule generator ID { 0:max32 } + * int rule_state[].sid = 0: rule signature ID { 0:max32 } * bool rule_state[].enable = true: enable or disable rule in all policies @@ -6119,7 +6128,7 @@ Usage: global Configuration: * int search_engine.bleedover_port_limit = 1024: maximum ports in - rule before demotion to any-any port group { 1: } + rule before demotion to any-any port group { 1:max32 } * bool search_engine.bleedover_warnings_enabled = false: print warning if a rule is demoted to any-any port group * bool search_engine.enable_single_rule_group = false: put all @@ -6134,7 +6143,7 @@ Configuration: * bool search_engine.debug_print_rule_groups_compiled = false: prints compiled rule group information * int search_engine.max_pattern_len = 0: truncate patterns when - compiling into state machine (0 means no maximum) { 0: } + compiling into state machine (0 means no maximum) { 0:max32 } * int search_engine.max_queue_events = 5: maximum number of matching fast pattern 
states to queue per packet { 2:100 } * bool search_engine.detect_raw_tcp = false: detect on TCP payload @@ -6222,8 +6231,9 @@ Configuration: * string snort.-l: log to this directory instead of current directory * implied snort.-M: log messages to syslog (not alerts) - * int snort.-m: set umask = { 0: } - * int snort.-n: stop after count packets { 0: } + * int snort.-m: set the process file mode creation mask { + 0x000:0x1FF } + * int snort.-n: stop after count packets { 0:max53 } * implied snort.-O: obfuscate the logged IP addresses * implied snort.-Q: enable inline mode operation * implied snort.-q: quiet mode - Don’t show banner and status @@ -6232,7 +6242,7 @@ Configuration: policy * string snort.-r: … (same as --pcap-list) * string snort.-S: set config variable x equal to value v - * int snort.-s = 1514: (same as --snaplen); default is 1514 + * int snort.-s = 1518: (same as --snaplen); default is 1518 { 68:65535 } * implied snort.-T: test and report on the current Snort configuration @@ -6243,7 +6253,6 @@ Configuration: initialization * implied snort.-V: (same as --version) * implied snort.-v: be verbose - * implied snort.-W: lists available interfaces * implied snort.-X: dump the raw packet data starting at the link layer * implied snort.-x: same as --pedantic @@ -6251,7 +6260,7 @@ Configuration: files * int snort.-z = 1: maximum number of packet threads (same as --max-packet-threads); 0 gets the number of CPU cores reported - by the system; default is 1 { 0: } + by the system; default is 1 { 0:max32 } * implied snort.--alert-before-pass: process alert, drop, sdrop, or reject before pass; default is pass before alert, drop,… * string snort.--bpf: are standard BPF options, as @@ -6288,6 +6297,8 @@ Configuration: config options { (optional) } * string snort.--help-counts: [] output matching peg counts { (optional) } + * implied snort.--help-limits: print the int upper bounds denoted + by max* * string snort.--help-module: output description of given module * 
implied snort.--help-modules: list all available modules with @@ -6317,7 +6328,7 @@ Configuration: for multiple snorts (same as -G) { 0:65535 } * implied snort.--markup: output help in asciidoc compatible format * int snort.--max-packet-threads = 1: configure maximum - number of packet threads (same as -z) { 0: } + number of packet threads (same as -z) { 0:max32 } * implied snort.--mem-check: like -T but also compile search engines * implied snort.--nostamps: don’t include timestamps in log file @@ -6325,8 +6336,6 @@ Configuration: * implied snort.--nolock-pidfile: do not try to lock Snort PID file * implied snort.--pause: wait for resume/quit command before processing packets/terminating - * int snort.--pause-after-n: pause after count packets, to - be used with single packet thread only { 1: } * implied snort.--parsing-follows-files: parse relative paths from the perspective of the current configuration file * string snort.--pcap-file: file that contains a list of @@ -6338,7 +6347,7 @@ Configuration: * string snort.--pcap-filter: filter to apply when getting pcaps from file or directory * int snort.--pcap-loop: read all pcaps times; 0 - will read until Snort is terminated { -1: } + will read until Snort is terminated { 0:max32 } * implied snort.--pcap-no-filter: reset to use no filter when getting pcaps from file or directory * implied snort.--pcap-reload: if reading multiple pcaps, reload @@ -6353,16 +6362,16 @@ Configuration: * string snort.--rule-path: where to find rules files * implied snort.--rule-to-hex: output so rule header to stdout for text rule on stdin - * string snort.--rule-to-text = [SnortFoo]: output plain so rule - header to stdout for text rule on stdin { 16 } + * string snort.--rule-to-text: output plain so rule header to + stdout for text rule on stdin (specify delimiter or + [Snort_SO_Rule] will be used) { 16 } * string snort.--run-prefix: prepend this to each output file * string snort.--script-path: to a luajit script or directory 
containing luajit scripts * implied snort.--shell: enable the interactive command line - * implied snort.--piglet: enable piglet test harness mode * implied snort.--show-plugins: list module and plugin versions - * int snort.--skip: skip 1st n packets { 0: } - * int snort.--snaplen = 1514: set snaplen of packet (same as + * int snort.--skip: skip 1st n packets { 0:max53 } + * int snort.--snaplen = 1518: set snaplen of packet (same as -s) { 68:65535 } * implied snort.--stdin-rules: read rules from stdin until EOF or a line starting with END is read @@ -6373,8 +6382,6 @@ Configuration: * implied snort.--treat-drop-as-ignore: use drop, sdrop, and reject rules to ignore session traffic when not inline * string snort.--tweaks: tune configuration - * string snort.--catch-test: comma separated list of cat unit test - tags or all * implied snort.--version: show version number (same as -V) * implied snort.--warn-all: enable all warnings * implied snort.--warn-conf: warn about configuration issues @@ -6394,10 +6401,12 @@ Configuration: * implied snort.--warn-vars: warn about variable definition and usage issues * int snort.--x2c: output ASCII char for given hex (see also --c2x) + { 0x00:0xFF } * string snort.--x2s: output ASCII string for given byte code (see also --x2c) * implied snort.--trace: turn on main loop debug trace - * int snort.trace: mask for enabling debug traces in module + * int snort.trace: mask for enabling debug traces in module { + 0:max53 } Commands: @@ -6413,7 +6422,8 @@ Commands: * snort.reload_daq(): reload daq module * snort.reload_hosts(filename): load a new hosts table * snort.pause(): suspend packet processing - * snort.resume(): continue packet processing + * snort.resume(pkt_num): continue packet processing. 
If number of + packet is specified, will resume for n packets and pause * snort.detach(): exit shell w/o shutdown * snort.quit(): shutdown and dump-stats * snort.help(): this output @@ -6448,8 +6458,8 @@ Usage: detect Configuration: - * int suppress[].gid = 0: rule generator ID { 0: } - * int suppress[].sid = 0: rule signature ID { 0: } + * int suppress[].gid = 0: rule generator ID { 0:max32 } + * int suppress[].sid = 0: rule signature ID { 0:max32 } * enum suppress[].track: suppress only matching source or destination addresses { by_src | by_dst } * string suppress[].ip: restrict suppression to these addresses @@ -6871,7 +6881,8 @@ Configuration: * bool mpls.enable_mpls_overlapping_ip = false: enable if private network addresses overlap and must be differentiated by MPLS label(s) - * int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1: } + * int mpls.max_mpls_stack_depth = -1: set MPLS stack depth { -1:255 + } * enum mpls.mpls_payload_type = ip4: set encapsulated payload type { eth | ip4 | ip6 } @@ -7134,19 +7145,18 @@ Usage: context Configuration: - * int appid.first_decrypted_packet_debug = 0: the first packet of - an already decrypted SSL flow (debug single session only) { 0: } - * int appid.memcap = 0: disregard - not implemented { 0: } + * int appid.memcap = 0: disregard - not implemented { 0:maxSZ } * bool appid.log_stats = false: enable logging of appid statistics * int appid.app_stats_period = 300: time period for collecting and - logging appid statistics { 0: } + logging appid statistics { 0:max32 } * int appid.app_stats_rollover_size = 20971520: max file size for - appid stats before rolling over the log file { 0: } + appid stats before rolling over the log file { 0:max32 } * int appid.app_stats_rollover_time = 86400: max time period for - collection appid stats before rolling over the log file { 0: } + collection appid stats before rolling over the log file { 0:max31 + } * string appid.app_detector_dir: directory to load appid detectors from - 
* int appid.instance_id = 0: instance id - ignored { 0: } + * int appid.instance_id = 0: instance id - ignored { 0:max32 } * bool appid.debug = false: enable appid debug logging * bool appid.dump_ports = false: enable dump of appid port information @@ -7160,7 +7170,8 @@ Configuration: on startup * bool appid.log_all_sessions = false: enable logging of all appid sessions - * int appid.trace: mask for enabling debug traces in module + * int appid.trace: mask for enabling debug traces in module { + 0:max53 } Commands: @@ -7240,7 +7251,7 @@ Usage: inspect Configuration: * int binder[].when.ips_policy_id = 0: unique ID for selection of - this config by external logic { 0: } + this config by external logic { 0:max32 } * bit_list binder[].when.ifaces: list of interface indices { 255 } * bit_list binder[].when.vlans: list of VLAN IDs { 4095 } * addr_list binder[].when.nets: list of networks @@ -7252,8 +7263,8 @@ Configuration: * bit_list binder[].when.src_ports: list of source ports { 65535 } * bit_list binder[].when.dst_ports: list of destination ports { 65535 } - * int binder[].when.src_zone: source zone { 0:2147483647 } - * int binder[].when.dst_zone: destination zone { 0:2147483647 } + * int binder[].when.src_zone: source zone { 0:max31 } + * int binder[].when.dst_zone: destination zone { 0:max31 } * enum binder[].when.role = any: use the given configuration on one or any end of a session { client | server | any } * string binder[].when.service: override default configuration @@ -7295,7 +7306,7 @@ Configuration: event to log { http_request_header_event | http_response_header_event } * int data_log.limit = 0: set maximum size in MB before rollover (0 - is unlimited) { 0: } + is unlimited) { 0:max32 } Peg counts: @@ -7350,28 +7361,29 @@ Usage: inspect Configuration: - * bool dce_smb.disable_defrag = false: Disable DCE/RPC + * bool dce_smb.disable_defrag = false: disable DCE/RPC defragmentation - * int dce_smb.max_frag_len = 65535: Maximum fragment size for + * int 
dce_smb.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 } - * int dce_smb.reassemble_threshold = 0: Minimum bytes received + * int dce_smb.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 } - * enum dce_smb.smb_fingerprint_policy = none: Target based SMB + * enum dce_smb.smb_fingerprint_policy = none: target based SMB policy to use { none | client | server | both } - * enum dce_smb.policy = WinXP: Target based policy to use { Win2000 + * enum dce_smb.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 } * int dce_smb.smb_max_chain = 3: SMB max chain size { 0:255 } * int dce_smb.smb_max_compound = 3: SMB max compound size { 0:255 } - * multi dce_smb.valid_smb_versions = all: Valid SMB versions { v1 | + * multi dce_smb.valid_smb_versions = all: valid SMB versions { v1 | v2 | all } * enum dce_smb.smb_file_inspection = off: SMB file inspection { off | on | only } * int dce_smb.smb_file_depth = 16384: SMB file depth for file data - { -1: } + { -1:32767 } * string dce_smb.smb_invalid_shares: SMB shares to alert on * bool dce_smb.smb_legacy_mode = false: inspect only SMBv1 - * int dce_smb.trace: mask for enabling debug traces in module + * int dce_smb.trace: mask for enabling debug traces in module { + 0:max53 } Rules: @@ -7519,13 +7531,13 @@ Usage: inspect Configuration: - * bool dce_tcp.disable_defrag = false: Disable DCE/RPC + * bool dce_tcp.disable_defrag = false: disable DCE/RPC defragmentation - * int dce_tcp.max_frag_len = 65535: Maximum fragment size for + * int dce_tcp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 } - * int dce_tcp.reassemble_threshold = 0: Minimum bytes received + * int dce_tcp.reassemble_threshold = 0: minimum bytes received before performing reassembly { 0:65535 } - * enum dce_tcp.policy = WinXP: Target based policy to use { Win2000 + * enum 
dce_tcp.policy = WinXP: target based policy to use { Win2000 | WinXP | WinVista | Win2003 | Win2008 | Win7 | Samba | Samba-3.0.37 | Samba-3.0.22 | Samba-3.0.20 } @@ -7625,11 +7637,12 @@ Usage: inspect Configuration: - * bool dce_udp.disable_defrag = false: Disable DCE/RPC + * bool dce_udp.disable_defrag = false: disable DCE/RPC defragmentation - * int dce_udp.max_frag_len = 65535: Maximum fragment size for + * int dce_udp.max_frag_len = 65535: maximum fragment size for defragmentation { 1514:65535 } - * int dce_udp.trace: mask for enabling debug traces in module + * int dce_udp.trace: mask for enabling debug traces in module { + 0:max53 } Rules: @@ -7799,43 +7812,45 @@ Usage: global Configuration: - * int file_id.type_depth = 1460: stop type ID at this point { 0: } + * int file_id.type_depth = 1460: stop type ID at this point { + 0:max53 } * int file_id.signature_depth = 10485760: stop signature at this - point { 0: } + point { 0:max53 } * int file_id.block_timeout = 86400: stop blocking after this many - seconds { 0: } + seconds { 0:max31 } * int file_id.lookup_timeout = 2: give up on lookup after this many - seconds { 0: } + seconds { 0:max31 } * bool file_id.block_timeout_lookup = false: block if lookup times out * int file_id.capture_memcap = 100: memcap for file capture in - megabytes { 0: } + megabytes { 0:max53 } * int file_id.capture_max_size = 1048576: stop file capture beyond - this point { 0: } + this point { 0:max53 } * int file_id.capture_min_size = 0: stop file capture if file size - less than this { 0: } + less than this { 0:max53 } * int file_id.capture_block_size = 32768: file capture block size - in bytes { 8: } + in bytes { 8:max53 } * int file_id.max_files_cached = 65536: maximal number of files - cached in memory { 8: } + cached in memory { 8:max53 } * bool file_id.enable_type = true: enable type ID * bool file_id.enable_signature = true: enable signature calculation * bool file_id.enable_capture = false: enable file capture - * int 
file_id.show_data_depth = 100: print this many octets { 0: } - * int file_id.file_rules[].rev = 0: rule revision { 0: } + * int file_id.show_data_depth = 100: print this many octets { + 0:max53 } + * int file_id.file_rules[].rev = 0: rule revision { 0:max32 } * string file_id.file_rules[].msg: information about the file type * string file_id.file_rules[].type: file type name - * int file_id.file_rules[].id = 0: file type id { 0: } + * int file_id.file_rules[].id = 0: file type id { 0:max32 } * string file_id.file_rules[].category: file type category * string file_id.file_rules[].group: comma separated list of groups associated with file type * string file_id.file_rules[].version: file type version * string file_id.file_rules[].magic[].content: file magic content * int file_id.file_rules[].magic[].offset = 0: file magic offset { - 0: } + 0:max32 } * int file_id.file_policy[].when.file_type_id = 0: unique ID for - file type in file magic rule { 0: } + file type in file magic rule { 0:max32 } * string file_id.file_policy[].when.sha256: SHA 256 * enum file_id.file_policy[].use.verdict = unknown: what to do with matching traffic { unknown | log | stop | block | reset } @@ -7851,7 +7866,7 @@ Configuration: * bool file_id.trace_stream = false: enable runtime dump of file data * int file_id.verdict_delay = 0: number of queries to return final - verdict { 0: } + verdict { 0:max53 } Peg counts: @@ -7898,13 +7913,13 @@ Configuration: * bool ftp_client.bounce = false: check for bounces * addr ftp_client.bounce_to[].address = 1.0.0.0/32: allowed IP address in CIDR format - * port ftp_client.bounce_to[].port = 20: allowed port { 1: } + * port ftp_client.bounce_to[].port = 20: allowed port * port ftp_client.bounce_to[].last_port: optional allowed range - from port to last_port inclusive { 0: } + from port to last_port inclusive * bool ftp_client.ignore_telnet_erase_cmds = false: ignore erase character and erase line commands when normalizing - * int ftp_client.max_resp_len = -1: 
maximum FTP response accepted - by client { -1: } + * int ftp_client.max_resp_len = 4294967295: maximum FTP response + accepted by client { 0:max32 } * bool ftp_client.telnet_cmds = false: detect Telnet escape sequences on FTP control channel @@ -7946,7 +7961,7 @@ Configuration: given commands * string ftp_server.directory_cmds[].dir_cmd: directory command * int ftp_server.directory_cmds[].rsp_code = 200: expected - successful response code for command { 200: } + successful response code for command { 200:max32 } * string ftp_server.file_put_cmds: check the formatting of the given commands * string ftp_server.file_get_cmds: check the formatting of the @@ -7960,9 +7975,9 @@ Configuration: * string ftp_server.cmd_validity[].command: command string * string ftp_server.cmd_validity[].format: format specification * int ftp_server.cmd_validity[].length = 0: specify non-default - maximum for command { 0: } + maximum for command { 0:max32 } * int ftp_server.def_max_param_len = 100: default maximum length of - commands handled by server; 0 is unlimited { 1: } + commands handled by server; 0 is unlimited { 1:max32 } * bool ftp_server.encrypted_traffic = false: check for encrypted Telnet and FTP * string ftp_server.ftp_cmds: specify additional commands supported @@ -8020,7 +8035,8 @@ Configuration: * string gtp_inspect[].infos[].name: information element name * int gtp_inspect[].infos[].length = 0: information element type code { 0:255 } - * int gtp_inspect.trace: mask for enabling debug traces in module + * int gtp_inspect.trace: mask for enabling debug traces in module { + 0:max53 } Rules: @@ -8074,9 +8090,9 @@ Usage: inspect Configuration: * int http_inspect.request_depth = -1: maximum request message body - bytes to examine (-1 no limit) { -1: } + bytes to examine (-1 no limit) { -1:max53 } * int http_inspect.response_depth = -1: maximum response message - body bytes to examine (-1 no limit) { -1: } + body bytes to examine (-1 no limit) { -1:max53 } * bool 
http_inspect.unzip = true: decompress gzip and deflate message bodies * bool http_inspect.normalize_utf = true: normalize charset utf @@ -8119,17 +8135,6 @@ Configuration: normalizing URIs * bool http_inspect.simplify_path = true: reduce URI directory path to simplest form - * bool http_inspect.test_input = false: read HTTP messages from - text file - * bool http_inspect.test_output = false: print out HTTP section - data - * int http_inspect.print_amount = 1200: number of characters to - print from a Field { 1:1000000 } - * bool http_inspect.print_hex = false: nonprinting characters - printed in [HH] format instead of using an asterisk - * bool http_inspect.show_pegs = true: display peg counts with test - output - * bool http_inspect.show_scan = false: display scanned segments Rules: @@ -8540,18 +8545,18 @@ Usage: global Configuration: - * bool perf_monitor.base = true: enable base statistics { nullptr } - * bool perf_monitor.cpu = false: enable cpu statistics { nullptr } + * bool perf_monitor.base = true: enable base statistics + * bool perf_monitor.cpu = false: enable cpu statistics * bool perf_monitor.flow = false: enable traffic statistics * bool perf_monitor.flow_ip = false: enable statistics on host pairs - * int perf_monitor.packets = 10000: minimum packets to report { 0: - } - * int perf_monitor.seconds = 60: report interval { 1: } + * int perf_monitor.packets = 10000: minimum packets to report { + 0:max32 } + * int perf_monitor.seconds = 60: report interval { 1:max32 } * int perf_monitor.flow_ip_memcap = 52428800: maximum memory in - bytes for flow tracking { 8200: } + bytes for flow tracking { 8200:maxSZ } * int perf_monitor.max_file_size = 1073741824: files will be rolled - over if they exceed this size { 4096: } + over if they exceed this size { 4096:max53 } * int perf_monitor.flow_ports = 1023: maximum ports to track { 0:65535 } * enum perf_monitor.output = file: output location for stats { file @@ -8629,7 +8634,7 @@ Usage: global Configuration: * int 
port_scan.memcap = 1048576: maximum tracker memory in bytes { - 1: } + 1:maxSZ } * multi port_scan.protos = all: choose the protocols to monitor { tcp | udp | icmp | ip | all } * multi port_scan.scan_types = all: choose type of scans to look @@ -8645,105 +8650,105 @@ Configuration: threshold within window if true; else alert on first only * bool port_scan.include_midstream = false: list of CIDRs with optional ports - * int port_scan.tcp_ports.scans = 100: scan attempts { 0: } + * int port_scan.tcp_ports.scans = 100: scan attempts { 0:65535 } * int port_scan.tcp_ports.rejects = 15: scan attempts with negative - response { 0: } + response { 0:65535 } * int port_scan.tcp_ports.nets = 25: number of times address - changed from prior attempt { 0: } + changed from prior attempt { 0:65535 } * int port_scan.tcp_ports.ports = 25: number of times port (or - proto) changed from prior attempt { 0: } - * int port_scan.tcp_decoy.scans = 100: scan attempts { 0: } + proto) changed from prior attempt { 0:65535 } + * int port_scan.tcp_decoy.scans = 100: scan attempts { 0:65535 } * int port_scan.tcp_decoy.rejects = 15: scan attempts with negative - response { 0: } + response { 0:65535 } * int port_scan.tcp_decoy.nets = 25: number of times address - changed from prior attempt { 0: } + changed from prior attempt { 0:65535 } * int port_scan.tcp_decoy.ports = 25: number of times port (or - proto) changed from prior attempt { 0: } - * int port_scan.tcp_sweep.scans = 100: scan attempts { 0: } + proto) changed from prior attempt { 0:65535 } + * int port_scan.tcp_sweep.scans = 100: scan attempts { 0:65535 } * int port_scan.tcp_sweep.rejects = 15: scan attempts with negative - response { 0: } + response { 0:65535 } * int port_scan.tcp_sweep.nets = 25: number of times address - changed from prior attempt { 0: } + changed from prior attempt { 0:65535 } * int port_scan.tcp_sweep.ports = 25: number of times port (or - proto) changed from prior attempt { 0: } - * int port_scan.tcp_dist.scans = 
100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.tcp_dist.scans = 100: scan attempts { 0:65535 }
     * int port_scan.tcp_dist.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.tcp_dist.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.tcp_dist.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.udp_ports.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.udp_ports.scans = 100: scan attempts { 0:65535 }
     * int port_scan.udp_ports.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.udp_ports.nets = 25: number of times address
-      changed from prior attempt { 0: }
+      changed from prior attempt { 0:65535 }
     * int port_scan.udp_ports.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.udp_decoy.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.udp_decoy.scans = 100: scan attempts { 0:65535 }
     * int port_scan.udp_decoy.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.udp_decoy.nets = 25: number of times address
-      changed from prior attempt { 0: }
+      changed from prior attempt { 0:65535 }
     * int port_scan.udp_decoy.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.udp_sweep.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.udp_sweep.scans = 100: scan attempts { 0:65535 }
     * int port_scan.udp_sweep.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.udp_sweep.nets = 25: number of times address
-      changed from prior attempt { 0: }
+      changed from prior attempt { 0:65535 }
     * int port_scan.udp_sweep.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.udp_dist.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.udp_dist.scans = 100: scan attempts { 0:65535 }
     * int port_scan.udp_dist.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.udp_dist.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.udp_dist.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.ip_proto.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.ip_proto.scans = 100: scan attempts { 0:65535 }
     * int port_scan.ip_proto.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.ip_proto.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.ip_proto.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.ip_decoy.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.ip_decoy.scans = 100: scan attempts { 0:65535 }
     * int port_scan.ip_decoy.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.ip_decoy.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.ip_decoy.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.ip_sweep.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.ip_sweep.scans = 100: scan attempts { 0:65535 }
     * int port_scan.ip_sweep.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.ip_sweep.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.ip_sweep.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
-    * int port_scan.ip_dist.scans = 100: scan attempts { 0: }
+      proto) changed from prior attempt { 0:65535 }
+    * int port_scan.ip_dist.scans = 100: scan attempts { 0:65535 }
     * int port_scan.ip_dist.rejects = 15: scan attempts with negative
-      response { 0: }
+      response { 0:65535 }
     * int port_scan.ip_dist.nets = 25: number of times address changed
-      from prior attempt { 0: }
+      from prior attempt { 0:65535 }
     * int port_scan.ip_dist.ports = 25: number of times port (or proto)
-      changed from prior attempt { 0: }
-    * int port_scan.icmp_sweep.scans = 100: scan attempts { 0: }
+      changed from prior attempt { 0:65535 }
+    * int port_scan.icmp_sweep.scans = 100: scan attempts { 0:65535 }
     * int port_scan.icmp_sweep.rejects = 15: scan attempts with
-      negative response { 0: }
+      negative response { 0:65535 }
     * int port_scan.icmp_sweep.nets = 25: number of times address
-      changed from prior attempt { 0: }
+      changed from prior attempt { 0:65535 }
     * int port_scan.icmp_sweep.ports = 25: number of times port (or
-      proto) changed from prior attempt { 0: }
+      proto) changed from prior attempt { 0:65535 }
     * int port_scan.tcp_window = 0: detection interval for all TCP
-      scans { 0: }
+      scans { 0:max32 }
     * int port_scan.udp_window = 0: detection interval for all UDP
-      scans { 0: }
+      scans { 0:max32 }
     * int port_scan.ip_window = 0: detection interval for all IP scans
-      { 0: }
+      { 0:max32 }
     * int port_scan.icmp_window = 0: detection interval for all ICMP
-      scans { 0: }
+      scans { 0:max32 }
 
 Rules:
 
@@ -8893,7 +8898,7 @@ Configuration:
     * int sip.max_content_len = 1024: maximum content length of the
       message body { 0:65535 }
     * int sip.max_dialogs = 4: maximum number of dialogs within one
-      stream session { 1:4194303 }
+      stream session { 1:max32 }
     * int sip.max_from_len = 256: maximum from field size { 0:65535 }
     * int sip.max_requestName_len = 20: maximum request name field
       size { 0:65535 }
@@ -8985,7 +8990,7 @@ Configuration:
     * string smtp.alt_max_command_line_len[].command: command string
     * int smtp.alt_max_command_line_len[].length = 0: specify
-      non-default maximum for command { 0: }
+      non-default maximum for command { 0:max32 }
     * string smtp.auth_cmds: commands that initiate an authentication
       exchange
     * int smtp.b64_decode_depth = 1460: depth used to decode the base64
@@ -9165,45 +9170,46 @@ Usage: global
 
 Configuration:
 
     * int stream.footprint = 0: use zero for production, non-zero for
-      testing at given size (for TCP and user) { 0: }
+      testing at given size (for TCP and user) { 0:max32 }
     * bool stream.ip_frags_only = false: don’t process non-frag flows
     * int stream.ip_cache.max_sessions = 16384: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.ip_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.ip_cache.idle_timeout = 180: maximum inactive time
-      before retiring session tracker { 1: }
+      before retiring session tracker { 1:max32 }
     * int stream.icmp_cache.max_sessions = 65536: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.icmp_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.icmp_cache.idle_timeout = 180: maximum inactive time
-      before retiring session tracker { 1: }
+      before retiring session tracker { 1:max32 }
     * int stream.tcp_cache.max_sessions = 262144: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.tcp_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.tcp_cache.idle_timeout = 3600: maximum inactive time
-      before retiring session tracker { 1: }
+      before retiring session tracker { 1:max32 }
     * int stream.udp_cache.max_sessions = 131072: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.udp_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.udp_cache.idle_timeout = 180: maximum inactive time
-      before retiring session tracker { 1: }
+      before retiring session tracker { 1:max32 }
     * int stream.user_cache.max_sessions = 1024: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.user_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.user_cache.idle_timeout = 180: maximum inactive time
-      before retiring session tracker { 1: }
+      before retiring session tracker { 1:max32 }
     * int stream.file_cache.max_sessions = 128: maximum simultaneous
-      sessions tracked before pruning { 2: }
+      sessions tracked before pruning { 2:max32 }
     * int stream.file_cache.pruning_timeout = 30: minimum inactive time
-      before being eligible for pruning { 1: }
+      before being eligible for pruning { 1:max32 }
     * int stream.file_cache.idle_timeout = 180: maximum inactive time
-      before retiring session tracker { 1: }
-    * int stream.trace: mask for enabling debug traces in module
+      before retiring session tracker { 1:max32 }
+    * int stream.trace: mask for enabling debug traces in module {
+      0:max53 }
 
 Rules:
 
@@ -9312,7 +9318,7 @@ Usage: inspect
 
 Configuration:
 
     * int stream_icmp.session_timeout = 30: session tracking timeout {
-      1:86400 }
+      1:max31 }
 
 Peg counts:
 
@@ -9337,18 +9343,19 @@ Usage: inspect
 
 Configuration:
 
     * int stream_ip.max_frags = 8192: maximum number of simultaneous
-      fragments being tracked { 1: }
+      fragments being tracked { 1:max32 }
     * int stream_ip.max_overlaps = 0: maximum allowed overlaps per
-      datagram; 0 is unlimited { 0: }
+      datagram; 0 is unlimited { 0:max32 }
     * int stream_ip.min_frag_length = 0: alert if fragment length is
-      below this limit before or after trimming { 0: }
+      below this limit before or after trimming { 0:65535 }
     * int stream_ip.min_ttl = 1: discard fragments with TTL below the
       minimum { 1:255 }
     * enum stream_ip.policy = linux: fragment reassembly policy { first
       | linux | bsd | bsd_right | last | windows | solaris }
     * int stream_ip.session_timeout = 30: session tracking timeout {
-      1:86400 }
-    * int stream_ip.trace: mask for enabling debug traces in module
+      1:max31 }
+    * int stream_ip.trace: mask for enabling debug traces in module {
+      0:max53 }
 
 Rules:
 
@@ -9408,11 +9415,12 @@ Usage: inspect
 
 Configuration:
 
     * int stream_tcp.flush_factor = 0: flush upon seeing a drop in
-      segment size after given number of non-decreasing segments { 0: }
+      segment size after given number of non-decreasing segments {
+      0:65535 }
     * int stream_tcp.max_window = 0: maximum allowed TCP window {
       0:1073725440 }
     * int stream_tcp.overlap_limit = 0: maximum number of allowed
-      overlapping segments per session { 0:255 }
+      overlapping segments per session { 0:max32 }
     * int stream_tcp.max_pdu = 16384: maximum reassembled PDU size {
       1460:32768 }
     * enum stream_tcp.policy = bsd: determines operating system
@@ -9422,19 +9430,19 @@ Configuration:
     * bool stream_tcp.reassemble_async = true: queue data for
       reassembly before traffic is seen in both directions
     * int stream_tcp.require_3whs = -1: don’t track midstream sessions
-      after given seconds from start up; -1 tracks all { -1:86400 }
+      after given seconds from start up; -1 tracks all { -1:max31 }
     * bool stream_tcp.show_rebuilt_packets = false: enable cmg like
       output of reassembled packets
     * int stream_tcp.queue_limit.max_bytes = 1048576: don’t queue more
-      than given bytes per session and direction { 0: }
+      than given bytes per session and direction { 0:max32 }
     * int stream_tcp.queue_limit.max_segments = 2621: don’t queue more
-      than given segments per session and direction { 0: }
+      than given segments per session and direction { 0:max32 }
     * int stream_tcp.small_segments.count = 0: limit number of small
       segments queued { 0:2048 }
     * int stream_tcp.small_segments.maximum_size = 0: limit number of
       small segments queued { 0:2048 }
     * int stream_tcp.session_timeout = 30: session tracking timeout {
-      1:86400 }
+      1:max31 }
 
 Rules:
 
@@ -9533,7 +9541,7 @@ Usage: inspect
 
 Configuration:
 
     * int stream_udp.session_timeout = 30: session tracking timeout {
-      1:86400 }
+      1:max31 }
 
 Peg counts:
 
@@ -9559,8 +9567,9 @@ Usage: inspect
 
 Configuration:
 
     * int stream_user.session_timeout = 30: session tracking timeout {
-      1:86400 }
-    * int stream_user.trace: mask for enabling debug traces in module
+      1:max31 }
+    * int stream_user.trace: mask for enabling debug traces in module {
+      0:max53 }
 
 9.44. telnet
 
@@ -9576,7 +9585,7 @@ Usage: inspect
 
 Configuration:
 
     * int telnet.ayt_attack_thresh = -1: alert on this number of
-      consecutive Telnet AYT commands { -1: }
+      consecutive Telnet AYT commands { -1:max31 }
     * bool telnet.check_encrypted = false: check for end of encryption
     * bool telnet.encrypted_traffic = false: check for encrypted Telnet
       and FTP
@@ -9689,7 +9698,7 @@ Configuration:
     * enum reject.reset: send TCP reset to one or both ends { source|
       dest|both }
     * enum reject.control: send ICMP unreachable(s) { network|host|port
-      |all }
+      |forward|all }
 
 10.3. rewrite
 
@@ -9766,10 +9775,11 @@ Configuration:
       that is larger than a standard buffer
     * implied asn1.print: dump decode data to console; always true
     * int asn1.oversize_length: compares ASN.1 type lengths with the
-      supplied argument { 0: }
+      supplied argument { 0:max32 }
     * int asn1.absolute_offset: absolute offset from the beginning of
-      the packet { 0: }
-    * int asn1.relative_offset: relative offset from the cursor
+      the packet { 0:65535 }
+    * int asn1.relative_offset: relative offset from the cursor {
+      -65535:65535 }
 
 11.4. base64_decode
 
@@ -9786,9 +9796,9 @@ Usage: detect
 
 Configuration:
 
     * int base64_decode.bytes: number of base64 encoded bytes to decode
-      { 1: }
+      { 1:max32 }
     * int base64_decode.offset = 0: bytes past start of buffer to start
-      decoding { 0: }
+      decoding { 0:max32 }
     * implied base64_decode.relative: apply offset to cursor instead of
       start of buffer
@@ -9980,9 +9990,10 @@ Configuration:
     * implied content.fast_pattern: use this content in the fast
       pattern matcher instead of the content selected by default
     * int content.fast_pattern_offset = 0: number of leading characters
-      of this content the fast pattern matcher should exclude { 0: }
+      of this content the fast pattern matcher should exclude { 0:65535
+      }
     * int content.fast_pattern_length: maximum number of characters
-      from this content the fast pattern matcher should use { 1: }
+      from this content the fast pattern matcher should use { 1:65535 }
     * string content.offset: var or number of bytes from start of
       buffer to start search
     * string content.depth: var or maximum number of bytes to search
@@ -10068,9 +10079,9 @@ Configuration:
     * enum detection_filter.track: track hits by source or destination
       IP address { by_src | by_dst }
     * int detection_filter.count: hits in interval before allowing the
-      rule to fire { 1: }
+      rule to fire { 1:max32 }
     * int detection_filter.seconds: length of interval to count hits {
-      1: }
+      1:max32 }
 
 11.17. dnp3_data
 
@@ -10276,7 +10287,7 @@ Usage: detect
 
 Configuration:
 
-    * int gid.~: generator id { 1: }
+    * int gid.~: generator id { 1:max32 }
 
 11.30. gtp_info
 
@@ -10976,7 +10987,7 @@ Usage: detect
 
 Configuration:
 
     * int priority.~: relative severity level; 1 is highest priority {
-      1: }
+      1:max31 }
 
 11.71. raw_data
 
@@ -11071,7 +11082,7 @@ Usage: detect
 
 Configuration:
 
-    * int rev.~: revision { 1: }
+    * int rev.~: revision { 1:max32 }
 
 11.77. rpc
 
@@ -11086,7 +11097,7 @@ Usage: detect
 
 Configuration:
 
-    * int rpc.~app: application number
+    * int rpc.~app: application number { 0:max32 }
     * string rpc.~ver: version number or * for any
     * string rpc.~proc: procedure number or * for any
@@ -11104,7 +11115,8 @@ Usage: detect
 
 Configuration:
 
     * string sd_pattern.~pattern: The pattern to search for
-    * int sd_pattern.threshold: number of matches before alerting { 1 }
+    * int sd_pattern.threshold = 1: number of matches before alerting {
+      1:max32 }
 
 Peg counts:
 
@@ -11212,7 +11224,7 @@ Usage: detect
 
 Configuration:
 
-    * int sid.~: signature id { 1: }
+    * int sid.~: signature id { 1:max32 }
 
 11.85. sip_body
 
@@ -11265,7 +11277,7 @@ Usage: detect
 
 Configuration:
 
-    * int sip_stat_code.*code: stat code { 1:999 }
+    * int sip_stat_code.*code: status code { 1:999 }
 
 11.89. so
 
@@ -11408,9 +11420,9 @@ Configuration:
     * enum tag.~: log all packets in session or all packets to or from
       host { session|host_src|host_dst }
-    * int tag.packets: tag this many packets { 1: }
-    * int tag.seconds: tag for this many seconds { 1: }
-    * int tag.bytes: tag for this many bytes { 1: }
+    * int tag.packets: tag this many packets { 1:max32 }
+    * int tag.seconds: tag for this many seconds { 1:max32 }
+    * int tag.bytes: tag for this many bytes { 1:max32 }
 
 11.96. target
 
@@ -11565,7 +11577,7 @@ Configuration:
       tcp_len | tcp_seq | tcp_win | timestamp | tos | ttl | udp_len |
       vlan }
     * int alert_csv.limit = 0: set maximum size in MB before rollover
-      (0 is unlimited) { 0: }
+      (0 is unlimited) { 0:maxSZ }
     * string alert_csv.separator = , : separate fields with this
       character sequence
@@ -11602,7 +11614,7 @@ Configuration:
       stdout
     * bool alert_fast.packet = false: output packet dump with alert
     * int alert_fast.limit = 0: set maximum size in MB before rollover
-      (0 is unlimited) { 0: }
+      (0 is unlimited) { 0:maxSZ }
 
 14.4. alert_full
 
@@ -11620,7 +11632,7 @@ Configuration:
     * bool alert_full.file = false: output to alert_full.txt instead of
       stdout
     * int alert_full.limit = 0: set maximum size in MB before rollover
-      (0 is unlimited) { 0: }
+      (0 is unlimited) { 0:maxSZ }
 
 14.5. alert_json
 
@@ -11648,7 +11660,7 @@ Configuration:
       tcp_len | tcp_seq | tcp_win | timestamp | tos | ttl | udp_len |
       vlan }
     * int alert_json.limit = 0: set maximum size in MB before rollover
-      (0 is unlimited) { 0: }
+      (0 is unlimited) { 0:maxSZ }
     * string alert_json.separator = , : separate fields with this
       character sequence
@@ -11666,8 +11678,8 @@ Usage: context
 
 Configuration:
 
     * string alert_sfsocket.file: name of unix socket file
-    * int alert_sfsocket.rules[].gid = 1: rule generator ID { 1: }
-    * int alert_sfsocket.rules[].sid = 1: rule signature ID { 1: }
+    * int alert_sfsocket.rules[].gid = 1: rule generator ID { 1:max32 }
+    * int alert_sfsocket.rules[].sid = 1: rule signature ID { 1:max32 }
 
 14.7. alert_syslog
 
@@ -11737,8 +11749,9 @@ Configuration:
     * bool log_hext.raw = false: output all full packets if true, else
       just TCP payload
     * int log_hext.limit = 0: set maximum size in MB before rollover (0
-      is unlimited) { 0: }
-    * int log_hext.width = 20: set line width (0 is unlimited) { 0: }
+      is unlimited) { 0:maxSZ }
+    * int log_hext.width = 20: set line width (0 is unlimited) {
+      0:max32 }
 
 14.11. log_pcap
 
@@ -11754,7 +11767,7 @@ Usage: context
 
 Configuration:
 
     * int log_pcap.limit = 0: set maximum size in MB before rollover (0
-      is unlimited) { 0: }
+      is unlimited) { 0:maxSZ }
 
 14.12. unified2
 
@@ -11772,7 +11785,7 @@ Configuration:
     * bool unified2.legacy_events = false: generate Snort 2.X style
       events for barnyard2 compatibility
     * int unified2.limit = 0: set maximum size in MB before rollover (0
-      is unlimited) { 0: }
+      is unlimited) { 0:maxSZ }
     * bool unified2.nostamp = true: append file creation time to name
       (in Unix Epoch format)
@@ -12574,8 +12587,6 @@ Converts the Snort configuration file specified by the -c or
     * --output-file= Same as -o. output the new Snort++ lua
       configuration to
     * --print-all Same as -a. default option. print all data
-    * --print-binding-order Print sorting priority used when generating
-      binder table
     * --print-differences Same as -d. output the differences, and only
      the differences, between the Snort and Snort++ configurations to
       the
@@ -13830,28 +13841,27 @@ these libraries see the Getting Started section of the manual.
     * -L logging mode (none, dump, pcap, or log_*)
     * -l log to this directory instead of current directory
     * -M log messages to syslog (not alerts)
-    * -m set umask = (0:)
-    * -n stop after count packets (0:)
+    * -m set the process file mode creation mask (0x000:0x1FF)
+    * -n stop after count packets (0:max53)
     * -O obfuscate the logged IP addresses
     * -Q enable inline mode operation
     * -q quiet mode - Don’t show banner and status report
     * -R include this rules file in the default policy
     * -r … (same as --pcap-list)
     * -S set config variable x equal to value v
-    * -s (same as --snaplen); default is 1514 (68:65535)
+    * -s (same as --snaplen); default is 1518 (68:65535)
     * -T test and report on the current Snort configuration
     * -t chroots process to after initialization
     * -U use UTC for timestamps
     * -u run snort as or after initialization
     * -V (same as --version)
     * -v be verbose
-    * -W lists available interfaces
     * -X dump the raw packet data starting at the link layer
     * -x same as --pedantic
     * -y include year in timestamp in the alert and log files
     * -z maximum number of packet threads (same as
       --max-packet-threads); 0 gets the number of CPU cores reported by
-      the system; default is 1 (0:)
+      the system; default is 1 (0:max32)
     * --alert-before-pass process alert, drop, sdrop, or reject before
       pass; default is pass before alert, drop,…
     * --bpf are standard BPF options, as seen in
@@ -13883,6 +13893,7 @@ these libraries see the Getting Started section of the manual.
       (optional)
     * --help-counts [] output matching peg counts (optional)
+    * --help-limits print the int upper bounds denoted by max*
     * --help-module output description of given module
     * --help-modules list all available modules with brief help
     * --help-options [