Snort Integration for Elastic
| Version | 1.21.1 (View all) |
|---|---|
| Subscription level | Basic |
| Developed by | Elastic |
| Ingestion method(s) | File, Network Protocol |
| Minimum Kibana version(s) | 9.0.0, 8.11.0 |
This AI-assisted guide was validated by our engineers. You may need to adjust the steps to match your environment.
The Snort integration for Elastic enables you to collect logs from Snort, a leading open-source Intrusion Prevention System (IPS). You can monitor network traffic in real-time to detect security threats, policy violations, and unauthorized access attempts. By ingesting Snort logs, you'll gain visibility into network activity, audit security events, and troubleshoot network issues.
This integration has been developed and tested against Snort versions 2.9 and 3.0. It's expected to work with other versions that use the supported output formats.
This integration is compatible with Elastic Stack version 8.11.0 or later.
The integration collects logs from your Snort instances by deploying an Elastic Agent on a host that has access to the log data. Once you've configured the agent, it forwards the logs to your Elastic deployment, where they're parsed and enriched with relevant metadata before being indexed for analysis in the log data stream.
You can configure the agent to receive data in two ways:
- Log file monitoring: You configure the Elastic Agent to read logs directly from Snort's output log files on the local filesystem.
- Syslog: You configure Snort to send logs to a syslog server, and the Elastic Agent listens for these logs on a specified UDP port.
The Snort integration collects log messages containing information about network traffic and security events. You can ingest data from various versions of Snort and specific environments like pfSense.
The Snort integration collects log messages of the following types:
- Intrusion detection logs: High-priority alerts generated when network traffic matches Snort's rule definitions.
- Network metadata: Protocol information, source and destination IP addresses, and port numbers associated with security events.
- Alert formats: Support for JSON (Snort 3), CSV (pfSense), and Alert Fast (legacy Snort) formats.
- Network packets: Captured packet data for deep inspection and analysis.
- Protocol analysis data: Detailed information about network protocols and session states.
This data is collected and processed into the log data stream.
Integrating Snort logs with the Elastic Stack provides visibility into your network security posture. You can use this integration for the following use cases:
- Intrusion detection: Monitor network traffic in real-time to detect unauthorized access attempts, policy violations, and other security threats.
- Network traffic analysis: Identify malicious patterns and anomalies by visualizing network traffic and security alerts in Kibana.
- Incident response: Accelerate incident investigation by correlating Snort alerts with other security and observability data sources within the Elastic Stack.
- Compliance monitoring: Ensure adherence to security policies and regulatory requirements by logging and auditing security violations.
To use this integration, you'll need the following:
- An active installation of Snort v2.9, v3.0, or later.
- Root or `sudo` privileges on the Snort host to modify configuration files and restart the service.
- Read permissions for the Elastic Agent to access the Snort log directory, for example, `/var/log/snort/`.
- Network connectivity that allows the Snort host to reach the Elastic Agent over the configured UDP port (the default is `9514`) if you're using the UDP/Syslog method.
- Connectivity between the Elastic Agent and the Elasticsearch cluster to ship collected data.
Elastic Agent must be installed. For more details, check the Elastic Agent installation instructions. You can install only one Elastic Agent per host.
Elastic Agent is required to stream data from the syslog or log file receiver and ship the data to Elastic, where the events will then be processed using the integration's ingest pipelines.
You'll need to configure Snort to output logs in a format that the agent can read. We recommend using JSON for Snort 3.
Follow these steps to configure Snort 3 using Lua:
- Open the `snort.lua` configuration file, typically located at `/usr/local/etc/snort/snort.lua` or `/etc/snort/snort.lua`.
- To output high-fidelity logs for the Elastic Agent, add the following JSON logging block:
  ```lua
  alert_json =
  {
      file = true,
      fields = 'timestamp pkt_num proto pkt_gen pkt_len dir src_addr src_port dst_addr dst_port service rule action class b64_data',
      limit = 100
  }
  ```
- Alternatively, for the Alert Fast format, add this block:
  ```lua
  alert_fast =
  {
      file = true,
      packet = false,
      limit = 100
  }
  ```
- If you're forwarding using UDP, add this syslog block:
  ```lua
  alert_syslog =
  {
      facility = 'local5',
      level = 'alert'
  }
  ```
- Validate the configuration by running `snort -c /usr/local/etc/snort/snort.lua -T`. If it's successful, restart Snort using `sudo systemctl restart snort`.
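Before pointing the agent at the file, you can sanity-check that a line of `alert_json` output parses as valid JSON. The sample line below is illustrative; the exact keys depend on the `fields` list you configured above.

```python
import json

# Illustrative alert_json line; real field names depend on the 'fields'
# list configured in snort.lua (timestamp, proto, src_addr, and so on).
sample = (
    '{"timestamp": "09/05-16:02:55.000000", "proto": "ICMP", '
    '"src_addr": "10.50.10.88", "src_port": 0, '
    '"dst_addr": "175.16.199.1", "dst_port": 0, '
    '"rule": "1:1000015:0", "action": "allow"}'
)

alert = json.loads(sample)
gid, sid, rev = alert["rule"].split(":")
print(f'{alert["proto"]} {alert["src_addr"]} -> {alert["dst_addr"]} (sid {sid})')
```

If `json.loads` raises an error on a real log line, check that the `alert_json` block (and not a plain-text format) is the one writing the file.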
Follow these steps to configure Snort 2.9:
- Open the `snort.conf` file, usually found at `/etc/snort/snort.conf`.
- Locate the output section and uncomment or add the following line to enable Alert Fast:
  ```
  output alert_fast: alert.fast
  ```
- To send alerts to a local syslog facility for forwarding, add this line:
  ```
  output alert_syslog: LOG_LOCAL5 LOG_ALERT
  ```
- Apply the changes by running `sudo service snort restart` or `sudo systemctl restart snort`.
If you're using the UDP input, follow these steps to forward logs using Rsyslog:
- Create a configuration file at `/etc/rsyslog.d/50-snort.conf`.
- Add the following line to forward the `local5` facility to the Elastic Agent (replace `<ELASTIC_AGENT_IP_ADDRESS>` with your actual value):
  ```
  local5.* @<ELASTIC_AGENT_IP_ADDRESS>:9514
  ```
- Restart the rsyslog daemon to apply the forwarding rule: `sudo systemctl restart rsyslog`.
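If you want to test the UDP path without involving rsyslog, a short script can emit a Snort-style syslog message directly at the agent's listener. The target address here is an assumption to replace with your own; the `<169>` priority encodes facility `local5` with severity `alert` (21 × 8 + 1).

```python
import socket

# Hypothetical target; replace with your Elastic Agent address and UDP port.
AGENT_ADDR = ("127.0.0.1", 9514)

# A syslog-style line mimicking Snort's alert_syslog output (facility local5).
message = (
    "<169>Sep  5 16:02:55 dev snort: [1:1000015:0] Pinging... "
    "[Classification: Misc activity] [Priority: 3] {ICMP} "
    "10.50.10.88 -> 175.16.199.1"
)

# UDP is connectionless: sendto succeeds even if nothing is listening,
# so this only verifies the path from this host outward.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(message.encode("utf-8"), AGENT_ADDR)
sock.close()
print(f"sent {sent} bytes")
```

After sending, check Discover for the event; if nothing arrives, the problem is on the network path or the agent's input configuration rather than in Snort.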
You can find more details in the official Snort documentation.
Follow these steps to set up the integration in Kibana:
- In Kibana, navigate to Management > Integrations.
- Search for "Snort" and select the integration.
- Click Add Snort.
- Configure the integration by selecting an input type and providing the necessary settings.
This input collects logs directly from file paths on your host. Configure the following settings:
- `paths`: The list of paths to Snort log files (for example, `['/var/log/snort/alert.log']`).
- `multiline_full`: Set to `true` if you're reading the Snort "Alert Full" log format, which spans multiple lines. Default is `false`.
- `internal_networks`: Specify the internal IP subnet(s) of your network (for example, `['10.0.0.0/8']`). Default is `['private']`.
- `tz_offset`: Set the timezone offset (for example, `"Europe/Amsterdam"`, `"EST"`, or `"-05:00"`) if logs are from a different timezone than the host. Default is `local`.
- `preserve_original_event`: If enabled, a raw copy of the original event is added to the `event.original` field. Default is `false`.
- `tags`: Custom tags to add to the events. Default is `['forwarded', 'snort.log']`.
- `processors`: Add optional processors to filter or enhance data before it leaves the agent.
This input collects logs sent over the network using UDP. Configure the following settings:
- `syslog_host`: The interface address to listen on for UDP traffic (for example, `localhost`).
- `syslog_port`: The UDP port to listen on (for example, `9514`).
- `internal_networks`: Specify the internal IP subnet(s) of your network. Default is `['private']`.
- `tz_offset`: Set the timezone offset for correct datetime parsing. Default is `local`.
- `preserve_original_event`: If enabled, preserves the raw original event in `event.original`. Default is `false`.
- `tags`: Custom tags to add to the events. Default is `['forwarded', 'snort.log']`.
- `udp_options`: Specify custom configuration options like `read_buffer`, `max_message_size`, or `timeout`.
- `processors`: Add optional processors to enhance or reduce fields in the exported event.
After configuring the input, click Save and continue to deploy the integration to your Elastic Agent policy.
Follow these steps to verify that data is flowing correctly:
Trigger data flow on Snort using one of these methods:
- Generate Test Alert: Use a tool like `curl` to access a known malicious URI if you have rules for it, or use `nmap` to perform a basic scan against the interface Snort is monitoring.
- Trigger ICMP Alert: If ICMP rules are active, perform a ping sweep or a large packet ping: `ping -s 1500 <target_ip>`.
- Manual Log Entry: For testing the logfile input, append a test entry to the monitored file:
  ```
  echo "01/01-12:00:00.000000 [**] [1:1000001:1] TEST ALERT [**] [Priority: 0] {TCP} 192.168.1.1:12345 -> 192.168.1.2:80" >> /var/log/snort/alert.log
  ```
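To see roughly what gets extracted from an Alert Fast line such as the manual test entry above, you can exercise a comparable pattern locally. This regex is an illustrative approximation, not the integration's actual grok definition, which lives in the ingest pipeline.

```python
import re

# Alert Fast line in the same shape as the manual test entry above.
line = ('01/01-12:00:00.000000 [**] [1:1000001:1] TEST ALERT [**] '
        '[Priority: 0] {TCP} 192.168.1.1:12345 -> 192.168.1.2:80')

# Rough sketch of the fields the pipeline pulls out of this format.
pattern = re.compile(
    r'(?P<ts>\S+) \[\*\*\] \[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\] '
    r'(?P<msg>.*?) \[\*\*\] (?:\[Classification: (?P<classification>[^\]]+)\] )?'
    r'\[Priority: (?P<priority>\d+)\] \{(?P<proto>\w+)\} '
    r'(?P<src>[\d.]+):(?P<sport>\d+) -> (?P<dst>[\d.]+):(?P<dport>\d+)'
)

m = pattern.match(line)
print(m.group("sid"), m.group("proto"), m.group("src"), "->", m.group("dst"))
```

The named groups map loosely onto ECS fields such as `rule.id`, `network.transport`, `source.ip`, and `destination.ip`.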
Check for data in Kibana:
- Navigate to Analytics > Discover.
- Select the `logs-*` data view.
- Enter the KQL filter: `data_stream.dataset : "snort.log"`.
- Verify logs appear. Expand a log entry and confirm these fields:
  - `event.dataset` (should be `snort.log`)
  - `source.ip` and/or `destination.ip`
  - `event.action`, `event.outcome`, or `event.type`
For help with Elastic ingest tools, check Common problems.
The following issues are commonly encountered when configuring or running the Snort integration:
Snort fails to start due to configuration errors:
- Run Snort in test mode to identify and resolve issues:
  ```
  # Replace /path/to/snort.conf with your actual configuration file path
  snort -T -c /path/to/snort.conf
  ```
No alerts are being generated:
- Verify that Snort is monitoring the correct network interface.
- Ensure the relevant rules are enabled in your `snort.conf` or `snort.lua` file.
Logs are not appearing when using the logfile input:
- Ensure the Elastic Agent user has read access to the Snort log directory (for example, `/var/log/snort/`).
- Check file permissions and security frameworks that may be blocking read access (for example, SELinux or AppArmor).
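A quick way to check those permissions is a small script run as the same user the agent runs under. The `/var/log/snort` path is an example; adjust it to your installation.

```python
import os

def agent_can_read(path: str) -> bool:
    """Return True if the current user can read every file under path."""
    # The directory itself needs read and traverse (execute) permission.
    if not os.access(path, os.R_OK | os.X_OK):
        return False
    for root, _dirs, files in os.walk(path):
        for name in files:
            if not os.access(os.path.join(root, name), os.R_OK):
                return False
    return True

# Hypothetical log directory; adjust to your Snort installation.
print(agent_can_read("/var/log/snort"))
```

Note that `os.access` reflects classic Unix permissions only; SELinux or AppArmor can still deny reads that this check reports as allowed.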
UDP input port binding conflicts:
- If the Elastic Agent fails to start when using the UDP input, check if another service is already using port 9514:
  ```
  # Check for services listening on port 9514
  netstat -tulpn | grep 9514
  ```
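You can also test whether the port is bindable directly, which fails in the same way the agent's listener would if another service holds the port. This is a standalone sketch, not something the integration ships.

```python
import socket

def udp_port_free(port: int, host: str = "0.0.0.0") -> bool:
    """Try to bind the UDP port; True means no other service holds it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.bind((host, port))
    except OSError:
        return False
    finally:
        sock.close()
    return True

print(udp_port_free(9514))
```

If this prints `False`, stop or reconfigure the conflicting service, or choose a different `syslog_port` in the integration settings.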
Snort output is not enabled:
- Confirm that the `output` directive in `snort.conf`, or the alert module (such as `alert_json` or `alert_fast`) in `snort.lua`, is correctly configured and not commented out, as some installations don't enable disk logging by default.
Logs appear in Kibana but are not parsed correctly:
- Check for the `_grokparsefailure` value in the `tags` field in Discover.
- Verify that the Snort log format (such as Alert Fast or JSON) matches the configuration you selected in the integration settings.
Events display incorrect timestamps:
- Verify the `tz_offset` setting in your integration configuration. This is often necessary if the Snort sensor is in a different timezone than the host or the Elastic Stack.
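The effect of a wrong offset is easy to demonstrate: a Snort 2 syslog timestamp carries no year or timezone, so the same wall-clock time maps to UTC instants hours apart depending on the zone assumed. A minimal sketch:

```python
from datetime import datetime, timezone, timedelta

# A Snort 2 syslog timestamp has no year or timezone; the pipeline
# must assume both, and tz_offset supplies the missing zone.
raw = "Sep  5 16:02:55"
naive = datetime.strptime(raw, "%b %d %H:%M:%S").replace(year=2022)

# The same wall-clock time interpreted in UTC versus UTC-5.
as_utc = naive.replace(tzinfo=timezone.utc)
as_est = naive.replace(tzinfo=timezone(timedelta(hours=-5)))
print(as_est - as_utc)  # the two readings are 5 hours apart
```

If events in Kibana appear shifted by a constant number of hours, this mismatch is the usual cause.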
To ensure optimal performance in high-volume environments, consider the following:
- For network-based collection using the `udp` input, ensure the network path between Snort and the Elastic Agent has sufficient bandwidth and low latency. While UDP offers high performance for syslog transmission, it doesn't guarantee delivery. In environments where log reliability is critical, it's recommended to use the `logfile` input with an Elastic Agent installed locally on the Snort host to read directly from the disk.
- To manage high volumes of log data and reduce processing overhead, use Snort's internal `threshold` and `suppression` configuration to limit the number of alerts generated by noisy rules. Additionally, ensure that only necessary log formats (for example, JSON or Alert Fast) are enabled at the source to prevent redundant data ingestion and storage.
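For example, in Snort 2 a `threshold.conf` entry can silence or rate-limit a rule at the source before it ever reaches the agent. The `sig_id` values below are placeholders; Snort 3 accepts equivalent `suppress` and `event_filter` tables in `snort.lua`.

```
# Drop all alerts from a known-noisy rule entirely:
suppress gen_id 1, sig_id 1000001

# Rate-limit another rule to one alert per source address per minute:
event_filter gen_id 1, sig_id 1000002, type limit, track by_src, count 1, seconds 60
```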
For more information on architectures that can be used for scaling this integration, check the Ingest Architectures documentation.
These inputs can be used with this integration:
logfile
For more details about the logfile input settings, check the Filebeat documentation.
To collect logs via logfile, select Collect logs via the logfile input and configure the following parameter:
- Paths: List of glob-based paths to crawl and fetch log files from. Supports glob patterns like `/var/log/*.log` or `/var/log/*/*.log` for subfolder matching. Each file found starts a separate harvester.
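You can preview which files a pattern would pick up before saving the policy. Note that each `*` matches exactly one path level, so `/var/log/*/*.log` reaches one subdirectory down but is not fully recursive. A small sketch (the pattern is an example):

```python
import glob

def matching_files(pattern: str) -> list:
    """Files the logfile input's glob pattern would pick up."""
    return sorted(glob.glob(pattern))

# Hypothetical layout; adjust the pattern to your Snort log directory.
print(matching_files("/var/log/snort/*.log"))
```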
udp
For more details about the UDP input settings, check the Filebeat documentation.
To collect logs via UDP, select Collect logs via UDP and configure the following parameters:
Required Settings:
- Host
- Port
Common Optional Settings:
- Max Message Size - Maximum size of UDP packets to accept (default: 10KB, max: 64KB)
- Read Buffer - UDP socket read buffer size for handling bursts of messages
- Read Timeout - How long to wait for incoming packets before checking for shutdown
The following external resources provide more information about Snort:
- Official Snort Documentation
- Snort FAQ
- Alert Logging - Snort 3 Rule Writing Guide
- Configuration - Snort 3 Rule Writing Guide
The log data stream collects all log types from Snort. This includes intrusion detection logs, network metadata, and various alert formats such as JSON, CSV, and Alert Fast.
Exported fields
| Field | Description | Type |
|---|---|---|
| @timestamp | Date/time when the event originated. This is the date/time extracted from the event, typically representing when the event was generated by the source. If the event source has no original timestamp, this value is typically populated by the first time the event was received by the pipeline. Required field for all events. | date |
| cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
| cloud.availability_zone | Availability zone in which this host is running. | keyword |
| cloud.image.id | Image ID for the cloud instance. | keyword |
| cloud.instance.id | Instance ID of the host machine. | keyword |
| cloud.instance.name | Instance name of the host machine. | keyword |
| cloud.machine.type | Machine type of the host machine. | keyword |
| cloud.project.id | Name of the project in Google Cloud. | keyword |
| cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
| cloud.region | Region in which this host is running. | keyword |
| container.id | Unique container id. | keyword |
| container.image.name | Name of the image the container was built on. | keyword |
| container.labels | Image labels. | object |
| container.name | Container name. | keyword |
| data_stream.dataset | Data stream dataset. | constant_keyword |
| data_stream.namespace | Data stream namespace. | constant_keyword |
| data_stream.type | Data stream type. | constant_keyword |
| destination.address | Some event destination addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the .address field. Then it should be duplicated to .ip or .domain, depending on which one it is. | keyword |
| destination.as.number | Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. | long |
| destination.as.organization.name | Organization name. | keyword |
| destination.as.organization.name.text | Multi-field of destination.as.organization.name. | match_only_text |
| destination.bytes | Bytes sent from the destination to the source. | long |
| destination.domain | The domain name of the destination system. This value may be a host name, a fully qualified domain name, or another host naming format. The value may derive from the original event or be added from enrichment. | keyword |
| destination.geo.city_name | City name. | keyword |
| destination.geo.continent_name | Name of the continent. | keyword |
| destination.geo.country_iso_code | Country ISO code. | keyword |
| destination.geo.country_name | Country name. | keyword |
| destination.geo.location | Longitude and latitude. | geo_point |
| destination.geo.region_iso_code | Region ISO code. | keyword |
| destination.geo.region_name | Region name. | keyword |
| destination.ip | IP address of the destination (IPv4 or IPv6). | ip |
| destination.mac | MAC address of the destination. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. | keyword |
| destination.packets | Packets sent from the destination to the source. | long |
| destination.port | Port of the destination. | long |
| ecs.version | ECS version this event conforms to. ecs.version is a required field and must exist in all events. When querying across multiple indices -- which may conform to slightly different ECS versions -- this field lets integrations adjust to the schema version of the events. | keyword |
| event.category | This is one of four ECS Categorization Fields, and indicates the second level in the ECS category hierarchy. event.category represents the "big buckets" of ECS categories. For example, filtering on event.category:process yields all events relating to process activity. This field is closely related to event.type, which is used as a subcategory. This field is an array. This will allow proper categorization of some events that fall in multiple categories. | keyword |
| event.created | event.created contains the date/time when the event was first read by an agent, or by your pipeline. This field is distinct from @timestamp in that @timestamp typically contains the time extracted from the original event. In most situations, these two timestamps will be slightly different. The difference can be used to calculate the delay between your source generating an event, and the time when your agent first processed it. This can be used to monitor your agent's or pipeline's ability to keep up with your event source. In case the two timestamps are identical, @timestamp should be used. | date |
| event.dataset | Event dataset | constant_keyword |
| event.kind | This is one of four ECS Categorization Fields, and indicates the highest level in the ECS category hierarchy. event.kind gives high-level information about what type of information the event contains, without being specific to the contents of the event. For example, values of this field distinguish alert events from metric events. The value of this field can be used to inform how these kinds of events should be handled. They may warrant different retention, different access control, it may also help understand whether the data is coming in at a regular interval or not. | keyword |
| event.module | Event module | constant_keyword |
| event.original | Raw text message of entire event. Used to demonstrate log integrity or where the full log message (before splitting it up in multiple parts) may be required, e.g. for reindex. This field is not indexed and doc_values are disabled. It cannot be searched, but it can be retrieved from _source. If users wish to override this and index this field, please see Field data types in the Elasticsearch Reference. | keyword |
| event.outcome | This is one of four ECS Categorization Fields, and indicates the lowest level in the ECS category hierarchy. event.outcome simply denotes whether the event represents a success or a failure from the perspective of the entity that produced the event. Note that when a single transaction is described in multiple events, each event may populate different values of event.outcome, according to their perspective. Also note that in the case of a compound event (a single event that contains multiple logical events), this field should be populated with the value that best captures the overall success or failure from the perspective of the event producer. Further note that not all events will have an associated outcome. For example, this field is generally not populated for metric events, events with event.type:info, or any events for which an outcome does not make logical sense. | keyword |
| event.severity | The numeric severity of the event according to your event source. What the different severity values mean can be different between sources and use cases. It's up to the implementer to make sure severities are consistent across events from the same source. The Syslog severity belongs in log.syslog.severity.code. event.severity is meant to represent the severity according to the event source (e.g. firewall, IDS). If the event source does not publish its own severity, you may optionally copy the log.syslog.severity.code to event.severity. | long |
| event.timezone | This field should be populated when the event's timestamp does not include timezone information already (e.g. default Syslog timestamps). It's optional otherwise. Acceptable timezone formats are: a canonical ID (e.g. "Europe/Amsterdam"), abbreviated (e.g. "EST") or an HH:mm differential (e.g. "-05:00"). | keyword |
| event.type | This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. event.type represents a categorization "sub-bucket" that, when used along with the event.category field values, enables filtering events down to a level appropriate for single visualization. This field is an array. This will allow proper categorization of some events that fall in multiple event types. | keyword |
| host.architecture | Operating system architecture. | keyword |
| host.containerized | If the host is a container. | boolean |
| host.domain | Name of the domain of which the host is a member. For example, on Windows this could be the host's Active Directory domain or NetBIOS domain name. For Linux this could be the domain of the host's LDAP provider. | keyword |
| host.hostname | Hostname of the host. It normally contains what the hostname command returns on the host machine. | keyword |
| host.id | Unique host id. As hostname is not always unique, use values that are meaningful in your environment. Example: The current usage of beat.name. | keyword |
| host.ip | Host ip addresses. | ip |
| host.mac | Host mac addresses. | keyword |
| host.name | Name of the host. It can contain what hostname returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. | keyword |
| host.os.build | OS build information. | keyword |
| host.os.codename | OS codename, if any. | keyword |
| host.os.family | OS family (such as redhat, debian, freebsd, windows). | keyword |
| host.os.kernel | Operating system kernel version as a raw string. | keyword |
| host.os.name | Operating system name, without the version. | keyword |
| host.os.name.text | Multi-field of host.os.name. | text |
| host.os.platform | Operating system platform (such as centos, ubuntu, windows). | keyword |
| host.os.version | Operating system version as a raw string. | keyword |
| host.type | Type of host. For Cloud providers this can be the machine type like t2.medium. If vm, this could be the container, for example, or other information meaningful in your environment. | keyword |
| input.type | Input type | keyword |
| log.file.path | Full path to the log file this event came from, including the file name. It should include the drive letter, when appropriate. If the event wasn't read from a log file, do not populate this field. | keyword |
| log.flags | Flags for the log file. | keyword |
| log.offset | Log offset | long |
| log.source.address | Source address from which the log event was read / sent from. | keyword |
| message | For log events the message field contains the log message, optimized for viewing in a log viewer. For structured logs without an original message field, other fields can be concatenated to form a human-readable summary of the event. If multiple messages exist, they can be combined into one message. | match_only_text |
| network.bytes | Total bytes transferred in both directions. If source.bytes and destination.bytes are known, network.bytes is their sum. | long |
| network.community_id | A hash of source and destination IPs and ports, as well as the protocol used in a communication. This is a tool-agnostic standard to identify flows. Learn more at https://github.com/corelight/community-id-spec. | keyword |
| network.direction | Direction of the network traffic. When mapping events from a host-based monitoring context, populate this field from the host's point of view, using the values "ingress" or "egress". When mapping events from a network or perimeter-based monitoring context, populate this field from the point of view of the network perimeter, using the values "inbound", "outbound", "internal" or "external". Note that "internal" is not crossing perimeter boundaries, and is meant to describe communication between two hosts within the perimeter. Note also that "external" is meant to describe traffic between two hosts that are external to the perimeter. This could for example be useful for ISPs or VPN service providers. | keyword |
| network.iana_number | IANA Protocol Number (https://www.iana.org/assignments/protocol-numbers/protocol-numbers.xhtml). Standardized list of protocols. This aligns well with NetFlow and sFlow related logs which use the IANA Protocol Number. | keyword |
| network.packets | Total packets transferred in both directions. If source.packets and destination.packets are known, network.packets is their sum. | long |
| network.protocol | In the OSI Model this would be the Application Layer protocol. For example, http, dns, or ssh. The field value must be normalized to lowercase for querying. | keyword |
| network.transport | Same as network.iana_number, but instead using the Keyword name of the transport layer (udp, tcp, ipv6-icmp, etc.) The field value must be normalized to lowercase for querying. | keyword |
| network.type | In the OSI Model this would be the Network Layer. ipv4, ipv6, ipsec, pim, etc. The field value must be normalized to lowercase for querying. | keyword |
| network.vlan.id | VLAN ID as reported by the observer. | keyword |
| observer.ingress.interface.name | Interface name as reported by the system. | keyword |
| observer.ip | IP addresses of the observer. | ip |
| observer.name | Custom name of the observer. This is a name that can be given to an observer. This can be helpful for example if multiple firewalls of the same model are used in an organization. If no custom name is needed, the field can be left empty. | keyword |
| observer.product | The product name of the observer. | keyword |
| observer.type | The type of the observer the data is coming from. There is no predefined list of observer types. Some examples are forwarder, firewall, ids, ips, proxy, poller, sensor, APM server. | keyword |
| observer.vendor | Vendor name of the observer. | keyword |
| process.name | Process name. Sometimes called program name or similar. | keyword |
| process.name.text | Multi-field of process.name. | match_only_text |
| process.pid | Process id. | long |
| related.ip | All of the IPs seen on your event. | ip |
| rule.category | A categorization value keyword used by the entity using the rule for detection of this event. | keyword |
| rule.description | The description of the rule generating the event. | keyword |
| rule.id | A rule ID that is unique within the scope of an agent, observer, or other entity using the rule for detection of this event. | keyword |
| rule.name | The name of the rule or signature generating the event. | keyword |
| rule.version | The version / revision of the rule being used for analysis. | keyword |
| snort.dgm.length | Length of the datagram. | long |
| snort.eth.length | Length of the Ethernet header and payload. | long |
| snort.gid | The gid keyword (generator id) is used to identify what part of Snort generates the event when a particular rule fires. | long |
| snort.icmp.code | ICMP code. | long |
| snort.icmp.id | ID of the echo request/reply | long |
| snort.icmp.seq | ICMP sequence number. | long |
| snort.icmp.type | ICMP type. | long |
| snort.ip.flags | IP flags. | keyword |
| snort.ip.id | ID of the packet | long |
| snort.ip.length | Length of the IP header and payload. | long |
| snort.ip.tos | IP Type of Service identification. | long |
| snort.ip.ttl | Time To Live (TTL) of the packet | long |
| snort.tcp.ack | TCP Acknowledgment number. | long |
| snort.tcp.flags | TCP flags. | keyword |
| snort.tcp.length | Length of the TCP header and payload. | long |
| snort.tcp.seq | TCP sequence number. | long |
| snort.tcp.window | Advertised TCP window size. | long |
| snort.udp.length | Length of the UDP header and payload. | long |
| source.address | Some event source addresses are defined ambiguously. The event will sometimes list an IP, a domain or a unix socket. You should always store the raw address in the .address field. Then it should be duplicated to .ip or .domain, depending on which one it is. | keyword |
| source.as.number | Unique number allocated to the autonomous system. The autonomous system number (ASN) uniquely identifies each network on the Internet. | long |
| source.as.organization.name | Organization name. | keyword |
| source.as.organization.name.text | Multi-field of source.as.organization.name. | match_only_text |
| source.bytes | Bytes sent from the source to the destination. | long |
| source.geo.city_name | City name. | keyword |
| source.geo.continent_name | Name of the continent. | keyword |
| source.geo.country_iso_code | Country ISO code. | keyword |
| source.geo.country_name | Country name. | keyword |
| source.geo.location | Longitude and latitude. | geo_point |
| source.geo.region_iso_code | Region ISO code. | keyword |
| source.geo.region_name | Region name. | keyword |
| source.ip | IP address of the source (IPv4 or IPv6). | ip |
| source.mac | MAC address of the source. The notation format from RFC 7042 is suggested: Each octet (that is, 8-bit byte) is represented by two [uppercase] hexadecimal digits giving the value of the octet as an unsigned integer. Successive octets are separated by a hyphen. | keyword |
| source.packets | Packets sent from the source to the destination. | long |
| source.port | Port of the source. | long |
| tags | List of keywords used to tag each event. | keyword |
Example event
{
    "@timestamp": "2022-09-05T16:02:55.000-05:00",
    "agent": {
        "ephemeral_id": "3ada3cc1-9563-4aa5-880e-585d87fc6adf",
        "id": "ca0beb8d-9522-4450-8af7-3cb7f3d8c478",
        "name": "docker-fleet-agent",
        "type": "filebeat",
        "version": "8.2.0"
    },
    "data_stream": {
        "dataset": "snort.log",
        "namespace": "ep",
        "type": "logs"
    },
    "destination": {
        "address": "175.16.199.1",
        "geo": {
            "city_name": "Changchun",
            "continent_name": "Asia",
            "country_iso_code": "CN",
            "country_name": "China",
            "location": {
                "lat": 43.88,
                "lon": 125.3228
            },
            "region_iso_code": "CN-22",
            "region_name": "Jilin Sheng"
        },
        "ip": "175.16.199.1"
    },
    "ecs": {
        "version": "8.17.0"
    },
    "elastic_agent": {
        "id": "ca0beb8d-9522-4450-8af7-3cb7f3d8c478",
        "snapshot": false,
        "version": "8.2.0"
    },
    "event": {
        "agent_id_status": "verified",
        "category": [
            "network"
        ],
        "created": "2022-09-05T16:02:55.000-05:00",
        "dataset": "snort.log",
        "ingested": "2022-05-09T16:00:09Z",
        "kind": "alert",
        "original": "Sep 5 16:02:55 dev snort: [1:1000015:0] Pinging... [Classification: Misc activity] [Priority: 3] {ICMP} 10.50.10.88 -> 175.16.199.1",
        "severity": 3,
        "timezone": "-05:00"
    },
    "input": {
        "type": "udp"
    },
    "log": {
        "source": {
            "address": "172.18.0.4:54924"
        }
    },
    "network": {
        "community_id": "1:AwywM3uuS+luH6U/hUKtj2x2LWU=",
        "direction": "outbound",
        "transport": "icmp",
        "type": "ipv4"
    },
    "observer": {
        "name": "dev",
        "product": "ids",
        "type": "ids",
        "vendor": "snort"
    },
    "process": {
        "name": "snort"
    },
    "related": {
        "ip": [
            "10.50.10.88",
            "175.16.199.1"
        ]
    },
    "rule": {
        "category": "Misc activity",
        "description": "Pinging...",
        "id": "1000015",
        "version": "0"
    },
    "snort": {
        "gid": 1
    },
    "source": {
        "address": "10.50.10.88",
        "ip": "10.50.10.88"
    },
    "tags": [
        "preserve_original_event",
        "forwarded",
        "snort.log"
    ]
}
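The `event.original` value above is a Snort "fast" alert in syslog form: `[gid:sid:rev]`, the rule message, classification, priority, protocol, and the source and destination endpoints. For illustration only, here is a minimal Python sketch of how such a line can be tokenized with a regular expression. This is not the integration's actual ingest pipeline (parsing there is done server-side in Elasticsearch); the pattern and field names are assumptions based on the example event.

```python
import re

# Hypothetical pattern for Snort's fast-alert format, matching the
# example event.original shown above. The real integration parses this
# in an Elasticsearch ingest pipeline, not in Python.
ALERT_RE = re.compile(
    r"\[(?P<gid>\d+):(?P<sid>\d+):(?P<rev>\d+)\]\s+"   # rule identifiers
    r"(?P<msg>.*?)\s+"                                  # rule message
    r"\[Classification:\s*(?P<classification>[^\]]+)\]\s+"
    r"\[Priority:\s*(?P<priority>\d+)\]\s+"
    r"\{(?P<proto>\w+)\}\s+"                            # transport protocol
    r"(?P<src>\S+)\s+->\s+(?P<dst>\S+)"                 # endpoints
)

line = ("[1:1000015:0] Pinging... [Classification: Misc activity] "
        "[Priority: 3] {ICMP} 10.50.10.88 -> 175.16.199.1")

m = ALERT_RE.search(line)
fields = m.groupdict()
# fields now maps e.g. "sid" -> "1000015", "proto" -> "ICMP",
# which the pipeline would place into rule.id, network.transport, etc.
```

Note that TCP and UDP alerts also carry `ip:port` endpoints, so a production parser splits `src`/`dst` further; the sketch above leaves them as raw tokens.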
Changelog
| Version | Details | Minimum Kibana version |
|---|---|---|
| 1.21.1 | Enhancement (View pull request) Improve documentation. | 9.0.0, 8.11.0 |
| 1.21.0 | Enhancement (View pull request) Update documentation. | 9.0.0, 8.11.0 |
| 1.20.0 | Enhancement (View pull request) Preserve event.original on pipeline error. | 9.0.0, 8.11.0 |
| 1.19.2 | Enhancement (View pull request) Generate processor tags and normalize error handler. | 9.0.0, 8.11.0 |
| 1.19.1 | Enhancement (View pull request) Changed owners. | 9.0.0, 8.11.0 |
| 1.19.0 | Enhancement (View pull request) Allow @custom pipeline access to event.original without setting preserve_original_event. | 9.0.0, 8.11.0 |
| 1.18.0 | Enhancement (View pull request) Support stack version 9.0. | 9.0.0, 8.0.0, 7.16.0 |
| 1.17.0 | Enhancement (View pull request) Allow the usage of the deprecated log input and support stack 9.0. | 8.0.0, 7.16.0 |
| 1.16.1 | Bug fix (View pull request) Fix time format using incorrect year specifier. | 8.0.0, 7.16.0 |
| 1.16.0 | Enhancement (View pull request) ECS version updated to 8.17.0. | 8.0.0, 7.16.0 |
| 1.15.1 | Bug fix (View pull request) Use triple-brace Mustache templating when referencing variables in ingest pipelines. | 8.0.0, 7.16.0 |
| 1.15.0 | Enhancement (View pull request) Update package spec to 3.0.3. | 8.0.0, 7.16.0 |
| 1.14.1 | Bug fix (View pull request) Fix exclude_files pattern. | 8.0.0, 7.16.0 |
| 1.14.0 | Enhancement (View pull request) ECS version updated to 8.11.0. | 8.0.0, 7.16.0 |
| 1.13.0 | Enhancement (View pull request) Improve 'event.original' check to avoid errors if set. | 8.0.0, 7.16.0 |
| 1.12.0 | Enhancement (View pull request) ECS version updated to 8.10.0. | 8.0.0, 7.16.0 |
| 1.11.0 | Enhancement (View pull request) The format_version in the package manifest changed from 2.11.0 to 3.0.0. Removed dotted YAML keys from package manifest. Added 'owner.type: elastic' to package manifest. | 8.0.0, 7.16.0 |
| 1.10.0 | Enhancement (View pull request) Add tags.yml file so that the integration's dashboards and saved searches are tagged with "Security Solution" and displayed in the Security Solution UI. | 8.0.0, 7.16.0 |
| 1.9.0 | Enhancement (View pull request) Update package to ECS 8.9.0. | 8.0.0, 7.16.0 |
| 1.8.0 | Enhancement (View pull request) Ensure event.kind is correctly set for pipeline errors. | 8.0.0, 7.16.0 |
| 1.7.0 | Enhancement (View pull request) Update package to ECS 8.8.0. | 8.0.0, 7.16.0 |
| 1.6.0 | Enhancement (View pull request) Update package-spec version to 2.7.0. | 8.0.0, 7.16.0 |
| 1.5.0 | Enhancement (View pull request) Update package to ECS 8.7.0. | 8.0.0, 7.16.0 |
| 1.4.2 | Enhancement (View pull request) Added categories and/or subcategories. | 8.0.0, 7.16.0 |
| 1.4.1 | Bug fix (View pull request) Ensure numeric timezones are correctly interpreted. | 8.0.0, 7.16.0 |
| 1.4.0 | Enhancement (View pull request) Update package to ECS 8.6.0. | 8.0.0, 7.16.0 |
| 1.3.0 | Enhancement (View pull request) Add udp_options to the UDP input. | 8.0.0, 7.16.0 |
| 1.2.0 | Enhancement (View pull request) Update package to ECS 8.5.0. | 8.0.0, 7.16.0 |
| 1.1.0 | Enhancement (View pull request) Add Snort 3 JSON support. | 8.0.0, 7.16.0 |
| 1.0.0 | Enhancement (View pull request) Make GA. | 8.0.0, 7.16.0 |
| 0.5.0 | Enhancement (View pull request) Update package to ECS 8.4.0. | — |
| 0.4.0 | Enhancement (View pull request) Update package to ECS 8.3.0. | — |
| 0.3.1 | Bug fix (View pull request) Format source.mac and destination.mac as per ECS and add missing mappings for various event.* fields. | — |
| 0.3.0 | Enhancement (View pull request) Update to ECS 8.2. | — |
| 0.2.2 | Enhancement (View pull request) Add documentation for multi-fields. | — |
| 0.2.1 | Bug fix (View pull request) Fix test data. | — |
| 0.2.0 | Enhancement (View pull request) Update to ECS 8.0. | — |
| 0.1.2 | Bug fix (View pull request) Regenerate test files using the new GeoIP database. | — |
| 0.1.1 | Bug fix (View pull request) Change test public IPs to the supported subset. | — |
| 0.1.0 | Enhancement (View pull request) Add 8.0.0 version constraint. | — |
| 0.0.3 | Enhancement (View pull request) Update title and description. | — |
| 0.0.2 | Bug fix (View pull request) Fix logic that checks for the 'forwarded' tag. | — |
| 0.0.1 | Enhancement (View pull request) Initial release. | — |