Custom UDP Logs Integration for Elastic
| Version | 2.4.0 |
| Subscription level | Basic |
| Developed by | Elastic |
| Minimum Kibana version(s) | 9.2.0 |
This AI-assisted guide was validated by our engineers. You may need to adjust the steps to match your environment.
The Custom UDP Logs integration for Elastic enables you to collect raw UDP data by listening on a specified UDP port using an Elastic Agent. This integration acts as a generic network data collector, allowing the Elastic Agent to serve as a high-performance UDP server. It's designed for environments where data sources don't support TCP or where the overhead of a connection-oriented protocol is undesirable.
This integration is a protocol-based listener and is compatible with any third-party vendor, hardware appliance, or software application capable of transmitting data using the User Datagram Protocol (UDP).
This integration is compatible with the following:
- Network appliances: Cisco IOS/NX-OS, Juniper Junos, Fortinet FortiOS, and Check Point Gaia.
- Operating systems: Linux (using rsyslog or syslog-ng), Windows (using event-to-syslog agents), and macOS.
- Standard protocols: Support for RFC 3164 (BSD Syslog) and RFC 5424 (IETF Syslog) message formats.
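For reference, the two supported syslog formats look like this (examples adapted from the RFCs themselves; the hostname and process name are illustrative):

```text
RFC 3164 (BSD):  <34>Oct 11 22:14:15 mymachine su: 'su root' failed for lonvick on /dev/pts/8
RFC 5424 (IETF): <34>1 2003-10-11T22:14:15.003Z mymachine.example.com su - ID47 - 'su root' failed for lonvick on /dev/pts/8
```

The leading `<34>` is the priority value (facility × 8 + severity); RFC 5424 adds a version number, an ISO 8601 timestamp, and support for structured data.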
This integration works by opening a listening UDP port on the host where the Elastic Agent is running. When the agent receives a UDP packet, it ingests the payload and automatically appends metadata about the source.
The data collection process involves several steps:
- Listening: The agent waits for incoming packets on the port you configure.
- Payload capture: The raw text-based or binary data sent over UDP is captured and stored in the `message` field.
- Metadata attachment: Information regarding the source IP and port of the incoming traffic is automatically appended to each event.
- Parsing and processing: Automatic parsing is available for syslog data following RFC 3164 and RFC 5424 standards. Other formats like Common Event Format (CEF) or JSON can be processed through custom ingest pipelines.
- Data indexing: All incoming UDP traffic is collected and indexed into the `udp.generic` data stream as log documents.
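The collection steps above can be sketched in a few lines of Python. This is an illustrative model of the mechanism (not the Elastic Agent's actual implementation): listen, capture each payload into `message`, and attach source metadata.

```python
import socket

def receive_udp_events(host="0.0.0.0", port=8080, count=1):
    """Collect `count` datagrams, mimicking the listener's steps:
    bind, capture the payload, and attach source metadata.

    Illustrative sketch only -- not the Agent's code."""
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        for _ in range(count):
            # 10 KiB mirrors the integration's default Max Message Size.
            payload, (src_ip, src_port) = sock.recvfrom(10 * 1024)
            events.append({
                "message": payload.decode("utf-8", errors="replace"),
                "source": {"ip": src_ip, "port": src_port},
            })
    return events
```

In the real integration the resulting events are shipped to Elasticsearch rather than returned in memory, but the captured fields correspond to the `message` and `source.*` fields you will see in Discover.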
The Custom UDP Logs integration collects several types of data by listening on a specified network port and ingesting the payload of each received packet:
- Syslog data: Automatic parsing is available for logs following RFC 3164 and RFC 5424 standards, which are commonly used by Linux systems and network appliances.
- Generic log events: Any raw text-based or binary data sent over UDP is captured and stored in the `message` field.
- Security events: Formats such as Common Event Format (CEF) or JSON-encoded security logs can be ingested and processed through custom pipelines.
- Network traffic metadata: Information about the source IP and port of the incoming traffic is automatically appended to each event.
The integration provides the following data stream:
`udp.generic`: This is the default data stream used to collect and index all incoming UDP traffic as log documents.
You can use this integration to enable several operational and security scenarios:
- Log centralization: Collect logs from legacy hardware and network appliances that only support UDP transport for log transmission.
- Custom application monitoring: Ingest raw text or binary telemetry from internal applications that use UDP for performance or low-latency reasons.
- Security monitoring: Bring in security events from third-party tools that output CEF or JSON over the network for analysis in Elastic Security.
- Operational visibility: Gain insights into network activity by capturing the metadata from incoming packets.
Before you can collect data, you'll need to satisfy a few requirements on your source device and within your Elastic Stack.
To prepare your source device or application, make sure you meet these requirements:
- You have administrative access to modify the logging or telemetry export configuration on the device sending the logs.
- Your network allows unrestricted UDP traffic flow from the source device's IP address to the Elastic Agent's IP address on the chosen port, like `8080` or `514` (replace with your actual port).
- You know whether the source device sends data in a specific format like RFC 5424, which helps you decide whether to enable the syslog parsing toggle.
- You've configured any intermediate or host-based firewalls, such as `iptables` or Windows Firewall, to allow inbound UDP traffic on the listener port.
You'll also need to satisfy these Elastic prerequisites:
- You've installed the Elastic Agent and successfully enrolled it in a Fleet policy.
- You're running the Elastic Agent service with root or administrative privileges if you intend to use a privileged port below `1024`, such as UDP `514`.
You must install Elastic Agent. For more details, check the Elastic Agent installation instructions. You can install only one Elastic Agent per host.
You use Elastic Agent to stream data from the UDP listener and ship the data to Elastic, where the system processes the events through the integration's ingest pipelines.
To begin ingesting data, you must configure your external devices to target the Elastic Agent using these instructions.
You can configure your network appliance or server using the following steps:
- Log in to the management interface (CLI or Web UI) of your network appliance or server.
- Navigate to the System Logging, Remote Logging, or Telemetry configuration section.
- Add a new remote log destination or syslog server entry.
- Set the Destination IP Address to the IP address of the host running the Elastic Agent.
- Set the Destination Port to the port you plan to configure in Kibana (default is `8080`).
- Set the Protocol to `UDP`.
- If the device allows, select the log format. RFC 5424 is preferred for better structured data, though RFC 3164 is widely supported.
- Specify the facility and severity levels you wish to export (for example, `Local0`, `Notice`).
- Save the configuration and, if necessary, restart the logging service on the device to initiate the stream.
If you use a custom application, you can configure it with these steps:
- Access the application's configuration file or environment variables.
- Locate the logging output settings.
- Configure the application to use a UDP appender or socket logger.
- Point the appender to the Elastic Agent's host IP and the configured UDP port.
- Send the message payload as a single packet per log line to ensure correct indexing.
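As an example of the steps above, Python's standard library can be configured as a UDP socket logger. This is a sketch; the logger name, message format, and target address are placeholders to replace with your own values:

```python
import logging
import logging.handlers

def build_udp_logger(host: str, port: int) -> logging.Logger:
    """Return a logger that emits each record as a single UDP datagram.

    `host`/`port` should point at the Elastic Agent's listener; the
    logger name and format below are illustrative."""
    # SysLogHandler defaults to UDP (SOCK_DGRAM), so every log call
    # produces exactly one packet -- one packet per log line.
    handler = logging.handlers.SysLogHandler(address=(host, port))
    handler.setFormatter(logging.Formatter("my-app: %(message)s"))
    logger = logging.getLogger("my-app")
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    return logger

# Example: send one log line to an Agent listening on the default port.
log = build_udp_logger("127.0.0.1", 8080)
log.info("application started")
```

`SysLogHandler` also prepends the `<PRI>` syslog priority header, so the Syslog Parsing toggle in the integration can extract `log.syslog.priority` from these messages.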
You can configure the integration in Kibana using these steps:
- Navigate to Management > Integrations in Kibana and search for Custom UDP Logs.
- Click Add Custom UDP Logs to begin the configuration.
- Provide the configuration settings for the following fields:
  - Listen Address: The bind address for the UDP listener. Use `0.0.0.0` to listen on all network interfaces or `localhost` for local traffic only. Default: `localhost`.
  - Listen Port: The UDP port the agent will bind to. Default: `8080`.
  - Syslog Parsing: Toggle this to On to automatically parse RFC 3164 and RFC 5424 formatted messages.
  - Max Message Size: Define the maximum allowed size for a single UDP packet. The system truncates packets exceeding this value. Default: `10KiB`.
  - Ingest Pipeline: (Optional) Enter the ID of a custom ingest pipeline to process logs on the server side.
  - Dataset Name: Specify the dataset name, which determines the target index. Default: `udp.generic`.
  - Read Buffer Size: Configure the size of the operating system's UDP receive buffer (uses the OS default if not specified).
  - Preserve Original Event: Enable this to store the raw, unmodified log in the `event.original` field.
  - Timeout: (Advanced) Set the read and write timeout for socket operations. Valid time units are `ns`, `us`, `ms`, `s`, `m`, `h`.
  - Keep Null Values: (Advanced) If you enable this setting, the system publishes fields with null values in the output document. Default: disabled.
  - Use the "logs" data stream: (Advanced) Enable this to send all ingested data to the "logs" data stream. Requires Elasticsearch 9.2.0 or later. When enabled, the Dataset Name option is ignored. Note: "Write to logs streams" must also be enabled in the output settings. Default: disabled.
  - Syslog Options: (Advanced) Configure syslog parsing options in YAML format, including format type and timezone settings.
  - Custom configurations: (Advanced) Add custom YAML configuration options. Use with caution, as incorrect settings might break your configuration.
- (Optional) In the Processors field, add YAML-formatted processors to drop, rename, or add fields at the Agent level.
- Click Save and Continue, select the appropriate Agent Policy, and click Save and deploy changes.
To verify the integration is working, you can generate test traffic and check for the results in Kibana.
You can generate test traffic from a source device or a terminal using these methods:
- Using Netcat (Linux/macOS): Run the following command from a remote machine to send a test syslog message:

  ```shell
  echo "<34>1 2023-10-11T10:30:00Z myhost.example.com test-app - - [test@1234 message=\"Hello Elastic\"] This is a test message" | nc -u -w1 <AGENT_IP> 8080
  ```

- Using Logger (Linux): Execute the following command to send a standard system log:

  ```shell
  logger -n <AGENT_IP> -P 8080 -d "Test UDP Log Entry"
  ```

- Generate Device Event: Log out and log back into the web interface of your configured network switch to trigger an authentication event.
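If `nc` and `logger` aren't available (for example, on Windows), a few lines of Python can send an equivalent test datagram. The target address and message contents below are placeholders:

```python
import socket

# Placeholder target -- replace with your Agent's IP and configured port.
AGENT = ("127.0.0.1", 8080)

# Minimal RFC 5424-style frame: <PRI>VERSION TIMESTAMP HOSTNAME APP PROCID MSGID SD MSG
MESSAGE = "<34>1 2023-10-11T10:30:00Z myhost.example.com test-app - - - This is a test message"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(MESSAGE.encode("utf-8"), AGENT)
```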
You can verify that data is flowing into Elasticsearch with these steps:
- Navigate to Analytics > Discover.
- Select the `logs-*` data view.
- Enter the following KQL filter: `data_stream.dataset : "udp.generic"`
- Verify logs appear in the results table. You can expand a recent log entry and confirm the system populated these fields:
  - `event.dataset`: This should be exactly `udp.generic`.
  - `source.ip`: This contains the IP address of the device that sent the test message.
  - `message`: This contains the raw text of your test log (for example, "This is a test message").
  - `log.syslog.priority`: (If you enabled Syslog Parsing) This shows the numerical priority extracted from the header.
  - `event.original`: (If you enabled this setting) This contains the full raw packet including syslog headers.
For help with Elastic ingest tools, check Common problems.
The following issues are common when setting up the Custom UDP Logs integration:
- Permission denied for low ports:
  - If you configure a port below 1024 (like UDP `514`), the Elastic Agent might fail to start because it doesn't have sufficient privileges.
  - You'll need to use a port above 1024 or run the Elastic Agent service as a privileged user like root or administrator.
- Address already in use:
  - The Elastic Agent can't bind to the port if another service, such as a local `rsyslog` or `syslog-ng` daemon, is already using it.
  - You can check for conflicting services using a command like `netstat -tuln | grep <your-port>`.
- Firewall blocking incoming traffic:
  - If the agent is running but data doesn't appear in Kibana, the host's firewall might be blocking the UDP packets.
  - Check your firewall settings using `iptables -L` on Linux or `Get-NetFirewallRule` on Windows to ensure the configured port is open.
- Listen address mismatch:
  - If you set the Listen Address to `localhost`, the agent only accepts traffic from its own host.
  - To receive logs from external network devices, ensure the Listen Address is set to `0.0.0.0`.
- Parsing failures:
  - If logs appear in Kibana but fields aren't correctly extracted, verify that the Syslog Parsing toggle matches the format (RFC 3164 or RFC 5424) being sent by your source.
  - If your device uses a non-standard format, you might need to turn off automatic parsing and use a custom ingest pipeline with a Grok processor.
- Message truncation:
  - Long log messages or jumbo frames might be cut off if they exceed the Max Message Size limit.
  - You can increase this value in the integration settings (for example, to `64KiB`) to accommodate larger payloads.
- Timestamp mismatches:
  - If your logs show an incorrect time in Kibana, the source device might be using a different timezone than the Elastic Stack.
  - You can use an ingest pipeline to correct timezone offsets if the source device doesn't provide UTC timestamps.
- Packet loss during traffic bursts:
  - UDP doesn't guarantee delivery, so packets can be dropped if the network is congested or the agent's buffer is overwhelmed.
  - You can mitigate this by increasing the Read Buffer Size in the integration settings to allow the operating system to buffer more incoming packets.
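For the parsing-failures case above, a custom ingest pipeline with a Grok processor can extract fields from a non-standard payload. Everything here is illustrative: the pipeline ID, the assumed `LEVEL: text` message format, and the target field names should all be adapted to your actual data. Create it in Kibana Dev Tools, then reference the ID in the integration's Ingest Pipeline setting:

```console
PUT _ingest/pipeline/custom-udp-nonstandard
{
  "description": "Parse a hypothetical 'LEVEL: message' payload",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{LOGLEVEL:log.level}: %{GREEDYDATA:event.reason}"]
      }
    }
  ]
}
```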
For more information on architectures that can be used for scaling this integration, check the Ingest Architectures documentation.
You should consider several factors when scaling the Custom UDP Logs integration to ensure reliable data collection and optimal performance.
UDP is a connectionless protocol that offers lower latency and less overhead than TCP, which makes it ideal for high-throughput logging. However, because it lacks delivery guarantees, packets can be dropped during periods of extreme network congestion or if the Elastic Agent's read buffer is overwhelmed. You should consider the following to improve reliability:
- Increase read buffer size: Adjust the `read_buffer_size` setting in the integration configuration to help the agent handle traffic bursts without dropping packets. Note that increasing this value will consume more memory on the host machine.
- Monitor packet loss: Use host-level network monitoring tools to track UDP packet drops at the operating system level, which can indicate that the agent's buffer or the system's network stack needs tuning.
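Conceptually, a larger read buffer is a bigger `SO_RCVBUF` on the listening socket. This sketch (not the Agent's code) shows how such a request behaves; the 1 MiB value is illustrative, not a recommendation:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Request a 1 MiB receive buffer for the UDP socket.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1024 * 1024)

# The kernel may clamp the request (on Linux, to net.core.rmem_max),
# so read back what was actually granted. Linux reports roughly double
# the requested value to account for bookkeeping overhead.
granted = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(f"granted receive buffer: {granted} bytes")
sock.close()
```

Because the kernel can silently clamp the request, checking OS limits (for example, `sysctl net.core.rmem_max` on Linux) is part of tuning the integration's Read Buffer Size.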
To prevent overwhelming your Elastic Stack and to control costs, you can manage the volume of data being ingested using these strategies:
- Filter at the source: Whenever possible, configure your source devices or applications to limit exports to specific severity levels (for example, Warning and above) or specific facilities.
- Use processors: If you can't reduce volume at the source, use the `processors` setting within the integration configuration to drop irrelevant events at the agent level before they're transmitted to Elasticsearch. This reduces network bandwidth and storage usage.
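As an illustration, a processor definition in the integration's Processors field might drop debug-level noise before it leaves the host. The condition and the `"DEBUG"` match value are assumptions about your data, not defaults:

```yaml
# Drop any event whose message contains "DEBUG" before shipping.
- drop_event:
    when:
      contains:
        message: "DEBUG"
```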
In high-traffic environments, such as those exceeding 10,000 events per second, a single Elastic Agent might become a performance bottleneck. You can scale your deployment using the following methods:
- Deploy multiple agents: Distribute the incoming UDP traffic across multiple Elastic Agents deployed on different hosts to increase total processing capacity.
- Use a network load balancer: Place a network load balancer in front of your Elastic Agents to distribute the incoming UDP traffic evenly across the agent pool.
- Ensure sufficient CPU resources: Make sure the host machines have enough CPU cores to handle the intensive context switching required for high-speed packet processing.
The Reference section for the Custom UDP Logs integration provides detailed information about the inputs and data streams used to collect and process your UDP data.
The Custom UDP Logs integration produces a single data stream that handles the ingested data.
The generic data stream provides events from UDP listeners of the following types:
- Raw UDP messages ingested as plain text.
- Syslog formatted data adhering to RFC 3164 or RFC 5424 standards.
By default, the integration sends all collected data to the `udp.generic` dataset. You can customize the dataset name in the integration settings to categorize your logs differently.
For more information about configuring UDP logging and optimizing your data collection, refer to these resources:
- Elastic Agent troubleshooting guide
- Elastic integration documentation for processors and field mappings
- RFC 3164 - The BSD Syslog Protocol
- RFC 5424 - The Syslog Protocol
Changelog
| Version | Details | Minimum Kibana version |
|---|---|---|
| 2.4.0 | Enhancement (View pull request) Improved documentation with detailed setup instructions, troubleshooting guidance, and knowledge base service info. | 9.2.0 |
| 2.3.0 | Enhancement (View pull request) Add logs stream support. | 9.2.0 |
| 2.2.1 | Enhancement (View pull request) Changed owners. | 9.0.0, 8.13.0 |
| 2.2.0 | Enhancement (View pull request) Support stack version 9.0. | 9.0.0, 8.13.0 |
| 2.1.0 | Enhancement (View pull request) ECS version updated to 8.17.0. | 8.13.0 |
| 2.0.0 | Enhancement (View pull request) Convert to input package. | 8.13.0 |
| 1.19.1 | Enhancement (View pull request) Introduce option to preserve original event. | 8.2.1 |
| 1.19.0 | Enhancement (View pull request) Update package-spec to 3.0.3. | 8.2.1 |
| 1.18.1 | Enhancement (View pull request) Changed owners. | 8.2.1 |
| 1.18.0 | Bug fix (View pull request) Added log.syslog.msgid and log.syslog.structured_data to ECS mapping. | 8.2.1 |
| 1.17.0 | Enhancement (View pull request) ECS version updated to 8.11.0. | 8.2.1 |
| 1.16.0 | Enhancement (View pull request) Update ES permissions to support reroute processors. | 8.2.1 |
| 1.15.0 | Enhancement (View pull request) ECS version updated to 8.10.0. | 8.2.1 |
| 1.14.0 | Enhancement (View pull request) The format_version in the package manifest changed from 2.11.0 to 3.0.0. Removed dotted YAML keys from package manifest. Added 'owner.type: elastic' to package manifest. | 8.2.1 |
| 1.13.0 | Enhancement (View pull request) Add tags.yml file so that the integration's dashboards and saved searches are tagged with "Security Solution" and displayed in the Security Solution UI. | 8.2.1 |
| 1.12.0 | Enhancement (View pull request) Update package to ECS 8.9.0. | 8.2.1 |
| 1.11.0 | Enhancement (View pull request) Document duration units. | 8.2.1 |
| 1.10.0 | Enhancement (View pull request) Update package to ECS 8.8.0. | 8.2.1 |
| 1.9.0 | Enhancement (View pull request) Update package-spec version to 2.7.0. | 8.2.1 |
| 1.8.0 | Enhancement (View pull request) Update package to ECS 8.7.0. | 8.2.1 |
| 1.7.1 | Enhancement (View pull request) Added categories and/or subcategories. | 8.2.1 |
| 1.7.0 | Enhancement (View pull request) Allow YAML custom configuration. | 8.2.1 |
| 1.6.0 | Enhancement (View pull request) Update package to ECS 8.6.0. | 8.2.1 |
| 1.5.0 | Enhancement (View pull request) Update package to ECS 8.5.0. | 8.2.1 |
| 1.4.1 | Bug fix (View pull request) Fix indentation of syslog processor in agent handlebars file. | 8.2.1 |
| 1.4.0 | Enhancement (View pull request) Update package to ECS 8.4.0. | 8.2.1 |
| 1.3.1 | Enhancement (View pull request) Improve syslog parsing description. | 8.2.1 |
| 1.3.0 | Enhancement (View pull request) Add syslog parsing option, expose SSL config. | 8.2.1 |
| 1.2.0 | Enhancement (View pull request) Update package to ECS 8.3.0. | 8.0.0, 7.16.0 |
| 1.1.1 | Bug fix (View pull request) Fixing typo in readme. | 8.0.0, 7.16.0 |
| 1.1.0 | Enhancement (View pull request) Update ECS to 8.2. | 8.0.0, 7.16.0 |
| 1.0.1 | Bug fix (View pull request) Fixing typo in manifest for listen address. | 8.0.0, 7.16.0 |
| 1.0.0 | Enhancement (View pull request) Initial Release. | 8.0.0, 7.16.0 |