Custom TCP Logs Integration for Elastic
| | |
|---|---|
| Version | 2.2.0 (View all) |
| Subscription level | Basic |
| Developed by | Elastic |
| Minimum Kibana version(s) | 9.2.0 |
This AI-assisted guide was validated by our engineers. You may need to adjust the steps to match your environment.
The Custom TCP Logs integration for Elastic enables you to collect raw TCP data from any source that can establish a TCP connection and transmit text-based data. It's a flexible solution for ingesting logs from various third-party software or hardware devices into the Elastic Stack. By using this integration, you can centralize your log data, making it easier to monitor, search, and analyze your environment's activity.
The Custom TCP Logs integration is compatible with any third-party software or hardware capable of establishing a TCP connection and transmitting text-based data.
This integration supports the following standards:
- Syslog standards: Supports devices compliant with RFC 3164 (BSD syslog) and RFC 5424 (The Syslog Protocol).
- Framing standards: Supports RFC 6587 for octet-counted framing, which is commonly used in high-reliability log transmission.
- Encryption: Compatible with clients supporting TLS/SSL for secure transport.
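For reference, the two syslog formats differ mainly in their header layout. The lines below are illustrative examples (the message text follows the sample used in RFC 5424), not output from any specific device:

```
# RFC 3164 (BSD syslog): <PRI>TIMESTAMP HOSTNAME TAG: MESSAGE
<34>Oct 11 22:14:15 myhost su: 'su root' failed for user on /dev/pts/8

# RFC 5424: <PRI>VERSION TIMESTAMP HOSTNAME APP-NAME PROCID MSGID STRUCTURED-DATA MSG
<34>1 2003-10-11T22:14:15.003Z myhost su - ID47 - 'su root' failed for user on /dev/pts/8
```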
This integration collects data by having an Elastic Agent listen on a specified TCP port. You'll configure the agent to act as a receiver for incoming TCP traffic. When your external systems or devices send text-based data to this port, the Elastic Agent receives it.
Once received, the data is processed according to your configuration—whether it's raw text, syslog formatted, or uses specific framing like octet counting. The Elastic Agent then forwards the logs to your Elastic deployment, where you can analyze them using Kibana.
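The receiving side can be sketched in a few lines. The following Python snippet illustrates the mechanism described above (listen on a port, split the byte stream on a delimiter, attach connection metadata); it is not the Elastic Agent's actual implementation, and the port `8080` is just the integration's default:

```python
import asyncio

def frame_events(buffer: bytes, delimiter: bytes = b"\n"):
    """Split buffered TCP bytes into complete events plus leftover bytes.

    Anything after the last delimiter is an incomplete event and is kept
    for the next read. Illustrative sketch only.
    """
    *complete, remainder = buffer.split(delimiter)
    return [c.decode("utf-8", errors="replace") for c in complete], remainder

async def handle_client(reader, writer):
    peer = writer.get_extra_info("peername")
    buffer = b""
    while chunk := await reader.read(4096):
        buffer += chunk
        events, buffer = frame_events(buffer)
        for message in events:
            # A real receiver enriches each event and ships it to Elasticsearch.
            print({"message": message, "source.ip": peer[0], "source.port": peer[1]})
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle_client, "0.0.0.0", 8080)
    async with server:
        await server.serve_forever()
```

The framing helper is the important part: it is why a sender must terminate each event with the configured delimiter, and why partial writes are buffered rather than emitted immediately.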
The Custom TCP Logs integration collects log messages of the following types:
- Raw TCP streams: Any text-based data stream sent over a TCP connection, typically separated by newline characters or other delimiters.
- Syslog messages: Structured messages following RFC 3164 or RFC 5424, which include metadata such as facility, severity, and timestamps.
This integration includes the following data stream:
tcp.generic: This is the default data stream. It captures the raw message payload in the `message` field along with connection metadata such as `source.ip` and `source.port`. If you enable Syslog parsing, additional ECS fields are populated from the syslog header.
The Custom TCP Logs integration provides a versatile and robust mechanism for ingesting log data from any source capable of transmitting information over a TCP socket. You can use this integration for the following use cases:
- Custom application logging: Directly stream application events from internal software to the Elastic Agent by configuring a TCP appender in your application's logging framework.
- Legacy syslog ingestion: Collect logs from older network hardware or Unix-based systems that use TCP-based syslog (RFC 3164 or RFC 5424) to ensure centralized visibility.
- Centralized log aggregation: Act as a middle-tier listener for log forwarders or custom scripts that aggregate data before sending it to the Elastic Stack for analysis.
- Encrypted data ingestion: Secure sensitive log transmissions from remote sites using the built-in SSL/TLS support, ensuring data integrity and confidentiality during transit.
To use the Custom TCP Logs integration, you'll need to meet several requirements.
To successfully integrate a third-party source with the Custom TCP Logs listener, you must meet these prerequisites:
- Firewall rules: You'll need to configure local and network firewalls (for example, `iptables`, `firewalld`, or cloud security groups) to allow inbound traffic on the selected TCP port.
- Source configuration knowledge: You'll need access to the configuration interface or configuration files of the source device or application to specify the destination IP address and port.
- SSL certificates: If you're enabling TLS, you must have a valid CA-signed or self-signed certificate and private key that's accessible by the Elastic Agent.
You'll also need the following Elastic components:
- Elastic Agent: A running Elastic Agent that's enrolled in a Fleet policy.
- Network access: Connectivity between the Elastic Agent and the Elasticsearch or Kibana endpoint for data delivery.
Elastic Agent must be installed. For more details, check the Elastic Agent installation instructions. You can install only one Elastic Agent per host.
Elastic Agent is required to stream data from the TCP receiver and ship the data to Elastic, where the events will then be processed using the integration's ingest pipelines.
To send data to the Elastic Agent, you'll need to configure your external system or application to point its output to the Agent's IP and port.
For generic Linux/Unix log forwarding using rsyslog, you'll need to:
- Log in to the source server that'll be sending the logs.
- Locate the configuration file, which is typically `/etc/rsyslog.conf` or found within `/etc/rsyslog.d/50-default.conf`.
- Add a forwarding rule to point to your Elastic Agent's IP and port (replace `<ELASTIC_AGENT_IP>` and `8080` with your actual values): `*.* @@<ELASTIC_AGENT_IP>:8080`. Note: The `@@` symbol denotes TCP transport in rsyslog.
- Restart the rsyslog service: `sudo systemctl restart rsyslog`
- Verify that the server can establish a connection to the Agent using a tool like `telnet` or `nc -zv <ELASTIC_AGENT_IP> 8080`.
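The rsyslog steps above can be consolidated into a short session. This is a sketch with placeholders: replace `<ELASTIC_AGENT_IP>` and `8080` with your actual values, and note that the drop-in filename `60-elastic.conf` is an arbitrary choice:

```
# Add a TCP forwarding rule as a drop-in file (filename is arbitrary).
# "@@" means TCP transport in rsyslog; a single "@" would mean UDP.
echo '*.* @@<ELASTIC_AGENT_IP>:8080' | sudo tee /etc/rsyslog.d/60-elastic.conf

# Apply the change.
sudo systemctl restart rsyslog

# Verify the Agent's port is reachable from this host.
nc -zv <ELASTIC_AGENT_IP> 8080
```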
For custom application loggers, you'll need to:
- Open your application's logging configuration file.
- Configure a Socket Appender or TCP Handler.
- Set the remote host or destination to the IP address of the host running the Elastic Agent.
- Set the port to match the port you've configured in the Elastic integration (for example, `8080`).
- Ensure the application is configured to send logs in a newline-delimited format unless you've configured a custom framing method in Kibana.
- Restart the application to apply the changes and begin the data stream.
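As a sketch of the application side, the following Python logging handler sends each record as one newline-terminated line over TCP, matching the integration's default delimiter framing. The class name and the host/port values are illustrative placeholders, and a production appender would also need reconnection and buffering logic:

```python
import logging
import socket

class NewlineTcpHandler(logging.Handler):
    """Send each log record as a single newline-terminated line over TCP.

    Sketch only: assumes the integration's default delimiter framing and
    omits reconnection/buffering that a real appender would need.
    """

    def __init__(self, host: str, port: int):
        super().__init__()
        self.sock = socket.create_connection((host, port), timeout=5)

    def emit(self, record: logging.LogRecord) -> None:
        # Format the record, then terminate it with the default delimiter.
        line = self.format(record) + "\n"
        self.sock.sendall(line.encode("utf-8"))

    def close(self) -> None:
        self.sock.close()
        super().close()

# Usage (host and port are placeholders for your Elastic Agent):
# handler = NewlineTcpHandler("agent.example.internal", 8080)
# handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
# logging.getLogger().addHandler(handler)
```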
You'll follow these steps to add and configure the integration in Kibana:
- Navigate to Management > Integrations in Kibana.
- Search for Custom TCP Logs and select it.
- Click Add Custom TCP Logs.
- Configure the integration settings:
  - Listen Address: The interface address to listen on. Use `0.0.0.0` to accept connections from any network interface. The default is `localhost`.
  - Listen Port: The TCP port the Agent will open to listen for incoming logs. The default is `8080`.
  - Dataset Name: The name of the dataset where logs will be written. The default is `tcp.generic`.
  - Framing: Specify how the Agent identifies the end of a log message. Options include `delimiter` (default) or `rfc6587`.
  - Line Delimiter: The character used to split incoming data into separate log events. The default is `\n`.
  - Max Message Size: The maximum allowed size for a single log message. The default is `20MiB`.
  - Syslog Parsing: Enable this option if the incoming data is in standard Syslog format (RFC 3164/5424).
- If you're using SSL, expand the Advanced options or SSL Configuration section and provide:
- Certificate: The path to the SSL certificate file.
- Key: The path to the SSL private key file.
- (Optional) Provide a Custom Ingest Pipeline name if you've already defined processing logic in Elasticsearch.
- Click Save and Continue to deploy the configuration to your Agents.
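For standalone (non-Fleet) Elastic Agent deployments, the same options map onto an input definition in the agent policy. The snippet below is an illustrative sketch of that mapping; exact keys can vary by agent version, so verify it against the policy that Kibana generates for your integration:

```
inputs:
  - type: tcp
    streams:
      - data_stream:
          dataset: tcp.generic
        host: "0.0.0.0:8080"      # Listen Address and Listen Port
        framing: delimiter        # or rfc6587 for octet counting
        line_delimiter: "\n"      # Line Delimiter
        max_message_size: 20MiB   # Max Message Size
```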
After you've finished the configuration, you'll need to verify that data is flowing correctly from your source to the Elastic Stack.
Depending on your system, you might be able to trigger a data flow on the source using one of these methods:
- To send a manual test message from the source machine (or any machine with network access to the Agent), run this command: `echo "Integration Validation Test Message $(date)" | nc <AGENT_IP_ADDRESS> <PORT>`
- If the source is a Linux server, you can use the `logger` command to generate a syslog event: `logger -n <AGENT_IP_ADDRESS> -P <PORT> -T "This is a test syslog message"`
- You can also perform an action in your custom application that's known to trigger a log entry, such as a failed login attempt.
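If `nc` or `logger` aren't available on the source, a few lines of Python can send an equivalent RFC 3164-style test message. The function name is a hypothetical helper, and the host and port are placeholders for your Agent's address:

```python
import socket
import time

def send_test_syslog(host: str, port: int) -> None:
    """Send one RFC 3164-style test line over TCP (illustrative helper)."""
    # Minimal RFC 3164 layout: <PRI>TIMESTAMP HOSTNAME TAG: MESSAGE
    # PRI 14 = facility "user" (1) * 8 + severity "informational" (6).
    timestamp = time.strftime("%b %d %H:%M:%S")
    line = f"<14>{timestamp} {socket.gethostname()} test: validation message\n"
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(line.encode("utf-8"))

# send_test_syslog("<AGENT_IP_ADDRESS>", 8080)
```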
To check for the data in Kibana, you'll need to:
- Navigate to Analytics > Discover.
- Select the `logs-*` data view.
- Enter this KQL filter: `data_stream.dataset : "tcp.generic"`
- Verify that logs appear in the results. You'll want to expand a log entry and confirm these fields are populated:
  - `event.dataset` (should be `tcp.generic`)
  - `log.syslog.priority` (if you've enabled syslog parsing)
  - `source.address` or `source.ip` (showing the sender's IP)
  - `message` (containing the test message)
  - `input.type` (should indicate `tcp`)
- Navigate to Analytics > Dashboards and search for "TCP" to view any available visualizations for generic log traffic.
For help with Elastic ingest tools, check Common problems.
You might encounter the following common configuration issues when setting up or using this integration:
- Port binding failure:
  - If the Elastic Agent fails to start the listener, check if another process is using the configured port with `netstat -tulpn | grep <PORT>`.
  - If you're using a port below 1024, ensure the Agent has root or administrator privileges.
- Firewall blocking:
  - If your source device shows connection timeouts, verify that the host firewall (such as `firewalld`, `iptables`, or Windows Firewall) on the Elastic Agent machine allows inbound traffic on the configured TCP port.
- Incorrect listen address:
  - If you set the `Listen Address` to `localhost` or `127.0.0.1`, remote devices won't be able to connect. Ensure it's set to `0.0.0.0` or the specific internal IP of your Elastic Agent host.
- Dataset naming restriction:
  - If data isn't appearing, check your integration configuration for hyphens in the `Dataset Name`. Hyphens aren't supported in this field and will cause ingestion issues.
- Parsing failures:
  - If data appears in Kibana but doesn't parse correctly, check the `error.message` field. This often happens if you've enabled `Syslog Parsing` but the incoming logs don't strictly adhere to RFC 3164 or RFC 5424.
- Framing issues:
  - If multiple log lines appear as a single event or if events are cut off, verify that the `Framing` method matches the sender. For example, if the sender uses octet counting but the integration is set to `delimiter`, messages will be malformed.
- Message truncation:
  - If logs are incomplete, check if they exceed the `Max Message Size`. You'll need to increase this value in the integration settings if your application sends large payloads like large JSON blobs.
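To see why a framing mismatch mangles events, compare how the same messages are encoded under the two framing methods. This is a minimal sketch of both encodings (function names are illustrative):

```python
def frame_delimiter(messages):
    """Newline-delimited framing: one message per line."""
    return "".join(m + "\n" for m in messages).encode("utf-8")

def frame_octet_counting(messages):
    """RFC 6587 octet counting: each message is prefixed with its byte
    length and a space, so the receiver reads exactly that many bytes."""
    out = bytearray()
    for m in messages:
        body = m.encode("utf-8")
        out += str(len(body)).encode("ascii") + b" " + body
    return bytes(out)
```

A receiver configured for `delimiter` framing that is fed an octet-counted stream will treat the numeric length prefixes as part of the message text and will not split events at the intended boundaries, which is exactly the malformed output described above.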
For more information on architectures that can be used for scaling this integration, check the Ingest Architectures documentation.
When you're managing high-volume data streams, consider the following factors to optimize performance and ensure successful scaling:
- Data volume management: To prevent overwhelming your Elastic Agent, you should filter logs at the source whenever possible.
- Message size: Adjusting the `Max Message Size` (default `20MiB`) is critical for performance. Excessively large limits can lead to high memory usage per connection, while limits that are too small will truncate your log entries.
- Elastic Agent scaling: For high-throughput environments receiving data from hundreds of sources, you can deploy multiple Elastic Agents behind a network load balancer. This approach allows for horizontal scaling and ensures high availability for your log collection.
- Resource sizing: You should account for the number of concurrent TCP connections when sizing your system resources, as each open socket consumes system file descriptors and memory.
The following links provide additional information about the protocols and configurations supported by this integration:
- RFC 3164: The BSD Syslog Protocol
- RFC 5424: The Syslog Protocol
- RFC 6587: Transmission of Syslog Messages over TCP
- Filebeat SSL Configuration
Changelog
| Version | Details | Minimum Kibana version |
|---|---|---|
| 2.2.0 | Enhancement (View pull request): Update integration documentation. | 9.2.0 |
| 2.1.0 | Enhancement (View pull request): Add logs stream support. | 9.2.0 |
| 2.0.1 | Enhancement (View pull request): Changed owners. | 9.0.0, 8.13.0 |
| 2.0.0 | Enhancement (View pull request): Convert to input package. | 9.0.0, 8.13.0 |
| 1.21.0 | Enhancement (View pull request): Support stack version 9.0. | 9.0.0, 8.2.1 |
| 1.20.1 | Bug fix (View pull request): Updated SSL description to be uniform and to include links to documentation. | 8.2.1 |
| 1.20.0 | Enhancement (View pull request): ECS version updated to 8.17.0. | 8.2.1 |
| 1.19.1 | Enhancement (View pull request): Introduce option to preserve original event. | 8.2.1 |
| 1.19.0 | Enhancement (View pull request): Update package-spec to 3.0.3. | 8.2.1 |
| 1.18.1 | Enhancement (View pull request): Changed owners. | 8.2.1 |
| 1.18.0 | Bug fix (View pull request): Added log.syslog.msgid and log.syslog.structured_data to ECS mapping. | 8.2.1 |
| 1.17.0 | Enhancement (View pull request): ECS version updated to 8.11.0. | 8.2.1 |
| 1.16.0 | Enhancement (View pull request): Update ES permissions to support reroute processors. | 8.2.1 |
| 1.15.0 | Enhancement (View pull request): ECS version updated to 8.10.0. | 8.2.1 |
| 1.14.0 | Enhancement (View pull request): The format_version in the package manifest changed from 2.11.0 to 3.0.0. Removed dotted YAML keys from package manifest. Added 'owner.type: elastic' to package manifest. | 8.2.1 |
| 1.13.0 | Enhancement (View pull request): Add tags.yml file so that the integration's dashboards and saved searches are tagged with "Security Solution" and displayed in the Security Solution UI. | 8.2.1 |
| 1.12.0 | Enhancement (View pull request): Update package to ECS 8.9.0. | 8.2.1 |
| 1.11.0 | Enhancement (View pull request): Document duration units. | 8.2.1 |
| 1.10.0 | Enhancement (View pull request): Update package to ECS 8.8.0. | 8.2.1 |
| 1.9.0 | Enhancement (View pull request): Update package-spec version to 2.7.0. | 8.2.1 |
| 1.8.0 | Enhancement (View pull request): Update package to ECS 8.7.0. | 8.2.1 |
| 1.7.1 | Enhancement (View pull request): Added categories and/or subcategories. | 8.2.1 |
| 1.7.0 | Enhancement (View pull request): Allow YAML custom configuration. | 8.2.1 |
| 1.6.0 | Enhancement (View pull request): Update package to ECS 8.6.0. | 8.2.1 |
| 1.5.0 | Enhancement (View pull request): Update package to ECS 8.5.0. | 8.2.1 |
| 1.4.1 | Bug fix (View pull request): Fix indentation of syslog processor in agent handlebars file. | 8.2.1 |
| 1.4.0 | Enhancement (View pull request): Update package to ECS 8.4.0. | 8.2.1 |
| 1.3.1 | Enhancement (View pull request): Improve syslog parsing description. | 8.2.1 |
| 1.3.0 | Enhancement (View pull request): Add syslog parsing option. | 8.2.1 |
| 1.2.0 | Enhancement (View pull request): Update package to ECS 8.3.0. | 8.0.0, 7.16.0 |
| 1.1.0 | Enhancement (View pull request): Update to ECS 8.2. | 8.0.0, 7.16.0 |
| 1.0.0 | Enhancement (View pull request): Initial Release. | 8.0.0, 7.16.0 |