Indexing data into Elasticsearch

By now you’ve probably spun up a deployment and might be wondering what’s next. Congratulations on completing that first big step! Now let’s help you do something with it. You likely have data that you want to add to Elasticsearch, a process known as ingesting or indexing, so let’s explore some options.

Migrating data

If you want to move your existing Elasticsearch data into your new infrastructure, check out the migration options. You’ll find instructions to guide you through:

  • Migrating data from its original source
  • Reindexing data from a remote Elasticsearch cluster (a minimal client-side sketch follows this list)
  • Restoring data from a snapshot
  • Migrating internal Elasticsearch indices
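One way to drive a reindex from a remote cluster is from a client application. The following is a minimal, illustrative sketch using the official Python client (an 8.x client is assumed); the Cloud ID, API key, remote host, credentials, and index names are all placeholders, and the remote host must also be allowed by the destination cluster’s reindex.remote.whitelist setting.

    # Illustrative only: reindex documents from a remote cluster into this deployment.
    from elasticsearch import Elasticsearch

    # Placeholder Cloud ID and API key for the destination deployment.
    es = Elasticsearch(cloud_id="<deployment-name:cloud-id>", api_key="<api-key>")

    es.reindex(
        source={
            "remote": {
                # Placeholder endpoint and credentials for the old cluster.
                "host": "https://old-cluster.example.com:9243",
                "username": "elastic",
                "password": "<password>",
            },
            "index": "my-old-index",
        },
        dest={"index": "my-new-index"},
        wait_for_completion=True,
    )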

Ingestion methods

When it comes to delivering your data into the Elastic Stack, a variety of options are available. All of the documentation and tutorials listed here rely on one of four ingestion methods: Elastic Agent, Beats, Logstash, or a direct connection from a client application. You can use these options individually or in combination.

Trying to choose between Beats or Elastic Agent? Check out our comparison of supported inputs, outputs, and configurations: Beats and Elastic Agent capabilities.

Elastic Agent and Fleet

Elastic Agent offers a single, unified way to ship monitoring data from multiple hosts or containers into the Elastic Stack. Elastic Agent serves as a convenient front end that uses Beats shippers or Elastic Endpoint under the covers. See the Elastic Agent product documentation to learn more.

Fleet provides a web-based UI in Kibana to add and manage integrations for popular services and platforms, as well as manage a fleet of Elastic Agents. To learn more about how these work, see the Fleet and Elastic Agent overview.

Beats

These lightweight shippers are installed as agents on your endpoints, and the data they collect is pushed back to the Elasticsearch cluster. Each Beat is created with a specific purpose, making it possible to target and aggregate common data from a multitude of endpoints. Some Beats, such as Filebeat, have modules that specialize them even further. Configure the Beat to use the Cloud ID to simplify sending data back to your deployment. TLS, basic authentication, and API key authentication are available to ensure that your data is shipped securely.

Beats can handle many types of data, including log files, metrics, audit data, network traffic, and uptime monitoring. Learn about the possibilities from the Beats documentation.

Logstash

Logstash is an open source data collection engine with real-time pipelining capabilities. It can accept data that is pushed to it as well as pull data from external sources. Logstash has the added benefit of being able to persist information in queues if the cluster temporarily cannot accept data, smoothing out ingestion spikes. However, Logstash is not available as part of the Elastic Cloud Enterprise deployment and requires separate installation and maintenance. Use the Cloud ID to configure Logstash to work with a deployment. And, as with Beats, you have the option of using basic authentication or an API key for secure data transmission.

Learn more about the possible inputs, filters, and outputs from the Logstash documentation.

Language clients

Elastic provides libraries for several programming languages that allow you to connect your code directly to your Elastic Cloud Enterprise deployment. Each deployment has a unique Cloud ID that simplifies the connection configuration. TLS, basic authentication, and API key authentication methods are available to ensure that your data is shipped securely into Elasticsearch.

This approach is described in detail in two guides: Ingest data with Node.js on Elastic Cloud Enterprise and Ingest data with Python on Elastic Cloud Enterprise.

The details for each of the available programming language libraries are in the Elasticsearch Client documentation.
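As an illustration, here is a minimal connection sketch using the official Python client (an 8.x client is assumed); the Cloud ID, API key, password, and index name are placeholders, and the same pattern applies to the other language clients.

    # Illustrative only: connect to a deployment, index a document, and search it.
    from elasticsearch import Elasticsearch

    # Option 1: API key authentication (placeholder values).
    es = Elasticsearch(
        cloud_id="<deployment-name:cloud-id>",
        api_key="<api-key>",
    )

    # Option 2: basic authentication with a deployment user (placeholder values).
    # es = Elasticsearch(
    #     cloud_id="<deployment-name:cloud-id>",
    #     basic_auth=("elastic", "<password>"),
    # )

    print(es.info())  # verify the connection

    es.index(index="my-index", document={"title": "Hello, Elasticsearch!"})
    es.indices.refresh(index="my-index")  # make the document searchable right away
    print(es.search(index="my-index", query={"match": {"title": "hello"}})["hits"]["total"])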

Try out sample data

There are a number of ways to get a sample data set ingested into Elasticsearch. This gives you a convenient way to test-drive the broad set of Kibana tools and visualizations before ingesting your own data. Several data packages are available with one-click installation, as well as a makelogs script and a simple CSV upload method for more ingest options.

To learn more, see Installing sample data.

Ingest data with Elastic solutions

Choose one of the following guides to find the data ingestion steps and examples most suitable for your needs.

Enterprise Search

Getting started with App Search
The App Search getting started documentation and video can help you to index an initial set of documents to create a custom search experience in your applications.
Workplace Search Content Sources Overview
Learn how to integrate Workplace Search with a variety of third-party content sources such as GitHub, Google Drive, or Dropbox. You can also build your own connectors using custom API sources, allowing you to search unique content repositories and ingest that data into Workplace Search.
Enterprise Search web crawler
The Elastic Enterprise Search web crawler, currently a beta feature, discovers, extracts, and indexes your web content into your App Search engines.

Observability

Send data to Elasticsearch
These steps get you started with the ingestion aspects of Elastic Observability, detailing how to configure Elasticsearch to store and search your data, and Kibana to visualize and manage it. You can also set up APM Server as part of an Elastic Cloud Enterprise deployment and then configure APM agents to send data into the deployment; a minimal agent sketch follows these descriptions.
Tutorials
Try out these tutorials to guide you through specific observability scenarios, including monitoring data from AWS, GCP, or Azure, from a Java application, or from Kubernetes.
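As a rough illustration of the agent side, the sketch below configures the Elastic APM Python agent manually and records a custom transaction; the service name, server URL, and secret token are placeholders for your own APM Server details. In practice you would more often install the agent’s framework integration (Django, Flask, and so on) instead of creating the client by hand.

    # Illustrative only: send a custom transaction to APM Server with the Python agent.
    import elasticapm

    apm_client = elasticapm.Client(
        service_name="my-service",                       # placeholder service name
        server_url="https://<apm-server-endpoint>:443",  # placeholder APM Server URL
        secret_token="<secret-token>",                   # placeholder token
    )

    # Record a simple custom transaction so data shows up in the APM app.
    apm_client.begin_transaction("background-job")
    # ... application work happens here ...
    apm_client.end_transaction("nightly-cleanup", "success")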

Security

Ingest data to Elastic Security
This guide presents options for ingesting data into Elastic Security, including using the Elastic Agent with Elastic Endpoint Integration, using Beats shippers installed on the systems that you want to monitor, using Elastic Agent with Splunk, and using third-party collectors that ship ECS-compliant data.
Ingest data into SIEM
Learn how to ingest data into the Elastic SIEM app (now part of the Elastic Security solution), including using Beats shippers installed on the systems that you want to monitor, using Elastic Endpoint Security to ship data directly to Elasticsearch, and using third-party collectors that ship ECS-compliant data.

Ingest from custom sources

These guides walk you through the process of securely ingesting your custom data into an Elastic Cloud Enterprise deployment, whether it be client application data, ECS (Elastic Common Schema)-formatted log data, server monitoring metrics, or relational database records that you want to synchronize with Elasticsearch.

In addition, have a look at our large collection of prebuilt Elastic integrations that enable you to connect and easily stream in logs, metrics, traces, content, and other data types from popular sources.

Learn how to:

Ingest data with Node.js on Elastic Cloud Enterprise
Get Node.js application data securely into Elastic Cloud Enterprise, where it can then be searched and modified.
Ingest data with Python on Elastic Cloud Enterprise
Get Python application data securely into Elastic Cloud Enterprise, where it can then be searched and modified.
Ingest data from Beats to Elastic Cloud Enterprise with Logstash as a proxy
Get server metrics or other types of data from Filebeat and Metricbeat into Logstash as an intermediary, and then send that data to Elastic Cloud Enterprise. Using Logstash as a proxy funnels your Elastic Stack traffic through a single, external-facing firewall exception or rule.
Ingest data from a relational database into Elastic Cloud Enterprise
Get data from a relational database into Elastic Cloud Enterprise using the Logstash JDBC input plugin. Logstash can be used as an efficient way to copy records and to receive updates from a relational database as changes happen, and then send the new data to a deployment.
Ingest logs from a Python application using Filebeat
Get logs from a Python application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor an ECS-formatted log file, and then view real-time visualizations of the log events in Kibana as they occur. A short sketch of writing ECS-formatted logs from Python follows this list.
Ingest logs from a Node.js web application using Filebeat
Get HTTP request logs from a Node.js web application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor an ECS-formatted log file and then view real-time visualizations of the log events as HTTP requests occur on your Node.js web server.
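For the Python logging guide, the application side typically amounts to writing ECS-formatted JSON that Filebeat can tail. Here is a hedged sketch using the ecs-logging package; the logger name and file path are placeholders.

    # Illustrative only: write ECS-formatted JSON logs for Filebeat to pick up.
    import logging
    import ecs_logging

    logger = logging.getLogger("my-app")
    logger.setLevel(logging.INFO)

    # Write one ECS JSON object per line to a file that Filebeat monitors.
    handler = logging.FileHandler("my-app.json")
    handler.setFormatter(ecs_logging.StdlibFormatter())
    logger.addHandler(handler)

    logger.info("GET /products returned 200", extra={"http.response.status_code": 200})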

Whenever you add data to Elasticsearch indices, that data can be pre-processed using an Elasticsearch ingest pipeline. An ingest pipeline is an ideal way to optimize how your data is indexed. It simplifies tasks such as extracting error codes from a log file or mapping IP addresses to geographic locations. To learn about ingest preprocessors and pipelines, see the Elasticsearch ingest documentation.
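As a rough illustration, and not the exact pipeline from any of the guides above, the sketch below creates a pipeline with dissect and geoip processors through the Python client and applies it at index time; the pipeline ID, dissect pattern, field names, and index name are placeholders.

    # Illustrative only: create an ingest pipeline and apply it when indexing.
    from elasticsearch import Elasticsearch

    es = Elasticsearch(cloud_id="<deployment-name:cloud-id>", api_key="<api-key>")

    es.ingest.put_pipeline(
        id="logs-preprocess",
        description="Extract an error code and resolve the client IP to a location",
        processors=[
            # Split "timestamp code rest-of-message" into separate fields.
            {"dissect": {"field": "message",
                         "pattern": "%{@timestamp} %{error.code} %{message}"}},
            # Derive geographic details from the client IP address.
            {"geoip": {"field": "client.ip", "target_field": "client.geo"}},
        ],
    )

    es.index(
        index="my-logs",
        pipeline="logs-preprocess",
        document={
            "message": "2025-01-01T00:00:00Z E4031 request rejected",
            "client": {"ip": "8.8.8.8"},
        },
    )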