Structuring and Processing Data into Elasticsearch
Indexing well-structured data into Elasticsearch is critical for fast queries and relevant results. Logstash filters and ingest pipelines make processing unstructured data easier by providing a set of common processors that parse, transform, and index that data into the desired structure.
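As a sketch of the idea, an ingest pipeline can chain a dissect processor with a convert processor to turn a raw log line into typed fields. The pipeline name, field names, and log format below are hypothetical, for illustration only:

```json
PUT _ingest/pipeline/parse-app-logs
{
  "description": "Parse 'LEVEL action took Nms' log lines into typed fields",
  "processors": [
    {
      "dissect": {
        "field": "message",
        "pattern": "%{log.level} %{event.action} took %{event.duration}ms"
      }
    },
    {
      "convert": {
        "field": "event.duration",
        "type": "integer"
      }
    }
  ]
}
```

Indexing a document with this pipeline (for example, `PUT my-index/_doc/1?pipeline=parse-app-logs`) would then store `event.duration` as an integer rather than as text embedded in the message.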
In this webinar, we will explore concepts from the new Elastic Stack logging courses, including how to process and structure data using a variety of common filters and processors. Our expert instructors will demonstrate solutions and built-in features that convert, enrich, and structure different types of fields from unstructured data. In addition, we will show how to create your own pipeline of processors for transformations that are not possible with the prebuilt processors.
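For transformations the prebuilt processors cannot express, an ingest pipeline can include a script processor written in Painless. A minimal, hypothetical sketch (the field names are assumptions):

```json
{
  "script": {
    "description": "Derive a throughput field from two parsed numeric fields",
    "lang": "painless",
    "source": "ctx.throughput = ctx.bytes / (ctx.duration_ms / 1000.0)"
  }
}
```

This processor would sit alongside the parsing processors in the same pipeline, running after the numeric fields it reads have been extracted.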
- Introduce a set of common ingest processors and Logstash filters to parse unstructured data into structured data
- Discuss best practices for dissecting, converting, and enriching your fields
- Explore different scenarios that require either a built-in processor or a pipeline of processors to deal with certain types of unstructured data
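The same dissect-convert-enrich flow can be expressed on the Logstash side. A hedged sketch of a filter block, assuming a hypothetical log format and a `client_ip` field to enrich:

```
filter {
  # Split the raw message into named fields by literal delimiters
  dissect {
    mapping => { "message" => "%{level} %{action} took %{duration}ms" }
  }
  # Convert the extracted duration from text to an integer
  mutate {
    convert => { "duration" => "integer" }
  }
  # Enrich with geolocation data derived from the client IP
  geoip {
    source => "client_ip"
  }
}
```

Dissect is a good fit when the log format is fixed; for variable formats, the grok filter's regular-expression patterns are the usual alternative.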
- Elastic Stack logging course
- Should I use Logstash or Elasticsearch ingest nodes?
- How to ingest data into Elasticsearch Service
- Using Logstash to split data and send it to multiple outputs
- Want to try it for yourself? Take some of these features for a spin with a free trial of our Elasticsearch Service