The documentation is broken down into two parts:
The first part provides an overview of the project, its requirements (including the supported environments and libraries), plus information on how to easily install elasticsearch-hadoop in your environment.
The second part explains the core functionality of elasticsearch-hadoop, starting with the configuration options and architecture and gradually covering the major features. At a higher level, the reference is broken down into the architecture and configuration sections, which are general; Map/Reduce and the libraries built on top of it; upcoming computation libraries (like Apache Spark); and finally mapping, metrics, and troubleshooting.
We recommend going through the entire documentation, even superficially, when trying out elasticsearch-hadoop for the first time; however, those in a rush can jump directly to the desired sections:
- overview of the elasticsearch-hadoop architecture and how it maps on top of Hadoop
- explore the various configuration switches in elasticsearch-hadoop
- Map/Reduce integration
- describes how to use elasticsearch-hadoop in vanilla Map/Reduce environments; typically useful for those interested in loading and saving data to/from Elasticsearch with little, if any, ETL (extract-transform-load).
- Apache Hive integration
- Hive users should refer to this section.
- Apache Pig support
- how-to on using Elasticsearch in Pig scripts through elasticsearch-hadoop.
- Apache Spark support
- describes how to use Apache Spark with Elasticsearch through elasticsearch-hadoop.
- Mapping and Types
- deep-dive into the strategies employed by elasticsearch-hadoop for doing type conversion and mapping to and from Elasticsearch.
- Hadoop Metrics
- the metrics reported by elasticsearch-hadoop through the Hadoop infrastructure.
- Troubleshooting
- tips on troubleshooting and getting help.
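As a quick taste of the configuration switches covered above, a minimal elasticsearch-hadoop setup typically only needs to point at the target cluster and resource. The sketch below uses placeholder values (the host name and the `radio/artists` index are illustrative, not required names):

```properties
# Elasticsearch node(s) to connect to (defaults to localhost)
es.nodes = es-cluster.example.com
# HTTP port of the Elasticsearch REST endpoint (default 9200)
es.port = 9200
# Target index (and type, on older Elasticsearch versions) to read from or write to
es.resource = radio/artists
# Optional query applied when reading data from Elasticsearch
es.query = ?q=me*
```

These keys are passed through the Hadoop `Configuration` (or the equivalent Hive, Pig, or Spark settings mechanism); the configuration section details each of them along with their defaults.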