Apache Hive integration

Hive is a data warehouse system for Hadoop that facilitates easy data summarization, ad-hoc queries, and the analysis of large datasets stored in Hadoop compatible file systems.

-- Hive website

Hive abstracts Hadoop behind a SQL-like language called HiveQL, so that users can apply data definition and data manipulation operations to it, just as with SQL. In Hive, data sets are defined through tables (that expose type information) into which data can be loaded, selected and transformed through built-in operators or custom/user defined functions (or UDFs).
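
For instance, a minimal HiveQL session (the table and data below are purely illustrative) follows the familiar SQL pattern of defining a table, loading data into it and querying it:

-- hypothetical table, shown only to illustrate basic HiveQL
CREATE TABLE logs (ts TIMESTAMP, level STRING, message STRING);

-- load a local file into the table, then summarize it with plain SQL
LOAD DATA LOCAL INPATH '/tmp/logs.txt' INTO TABLE logs;
SELECT level, count(*) FROM logs GROUP BY level;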

Installation

Make the elasticsearch-hadoop jar available in the Hive classpath. Depending on your setup, there are various ways to achieve that. Use the ADD command to add files, jars (what we want) or archives to the classpath:

ADD JAR /path/elasticsearch-hadoop.jar;

The command expects a proper URI that can be found either on the local file-system or remotely. Typically it is best to place the jar on a distributed file-system (like HDFS or Amazon S3) and use that, since the script might be executed on various machines.
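
For example, if the jar has been uploaded to HDFS (the path below is hypothetical):

ADD JAR hdfs:///libs/elasticsearch-hadoop.jar;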

When using the JDBC/ODBC drivers, the ADD JAR command is not available and will be ignored. Thus it is recommended to make the jar available to the Hive global classpath, as indicated below.

As an alternative, one can use the command-line:

CLI configuration.

$ bin/hive --auxpath=/path/elasticsearch-hadoop.jar

or use the hive.aux.jars.path property, specified either through the command-line or, if available, through the hive-site.xml file, to register additional jars (it accepts a URI as well):

$ bin/hive -hiveconf hive.aux.jars.path=/path/elasticsearch-hadoop.jar

or, if the hive-site.xml configuration can be modified, one can register additional jars through the hive.aux.jars.path option (which accepts a URI as well):

hive-site.xml configuration.

<property>
  <name>hive.aux.jars.path</name>
  <value>/path/elasticsearch-hadoop.jar</value>
  <description>A comma separated list (with no spaces) of the jar files</description>
</property>

Configuration

When using Hive, one can use TBLPROPERTIES to specify the configuration properties (as an alternative to the Hadoop Configuration object) when declaring the external table backed by Elasticsearch:

CREATE EXTERNAL TABLE artists (...)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists',
              'es.index.auto.create' = 'false'); 

es.index.auto.create is an elasticsearch-hadoop setting, passed here through TBLPROPERTIES

Mapping

By default, elasticsearch-hadoop uses the Hive table schema to map the data in Elasticsearch, using both the field names and types in the process. There are cases however when the names in Hive cannot be used with Elasticsearch (the field name can contain characters accepted by Elasticsearch but not by Hive). For such cases, one can use the es.mapping.names setting which accepts a comma-separated list of mapped names in the following format: Hive field name:Elasticsearch field name

To wit:

CREATE EXTERNAL TABLE artists (...)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists',
            'es.mapping.names' = 'date:@timestamp, url:url_123'); 

Hive column date mapped in Elasticsearch to @timestamp; Hive column url mapped in Elasticsearch to url_123

Hive is case insensitive while Elasticsearch is not. The loss of information can create invalid queries (as the column in Hive might not match the one in Elasticsearch). To avoid this, elasticsearch-hadoop will always convert Hive column names to lower-case. That being said, it is recommended to use the default Hive style and use upper-case names only for Hive commands, avoiding mixed-case names.

Hive treats missing values through the special value NULL. This means that when running an incorrect query (with incorrect or non-existing field names) the Hive tables will be populated with NULL instead of throwing an exception. Make sure to validate your data and keep a close eye on your schema, since updates will otherwise go unnoticed due to this lenient behavior.
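
For instance, if a table declares a column whose name does not match any field in the Elasticsearch index (the namee column below is a deliberate, hypothetical misspelling), a SELECT will simply return NULL for it rather than raising an error:

CREATE EXTERNAL TABLE artists_typo (
    id     BIGINT,
    namee  STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists');

-- 'namee' matches no Elasticsearch field, so every row returns NULL for it
SELECT namee FROM artists_typo;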

Writing data to Elasticsearch

With elasticsearch-hadoop, Elasticsearch becomes just an external table into which data can be loaded or from which it can be read:

CREATE EXTERNAL TABLE artists (
    id      BIGINT,
    name    STRING,
    links   STRUCT<url:STRING, picture:STRING>)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists'); 

-- insert data to Elasticsearch from another table called 'source'
INSERT OVERWRITE TABLE artists
    SELECT NULL, s.name, named_struct('url', s.url, 'picture', s.picture)
                    FROM source s;

Elasticsearch Hive StorageHandler

Elasticsearch resource (index and type) associated with the given storage

For cases where the id (or other metadata fields like ttl or timestamp) of the document needs to be specified, one can do so by setting the appropriate mapping, namely es.mapping.id. Following the previous example, to indicate to Elasticsearch to use the field id as the document id, update the table properties:

CREATE EXTERNAL TABLE artists (
    id      BIGINT,
    ...)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.mapping.id' = 'id'...);
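
As a sketch, filling in the elided column list with illustrative fields, the full declaration and a write might look as follows; since id is now the document id, rows carrying an existing id update the corresponding Elasticsearch document instead of creating a new one:

CREATE EXTERNAL TABLE artists (
    id      BIGINT,
    name    STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists',
              'es.mapping.id' = 'id');

-- re-inserting a row with an existing 'id' overwrites that document
INSERT OVERWRITE TABLE artists SELECT s.id, s.name FROM source s;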

Writing existing JSON to Elasticsearch

For cases where the job input data is already in JSON, elasticsearch-hadoop allows direct indexing without applying any transformation; the data is taken as is and sent directly to Elasticsearch. In such cases, one needs to indicate the JSON input by setting the es.input.json parameter. As such, in this case elasticsearch-hadoop expects the output table to contain only one field, whose content is used as the JSON document. That is, the library will recognize specific textual types (such as string or binary) or simply call toString().

Table 3. Hive types to use for JSON representation

Hive type      Comment
string         use this if the JSON data is represented as a String
varchar        use this as an alternative to Hive string
binary         use this when the JSON data is represented as a byte[] or similar
anything else  make sure the toString() returns the desired JSON document

Make sure the data is properly encoded, in UTF-8. The field content is considered the final form of the document sent to Elasticsearch.

CREATE EXTERNAL TABLE json (data STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = '...',
              'es.input.json' = 'yes');
...

The table declaration contains only one field, of type STRING

Indicates to elasticsearch-hadoop that the table content is in JSON format
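
Writing to the table is then just a matter of supplying raw JSON strings. A minimal sketch, assuming a hypothetical json_source table that holds one valid JSON document per row:

INSERT OVERWRITE TABLE json
    SELECT s.data FROM json_source s;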

Writing to dynamic/multi-resources

One can index the data to a different resource, depending on the row being read, by using patterns. Coming back to the aforementioned media example, one could configure it as follows:

CREATE EXTERNAL TABLE media (
    name    STRING,
    type    STRING,
    year    STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'my-collection-{type}/doc');

Table field used by the resource pattern. Any of the declared fields can be used.

Resource pattern using field type

For each row about to be written, elasticsearch-hadoop will extract the type field and use its value to determine the target resource.
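
Continuing the example, a write from a (hypothetical) source table would route each row according to its type value: a row with type 'music' is indexed into my-collection-music/doc, one with 'movie' into my-collection-movie/doc, and so on:

-- each row lands in the index matching its 'type' field
INSERT OVERWRITE TABLE media
    SELECT s.name, s.type, s.year FROM source s;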

The functionality is also available when dealing with raw JSON - in this case, the value will be extracted from the JSON document itself. Assuming the JSON source contains documents with the following structure:

{
    "media_type":"music",
    "title":"Surfing With The Alien",
    "year":"1987"
}

field within the JSON document that will be used by the pattern

the table declaration can be as follows:

CREATE EXTERNAL TABLE json (data STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'my-collection-{media_type}/doc',
              'es.input.json' = 'yes');

Schema declaration for the table. Since JSON input is used, the schema is simply a holder for the raw data

Resource pattern relying on fields within the JSON document and not on the table schema

Reading data from Elasticsearch

Reading from Elasticsearch is strikingly similar:

CREATE EXTERNAL TABLE artists (
    id      BIGINT,
    name    STRING,
    links   STRUCT<url:STRING, picture:STRING>)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists', 
              'es.query' = '?q=me*');          

-- stream data from Elasticsearch
SELECT * FROM artists;

same Elasticsearch Hive StorageHandler

Elasticsearch resource

Elasticsearch query
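
Note that es.query is not limited to URI-style queries; a full Query DSL document can be supplied as well. A minimal sketch, assuming the same radio/artists index (the table name and column list are illustrative):

CREATE EXTERNAL TABLE artists_filtered (
    id      BIGINT,
    name    STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'radio/artists',
              'es.query' = '{ "query" : { "wildcard" : { "name" : "me*" } } }');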

Type conversion

If automatic index creation is used, please review this section for more information.

Hive provides various types for defining data and internally uses different implementations depending on the target environment (from JDK native types to binary-optimized ones). Elasticsearch integrates with all of them, including the Serde2 lazy and lazy binary implementations:

Hive type    Elasticsearch type
void         null
boolean      boolean
tinyint      byte
smallint     short
int          int
bigint       long
double       double
float        float
string       string
binary       binary
timestamp    date
struct       map
map          map
array        array
union        not supported (yet)
decimal      string
date         date
varchar      string
char         string

While Elasticsearch understands Hive types up to version 2.0, it is backwards compatible with Hive 1.0.

It is worth mentioning that rich data types available only in Elasticsearch, such as GeoPoint or GeoShape, are supported by converting their structure into the primitives available in the table above. For example, based on its storage, a geo_point might be returned as a string or an array.
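
As a sketch, a geo_point field stored as a "lat,lon" string could be declared in Hive as a plain STRING (the places index and its fields below are hypothetical):

CREATE EXTERNAL TABLE places (
    name      STRING,
    location  STRING)
STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
TBLPROPERTIES('es.resource' = 'gazetteer/places');

-- 'location' arrives as a "lat,lon" string; an array-based geo_point
-- would instead be declared as ARRAY<DOUBLE>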