Inference

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Inference is a machine learning feature that enables you to use supervised machine learning processes – like regression or classification – not only as a batch analysis but in a continuous fashion. In other words, inference makes it possible to apply trained machine learning models to incoming data.

For instance, suppose you have an online service and you would like to predict whether a customer is likely to churn. You have an index of historical data – information about customer behavior over the years they have been with your business – and a classification model that is trained on this data. New data arrives in the destination index of a continuous transform. With inference, you can perform the classification analysis against the new data with the same input fields that you trained the model on, and get a prediction.
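To make the moving parts concrete, here is a minimal sketch of a continuous transform whose destination index is fed through an ingest pipeline that applies the trained model. Every name in it is hypothetical: the customer-activity source index, the churn-predictions destination index, the churn-inference pipeline, and the field names. The pipeline itself is sketched in the Inference processor section below.

PUT _transform/churn-predictions
{
  "source": { "index": "customer-activity" },
  "dest": {
    "index": "churn-predictions",
    "pipeline": "churn-inference"
  },
  "pivot": {
    "group_by": {
      "customer_id": { "terms": { "field": "customer_id" } }
    },
    "aggregations": {
      "total_spend": { "sum": { "field": "order_total" } },
      "support_tickets": { "value_count": { "field": "ticket_id" } }
    }
  },
  "sync": {
    "time": { "field": "timestamp", "delay": "60s" }
  }
}

Every bucket the transform writes to churn-predictions passes through the churn-inference pipeline, so the prediction is added before the document is stored.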

Let’s take a closer look at the machinery behind inference.

Trained machine learning models as functions

When you create a data frame analytics job that executes a supervised process, you train a machine learning model on a training dataset so that it can make predictions about data points it has never seen. The models that are created by data frame analytics are stored as Elasticsearch documents in internal indices. In other words, the characteristics of your trained models are saved and ready to be used as functions.
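You can check which trained models are available with the get trained models API. As a sketch (the churn-model ID below is hypothetical, and in the 7.x releases where this feature was introduced the endpoint is GET _ml/inference rather than GET _ml/trained_models):

GET _ml/trained_models/churn-model

The response contains the model's configuration and metadata, such as its input fields; the compressed model definition itself stays in the internal index and is only returned when you explicitly ask for it.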

Alternatively, you can use a pre-trained language identification model to determine the language of text. Language identification supports 109 languages. For more information and configuration details, check the Language identification page.
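For example, the following is a minimal sketch of an ingest pipeline that runs the pre-trained model against a hypothetical message field. The model ID lang_ident_model_1 is built in and expects its input in a field called text, which is why field_map renames message; the pipeline name and the option values are illustrative, and in the earliest 7.x releases the field_map option was called field_mappings.

PUT _ingest/pipeline/detect-language
{
  "processors": [
    {
      "inference": {
        "model_id": "lang_ident_model_1",
        "inference_config": {
          "classification": {
            "num_top_classes": 3,
            "results_field": "language"
          }
        },
        "field_map": {
          "message": "text"
        }
      }
    }
  ]
}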

Inference processor

Inference is a processor specified in an ingest pipeline. It uses a stored data frame analytics model to infer against the data that is being ingested in the pipeline; the model runs on the ingest node. The processor applies the model to the incoming data and adds a prediction to it. After that, the pipeline continues executing (if there are any other processors in the pipeline), and finally the new data, together with the results, is indexed into the destination index.
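Putting the pieces together, the sketch below defines the hypothetical churn-inference pipeline used in the earlier transform example and then indexes a new document through it. The model ID, pipeline name, and field names are assumptions; the processor options follow the inference processor reference.

PUT _ingest/pipeline/churn-inference
{
  "processors": [
    {
      "inference": {
        "model_id": "churn-model",
        "target_field": "ml.churn",
        "inference_config": {
          "classification": {
            "num_top_classes": 2,
            "results_field": "prediction"
          }
        }
      }
    }
  ]
}

POST churn-predictions/_doc?pipeline=churn-inference
{
  "customer_id": "42",
  "total_spend": 183.5,
  "support_tickets": 4
}

The processor writes the predicted class under ml.churn.prediction; any remaining processors run afterwards, and the enriched document is stored in the destination index. When the pipeline is referenced in the transform's dest settings, you do not need to pass the pipeline query parameter by hand; it is shown here only to illustrate the flow.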

Check the inference processor and the machine learning data frame analytics API documentation to learn more about the feature.