Accessing machine learning models in Elastic

Elastic supports the machine learning models you need

Elastic® lets you apply the machine learning (ML) that’s appropriate for your use case and level of ML expertise. You have multiple options:

  1. Leverage the models that come built in. Besides the models targeting specific security threats and types of system issues in our observability and security solutions, you can use our proprietary Elastic Learned Sparse Encoder model out of the box, as well as a language identification model, which is useful if you’re working with non-English text data.
  2. Access third-party PyTorch models from anywhere, including the Hugging Face model hub.
  3. Load a model you trained yourself — primarily NLP transformers at this point.

Using built-in models gets you value out of the box without requiring any ML expertise, and you still have the flexibility to try out different models and determine what performs best on your data.

We designed our model management to be scalable across multiple nodes in a cluster, while also ensuring good inference performance for both high-throughput and low-latency workloads. We achieve that in part by letting ingest pipelines run inference and by using dedicated nodes for computationally demanding model inference, during the ingestion phase as well as during data analysis and search.
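The ingest-pipeline path mentioned above can be sketched as follows: an inference processor attached to a pipeline enriches documents with model output at ingest time. The pipeline name, model ID, and field names below are illustrative, not prescriptive:

```
PUT _ingest/pipeline/ner-annotations
{
  "processors": [
    {
      "inference": {
        "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
        "field_map": { "my_text_field": "text_field" }
      }
    }
  ]
}
```

Documents indexed through this pipeline would then carry the model's predictions alongside the original fields.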

Read on to learn more about the Eland library that lets you load models into Elastic and how that plays out for the various types of machine learning you might use within Elasticsearch® — from the latest transformer and natural language processing (NLP) models to boosted tree models for regression.

Eland lets you load ML models into Elastic

Our Eland library provides an easy interface for loading ML models into Elasticsearch, provided they were trained with PyTorch. Because Elasticsearch uses the native libtorch library and expects models that have been exported or saved as a TorchScript representation, it avoids running a Python interpreter while performing model inference.

By integrating with one of the most popular formats for building NLP models in PyTorch, Elasticsearch can provide a platform that works with a large variety of NLP tasks and use cases. We’ll get more into that in the section on transformers that follows.

You have three options for using Eland to upload a model: the command line, Docker, and your own Python code. Docker is less complex because it does not require a local installation of Eland and all of its dependencies. Once you have access to Eland, you can upload a model such as a DistilBERT NER model.

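A representative invocation looks like the following; the cloud ID and credentials are placeholders you would replace with your own, and the model ID points at a DistilBERT-based NER checkpoint on the Hugging Face hub:

```
eland_import_hub_model \
  --cloud-id <cloud-id> \
  -u <username> -p <password> \
  --hub-model-id elastic/distilbert-base-uncased-finetuned-conll03-english \
  --task-type ner
```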

Further below, we’ll walk through each of the arguments of eland_import_hub_model. You can also issue the same command from a Docker container.

Once a model is uploaded, Kibana’s ML Model Management user interface lets you manage the models on an Elasticsearch cluster, including increasing allocations for additional throughput and stopping or resuming models while (re)configuring your system.

Which models are supported?

Elastic supports a variety of transformer models, as well as the most popular supervised learning libraries:

  • NLP and embedding models: All transformers that conform to the standard BERT model interface and use the WordPiece tokenization algorithm. View a complete list of supported model architectures.
  • Supervised learning: Trained models from the scikit-learn, XGBoost, and LightGBM libraries can be serialized and used as inference models in Elasticsearch. Our documentation provides an example of training an XGBoost classifier on data in Elastic. You can also export and import supervised models trained in Elastic with our data frame analytics.
  • Generative AI: You can pass queries to an LLM through its API, potentially enriched with context retrieved from Elastic, and process the results that come back. For further instructions, refer to this blog, which links to a GitHub repository with example code for communicating with ChatGPT’s API.

Below, we provide more information about the type of model you’re most likely to use in the context of search applications: NLP transformers.

How to apply transformers and NLP in Elastic, with ease!

Let us walk you through the steps to load and use an NLP model, taking a popular NER model from Hugging Face as an example, and explain each argument of the eland_import_hub_model command:
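For reference, the command has this shape (the cloud ID and credentials are placeholders):

```
eland_import_hub_model \
  --cloud-id <cloud-id> \
  -u <username> -p <password> \
  --hub-model-id elastic/distilbert-base-uncased-finetuned-conll03-english \
  --task-type ner
```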

  1. Specify the Elastic Cloud identifier. Alternatively, use --url to point to your cluster.
  2. Provide authentication details to access your cluster. You can look up the available authentication methods.
  3. Specify the identifier for the model in the Hugging Face model hub.
  4. Specify the type of NLP task. Supported values are fill_mask, ner, text_classification, text_embedding, and zero_shot_classification.

Once you’ve loaded the model, the next step is to deploy it. You can do this on the Model Management screen of the Machine Learning tab in Kibana. Then you’d typically test the model to ensure it’s working properly.
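If you prefer to script this step rather than click through Kibana, deployments can also be started through the start trained model deployment API; the model ID below is a placeholder:

```
POST _ml/trained_models/<model_id>/deployment/_start?number_of_allocations=1
```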

Now you’re ready to use the deployed model for inference. For example, to extract named entities, you call the _infer endpoint on the loaded NER model.
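Such a request takes roughly the following shape; the model ID reflects a DistilBERT NER model imported via Eland, and the input sentence is illustrative:

```
POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/_infer
{
  "docs": [
    { "text_field": "Josh lived in Berlin" }
  ]
}
```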

The model identifies two entities: the person "Josh" and the location "Berlin."

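Once you have the JSON response in hand, pulling out the entities is straightforward. The response below mimics the shape returned by the _infer API for this example; the exact probabilities and annotated text are illustrative, not captured from a live cluster:

```python
# Hypothetical _infer response: "inference_results", "predicted_value", and
# "entities" follow the trained-model inference API, but the values here
# are illustrative.
response = {
    "inference_results": [
        {
            "predicted_value": "[Josh](PER&Josh) lived in [Berlin](LOC&Berlin)",
            "entities": [
                {"entity": "Josh", "class_name": "PER",
                 "class_probability": 0.998, "start_pos": 0, "end_pos": 4},
                {"entity": "Berlin", "class_name": "LOC",
                 "class_probability": 0.997, "start_pos": 14, "end_pos": 20},
            ],
        }
    ],
}

# Collect each detected entity with its predicted class.
entities = {
    e["entity"]: e["class_name"]
    for e in response["inference_results"][0]["entities"]
}
print(entities)  # {'Josh': 'PER', 'Berlin': 'LOC'}
```

Note that predicted_value also carries the entities inline as annotated text, which is convenient for highlighting matches in a UI.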

For additional steps, like using this model in an inference pipeline and tuning the deployment, read the blog that describes this example.

Want to see how to apply semantic search — for example, how to create embeddings for text and then apply vector search to find related documents? This blog lays that out step-by-step, including validating model performance.

Not sure which task type to use for which model? This table should help you get started.

| Hugging Face model | task-type |
| --- | --- |
| Named entity recognition | ner |
| Text embedding | text_embedding |
| Text classification | text_classification |
| Zero shot classification | zero_shot_classification |
| Question answering | question_answering |

Elastic also supports comparing how similar two pieces of text are to each other via the text_similarity task type. This is useful for ranking document text when comparing it to another provided text input, and it’s sometimes referred to as cross-encoding.
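As a sketch, a text_similarity comparison goes through the same _infer endpoint, with the text to compare against passed in the inference_config; the model ID and strings below are illustrative:

```
POST _ml/trained_models/<text-similarity-model-id>/_infer
{
  "docs": [
    { "text_field": "The weather in Berlin was sunny all week." }
  ],
  "inference_config": {
    "text_similarity": {
      "text": "What was the weather like in Berlin?"
    }
  }
}
```

The response scores how well each input document matches the comparison text, which you can use directly for reranking.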

Check these resources for more details

Elastic, Elasticsearch and associated marks are trademarks, logos or registered trademarks of Elasticsearch N.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.
