Create an inference endpoint to perform an inference task with the hugging_face service.
Supported tasks include: text_embedding, completion, chat_completion, and rerank.
To configure the endpoint, first visit the Hugging Face Inference Endpoints page and create a new endpoint. Select a model that supports the task you intend to use.
For Elastic's text_embedding task:
The selected model must support the Sentence Embeddings task. On the new endpoint creation page, select the Sentence Embeddings task under the Advanced Configuration section.
After the endpoint has initialized, copy the generated endpoint URL.
Recommended models for the text_embedding task:
- all-MiniLM-L6-v2
- all-MiniLM-L12-v2
- all-mpnet-base-v2
- e5-base-v2
- e5-small-v2
- multilingual-e5-base
- multilingual-e5-small
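Once a text_embedding endpoint has been created (the creation request is shown in the examples below), text can be sent to it through the _inference API. Here is a minimal sketch with the Python client; the cluster URL, API key, and input string are placeholder assumptions:

from elasticsearch import Elasticsearch

# Placeholder connection details; substitute your own cluster and credentials.
client = Elasticsearch("https://localhost:9200", api_key="elastic-api-key")

# Embed a string with the hugging-face-embeddings endpoint created below.
resp = client.inference.inference(
    task_type="text_embedding",
    inference_id="hugging-face-embeddings",
    input=["The quick brown fox jumps over the lazy dog"],
)
print(resp)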
For Elastic's chat_completion and completion tasks:
The selected model must support the Text Generation task and expose the OpenAI API. Hugging Face supports both serverless and dedicated endpoints for Text Generation. When creating a dedicated endpoint, select the Text Generation task.
After the endpoint is initialized (for dedicated) or ready (for serverless), confirm that it supports the OpenAI API and that its URL includes the /v1/chat/completions path. Then copy the full endpoint URL for use; a sketch of creating such an endpoint follows the model list below.
Recommended models for the chat_completion and completion tasks:
- Mistral-7B-Instruct-v0.2
- QwQ-32B
- Phi-3-mini-128k-instruct
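As a sketch, creating a chat_completion endpoint mirrors the text_embedding example below; only the task type and the URL change. The endpoint ID hugging-face-chat and the URL here are placeholder assumptions:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="elastic-api-key")

resp = client.inference.put(
    task_type="chat_completion",
    inference_id="hugging-face-chat",  # placeholder endpoint ID
    inference_config={
        "service": "hugging_face",
        "service_settings": {
            "api_key": "hugging-face-access-token",
            # URL copied from Hugging Face; it must include /v1/chat/completions.
            "url": "https://example.endpoints.huggingface.cloud/v1/chat/completions",
        },
    },
)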
For Elastic's rerank task:
The selected model must support the sentence-ranking task and expose the OpenAI API.
Hugging Face currently supports only dedicated (not serverless) endpoints for Rerank.
After the endpoint is initialized, copy the full endpoint URL for use.
Tested models for the rerank task:
- bge-reranker-base
- jina-reranker-v1-turbo-en-GGUF

Required authorization: cluster privilege manage_inference

Path parameters:
task_type: The type of the inference task that the model will perform. Values are chat_completion, completion, rerank, or text_embedding.
inference_id: The unique identifier of the inference endpoint.
Example request:
PUT _inference/text_embedding/hugging-face-embeddings
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
Python:
resp = client.inference.put(
task_type="text_embedding",
inference_id="hugging-face-embeddings",
inference_config={
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
},
)
JavaScript:
const response = await client.inference.put({
task_type: "text_embedding",
inference_id: "hugging-face-embeddings",
inference_config: {
service: "hugging_face",
service_settings: {
api_key: "hugging-face-access-token",
url: "url-endpoint",
},
},
});
Ruby:
response = client.inference.put(
task_type: "text_embedding",
inference_id: "hugging-face-embeddings",
body: {
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
)
PHP:
$resp = $client->inference()->put([
"task_type" => "text_embedding",
"inference_id" => "hugging-face-embeddings",
"body" => [
"service" => "hugging_face",
"service_settings" => [
"api_key" => "hugging-face-access-token",
"url" => "url-endpoint",
],
],
]);
curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"hugging_face","service_settings":{"api_key":"hugging-face-access-token","url":"url-endpoint"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/hugging-face-embeddings"
Java:
client.inference().put(p -> p
.inferenceId("hugging-face-embeddings")
.taskType(TaskType.TextEmbedding)
.inferenceConfig(i -> i
.service("hugging_face")
.serviceSettings(JsonData.fromJson("{\"api_key\":\"hugging-face-access-token\",\"url\":\"url-endpoint\"}"))
)
);
Request examples
A text_embedding endpoint (the same body as the request above):
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
}
}
A rerank endpoint with task_settings; return_documents asks the service to return the documents alongside their relevance scores, and top_n limits the response to the top three matches:
{
"service": "hugging_face",
"service_settings": {
"api_key": "hugging-face-access-token",
"url": "url-endpoint"
},
"task_settings": {
"return_documents": true,
"top_n": 3
}
}
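Once created, a rerank endpoint can be called through the _inference API with a search query and the documents to reorder. Here is a minimal sketch with the Python client, assuming a placeholder endpoint ID of hugging-face-rerank and example input strings:

from elasticsearch import Elasticsearch

client = Elasticsearch("https://localhost:9200", api_key="elastic-api-key")

# Rerank the candidate documents against the query; the endpoint's
# task_settings (return_documents, top_n) shape the response.
resp = client.inference.inference(
    task_type="rerank",
    inference_id="hugging-face-rerank",  # placeholder endpoint ID
    query="What is Elasticsearch?",
    input=[
        "Elasticsearch is a distributed search and analytics engine.",
        "Kibana is a data visualization dashboard for Elasticsearch.",
    ],
)
print(resp)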