Create an OpenShift AI inference endpoint (Generally available)

PUT /_inference/{task_type}/{openshiftai_inference_id}

Create an inference endpoint to perform an inference task with the openshift_ai service.

Required authorization

  • Cluster privileges: manage_inference

Path parameters

  • task_type string

    The type of the inference task that the model will perform. NOTE: The chat_completion task type only supports streaming and only through the _stream API.

    Values are text_embedding, completion, chat_completion, or rerank.

  • openshiftai_inference_id string Required

    The unique identifier of the inference endpoint.
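
    As noted above, the chat_completion task type only supports streaming and is invoked through the _stream API. For illustration, a request body sent with POST _inference/chat_completion/{openshiftai_inference_id}/_stream might look like the following sketch (the message content is a placeholder):

    ```json
    {
      "messages": [
        {
          "role": "user",
          "content": "What is Elastic?"
        }
      ]
    }
    ```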

Query parameters

Body Required (application/json)

  • chunking_settings object

    The chunking configuration object. Applies only to the text_embedding task type. Not applicable to the rerank, completion, or chat_completion task types.

    • max_chunk_size number

      The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

      Default value is 250.

    • overlap number

      The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

      Default value is 100.

    • sentence_overlap number

      The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

      Default value is 1.

    • separator_group string

      Only applicable to the recursive strategy and required when using it.

      Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

      Using this parameter is an alternative to manually specifying a custom separators list.

    • separators array[string]

      Only applicable to the recursive strategy and required when using it.

      A list of strings used as possible split points when chunking text.

      Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

      After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

    • strategy string

      The chunking strategy: sentence, word, none, or recursive.

      • If strategy is set to recursive, you must also specify:

        • max_chunk_size
        • either separators or separator_group

      Learn more about different chunking strategies in the linked documentation.

      Default value is sentence.

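The numeric constraints above (minimum chunk sizes per strategy, the overlap ceiling, the sentence_overlap values, and the recursive requirements) can be checked client-side before sending a request. The following is a minimal sketch; the function name is illustrative and not part of any Elasticsearch client:

```python
# Hypothetical client-side validator for the chunking_settings constraints
# documented above. Defaults mirror the documented default values.

def validate_chunking_settings(settings: dict) -> None:
    """Raise ValueError if the settings violate the documented constraints."""
    strategy = settings.get("strategy", "sentence")  # default strategy
    max_chunk_size = settings.get("max_chunk_size", 250)

    # max_chunk_size cannot be lower than 20 (sentence) or 10 (word).
    minimum = {"sentence": 20, "word": 10}.get(strategy)
    if minimum is not None and max_chunk_size < minimum:
        raise ValueError(f"max_chunk_size must be >= {minimum} for the {strategy} strategy")

    if strategy == "word":
        # overlap cannot be higher than half the max_chunk_size value.
        overlap = settings.get("overlap", 100)
        if overlap > max_chunk_size / 2:
            raise ValueError("overlap cannot exceed half of max_chunk_size")

    if strategy == "sentence":
        # sentence_overlap can be either 1 or 0.
        if settings.get("sentence_overlap", 1) not in (0, 1):
            raise ValueError("sentence_overlap must be 0 or 1")

    if strategy == "recursive":
        # recursive requires either separators or separator_group.
        if "separators" not in settings and "separator_group" not in settings:
            raise ValueError("recursive requires separators or separator_group")

# A word-strategy configuration: overlap 40 is within half of 100, so this passes.
validate_chunking_settings({"strategy": "word", "max_chunk_size": 100, "overlap": 40})
```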
  • service string Required

    The type of service supported for the specified task type. In this case, openshift_ai.

    Value is openshift_ai.

  • service_settings object Required

    Settings used to install the inference model. These settings are specific to the openshift_ai service.

    • api_key string Required

      A valid API key for your OpenShift AI endpoint. You can find it in the Token authentication section of the model's information.

    • url string Required

      The URL of the OpenShift AI hosted model endpoint.

    • model_id string

      The name of the model to use for the inference task. Refer to the hosted model's documentation for the name if needed. The service has been tested and confirmed to work with the following models:

      • For the text_embedding task: gritlm-7b.
      • For the completion and chat_completion tasks: llama-31-8b-instruct.
      • For the rerank task: bge-reranker-v2-m3.
    • max_input_tokens number

      For a text_embedding task, the maximum number of tokens per input before chunking occurs.

    • similarity string

      For a text_embedding task, the similarity measure. One of cosine, dot_product, l2_norm.

      Values are cosine, dot_product, or l2_norm.

    • rate_limit object

      This setting helps to minimize the number of rate limit errors returned from the OpenShift AI API. By default, the openshift_ai service sets the number of requests allowed per minute to 3000.

      • requests_per_minute number

        The number of requests allowed per minute. By default, the number of requests allowed per minute is set by each service as follows:

        • alibabacloud-ai-search service: 1000
        • anthropic service: 50
        • azureaistudio service: 240
        • azureopenai service and task type text_embedding: 1440
        • azureopenai service and task type completion: 120
        • cohere service: 10000
        • contextualai service: 1000
        • elastic service and task type chat_completion: 240
        • googleaistudio service: 360
        • googlevertexai service: 30000
        • hugging_face service: 3000
        • jinaai service: 2000
        • llama service: 3000
        • mistral service: 240
        • openai service and task type text_embedding: 3000
        • openai service and task type completion: 500
        • openshift_ai service: 3000
        • voyageai service: 2000
        • watsonxai service: 120
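
        For illustration, the default can be overridden by adding a rate_limit object to service_settings when creating the endpoint. This is a sketch; the URL, token, and the value 1200 are placeholders:

        ```json
        {
          "service": "openshift_ai",
          "service_settings": {
            "url": "openshift-ai-embeddings-url",
            "api_key": "openshift-ai-embeddings-token",
            "model_id": "gritlm-7b",
            "rate_limit": {
              "requests_per_minute": 1200
            }
          }
        }
        ```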
  • task_settings object

    Settings to configure the inference task. Applies only to the rerank task type. Not applicable to the text_embedding, completion, or chat_completion task types. These settings are specific to the task type you specified.

    • return_documents boolean

      For a rerank task, whether to return the source documents in the response.

    • top_n number

      For a rerank task, the number of most relevant documents to return.

Responses

  • 200 application/json
    • chunking_settings object

      The chunking configuration object. Applies only to the sparse_embedding and text_embedding task types. Not applicable to the rerank, completion, or chat_completion task types.

      • max_chunk_size number

        The maximum size of a chunk in words. This value cannot be lower than 20 (for sentence strategy) or 10 (for word strategy). This value should not exceed the window size for the associated model.

        Default value is 250.

      • overlap number

        The number of overlapping words for chunks. It is applicable only to a word chunking strategy. This value cannot be higher than half the max_chunk_size value.

        Default value is 100.

      • sentence_overlap number

        The number of overlapping sentences for chunks. It is applicable only for a sentence chunking strategy. It can be either 1 or 0.

        Default value is 1.

      • separator_group string

        Only applicable to the recursive strategy and required when using it.

        Sets a predefined list of separators in the saved chunking settings based on the selected text type. Values can be markdown or plaintext.

        Using this parameter is an alternative to manually specifying a custom separators list.

      • separators array[string]

        Only applicable to the recursive strategy and required when using it.

        A list of strings used as possible split points when chunking text.

        Each string can be a plain string or a regular expression (regex) pattern. The system tries each separator in order to split the text, starting from the first item in the list.

        After splitting, it attempts to recombine smaller pieces into larger chunks that stay within the max_chunk_size limit, to reduce the total number of chunks generated.

      • strategy string

        The chunking strategy: sentence, word, none, or recursive.

        • If strategy is set to recursive, you must also specify:

          • max_chunk_size
          • either separators or separator_group

        Learn more about different chunking strategies in the linked documentation.

        Default value is sentence.

    • service string Required

      The service type

    • service_settings object Required

      Settings specific to the service

    • task_settings object

      Task settings specific to the service and task type

    • inference_id string Required

      The inference ID

    • task_type string Required

      The task type

      Values are text_embedding, chat_completion, completion, or rerank.

Console:
PUT _inference/text_embedding/openshift-ai-text-embedding
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-embeddings-url",
        "api_key": "openshift-ai-embeddings-token",
        "model_id": "gritlm-7b"
    }
}
Python:
resp = client.inference.put(
    task_type="text_embedding",
    inference_id="openshift-ai-text-embedding",
    inference_config={
        "service": "openshift_ai",
        "service_settings": {
            "url": "openshift-ai-embeddings-url",
            "api_key": "openshift-ai-embeddings-token",
            "model_id": "gritlm-7b"
        }
    },
)
JavaScript:
const response = await client.inference.put({
  task_type: "text_embedding",
  inference_id: "openshift-ai-text-embedding",
  inference_config: {
    service: "openshift_ai",
    service_settings: {
      url: "openshift-ai-embeddings-url",
      api_key: "openshift-ai-embeddings-token",
      model_id: "gritlm-7b",
    },
  },
});
Ruby:
response = client.inference.put(
  task_type: "text_embedding",
  inference_id: "openshift-ai-text-embedding",
  body: {
    "service": "openshift_ai",
    "service_settings": {
      "url": "openshift-ai-embeddings-url",
      "api_key": "openshift-ai-embeddings-token",
      "model_id": "gritlm-7b"
    }
  }
)
PHP:
$resp = $client->inference()->put([
    "task_type" => "text_embedding",
    "inference_id" => "openshift-ai-text-embedding",
    "body" => [
        "service" => "openshift_ai",
        "service_settings" => [
            "url" => "openshift-ai-embeddings-url",
            "api_key" => "openshift-ai-embeddings-token",
            "model_id" => "gritlm-7b",
        ],
    ],
]);
curl:
curl -X PUT -H "Authorization: ApiKey $ELASTIC_API_KEY" -H "Content-Type: application/json" -d '{"service":"openshift_ai","service_settings":{"url":"openshift-ai-embeddings-url","api_key":"openshift-ai-embeddings-token","model_id":"gritlm-7b"}}' "$ELASTICSEARCH_URL/_inference/text_embedding/openshift-ai-text-embedding"
Request examples
Run `PUT _inference/text_embedding/openshift-ai-text-embedding` to create an inference endpoint that performs a `text_embedding` task.
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-embeddings-url",
        "api_key": "openshift-ai-embeddings-token",
        "model_id": "gritlm-7b"
    }
}
Run `PUT _inference/completion/openshift-ai-completion` to create an inference endpoint that performs a `completion` task.
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-completion-url",
        "api_key": "openshift-ai-completion-token",
        "model_id": "llama-31-8b-instruct"
    }
}
Run `PUT _inference/chat_completion/openshift-ai-chat-completion` to create an inference endpoint that performs a `chat_completion` task.
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-chat-completion-url",
        "api_key": "openshift-ai-chat-completion-token",
        "model_id": "llama-31-8b-instruct"
    }
}
Run `PUT _inference/rerank/openshift-ai-rerank` to create an inference endpoint that performs a `rerank` task.
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-rerank-url",
        "api_key": "openshift-ai-rerank-token",
        "model_id": "bge-reranker-v2-m3"
    }
}
Run `PUT _inference/rerank/openshift-ai-rerank` to create an inference endpoint that performs a `rerank` task, specifying custom `task_settings` and omitting the `model_id` if the deployed model doesn't require it.
{
    "service": "openshift_ai",
    "service_settings": {
        "url": "openshift-ai-rerank-url",
        "api_key": "openshift-ai-rerank-token"
    },
    "task_settings": {
        "return_documents": true,
        "top_n": 2
    }
}
Response examples (200)
A successful response when creating an OpenShift AI `text_embedding` inference endpoint.
{
  "inference_id": "openshift-ai-text-embedding",
  "task_type": "text_embedding",
  "service": "openshift_ai",
  "service_settings": {
    "model_id": "gritlm-7b",
    "url": "openshift-ai-embeddings-url",
    "rate_limit": {
      "requests_per_minute": 3000
    },
    "dimensions": 4096,
    "similarity": "dot_product",
    "dimensions_set_by_user": false
  },
  "chunking_settings": {
    "strategy": "sentence",
    "max_chunk_size": 250,
    "sentence_overlap": 1
  }
}
A successful response when creating an OpenShift AI `completion` inference endpoint.
{
  "inference_id": "openshift-ai-completion",
  "task_type": "completion",
  "service": "openshift_ai",
  "service_settings": {
    "model_id": "llama-31-8b-instruct",
    "url": "openshift-ai-completion-url",
    "rate_limit": {
      "requests_per_minute": 3000
    }
  }
}
A successful response when creating an OpenShift AI `chat_completion` inference endpoint.
{
  "inference_id": "openshift-ai-chat-completion",
  "task_type": "chat_completion",
  "service": "openshift_ai",
  "service_settings": {
    "model_id": "llama-31-8b-instruct",
    "url": "openshift-ai-chat-completion-url",
    "rate_limit": {
      "requests_per_minute": 3000
    }
  }
}
A successful response when creating an OpenShift AI `rerank` inference endpoint.
{
  "inference_id": "openshift-ai-rerank",
  "task_type": "rerank",
  "service": "openshift_ai",
  "service_settings": {
    "model_id": "bge-reranker-v2-m3",
    "url": "openshift-ai-rerank-url",
    "rate_limit": {
      "requests_per_minute": 3000
    }
  }
}
A successful response when creating an OpenShift AI `rerank` inference endpoint with custom `task_settings` and an omitted `model_id`.
{
  "inference_id": "openshift-ai-rerank",
  "task_type": "rerank",
  "service": "openshift_ai",
  "service_settings": {
    "url": "openshift-ai-rerank-url",
    "rate_limit": {
      "requests_per_minute": 3000
    }
  },
  "task_settings": {
    "return_documents": true,
    "top_n": 2
  }
}