Clear trained model deployment cache API

Clears the inference cache on all nodes where the deployment is assigned.

Request

POST _ml/trained_models/<deployment_id>/deployment/cache/_clear

Prerequisites

Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.
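
To confirm that the calling user holds this privilege, the has privileges API can be used. The following is a minimal sketch, assuming `client` is an already-configured elasticsearch-py client authenticated as the user being checked:

# Check for the manage_ml cluster privilege (illustrative sketch only).
resp = client.security.has_privileges(
    cluster=["manage_ml"],
)
# has_all_requested is true when every requested privilege is held.
print(resp["has_all_requested"])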

Description

A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
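
For context, the per-node cache is controlled when the deployment is started (via the cache_size option of the start trained model deployment API). The following is a minimal Python sketch of the whole cycle, assuming `client` is an already-configured elasticsearch-py client; the explicit cache size shown here is an illustrative value, not a requirement of this API:

# Start a deployment with an explicit per-node inference cache size
# (illustrative value; `client` is assumed to be set up already).
client.ml.start_trained_model_deployment(
    model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
    cache_size="100mb",
)

# ... after serving inference requests, drop the per-node caches
# without restarting the deployment:
client.ml.clear_trained_model_deployment_cache(
    model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
)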

Path parameters

deployment_id
(Required, string) A unique identifier for the deployment of the model.

Examples

The following example clears the cache for the deployment of the elastic__distilbert-base-uncased-finetuned-conll03-english trained model:

Python:

resp = client.ml.clear_trained_model_deployment_cache(
    model_id="elastic__distilbert-base-uncased-finetuned-conll03-english",
)
print(resp)

Ruby:

response = client.ml.clear_trained_model_deployment_cache(
  model_id: 'elastic__distilbert-base-uncased-finetuned-conll03-english'
)
puts response

Console:

POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/cache/_clear

The API returns the following results:

{
   "cleared": true
}