Clear trained model deployment cache API

Clears the inference cache on all nodes where the deployment is assigned.


POST _ml/trained_models/<deployment_id>/deployment/cache/_clear


Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.


A trained model deployment may have an inference cache enabled. As requests are handled by each allocated node, their responses may be cached on that individual node. Calling this API clears the caches without restarting the deployment.
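Clearing the cache is a single POST request with no body, addressed by the deployment's identifier. As a minimal sketch, the endpoint path can be built and sent with any HTTP client; the `clear_cache_path` helper and the `es` client variable below are illustrative, not part of the Elasticsearch API:

```python
def clear_cache_path(deployment_id: str) -> str:
    """Build the clear-cache endpoint path for a trained model deployment."""
    return f"_ml/trained_models/{deployment_id}/deployment/cache/_clear"

# Example (assumes an authenticated Elasticsearch Python client `es`
# with the manage_ml privilege):
#   es.perform_request("POST", "/" + clear_cache_path("my-deployment"))

print(clear_cache_path("elastic__distilbert-base-uncased-finetuned-conll03-english"))
```

Because the caches live on the individual allocated nodes, the request fans out to every node the deployment is assigned to; no restart of the deployment is needed.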

Path parameters

<deployment_id>
(Required, string) A unique identifier for the deployment of the model.


The following example clears the cache for the deployment of the elastic__distilbert-base-uncased-finetuned-conll03-english trained model:

POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/cache/_clear

The API returns the following results:

{
  "cleared": true
}