Stop trained model deployment API

Stops a trained model deployment.

Request

POST _ml/trained_models/<deployment_id>/deployment/_stop

Prerequisites

Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.
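
The privilege can come from any role that grants it. As a minimal sketch, assuming the Python client and a user allowed to manage security, a custom role carrying manage_ml could be created as follows (the role name and connection URL are hypothetical):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Create a custom role that carries the manage_ml cluster privilege.
# The built-in machine_learning_admin role already includes it.
client.security.put_role(
    name="ml_deployment_admin",  # hypothetical role name
    cluster=["manage_ml"],
)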

Description

Deployment is required only for trained models that have a PyTorch model_type.
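
For context, a stop request only applies to a deployment that was previously started; a minimal sketch of the counterpart call with the Python client, assuming a PyTorch model with a hypothetical ID, looks like this:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Start a deployment for an imported PyTorch model so that it can
# later be stopped with stop_trained_model_deployment.
client.ml.start_trained_model_deployment(
    model_id="my_model_for_search",  # hypothetical model ID
)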

Path parameters

<deployment_id>
(Required, string) A unique identifier for the deployment of the model.

Query parameters

allow_no_match

(Optional, Boolean) Specifies what to do when the request:

  • Contains wildcard expressions and there are no deployments that match.
  • Contains the _all string or no identifiers and there are no matches.
  • Contains wildcard expressions and there are only partial matches.

The default value is true, which returns an empty array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.

force
(Optional, Boolean) If true, the deployment is stopped even if it or one of its model aliases is referenced by ingest pipelines. You can’t use these pipelines until you restart the model deployment.
finish_pending_work
(Optional, Boolean) If true, the deployment is stopped after any queued work is completed. Defaults to false.
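
As an illustrative sketch, these query parameters map to keyword arguments of the Python client's stop call; the deployment ID and connection URL below are hypothetical, and the available arguments may vary by client version:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200")  # hypothetical endpoint

# Stop the deployment even if ingest pipelines still reference the model,
# and return a 404 instead of an empty result when nothing matches.
resp = client.ml.stop_trained_model_deployment(
    model_id="my_model_for_search",  # hypothetical deployment ID
    force=True,
    allow_no_match=False,
)
print(resp)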

Examples

The following example stops the my_model_for_search deployment:

Python:

resp = client.ml.stop_trained_model_deployment(
    model_id="my_model_for_search",
)
print(resp)

Ruby:

response = client.ml.stop_trained_model_deployment(
  model_id: 'my_model_for_search'
)
puts response

Console:

POST _ml/trained_models/my_model_for_search/deployment/_stop
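
On success, the API returns a simple acknowledgement; in recent versions this is typically {"stopped": true}, though the exact response body may vary by version.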