Start trained model deployment API

Starts a new trained model deployment.

This functionality is in technical preview and may be changed or removed in a future release. Elastic will work to fix any issues, but features in technical preview are not subject to the support SLA of official GA features.

Request

POST _ml/trained_models/<model_id>/deployment/_start

Prerequisites

Requires the manage_ml cluster privilege. This privilege is included in the machine_learning_admin built-in role.

Description

Currently, only PyTorch models are supported for deployment. When deployed, the model attempts allocation to every machine learning node. Once deployed, the model can be used by the Inference processor in an ingest pipeline or directly with the Infer trained model deployment API.
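
For example, once the deployment has started, an ingest pipeline can reference the model through an inference processor. The following is a minimal sketch rather than a complete recipe: the pipeline name is hypothetical, it assumes incoming documents supply the model input in the field the model expects (typically text_field for these models), and it relies on the processor's default target field for the results. The model ID is the one used in the example at the end of this page.

PUT _ingest/pipeline/ner-pipeline
{
    "processors": [
        {
            "inference": {
                "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english"
            }
        }
    ]
}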

Path parameters

<model_id>
(Required, string) The unique identifier of the trained model.

Query parameters

inference_threads
(Optional, integer) Sets the number of threads used by the inference process. Increasing this value generally increases inference speed. Inference is a compute-bound process, so a value greater than the number of available CPU cores on the machine does not increase speed further. Defaults to 1.
model_threads
(Optional, integer) Indicates how many threads are used when sending inference requests to the model. Increasing this value generally increases the throughput. Defaults to 1.
queue_capacity
(Optional, integer) Controls how many inference requests are allowed in the queue at a time. Every machine learning node in the cluster where the model can be allocated has a queue of this size; when the number of requests exceeds the total value, new requests are rejected with a 429 error. Defaults to 1024.
timeout
(Optional, time) Controls the amount of time to wait for the model to deploy. Defaults to 20 seconds.
wait_for
(Optional, string) Specifies the allocation status to wait for before returning. Defaults to started. The value starting indicates deployment is starting but not yet on any node. The value started indicates the model has started on at least one node. The value fully_allocated indicates the deployment has started on all valid nodes.
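
For example, the following request starts a deployment with explicit threading and queue settings. The parameter values are illustrative, and the model ID is the one used in the example below:

POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?inference_threads=2&model_threads=2&queue_capacity=512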

Examples

The following example starts a new deployment for the elastic__distilbert-base-uncased-finetuned-conll03-english trained model:

POST _ml/trained_models/elastic__distilbert-base-uncased-finetuned-conll03-english/deployment/_start?wait_for=started&timeout=1m

The API returns the following results:

{
    "allocation": {
        "task_parameters": {
            "model_id": "elastic__distilbert-base-uncased-finetuned-conll03-english",
            "model_bytes": 265632637
        },
        "routing_table": {
            "uckeG3R8TLe2MMNBQ6AGrw": {
                "routing_state": "started",
                "reason": ""
            }
        },
        "allocation_state": "started",
        "start_time": "2021-11-02T11:50:34.766591Z"
    }
}