Get trained models statistics API

Retrieves usage information for trained models.


GET _ml/trained_models/_stats

GET _ml/trained_models/_all/_stats

GET _ml/trained_models/<model_id>/_stats

GET _ml/trained_models/<model_id>,<model_id_2>/_stats

GET _ml/trained_models/<model_id_pattern*>,<model_id_2>/_stats


Requires the monitor_ml cluster privilege. This privilege is included in the machine_learning_user built-in role.


You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.

Path parameters

<model_id>
(Optional, string) The unique identifier of the trained model or a model alias.

Query parameters


allow_no_match
(Optional, Boolean) Specifies what to do when the request:

  • Contains wildcard expressions and there are no models that match.
  • Contains the _all string or no identifiers and there are no matches.
  • Contains wildcard expressions and there are only partial matches.

The default value is true, which returns an empty array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
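The documented behavior can be sketched as a small decision function. This is a hypothetical client-side illustration only, not server code; the function name and parameters are invented for the sketch:

```python
def resolve_match(matched_stats, allow_no_match=True, partial=False):
    """Mimic the documented allow_no_match behavior (hypothetical sketch).

    Returns a (status_code, body) pair: 404 with no body when
    allow_no_match is false and there are no (or only partial) matches,
    otherwise 200 with an empty array or the matching subset.
    """
    if (not matched_stats or partial) and not allow_no_match:
        return 404, None
    return 200, {"count": len(matched_stats),
                 "trained_model_stats": matched_stats}
```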

from
(Optional, integer) Skips the specified number of models. The default value is 0.

size
(Optional, integer) Specifies the maximum number of models to obtain. The default value is 100.
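Assuming the usual Elasticsearch pagination semantics, a client can walk through all models by advancing the skip offset in steps of the page size. The helper below is a hypothetical client-side convenience, not part of the API:

```python
def page_offsets(total, size=100):
    """Yield the offsets needed to page through `total` models.

    `total` would typically come from the `count` field of a first
    response; each yielded offset is then sent as the `from` query
    parameter together with `size`.
    """
    for offset in range(0, total, size):
        yield offset
```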

Response body

count
(integer) The total number of trained model statistics that matched the requested ID patterns. This value can be higher than the number of items in the trained_model_stats array because the size of the array is limited by the supplied size parameter.

trained_model_stats
(array) An array of trained model statistics, which are sorted by the model_id value in ascending order.

Properties of trained model stats

deployment_stats
(list) A collection of deployment stats, if one of the provided model_id values is deployed.

Properties of deployment stats

allocation_status
(object) The detailed allocation status given the deployment configuration.

Properties of allocation stats
allocation_count
(integer) The current number of nodes where the model is allocated.

state
(string) The detailed allocation state related to the nodes.

  • starting: Allocations are being attempted but no node currently has the model allocated.
  • started: At least one node has the model allocated.
  • fully_allocated: The deployment is fully allocated and satisfies the target_allocation_count.
target_allocation_count
(integer) The desired number of nodes for model allocation.

model_id
(string) The unique identifier of the trained model.
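The three allocation states above follow directly from the two counts. As a rough illustration (a hypothetical helper; the real state is reported by the API itself):

```python
def allocation_state(allocation_count, target_allocation_count):
    """Derive the documented allocation state from the two counts.

    Hypothetical sketch: starting means no node has the model yet,
    fully_allocated means the target is satisfied, started means at
    least one node has it but the target is not yet reached.
    """
    if allocation_count == 0:
        return "starting"
    if allocation_count >= target_allocation_count:
        return "fully_allocated"
    return "started"
```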

nodes
(array of objects) The deployment stats for each node that currently has the model allocated.

Properties of node stats
average_inference_time_ms
(double) The average time for each inference call to complete on this node.

inference_count
(integer) The total number of inference calls made against this node for this model.

last_access
(long) The epoch time stamp of the last inference call for the model on this node.

node
(object) Information pertaining to the node.

Properties of node
attributes
(object) Lists node attributes such as ml.machine_memory or ml.max_open_jobs settings.

ephemeral_id
(string) The ephemeral ID of the node.

id
(string) The unique identifier of the node.

name
(string) The node name.

transport_address
(string) The host and port where transport HTTP connections are accepted.

reason
(string) The reason for the current state. It is usually populated only when the routing_state is failed.

routing_state
(object) The current routing state and the reason for the current routing state for this allocation.

  • starting: The model is attempting to allocate on this node; inference calls are not yet accepted.
  • started: The model is allocated and ready to accept inference requests.
  • stopping: The model is being deallocated from this node.
  • stopped: The model is fully deallocated from this node.
  • failed: The allocation attempt failed, see reason field for the potential cause.
start_time
(long) The epoch timestamp when the allocation started.

start_time
(long) The epoch timestamp when the deployment started.

state
(string) The overall state of the deployment. The values may be:

  • starting: The deployment has recently started but is not yet usable as the model is not allocated on any nodes.
  • started: The deployment is usable as at least one node has the model allocated.
  • stopping: The deployment is preparing to stop and deallocate the model from the relevant nodes.

inference_stats
(object) A collection of inference stats fields.

Properties of inference stats
missing_all_fields_count
(integer) The number of inference calls where all the training features for the model were missing.

inference_count
(integer) The total number of times the model has been called for inference. This is across all inference contexts, including all pipelines.

cache_miss_count
(integer) The number of times the model was loaded for inference and was not retrieved from the cache. If this number is close to the inference_count, the cache is not being used effectively. This can be solved by increasing the cache size or its time-to-live (TTL). See General machine learning settings for the appropriate settings.
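One way to check cache usage is to compare cache_miss_count against inference_count. The helper below is a hypothetical client-side sketch that takes the inference_stats object from the response:

```python
def cache_miss_ratio(inference_stats):
    """Fraction of inference calls that missed the model cache.

    Hypothetical sketch: a ratio close to 1.0 suggests increasing the
    cache size or its TTL, per the documentation above.
    """
    count = inference_stats.get("inference_count", 0)
    if count == 0:
        return 0.0
    return inference_stats["cache_miss_count"] / count

# Using the numbers from the example response below (3 misses out of
# 178 calls), the ratio is small, so the cache is effective.
stats = {"inference_count": 178, "cache_miss_count": 3}
```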
failure_count
(integer) The number of failures when using the model for inference.

timestamp
(time units) The time when the statistics were last updated.
ingest
(object) A collection of ingest stats for the model across all nodes. The values are summations of the individual node statistics. The format matches the ingest section in Nodes stats.

model_id
(string) The unique identifier of the trained model.

model_size_stats
(object) A collection of model size stats fields.

Properties of model size stats
model_size_bytes
(integer) The size of the model in bytes.

required_native_memory_bytes
(integer) The amount of memory required to load the model in bytes.

pipeline_count
(integer) The number of ingest pipelines that currently refer to the model.

Response codes

404 (Missing resources)
If allow_no_match is false, this code indicates that there are no resources that match the request or only partial matches for the request.


The following example gets usage information for all the trained models:

GET _ml/trained_models/_stats

The API returns the following results:

{
  "count": 2,
  "trained_model_stats": [
    {
      "model_id": "flight-delay-prediction-1574775339910",
      "pipeline_count": 0,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 4,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      }
    },
    {
      "model_id": "regression-job-one-1574775307356",
      "pipeline_count": 1,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 178,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      },
      "ingest": {
        "total": {
          "count": 178,
          "time_in_millis": 8,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "flight-delay": {
            "count": 178,
            "time_in_millis": 8,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "inference": {
                  "type": "inference",
                  "stats": {
                    "count": 178,
                    "time_in_millis": 7,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
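A response like the one above can be summarized per model with a few lines of Python. This is a hypothetical client-side sketch; the `response` dictionary below is an abridged copy of the example body:

```python
# Hypothetical sketch: `response` is an abridged parsed JSON body
# matching the example response above.
response = {
    "count": 2,
    "trained_model_stats": [
        {"model_id": "flight-delay-prediction-1574775339910",
         "pipeline_count": 0,
         "inference_stats": {"failure_count": 0, "inference_count": 4}},
        {"model_id": "regression-job-one-1574775307356",
         "pipeline_count": 1,
         "inference_stats": {"failure_count": 0, "inference_count": 178}},
    ],
}

# Map each model_id to its total inference_count.
summary = {
    stats["model_id"]: stats["inference_stats"]["inference_count"]
    for stats in response["trained_model_stats"]
}
```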