Get trained inference model statistics API

Retrieves usage information for trained inference models.

This functionality is experimental and may be changed or removed completely in a future release. Elastic will take a best effort approach to fix any issues, but experimental features are not subject to the support SLA of official GA features.

Request

GET _ml/inference/_stats

GET _ml/inference/_all/_stats

GET _ml/inference/<model_id>/_stats

GET _ml/inference/<model_id>,<model_id_2>/_stats

GET _ml/inference/<model_id_pattern*>,<model_id_2>/_stats

Prerequisites

The following cluster privilege is required and should be added to a custom role:

  • cluster: monitor_ml

For more information, see Security privileges and Machine learning security privileges.

Description

You can get usage information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.

Path parameters

<model_id>
(Optional, string) The unique identifier of the trained inference model.

Query parameters

allow_no_match

(Optional, boolean) Specifies what to do when the request:

  • Contains wildcard expressions and there are no trained models that match.
  • Contains the _all string or no identifiers and there are no matches.
  • Contains wildcard expressions and there are only partial matches.

The default value is true, which returns an empty trained_model_stats array when there are no matches and the subset of results when there are partial matches. If this parameter is false, the request returns a 404 status code when there are no matches or only partial matches.
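The matching rules above can be sketched as follows. This is a hypothetical client-side illustration (the `select_models` helper and `NoMatchError` are invented for this example, not part of any Elastic client):

```python
import fnmatch

class NoMatchError(Exception):
    """Stands in for the 404 the API returns when allow_no_match is false."""

def select_models(known_ids, expression, allow_no_match=True):
    """Resolve an ID expression against known model IDs, mimicking the API.

    `expression` may be "_all", a comma-separated list of IDs, or contain
    wildcard patterns.
    """
    # "_all" (or an empty expression) matches every model.
    patterns = ["*"] if expression in ("", "_all") else expression.split(",")
    matched, any_missing = [], False
    for pattern in patterns:
        hits = fnmatch.filter(known_ids, pattern)
        if not hits:
            any_missing = True
        matched.extend(m for m in hits if m not in matched)
    # With allow_no_match=False, no matches or only partial matches -> 404.
    if not allow_no_match and (not matched or any_missing):
        raise NoMatchError(expression)
    return matched
```

With the default `allow_no_match=True`, an unmatched pattern simply yields an empty (or partial) result rather than an error.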

from
(Optional, integer) Skips the specified number of trained models. The default value is 0.
size
(Optional, integer) Specifies the maximum number of trained models to obtain. The default value is 100.
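Together, from and size let a client page through a large number of models. A minimal pagination sketch, with `fetch_stats_page` standing in for the HTTP call (both helpers are hypothetical, written for this example):

```python
def fetch_stats_page(all_stats, from_=0, size=100):
    """Stand-in for GET _ml/inference/_all/_stats?from=...&size=...
    Returns the same shape as the API: a total count plus one page."""
    return {
        "count": len(all_stats),
        "trained_model_stats": all_stats[from_:from_ + size],
    }

def iter_all_stats(all_stats, page_size=100):
    """Yield every model's stats by advancing `from` in steps of `size`."""
    offset = 0
    while True:
        page = fetch_stats_page(all_stats, from_=offset, size=page_size)
        items = page["trained_model_stats"]
        if not items:
            break
        yield from items
        offset += len(items)
        if offset >= page["count"]:
            break
```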

Response body

count
(integer) The total number of trained model statistics that matched the requested ID patterns. This value can be higher than the number of items in the trained_model_stats array because the size of the array is limited by the supplied size parameter.
trained_model_stats

(array) An array of trained model statistics, which are sorted by the model_id value in ascending order.

Properties of trained model stats
model_id
(string) The unique identifier of the trained inference model.
pipeline_count
(integer) The number of ingest pipelines that currently refer to the model.
inference_stats

(object) A collection of inference stats fields.

Properties of inference stats
missing_all_fields_count
(integer) The number of inference calls where all the training features for the model were missing.
inference_count
(integer) The total number of times the model has been called for inference. This is across all inference contexts, including all pipelines.
cache_miss_count
(integer) The number of times the model was loaded for inference and was not retrieved from the cache. If this number is close to the inference_count, then the cache is not being appropriately used. This can be remedied by increasing the cache’s size or its time-to-live (TTL). See General machine learning settings for the appropriate settings.
failure_count
(integer) The number of failures when using the model for inference.
timestamp
(time units) The time when the statistics were last updated.
ingest
(object) A collection of ingest stats for the model across all nodes. The values are summations of the individual node statistics. The format matches the ingest section in Nodes stats.
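As a concrete illustration of the cache_miss_count guidance above, a client could compute the miss ratio from the inference_stats object. The `cache_miss_ratio` helper is hypothetical, written for this example:

```python
def cache_miss_ratio(inference_stats):
    """Fraction of inference calls that had to load the model instead of
    hitting the cache. A value near 1.0 suggests increasing the cache's
    size or its TTL."""
    count = inference_stats.get("inference_count", 0)
    if count == 0:
        return 0.0
    return inference_stats["cache_miss_count"] / count
```

For the two models in the example response below, the ratios are 3/4 = 0.75 (a poorly used cache) and 3/178 ≈ 0.017 (a well used one).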

Response codes

404 (Missing resources)
If allow_no_match is false, this code indicates that there are no resources that match the request or only partial matches for the request.

Examples

The following example gets usage information for all the trained models:

GET _ml/inference/_stats

The API returns the following results:

{
  "count": 2,
  "trained_model_stats": [
    {
      "model_id": "flight-delay-prediction-1574775339910",
      "pipeline_count": 0,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 4,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      }
    },
    {
      "model_id": "regression-job-one-1574775307356",
      "pipeline_count": 1,
      "inference_stats": {
        "failure_count": 0,
        "inference_count": 178,
        "cache_miss_count": 3,
        "missing_all_fields_count": 0,
        "timestamp": 1592399986979
      },
      "ingest": {
        "total": {
          "count": 178,
          "time_in_millis": 8,
          "current": 0,
          "failed": 0
        },
        "pipelines": {
          "flight-delay": {
            "count": 178,
            "time_in_millis": 8,
            "current": 0,
            "failed": 0,
            "processors": [
              {
                "inference": {
                  "type": "inference",
                  "stats": {
                    "count": 178,
                    "time_in_millis": 7,
                    "current": 0,
                    "failed": 0
                  }
                }
              }
            ]
          }
        }
      }
    }
  ]
}
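A response like the one above is straightforward to post-process. The following sketch (the `summarize` helper is invented for this example) builds a per-model summary of inference activity and pipeline usage:

```python
def summarize(response):
    """Map each model_id to its inference_count and whether any ingest
    pipeline currently refers to it."""
    summary = {}
    for stats in response["trained_model_stats"]:
        summary[stats["model_id"]] = {
            "inference_count": stats["inference_stats"]["inference_count"],
            "in_pipelines": stats["pipeline_count"] > 0,
        }
    return summary
```

Applied to the example response, this reports that only regression-job-one-1574775307356 is wired into an ingest pipeline, with 178 inference calls.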