The unique identifier of the trained model or a model alias.
You can get information for multiple trained models in a single API request by using a comma-separated list of model IDs or a wildcard expression.
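For illustration, the identifier styles the endpoint accepts can be sketched with a small path-building helper (the helper below is hypothetical, not part of any official client):

```python
# Hypothetical helper illustrating the identifier styles the endpoint accepts:
# a single model ID, a comma-separated list, or a wildcard expression.
def trained_models_path(model_id=None):
    base = "/_ml/trained_models"
    return f"{base}/{model_id}" if model_id else base

print(trained_models_path("lang_ident_model_1"))  # single model ID
print(trained_models_path("model-a,model-b"))     # comma-separated list
print(trained_models_path("dfa-*"))               # wildcard expression
```

Omitting the identifier entirely requests all trained models.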
Specifies what to do when the request: contains wildcard expressions and there are no models that match; contains the _all string or no identifiers and there are no matches; or contains wildcard expressions and there are only partial matches. If true, it returns an empty array when there are no matches and the subset of results when there are partial matches.
Specifies whether the included model definition should be returned as a JSON map (true) or in a custom compressed format (false).
Indicates if certain fields should be removed from the configuration on retrieval. This allows the configuration to be in an acceptable format to be retrieved and then added to another cluster.
Skips the specified number of models.
A comma delimited string of optional fields to include in the response body.
Supported values include:
definition: Includes the model definition.
feature_importance_baseline: Includes the baseline for feature importance values.
hyperparameters: Includes information about the hyperparameters used to train the model. This information consists of the value, the absolute and relative importance of the hyperparameter, and an indicator of whether it was specified by the user or tuned during hyperparameter optimization.
total_feature_importance: Includes the total feature importance for the training data set. The baseline and total feature importance values are returned in the metadata field in the response body.
definition_status: Includes the model definition status.
Values are definition, feature_importance_baseline, hyperparameters, total_feature_importance, or definition_status.
Specifies the maximum number of models to obtain.
A comma delimited string of tags. A trained model can have many tags, or none. When supplied, only trained models that contain all the supplied tags are returned.
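The all-tags-must-match semantics described above can be sketched as follows (hypothetical helper and sample data, for illustration only):

```python
# Hypothetical sketch of the documented tag-filter semantics: a model is
# returned only when it carries ALL of the supplied tags.
def matches_tags(model_tags, requested_tags):
    return set(requested_tags).issubset(model_tags)

# Illustrative sample data, not real models.
models = {
    "model-a": ["regression", "prod"],
    "model-b": ["regression"],
}
selected = [name for name, tags in models.items()
            if matches_tags(tags, ["regression", "prod"])]
print(selected)  # only model-a carries both tags
```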
An array of trained model resources, which are sorted by the model_id value in ascending order.
Values are tree_ensemble, lang_ident, or pytorch.
A comma delimited string of tags. A trained model can have many tags, or none.
Information on the creator of the trained model.
Any field map described in the inference configuration takes precedence.
The free-text description of the trained model.
The estimated heap usage in bytes to keep the trained model in memory.
The estimated number of operations to use the trained model.
True if the full model definition is present.
Inference configuration provided when storing the model config
Specifies the number of top class predictions to return. Defaults to 0.
Specifies the maximum number of feature importance values per document.
Default value is 0.
Specifies the type of the predicted field to write. Acceptable values are: string, number, boolean. When boolean is provided, 1.0 is transformed to true and 0.0 to false.
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Specifies the field to which the top classes are written. Defaults to top_classes.
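The documented prediction_field_type coercion above can be sketched like this (hypothetical helper, for illustration only):

```python
# Hypothetical sketch of the documented `prediction_field_type` behaviour:
# "string", "number", or "boolean"; for boolean, 1.0 becomes true and
# 0.0 becomes false.
def coerce_prediction(value, field_type="number"):
    if field_type == "boolean":
        return value == 1.0
    if field_type == "string":
        return str(value)
    return float(value)
```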
Text classification configuration options
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Classification labels to apply other than the stored labels. Must have the same dimensions as the default configured labels.
Zero shot classification configuration options
Tokenization options stored in inference configuration
Hypothesis template used when tokenizing labels for prediction
Default value is "This example is {}.".
The zero shot classification labels indicating entailment, neutral, and contradiction. Must contain exactly and only entailment, neutral, and contradiction.
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Indicates if more than one true label exists.
Default value is false.
The labels to predict.
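The constraint on the zero shot classification labels above can be checked client-side like this (hypothetical helper, for illustration only):

```python
# Hypothetical check of the documented constraint: the labels must contain
# exactly and only entailment, neutral, and contradiction.
def validate_zero_shot_labels(labels):
    required = {"entailment", "neutral", "contradiction"}
    if len(labels) != 3 or set(labels) != required:
        raise ValueError(
            "labels must contain exactly entailment, neutral, and contradiction"
        )
    return list(labels)
```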
Fill mask inference options
The string/token which will be removed from incoming documents and replaced with the inference prediction(s). In a response, this field contains the mask token for the specified model/tokenizer. Each model and tokenizer has a predefined mask token which cannot be changed. Thus, it is recommended not to set this value in requests. However, if this field is present in a request, its value must match the predefined value for that model/tokenizer, otherwise the request will fail.
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Named entity recognition options
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
The token classification labels. Must be IOB-formatted tags.
Pass through configuration options
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Text embedding inference options
The number of dimensions in the embedding output
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Text expansion inference options
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
Question answering inference options
Specifies the number of top class predictions to return. Defaults to 0.
Tokenization options stored in inference configuration
The field that is added to incoming documents to contain the inference prediction. Defaults to predicted_value.
The maximum answer length to consider
The license level of the trained model.
An object that contains the baseline for feature importance values. For regression analysis, it is a single value. For classification analysis, there is a value for each class.
List of the available hyperparameters, both those optimized during the fine_parameter_tuning phase and those specified by the user.
A positive number showing how much the parameter influences the variation of the loss function. Present only for hyperparameters whose values are not specified by the user but tuned during hyperparameter optimization.
A number between 0 and 1 showing the proportion of influence on the variation of the loss function among all tuned hyperparameters. Present only for hyperparameters whose values are not specified by the user but tuned during hyperparameter optimization.
Indicates if the hyperparameter is specified by the user (true) or optimized (false).
The value of the hyperparameter, either optimized or specified by the user.
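One plausible reading of the relationship between absolute and relative importance is a normalization over the tuned hyperparameters; the sketch below is an assumption for illustration, not the documented algorithm:

```python
# Assumed normalization (illustrative only): each tuned hyperparameter's
# relative importance as its share of the summed absolute importances,
# yielding values between 0 and 1 that sum to 1.
def relative_importances(absolute):
    total = sum(absolute.values())
    return {name: value / total for name, value in absolute.items()}

print(relative_importances({"eta": 1.0, "max_trees": 3.0}))
```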
An array of the total feature importance for each feature used from the training data set. This array of objects is returned if data frame analytics trained the model and the request includes total_feature_importance in the include request parameter.
A collection of feature importance statistics related to the training data set for this particular feature.
If the trained model is a classification model, feature importance statistics are gathered per target class value.
GET _ml/trained_models/
resp = client.ml.get_trained_models()
const response = await client.ml.getTrainedModels();
response = client.ml.get_trained_models
$resp = $client->ml()->getTrainedModels();
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/trained_models/"
client.ml().getTrainedModels(g -> g);