This functionality is in beta and is subject to change. The design and code is less mature than official GA features and is being provided as-is with no warranties. Beta features are not subject to the support SLA of official GA features.
Creates a new trained model for inference.
The API accepts a PutTrainedModelRequest object as a request and returns a PutTrainedModelResponse.
PutTrainedModelRequest requires the following argument:
TrainedModelConfig object contains all the details about the trained model
configuration and contains the following arguments:
TrainedModelConfig trainedModelConfig = TrainedModelConfig.builder()
    .setDefinition(definition)
    .setCompressedDefinition(InferenceToXContentCompressor.deflate(definition))
    .setModelId("my-new-trained-model")
    .setInput(new TrainedModelInput("col1", "col2", "col3", "col4"))
    .setDescription("test model")
    .setMetadata(new HashMap<>())
    .setTags("my_regression_models")
    .setInferenceConfig(new RegressionConfig("value", 0))
    .build();
The inference definition for the model
Optionally, if the inference definition is large, you may choose to compress it for transport. Do not supply both the compressed and uncompressed definitions.
The unique model id
The input field names for the model definition
Optionally, a human-readable description
Optionally, an object map containing metadata about the model
Optionally, an array of tags to organize the model
The default inference config to use with the model. Must match the underlying definition target_type.
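With the configuration built, it is wrapped in the request object. A minimal sketch, assuming the trainedModelConfig from the example above:

```java
// Wrap the trained model configuration in a request ready for the client
PutTrainedModelRequest request = new PutTrainedModelRequest(trainedModelConfig);
```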
When executing a PutTrainedModelRequest in the following manner, the client waits for the PutTrainedModelResponse to be returned before continuing with code execution:
PutTrainedModelResponse response = client.machineLearning().putTrainedModel(request, RequestOptions.DEFAULT);
Synchronous calls may throw an IOException if the high-level REST client fails to parse the REST response, the request times out, or no response comes back from the server.
In cases where the server returns a 5xx error code, the high-level client instead tries to parse the error details from the response body, then throws an ElasticsearchException and adds the original ResponseException to it as a suppressed exception.
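The two failure modes can be handled separately. A hedged sketch, assuming a client and request have already been constructed:

```java
try {
    PutTrainedModelResponse response =
        client.machineLearning().putTrainedModel(request, RequestOptions.DEFAULT);
} catch (ElasticsearchException e) {
    // Server returned an error code; the original ResponseException
    // is attached as a suppressed exception
} catch (IOException e) {
    // Response could not be parsed, the request timed out,
    // or no response came back from the server
}
```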
Executing a PutTrainedModelRequest can also be done in an asynchronous fashion so that the client can return directly. Users need to specify how the response or potential failures will be handled by passing the request and a listener to the asynchronous put-trained-model method:
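A sketch of the asynchronous call, assuming the request and a listener (shown further below) have already been constructed:

```java
// Returns immediately; the listener is notified when the response arrives
client.machineLearning().putTrainedModelAsync(request, RequestOptions.DEFAULT, listener);
```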
The asynchronous method does not block and returns immediately. Once the request completes, the ActionListener is called back using the onResponse method if the execution successfully completed or using the onFailure method if it failed. Failure scenarios and expected exceptions are the same as in the synchronous execution case.
A typical listener for
put-trained-model looks like:
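A sketch of such a listener, implementing the two callback methods described above:

```java
ActionListener<PutTrainedModelResponse> listener = new ActionListener<PutTrainedModelResponse>() {
    @Override
    public void onResponse(PutTrainedModelResponse response) {
        // Called when the execution is successfully completed
    }

    @Override
    public void onFailure(Exception e) {
        // Called in case of a failure; the same exceptions apply
        // as in the synchronous execution case
    }
};
```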
The returned PutTrainedModelResponse contains the newly created trained model. Note that the PutTrainedModelResponse will omit the model definition as a precaution against streaming large model definitions back to the client.
TrainedModelConfig model = response.getResponse();