Estimate anomaly detection job model memory API

Estimate the model memory an analysis config is likely to need for the given cardinality of the fields it references.

Estimate anomaly detection job model memory request

An EstimateModelMemoryRequest can be set up as follows:

Detector.Builder detectorBuilder = new Detector.Builder()
    .setFunction("count")
    .setPartitionFieldName("status");
AnalysisConfig.Builder analysisConfigBuilder =
    new AnalysisConfig.Builder(Collections.singletonList(detectorBuilder.build()))
    .setBucketSpan(TimeValue.timeValueMinutes(10))
    .setInfluencers(Collections.singletonList("src_ip"));
EstimateModelMemoryRequest request =
    new EstimateModelMemoryRequest(analysisConfigBuilder.build());        // pass the AnalysisConfig to the constructor
request.setOverallCardinality(Collections.singletonMap("status", 50L));   // overall cardinality of the "status" partition field
request.setMaxBucketCardinality(Collections.singletonMap("src_ip", 30L)); // highest cardinality of the "src_ip" influencer in any single bucket

Pass an AnalysisConfig to the constructor.

For any by_field_name, over_field_name or partition_field_name fields referenced by the detectors, supply overall cardinality estimates in a Map.

For any influencers, supply a Map containing estimates of the highest cardinality expected in any single bucket.
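If the detectors reference several fields, each map can carry one entry per field. The following is a minimal sketch, not part of the example above, that assumes a hypothetical extra partition field "host" and a hypothetical second influencer "dest_ip":

Map<String, Long> overallCardinality = new HashMap<>();
overallCardinality.put("status", 50L);
overallCardinality.put("host", 1000L);       // hypothetical extra partition field
request.setOverallCardinality(overallCardinality);

Map<String, Long> maxBucketCardinality = new HashMap<>();
maxBucketCardinality.put("src_ip", 30L);
maxBucketCardinality.put("dest_ip", 40L);    // hypothetical extra influencer
request.setMaxBucketCardinality(maxBucketCardinality);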

Synchronous execution

When executing an EstimateModelMemoryRequest in the following manner, the client waits for the EstimateModelMemoryResponse to be returned before continuing with code execution:

EstimateModelMemoryResponse estimateModelMemoryResponse =
    client.machineLearning().estimateModelMemory(request, RequestOptions.DEFAULT);

Synchronous calls may throw an IOException if the high-level REST client fails to parse the REST response, if the request times out, or in similar cases where no response comes back from the server.

In cases where the server returns a 4xx or 5xx error code, the high-level client instead tries to parse the error details from the response body, then throws a generic ElasticsearchException and adds the original ResponseException to it as a suppressed exception.
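A minimal sketch of how these failures might be handled, assuming the request built above and that the caller only needs to inspect the errors:

try {
    EstimateModelMemoryResponse estimateModelMemoryResponse =
        client.machineLearning().estimateModelMemory(request, RequestOptions.DEFAULT);
} catch (ElasticsearchException e) {
    // the server returned a 4xx or 5xx status code; the original
    // ResponseException is available among the suppressed exceptions
    RestStatus status = e.status();
    Throwable[] suppressed = e.getSuppressed();
} catch (IOException e) {
    // no response came back from the server, e.g. a timeout or an
    // unparsable REST response
}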

Asynchronous execution

Executing an EstimateModelMemoryRequest can also be done asynchronously so that the client returns immediately. Users need to specify how the response or potential failures will be handled by passing the request and a listener to the asynchronous estimate-model-memory method:

client.machineLearning()
    .estimateModelMemoryAsync(request, RequestOptions.DEFAULT, listener); 

The EstimateModelMemoryRequest to execute and the ActionListener to use when the execution completes.

The asynchronous method does not block and returns immediately. Once the request has completed, the ActionListener is called back using the onResponse method if the execution completed successfully or the onFailure method if it failed. Failure scenarios and expected exceptions are the same as in the synchronous execution case.

A typical listener for estimate-model-memory looks like:

ActionListener<EstimateModelMemoryResponse> listener = new ActionListener<EstimateModelMemoryResponse>() {
    @Override
    public void onResponse(EstimateModelMemoryResponse estimateModelMemoryResponse) {
        // called when the execution is successfully completed
    }

    @Override
    public void onFailure(Exception e) {
        // called when the whole EstimateModelMemoryRequest fails
    }
};

Called when the execution is successfully completed.

Called when the whole EstimateModelMemoryRequest fails.
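If the caller needs to block until the asynchronous call completes, for example in a test, one option is to count down a latch from both callbacks. A minimal sketch, assuming a 30-second wait is acceptable (latch.await may throw InterruptedException):

final CountDownLatch latch = new CountDownLatch(1);
ActionListener<EstimateModelMemoryResponse> latchedListener = new ActionListener<EstimateModelMemoryResponse>() {
    @Override
    public void onResponse(EstimateModelMemoryResponse estimateModelMemoryResponse) {
        // handle the response, then release the waiting thread
        latch.countDown();
    }

    @Override
    public void onFailure(Exception e) {
        // handle the failure, then release the waiting thread
        latch.countDown();
    }
};
client.machineLearning().estimateModelMemoryAsync(request, RequestOptions.DEFAULT, latchedListener);
latch.await(30L, TimeUnit.SECONDS);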

Estimate anomaly detection job model memory response

The returned EstimateModelMemoryResponse contains the model memory estimate:

ByteSizeValue modelMemoryEstimate = estimateModelMemoryResponse.getModelMemoryEstimate(); 
long estimateInBytes = modelMemoryEstimate.getBytes();

The model memory estimate.
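The ByteSizeValue can be expressed in whichever unit is convenient, for example when deciding on a job's model_memory_limit. A minimal sketch using ByteSizeValue's standard accessors:

long estimateInMb = modelMemoryEstimate.getMb();           // whole megabytes, rounded down
String estimateAsString = modelMemoryEstimate.toString();  // human-readable form, such as "23mb"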