Introducing deployment autoscaling!
New autoscaling support on Elasticsearch Service helps you balance cost with performance by automatically adjusting the resources available to your deployments. This reduces the need to adjust capacity manually as requirements and loads change over time. In this initial release, data tiers can scale up automatically in response to past and present storage usage, and machine learning nodes can scale both up and down based on the memory requirements of current jobs. Learn more…
Publish Elasticsearch Service prices. Elasticsearch Service pricing is now published on the Elastic website so that current pricing for all providers, regions, and instance types can be viewed in one location.
Include internal details and failure type in logs. The logs generated from a plan change now contain more detail, making it easier for you to debug problems and understand why a plan may have failed. Three new attributes are now included in the step logs:
details: Contains details about a step failure, visible to all console users.
internal_details: Contains sensitive details about the step failure, visible only to admin console users.
failure_type: Describes the type of failure that occurred.
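A step log entry with the new attributes might look like the following sketch (the three attribute names come from this release; all other field names and values are illustrative, not actual log output):

```json
{
  "step": "migrate-shard-data",
  "status": "error",
  "details": "Shard migration timed out waiting for relocation to complete.",
  "internal_details": "Allocator rejected relocation: disk watermark exceeded.",
  "failure_type": "TimeoutError"
}
```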
Add Copy ID link to the application links. The deployment overview now has a Copy ID link, which simplifies getting the application IDs required to set up cross-cluster search and cross-cluster replication.
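Once copied, the ID identifies the remote deployment when you configure cross-cluster search. As a sketch, the classic seed-based remote cluster settings look like the following (the alias my_remote and the endpoint are placeholders; on Elasticsearch Service the exact connection details come from your deployment):

```json
PUT _cluster/settings
{
  "persistent": {
    "cluster": {
      "remote": {
        "my_remote": {
          "seeds": ["<copied-cluster-endpoint>:9400"]
        }
      }
    }
  }
}
```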
Add prices app to the billing service. A prices application is added to the billing service to expose the /v1/prices/adjustment?domain=<domain-id> endpoint as a REST API. Note that the API will validate that domain-id is one of the types.adjustments fields (currently "aws", "azure", "gcp", and "found") and will always return the
Enable Elasticsearch searchable snapshots partial cache settings. Searchable Snapshots partial storage settings can now be configured when you create a new deployment.
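For context, on a self-managed node partial (shared cache) storage is controlled by settings such as the following in elasticsearch.yml; the value shown is illustrative, and on Elasticsearch Service the equivalent is now configurable at deployment creation:

```yaml
# Reserve part of the node's disk as a shared cache for
# partially mounted (searchable snapshot) indices.
xpack.searchable.snapshot.shared_cache.size: 90%
```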
Send an email on GCP paused/ended unsubscribe events. GCP Marketplace customers are now emailed after disconnecting a project with the list of deployments that will be terminated if they don’t reconnect the project. The email includes the timestamp when deployment termination will occur and other details.
Turn on marketplace toggle always. Users can now see both marketplace and non-marketplace prices for AWS on the pricing page.
Enable subscription self-serve for AWS Marketplace users. AWS Marketplace users can now self-select their billing subscription level.
Improve snapshot repository logging. Error reporting is improved for certain failures that can occur when creating snapshot repositories.
Disable internal collection when Metricbeat enabled. Metricbeat monitoring performance is optimized by disabling legacy monitoring collection in Elasticsearch, Kibana, and APM when Metricbeat is in use.
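This is equivalent to turning off legacy self-monitoring collection, e.g. via the cluster settings API (a sketch only; the platform now applies this automatically when Metricbeat monitoring is in use):

```json
PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": false
  }
}
```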
Change user settings validation to validate objects as a whole. New validation rules for user settings require the order setting when specifying a custom realm through user settings for Elasticsearch clusters on version 8.0 or higher.
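For example, on version 8.0 or higher a custom SAML realm specified through user settings must now include an explicit order (the realm name, order value, and URLs below are illustrative):

```yaml
xpack:
  security:
    authc:
      realms:
        saml:
          my-saml-realm:
            order: 2  # now required by user settings validation
            idp.metadata.path: "https://idp.example.com/metadata"
            idp.entity_id: "https://idp.example.com"
            sp.entity_id: "https://kibana.example.com"
```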
Stop sending extraneous exception details. Plan failures shown in the user console now have fewer extraneous, unactionable details in them.
Update Go to 1.15.8. The Elasticsearch Service proxy has been updated to Go version 1.15.8.
Display all snapshots. Fixes a bug where only a subset of available snapshots was displayed in the UI, sometimes resulting in a message inconsistent with the snapshots that actually exist in the cluster.
Update full name and email for SSO if needed. This fix ensures that Elasticsearch Service users will get an updated display name when they SSO into a Stack application, such as Kibana, after updating their email address.
Use disk queue in Metricbeat. Fixes issues on dedicated master instances on version 7.6+ that use the monitoring feature, where memory pressure is elevated and garbage collection is more frequent on the elected master, by using the Metricbeat disk queue.
Add voting exclusion for instances losing master role. Fixes a variety of edge cases that could lead to cluster quorum loss on 7.x+ clusters, such as running a plan that switches from multiple master nodes to a single master node.
Fix "GC Overhead Per Node" metric. Fixes a bug that prevented the "GC Overhead Per Node" metric on the console Performance page from working properly.
Fix console request metrics query. Fixes bug where user console metrics would not show request metrics.
Handle terminated deployments and missing templates on Edit screen. Editing terminated or certain system deployments should no longer throw an error.
Keep legacy exporter enabled when monitoring with Metricbeat. Fixes the following three bugs:
- Legacy collection monitoring of externally deployed services (e.g. Logstash) was disabled when Metricbeat monitoring was enabled in Cloud.
- Monitoring index retention was not enforced when self-monitoring was enabled.
- Restoring a snapshot into a new deployment with cluster state could restore broken monitoring settings that required manual Elasticsearch settings changes.
Apply correct timestamps to downloaded bundles. Fixes a bug that could cause instances to bootloop during rolling plans if a cluster is configured with user bundles.
Enable AttemptClusterStabilisation feature flag. Running a plan in which some instances will be mutated will now first (re-)start any other instances that are not running. This mitigates the risk of losing cluster quorum during certain plans (such as adding dedicated masters) when the cluster is in an abnormal state.
Use recommended JVM heap allocation for dedicated masters. Fixes an issue where dedicated masters could OOM due to an over-allocated heap size.
Get rid of nested retry loop. Cluster creation plans which fail will now fail faster instead of hanging unnecessarily during the rollback-migrate step.
Avoid chown of home directory when log delivery is enabled. Fixes a bug where Kibana can take a long time to start when log delivery is enabled.
Use smaller Elasticsearch heap when Filebeat and Metricbeat are running. Fixes a bug where small master-only instances and tiebreaker instances could experience memory swapping when logs and metrics are enabled on Elasticsearch clusters.
Make some ES domain fields optional. Fixes a bug that would sometimes cause plans to fail during the Migrating shard data step.
Clear (don’t set) initial-master-nodes if cluster already bootstrapped. Clusters will no longer end up in a split brain state if masters are added while all other masters are currently offline.
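For context, cluster.initial_master_nodes must only be set when bootstrapping a brand-new cluster, and should be cleared once the cluster has formed (node names below are placeholders):

```yaml
# Only for first-time cluster bootstrap; remove this setting
# after the cluster has formed to avoid split-brain scenarios.
cluster.initial_master_nodes:
  - master-node-1
  - master-node-2
  - master-node-3
```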