IMPORTANT: No additional bug fixes or documentation updates will be released for this version. For the latest information, see the current release documentation.
Upgrading to Enterprise Search 8.0.0-rc1? See Upgrading & migrating.
Kibana host value in Enterprise Search configuration now defaults to
- Use data streams for log indices
- Deprecate the ilm.enabled configuration setting. ILM is now enabled for all deployments.
- Starting with 8.0.0, Enterprise Search requires Java 11 to be installed on the system (Java 8 is approaching end-of-life and is no longer supported). Docker images with Enterprise Search use Java 11 by default.
- The crawler APIs are Generally Available as of v8.0.
- Add support for crawling with stateless auth by configuring each domain. Requires a Platinum or higher license.
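Stateless auth means the crawler presents credentials on every request rather than maintaining a session. As an illustration of the underlying mechanism only (not the product's configuration format), an HTTP Basic `Authorization` header is built like this:

```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # Encode "user:pass" as base64, per RFC 7617 (HTTP Basic auth).
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Illustrative credentials only; the crawler would send this header
# with every request to a domain configured for stateless auth.
header = basic_auth_header("crawler", "secret")
print(header)
```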
- App Search API users can now control the number of Elasticsearch shards used for their engines by providing an index_create_settings_override parameter in the Engine Create API call. If App Search fails to create the Elasticsearch index that powers an engine, it rolls back the engine creation process to keep the product in a consistent state.
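A minimal sketch of what an Engine Create request body with this override might look like. The engine name and the specific setting shown (number_of_shards) are illustrative assumptions; consult the App Search API reference for the exact accepted keys.

```python
import json

# Hypothetical Engine Create payload overriding index creation settings.
# "number_of_shards" is shown as an illustrative Elasticsearch index
# setting; the accepted keys may differ by version.
payload = {
    "name": "my-engine",
    "index_create_settings_override": {
        "number_of_shards": 3,
    },
}

body = json.dumps(payload)
print(body)
```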
- URL validation checks have been improved to always check the validity of the provided URL.
- Curations organic results displayed in the UI now take search relevance tunings into account.
- Added geo sorting support to the Search API.
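Geo sorting orders results on a geolocation field by distance from a center point. A hedged sketch of what such a search request body could look like; the field name `location`, the query, and the coordinates are assumptions, not values from this release.

```python
import json

# Hypothetical Search API body sorting results by distance (ascending)
# from a center point; "location" is an assumed geolocation field name.
search_request = {
    "query": "parks",
    "sort": [
        {"location": {"center": [38.89, -77.03], "order": "asc"}},
    ],
}

print(json.dumps(search_request))
```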
- The crawler now supports the Brotli content compression algorithm. It is also now possible to disable HTTP content compression in the Enterprise Search crawler.
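HTTP content compression is negotiated through the Accept-Encoding request header, so disabling it amounts to not advertising any compressed encodings. The sketch below shows the standard HTTP mechanism, not the product's configuration format:

```python
def accept_encoding_header(compression_enabled: bool) -> dict:
    # Advertise Brotli ("br") alongside gzip/deflate when compression
    # is on; request an uncompressed response ("identity") otherwise.
    if compression_enabled:
        return {"Accept-Encoding": "br, gzip, deflate"}
    return {"Accept-Encoding": "identity"}

print(accept_encoding_header(True))
```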
- Crawler metrics have been added to Enterprise Search Stats API.
- Updated crawler routes from v0 to v1. The v0 routes are now deprecated: they still work for now, but will be removed in an upcoming release.
- During a purge crawl, the crawler would fetch URLs from sitemaps. This was unnecessary and could make the crawl take much longer to finish for configurations with many sitemaps (both manually added and auto-discovered from robots.txt).
- Fixed a bug where disabling Web Crawler logs in Settings still resulted in logs being indexed into Elasticsearch.
- Increased resiliency of background work queueing code (background jobs, crawler jobs, etc.) by retrying transient Elasticsearch errors.
- Improve reliability of the Crawl Queue by retrying transient errors from Elasticsearch.
- The crawler now fails gracefully when it detects an unsupported content encoding in an HTTP response.
- Allow indexing additional fields for predefined Salesforce objects using the indexing config.
- Change caching logic for Salesforce connector to cache less data, reducing the memory requirement of the connector.
- The updated_at field for documents indexed in Content Sources is now populated with the last time the document was indexed into Workplace Search.