Upgrading to Enterprise Search 8.5.0? See Upgrading and migrating.
Licensing enforcement for the Elastic web crawler and native connectors:
- The Elastic web crawler on self-managed Elasticsearch deployments now requires a Platinum license at minimum.
- Native connectors on self-managed Elasticsearch deployments require a Platinum license at minimum.
Existing self-managed users on a Basic license will not be able to create new web crawler configurations in 8.5.0 unless they upgrade their license. Note that these features are available with a Standard Elastic Cloud subscription.
Refer to the subscriptions pages for Elastic Cloud and self-managed deployments for full licensing information.
Use two new native connectors (Tech Preview) to index content into search-optimized Elasticsearch indices and use them in search engines:
- MongoDB native connector, written in Ruby.
- MySQL native connector, written in Python.
- Deployments using connectors or crawler must include the Enterprise Search service, and the service must have at least 4 GB RAM per zone. To verify or change the RAM available to the Enterprise Search service on an Elastic Cloud deployment, see Infrastructure requirements.
The new Enterprise Search connector framework in Python (Tech preview) enables users to:
- Build and deploy a customized connector and ingest documents into a search-optimized Elasticsearch index.
- Customize the existing library of Python connectors, like MySQL.
Connectors built with the Python/Ruby connector clients now apply default Elasticsearch index mappings and settings when creating a new documents index.
Approximate kNN is now available in Elasticsearch search for App Search (technical preview).
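As a rough illustration of what an approximate kNN request against a search-optimized Elasticsearch index might look like, here is a sketch of the standard Elasticsearch 8.x `knn` search body. The field name, vector values, and index name are hypothetical, not Enterprise Search defaults:

```python
# Sketch of an approximate kNN search request body (Elasticsearch 8.x).
# "title_embedding" is a hypothetical dense_vector field; the query
# vector values below are placeholders.
knn_request = {
    "knn": {
        "field": "title_embedding",       # dense_vector field to search
        "query_vector": [0.12, -0.45, 0.91],
        "k": 10,                          # nearest neighbors to return
        "num_candidates": 100,            # candidates considered per shard
    },
    "_source": ["title", "url"],
}

# With the official Python client this body would be sent via, e.g.:
# Elasticsearch("https://localhost:9200").search(index="search-my-index", body=knn_request)
print(knn_request["knn"]["k"])
```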
Use new ingest pipelines (Generally Available) to optimize indices created in the Enterprise Search content management UI:
- Apply machine learning models at ingest-time.
- Extract binary content (enabled by default) to generate documents from file formats such as PDF and DOCX.
- Reduce whitespace for incoming documents.
A new generic ingest pipeline, `ent-search-generic-ingestion`, can be used by any ingestion mechanism, and is used by the web crawler and connectors by default. The "Copy and customize" option lets users create and add their own ingest pipeline, to be run alongside the framework-managed pipeline.
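A minimal sketch of what a customized pipeline created alongside the framework-managed one could contain, using standard Elasticsearch ingest processors. The pipeline name and the fields being processed are illustrative assumptions, not names Enterprise Search guarantees:

```python
# Hedged sketch: a small custom ingest pipeline that could run alongside
# the framework-managed ent-search-generic-ingestion pipeline.
# The pipeline id and field names below are illustrative only.
custom_pipeline = {
    "description": "Example custom pipeline for a crawler index",
    "processors": [
        # trim surrounding whitespace from a text field
        {"trim": {"field": "title", "ignore_missing": True}},
        # normalize a field's case
        {"lowercase": {"field": "url_path", "ignore_missing": True}},
    ],
}

# With the official Python client this could be registered via, e.g.:
# Elasticsearch("https://localhost:9200").ingest.put_pipeline(
#     id="my-index@custom",
#     description=custom_pipeline["description"],
#     processors=custom_pipeline["processors"],
# )
print(len(custom_pipeline["processors"]))
```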
- Connectors built with the Python/Ruby connector clients will automatically index documents using the ingest pipeline and pipeline settings specified in the index’s Pipelines tab.
New fields were added to the definition of the `.elastic-connectors` index to represent ingest pipeline options.
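Under the hood, routing documents through a pipeline at index time relies on the standard Elasticsearch `index.default_pipeline` setting. The following is a sketch of that mechanism only; the index name is hypothetical, and in practice Enterprise Search manages this through the Pipelines tab:

```python
# Sketch: pointing an index at an ingest pipeline via the standard
# Elasticsearch "index.default_pipeline" setting. The index name
# "search-my-index" is a placeholder.
settings_body = {
    "index": {
        "default_pipeline": "ent-search-generic-ingestion",
    }
}

# With the official Python client this would be applied via, e.g.:
# Elasticsearch("https://localhost:9200").indices.put_settings(
#     index="search-my-index", settings=settings_body
# )
print(settings_body["index"]["default_pipeline"])
```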
A number of UI updates were added:
- Use the web crawler UI to configure username/password or token-based authentication for private websites.
- User agent information is now visible on the crawler overview page.
- You can now delete indices from the Indices section.
Use new fully customizable Workplace Search connector packages (beta).
A number of App Search features are now generally available.
You can use the new service account token option to connect to Elasticsearch, by setting the `elasticsearch.service_account_token` configuration. This is a recommended alternative to username/password authentication.
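In the Enterprise Search configuration file this might look like the following; the host and the token value are placeholders:

```yaml
# enterprise-search.yml -- connect with a service account token
# instead of elasticsearch.username / elasticsearch.password.
# Host and token values below are placeholders.
elasticsearch.host: https://localhost:9200
elasticsearch.service_account_token: AAEAAWVsYXN0aWMvZW50LXNlYXJjaC10b2tlbi4uLg==
```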
For Elasticsearch indices created by Enterprise Search ingestion methods, the underlying shard count was increased to 2, and auto-scaling settings were added to search-optimized indices.
- It is now possible to list and clean up indices left behind by older versions of Enterprise Search as a result of application upgrades. See Storage API for details.
Fixed a performance issue with the Elastic web crawler where the Elasticsearch index `_mapping` was being queried too frequently.
Fixed a bug where the Elastic web crawler wrote to an incorrect Elasticsearch data stream for logging. The crawler now writes logs to the correct data stream.
Fixed a bug that would disable precision tuning on an Elasticsearch index engine unless an `enum` subfield was configured.
- Fixed a security issue where document-level permissions for GitHub did not take user suspension into account.
- Fixed a bug in the GitHub and GitHub Enterprise Server Workplace Search content sources where rate limit errors caused syncs to fail instead of suspending and retrying.
- Fixed a bug in the Workplace Search search dashboard where users could be unexpectedly logged out.
- Fixed a bug in Workplace Search where users with multiple open search dashboard tabs were not properly logged out of all tabs after logging out in one tab.
- Due to a recent change in the Red Hat scan verification process, this version of Enterprise Search is not available in the Red Hat Ecosystem Catalog. This will be fixed in the next release. Please use the Elastic Docker registry to download the 8.5.0 image.
- Users should only run one MySQL native connector per deployment. Running multiple MySQL native connectors can result in non-deterministic data when syncing with the third-party data source.
- Using the MySQL connector to index large datasets (greater than 10,000 rows per database) can cause memory issues and fail to index all documents.
Newly created Elastic web crawler indices use a new default pipeline that indexes extracted binary content into the `body` field. This differs from the usual `body_content` field that HTML content is indexed into, and may result in unexpected search results. This change does not affect existing Elastic web crawler indices created prior to 8.5.0. The following workarounds may apply:
- Search experiences that expect content only in the `body_content` field can be updated to search across the `body` field as well.
- You may "Copy and customize" the default pipeline of your crawler index, adding a `set` processor to copy the `body` field into the `body_content` field, or vice versa as needed.
- Any App Search engines built on top of an Elastic web crawler index should double-check that boosts and weights applied to the `body_content` field have also been applied to the `body` field, where applicable.
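The `set` processor workaround described above can be sketched as follows. The processor definition uses standard Elasticsearch `set` processor options; the small function is only a local simulation of its effect on a document, not part of any Elastic API:

```python
# Hedged sketch of the "set" processor workaround: copy the crawler's
# `body` field into `body_content` so existing search experiences keep
# working. Uses the real Elasticsearch set-processor options.
copy_body_processor = {
    "set": {
        "field": "body_content",      # destination field
        "copy_from": "body",          # source field
        "ignore_empty_value": True,   # skip docs without a body
    }
}

def apply_set_copy(doc, processor):
    """Local simulation of what the set processor does to one document."""
    opts = processor["set"]
    value = doc.get(opts["copy_from"])
    if value is None and opts.get("ignore_empty_value"):
        return doc
    doc[opts["field"]] = value
    return doc

doc = {"body": "Extracted PDF text"}
apply_set_copy(doc, copy_body_processor)
print(doc["body_content"])  # -> "Extracted PDF text"
```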
- Multiple-term searches raise an error in the Documents section of the Content UI, for crawler indices using ML pipelines. Only single-term searches work as expected for browsing such indices in the UI.
- When using self-managed installation packages for Enterprise Search (e.g. Docker image, tarball) with an Elasticsearch cluster that has HTTPS enabled, creating an Elasticsearch index through the Enterprise Search UI using the Web Crawler, native connector or connector client option will result in an unexpected error.
- Enterprise Search may not start when running on Docker engine version 20 on ECE deployments.
- Deployments do not collect Enterprise Search logs when using Enterprise Search service account tokens. See known issues for logs and logging for details.