
Key points to be aware of when upgrading from Elasticsearch 1.x to 2.x

Overview

As part of any software deployment life cycle, one is often faced with the need to upgrade to the latest release of a product, both to keep up with new features and bug fixes and to ensure supportability. Elasticsearch is no different, and the release of version 2.0 includes a number of major changes that can break an existing 1.x installation upon upgrade. These are referred to as breaking changes in the documentation. Please note that a FULL cluster restart will be needed.

Watch our video on upgrading from Elasticsearch 1.x to 2.x.


So how do I upgrade to Elasticsearch 2.x smoothly?

With a bit of testing in a non-production environment, there should be very little reason for any upgrade to fail or create inconsistencies in your data. This does, however, require some planning which could be summarised as follows:

  • Familiarize yourself with the list of breaking changes included in the new release. This will ensure there are no “hey what happened to this feature I was using before?” surprises.
  • Set network.host to a non-loopback address in elasticsearch.yml. Please note that it can also be an interface, configured as "_[networkInterface]_"; e.g., "_en0_", "_ens160_" or "_eth0_". A configuration sketch follows this list.
  • Download and install the migration plugin on your cluster before attempting an upgrade; it does not require a node restart (an install sketch follows this list). Please note that it only checks (meaning it does not fix) mappings, index settings and segments, and highlights what needs to be fixed before starting the migration process. As it does not check index templates, you would still need to review all the breaking changes documentation before making any required changes manually.
  • Use the snapshot and restore feature (‘a MUST’) to keep a duplicate copy of any production data in a test environment, against which you can run all migration and upgrade testing; a snapshot example follows this list. Better to break this, if it comes to that, than your production cluster.
  • Upgrading across major versions (1.x to 2.x) requires a full cluster restart.
  • For non-systemd installations, it is highly recommended to use the most recent Elasticsearch init script, as there are quite a few differences between the 1.x and 2.x versions:

        2.1 DEB init
        2.1 RPM init

  • Make all the recommended modifications to your cluster configuration, as well as to any related back-end or front-end code your application may use, based on the results provided by the migration plugin and the breaking changes document.
  • Follow the upgrade documentation for all Elasticsearch plugins to ensure they still work on the latest release.
  • Ensure both Logstash and Kibana are upgraded to a supported version, for full compatibility with Elasticsearch 2.x. 
  • Note that a Kibana upgrade will require upgrading the Marvel plugin and agent as well, as the UI now resides within Kibana.
  • Sense is now open source and a Kibana plugin rather than part of Marvel, so it needs to be installed manually after the upgrade (a Kibana plugin install sketch follows this list).
  • Logstash 2.x now uses the HTTP protocol by default, which may require manual changes to the Logstash configuration. The elasticsearch_java output plugin can be installed if needed.
  • Ensure you are following the correct upgrade path.
  • The index upgrade process can be monitored; see the 'check upgrade status' API command (an example follows this list).
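
For example, a minimal sketch of setting a non-loopback bind address; the IP address, interface name and config path below are example values only (the path shown is the default for package installs), so substitute whatever applies to your servers:

    # Add a non-loopback bind address to elasticsearch.yml
    # (example values -- substitute your own address or interface name)
    echo 'network.host: 192.168.1.10' >> /etc/elasticsearch/elasticsearch.yml
    # or, to bind to a named interface instead:
    # echo 'network.host: _eth0_' >> /etc/elasticsearch/elasticsearch.yml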
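
As a sketch, installing the migration plugin on an existing 1.x node typically looks like the following; the plugin coordinates and install path are assumptions here, so confirm the exact command in the migration plugin's README for your 1.x version:

    # Install the migration (checkup) plugin on a running 1.x node; no restart needed.
    # 'elastic/elasticsearch-migration' is assumed -- verify it against the plugin README.
    /usr/share/elasticsearch/bin/plugin --install elastic/elasticsearch-migration
    # The checker is a site plugin; open it in a browser afterwards
    # (the exact URL path is documented in the plugin README).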
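
A minimal snapshot sketch using the snapshot and restore API; the repository name, filesystem path and snapshot name are placeholders, and on recent releases the path must also be whitelisted via path.repo in elasticsearch.yml:

    # Register a shared filesystem repository for snapshots
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
      "type": "fs",
      "settings": { "location": "/mnt/es_backups/my_backup" }
    }'

    # Snapshot all indices and wait for completion
    curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'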
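
On the Kibana side, a hedged sketch assuming Kibana 4.2+ and the plugin names documented for Marvel 2.x and Sense (the install path and plugin coordinates are assumptions; verify them against the Marvel and Sense install guides):

    # Install the Marvel UI and Sense into Kibana after the upgrade.
    # Plugin coordinates are assumptions -- confirm them in the Marvel/Sense docs.
    /opt/kibana/bin/kibana plugin --install elasticsearch/marvel/latest
    /opt/kibana/bin/kibana plugin --install elastic/sense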
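
For example, a quick way to see how much of each index still needs upgrading (the index name is a placeholder):

    # Upgrade status for the whole cluster
    curl -XGET 'http://localhost:9200/_upgrade?pretty'

    # Upgrade status for a single index
    curl -XGET 'http://localhost:9200/my_old_index/_upgrade?pretty'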

My upgrade is broken, please help!

Should the upgrade gremlins appear and attempt to wreak havoc in your life, even after you have carefully followed the previous set of steps, there are some additional avenues you can pursue:

  • Raise a question within the community on the discuss forum. You will be pleasantly surprised how helpful our community of users really is.
  • Raise a bug report on GitHub should you feel you have hit a new bug as part of the upgrade process.
  • If you are a subscription customer, raise a ticket on the support portal, for a more dedicated resource within Elastic to assist you in working through any upgrade issues.

Looking for ways to minimize your downtime or reduce your maintenance window for a full cluster restart?

If the risks seem too great or the upgrade mountain too high to climb, there are some options you can also consider:

  • Build a new cluster on the newer version of Elasticsearch, with the compatible later releases of Logstash and Kibana and other plugins (where required).
  • Dual-feed your source event data to both the old and new clusters, decommissioning the old cluster once your required retention period has passed. This tends to be more viable with shorter retention periods, such as 7 days or 1 month.
  • Reindex your full dataset from source into the new cluster, then decommission the old one.

Although this approach will require more physical (or virtual) resources initially (a test environment could be repurposed for this function temporarily at little additional capacity cost), it will only be for a short period of time. Once the new cluster matches the data of the old one, along with all the functionality required by your front-end application, a full switch-over may be the easier alternative.

Caveat: any code written to send data to or receive data from Elasticsearch that relies on mappings or functions that were changed (see breaking changes) will still require manual modification, regardless of the chosen upgrade path.

Known Issues

Below is a list of issues that have been seen, which may be of value when considering your upgrade plan, along with any troubleshooting you may need to perform for failed or partial upgrades:

  • Not setting network.host (or not setting it correctly). Before starting Elasticsearch 2.x, ensure you have set network.host to a non-loopback address in elasticsearch.yml.
  • The migration plugin is only compatible with Elasticsearch 1.x and so should be removed (along with all 1.x plugins) once it confirms that no changes are required, before performing the actual upgrade.
  • Keep a backup of all configuration files, as they may be replaced as part of any package upgrade (DEB, RPM, etc.); afterwards, modify the newer configuration files to match your required cluster settings.
  • Ensure sufficient disk space is present on any given cluster node prior to an upgrade attempt. This basic upgrade principle is often overlooked.
  • You may be upgrading from a very old release (pre 0.90). Since the Lucene version needs to be compatible with Elasticsearch 2.x, please consult the upgrade API documentation to determine whether a manual upgrade API request needs to be made beforehand (see the sketch after this list).
  • Plugins need to be kept up to date to ensure compatibility with the latest release of Elasticsearch. If there is a newer plugin version compatible with the Elasticsearch 2.x version you are upgrading to, you must remove the old plugin, perform the upgrade and then install the new plugin afterwards (see the plugin sketch after this list). Similarly, Elasticsearch will not start when there is a plugin folder containing an invalid plugin.
  • The marvel-agent plugin needs to be installed on all nodes in the cluster. Because marvel-agent requires the license plugin, the license plugin must be installed on all of the nodes first. Failing to do this can cause issues, especially when using the Java node client if the license jar is not part of the project.
  • Client nodes running marvel-agent need to be upgraded to Elasticsearch 2.1 or higher, due to known issues with 2.0 for this node type running Marvel.
  • You will not be able to upgrade indices with conflicting field mappings to Elasticsearch 2.x (these indices might not even open!). The migration plugin mentioned earlier should catch this.
  • If you are not able to run the migration plugin successfully when there are closed indices and a filter is used in the migration plugin, see this workaround.
  • index_analyzer no longer exists, and templates are not checked by the migration plugin, so you may get some exceptions. You need to update old templates manually to work around this.
  • Ensure your kernel version is current, as older releases have issues with the newer fsync used by Elasticsearch 2.x.
  • Upgrading to a major release such as Elasticsearch 2.x will not work if some nodes are left on older releases. All nodes need to be upgraded to the new release as part of the full cluster restart upgrade steps.
  • Running incompatible releases of Logstash and Kibana after upgrading to Elasticsearch 2.x will cause problems; ensure both are on versions supported with 2.x.
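
As a sketch, a manual upgrade request for such an old index could look like this (the index name is a placeholder; the only_ancient_segments parameter is described in the upgrade API documentation for recent 1.x releases):

    # Rewrite segments written by very old Lucene versions so that 2.x can read them
    curl -XPOST 'http://localhost:9200/my_old_index/_upgrade?only_ancient_segments=true'

    # Check progress with the status form of the same API
    curl -XGET 'http://localhost:9200/my_old_index/_upgrade?pretty'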
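
A hedged sketch of the remove-then-reinstall cycle on a single node follows; the install path is the common package default and license/marvel-agent are the plugin names documented for Marvel 2.x, so substitute the plugins and versions you actually run:

    # Before the upgrade: list installed plugins and remove the 1.x-only ones
    # (the migration plugin included)
    /usr/share/elasticsearch/bin/plugin --list
    /usr/share/elasticsearch/bin/plugin --remove <old-plugin-name>

    # After upgrading the node to 2.x: reinstall the 2.x-compatible plugins.
    # license must be installed before marvel-agent, and on every node.
    /usr/share/elasticsearch/bin/plugin install license
    /usr/share/elasticsearch/bin/plugin install marvel-agent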