
A tool to help with routing issues from Elasticsearch 1.2.0

introduction

In the 1.2.1 release we fixed a routing bug that had been introduced in the 1.2.0 release; the bug and its fix were described in the 1.2.1 blog post. The contents of this post apply only to users who ran 1.2.0.

Before we can discuss the tool we developed to help fix damage from the bug, we have to understand the problem the bug created. When Elasticsearch stores a document it has to decide which shard to put the document in. It does this by computing a hash of the document’s “UID”, which is by default the type#id tuple of the document. Elasticsearch then uses that hash modulo the number of shards to pick a shard. This hash function has to work the same way across different versions of Elasticsearch. If it does not, version X could expect to find a given document in shard 1 while version X+1 expects the same document in shard 2. In that case, a document indexed while running version X could not be retrieved directly (by ID) after upgrading to version X+1.
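As a rough illustration of the calculation (the hash value below is invented, and the real hash function is more involved than this sketch suggests), a document whose UID hashes to 2303477, stored in an index with 5 shards, lands in shard 2303477 modulo 5:

> echo $(( 2303477 % 5 ))

This prints 2, so the document would live in shard 2. If a different version of Elasticsearch produced a different hash for the same UID, the same arithmetic could point at a different shard.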

The bug in 1.2.0 broke this consistent, cross-version hashing. 1.2.1 restored the same hash operation as used in previous versions (except 1.2.0). Once you’ve upgraded to 1.2.1 there are a few potential problems:

  • documents indexed in 1.2.0 may now be in the “wrong” shard. You may not be able to retrieve (get) documents by ID if they were indexed in the wrong shard; an example of this follows below.
  • documents that were updated in 1.2.0 may now exist in two places: the correct shard (as computed in 1.2.1) and the wrong shard (as computed in 1.2.0).

The percentage of documents in the wrong shard varies by the number of shards in the index. With 2 shards, all documents are routed correctly. With 4 shards, 50% of the documents will be routed incorrectly. With 5 shards, 80% of documents will be routed incorrectly.
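One way to see the first symptom concretely is to compare a direct get with a search. The index name, type, and document ID below are placeholders for your own values. A get looks only in the shard the (now fixed) hash points at, so it can miss a misplaced document, while a search queries every shard and will still return it:

> curl 'localhost:9200/index1/tweet/1?pretty'
> curl 'localhost:9200/index1/tweet/_search?q=_id:1&pretty'

If the get returns "found": false but the search returns a hit, that document is sitting in the wrong shard.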

Since you have probably continued to update documents, Elasticsearch cannot tell which of two copies of a document with the same UID is the correct (newer) one. To be clear, it cannot tell whether the copy in the correct shard was put there recently and is the newer version, or was put there some time ago and is the older version. We’ll discuss this more later in this post.

how the tool helps

We’ve developed the fix-routing tool to give you as much help as we can in addressing the issue. We’ll walk through a few specific use cases to illustrate the use of the tool.

re-indexing the data may be the simplest solution

We have provided the tool as a way to help you, but after you read the options below you may conclude that re-indexing the data is simpler. That is a valid option and will work fine. Just be sure that you have upgraded to 1.2.1 before re-indexing.

You will also want either to delete the index you are replacing or to index into a new index. If you re-index into the existing index without deleting it first you will not fix the problem; re-indexing will not change the location of the documents that are already in the wrong shard.
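For example, if you plan to rebuild an index in place under 1.2.1, you could delete it first and then re-index into the freshly created index (the index name below is a placeholder):

> curl -XDELETE 'localhost:9200/index1'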

There is no need to use this tool for any index that you choose to re-index.

tool prerequisites and preparation

The tool has a number of requirements to run correctly:

  • JDK 1.7 installed on the host running the tool
  • _source must have been stored
  • Your cluster is running version 1.2.1
  • You are using the default hashing algorithm (most users are)
  • You are not using the _type in the computation of the routing key (very few users do)

If you do not store the _source in Elasticsearch you can still run the count and delete operations described below, but the tool will not be able to help with any other part of conflict resolution.
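If you are not sure whether _source is stored, one way to check is to look at the mapping (the index name is a placeholder); a type whose mapping shows "_source": {"enabled": false} does not store the source:

> curl 'localhost:9200/index1/_mapping?pretty'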

You can run the tool against a running cluster; you do not have to block client access while it runs. However, if you continue to allow updates to the cluster while the tool runs, you will have to repeat the procedure, with the second run cleaning up duplicates created while the first one ran.

Some operations of the tool will delete documents from an index. As a result we recommend taking a backup of each index before running the tool on that index. See the upgrade instructions for a backup procedure.
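If you do not already have a backup procedure in place, one possible approach is the snapshot API. The repository name, filesystem path, and index name below are placeholders, and the location must be accessible to every node:

> curl -XPUT 'localhost:9200/_snapshot/pre_fix_backup' -d '{"type": "fs", "settings": {"location": "/mnt/backups/pre_fix_backup"}}'
> curl -XPUT 'localhost:9200/_snapshot/pre_fix_backup/snapshot_1?wait_for_completion=true' -d '{"indices": "index1"}'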

The tool uses an MVEL script as part of its execution. This script, called bad_routes.mvel, is included in the zip file download. You need to place it in the config/scripts directory of every data node. There is no need to restart the nodes after putting the script in place; by default, Elasticsearch takes up to one minute to load the script and make it available via the API.
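One way to distribute the script, assuming a handful of data nodes and a tarball install (the host names and the Elasticsearch path below are placeholders):

> for node in es-data-1 es-data-2 es-data-3; do scp bad_routes.mvel $node:/path/to/elasticsearch/config/scripts/; done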

Note that the tool will work even if dynamic scripting is disabled.

using the tool

Be sure you have downloaded the tool. After you unzip the download, you will have a jar file and an MVEL script.

understanding the scope of the problem

The tool can give you a count on a per-index basis of the number of documents in the wrong shard:

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 count

You can run this for each index that may have had writes while running 1.2.0 to understand how many documents were impacted.

You can also get an Elasticsearch query that will find the misplaced documents. This can be helpful if you want to write your own conflict resolution program.

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 query
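If you would rather run the emitted query yourself, one approach is to save it to a file and feed it to the search API. Whether the tool prints a complete search request body is an assumption here; check the actual output, and note that the file name below is our own:

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 query > bad_routes_query.json
> curl -XGET 'localhost:9200/index1/_search?pretty' -d @bad_routes_query.json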

if you do not perform updates

If you do not update existing documents in an index, you can be sure there are no duplicates. This is usually the case for people who use Elasticsearch to store logs and other machine-generated data, and they will want to follow these steps.

In this case the tool can be used to move misplaced documents from the wrong shard to the correct shard. Moving documents with the tool requires first copying them and then deleting them from the original (incorrect) shard.

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 copy_if_missing

At this point you should be able to get – not search for – one of the misplaced documents. You should check a few to make sure the copy worked; a spot check might look like the sketch below.
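The index, type, and document ID in this example are placeholders for your own values; a successful copy shows "found": true in the response:

> curl 'localhost:9200/index1/tweet/1?pretty'

Once you have verified a few documents, delete the copies that remain in the wrong shard: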

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 delete

If you choose not to delete the documents in the wrong shard (as above), searches may return duplicate documents.

if you have done updates and use external versioning

In general, Elasticsearch cannot tell which of two duplicate documents is the correct one. However, if you use external versioning, it can: the newer document will have the higher external version number. If this is the case for your system, you can run:

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 copy_version

This command makes sure that the copy with the higher version number ends up in the correct shard. After it runs, check that you now have two copies of each document that was in the wrong shard; one way to check is sketched below.
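Because a search queries every shard, both copies of a duplicated document will show up in the results. The index, type, and document ID below are placeholders; adding version=true includes each hit’s version number:

> curl 'localhost:9200/index1/tweet/_search?q=_id:1&version=true&pretty'

Once you have confirmed the copies, delete the copy that remains in the wrong shard: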

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 delete

if you have done updates and do not use external versioning

This is the most difficult case. As mentioned earlier, the fix-routing program cannot tell which copy of a document should be preserved, so you will have to use your domain knowledge of your documents to effect a fix. Start by getting the list of documents that have duplicates, and then get the source of the documents that are in the wrong shard. The get command below produces one JSON file per shard containing the misplaced documents for that shard.

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 get_uids > wrong_shard_uids.txt
> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 get

Then iterate over the documents returned and record the UID of each document for which you want to keep the copy that was in the wrong shard; call this list wrong_shard_restore. Since you have now extracted the misplaced documents from the index, you can safely delete them from the wrong shard:

> java -jar elasticsearch-fix-routing-1.0.jar localhost 9300 index1 delete

Then, for each document in wrong_shard_restore, re-index it using the document in one of the files output by the get command above. This will overwrite the version that was in the correct shard with the version that was in the wrong shard.
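For example, to restore a single document (the index, type, ID, and file name are placeholders; the source comes from the files the get command wrote):

> curl -XPUT 'localhost:9200/index1/tweet/1' -d @tweet_1_source.json

Because the cluster is now running 1.2.1, the document is written to the correct shard.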

The documents that were in the correct shard and that you want to preserve will not be impacted by these operations, as long as you put only the documents you want to restore into wrong_shard_restore.

other comments

You can run a count operation after you delete documents to verify that the delete did what you expected. Be careful, though: if you are still adding documents to the index, the counts at the end of the procedure may differ from those at the beginning.

We’re investigating our testing procedures to understand how this bug escaped our notice. We hope to publish a postmortem with the results of our analysis.

You should also think about the backups you made while running 1.2.0, or with 1.2.1 prior to performing this procedure; they still contain documents in the wrong shard. You may want to take a new snapshot after completing the procedure and then delete the older backups that had documents in the wrong shard.