Elasticsearch Testing & QA: Testing Levels of Elasticsearch

We're pleased to bring you our second article on our testing and QA processes for Elasticsearch. If you missed last Thursday's piece, Isabel told us all about Elasticsearch Continuous Integration. We'll be bringing you our final installment next Thursday, so stay tuned to this space!

"Works on my machine" is a phrase that became famous for software projects lacking automated testing infrastructure. Even today, when most checks are done automatically on an integration test server, it's still crucial to be able to reproduce bugs and test features on your local box. Ideally, those should be the same tests that are being run by the CI environment (or some stripped down version thereof).

Testing Layers

Elasticsearch tests check the code base from multiple perspectives. Traditional unit tests check that core algorithm implementations are correct and that all methods behave the way they should.

One level up, integration tests run against a locally running cluster, making sure all pieces of the application work nicely together and can be interacted with through the Java Client API. REST tests make sure all REST endpoints work according to their specification.

Backwards compatibility tests are a special case introduced recently. Instead of running tests against a cluster whose nodes all run a single Elasticsearch version, a previous release can be downloaded, installed and started. Tests then run against a mixed-version cluster, making sure that everything works as expected and is compatible between releases.

Testing the Elasticsearch Java Layer

Elasticsearch attacks testing from a bunch of different angles. Java code is tested on more than the unit test level; the Java Client API is also checked by integration tests that pull up complete Elasticsearch clusters to run requests against.

Essentially, the goal for integration tests (based on the class ElasticsearchIntegrationTest) is to make sure Java API calls work against a full running cluster. It is cheap to pull up an example Elasticsearch cluster in terms of CPU power and memory needed, even on an ordinary laptop. When extending the above test class, it also becomes simple in terms of development overhead. The cluster is pulled up for you and reused between tests unless you specify something else in the test's ClusterScope annotation.

Looking at an example integration test, let's walk through the most important annotations and features that make writing Elasticsearch integration tests so trivial:

01 // make sure all tests in the test suite run on a separate test cluster as we will modify the
02 // cluster configuration
03 @ElasticsearchIntegrationTest.ClusterScope(scope = ElasticsearchIntegrationTest.Scope.SUITE)
04 public class TemplateQueryTest extends ElasticsearchIntegrationTest {
05
06     @Before
07     public void setup() throws IOException {
08         // create an index, make sure it is ready for indexing and add documents to it
09         createIndex("test");
10         ensureGreen("test");
11
12         index("test", "testtype", "1", jsonBuilder().startObject().field("text", "value1").endObject());
13         index("test", "testtype", "2", jsonBuilder().startObject().field("text", "value2").endObject());
14         refresh();
15     }
16
17     // for our test we want to make sure the config path of the cluster actually points
18     // to the test resources that we provide - this is the cluster modification referred
19     // to earlier
20     @Override
21     public Settings nodeSettings(int nodeOrdinal) {
22         return settingsBuilder().put(super.nodeSettings(nodeOrdinal))
23                 .put("path.conf", this.getResource("config").getPath()).build();
24     }
25
26     @Test
27     public void testTemplateInBody() throws IOException {
28         Map<String, Object> vars = new HashMap<>();
29         vars.put("template", "all");
30
31         TemplateQueryBuilder builder = new TemplateQueryBuilder(
32                 "{\"match_{{template}}\": {}}", vars);
33
34         // the search client to use in the test comes pre-configured as part of the
35         // integration test
36         SearchResponse sr = client().prepareSearch().setQuery(builder)
37                 .execute().actionGet();
38
39         // specific assertions make checks very simple
40         assertHitCount(sr, 2);
41     }
42 }

In our example, line 03 limits the cluster scope of the test to the test suite. This makes sense whenever you change the cluster configuration, e.g. hard-coding a specific configuration option, as we do in line 22 for the configuration path of the cluster.

Starting in line 06, the setup method simply sets up an index, makes sure it is green before modifying it and adds a couple of test documents to it.

As shown in line 36, the client to use is available as part of the integration test, all pre-configured and ready to use. So are helper assertions like the one in line 40 that make it easy to check the state of results.

In addition to regular integration tests, Elasticsearch also tests backwards compatibility for those versions that should be backwards compatible. This works much like the integration tests: a cluster is started for the test, and the release to check backwards compatibility against is downloaded and installed. For each test, a random number of nodes from the comparison release is added to the test cluster, and requests are executed against this mixed-version cluster. This way, we can automatically verify that changes do not break backwards compatibility where they shouldn't, even on a per-commit basis if needed.

Testing the REST Layer

The Elasticsearch REST API is defined in its own API specification. Based on this spec, tests can be defined declaratively in YAML.

The snippet below defines, in a concise way, a test to check the template-based search query. Lines 01 to 04 define the query to execute. Based on the specification for search requests, this snippet can automatically be turned into a valid GET URL and request body.

01  - do:
02      search_template:
03        body: { "template": { "query": { "term": { "text": { "value": "{{template}}" } } } },
04                "params": { "template": "value1" } }
05
06  - match: { hits.total: 1 }

Line 06 then defines the expected result. In this case only the number of hits returned is being checked.
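The match check is just one of the assertions the YAML framework offers. As a rough sketch (the exact set of supported assertions depends on the Elasticsearch version), a test can also check array lengths, field presence, and numeric bounds:

```
# check the exact length of an array (or string) in the response
- length: { hits.hits: 1 }

# check that a field exists and is truthy
- is_true: hits.hits.0._source

# numeric comparisons against response values
- gt: { hits.total: 0 }
```

Because these checks are declarative, the same YAML test can be run against any language client or transport that implements the REST specification.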

Code Checks Built into Each Compile Run

Tests aren't the only way code quality is ensured within Elasticsearch. On each Maven build, we check for the usage of "forbidden Java APIs." Where a known faster version of the same API functionality exists, use of the slower call is detected in the code base, causing the build to fail. Another example is calls that send output to STDOUT (like calls to System.out.println, which are usually a sign of forgotten debug statements) instead of using the logging framework to communicate messages. There are also certain functions that are downright dangerous to use if one cares about compatibility across systems (think using the default charset of the machine the code is running on instead of explicitly defining which charset is supposed to be used).
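The default-charset case is worth a closer look. The sketch below is a minimal standalone illustration (not code from the Elasticsearch build itself) of why such calls are worth forbidding: the no-argument form of `String.getBytes()` silently depends on the machine's default charset, while the explicit form behaves identically everywhere.

```java
import java.nio.charset.StandardCharsets;

public class CharsetExample {
    public static void main(String[] args) {
        String text = "café";

        // text.getBytes() without arguments uses the JVM's default charset,
        // so the resulting bytes can differ from machine to machine - exactly
        // the kind of call a forbidden-APIs check flags:
        // byte[] implicitBytes = text.getBytes();

        // Spelling out the charset makes the result the same on every system:
        byte[] explicitBytes = text.getBytes(StandardCharsets.UTF_8);
        System.out.println(explicitBytes.length); // prints 5: 'é' is two bytes in UTF-8
    }
}
```

A build that fails on the commented-out call forces every contributor to state the intended encoding, instead of leaving it to whatever platform the code happens to run on.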

For releases, checks are even more restrictive. During development, broken tests can be disabled and marked with the special annotation "AwaitsFix." When cutting a release, any tests marked AwaitsFix will fail the build, telling the release manager that there is still functionality that is not yet working.

In our final installment of this series, we'll cover Elasticsearch's randomized testing framework.