Development Tooling Behind Kibana | Elastic Blog

# Development Tooling Behind Kibana

Here on the Kibana team, we're in a unique position: we consume the products that other teams are building as part of what we're building. At a minimum, we have to stand up an Elasticsearch instance and index some data. Without that, we can't run Kibana, and we can't do our jobs.

This may seem simple enough, but it starts to get complicated pretty quickly when we need to test against many different versions of Elasticsearch. We also often need to test different plugin configurations, different datasets, and even run development versions of Elasticsearch. With all these varying requirements, setup can become complicated and time-consuming fast.

In order to make our lives easier, it's in our best interest to automate as much of that process as possible. Naturally, we've built some tools to help with that.

## Automating Data

Let's work backwards and start with indexing data first. To help us with that task, we use a tool called makelogs. As the project states, it pushes fake HTTP traffic logs into Elasticsearch. There's a little bit of edge-case data that it generates, but it's basically the kind of logs you'd expect from any Apache or nginx server, with a bell-shaped traffic pattern and everything.

It's a quick and dirty (and very convenient) way for us to get data indexed so that we can start creating visualizations. It's really handy for smoke-testing things, particularly when we are spinning up new clusters, which we do a lot. It's also a great way to get people new to the team up and running, before they find something real that they want to index.
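To make the shape of that data concrete, here's a minimal sketch of the kind of documents a makelogs-style generator produces: fake HTTP access-log records whose timestamps cluster in a bell curve around a midday peak. The field names and values below are illustrative, not makelogs' actual schema.

```python
# Sketch of makelogs-style fake access-log records (illustrative schema).
import random
from datetime import datetime, timedelta

random.seed(0)  # reproducible output for the sketch

def fake_log_record(day_start):
    # A Gaussian offset (in hours) around noon gives the bell-shaped
    # traffic pattern; clamp it so every record stays inside one day.
    offset_hours = min(max(random.gauss(12, 3), 0), 23.99)
    return {
        "@timestamp": (day_start + timedelta(hours=offset_hours)).isoformat(),
        "ip": ".".join(str(random.randint(1, 254)) for _ in range(4)),
        "method": random.choice(["GET", "GET", "GET", "POST"]),
        "path": random.choice(["/", "/app", "/login", "/static/app.css"]),
        "status": random.choice([200, 200, 200, 301, 404, 500]),
        "bytes": random.randint(200, 20000),
    }

day = datetime(2015, 11, 1)
records = [fake_log_record(day) for _ in range(1000)]

# Most traffic lands in the middle of the day (hours 09-15):
midday = sum(1 for r in records if "T09" <= r["@timestamp"][10:13] <= "T15")
```

Bulk-indexing a batch of records like these is all it takes to have something worth pointing a visualization at.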

## Automating Elasticsearch

Makelogs is great, but its scope is tiny, and its utility outside of Kibana development is limited. Perhaps more interesting is the tool we use to automate Elasticsearch, a tool named esvm.

Short for Elasticsearch Version Manager, esvm was born out of our need to maintain not just multiple versions of Elasticsearch, but also multiple clusters, each with its own unique configuration and data. It will download the specified version of Elasticsearch, start it up with the default configuration, use your computer's hostname for the cluster name to prevent external auto-joining, and even wrap its output to help make it a little easier to read.
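As a rough sketch of what that startup behavior amounts to (the function and structure below are illustrative, not esvm's actual internals; the one detail taken from esvm's behavior is the hostname-as-cluster-name default):

```python
# Illustrative sketch of the runtime settings a dev cluster is launched
# with -- not esvm's real code.
import socket

def runtime_settings(overrides=None):
    # Using the machine's hostname as the cluster name keeps a dev
    # cluster from auto-joining other clusters on the same network.
    settings = {"cluster.name": socket.gethostname()}
    # Anything passed in wins over the defaults.
    settings.update(overrides or {})
    return settings
```

That defaults-plus-overrides layering is essentially the model the config file (below) exposes per cluster.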

If you want to run the latest version of Elasticsearch, no arguments are required; just run `esvm`. If you want a different version, just pass it as an argument, like `esvm 2.0.1`. If, instead of a version, you need to run a build from a specific branch, something like `esvm -b 2.x` will do just that.

All of that is handy, but esvm gets really interesting, and really powerful, when you use it with a JSON config file.

### Automating Cluster Configuration

The esvm config takes all the settings from the standard `elasticsearch.yml` and passes them as runtime settings. Common settings like enabling CORS, turning on mlock, and turning off multicast make great defaults. Any clusters you define will inherit the default settings, but you can override them as needed. Plus, you can define the version or branch to use, and any plugins to install. For example, if you wanted to run a three-node cluster built from the latest commit on the master branch, this is all it takes:

```json
"clusters": {
  "latest": {
    "branch": "master",
    "nodes": 3
  }
}
```


Then run `esvm -c esvm.json latest` and you're up and running.
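For reference, a fuller `esvm.json` might look something like the sketch below. The cluster entries follow the example above; the shape of the defaults block is an assumption here, and the settings in it are the usual `elasticsearch.yml` keys for CORS, mlock, and multicast mentioned earlier:

```json
{
  "defaults": {
    "config": {
      "http.cors.enabled": true,
      "bootstrap.mlockall": true,
      "discovery.zen.ping.multicast.enabled": false
    }
  },
  "clusters": {
    "latest": {
      "branch": "master",
      "nodes": 3
    },
    "stable": {
      "version": "2.0.1",
      "nodes": 1
    }
  }
}
```

Every cluster inherits the defaults, so each entry only has to spell out what makes it different.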

From there, adding plugins is easy: just add a `plugins` section to the cluster configuration with a list of the plugins you'd like to install.

```json
"plugins": [
  "shield"
]
```


Relaunch the cluster and now you've got Shield installed with an evaluation license ready to go. And if you'd like to pre-define some users and their roles, just add something like this to the configuration:

```json
"shield": {
  "users": [
    {
      "username": "kibana-server",
      "password": "changeme",
      "roles": ["kibana4_server"]
    },
    {
      "username": "kibana-user",
      "password": "changeme",
      "roles": ["kibana4"]
    }
  ]
}
```
And because you can define as many cluster configurations as you'd like, a single JSON file will allow you to easily spin up whatever configuration you need. Of course, once you have a few clusters defined you may find yourself forgetting what's available. Enter the `--list` flag.