31 May 2017 Releases

A New Elasticsearch Frontier: Elastic Cloud Enterprise 1.0 GA

By Haley Eshagh and Baha Azarmi

We are doing backflips and handstands about Elastic Cloud Enterprise (ECE) 1.0 officially becoming GA. This new product lets you provision, monitor, and orchestrate a fleet of Elasticsearch clusters and Kibana instances the way you want, in the environment you choose, from a single console.

(And if you want to dive right in, try it out and enjoy. Also, you'll likely fancy our upcoming webinar and demo of the product.)

Leading up to today's launch, we've talked about the product's architecture, the business value, and who it's for. Now, we'd like to take a closer look at the path that leads users to adopt a product like ECE.

Many of our users share a familiar adoption story. A developer at Company X fiddles with a data-related problem, stumbles across Elasticsearch, downloads it, spins up a test cluster, ingests some data, and finds success. Then a coworker hears about it, they add their data, and the cluster has to grow to support the load. Soon enough, Company X has dozens of Elasticsearch nodes running in production, supporting mission-critical functions.

And that's just the start of it.

Let's say that Company X is actually a bank. Their multi-node production cluster currently supports a logging use case for multiple applications, but now they're looking to power new applications for security analytics and transaction analysis. Plus, the marketing, human resources, and SRE teams have gotten wind of Elastic and have placed requests to either try it out or apply it to solve their own business problems.

Now things are getting interesting. A problem rainbow has emerged, one that consists of the unpleasant realities that naturally come with the hard problem of managing scale within a cluster: multiple use cases, multiple tenants, and more and more data.

[Image: elasticsearch-multitenant-multi-use-case-management-monitoring-orchestration.png]

These are exciting kinds of problems to have, but challenging nonetheless. To explain, let's break this down by zeroing in on two (unintentionally existential) questions: 1) who are you, and 2) what are you trying to do?

To question number one: who are you?

If you are a tenant of a cluster, you care about time-to-insight, quick responses, a custom experience, and expectations being met. But it's unwise to assume that all tenants are alike. They have different needs, habits, and requirements — each a potential source of friction and frustration.

  • Different Access Profiles. Tenants will have different use cases. Some might do full-text search or need suggestions and recommendations. Others might run heavy-weight aggregations, logging, or scan-and-scroll queries on security data.
  • Different SLAs. A security team performing threat hunting likely doesn't want a delayed response for their analysis, but their heavy-duty querying on the cluster might negatively impact the SLA expectations of other tenants.
  • Different Versions. Not all tenants are running the same versions of Elastic products. They might be on a specific version for specific reasons, and pushing them where they aren't ready to go makes for a bad experience.

There are other considerations, too, like different backup policies that can incur extra work or infrastructure burden. Or some tenants might be building heavy-duty alerts that query a year's worth of data and bog down the cluster for everyone else.

If you are the production team making all of the Elastic magic happen, you care about a good user experience, added ROI for projects or divisions, and making sure your time isn't consumed by putting out fires. And from this perspective there is another set of variables to account for.

  • When to Perform Maintenance. It's an age-old question: when do you perform maintenance to incur minimal impact on the system and users? How do you coordinate across time zones and varied usage patterns?
  • Upgrading. Another classic. Upgrading one tenant might impact performance on another tenant that might not need to upgrade (or can't).
  • One Tenant Crushes All. Internal customers might raise availability issues because one tenant's large queries are locking up the whole cluster.
  • Security Compliance. Some tenants can't house their data in the same cluster as other tenants for security reasons.
  • Everyone Loves Kibana. Currently, Kibana does not have multitenancy, so users have to spin up multiple instances of Kibana, which means additional management.

So what to do? 

The software works so well and offers so much promise to the organization. How should the production team proceed before the cluster (or possibly their spirit) crumbles?

[Image: elasticsearch-kibana-clusters-instances-at-scale-manage-monitor-elastic-cloud-enterprise.png]

Because this is your life now...

An option (actually, an eventuality): split the cluster. Breaking it up into smaller bits solves some of the pain points we've discussed, but it does come with a cost.

You'll need to consider deployment and provisioning with configuration management or orchestration tools, managing different Elastic versions across clusters (which gets tricky for releases before 5.x), supporting different SLAs and maybe even different types of infrastructure, and so on.

It becomes a slippery slope: if one tenant gets their own cluster, others will want their own clusters, too. Tasks that are simple at a smaller scale become more complex as they compound. Soon, the production team runs the risk of spending more time supporting clusters than on the work they originally set out to do.

And now we have arrived. 

This is the type of challenge that ECE eats for breakfast. It makes all of these needs addressable from a single console and streamlines them.

Create a new cluster, scale up and down on demand, host and upgrade multiple versions, enable Kibana, and enable X-Pack — and all you have to do is click or submit an API call. Monitor the entire deployment from a single pane of glass to make sure everyone's happy. And then get back to working on projects to move your business forward.
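To make the "submit an API call" part concrete, here is a minimal Python sketch that builds a cluster-creation request against the ECE RESTful API. The endpoint path follows ECE's documented `/api/v1/clusters/elasticsearch` route, but the host, credentials, and plan values below are placeholders; verify the exact field names against the ECE API documentation for your version.

```python
import json

# Hypothetical coordinator host and credentials -- substitute your own.
ECE_HOST = "https://ece-coordinator.example.com:12443"

# A minimal cluster-creation plan (illustrative values; check the ECE docs
# for the full set of supported topology and version options).
plan = {
    "cluster_name": "marketing-poc",
    "plan": {
        "elasticsearch": {"version": "5.4.0"},
        "cluster_topology": [
            {
                "memory_per_node": 2048,   # MB of RAM per node
                "node_count_per_zone": 1,
                "zone_count": 1,
            }
        ],
    },
}

payload = json.dumps(plan)
# To actually submit the request you would POST the payload, e.g. with the
# `requests` library (not executed in this sketch):
# requests.post(f"{ECE_HOST}/api/v1/clusters/elasticsearch",
#               data=payload, auth=("admin", "PASSWORD"))
print(payload)
```

Scaling up later is the same idea: submit an updated plan with a bigger topology, and ECE handles the rolling change for you.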

Now, anytime someone wants to try Elastic or go whole hog in production, it's easy to deliver. Looking for a bit of POC play time? Spin up a new cluster. Need a short-lived environment? No problem. Worrying about asking for hardware that's compliant with a security plan? Don't. (One does not simply ask for a new VM, or dedicated machine for testing...) It's already handled with ECE. Does the project need multiple environments for QA and pre-production? Sounds good.

ECE was designed to solve these problems, and solve them well. We use the same code base that powers ECE to power our Elastic Cloud service, which has managed thousands upon thousands of clusters for a few years now. At the end of the day, this product is ideal for anyone who expects to grow with Elastic. In fact, one of our first customers has only 5 production clusters and finds value in the centralized management the product provides for their expected future growth.

So go ahead and take it for a spin and dive into the details with the step-by-step documentation.