Comparing Elastic’s Elasticsearch Service costs to New Relic and DataDog
Thousands of customers leverage the official Elasticsearch Service (ESS) on top of Elastic Cloud to get the best experience for running Elasticsearch — including all of Elastic’s exclusive products such as Elastic Logs, Elastic APM, Elastic SIEM, and more. ESS is the only managed Elasticsearch service that gives you the complete Elasticsearch experience — all the features, all the solutions, and support from the source — as well as a number of operational and deployment benefits that complement these products.
In a previous post, we explained the cost benefits of Elastic’s resource-based pricing model compared with other vendors’ more punitive pricing models. In this series, we’ll highlight additional operational and deployment benefits that allow you to drive down costs even further and increase the value you derive from ESS. This post covers how the broad cloud provider and region presence of ESS, currently 22 regions (and counting) across 3 cloud providers (Azure, GCP, and AWS), allows you to benefit from data locality and significantly reduce your cloud provider data transfer costs.
Why are my costs so high?
Some of the most overlooked expenses when running on Amazon, Azure, or Google are outbound data transfer charges, which are incurred whenever data leaves a cloud region. Every log file, metric data point, and security event shipped outside the region where your application or infrastructure is deployed incurs an additional fee. While moving data into a cloud provider, or within a region, can be relatively cheap or free, Amazon’s data-out fee, for example, can be nine times its in-region transfer rate ($0.09 vs. $0.01 per GB).
With exponentially growing data volumes in observability use cases such as logging, metrics, and APM, as well as networking and endpoint data in security, this can represent significant added costs over time. These fees are even more painful when you use SaaS solutions that have no local presence in the region where your workloads run. For example, DataDog and New Relic only expose “US” and “EU” endpoints for shipping your data. These endpoints give no indication of which underlying cloud provider or region you are sending data to, all but guaranteeing that you’ll face punitive data-out charges.
How much can data transfer egress charges affect me?
To get a real sense of data transfer egress charges, we’ll run through some real-world scenarios of shipping various data types (logs, metrics, and APM traces) to ESS, New Relic, and DataDog. We’ll assume all data sent to these services is uncompressed, the default in many agents, though compression is configurable depending on how you ship data to each service. We’ll also assume our application runs in the Azure East US 2 region and our ESS deployment runs in the same region. For the other SaaS solutions, we’ll use their respective US options.
When running in the same Azure region, bandwidth charges are currently $0 per GB to transfer data in and out of an availability zone. To ensure a fair comparison, we’ll also account for the bandwidth charges you could incur if you were running on another cloud provider that bills for in-region transfer. As of this post, AWS charges $0.01 per GB for traffic within a region, and GCP charges the same.
The Azure East US 2 region has tiered rates for outbound data transfer based on total monthly volume. At the time of writing, the first 5GB per month are free, usage between 5GB and 10TB per month is billed at $0.087 per GB, and the next 40TB at $0.083 per GB. To get an accurate number and account for all the tiers properly, we’ll use the bandwidth section of the Azure pricing calculator to generate the monthly cost and multiply by 12 to get the annual cost.
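As a quick illustration of the tiered math, here is a hypothetical month in which 15TB (taken as 15,000GB) leaves the region. This is a sketch based on the rates quoted above, not a live pricing lookup:

```python
# Tiered outbound transfer cost for one month of 15TB egress,
# using the rates quoted in this post (illustrative only).
free_gb = 5
tier1 = (10_000 - free_gb) * 0.087   # 5GB..10TB billed at $0.087/GB
tier2 = (15_000 - 10_000) * 0.083    # 10TB..15TB billed at $0.083/GB
monthly = tier1 + tier2
print(f"${monthly:,.2f} per month -> ${monthly * 12:,.0f} per year")
```

At this volume the bill lands around $1,285 a month, which is why the region you ship data to matters.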
Outbound data transfer costs: Logs (annual)
First, let’s compare outbound data transfer costs for one of the most popular and voluminous data sources: logs. We’ll use three different volumes: 50GB, 500GB, and 1TB per day from an application. For each log volume, we’ll divide by 24 to get an hourly ingest rate, then multiply by 730 to get a monthly amount. We’ll enter that into the calculator to get the total monthly cost, then multiply by 12 to get the total annual cost. These numbers are rounded to the nearest whole number:
|Logging ingest rate per day|50GB|500GB|1TB|
|---|---|---|---|
|Elasticsearch Service (same region)|Up to $182|Up to $1,825|Up to $3,650|
|New Relic / DataDog (US)|~$1,583|~$15,622|~$30,770|
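The figures above can be approximated in a few lines. The `monthly_egress` helper below is an illustrative sketch of the tiered rates quoted earlier, not an official calculator, and it assumes 1TB = 1,000GB:

```python
# Estimate annual in-region ($0.01/GB worst case) and cross-region
# egress costs for each daily log volume, per the rates in this post.

def monthly_egress(gb: float) -> float:
    """Tiered monthly egress cost in USD for gb GB of outbound data."""
    cost, prev = 0.0, 0.0
    for cap, rate in [(5, 0.0), (10_000, 0.087), (50_000, 0.083)]:
        span = min(gb, cap) - prev
        if span <= 0:
            break
        cost += span * rate
        prev = cap
    return cost

for gb_day in (50, 500, 1_000):              # 1TB/day taken as 1,000GB
    monthly_gb = gb_day / 24 * 730           # hourly rate x 730 hours/month
    in_region = gb_day * 365 * 0.01
    egress = monthly_egress(monthly_gb) * 12
    print(f"{gb_day}GB/day: in-region up to ${in_region:,.0f}/yr, "
          f"egress ~${egress:,.0f}/yr")
```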
It’s easy to see that data transfer costs are not something you can ignore at even moderate logging scale.
Logs, however, are only part of the equation to reaching full observability into your application and infrastructure. Customers typically also add metrics and APM traces to enhance their ability to troubleshoot and solve issues. Let’s consider a scenario of adding metrics on top of 50, 1,000, and 5,000 hosts and determine the data transfer cost.
Outbound data transfer costs: Infrastructure metrics (annual)
Similar to the method above, we’ll first calculate the total data generated by each host and then use that to determine the total data sent out. While metric collection and storage differ per provider, to normalize we’ll assume each agent collects 100 metrics (system and custom) every 10 seconds, at 100 bytes per metric. This works out to roughly 31.5GB per host per year.
|Total number of hosts monitored|50|1,000|5,000|
|---|---|---|---|
|Elasticsearch Service (same region)|Up to $16|Up to $315|Up to $1,575|
|New Relic / DataDog (US)|~$132|~$2,735|~$13,547|
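Here is the arithmetic behind the per-host volume and the in-region row, as a sketch. The per-host figure is rounded to 31.5GB per year, and the $0.01/GB worst case only applies on providers that bill for in-region transfer:

```python
# 100 metrics x 100 bytes every 10 seconds -> bytes per second per host
bytes_per_sec = 100 * 100 / 10
per_host_gb_year = bytes_per_sec * 86_400 * 365 / 1e9   # = 31.536GB
per_host_rounded = 31.5                                  # as used in the table

for hosts in (50, 1_000, 5_000):
    annual_usd = hosts * per_host_rounded * 0.01         # $0.01/GB in-region
    print(f"{hosts:>5} hosts: up to ${annual_usd:,.0f} per year")
```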
With the rise of containers and microservices, these numbers can grow enormously: you can easily have thousands of hosts, some living less than a minute, each emitting important metric data. With these next-generation data sources, additional custom metrics, or more frequent collection intervals, outbound data transfer costs quickly add to your total bill.
Outbound data transfer costs: APM (annual)
Last but not least, let’s add APM data to complete the observability trifecta. This example will use 10, 50, and 100 services, each generating 10 medium-sized transactions (~200KB) per minute. That totals roughly 1.05TB per service per year:
|Total number of services|10|50|100|
|---|---|---|---|
|Elasticsearch Service (same region)|Up to $105|Up to $525|Up to $1,050|
|New Relic / DataDog (US)|~$908|~$4,562|~$9,130|
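The APM math can be sketched the same way, assuming decimal units (1TB = 1,000GB):

```python
kb_per_min = 10 * 200                         # 10 transactions x ~200KB each
per_service_tb_year = kb_per_min * 1_000 * 60 * 24 * 365 / 1e12
# -> roughly 1.05TB per service per year

for services in (10, 50, 100):
    annual_gb = services * 1_050              # 1.05TB = 1,050GB per service
    print(f"{services:>3} services: up to ${annual_gb * 0.01:,.0f} per year")
```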
Overall, you can see that sending data to a SaaS solution that isn’t in the same region as your services, infrastructure, or logging devices incurs a hefty fee. These fees are then compounded by those services’ complicated billing models for logs ingested, APM agents used, and metrics collected.
In contrast, the Elasticsearch Service is available today in 22 distinct regions across 3 cloud providers, with many more coming in the near future, and offers a unique resource-based pricing model that doesn’t penalize you for using more APM agents, monitoring more hosts, or ingesting more logs.
Want to experience all these benefits for yourself? Be sure to sign up for a 14-day trial.