DevRel newsletter — January 2026


Hello from the Elastic DevRel team! In this newsletter, we cover the first Elastic Jina models, complimentary on-demand trainings, the latest blogs and videos, and upcoming events.

What’s new?

Version 9.2 of Elasticsearch and the Elastic Stack brings:

  • Elastic Agent Builder: A new LLM-powered framework to help developers build custom AI agents that provide the right context from Elasticsearch via conversational interfaces. This simplifies workflows around relevance and agentic automation.

  • Streams (AI-driven log summarization): Automatically parse, compress, and extract insights from unstructured logs, helping SREs accelerate investigations and reduce operational overhead.

  • Elasticsearch Query Language (ES|QL) enhancements: smart lookup joins (enrichment across multiple fields, comparisons such as <, >, and !=, and remote clusters) and native time series analysis in Discover, with functions such as RATE, TBUCKET, and *_OVER_TIME available directly in the UI and in queries (see the ES|QL sketch after this list).

  • Vectors excluded from _source by default for newly created indices, which reduces storage overhead and often improves indexing performance.

  • Improved vector search efficiency with DiskBBQ: A novel approach to vector index storage and retrieval that reads compact clusters directly from disk, removing the need to load complete indices into memory and significantly reducing memory usage. Performance benchmarks indicate sub-20 ms latency even with tight memory constraints.
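
If you want to try DiskBBQ on a new index, the rough shape is sketched below. This is a minimal, hedged example rather than the definitive setup: the index name and field are made up, and it assumes DiskBBQ is selected through the dense_vector index_options type (shown here as bbq_disk); check the dense_vector documentation for your version for the exact option name and defaults.

PUT products-vectors
{
  "mappings": {
    "properties": {
      "embedding": {
        "type": "dense_vector",
        "dims": 1024,
        "index": true,
        "similarity": "cosine",
        "index_options": {
          // assumption: DiskBBQ is enabled through this index_options type
          "type": "bbq_disk"
        }
      }
    }
  }
}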

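To give a feel for the lookup join side of ES|QL, here is a minimal, hedged sketch using the ES|QL query API. The index names and fields (app-logs, hosts, host.name, host.datacenter, log.level) are hypothetical, and the hosts index is assumed to have been created with index.mode set to lookup; the new multi-field, comparison, and remote-cluster variants build on this same LOOKUP JOIN command.

POST /_query
{
  "query": """
    FROM app-logs
    | WHERE log.level == "error"
    | LOOKUP JOIN hosts ON host.name
    | STATS errors = COUNT(*) BY host.datacenter
    | SORT errors DESC
  """
}
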
ELSER and Jina models for sparse and dense embeddings via the Elastic Inference Service (EIS) are now available. These models are accessible directly in Elastic Cloud Serverless as the .elser-2-elastic and .jina-embeddings-v3 inference endpoints. EIS generates embeddings and performs vector search on a predictable, pay-as-you-go plan (priced per million tokens), with no ML nodes to set up.
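
If you want to see the raw embeddings, you can also call these endpoints directly through the inference API before wiring them into an index. A small sketch, assuming the Jina endpoint is exposed as a text_embedding task:

POST _inference/text_embedding/.jina-embeddings-v3
{
  "input": ["frutas", "der Apfel"]
}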

With the Jina model available through EIS and the semantic_text field type, multilingual semantic search just got easier and more predictable cost-wise. It’s straightforward to create an index for semantic search backed by dense vectors, with no model setup needed:

PUT inventory
{
  "mappings": {
    "properties": {
      "item": {
        "type": "semantic_text",
        "inference_id": ".jina-embeddings-v3"
      }
    }
  }
}

Adding data doesn’t require any extra network roundtrips; everything is handled by Elasticsearch and EIS under the hood:

POST inventory/_bulk?refresh=true
{ "index": { } }
{ "item": "cherries 🍒" }
{ "index": { } }
{ "item": "train 🚆" }
{ "index": { } }
{ "item": "bananas 🍌" }
{ "index": { } }
{ "item": "computer 💻" }
{ "index": { } }
{ "item": "apple 🍎" }
{ "index": { } }
{ "item": "framboises 🍓" }
{ "index": { } }
{ "item": "der Apfel 🍏" }
{ "index": { } }
{ "item": "tomato 🍅" }
{ "index": { } }
{ "item": "das Auto 🚗" }
{ "index": { } }
{ "item": "bicycle 🚲" }
{ "index": { } }
{ "item": "naranjas 🍊" }

Then, perform multilingual semantic search:

POST inventory/_search
{
  "query": {
    "match": {
      // "frutas" means "fruits" in Spanish
      "item": "frutas"
    }
  }
}

The fruits (cherries, naranjas, bananas, framboises, apple, der Apfel, and tomato) rank highest in the results:

 "hits": {
   "total": {
     "value": 11,
     "relation": "eq"
   },
   "max_score": 0.67841315,
   "hits": [
     {
       "_index": "inventory",
       "_id": "8EtNK5sBRerpcHC7zVrq",
       "_score": 0.67841315,
       "_source": {
         "item": "cherries 🍒"
       }
     },
     {
       "_index": "inventory",
       "_id": "-ktNK5sBRerpcHC7zVrr",
       "_score": 0.63476694,
       "_source": {
         "item": "naranjas 🍊"
       }
     },
   
  {
       "_index": "inventory",
       "_id": "8ktNK5sBRerpcHC7zVrr",
       "_score": 0.6138144,
       "_source": {
         "item": "bananas 🍌"
       }
     },
     // more results
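
As a side note, the same search can also be written with the dedicated semantic query instead of match; since the inference endpoint is resolved from the semantic_text mapping, this is a roughly equivalent alternative spelling:

POST inventory/_search
{
  "query": {
    "semantic": {
      "field": "item",
      "query": "frutas"
    }
  }
}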

Upcoming events

Join our first Elastic Agent Builder livestream on January 22, 2026.

Elastic{ON} Tour, the one-day Elastic conference series around the world, is back. Register and join us in:

  • Paris — January 27, 2026

  • London — February 26, 2026

  • São Paulo — March 5, 2026

  • Sydney — March 5, 2026

  • Tokyo — March 10, 2026

  • Singapore — March 17, 2026

  • Washington, D.C. — March 19, 2026

We’d like to have a good representation of the Elastic community on stage. Submit your ideas even if they’re still a bit raw. We're happy to iterate on them with you.

Join your local Elastic User Group chapter for the latest news on upcoming events! You can also find us on Meetup.com. If you’re interested in presenting at a meetup, send an email to meetups@elastic.co.

The release and timing of any features or functionality described in this post remain at Elastic's sole discretion. Any features or functionality not currently available may not be delivered on time or at all.

In this blog post, we may have used or referred to third party generative AI tools, which are owned and operated by their respective owners. Elastic does not have any control over the third party tools and we have no responsibility or liability for their content, operation or use, nor for any loss or damage that may arise from your use of such tools. Please exercise caution when using AI tools with personal, sensitive or confidential information. Any data you submit may be used for AI training or other purposes. There is no guarantee that information you provide will be kept secure or confidential. You should familiarize yourself with the privacy practices and terms of use of any generative AI tools prior to use. 

Elastic, Elasticsearch, and associated marks are trademarks, logos or registered trademarks of Elasticsearch B.V. in the United States and other countries. All other company and product names are trademarks, logos or registered trademarks of their respective owners.