Powering the glomex Marketplace with Elasticsearch

glomex – The Global Media Exchange is a one-of-a-kind, open marketplace for premium video content. Born out of the ProSiebenSat.1 Group, Germany’s number one TV broadcaster in both the TV-advertising and audience markets, glomex provides content owners, publishers, and advertisers with a web-based platform where they can easily and fairly trade content for reach.


Figure 1: glomex Media Exchange homepage

When the glomex engineering team started to build the glomex marketplace from scratch, we focused on some key principles that our architecture needed to provide.

First, launching a new product required flexibility in our architecture so that we could ship new features quickly and learn from our users’ behavior. Second, due to the projected exponential increase of requests to our services, we needed a technology platform that could scale easily and quickly.

To meet both of these core requirements, we designed a scalable microservice architecture based on Amazon Web Services (AWS). Since the team had a lot of positive experience building large-scale products with Elasticsearch, one central decision we made early on was to make Elasticsearch a core component of our architecture.

At glomex, Elasticsearch powers two of our central systems: search across our growing content catalogue and our Real-Time Data Platform. The following sections provide an overview of both use cases and highlight a couple of technical considerations about how glomex integrated Elasticsearch into its architecture.

Search at Scale

Search plays a critical role on the glomex marketplace. Offering an extensive catalogue of video content requires a superior search experience for our business customers, who require easy and fast selection of content to be published on their websites. To satisfy the requirements of our customers, the search interface in the portal provides various features like faceted search, autocompletion, and the support for multiple languages.

The following figure presents the glomex Extended Search interface for customers within the portal. This interface targets B2B customers, such as editorial teams of both small and large content portals, searching for relevant content to embed into their websites. This presents challenges for the user experience of the search interface: the time users spend within the portal performing searches needs to be minimized, while the interface still provides enough depth for very specific queries.

Figure 2: glomex Media Exchange search interface

Elasticsearch is a great fit for this set of requirements since many of the features we need (like faceted search, query-time boosting, and multi-language support) are available out of the box. Thanks to reasonable defaults, Elasticsearch provides fast results with little tuning. In addition, Elastic and the community provide extensive documentation and best practices for implementing more specific features (like search-term auto-completion and search-term suggestions).
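To make the faceted-search idea concrete, here is a hypothetical sketch of how such a request body could be built. The field names (`title`, `language`, `taxonomy`) are illustrative assumptions, not the actual glomex mapping:

```python
# Sketch of a faceted search request: a full-text match plus terms
# aggregations that drive the facet counts shown in the portal UI.
# All field names here are illustrative, not the real glomex index mapping.

def build_faceted_query(term, language=None):
    """Build an Elasticsearch request body with facet aggregations."""
    query = {"bool": {"must": [{"match": {"title": term}}]}}
    if language is not None:
        # Filters don't affect scoring, which keeps facet filtering cheap.
        query["bool"]["filter"] = [{"term": {"language": language}}]
    return {
        "query": query,
        "aggs": {
            "languages": {"terms": {"field": "language"}},
            "taxonomies": {"terms": {"field": "taxonomy"}},
        },
        "size": 25,
    }
```

The aggregations return per-facet document counts alongside the hits, so a single round trip renders both the result list and the facet sidebar.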

While search via the portal is a central feature of the glomex marketplace, the majority of requests hitting our service infrastructure result from the integrations on our customers' websites via our glomex Embed-Video-Player or from our APIs. Since we cannot predict which content will go viral at which point in time, our architecture is designed to be highly scalable, leveraging several layers of caching.

glomex Search Architecture

All glomex products run on a microservice-based architecture. The following figure presents a high-level overview of the services involved in powering the glomex search experience and the API.


Figure 3: MES - Search Architecture

In an earlier version of the architecture, Elasticsearch was used as the central datastore for all video metadata. While this is a very common architecture adopted by many projects, we migrated our video metadata to a separate data store based on Amazon DynamoDB and use Elasticsearch purely for indexing the data. One key benefit resulting from this separation of concerns is that both kinds of data, search index and video metadata, can be changed in parallel without affecting one another. For example, our search team can now iterate quickly on new search prototypes without affecting the metadata. This gives us the flexibility to put even more focus on our search experience.
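A hypothetical sketch of the projection step this separation implies: the full metadata record stays in DynamoDB, and only a searchable subset is written to the Elasticsearch index. The field list below is illustrative, not the production mapping:

```python
# Only fields relevant to search are projected into the index; everything
# else stays in the metadata store. The set below is an assumption for
# illustration, not the actual glomex schema.
SEARCHABLE_FIELDS = {"title", "description", "language", "taxonomy", "modified_at"}

def to_index_document(metadata):
    """Project a full metadata record onto the search index mapping."""
    return {k: v for k, v in metadata.items() if k in SEARCHABLE_FIELDS}
```

Because the projection is one-way, the index mapping can be reworked and the index rebuilt from the metadata store at any time without touching the source of truth.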

VideoExplorer Service - All search requests originating from the portal and public APIs are dispatched to our VideoExplorer Service. Elasticsearch provides a very rich set of features for searching content, but the data model of the glomex marketplace is focused on searching video metadata. Therefore, this service implements a domain-specific language (DSL) that reduces the set of available search queries to a set of features targeted specifically at searching the glomex video catalogue. The DSL thus provides a level of abstraction above the Elasticsearch interface, focusing on ease of use. This reduces the complexity for the upstream services and limits the possibilities for errors. A simplified example of the DSL is shown in the following code block:

    "tenant_id": "t-1",
    "geolocation": "en",
    "filters": [
        { "available_on_portal": true },
           { "not_blacklisted": true }
        { "ids": ["v-3", "v-1", "v-2"] },
        { "any_taxonomy": ["tx-1", "tx-2"] },
        { "language": "de" },
        { "raw_query": { "query": { "match_all": {} } } }
    "order": "modified_at desc, tenant_id asc",
    "limit": 100,
    "offset": 50,
    "serializer": "player",
    "fields": "clip_id,titles,ad_tags"

MetadataSearch Service - This service represents the interface to Elasticsearch. It dispatches search queries to the respective index. The search result is a list of unique IDs representing the individual entities found by the search query. Due to the set of features that come with Elasticsearch out of the box, like running searches over a cluster of machines, this service can be very simple, since most of the complexity of performing search queries is handled by Elasticsearch. Using a separate service acting as an abstraction layer makes it possible to transparently switch between different index instances.
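A minimal sketch of this contract: request only document IDs from Elasticsearch and leave metadata lookup to downstream services. The helper names are assumptions for illustration; the response shape follows the standard Elasticsearch search response:

```python
# The MetadataSearch contract, sketched: skip _source entirely (only IDs are
# needed) and pull the IDs out of a standard search response.

def id_only_request(query, size=100):
    """Build a search request body that returns hits without _source."""
    return {"query": query, "_source": False, "size": size}

def extract_ids(search_response):
    """Pull the unique document IDs out of an Elasticsearch search response."""
    return [hit["_id"] for hit in search_response["hits"]["hits"]]
```

Skipping `_source` keeps responses small, which matters when the service only exists to hand a list of IDs to the Serializer Service.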

Serializer Service - Given the list of unique IDs returned by the MetadataSearch Service, the Serializer Service maps these IDs to their metadata and serializes the results into the requested format. Serialization, in our case, includes transforming as well as filtering the content that is then returned by search. Elasticsearch itself includes powerful features to filter and transform search results, and we used them in the first versions of our architecture, too.

With growing business requirements, though, we needed additional flexibility to implement more complex business rules for filtering and the option to transform results into different formats.
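A hedged sketch of the Serializer Service core, under the assumption that the metadata store behaves like a key-value lookup (DynamoDB in our case; a plain dict here): resolve IDs, drop anything without metadata, and optionally restrict the returned fields:

```python
# Resolve search-result IDs against a metadata store, preserving the
# ranking order produced by search. The function name and signature are
# illustrative, not the production interface.

def serialize(ids, metadata_store, fields=None):
    """Map search-result IDs to metadata records, preserving result order."""
    results = []
    for video_id in ids:
        meta = metadata_store.get(video_id)
        if meta is None:                     # drop stale IDs with no metadata
            continue
        if fields is not None:               # field filtering per output format
            meta = {k: v for k, v in meta.items() if k in fields}
        results.append(meta)
    return results
```

Business rules such as format-specific transformations slot naturally into this loop, which is exactly the flexibility that was hard to get from in-index filtering alone.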

Splitting responsibilities into these specific services presents many benefits compared to a monolithic approach: among those benefits are caching responses to improve service performance, extended implementation of business logic, and improved reliability. This separation of concerns also leads to more flexibility in evolving the platform.

As mentioned earlier in this post, the metadata is stable compared to the search index, which changes frequently while optimizing search experience and performance. By providing a service abstraction above the data stores, both can evolve at different paces.

Business Insights in Real-Time

The second main use case of Elasticsearch at glomex engineering is our Data Platform. At glomex, we believe that product decisions need to be driven by data. To enable the organization to make decisions based on data, our Data Platform team has designed and implemented an architecture that supports analytics on large sets of data while also delivering real-time insights.

Data Platform Architecture

Besides the data collected from our system infrastructure, our glomex embed video player generates data about how video content is used on our customers’ websites; this data is ingested into the Data Platform as a stream of events.

Figure 5: Data Platform Analytics Pipeline - Schematic Overview

Since glomex relies heavily on AWS for its system architecture, many components of the Data Platform are based on AWS services like Kinesis, Lambda, EMR and Redshift. After events are persisted by Amazon Kinesis Firehose, the Data Pipeline applies various transformations and aggregations to the data before loading it into the two datastores.

By leveraging serverless computing with AWS Lambda, we can process virtually any data workload by automatically scaling all steps involved in the pipeline. We use Amazon Redshift as a central data warehouse to store all event data and enable queries over long time ranges. For fast data availability, our team built a system for real-time analytics based on Elasticsearch and different client technologies. Despite all of the steps and systems involved in ingesting, processing, and loading the data, our team has managed to reduce the latency of new data to between 5 and 15 minutes, depending on the type of data involved.
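To illustrate the serverless processing step, here is a hedged sketch of a Kinesis Firehose transformation Lambda, roughly the kind of function such a pipeline uses; the `event_type` enrichment is purely illustrative. The record format follows the standard Firehose transformation contract (`recordId` / `result` / base64-encoded `data`):

```python
import base64
import json

def handler(event, context):
    """Firehose transformation Lambda: decode, enrich, re-encode each record."""
    output = []
    for record in event["records"]:
        # Firehose delivers record payloads base64-encoded.
        payload = json.loads(base64.b64decode(record["data"]))
        payload.setdefault("event_type", "unknown")   # example enrichment only
        data = base64.b64encode(json.dumps(payload).encode("utf-8")).decode("ascii")
        output.append({"recordId": record["recordId"], "result": "Ok", "data": data})
    return {"records": output}
```

Because Lambda scales this function per batch of records, the same code handles both quiet periods and viral traffic spikes without capacity planning.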

Our Data Platform architecture received the Gartner award for “Data & Analytics Excellence” in 2017.

Consuming Data

All of the data generated by our systems is made available to our teams using different interfaces to support the specific needs of our different stakeholder groups.

For our business departments as well as our management team, our Data Platform team has implemented a set of dashboards using Grafana to visualize the most important metrics driving our business, but also to enable deeper insights into how our content is consumed. Our data scientists access the same data with either Kibana or Jupyter Notebooks to gain even deeper insights. Together, these clients enable our teams to rely on data when making decisions.

For our B2B customers, the glomex marketplace provides direct access to a rich set of metrics so that they themselves can use those insights to drive their own publishing decisions.

Figure 6: Real-Time Analytics - Client Architecture

From an engineering perspective, the integrations of both Kibana and Jupyter Notebooks with Elasticsearch are straightforward. With the release of Elastic Stack 5.0, the integration of Elasticsearch and Kibana has become even easier, as it now provides a consistent versioning for both Elasticsearch and Kibana, thus removing the friction of finding matching versions for both. For the integration of Jupyter Notebooks with Elasticsearch, there are different open source alternatives.

In contrast to the Data Science environment, both of our Analytics environments, including the Grafana dashboards and our Analytics Portal, require additional systems to ensure data integrity and security. The central component in this architecture is SearchProxy, which serves as an abstraction layer above the Elasticsearch cluster. Having this extra level of abstraction enables us to limit the set of features exposed to these clients. It also imposes additional restrictions on data access and enables caching of query results.
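A minimal sketch of the SearchProxy idea: expose only a whitelisted set of operations and serve repeated queries from a cache. The operation names and the backend callable are illustrative assumptions, not the production interface:

```python
import hashlib
import json

# Only these operations are exposed to analytics clients (an assumption
# for illustration; the real whitelist is defined by the production proxy).
ALLOWED_OPERATIONS = {"search", "count"}

class SearchProxy:
    """Restrict which queries reach the cluster and cache their results."""

    def __init__(self, backend):
        self.backend = backend       # callable(operation, query) -> result
        self.cache = {}

    def query(self, operation, query):
        if operation not in ALLOWED_OPERATIONS:
            raise PermissionError(f"operation {operation!r} is not exposed")
        # A stable hash of (operation, query) acts as the cache key.
        key = hashlib.sha256(
            json.dumps([operation, query], sort_keys=True).encode()
        ).hexdigest()
        if key not in self.cache:                # serve repeats from cache
            self.cache[key] = self.backend(operation, query)
        return self.cache[key]
```

Dashboards tend to re-issue identical queries on every refresh, so even a simple cache like this keeps a large share of traffic off the cluster.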


One of the strengths of Elasticsearch is its fit with the whole lifecycle of a product, from its early stages to its hopefully prosperous growth phase. When launching a new project, or, as in our case, launching an entirely new company, Elasticsearch provides results very quickly while keeping engineering complexity and effort at a low level. It also supports rapid learning via fast prototyping. As the product matures, Elasticsearch provides the flexibility required to scale and to implement more sophisticated features. While this comes with additional technical complexity, both Elastic and the community provide a rich set of resources to deal with it.

On the glomex engineering team, we have made Elasticsearch and its ecosystem of tools a core component of our technology stack, and we are constantly building up expertise. Using Elasticsearch, we can focus not on implementing search technology, but on building better features that are loved by our users.


Michael Muckel is Vice President of Engineering at glomex, where he is responsible for R&D, analytics, and machine learning. He has a background in designing large-scale applications in various industries such as media, telecommunications, and security systems.
