Thinking about building an end-to-end security analytics platform with the Elastic Stack? This talk explores how to do it with a homegrown solution that's fast and scalable, so your team can ingest more data faster and win back time for threat hunting instead of just responding to alerts.
From the technical talent behind some of the on-screen hacks on USA Network’s Mr. Robot, this talk covers how to improve incident response by combining technologies like Elasticsearch with distributed, on-endpoint analysis for comprehensive, high-speed and efficient visibility at any scale.
Smart Tracing at Deutsche Telekom - Revealing the secrets behind modern networks with the Elastic Stack
Data drives our modern world, yet most of it is never examined, even though its implications can be wide-ranging. Using the Elastic Stack and its data analytics techniques, Smart Tracing works like an x-ray, revealing what sits behind modern networks based on network data. It taps the hidden data that network equipment and devices use to communicate with each other, creating new insights into performance issues, security threats, and network faults.
Cross-cluster search, ingest node, rollover API, shrink API, field collapsing, unified highlighter… there's lots to love in Elasticsearch these days. Get up to speed on all things 5.x and see how 6.x will address pain points around scale, upgrading, recovery, and sparse data and disk usage.
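As a taste of two of the features the abstract names, here is a minimal sketch of the rollover and shrink APIs as console requests; the index names (`logs_write`, `logs-000001`, `logs-shrunk`) and threshold values are illustrative assumptions, not from the talk:

```
# Roll the alias "logs_write" over to a new index once either condition is met
POST /logs_write/_rollover
{
  "conditions": {
    "max_age": "7d",
    "max_docs": 10000000
  }
}

# Shrink an existing index into one with fewer primary shards
POST /logs-000001/_shrink/logs-shrunk
{
  "settings": {
    "index.number_of_shards": 1
  }
}
```

Rollover keeps write indices small and recent, while shrink reduces shard count on older, read-only indices — a common pairing for time-based data.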
Walk through all things ingest for Logstash 5.x, from dead letter and persistent queues to the Grok Debugger and new monitoring APIs. Then get caught up on new lightweight data shipper additions like Heartbeat and Metricbeat, as well as new modules that simplify the getting started process.
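The dead letter and persistent queues mentioned above are enabled in Logstash's settings file, and failed events can be replayed with a dedicated input plugin. A minimal sketch, assuming default Logstash paths (the pipeline below is illustrative, not from the talk):

```
# logstash.yml — turn on the persistent queue and the dead letter queue
queue.type: persisted
dead_letter_queue.enable: true

# pipeline.conf — reprocess events that could not be indexed
input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"
    commit_offsets => true
  }
}
output {
  stdout { codec => rubydebug }
}
```

Events rejected by an output (for example, an Elasticsearch mapping conflict) land in the dead letter queue instead of being dropped, and can be inspected or re-ingested later.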
To operate as an independent cloud provider and enforce additional security guidelines, Volkswagen developed its own internal, centralized logging and monitoring solution. It is based on microservices implemented in Java as the log endpoint, Kafka for the processing pipeline, Elasticsearch for log storage, and Kibana for visualization. In this talk we explain the rationale behind the decision to build our own service, walk through the architecture, and share our experiences operating the platform.
Learn how to easily deploy and manage secure Elasticsearch clusters at scale and on the infrastructure of your choice using Elastic Cloud Enterprise.
Car2go is an always-on business offering a car-based mobility service to customers in urban areas. Together, customers and cars form an IoT service generating data that must be processed and analyzed in real time: vehicle connectivity and condition, position data, reservations and payments, registration and validation. Elasticsearch was introduced to all development teams as a shared offering, so high-quality analysis of internal system status can be delivered to all parts of the organization. Using DevOps methods, each team can implement, modify, and visualize its data effectively, giving a fast, real-time understanding of capacity, errors, and business opportunities.
Cross-cluster search, ingest node, rollover API, shrink API, field collapsing, unified highlighter… so many new features to love in Elasticsearch. Get up to speed on 5.x and see how 6.x addresses pain points around scaling, upgrades, recovery, sparse data, and disk usage.
Step into the world of data ingestion with Logstash 5.x, which includes new features such as the dead letter queue, persistent queues, the Grok Debugger, and new monitoring APIs. Then get familiar with the lightweight data shippers Heartbeat and Metricbeat.
Learn how to easily deploy and manage secure Elasticsearch clusters at scale, on the infrastructure of your choice, using Elastic Cloud Enterprise (ECE).
Officially launched in version 5.5, machine learning (ML) lets you automatically get more out of your data. This session shows how to use the Elastic Stack to ingest, enrich, visualize, analyze, and alert on NGINX logs in order to detect, and potentially predict, anomalies in your data.
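The anomaly detection described above is driven by an X-Pack ML job. A minimal sketch of creating one over NGINX request counts, assuming the logs are indexed with an `@timestamp` field; the job name `nginx_requests` and the bucket span are illustrative assumptions:

```
PUT _xpack/ml/anomaly_detectors/nginx_requests
{
  "analysis_config": {
    "bucket_span": "15m",
    "detectors": [
      { "function": "count" }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

A `count` detector with a 15-minute bucket span models normal request volume over time, so unusual traffic spikes or drops surface as anomalies in Kibana's Machine Learning UI.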