26 January 2017 Engineering

Integrating the Elastic Stack with ArcSight SIEM - Part 3

By Samir Bennacer

In the previous posts in this series (part 1, part 2), we demonstrated a simple architecture for integrating ArcSight with Elasticsearch.

In this blog post, we continue the theme by showing you how to scale that architecture.

If you need high indexing throughput and a long retention policy, you can set up a hot-warm architecture for Elasticsearch. On the ingestion side, you can place a message queue between ArcSight and Elasticsearch. This provides architectural isolation between the producer (the ArcSight connector) and the consumer, allowing Logstash to be scaled independently: to process more data, you simply add more Logstash instances reading from a topic.
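As a brief sketch of the hot-warm side: each Elasticsearch node is tagged with a custom attribute (box_type below is a naming convention, not a requirement), so that heavily indexed new indices land on hot nodes while older indices are relocated to warm nodes.

# elasticsearch.yml on a hot (indexing) node
node.attr.box_type: hot

# elasticsearch.yml on a warm (retention) node
node.attr.box_type: warm

New indices are then routed to hot nodes with the index setting index.routing.allocation.require.box_type: hot, and moved to warm nodes by updating that setting as they age.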

Kafka also helps to buffer incoming data when it exceeds the Elasticsearch cluster's ability to ingest during peak periods or spikes. While Logstash persistent queues help ensure end-to-end delivery, data loss is still possible if a Logstash instance is lost. Through data replication, Kafka protects against this scenario.
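For example, the cef topic used later in this post could be created with a replication factor of 2 so that a single broker failure does not lose buffered events; the partition count caps how many Logstash instances can consume in parallel. This is a sketch assuming a local ZooKeeper on its default port:

# Create the "cef" topic with 2 replicas and 4 partitions
# (up to 4 Logstash instances can read the topic in parallel)
bin/kafka-topics.sh --create --zookeeper localhost:2181 \
    --replication-factor 2 --partitions 4 --topic cef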

How to integrate ArcSight with Elasticsearch using Kafka as a message queue

This architecture became possible when HP recently added Kafka as a supported destination for the ArcSight connector. The user adds a Kafka destination to the connector, and Logstash nodes pull data from the appropriate Kafka topic before indexing it into Elasticsearch.

Logstash can read directly from Kafka using the Kafka input plugin, which integrates natively with Kafka through its Java APIs.
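A minimal Logstash pipeline for this setup might look like the following sketch. The broker address, topic, credentials, and index name follow the examples used in this series; adjust them to your environment, and note that the cef codec requires the logstash-codec-cef plugin used in the earlier posts.

input {
  kafka {
    bootstrap_servers => "localhost:9092"  # Kafka broker(s) from the Docker setup below
    topics => ["cef"]                      # topic the ArcSight connector writes to
    codec => cef                           # parse Common Event Format events
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    user => "elastic"                      # built-in X-Pack user (see below)
    password => "changeme"
    index => "cef-%{+YYYY.MM.dd}"          # matches the cef-* index pattern in Kibana
  }
}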

You can find more details about using Kafka with the Elastic Stack in the blog series "Just Enough Kafka for the Elastic Stack" (part 1 and part 2).

[Image: arcsight-3.png, architecture diagram: the ArcSight connector publishing to Kafka, with Logstash consuming and indexing into Elasticsearch]

Set up the Elastic Stack and Kafka with Docker

In this example, we use Docker to simplify the installation and configuration of the Elastic Stack and Kafka components.

  1. Ensure that Docker and Docker Compose are installed.
  2. Download an example from here to help you install the full Elastic Stack with the X-Pack plugin, as well as Kafka.
  3. Set the environment variable KAFKA_ADVERTISED_HOST_NAME in the docker-compose.yml file (see the sketch after this list).
  4. Then run from the command line:
$ docker-compose up
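For step 3, the Kafka service in the downloaded docker-compose.yml should advertise an address that the ArcSight connector can reach. The excerpt below is illustrative only; the image name and IP address are placeholders, so keep whatever the example file uses:

kafka:
  image: wurstmeister/kafka                    # placeholder; keep the image from the example
  ports:
    - "9092:9092"
  environment:
    KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10   # an address reachable by the connector
    KAFKA_ADVERTISED_PORT: 9092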

Add a Kafka destination to ArcSight Connectors

  • Run the command <installdir>\current\bin\arcsight agentsetup
  • Select ‘Yes’ to start the ‘wizard mode’
  • Select ‘I want to add/remove/modify ArcSight Manager destinations’
  • Select ‘add new destination’
  • Select ‘Event Broker (CEF Kafka)’
  • Enter the Kafka server and port that you set in docker-compose.yml via the environment variables KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_PORT
  • Set the topic name to cef
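Once the destination is saved, you can confirm that CEF events are arriving on the topic with Kafka's console consumer (the script path and broker address depend on your Kafka installation):

# Read the cef topic from the beginning to confirm the connector is publishing
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic cef --from-beginning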

Visualize your data

  • Once data is indexed into Elasticsearch, point your web browser to http://localhost:5601/ to open Kibana. You should be prompted to log in; you can use the built-in 'elastic' user with the password 'changeme'. NOTE: These are the same credentials used in the logstash.conf downloaded above. If you change them, make sure you update your Logstash configuration and restart the pipeline.
  • Configure the index pattern cef-*, select @timestamp as the time field, and check the data in the Discover tab (or use the quick check shown after this list)
  • Import the dashboard, provided here.
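As a quick sanity check from the command line, you can count the indexed documents before building visualizations (this uses the same built-in credentials as above):

# Count documents in the cef-* indices
curl -u elastic:changeme "http://localhost:9200/cef-*/_count?pretty"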

The visualizations and the dashboard can be imported into Kibana through the Management > Saved Objects tab.


In our next post, we will look at X-Pack alerting features to detect and alert on more complex patterns within the same dataset.

Like this topic? Dive deeper at Elastic{ON}, our user conference, and discuss it with our engineering team face to face. We hope to see you there.