Security Analytics


There's no such thing as a typical threat, and there's no typical defender either. The Elastic Stack gives you an edge against constantly evolving attack techniques. Here's how.

Protect your data from all kinds of threats in real time with Elastic security features. Try it now

New: X-Pack security also addresses the GDPR security framework. Learn more.

Speed above all else

With an attack, the question is not "if" but "when." You want attackers out of your environment as quickly as possible.

Elastic is engineered to ingest and index data fast. Your data is accessible within seconds, ready for ad hoc queries and real-time visualizations.

Don't throw data out. Throw it all in.

The key to uncovering a threat can be anywhere, so complete, real-time visibility across your entire system is essential.

Petabytes of data from firewalls, proxies, and intrusion detection systems are no sweat for Elasticsearch.

[Animated demo: side-by-side search, "Other" vs. "Elastic"]

When did they get in? When and where did they move? What else was attacked?

Answering questions like these takes more than a seven-day lookback; resolving a threat can easily take 100 days or more. Elastic doesn't just make it possible to search data over long time spans; it makes it practical, easy, and fast.

Start a homegrown solution from scratch like Slack, or complement an existing SIEM investment like USAA. Elastic is flexible. If you don't see what you need, build it yourself or lean on the community. You'll never be boxed in. That's what open source is for.

Start a trial

Start small, or go big. It's up to you. Install the latest release and start discovering.
In Elasticsearch install directory:
In Kibana install directory:
In Filebeat install directory:
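
As a rough guide, these three steps typically come down to the commands below. This is a minimal sketch assuming default archive (.tar.gz/.zip) installs of Elasticsearch, Kibana, and Filebeat, all running on localhost:

# Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# Kibana install directory: start Kibana
bin/kibana

# Filebeat install directory: enable the system module, load the bundled
# Kibana dashboards, then start shipping logs
./filebeat modules enable system
./filebeat setup
./filebeat -e
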
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.

Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Auditbeat install directory:
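
A minimal command sketch for these steps, assuming default archive installs and the modules enabled in the default auditbeat.yml:

# Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# Kibana install directory: start Kibana
bin/kibana

# Auditbeat install directory: load the bundled Kibana dashboards, then start
# collecting audit data (run with sufficient privileges, e.g. sudo on Linux)
./auditbeat setup
sudo ./auditbeat -e
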
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes the default operating system configuration. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
  - name: arcsight
    var.inputs: smartconnector
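
A minimal sketch of the accompanying commands, assuming default archive installs; the --setup flag loads the module's Kibana dashboards on the first run:

# Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# Kibana install directory: start Kibana
bin/kibana

# Logstash install directory: after adding the module settings to logstash.yml,
# run Logstash with the ArcSight module and load its dashboards
bin/logstash --modules arcsight --setup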

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Packetbeat install directory:
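
A minimal command sketch for these steps, assuming default archive installs; packet capture generally requires elevated privileges:

# Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# Kibana install directory: start Kibana
bin/kibana

# Packetbeat install directory: load the bundled Kibana dashboards, then start
# capturing traffic
./packetbeat setup
sudo ./packetbeat -e
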
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set Netflow module details

modules:
  - name: netflow
    var.input.udp.port: <netflow_port>
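
A minimal sketch of the accompanying commands, assuming default archive installs; the --setup flag loads the module's Kibana dashboards on the first run:

# Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# Kibana install directory: start Kibana
bin/kibana

# Logstash install directory: after adding the module settings to logstash.yml,
# run Logstash with the Netflow module and load its dashboards
bin/logstash --modules netflow --setup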

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing Netflow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the Netflow solution; however, you can override the defaults. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave cluster size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id> and the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Filebeat

In Filebeat install directory:

Paste in the <password> for the elastic user when prompted
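
A sketch of what this step can look like, assuming the Filebeat keystore is used to hold the elastic password (ES_PWD is an illustrative key name) and that cloud.id and cloud.auth are set in filebeat.yml:

# Filebeat install directory: store the elastic password in the keystore
# (paste <password> when prompted)
./filebeat keystore create
./filebeat keystore add ES_PWD

# In filebeat.yml, point Filebeat at the Elastic Cloud cluster:
#   cloud.id: "<cloud.id>"
#   cloud.auth: "elastic:${ES_PWD}"

# Enable the system module, load the Kibana dashboards, and start shipping logs
./filebeat modules enable system
./filebeat setup
./filebeat -e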

Open browser @
<kibana_url> (login: elastic/<password>)
Open dashboard:
"[Filebeat System] SSH login attempts" or "[Filebeat System] Sudo commands"
What just happened?
Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.
Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave cluster size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id> and the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Auditbeat

In Auditbeat install directory:

Paste in the <password> for the elastic user when prompted
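
A sketch of this step, assuming the Auditbeat keystore holds the elastic password (ES_PWD is an illustrative key name) and that cloud.id and cloud.auth are set in auditbeat.yml:

# Auditbeat install directory: store the elastic password in the keystore
# (paste <password> when prompted)
./auditbeat keystore create
./auditbeat keystore add ES_PWD

# In auditbeat.yml, point Auditbeat at the Elastic Cloud cluster:
#   cloud.id: "<cloud.id>"
#   cloud.auth: "elastic:${ES_PWD}"

# Load the Kibana dashboards, then start collecting audit data
./auditbeat setup
sudo ./auditbeat -e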

Open browser @
<kibana_url> (login: elastic/<password>)
Open dashboard:
"[Auditbeat File] File Integrity"
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes the default operating system configuration. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave cluster size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id> and the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Logstash

In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
  - name: arcsight
    var.inputs: smartconnector
    var.elasticsearch.username: "elastic"
    var.elasticsearch.password: "<password>"

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
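
A minimal sketch of running the module once logstash.yml is updated. Note that pointing the module at an Elastic Cloud cluster also involves the module's Elasticsearch and Kibana endpoint settings (for example var.elasticsearch.hosts and var.kibana.host), and the --setup flag loads the dashboards on the first run:

# Logstash install directory: run Logstash with the ArcSight module and load
# its Kibana dashboards
bin/logstash --modules arcsight --setup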

Open browser @
<kibana_url> (login: elastic/<password>)
Open dashboard:
"[ArcSight] Network Overview Dashboard"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave cluster size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id> and the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Packetbeat

In Packetbeat install directory:

Paste in the <password> for the elastic user when prompted
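
A sketch of this step, assuming the Packetbeat keystore holds the elastic password (ES_PWD is an illustrative key name) and that cloud.id and cloud.auth are set in packetbeat.yml:

# Packetbeat install directory: store the elastic password in the keystore
# (paste <password> when prompted)
./packetbeat keystore create
./packetbeat keystore add ES_PWD

# In packetbeat.yml, point Packetbeat at the Elastic Cloud cluster:
#   cloud.id: "<cloud.id>"
#   cloud.auth: "elastic:${ES_PWD}"

# Load the Kibana dashboards, then start capturing traffic
./packetbeat setup
sudo ./packetbeat -e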

Open browser @
<kibana_url> (login: elastic/<password>)
Open dashboard:
"[Packetbeat] DNS Tunneling"
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave cluster size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id> and the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Logstash

In Logstash install directory:

Paste in the <password> for the elastic user when prompted

Modify logstash.yml to set Netflow module details

cloud.id: <cloud.id>
cloud.auth: elastic:${ES_PWD}
modules:
  - name: netflow
    var.input.udp.port: <netflow_port>
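
A minimal sketch of the accompanying commands, assuming the Logstash keystore holds the elastic password referenced as ${ES_PWD} above, and that the --setup flag is used on the first run to load the dashboards:

# Logstash install directory: store the elastic password in the Logstash
# keystore (paste <password> when prompted)
bin/logstash-keystore create
bin/logstash-keystore add ES_PWD

# After updating logstash.yml, run Logstash with the Netflow module and load
# its Kibana dashboards
bin/logstash --modules netflow --setup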

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.

Open browser @
<kibana_url> (login: elastic/<password>)
Open dashboard:
"Netflow: Overview"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing Netflow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the Netflow solution; however, you can override the defaults. See the documentation for more details.

How do you keep billions of signatures up to date? Can you pinpoint meaningful connections among millions of IP addresses? Add Machine Learning and Graph, and you can quickly detect cyber threats, expected or unexpected, amid all the noise.

You're in good company

USAA started with just a few Elasticsearch nodes in its security lab. Today it runs a production deployment that augments its ArcSight SIEM. USAA's frontline security analysts used to wait minutes, even hours, for their log management appliance to return query results for threat analysis. With Elastic, they don't have to.

These are just a handful of examples of managing security events with Elastic. Check out other user stories as well.

Combine metrics, infrastructure logs, and mountains of documents with your security events in the Elastic Stack to broaden your analysis, minimize risk, and simplify your architecture.

Logging

Fast, scalable log analysis in real time.

Learn more

Metrics

Collect and analyze a wide range of data, from CPU to memory.

Learn more

Swiftype Site Search

Dramatically improve your website's search experience.

Learn more

APM

Collect and analyze performance metrics.

Learn more

App Search

Search documents, geo data, and more.

Learn more