Security Analytics


There is no such thing as a routine threat, and no routine way to defend against one. The Elastic Stack gives you the means to stay ahead of constantly evolving attack techniques. Here's how.

Protect your data from all manner of threats in real time with Elastic security. Try it now.

New: See how Elastic security features support your GDPR compliance efforts. Learn more.

Above all, be fast

With an attack, the question is not if, but when. The sooner you can root out the attacker, the better.

The Elastic Stack is designed for speed: data is indexed as soon as it is ingested and is accessible within seconds, enabling ad hoc queries and real-time visualizations.

Don't throw data out. Throw it in.

The key to detecting a threat can turn up anywhere, so complete, real-time visibility across your entire system is essential.

Petabytes of data from firewalls, proxies, and intrusion detection systems are no sweat for Elasticsearch.


Keep data online longer for investigations

When did they get in? Where did they go, and when? What else did they attack?

A mere seven days of logs cannot answer questions like these; resolving a threat takes around 100 days on average. The Elastic Stack not only makes it possible to search data spanning long periods, it makes doing so practical, easy, and fast.

Build something new, or strengthen your existing SIEM

You can start a homegrown solution from scratch, as Slack did, or complement an existing SIEM investment, as USAA did. The Elastic Stack is flexible: build first even if you are not sure exactly what you need, or lean on the community. You will never hit a dead end. That's the beauty of open source.

Start a trial

Start small. Go big. It's up to you. Install the latest version and see what you discover.
  • Register, if you do not already have an account. A free 14-day trial is available.
  • Log into the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment, and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Filebeat

Open a terminal (varies depending on your client OS) and, in the Filebeat install directory, type:
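The command snippet itself did not survive in this copy of the page. Judging by the prompts described below, it stored the credentials in the Filebeat keystore and then ran the system module; a sketch (the key names ES_PWD and CLOUD_ID are illustrative, and exact flags vary by version):

```shell
# Create a keystore and store the credentials; each "add" prompts for a value
./filebeat keystore create
./filebeat keystore add ES_PWD      # paste <password> when prompted
./filebeat keystore add CLOUD_ID    # paste <cloud.id> when prompted

# Enable the system module, load the Kibana dashboards, then start shipping logs
./filebeat modules enable system
./filebeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
./filebeat -e -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
```

The single quotes matter: `${CLOUD_ID}` and `${ES_PWD}` are resolved from the Beats keystore at config-load time, not by the shell.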

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Filebeat System] SSH login attempts" or "[Filebeat System] Sudo commands"
What just happened?
Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.
Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

  • Register, if you do not already have an account. A free 14-day trial is available.
  • Log into the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment, and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Auditbeat

Open a terminal (varies depending on your client OS) and, in the Auditbeat install directory, type:
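The command snippet is missing here as well; a sketch of the likely sequence, using the Auditbeat keystore (the key names ES_PWD and CLOUD_ID are illustrative):

```shell
# Store the credentials in the Auditbeat keystore; each "add" prompts for a value
./auditbeat keystore create
./auditbeat keystore add ES_PWD      # paste <password> when prompted
./auditbeat keystore add CLOUD_ID    # paste <cloud.id> when prompted

# Load the Kibana dashboards, then start collecting audit data
# (auditing may require elevated privileges on Linux)
./auditbeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
./auditbeat -e -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
```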

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Auditbeat File] File Integrity"
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes the default operating system configuration. See the documentation for more details.

  • Register, if you do not already have an account. A free 14-day trial is available.
  • Log into the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment, and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Logstash

Open a terminal (varies depending on your client OS) and, in the Logstash install directory, type:

Modify logstash.yml to set ArcSight module details

modules:
  - name: arcsight
    var.inputs: smartconnector
    var.elasticsearch.username: "elastic"
    var.elasticsearch.password: "<password>"

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
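The start command is missing from this copy. With logstash.yml updated (plus your cluster connection settings such as cloud.id and cloud.auth, if not already present), Logstash is typically launched with the module enabled; `--setup` loads the index pattern and dashboards on the first run:

```shell
# Run from the Logstash install directory
bin/logstash --modules arcsight --setup
```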

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[ArcSight] Network Overview Dashboard"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

  • Register, if you do not already have an account. A free 14-day trial is available.
  • Log into the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment, and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Packetbeat

Open a terminal (varies depending on your client OS) and, in the Packetbeat install directory, type:
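The command snippet is missing here too; a sketch of the likely sequence, using the Packetbeat keystore (the key names are illustrative):

```shell
# Store the credentials in the Packetbeat keystore; each "add" prompts for a value
./packetbeat keystore create
./packetbeat keystore add ES_PWD      # paste <password> when prompted
./packetbeat keystore add CLOUD_ID    # paste <cloud.id> when prompted

# Load the Kibana dashboards, then start capturing traffic
# (packet capture may require elevated privileges)
./packetbeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
./packetbeat -e -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
```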

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Packetbeat] DNS Tunneling"
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as the network ports in use. See the documentation for more details on how to further configure your deployment.

  • Register, if you do not already have an account. A free 14-day trial is available.
  • Log into the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment, and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Logstash

Open a terminal (varies depending on your client OS) and, in the Logstash install directory, type:
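The snippet is missing here; given the prompt described below and the `${ES_PWD}` reference in the configuration, it most likely created a Logstash keystore:

```shell
# Create the Logstash keystore and store the elastic user's password;
# "add" prompts for the value
bin/logstash-keystore create
bin/logstash-keystore add ES_PWD    # paste <password> when prompted
```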

Paste in the <password> for the elastic user when prompted

Modify logstash.yml to set Netflow module details

cloud.id: <cloud.id>
cloud.auth: elastic:${ES_PWD}
modules:
  - name: netflow
    var.input.udp.port: <netflow_port>

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
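The start command is also missing; once logstash.yml is configured, Logstash is typically launched with the module enabled, with `--setup` loading the index pattern and dashboards on the first run:

```shell
# Run from the Logstash install directory
bin/logstash --modules netflow --setup
```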

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"Netflow: Overview"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Filebeat install directory:
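The copy-paste snippets for these steps did not survive extraction; a plausible reconstruction for a default self-managed setup (tarball installs, Elasticsearch and Kibana on localhost, one terminal per process):

```shell
# In the Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# In the Kibana install directory: start Kibana
bin/kibana

# In the Filebeat install directory: enable the system module,
# load the Kibana dashboards, and start shipping logs
./filebeat modules enable system
./filebeat setup
./filebeat -e
```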
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.

Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Auditbeat install directory:
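The snippets here are likewise missing; a plausible reconstruction for a default self-managed setup (tarball installs, one terminal per process):

```shell
# In the Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# In the Kibana install directory: start Kibana
bin/kibana

# In the Auditbeat install directory: load the Kibana dashboards,
# then start collecting audit data (may require elevated privileges)
./auditbeat setup
./auditbeat -e
```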
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes the default operating system configuration. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
  - name: arcsight
    var.inputs: smartconnector

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
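The commands themselves are missing from this copy; a plausible reconstruction for a default self-managed setup (tarball installs, one terminal per process):

```shell
# In the Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# In the Kibana install directory: start Kibana
bin/kibana

# In the Logstash install directory, after editing logstash.yml:
# start Logstash with the ArcSight module (--setup loads the dashboards)
bin/logstash --modules arcsight --setup
```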

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Packetbeat install directory:
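The snippets are missing here as well; a plausible reconstruction for a default self-managed setup (tarball installs, one terminal per process):

```shell
# In the Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# In the Kibana install directory: start Kibana
bin/kibana

# In the Packetbeat install directory: load the Kibana dashboards,
# then start capturing traffic (capture may require elevated privileges)
./packetbeat setup
./packetbeat -e
```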
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as the network ports in use. See the documentation for more details on how to further configure your deployment.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set Netflow module details

modules:
  - name: netflow
    var.input.udp.port: <netflow_port>

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
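The commands are missing from this copy; a plausible reconstruction for a default self-managed setup (tarball installs, one terminal per process):

```shell
# In the Elasticsearch install directory: start Elasticsearch
bin/elasticsearch

# In the Kibana install directory: start Kibana
bin/kibana

# In the Logstash install directory, after editing logstash.yml:
# start Logstash with the NetFlow module (--setup loads the dashboards)
bin/logstash --modules netflow --setup
```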

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

Automate anomaly detection and investigate suspicious connections

How do you keep billions of signature definitions up to date? Or pick out the meaningful connections among millions of IP addresses? Add Machine Learning and Graph, and you can quickly detect cyber threats amid all the noise, whether they are expected or not.

You're in good company

USAA started with a handful of Elasticsearch nodes in its internal security lab. Today it runs a full production deployment that augments its ArcSight SIEM. USAA's security analysts used to query a log management appliance and wait minutes, sometimes hours, for results when analyzing threats. Since adopting Elasticsearch, that wait is gone.

These are just a few examples of managing security events with the Elastic Stack. Take a look at the other user stories as well.

Security analytics that goes beyond event detection

Put metrics, infrastructure logs, and huge volumes of text documents into the Elastic Stack alongside your security events. You'll broaden your analysis, minimize risk, and simplify your architecture.

Log Analytics

Fast, scalable log analysis in real time.

Learn more

Metrics Analytics

Collect and analyze a wide range of data, from CPU to memory.

Learn more

Swiftype Site Search

Dramatically improve your website's search experience.

Learn more

APM

Collect and analyze performance metrics.

Learn more

App Search

Search documents, geo data, and more.

Learn more