Security Analytics


Threats don't play by a template. Neither should you. The Elastic Stack gives you the edge you need to keep up with the attack vectors of today and tomorrow. Here's how.

Experience Elastic's security analytics capabilities. Give it a try.

New: Learn how X-Pack security features can help you accelerate GDPR compliance. Learn more.

First, you have to be fast

An attack isn't a matter of if, but when. So ask yourself: how long do you want an adversary to dwell in your systems?

Elastic is built for speed, indexing data as it is ingested. That shrinks time-to-information down to seconds and makes ad hoc queries and real-time visual analysis simple.

Don't throw your data away like trash. Feed it all to Elasticsearch.

The key to detecting a threat could come from anywhere, so full, real-time visibility into what is happening across your systems is critical.

Elasticsearch eats petabytes of data for breakfast, from firewalls, web proxies, and detection systems to any source you like, so don't hold back.


Keep data online longer to make investigations easier

When did they get in? Where did they go? What did they do? What else was affected?

Retaining data for only seven days doesn't leave room for a proper historical review when answering these questions. On average, a threat can dwell for 100 days before it is resolved. Elastic doesn't just make searching long-term historical data possible; it makes it practical, simple, and fast.

Build something new, or augment your SIEM

Start with a blank slate to build a homegrown security solution, as Slack did, or augment an existing SIEM investment, as USAA did. Elastic is flexible that way. If you don't see what you need, build it yourself or tap into the open source community. You won't get locked in and left behind. It takes open source to win.

Try it now

Start small and grow from there. It's your call. Spin up a fresh install and explore features you haven't discovered before.
In Elasticsearch install directory:
Once Elasticsearch starts, in Elasticsearch install directory (separate window):

Note the password for elastic user as <es_pw>

Note the password for kibana user as <kibana_pw>

In Kibana install directory:

Modify config/kibana.yml to set credentials for Elasticsearch

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_pw>"
			
In Filebeat install directory:

Modify filebeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<es_pw>"
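For context, a fuller filebeat.yml might look like the sketch below. Only the credentials lines come from this guide; the hosts, Kibana address, and system-module settings are assumptions for a default local install.

```yaml
# filebeat.yml - minimal sketch for a local install (assumed defaults)
filebeat.modules:
  - module: system        # parses syslog and auth logs from default locations
    syslog:
      enabled: true
    auth:
      enabled: true

setup.kibana:
  host: "localhost:5601"  # lets Filebeat setup load the bundled dashboards

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<es_pw>"
```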
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.

Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

In Elasticsearch install directory:
Once Elasticsearch starts, in Elasticsearch install directory (separate window):

Note the password for elastic user as <es_pw>

Note the password for kibana user as <kibana_pw>

In Kibana install directory:

Modify config/kibana.yml to set credentials for Elasticsearch

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_pw>"
			
In Auditbeat install directory:

Modify auditbeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<es_pw>"
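An auditbeat.yml sketch with the file_integrity module enabled; the watched paths, hosts, and Kibana address are assumptions for a default local install, not part of this guide.

```yaml
# auditbeat.yml - minimal sketch (watched paths are assumptions)
auditbeat.modules:
  - module: file_integrity
    paths:
      - /bin
      - /usr/bin
      - /etc

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<es_pw>"
```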
Open browser @
http://localhost:5601 (login: elastic/<es_pw>)
What just happened?

Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.

Didn't work for you?

The Auditbeat module assumes a default operating system configuration. See the documentation for more details.

In Elasticsearch install directory:
Once Elasticsearch starts, in Elasticsearch install directory (separate window):

Note the password for elastic user as <es_pw>

Note the password for kibana user as <kibana_pw>

In Kibana install directory:

Modify config/kibana.yml to set credentials for Elasticsearch

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_pw>"
			
In Logstash install directory:

Modify logstash.yml to set NetFlow module details

modules:
- name: netflow
  var.input.udp.port: <netflow_port>
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "<es_pw>"
			

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
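Putting the module settings together, logstash.yml might look like the sketch below. The hosts and Kibana entries are assumptions for a default local install; 2055 is the NetFlow default port.

```yaml
# logstash.yml - NetFlow module sketch (hosts/Kibana entries are assumptions)
modules:
  - name: netflow
    var.input.udp.port: 2055
    var.elasticsearch.hosts: "localhost:9200"
    var.elasticsearch.username: "elastic"
    var.elasticsearch.password: "<es_pw>"
    var.kibana.host: "localhost:5601"
    var.kibana.username: "elastic"
    var.kibana.password: "<es_pw>"
```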

Open browser @
http://localhost:5601 (login: elastic/<es_pw>)
Open dashboard:
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
Once Elasticsearch starts, in Elasticsearch install directory (separate window):

Note the password for elastic user as <es_pw>

Note the password for kibana user as <kibana_pw>

In Kibana install directory:

Modify config/kibana.yml to set credentials for Elasticsearch

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_pw>"
            
In Packetbeat install directory:

Modify packetbeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<es_pw>"
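A packetbeat.yml sketch capturing DNS traffic; the sniffing device, port list, and Kibana address are assumptions for a default local install.

```yaml
# packetbeat.yml - minimal sketch for DNS capture (device/ports are assumptions)
packetbeat.interfaces.device: any   # "any" works on Linux; name a device elsewhere

packetbeat.protocols:
  - type: dns
    ports: [53]
    include_authorities: true       # record authority sections in DNS responses

setup.kibana:
  host: "localhost:5601"

output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<es_pw>"
```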
Open browser @
http://localhost:5601 (login: elastic/<es_pw>)
Open dashboard:
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

In Elasticsearch install directory:
Once Elasticsearch starts, in Elasticsearch install directory (separate window):

Note the password for elastic user as <es_pw>

Note the password for kibana user as <kibana_pw>

In Kibana install directory:

Modify config/kibana.yml to set credentials for Elasticsearch

elasticsearch.username: "kibana"
elasticsearch.password: "<kibana_pw>"
			
In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
- name: arcsight
  var.inputs: smartconnector
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "<es_pw>"
			

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
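Combined, the logstash.yml module block might look like the sketch below; the var.elasticsearch.hosts entry is an assumption for a default local install.

```yaml
# logstash.yml - ArcSight module sketch (hosts entry is an assumption)
modules:
  - name: arcsight
    var.inputs: smartconnector      # listens for CEF over TCP (default port 5000)
    var.elasticsearch.hosts: "localhost:9200"
    var.elasticsearch.username: "elastic"
    var.elasticsearch.password: "<es_pw>"
```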

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id>
  • Note the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Filebeat

In Filebeat install directory:

Modify filebeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<password>"
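Since the Cloud ID was noted above, Beats can also point at the cluster with the dedicated cloud settings instead of spelling out the Elasticsearch output; a sketch, assuming a Beats release that supports cloud.id (6.0+):

```yaml
# filebeat.yml - Elastic Cloud sketch using the Cloud ID noted earlier
cloud.id: "<cloud.id>"
cloud.auth: "elastic:<password>"
```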
				
Open browser @
http://<kibana_url>:5601 (login: elastic/<password>)
Open dashboard:
"[Filebeat System] SSH login attempts" or "[Filebeat System] Sudo commands"
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.

Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id>
  • Note the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Auditbeat

In Auditbeat install directory:

Modify auditbeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<password>"
				
Open browser @
http://<kibana_url>:5601 (login: elastic/<password>)
Open dashboard:
"[Auditbeat File] File Integrity"
What just happened?

Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.

Didn't work for you?

The Auditbeat module assumes a default operating system configuration. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id>
  • Note the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Logstash

In Logstash install directory:

Modify logstash.yml to set Netflow module details

modules:
- name: netflow
  var.input.udp.port: <netflow_port>
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "<password>"
				

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
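logstash.yml also accepts the cloud settings for modules, so the sketch below could replace the per-module Elasticsearch credentials; this assumes a Logstash release with cloud.id support for modules.

```yaml
# logstash.yml - NetFlow module on Elastic Cloud (sketch)
cloud.id: "<cloud.id>"
cloud.auth: "elastic:<password>"

modules:
  - name: netflow
    var.input.udp.port: <netflow_port>
```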

Open browser @
http://<kibana_url>:5601 (login: elastic/<password>)
Open dashboard:
"Netflow: Overview"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id>
  • Note the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Packetbeat

In Packetbeat install directory:

Modify packetbeat.yml to set credentials for Elasticsearch output

output.elasticsearch:
  username: "elastic"
  password: "<password>"
				
Open browser @
http://<kibana_url>:5601 (login: elastic/<password>)
Open dashboard:
"[Packetbeat] DNS Tunneling"
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions about defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

  • Register, if you do not already have an account
  • Log into the Elastic Cloud console
To create a cluster, in Elastic Cloud console:
  • Select Create Cluster, leave size slider at 4 GB RAM, and click Create
  • Note the Cloud ID as <cloud.id>
  • Note the cluster Password as <password>
  • In Overview >> Endpoints section note Kibana URL as <kibana_url>
  • Wait until cluster plan completes

Download and unpack Logstash

In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
- name: arcsight
  var.inputs: smartconnector
  var.elasticsearch.username: "elastic"
  var.elasticsearch.password: "<password>"
                

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.

Open browser @
http://<kibana_url>:5601 (login: elastic/<password>)
Open dashboard:
"[ArcSight] Network Overview Dashboard"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

Automate anomaly detection, explore curious connections

How do you keep up with billions of signatures? Or make meaningful connections across millions of IP addresses? Add machine learning and graph analytics to your Elastic equation to quickly detect cyber threats through the noise, whether they're the threats you expected or the ones you didn't.

Many companies face the same struggle

USAA started with a few Elasticsearch nodes in its security lab. Today, it runs a full production deployment that augments its ArcSight SIEM. Before, USAA first responders would wait minutes (or even hours) for their log management appliance to run the queries behind their threat-hunting analysis. With Elastic, those days are over.

USAA is just one of many companies using Elastic to manage security events. Explore more customer stories.

Security analytics is about more than security events

Got metrics? Infrastructure logs? Text-heavy documents? Centralize them alongside your security events in the Elastic Stack to enrich your analysis, minimize risk, and simplify your architecture.

Log Analysis

Fast, scalable log analytics.

Learn more

Metrics Analysis

Crunch the numbers: CPU, memory, and more.

Learn more

Site Search

Easily create a great search experience for your site.

Learn more

APM

Get insight into your applications' performance.

Learn more

App Search

Search documents, geo data, and more.

Learn more