Security Analytics


Threats don't follow templates. Neither should you. The Elastic Stack gives you the edge you need to keep up with the attack vectors of today and tomorrow. Here's how.

Experience Elastic's security analytics capabilities. Give it a try.

New: A detailed look at how Elastic security features can accelerate your path to GDPR compliance. Learn more.

First, you have to be fast

Attacks aren't a question of if, but when. So ask yourself: how long do you want an adversary inside your systems?

Elastic is built for speed, indexing data as it is ingested. That gets you to answers in seconds and makes ad hoc queries and real-time visualizations simple.

Don't toss data out like trash. Throw it all at Elasticsearch.

The key to detecting a threat can come from anywhere, so complete, real-time visibility into what is happening across your systems is critical.

Elasticsearch eats petabytes of data for breakfast, whether it comes from firewalls, web proxies, detection systems, or any other source you like, so don't hold back.


Keep data online longer to support investigations

When did they get in? Where did they go? What did they do? What else was affected?

Holding only seven days of data doesn't give you the historical lookback needed to answer these questions, and threats commonly dwell for 100 days or more before they are resolved. Elastic makes searching long-term historical data not just possible, but practical, simple, and fast.

Build something new, or augment your SIEM

Start with a blank slate and build a homegrown security solution, as Slack did, or augment an existing SIEM investment, as USAA did. Elastic is flexible: if you don't see what you need, build it or lean on the open source community. You won't get boxed in. Open wins.

Try it now

Start small and grow from there. It's your call. Spin up a fresh install and explore features you haven't discovered yet.
  • Register if you do not already have an account. A free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Filebeat

Open a terminal (varies depending on your client OS) and, in the Filebeat install directory, type:
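For example, a minimal command sequence here might use the Filebeat keystore to hold the Elastic Cloud credentials (an assumption; the CLOUD_ID and ES_PWD key names are illustrative):

./filebeat keystore create
./filebeat keystore add ES_PWD      # prompts for the elastic user's <password>
./filebeat keystore add CLOUD_ID    # prompts for the cluster's <cloud.id>

# Reference the stored values in filebeat.yml:
#   cloud.id: "${CLOUD_ID}"
#   cloud.auth: "elastic:${ES_PWD}"

./filebeat modules enable system    # collect system logs (syslog, auth) via the system module
./filebeat setup                    # load the index template and Kibana dashboards
./filebeat -e                       # start Filebeat, logging to stderr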

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Filebeat System] SSH login attempts" or "[Filebeat System] Sudo commands"
What just happened?
Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.
Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

  • Register if you do not already have an account. A free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Auditbeat

Open a terminal (varies depending on your client OS) and, in the Auditbeat install directory, type:
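A similar sketch for Auditbeat, again assuming the keystore holds the credentials (key names are illustrative):

./auditbeat keystore create
./auditbeat keystore add ES_PWD      # prompts for the elastic user's <password>
./auditbeat keystore add CLOUD_ID    # prompts for the cluster's <cloud.id>

# Reference the stored values in auditbeat.yml:
#   cloud.id: "${CLOUD_ID}"
#   cloud.auth: "elastic:${ES_PWD}"

./auditbeat setup    # load the index template and Kibana dashboards
./auditbeat -e       # start Auditbeat with the modules enabled in auditbeat.yml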

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Auditbeat File] File Integrity"
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes a default operating system configuration. See the documentation for more details.

  • Register if you do not already have an account. A free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Logstash

Open a terminal (varies depending on your client OS), change to the Logstash install directory, and modify logstash.yml to set the ArcSight module details:

modules:
  - name: arcsight
    var.inputs: smartconnector
    var.elasticsearch.username: "elastic"
    var.elasticsearch.password: "<password>"

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
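Then start Logstash with the module. A minimal sketch, assuming the module definition in logstash.yml above (the --setup flag loads the module's index pattern and dashboards on the first run):

bin/logstash --setup    # first run: load ArcSight dashboards, then start receiving CEF events
bin/logstash            # subsequent runs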

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[ArcSight] Network Overview Dashboard"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

  • Register if you do not already have an account. A free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Packetbeat

Open a terminal (varies depending on your client OS) and, in the Packetbeat install directory, type:
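A sketch of the Packetbeat commands, under the same keystore assumption (key names are illustrative):

./packetbeat keystore create
./packetbeat keystore add ES_PWD      # prompts for the elastic user's <password>
./packetbeat keystore add CLOUD_ID    # prompts for the cluster's <cloud.id>

# Reference the stored values in packetbeat.yml:
#   cloud.id: "${CLOUD_ID}"
#   cloud.auth: "elastic:${ES_PWD}"

./packetbeat setup      # load the index template and Kibana dashboards
sudo ./packetbeat -e    # start capturing traffic (root is typically required for sniffing)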

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"[Packetbeat] DNS Tunneling"
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions around defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

  • Register if you do not already have an account. A free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great to get started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Logstash

Open a terminal (varies depending on your client OS) and, in the Logstash install directory, type:
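For example, the Logstash keystore could hold the password referenced as ${ES_PWD} in the settings below (an assumption; the key name is illustrative):

bin/logstash-keystore create
bin/logstash-keystore add ES_PWD    # prompts for a value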

Paste in the <password> for the elastic user when prompted

Modify logstash.yml to set Netflow module details

cloud.id: <cloud.id>
cloud.auth: elastic:${ES_PWD}
modules:
  - name: netflow
    var.input.udp.port: <netflow_port>

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
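Then start Logstash. A minimal sketch, assuming the module definition in logstash.yml above (the --setup flag loads the module's index pattern and dashboards on the first run):

bin/logstash --setup    # first run: load NetFlow dashboards, then start collecting flows
bin/logstash            # subsequent runs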

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open dashboard:
"Netflow: Overview"
What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Filebeat install directory:
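A minimal sketch of the commands for these three steps, assuming default archive (tar.gz/zip) installs; on Windows, use the .bat/.exe equivalents:

# In the Elasticsearch install directory:
bin/elasticsearch

# In the Kibana install directory (separate terminal):
bin/kibana

# In the Filebeat install directory (separate terminal):
./filebeat modules enable system
./filebeat setup      # load the index template and Kibana dashboards
./filebeat -e         # start shipping system logs to the local Elasticsearch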
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system log messages and reporting on SSH login attempts and other authentication events.

Didn't work for you?

The Filebeat module assumes default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Auditbeat install directory:
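A corresponding sketch for Auditbeat, under the same default-install assumption:

# In the Elasticsearch install directory:
bin/elasticsearch

# In the Kibana install directory (separate terminal):
bin/kibana

# In the Auditbeat install directory (separate terminal):
./auditbeat setup     # load the index template and Kibana dashboards
./auditbeat -e        # start auditing with the modules enabled in auditbeat.yml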
What just happened?
Auditbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing the latest system audit information.
Didn't work for you?

The Auditbeat module assumes a default operating system configuration. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set ArcSight module details

modules:
  - name: arcsight
    var.inputs: smartconnector

Configure Smart Connectors to send CEF events to Logstash via TCP on default port 5000.
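With logstash.yml configured as above and the Smart Connectors pointed at port 5000, the start-up commands might look like this (a sketch assuming default archive installs and the Logstash modules feature):

# In the Elasticsearch install directory:
bin/elasticsearch

# In the Kibana install directory (separate terminal):
bin/kibana

# In the Logstash install directory (separate terminal); --setup loads the
# ArcSight dashboards on the first run:
bin/logstash --setup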

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing ArcSight events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the ArcSight solution; however, you can override the defaults. See the documentation for more details.

In Elasticsearch install directory:
In Kibana install directory:
In Packetbeat install directory:
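A corresponding sketch for Packetbeat, under the same default-install assumption:

# In the Elasticsearch install directory:
bin/elasticsearch

# In the Kibana install directory (separate terminal):
bin/kibana

# In the Packetbeat install directory (separate terminal):
./packetbeat setup      # load the index template and Kibana dashboards
sudo ./packetbeat -e    # start capturing traffic (root is typically required)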
What just happened?

Packetbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing details of your DNS traffic.

Didn't work for you?

Packetbeat makes a set of assumptions around defaults, such as default network ports. See the documentation for more details on how to further configure your deployment.

In Elasticsearch install directory:
In Kibana install directory:
In Logstash install directory:

Modify logstash.yml to set Netflow module details

modules:
  - name: netflow
    var.input.udp.port: <netflow_port>

Configure NetFlow to export flow events to Logstash via UDP on default port 2055.
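With logstash.yml configured as above and your NetFlow exporters pointed at the chosen <netflow_port> (2055 by default), a sketch of the start-up commands:

# In the Elasticsearch install directory:
bin/elasticsearch

# In the Kibana install directory (separate terminal):
bin/kibana

# In the Logstash install directory (separate terminal); --setup loads the
# NetFlow dashboards on the first run:
bin/logstash --setup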

What just happened?

Logstash created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing NetFlow events.

Didn't work for you?

The Logstash module makes a set of assumptions about the default configuration of the NetFlow solution; however, you can override the defaults. See the documentation for more details.

Automate anomaly detection, explore unusual connections

How do you keep up with billions of signatures? Or draw meaningful connections across millions of IP addresses? Add machine learning and graph analytics to your Elastic equation to quickly detect cyber threats amid the noise, whether they are the threats you expected or the ones you didn't.

Plenty of companies have felt the same pain

USAA started with a few Elasticsearch nodes in its security lab. Today they run a full production deployment that augments their ArcSight SIEM. USAA first responders used to wait minutes (or even hours) for a log management appliance to run the queries behind their threat-hunting analysis. With Elastic, those days are over.

They are just one of many companies using Elastic to manage security events. Explore more customer stories.

Security analytics is about more than just security events

Have metrics? Infrastructure logs? Text-heavy documents? Bring them all together with your security events in the Elastic Stack to enrich your analysis, minimize risk, and simplify your architecture.

Log Analytics

Fast, scalable log analytics.

Learn more

Metrics Analytics

Crunch the numbers: CPU, memory, and more.

Learn more

Site Search

Easily create a great search experience for your site.

Learn more

APM

Get insight into application performance.

Learn more

App Search

Search documents, geo data, and more.

Learn more