Container Monitoring

Your applications and environments are constantly evolving, and so is the Elastic Stack. Monitor, search, and visualize what is happening inside your applications, Docker, and Kubernetes, all from one central place.

Experience container monitoring with Elastic. Try it now.

Consolidate

View your logs and metrics on a single platform

Monitor your applications, keep an eye on Kubernetes metrics and activity, and analyze the performance of your Docker containers. Visualize and search all of it in a single UI built for infrastructure operations. Metrics and logs, living here in harmony.

Deploy

Monitoring made easy with Beats

Start shipping logs and metrics from your applications, Docker containers, and Kubernetes orchestration with Filebeat, Metricbeat, and friends. Setup takes just a minute or two, and then autodiscovery takes over. See this blog post and the GitHub repository for a detailed example of collecting logs and metrics from applications, Docker, and Kubernetes (k8s).


Keep an eye on Kubernetes and the applications running in it.


Stay on top of your applications and Docker infrastructure.


Want to run the Elastic Stack in Docker? We have official containers for that!

Autodiscover

Beats react to the dynamic nature of your platform

The autodiscover features of Metricbeat and Filebeat keep up with the changes in your environment. Using Docker and Kubernetes API hooks, they automate adding modules and log paths and adjust monitoring settings as containers come and go. They then attach metadata, so you always know where your data originated.
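As an illustration, hints-based autodiscover in filebeat.yml can look like the minimal sketch below; the provider type and options are illustrative, see the Filebeat autodiscover documentation for the full set:

# Sketch: append a hints-based autodiscover provider to filebeat.yml
cat >> filebeat.yml <<'EOF'
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true   # start and stop inputs as containers come and go
EOF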

Fine-tune

Visualize your data and get notified about changes

Hit the ground running with the prebuilt Kibana dashboards. Then create custom dashboards and alerts to stay on top of the metrics that matter most. A new container in your system? How are your applications, Docker, and Kubernetes doing? Visualize the change, pinpoint the problem, take control, and get notified. Not a problem.

Get started in minutes

Get your data into the Elastic Stack and keep an eye on your deployment in just a few steps.

Kubernetes logs with Filebeat (Elastic Cloud)

  • Register if you do not already have an account; a free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great for getting started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Filebeat

Open a terminal (this varies depending on your client OS) and, in the Filebeat install directory, type:
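One way to wire this up, assuming a tar.gz/zip install (the keystore key names below are illustrative):

./filebeat keystore create
./filebeat keystore add ES_PWD     # paste the <password> for the elastic user when prompted
./filebeat keystore add CLOUD_ID   # paste the <cloud.id> when prompted
./filebeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'   # loads the index pattern and dashboards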

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

From your machine or wherever you run kubectl:

env:
  - name: ELASTIC_CLOUD_ID
    value: <cloud.id>
  - name: ELASTIC_CLOUD_AUTH
    value: <cloud.auth>
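With the credentials in place, fetching, editing, and deploying the Filebeat DaemonSet manifest looks roughly like this (the manifest URL is illustrative; use the filebeat-kubernetes.yml that matches your Beats version):

curl -L -O https://raw.githubusercontent.com/elastic/beats/master/deploy/kubernetes/filebeat-kubernetes.yml
# edit the ELASTIC_CLOUD_ID / ELASTIC_CLOUD_AUTH values as shown above, then deploy:
kubectl apply -f filebeat-kubernetes.yml
kubectl --namespace=kube-system get pods   # namespace as defined in the manifest; confirm the pods are Running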
				
Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Go to Discover to search your logs

What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start exploring the logs from your apps and services running in Kubernetes.

Didn't work for you?

The Filebeat modules assume default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for more details.

Kubernetes metrics with Metricbeat (Elastic Cloud)

  • Register if you do not already have an account; a free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great for getting started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Metricbeat

Open a terminal (this varies depending on your client OS) and, in the Metricbeat install directory, type:
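The Metricbeat side follows the same pattern as the Filebeat step above (keystore key names are illustrative):

./metricbeat keystore create
./metricbeat keystore add ES_PWD     # paste the <password> for the elastic user when prompted
./metricbeat keystore add CLOUD_ID   # paste the <cloud.id> when prompted
./metricbeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'   # loads the index pattern and dashboards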

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

From your machine or wherever you run kubectl:

env:
  - name: ELASTIC_CLOUD_ID
    value: <cloud.id>
  - name: ELASTIC_CLOUD_AUTH
    value: <cloud.auth>
				

Optionally, you can enable kube-state-metrics for more detail.
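If you want those extra state metrics, one way to deploy kube-state-metrics looks like this (the manifest path is illustrative; see the kube-state-metrics README for the manifests matching your cluster version):

git clone https://github.com/kubernetes/kube-state-metrics.git
kubectl apply -f kube-state-metrics/examples/standard/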

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open the dashboard:
"[Metricbeat Kubernetes] Overview"
What just happened?

Metricbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can monitor your Kubernetes cluster.

Didn't work for you?

Metricbeat modules have defaults and configurations for each system they connect to. See the documentation for supported versions and configuration options.

Docker logs with Filebeat (Elastic Cloud)

  • Register if you do not already have an account; a free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great for getting started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Filebeat

Open a terminal (this varies depending on your client OS) and, in the Filebeat install directory on your Docker host, type:

As a user that has read access to /var/lib/docker/containers (usually root), modify filebeat.yml to send logs enhanced with Docker metadata to Elasticsearch

filebeat.inputs:
  - type: docker
    containers.ids:
      - '*'
    processors:
      - add_docker_metadata: ~

As a user that has read access to /var/lib/docker/containers (usually root), run:
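The run might look like this, assuming the credentials are stored in the Filebeat keystore at the prompts below (key names illustrative); sudo gives Filebeat read access to the Docker log files:

sudo ./filebeat keystore create
sudo ./filebeat keystore add ES_PWD     # paste the <password> for the elastic user when prompted
sudo ./filebeat keystore add CLOUD_ID   # paste the <cloud.id> when prompted
sudo ./filebeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
sudo ./filebeat -e -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'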

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Go to Discover to search logs for your application or service running in Docker
What just happened?

Filebeat created an index pattern in Kibana with defined fields for logs residing in the default directory where Docker puts logs from your applications (/var/lib/docker/containers/*/*.log), and enhanced them with Docker container metadata. You can now look at logs from Docker in one central place in Kibana.

Didn't work for you?

The Filebeat Docker metadata processor can be tuned further for your use case. See the documentation for more information.

Docker metrics with Metricbeat (Elastic Cloud)

  • Register if you do not already have an account; a free 14-day trial is available.
  • Log in to the Elastic Cloud console
To create a cluster, in the Elastic Cloud console:
  • Select Create Deployment and specify the Deployment Name
  • Modify the other deployment options as needed (or not; the defaults are great for getting started)
  • Click Create Deployment
  • Save the Cloud ID and the cluster Password for your records; we will refer to these as <cloud.id> and <password> below
  • Wait until deployment creation completes

Download and unpack Metricbeat

Open a terminal (this varies depending on your client OS) and, in the Metricbeat install directory, type:

Paste in the <password> for the elastic user when prompted

Paste in the <cloud.id> for the cluster when prompted

To modify defaults, edit modules.d/docker.yml.
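A sketch of those terminal steps, assuming the cloud credentials are in the Metricbeat keystore (key names illustrative):

./metricbeat modules enable docker   # renames modules.d/docker.yml.disabled to docker.yml
./metricbeat setup -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'
./metricbeat -e -E 'cloud.id=${CLOUD_ID}' -E 'cloud.auth=elastic:${ES_PWD}'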

Open Kibana from the Kibana section of the Elastic Cloud console (login: elastic/<password>)
Open the dashboard:
"[Metricbeat Docker] Overview"
What just happened?

Metricbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing data statistics, health and status information about your Docker deployment.

Didn't work for you?

Metricbeat modules have defaults and configurations for each system they connect to. See the documentation for supported versions and configuration options.

Kubernetes logs with Filebeat (self-managed)

In the Elasticsearch install directory:
In the Kibana install directory:
In the Filebeat install directory:
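A minimal sketch of these three steps, assuming default archive (tar.gz/zip) installs running on the same machine; commands vary by platform and version:

./bin/elasticsearch   # from the Elasticsearch install directory
./bin/kibana          # from the Kibana install directory
./filebeat setup      # from the Filebeat install directory; loads the index pattern and dashboards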
From your machine or wherever you run kubectl:
  • Download filebeat-kubernetes.yml
  • Edit filebeat-kubernetes.yml and specify the host for your Elasticsearch server (if you are connecting back to your host from Kubernetes running locally, set ELASTICSEARCH_HOST to host.docker.internal):
  - name: ELASTICSEARCH_HOST
    value: host.docker.internal
			
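Then deploy the edited manifest to your cluster, for example:

kubectl apply -f filebeat-kubernetes.yml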
Open Kibana in your browser
Go to Discover to search your logs
What just happened?

Filebeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start exploring the logs from your apps and services running in Kubernetes.

Didn't work for you?

The Filebeat modules assume default log locations, unmodified file formats, and supported versions of the products generating the logs. See the documentation for supported versions and configuration options.

Kubernetes metrics with Metricbeat (self-managed)

In the Elasticsearch install directory:
In the Kibana install directory:
In the Metricbeat install directory:
From your machine or wherever you run kubectl:
  • Download metricbeat-kubernetes.yml
  • Edit metricbeat-kubernetes.yml and specify the host for your Elasticsearch server (if you are connecting back to your host from Kubernetes running locally, set ELASTICSEARCH_HOST to host.docker.internal). There is a DaemonSet and a singleton; edit the HOST for both:
  - name: ELASTICSEARCH_HOST
    value: host.docker.internal
			

Optionally, you can enable kube-state-metrics for more detail.

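With the manifest edited (and kube-state-metrics deployed, if you enabled it), apply it and check that the pods come up, for example:

kubectl apply -f metricbeat-kubernetes.yml
kubectl --namespace=kube-system get pods   # namespace as defined in the manifest; confirm the pods are Running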
What just happened?

Metricbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can monitor your Kubernetes cluster.

Didn't work for you?

Metricbeat modules have defaults and configurations for each system they connect to. See the documentation for supported versions and configuration options.

Docker logs with Filebeat (self-managed)

In the Elasticsearch install directory:
In the Kibana install directory:
In the Filebeat install directory on your Docker host:

As a user that has read access to /var/lib/docker/containers (usually root), modify filebeat.yml to send logs enhanced with Docker metadata to Elasticsearch

filebeat.inputs:
  - type: docker
    containers.ids:
      - '*'
    processors:
      - add_docker_metadata: ~

As a user that has read access to /var/lib/docker/containers (usually root), run:
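For a self-managed cluster, Filebeat ships to localhost:9200 by default unless filebeat.yml says otherwise, so the run is roughly:

sudo ./filebeat setup   # load the index pattern and dashboards into Kibana
sudo ./filebeat -e      # start shipping Docker container logs enhanced with metadata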

Open Kibana in your browser
Go to Discover to search logs for your application or service running in Docker
What just happened?

Filebeat created an index pattern in Kibana with defined fields for logs residing in the default directory where Docker puts logs from your applications (/var/lib/docker/containers/*/*.log), and enhanced them with Docker container metadata. You can now look at logs from Docker in one central place in Kibana.

Didn't work for you?

The Filebeat Docker metadata processor can be tuned further for your use case. See the documentation for more information.

Docker metrics with Metricbeat (self-managed)

In the Elasticsearch install directory:
In the Kibana install directory:
In the Metricbeat install directory:

To modify defaults, edit modules.d/docker.yml.
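A sketch of the Metricbeat steps against a local Elasticsearch (using the defaults in metricbeat.yml):

./metricbeat modules enable docker   # renames modules.d/docker.yml.disabled to docker.yml
./metricbeat setup                   # loads the "[Metricbeat Docker] Overview" dashboard
./metricbeat -e                      # start collecting Docker metrics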

What just happened?

Metricbeat created an index pattern in Kibana with defined fields, searches, visualizations, and dashboards. In a matter of minutes you can start viewing data statistics, health and status information about your Docker deployment.

Didn't work for you?

Metricbeat modules have defaults and configurations for each system they connect to. See the documentation for supported versions and configuration options.

Plenty of companies share the same pain points

Don't just take our word for it

See how eBay collects logs and metrics from applications running in Kubernetes.

Containers are just the beginning

Network data? Infrastructure logs? Text-heavy documents? Enrich your analysis, streamline your workflows, and simplify your architecture by bringing all of that data into the Elastic Stack.

App Search

Search documents, geo data, and more.

Learn more

Security Analytics

Interactive investigation, fast and at scale.

Learn more

Metrics Analytics

Numbers and stats: CPU, memory, and more.

Learn more

Site Search

Easily create a great search experience for your site.

Learn more

APM

Gain deep insight into application performance.

Learn more