Quickstart

With Elastic Cloud on Kubernetes (ECK) you can extend the basic Kubernetes orchestration capabilities to easily deploy, secure, and upgrade your Elasticsearch cluster, and much more.

Eager to get started? This quick guide shows you how to:

  • Deploy ECK in your Kubernetes cluster
  • Deploy an Elasticsearch cluster
  • Deploy a Kibana instance
  • Upgrade your deployment
  • Use persistent storage
  • Check out the samples

Supported versions

  • kubectl 1.11+
  • Kubernetes 1.12+ or OpenShift 3.11+
  • Elastic Stack: 6.8+, 7.1+

Deploy ECK in your Kubernetes cluster

Read the upgrade notes first if you are attempting to upgrade an existing ECK deployment.

If you are using GKE, make sure your user has cluster-admin permissions. For more information, see Prerequisites for using Kubernetes RBAC on GKE.
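
For example, you can grant your own Google account the cluster-admin role with a ClusterRoleBinding similar to the following sketch (the binding name is arbitrary, and the gcloud command only substitutes your account email, assuming the gcloud CLI is configured):

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value account)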

If you are using Amazon EKS, make sure the Kubernetes control plane is allowed to communicate with the Kubernetes nodes on port 443. This is required for communication with the Validating Webhook. For more information, see Recommended inbound traffic.

  1. Install custom resource definitions and the operator with its RBAC rules:

    kubectl apply -f https://download.elastic.co/downloads/eck/1.0.1/all-in-one.yaml
  2. Monitor the operator logs:

    kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
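
Before moving on, you can optionally confirm that the operator Pod itself is running; the elastic-system namespace below is the one used by the all-in-one manifest, as in the log command above:

kubectl get pods -n elastic-system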

Deploy an Elasticsearch cluster

Apply a simple Elasticsearch cluster specification, with one Elasticsearch node:

If your Kubernetes cluster does not have any Kubernetes nodes with at least 2GiB of free memory, the Pod will be stuck in a Pending state. See Managing compute resources for more information about resource requirements and how to configure them.

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.13.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.roles: ["master", "data", "ingest"]
      node.store.allow_mmap: false
EOF

The operator automatically creates and manages Kubernetes resources to achieve the desired state of the Elasticsearch cluster. It may take up to a few minutes until all the resources are created and the cluster is ready for use.

Setting node.store.allow_mmap: false has performance implications and should be tuned for production workloads as described in the Virtual memory section.
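
If you are curious which Kubernetes resources the operator created for the cluster, you can list them by the cluster-name label that ECK applies to them (this is the same label used to select Pods later in this guide; exactly which resource types carry it can vary by ECK version):

kubectl get statefulsets,services,secrets -l elasticsearch.k8s.elastic.co/cluster-name=quickstart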

Monitor cluster health and creation progress

Get an overview of the current Elasticsearch clusters in the Kubernetes cluster, including health, version and number of nodes:

kubectl get elasticsearch
NAME          HEALTH    NODES     VERSION   PHASE         AGE
quickstart    green     1         8.13.2    Ready         1m

When you create the cluster, there is no HEALTH status and the PHASE is empty. After a while, the PHASE turns into Ready, and HEALTH becomes green.
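
Instead of re-running the command, you can also follow that transition as it happens with the standard watch flag:

kubectl get elasticsearch quickstart -w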

You can see that one Pod is in the process of being started:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart'
NAME                      READY   STATUS    RESTARTS   AGE
quickstart-es-default-0   1/1     Running   0          79s

Access the logs for that Pod:

kubectl logs -f quickstart-es-default-0

Request Elasticsearch access

A ClusterIP Service is automatically created for your cluster:

kubectl get service quickstart-es-http
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
quickstart-es-http   ClusterIP   10.15.251.145   <none>        9200/TCP   34m

  1. Get the credentials.

    A default user named elastic is automatically created with the password stored in a Kubernetes secret:

    PASSWORD=$(kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode)
  2. Request the Elasticsearch endpoint.

    From inside the Kubernetes cluster:

    curl -u "elastic:$PASSWORD" -k "https://quickstart-es-http:9200"

    From your local workstation, use the following command in a separate terminal:

    kubectl port-forward service/quickstart-es-http 9200

    Then request localhost:

    curl -u "elastic:$PASSWORD" -k "https://localhost:9200"

Disabling certificate verification using the -k flag is not recommended and should be used for testing purposes only. See: Setting up your own certificate

The request returns a response similar to this:

{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "XqWg0xIiRmmEBg4NMhnYPg",
  "version" : {...},
  "tagline" : "You Know, for Search"
}
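
If you would rather not disable verification at all, even while testing, one option is to extract the self-signed CA that ECK stores in a secret and point curl at it. This is only a sketch: the secret name follows ECK's <cluster-name>-es-http-certs-public convention, the ca.crt key should be verified against your ECK version, and because the certificate is issued for the in-cluster service names you may need curl's --resolve option (or the namespace-qualified service name) for hostname verification to pass through the port-forward:

# Verify the secret and its keys first, e.g. with: kubectl get secret quickstart-es-http-certs-public -o yaml
kubectl get secret quickstart-es-http-certs-public -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
curl --cacert ca.crt --resolve quickstart-es-http:9200:127.0.0.1 -u "elastic:$PASSWORD" "https://quickstart-es-http:9200"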

Deploy a Kibana instance

To deploy your Kibana instance, go through the following steps.

  1. Specify a Kibana instance and associate it with your Elasticsearch cluster:

    cat <<EOF | kubectl apply -f -
    apiVersion: kibana.k8s.elastic.co/v1
    kind: Kibana
    metadata:
      name: quickstart
    spec:
      version: 8.13.2
      count: 1
      elasticsearchRef:
        name: quickstart
    EOF
  2. Monitor Kibana health and creation progress.

    Similar to Elasticsearch, you can retrieve details about Kibana instances:

    kubectl get kibana

    And the associated Pods:

    kubectl get pod --selector='kibana.k8s.elastic.co/name=quickstart'
  3. Access Kibana.

    A ClusterIP Service is automatically created for Kibana:

    kubectl get service quickstart-kb-http

    Use kubectl port-forward to access Kibana from your local workstation:

    kubectl port-forward service/quickstart-kb-http 5601

    Open https://localhost:5601 in your browser. Your browser will show a warning because the self-signed certificate configured by default is not verified by a third-party certificate authority and is not trusted by your browser. You can temporarily acknowledge the warning for the purposes of this quick start, but it is highly recommended that you configure valid certificates for any production deployment.

    Log in as the elastic user. The password can be obtained with the following command:

    kubectl get secret quickstart-es-elastic-user -o=jsonpath='{.data.elastic}' | base64 --decode; echo
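
Once Kibana reports green health, the instance is associated with Elasticsearch and ready to use. If you want to script that check rather than read the kubectl get kibana output, the following sketch assumes the health value is exposed as status.health on the Kibana resource (confirm the field name on your ECK version):

kubectl get kibana quickstart -o jsonpath='{.status.health}'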

Upgrade your deployment

You can add and modify most elements of the original cluster specification, provided they translate to valid transformations of the underlying Kubernetes resources (for example, existing volume claims cannot be resized). The operator attempts to apply your changes with minimal disruption to the existing cluster. Make sure that the Kubernetes cluster has sufficient resources to accommodate the changes: extra storage space, and enough memory and CPU to temporarily spin up new Pods.

For example, you can grow the cluster to three Elasticsearch nodes:

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.13.2
  nodeSets:
  - name: default
    count: 3
    config:
      node.roles: ["master", "data", "ingest"]
      node.store.allow_mmap: false
EOF
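
After you apply the updated specification, you can watch the additional Pods come up, using the same selector as earlier in this guide:

kubectl get pods --selector='elasticsearch.k8s.elastic.co/cluster-name=quickstart' -w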

Use persistent storage

The cluster that you deployed in this quickstart guide only allocates a persistent volume of 1GiB for storage using the default storage class defined for the Kubernetes cluster. You will most likely want to have more control over this for production workloads. Refer to Volume claim templates for more information.
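
As a sketch of what that looks like, each nodeSet can declare its own volume claim template. The claim is named elasticsearch-data so that it backs the Elasticsearch data path, while the requested size and the storageClassName below are illustrative and depend on your cluster; remember that existing claims cannot be resized, so this is typically set when a cluster or nodeSet is first created:

nodeSets:
- name: default
  count: 3
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Gi
      storageClassName: standard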

Check out the samples

You can find a set of sample resources in the project repository. To customize the Elasticsearch resource, check the Elasticsearch sample.

For a full description of each CustomResourceDefinition, go to the project repository. You can also retrieve it from the cluster. For example, describe the Elasticsearch CRD specification with:

kubectl describe crd elasticsearch
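
kubectl describe also accepts the fully qualified CRD name, and you can dump the complete schema as YAML to browse every field. The plural name below follows the usual <plural>.<group> convention; kubectl get crd lists the exact names installed in your cluster:

kubectl get crd elasticsearches.elasticsearch.k8s.elastic.co -o yaml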