Installing in an air-gapped environment


Some components of the Elastic Stack require additional configuration and local dependencies in order to deploy in environments without internet access. This guide gives an overview of this setup scenario and ties together the existing documentation for the individual parts of the stack.

If you’re working in an air-gapped environment and have a subscription level that includes Support coverage, contact us if you’d like to request an offline version of the Elastic documentation.

1. Self-Managed Install (Linux)

Refer to the section for each Elastic component for air-gapped installation configuration and dependencies in a self-managed Linux environment.

1.1. Elasticsearch

Air-gapped install of Elasticsearch may require additional steps in order to access some of the features. General install and configuration guides are available in the Elasticsearch install documentation.


  • To be able to use the GeoIP processor, refer to the GeoIP processor documentation for instructions on downloading and deploying the required databases.
  • Refer to Machine learning for instructions on deploying the Elastic Learned Sparse EncodeR (ELSER) natural language processing (NLP) model and other trained machine learning models.
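As an illustration of the GeoIP point above, the relevant elasticsearch.yml settings for an air-gapped cluster are `ingest.geoip.downloader.endpoint` and `ingest.geoip.downloader.enabled`; the mirror URL below is a hypothetical local service, so this is a sketch rather than a drop-in configuration:

```yaml
# elasticsearch.yml — GeoIP databases in an air-gapped deployment (sketch)
# Option 1: point the automatic downloader at a locally hosted database mirror
#           (hypothetical internal host)
ingest.geoip.downloader.endpoint: "https://geoip.internal.example/v1/database"
# Option 2: disable automatic downloads entirely and manage the database
#           files manually on each ingest node
#ingest.geoip.downloader.enabled: false
```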

1.2. Kibana

Air-gapped install of Kibana may require a number of additional services in the local network in order to access some of the features. General install and configuration guides are available in the Kibana install documentation.


  • To be able to use Kibana mapping visualizations, you need to set up and configure the Elastic Maps Service.
  • To be able to use Kibana sample data, install or update hundreds of prebuilt alert rules, and explore available data integrations, you need to set up and configure the Elastic Package Registry.
  • To provide detection rule updates for Endpoint Security agents, you need to set up and configure the Elastic Endpoint Artifact Repository.
  • To access Enterprise Search capabilities (in addition to the general search capabilities of Elasticsearch), you need to set up and configure Enterprise Search.
  • To access the APM integration, you need to set up and configure Elastic APM.

1.3. Beats

Elastic Beats are lightweight data shippers. They do not require any unique setup in the air-gapped scenario. To learn more, refer to the Beats documentation.

1.4. Logstash

Logstash is a versatile data shipping and processing application. It does not require any unique setup in the air-gapped scenario. To learn more, refer to the Logstash documentation.

1.5. Elastic Agent

Air-gapped install of Elastic Agent depends on the Elastic Package Registry and the Elastic Artifact Registry for most use cases. The agent itself is fairly lightweight and installs dependencies only as required by its configuration. In terms of connections to these dependencies, Elastic Agents need to be able to connect to the Elastic Artifact Registry directly, while Elastic Package Registry connections are handled through Kibana.
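To illustrate the direct Artifact Registry dependency, the following sketch builds the download URL an Elastic Agent would request from a privately hosted registry during a self-upgrade. The registry host is a hypothetical placeholder; the path layout mirrors the public artifacts.elastic.co download structure:

```shell
#!/usr/bin/env bash
set -o nounset

# Hypothetical private registry host; agents are pointed at a host like this
# instead of artifacts.elastic.co via Fleet's agent binary download settings.
ARTIFACT_REGISTRY="${ARTIFACT_REGISTRY:-https://artifacts.internal.example/downloads}"

# Build the URL for an elastic-agent tarball, mirroring the public layout:
#   <base>/beats/elastic-agent/elastic-agent-<version>-<os>-<arch>.tar.gz
agent_artifact_url() {
  local version="$1" os="$2" arch="$3"
  echo "${ARTIFACT_REGISTRY}/beats/elastic-agent/elastic-agent-${version}-${os}-${arch}.tar.gz"
}

agent_artifact_url "8.4.3" "linux" "x86_64"
```

Your local registry must serve artifacts under exactly this path layout, since the agent constructs such URLs itself.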

Additionally, if the Elastic Agent Elastic Defend integration is used, then access to the Elastic Endpoint Artifact Repository is necessary in order to deploy updates for some of the detection and prevention capabilities.

To learn more about install and configuration, refer to the Elastic Agent install documentation. Make sure to check the requirements specific to running Elastic Agents in an air-gapped environment.

To get a better understanding of how to work with Elastic Agent configuration settings and policies, refer to Appendix D - Agent Integration Guide.

1.6. Fleet Server

Fleet Server is a required middleware component for any scalable deployment of the Elastic Agent. The air-gapped dependencies of Fleet Server are the same as those of the Elastic Agent.

To learn more about installing Fleet Server, refer to the Fleet Server set up documentation.

1.7. Elastic APM

Air-gapped setup of the APM Server is possible in two ways:

  • Installing the APM Server binary with the APM integration.
  • Running Elastic Agent with the APM integration.

1.8. Elastic Maps Service

Refer to Connect to Elastic Maps Service in the Kibana documentation to learn how to configure your firewall to connect to Elastic Maps Service, host it locally, or disable it completely.
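For instance, a kibana.yml fragment for the locally hosted or fully disabled variants might look like the following (the internal maps host is a hypothetical placeholder):

```yaml
# kibana.yml — Elastic Maps Service in an air-gapped deployment (sketch)
# Option 1: point Kibana at a locally hosted Elastic Maps Server
#           (hypothetical internal host)
map.emsUrl: "https://maps.internal.example"
# Option 2: disable the connection to the hosted Elastic Maps Service entirely
#map.includeElasticMapsService: false
```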

1.9. Enterprise Search

Detailed install and configuration instructions are available in the Enterprise Search install documentation.

1.10. Elastic Package Registry

Air-gapped install of the EPR is possible using any OCI-compatible runtime such as Podman (a typical choice for RHEL-like Linux systems) or Docker. Links to the official container image and usage guide are available on the Air-gapped environments page in the Fleet and Elastic Agent Guide.

Refer to Appendix A - Elastic Package Registry for additional setup examples.

Besides setting up the EPR service, you also need to configure Kibana to use this service. If using TLS with the EPR service, it is also necessary to set up Kibana to trust the certificate presented by the EPR.
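For example, assuming the EPR is served at a hypothetical internal host with a certificate issued by a private CA, the Kibana side of the setup could look like this sketch:

```yaml
# kibana.yml — use a private Elastic Package Registry (sketch; host is hypothetical)
xpack.fleet.registryUrl: "https://epr.internal.example:8443"
# To make Kibana (a Node.js application) trust the EPR's private CA, start the
# Kibana process with the CA bundle exported in NODE_EXTRA_CA_CERTS, e.g.:
#   NODE_EXTRA_CA_CERTS=/etc/kibana/private-ca.pem
```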

1.11. Elastic Artifact Registry

Air-gapped install of the Elastic Artifact Registry is necessary to enable Elastic Agent deployments to perform self-upgrades and to install certain components needed for some of the data integrations (that is, in addition to what is retrieved from the EPR). To learn more, refer to Host your own artifact registry for binary downloads in the Fleet and Elastic Agent Guide.

Refer to Appendix B - Elastic Artifact Registry for additional setup examples.

When setting up your own web server, such as NGINX, to function as the Elastic Artifact Registry, it is recommended not to use TLS, as there is currently no direct way to establish certificate trust between Elastic Agents and this service.
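Because the artifacts are served without TLS, it is worth verifying the mirrored files against their published .sha512 companion files after downloading. A minimal sketch, assuming a mirror directory where each artifact sits next to its checksum file:

```shell
#!/usr/bin/env bash
set -o nounset -o errexit -o pipefail

# Verify every artifact in a mirror directory against its .sha512 companion.
# The .sha512 files use the standard "HASH  FILENAME" format that
# sha512sum --check understands.
verify_artifacts() {
  local mirror_dir="$1"
  local sumfile
  while IFS= read -r -d '' sumfile; do
    # sha512sum -c must run in the directory containing the artifact,
    # because the checksum file references the bare file name
    (cd "$(dirname "$sumfile")" && sha512sum --check --quiet "$(basename "$sumfile")")
  done < <(find "$mirror_dir" -type f -name '*.sha512' -print0)
}

# Usage: verify_artifacts /opt/elastic-packages
```

The function exits non-zero on the first mismatch, so it can gate an automated mirror-sync job.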

1.12. Elastic Endpoint Artifact Repository

Air-gapped setup of this component is essentially identical to the setup of the Elastic Artifact Registry, except that different artifacts are served. To learn more, refer to Configure offline endpoints and air-gapped environments in the Elastic Security guide.

1.13. Machine learning

Some machine learning features, like natural language processing (NLP), require you to deploy trained models. To learn about deploying machine learning models in an air-gapped environment, refer to the air-gapped sections of the trained models documentation.
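For models such as ELSER, Elasticsearch can be pointed at a local mirror of the model artifacts via the `xpack.ml.model_repository` setting. A sketch, where the internal URL is a hypothetical web server hosting the mirrored model files:

```yaml
# elasticsearch.yml — air-gapped trained model installs (sketch)
# Point Elasticsearch at a local mirror of the model artifacts.
# The host is hypothetical; file:// URIs to a local directory also work.
xpack.ml.model_repository: "https://models.internal.example/ml-models/"
```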

2. Kubernetes & OpenShift Install

Setting up air-gapped Kubernetes or OpenShift installs of the Elastic Stack has some unique concerns, but the general dependencies are the same as in the self-managed install case on a regular Linux machine.

2.1. Elastic Kubernetes Operator (ECK)

The Elastic Kubernetes operator is an additional component in the Kubernetes or OpenShift install that handles much of the work of installing, configuring, and updating deployments of the Elastic Stack. For details, refer to the Elastic Cloud on Kubernetes install instructions.

The main requirements are:

  • Syncing container images for ECK and all other Elastic Stack components over to a locally-accessible container repository.
  • Modifying the ECK helm chart configuration so that ECK is aware that it is supposed to use your offline container repository instead of the public Elastic repository.
  • Optionally, disabling ECK telemetry collection in the ECK helm chart. This configuration propagates to all other Elastic components, such as Kibana.
  • Building your custom deployment container image for the Elastic Artifact Registry.
  • Building your custom deployment container image for the Elastic Endpoint Artifact Repository.
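The image-syncing step above can be sketched as a dry run that prints the mirroring commands rather than executing them. The private registry host and Stack version are placeholders, and the image list is a representative subset to extend for your deployment:

```shell
#!/usr/bin/env bash
set -o nounset

# Hypothetical private registry and an example Stack version; adjust both.
PRIVATE_REGISTRY="${PRIVATE_REGISTRY:-registry.internal.example:5000}"
STACK_VERSION="${STACK_VERSION:-8.4.3}"

# Stack images share the Stack version. Note that the ECK operator image
# (docker.elastic.co/eck/eck-operator) follows its own version scheme and
# should be mirrored separately.
STACK_IMAGES="elasticsearch/elasticsearch kibana/kibana beats/elastic-agent"

# Print the pull/tag/push commands (dry run) instead of executing them.
mirror_plan() {
  local image src dst
  for image in $STACK_IMAGES; do
    src="docker.elastic.co/${image}:${STACK_VERSION}"
    dst="${PRIVATE_REGISTRY}/${image}:${STACK_VERSION}"
    echo "docker pull ${src}"
    echo "docker tag ${src} ${dst}"
    echo "docker push ${dst}"
  done
}

mirror_plan
```

Piping the output through a review step before running it keeps the mirroring auditable.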

2.2. Elastic Package Registry

The container image can be downloaded from the official Elastic Docker repository, as described in the Fleet and Elastic Agent air-gapped environments documentation.

Ideally, this container runs as a Kubernetes Deployment. Refer to Appendix C - EPR Kubernetes Deployment for examples.

2.3. Elastic Artifact Registry

A custom container would need to be created following similar instructions to setting up a web server in the self-managed install case. For example, a container file using an NGINX base image could be used to run a build similar to the example described in Appendix B - Elastic Artifact Registry.

2.4. Elastic Endpoint Artifact Repository

Just like for the Elastic Artifact Registry, a custom container needs to be created following instructions similar to setting up a web server for the self-managed install case.

2.5. Ironbank Secure Images for Elastic

Besides the public Elastic container repository, most Elastic Stack container images are also available in Platform One’s Iron Bank.

3. Elastic Cloud Enterprise

To install Elastic Cloud Enterprise in an air-gapped environment, you'll need to host your own Elastic Package Registry (see 1.10. Elastic Package Registry). Refer to the ECE offline install instructions for details.

Appendix A - Elastic Package Registry

The following script prepares a Podman container for the EPR on a RHEL 8 system and generates a systemd service file so that the EPR runs as a service in a production environment.

#!/usr/bin/env bash

# Assumed local certificate paths and image version; adjust for your environment.
EPR_TLS_CERT="/etc/elastic/epr/epr.pem"
EPR_TLS_KEY="/etc/elastic/epr/epr-key.pem"
EPR_IMAGE="docker.elastic.co/package-registry/distribution:<stack-version>"

podman create \
  --name "elastic-epr" \
  -p "443:8080" \
  -v "$EPR_TLS_CERT:/etc/ssl/epr.crt:ro" \
  -v "$EPR_TLS_KEY:/etc/ssl/epr.key:ro" \
  -e "EPR_ADDRESS=0.0.0.0:8080" \
  -e "EPR_TLS_CERT=/etc/ssl/epr.crt" \
  -e "EPR_TLS_KEY=/etc/ssl/epr.key" \
  "$EPR_IMAGE"

## creates the service file in the current working directory
# podman generate systemd --new --files --name elastic-epr --restart-policy always

The following is an example of an actual systemd service file for an EPR, launched as a Podman service.

# container-elastic-epr.service
# autogenerated by Podman 4.1.1
# Wed Oct 19 13:12:33 UTC 2022

[Unit]
Description=Podman container-elastic-epr.service
Wants=network-online.target
After=network-online.target

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=always
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
	--cidfile=%t/%n.ctr-id \
	--cgroups=no-conmon \
	--rm \
	--sdnotify=conmon \
	-d \
	--replace \
	--name elastic-epr \
	-p 443:8080 \
	-v /etc/elastic/epr/epr.pem:/etc/ssl/epr.crt:ro \
	-v /etc/elastic/epr/epr-key.pem:/etc/ssl/epr.key:ro \
	-e EPR_ADDRESS=0.0.0.0:8080 \
	-e EPR_TLS_CERT=/etc/ssl/epr.crt \
	-e EPR_TLS_KEY=/etc/ssl/epr.key \
	docker.elastic.co/package-registry/distribution:<stack-version>
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all

[Install]
WantedBy=default.target


Appendix B - Elastic Artifact Registry

The following example script downloads artifacts from the internet to be later served by a private Elastic Artifact Registry.

#!/usr/bin/env bash
set -o nounset -o errexit -o pipefail

ARTIFACT_DOWNLOADS_BASE_URL="https://artifacts.elastic.co/downloads"

DOWNLOAD_BASE_DIR=${DOWNLOAD_BASE_DIR:?"Make sure to set DOWNLOAD_BASE_DIR when running this script"}
STACK_VERSION=${STACK_VERSION:?"Make sure to set STACK_VERSION when running this script"}

COMMON_PACKAGE_PREFIXES="apm-server/apm-server beats/auditbeat/auditbeat beats/elastic-agent/elastic-agent beats/filebeat/filebeat beats/heartbeat/heartbeat beats/metricbeat/metricbeat beats/osquerybeat/osquerybeat beats/packetbeat/packetbeat cloudbeat/cloudbeat endpoint-dev/endpoint-security fleet-server/fleet-server"

WIN_ONLY_PACKAGE_PREFIXES="beats/winlogbeat/winlogbeat"

RPM_PACKAGES="beats/elastic-agent/elastic-agent"
DEB_PACKAGES="beats/elastic-agent/elastic-agent"

function download_packages() {
  local url_suffix="$1"
  local package_prefixes="$2"

  # fetch each artifact along with its checksum and signature files
  local _url_suffixes="$url_suffix ${url_suffix}.sha512 ${url_suffix}.asc"
  local _pkg_dir=""
  local _dl_url=""

  for _download_prefix in $package_prefixes; do
    for _pkg_url_suffix in $_url_suffixes; do
      _dl_url="${ARTIFACT_DOWNLOADS_BASE_URL}/${_download_prefix}-${_pkg_url_suffix}"
      _pkg_dir=$(dirname ${DOWNLOAD_BASE_DIR}/${_download_prefix})
      (mkdir -p $_pkg_dir && cd $_pkg_dir && curl -O "$_dl_url")
    done
  done
}

# and we download
for _os in linux windows; do
  case "$_os" in
    linux)
      PKG_URL_SUFFIX="${STACK_VERSION}-${_os}-x86_64.tar.gz"
      ;;
    windows)
      PKG_URL_SUFFIX="${STACK_VERSION}-${_os}-x86_64.zip"
      ;;
    *)
      echo "[ERROR] Something happened"
      exit 1
      ;;
  esac

  download_packages "$PKG_URL_SUFFIX" "$COMMON_PACKAGE_PREFIXES"

  if [[ "$_os" = "windows" ]]; then
    download_packages "$PKG_URL_SUFFIX" "$WIN_ONLY_PACKAGE_PREFIXES"
  fi

  if [[ "$_os" = "linux" ]]; then
    download_packages "${STACK_VERSION}-x86_64.rpm" "$RPM_PACKAGES"
    download_packages "${STACK_VERSION}-amd64.deb" "$DEB_PACKAGES"
  fi
done

## selinux tweaks
# semanage fcontext -a -t "httpd_sys_content_t" '/opt/elastic-packages(/.*)?'
# restorecon -Rv /opt/elastic-packages

The following is an example NGINX configuration for running a web server for the Elastic Artifact Registry.

user  nginx;
worker_processes  2;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log          /var/log/nginx/access.log  main;
    sendfile            on;
    keepalive_timeout   65;

    server {
        listen                  9080 default_server;
        server_name             _;
        root                    /opt/elastic-packages;

        location / {
            # artifacts are served as static files from the root directory
        }
    }
}


Appendix C - EPR Kubernetes Deployment

The following is a sample EPR Kubernetes deployment YAML file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-package-registry
  namespace: default
  labels:
    app: elastic-package-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-package-registry
  template:
    metadata:
      name: elastic-package-registry
      labels:
        app: elastic-package-registry
    spec:
      containers:
        - name: epr
          image: docker.elastic.co/package-registry/distribution:<stack-version>
          ports:
            - containerPort: 8080
              name: http
          livenessProbe:
            tcpSocket:
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 30
          resources:
            requests:
              cpu: 125m
              memory: 128Mi
            limits:
              cpu: 1000m
              memory: 512Mi
          env:
            - name: EPR_ADDRESS
              value: ""
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elastic-package-registry
  name: elastic-package-registry
spec:
  ports:
  - port: 80
    name: http
    protocol: TCP
    targetPort: http
  selector:
    app: elastic-package-registry

Appendix D - Agent Integration Guide

When configuring any integration in Elastic Agent, you need to set up integration settings within whatever policy is ultimately assigned to that agent.

D.1. Terminology

Note the following terms and definitions:

Integration
A variety of optional capabilities that can be deployed on top of the Elastic Stack. Refer to Integrations to learn more.
Agent integration
The integrations that require Elastic Agent to run. For example, the Sample Data integration requires only Elasticsearch and Kibana and consists of dashboards, data, and related objects, but the APM integration not only has some Elasticsearch objects, but also needs Elastic Agent to run the APM Server.
Integration package
A set of dependencies (such as dashboards, scripts, and others) for a given integration that typically needs to be retrieved from the Elastic Package Registry before the integration can be correctly installed and configured.
Agent policy
A configuration for the Elastic Agent that may include one or more Elastic Agent integrations, along with the configuration for each of those integrations.

D.2. How to configure

There are three ways to configure Elastic Agent integrations, each described in one of the following sections.

D.2.1. Using the Kibana UI

Best option for: Manual configuration and users who prefer using a UI over scripting.

Example: Get started with logs and metrics

Agent policies and integration settings can be managed using the Kibana UI. For example, the following shows the configuration of logging for the System integration in an Elastic Agent policy:

[Image: configuration of a logging integration in an agent policy]

D.2.2. Using the kibana.yml config file

Good option for: Declarative configuration and users who need reproducible and automated deployments.

Example: Fleet settings in Kibana

This documentation is still under development; there may be gaps around building custom agent policies.

You can have Kibana create Elastic Agent policies on your behalf by adding the appropriate configuration parameters to the kibana.yml settings file. These include:

xpack.fleet.packages
Takes a list of all integration package names and versions that Kibana should download from the Elastic Package Registry (EPR). This is done because Elastic Agents themselves do not directly fetch packages from the EPR.
xpack.fleet.agentPolicies
Takes a list of Elastic Agent policies in the format expected by the Kibana Fleet HTTP API. Refer to the setting in Preconfiguration settings for the format. See also D.2.3. Using the Kibana Fleet API.
xpack.fleet.registryUrl
Takes a URL of the Elastic Package Registry that can be reached by the Kibana server. Enable this setting only when deploying in an air-gapped environment.
Other settings
You can add other, more discretionary settings for Fleet, Elastic Agents, and policies. Refer to Fleet settings in Kibana.
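A combined kibana.yml sketch using these Fleet settings might look like the following. The EPR URL is a hypothetical internal host, and the policy is deliberately minimal; refer to the preconfiguration settings reference for the full schema:

```yaml
# kibana.yml — Fleet preconfiguration sketch for an air-gapped deployment
xpack.fleet.registryUrl: "https://epr.internal.example:8443"
xpack.fleet.packages:
  - name: system
    version: latest
xpack.fleet.agentPolicies:
  - name: Air-gapped default policy
    id: airgapped-default-policy
    namespace: default
    package_policies:
      - name: system-1
        package:
          name: system
```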

D.2.3. Using the Kibana Fleet API

Best option for: Declarative configuration and users who need reproducible and automated deployments in even the trickiest of environments.

Example: See the following.

It is possible to use custom scripts that call the Kibana Fleet API to create or update policies without restarting Kibana, while also allowing for custom error handling and update logic.

At this time, you can refer to the Kibana Fleet HTTP API documentation; however, additional resources from public code repositories should be consulted to capture the full set of configuration options available for a given integration. Specifically, many integrations have unique configuration options such as inputs and data_streams.

In particular, the *.yml.hbs templates should be consulted to determine which vars are available for configuring a particular integration using the Kibana Fleet API.
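As a minimal sketch of this approach, the following builds a bare-bones agent policy payload and prints the corresponding Fleet API call. The Kibana host and credentials are placeholders, and real policies usually carry more fields; the `/api/fleet/agent_policies` endpoint and the required `kbn-xsrf` header come from the Kibana Fleet HTTP API:

```shell
#!/usr/bin/env bash
set -o nounset

# Hypothetical Kibana endpoint; adjust for your environment.
KIBANA_URL="${KIBANA_URL:-https://kibana.internal.example:5601}"

# Minimal agent policy payload; extend with the fields your deployment needs.
policy_payload() {
  cat <<'EOF'
{
  "name": "Air-gapped default policy",
  "namespace": "default",
  "monitoring_enabled": ["logs", "metrics"]
}
EOF
}

# Print the curl invocation rather than executing it (dry run).
# The kbn-xsrf header is required by Kibana HTTP APIs.
create_policy_cmd() {
  printf '%s\n' "curl -sS -X POST \"${KIBANA_URL}/api/fleet/agent_policies\" -H \"kbn-xsrf: true\" -H \"Content-Type: application/json\" -u \"elastic:\${ELASTIC_PASSWORD}\" -d @policy.json"
}

policy_payload
create_policy_cmd
```

Wrapping the call in a script like this makes it easy to add retry and error-handling logic around policy creation.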