Shared file system repository

This repository type is only available if you run Elasticsearch on your own hardware. If you use Elasticsearch Service, see Elasticsearch Service repository types.

Use a shared file system repository to store snapshots on a shared file system.

To register a shared file system repository, first mount the file system to the same location on all master and data nodes. Then add the file system’s path or parent directory to the path.repo setting in elasticsearch.yml for each master and data node. For running clusters, this requires a rolling restart of each node.

By default, a network file system (NFS) uses user IDs (UIDs) and group IDs (GIDs) to match accounts across nodes. If your shared file system is an NFS and your nodes don’t use the same UIDs and GIDs, update your NFS configuration to account for this.
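
If you can't align UIDs and GIDs across nodes, one common approach is to squash all client accounts to a single owner on the NFS server. A minimal /etc/exports sketch, assuming example IDs and the mount point used below:

/mount/backups *(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)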

Supported path.repo values vary by platform:

Linux and macOS installations support Unix-style paths:

path:
  repo:
    - /mount/backups
    - /mount/long_term_backups
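
Windows installations support both DOS and Microsoft UNC paths. Backslashes in DOS paths must be escaped in elasticsearch.yml; the drive letter and directory names here are illustrative:

path:
  repo:
    - "E:\\Mount\\Backups"
    - "E:\\Mount\\Long_term_backups"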

After restarting each node, use Kibana or the create snapshot repository API to register the repository. When registering the repository, specify the file system’s path:

PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location"
  }
}
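
To confirm that all master and data nodes can access the registered location, you can use the verify snapshot repository API:

POST _snapshot/my_fs_backup/_verify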

If you specify a relative path, Elasticsearch resolves the path using the first value in the path.repo setting.

PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_fs_backup_location"        
  }
}

Here, the first value in the path.repo setting is /mount/backups, so the relative path my_fs_backup_location resolves to /mount/backups/my_fs_backup_location.

If you register the same snapshot repository with multiple clusters, only one cluster should have write access to the repository. On other clusters, register the repository as read-only.

This prevents multiple clusters from writing to the repository at the same time and corrupting the repository’s contents. It also prevents Elasticsearch from caching the repository’s contents, which means that changes made by other clusters will become visible straight away.

To register a file system repository as read-only using the create snapshot repository API, set the readonly parameter to true. Alternatively, you can register a URL repository for the file system, as sketched after the following example.

PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "my_fs_backup_location",
    "readonly": true
  }
}
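
For the URL repository alternative mentioned above, a sketch might look like the following. It assumes the same mount point; a file: URL must point to a location registered in the path.repo setting, and the repository name is illustrative:

PUT _snapshot/my_fs_backup_readonly
{
  "type": "url",
  "settings": {
    "url": "file:/mount/backups/my_fs_backup_location"
  }
}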

Repository settings

chunk_size
(Optional, byte value) Maximum size of files in snapshots. Files larger than this are broken down into chunks of this size or smaller. Defaults to null (unlimited file size).
compress
(Optional, Boolean) If true, metadata files, such as index mappings and settings, are compressed in snapshots. Data files are not compressed. Defaults to true.
location
(Required, string) Location of the shared file system used to store and retrieve snapshots. This location must be registered in the path.repo setting on all master and data nodes in the cluster.
max_number_of_snapshots
(Optional, integer) Maximum number of snapshots the repository can contain. Defaults to Integer.MAX_VALUE, which is 2^31-1 or 2147483647.
max_restore_bytes_per_sec
(Optional, byte value) Maximum snapshot restore rate per node. Defaults to unlimited. Note that restores are also throttled through recovery settings.
max_snapshot_bytes_per_sec
(Optional, byte value) Maximum snapshot creation rate per node. Defaults to 40mb per second.
readonly
(Optional, Boolean) If true, the repository is read-only. The cluster can retrieve and restore snapshots from the repository but not write to the repository or create snapshots in it. If false, the cluster can write to the repository and create snapshots in it. Defaults to false.

If you register the same snapshot repository with multiple clusters, only one cluster should have write access to the repository. On all other clusters, set the readonly parameter to true. Having multiple clusters write to the repository at the same time risks corrupting the repository's contents.
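
As a sketch of how several of these settings combine in a single registration request (the values are illustrative, not recommendations):

PUT _snapshot/my_fs_backup
{
  "type": "fs",
  "settings": {
    "location": "/mount/backups/my_fs_backup_location",
    "chunk_size": "1gb",
    "compress": true,
    "max_snapshot_bytes_per_sec": "80mb",
    "max_restore_bytes_per_sec": "80mb"
  }
}

Byte values such as 1gb and 80mb use Elasticsearch's standard byte size units.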