delete

Summary

Delete indices or snapshots

For delete operations, all Kibana indices (.kibana, kibana-int, .marvel-kibana) are filtered out of the selection to prevent accidental deletion. If you wish to delete one of these indices, use the --index flag to supply the index name manually.
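
For example, a minimal sketch of deleting the .kibana index anyway by naming it explicitly (the exact placement of the --index flag may vary by Curator version):

curator delete indices --index .kibana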

Flags

$ curator delete --help
Usage: curator delete [OPTIONS] COMMAND [ARGS]...

  Delete indices or snapshots

Options:
  --disk-space FLOAT  Delete indices beyond DISK_SPACE gigabytes.
  --reverse BOOLEAN   Only valid with --disk-space. Affects sort order of the
                      indices.  True means reverse-alphabetical (if dates are
                      involved, older is deleted first).  [default: True]
  --help              Show this message and exit.

Commands:
  indices    Index selection.
  snapshots  Snapshot selection.

This command requires either the indices subcommand for index selection, or the snapshots subcommand for snapshot selection.

Examples

Delete indices:

curator delete indices <<index selection parameters>>
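
As a concrete sketch, assuming daily, date-stamped indices and the date-based selection flags available in your Curator version, this would delete indices older than 30 days:

curator delete indices --older-than 30 --time-unit days --timestring '%Y.%m.%d'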

Delete snapshots:

curator delete snapshots <<snapshot selection parameters>>
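
Similarly, a sketch for snapshots, assuming a repository named my_backups (a hypothetical name) and the same date-based selection flags:

curator delete snapshots --repository my_backups --older-than 30 --time-unit days --timestring '%Y.%m.%d'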

Delete indices once total disk consumption exceeds 1024 gigabytes (1 terabyte):

curator delete --disk-space 1024 indices <<index selection parameters>>
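
Because --reverse defaults to True, indices sort reverse-alphabetically and, with date-stamped names, the oldest are deleted first. A sketch of flipping that order:

curator delete --disk-space 1024 --reverse False indices <<index selection parameters>>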

Deleting snapshots by space is not yet possible.

Deleting Indices By Space

This option is for those who want to retain indices based on disk consumption, rather than by a set number of days. There are some important caveats surrounding this choice.

Caveats
  • Elasticsearch cannot calculate the size of closed indices. Because it does not track how much disk space closed indices consume, your space calculations will be inaccurate if you close indices.
  • Indices consume resources just by existing, so you may run into performance or operational problems in Elasticsearch as the index count climbs.
  • You must manually calculate how much space is available across all nodes. The total you give is compared against the sum of all space consumed across all nodes in your cluster. If you use shard allocation to put more shards or indices on a single node, that does not change the cluster-wide total, but you may still run out of space on that node (see the worked example below).

These are only a few of the caveats. This is still a valid use case, however, especially for those running a single-node test box, so we include the option for your convenience.
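
As a worked example of that manual calculation, assume a cluster of three data nodes with 500 gigabytes of usable disk each, for 1500 gigabytes total. To keep index data at roughly 80% of capacity, you would pass 1200:

curator delete --disk-space 1200 indices <<index selection parameters>>

Note that shard allocation can still fill an individual node before this cluster-wide total is reached.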