NoSQL databases differ from traditional RDBMS in almost every aspect. While classic databases such as PostgreSQL offer dumps, Elasticsearch provides snapshotting, which is a great fit in some cases but can feel overwhelming in others.

In this blog post, we cover two approaches to Elasticsearch backup - one is official and recommended, while the other is more the result of a community effort.

Elasticsearch snapshot and restore

If you opt for Elasticsearch snapshot and restore, backups are taken incrementally: each new snapshot stores only the changes that have occurred since the previous one. The files are saved in the folder registered as path.repo in the Elasticsearch configuration. Should you prefer to store the files somewhere else, there are numerous repository plugins for S3, Azure and similar services that can come in handy.

In our example, we will back up to a local folder. We used an Ubuntu machine, but all the steps apply equally if you prefer another Linux distro or Windows.

1. Prepare the filesystem

mkdir /home/backup
chown elasticsearch:elasticsearch /home/backup

2. Add repo to elasticsearch.yml file

echo 'path.repo: ["/home/backup"]' >> /etc/elasticsearch/elasticsearch.yml
systemctl restart elasticsearch
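
Before registering the repository, you can confirm that the node actually picked up the new path.repo value. One way to check it, using the nodes info API (the filter_path parameter just trims the response to the relevant setting):

curl -XGET 'http://localhost:9200/_nodes/settings?filter_path=nodes.*.settings.path.repo&pretty'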

3. With the filesystem preparations done, we can now register the repository in Elasticsearch. All commands are executed directly on the Elasticsearch host, but you can use Kibana or Postman if that is more convenient for you.

curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/_snapshot/backup' -d '{
  "type": "fs",
  "settings": {
      "location": "/home/backup",
      "compress": true
  }
}'
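
Before taking the first snapshot, it is worth reading the repository definition back and letting Elasticsearch verify that it can actually write to it:

curl -XGET 'http://localhost:9200/_snapshot/backup?pretty'
curl -XPOST 'http://localhost:9200/_snapshot/backup/_verify?pretty'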

4. Let's create our first snapshot

curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/_snapshot/backup/snapshot1' -d '{ "indices": "netflow*", "include_global_state": true }'

The include_global_state parameter is added because we want to back up and restore the global cluster state (which includes index templates) along with the indices and their mappings.
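
By default, the snapshot request returns immediately and the snapshot runs in the background. If you prefer the call to block until the snapshot finishes, the same request can be sent with the wait_for_completion parameter:

curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/_snapshot/backup/snapshot1?wait_for_completion=true' -d '{ "indices": "netflow*", "include_global_state": true }'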

The snapshotting process can be monitored through various APIs, both while it is in progress and after completion:

curl -XGET localhost:9200/_cat/snapshots/backup?v
curl -XGET localhost:9200/_snapshot/_status
curl -XGET localhost:9200/_snapshot/backup/_current
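
For a detailed, per-shard view of the snapshot we just created, you can also query its status directly:

curl -XGET localhost:9200/_snapshot/backup/snapshot1/_status?pretty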

Once the backup phase is complete, we can move on to recovery, or restore. On the target machine, you need to create the folders and execute all the commands from steps 1 and 2 above, up to systemctl restart elasticsearch. Then copy or rsync the entire repository folder to the new machine. Use the following command to restore:

curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/_snapshot/backup/snapshot1/_restore' -d '{ "indices": "netflow*", "include_global_state": true }'

Here we are restoring all the indices whose names start with netflow, mappings included. This process can take a long time, depending on your machine. You can monitor the status of every index with the _recovery API. Moreover, there are some optimizations available that can speed up the restore process:

I Fall back to the default refresh interval during the restore with: "ignore_index_settings": [ "index.refresh_interval" ]

II Raise the dynamic setting that limits inbound and outbound recovery traffic and defaults to 40mb: indices.recovery.max_bytes_per_sec
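
Below is a hedged sketch applying both of these tweaks, together with a _recovery progress check; the 100mb throttle value is just an example, not a recommendation:

# Follow per-index recovery progress during the restore
curl -XGET 'http://localhost:9200/netflow*/_recovery?pretty'

# Restore while falling back to the default refresh interval (optimization I)
curl -XPOST -H "Content-Type: application/json" 'http://localhost:9200/_snapshot/backup/snapshot1/_restore' -d '{ "indices": "netflow*", "include_global_state": true, "ignore_index_settings": [ "index.refresh_interval" ] }'

# Raise the recovery traffic limit from its 40mb default (optimization II)
curl -XPUT -H "Content-Type: application/json" 'http://localhost:9200/_cluster/settings' -d '{ "transient": { "indices.recovery.max_bytes_per_sec": "100mb" } }'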

There are also other, expert-level recovery settings available, but you have to be careful not to overload your Elasticsearch cluster, which can lead to out-of-memory errors and crashes.


Elasticdump

Elasticdump is a Node.js application that can easily be downloaded and used for backup and recovery. In this example we will show how to use it.

1. Download and install the LTS version of Node.js

curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

2. Download and install elasticdump globally

sudo npm install elasticdump -g

The elasticdump command is executed on the Elasticsearch node.
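
For a quick, single-index export of documents only, a minimal invocation looks like this (the index name netflow-2019.01.01 is just a placeholder):

elasticdump --input=http://localhost:9200/netflow-2019.01.01 --output=/home/backup/netflow-2019.01.01.json --type=data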

3. Back up the template of the NetFlow indices

elasticdump --input=http://localhost:9200/netflow --output=templates.json --type=template

4. Use multielasticdump to export all indices, along with their mappings, settings and templates

multielasticdump --direction=dump --input=http://localhost:9200 --output=/home/backup

5. Copy the whole folder and the templates.json file to the new machine.

The restore process is basically just swapping --input and --output in the backup commands from steps 3 and 4 above:

elasticdump --input=templates.json --output=http://localhost:9200/netflow --type=template
multielasticdump --direction=load --input=/home/backup --output=http://localhost:9200
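
Once the load finishes, a quick sanity check that the NetFlow indices and their document counts are back on the new machine:

curl -XGET 'http://localhost:9200/_cat/indices/netflow*?v'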

Elasticdump and multielasticdump can be quite slow as well, so here are some tips and tricks for faster export/import (a combined example follows the list):

I Use the flag that defines the number of objects moved per batch operation. The default is 100, although depending on your RAM you can set it to 1000 or even 10000: --limit

II Apply the flag that compresses the JSON files on the fly, so you can skip separate gzip or zstd runs afterward: --fsCompress

III There is also a flag that disables index refresh. It improves indexing speed and lowers hardware requirements, but recently indexed data might not be picked up. It is usually recommended for big-data indexing, where speed matters more than data freshness: --noRefresh

IV Also useful is the flag that determines how many forks run simultaneously; by default it matches the number of CPUs: --parallel

V Last but not least, you may use the flag that designates which types to exclude from the dump/load. Six options are supported: data, mapping, analyzer, alias, settings and template. Multiple types can be given, separated by commas: --ignoreType
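
A hedged sketch combining the flags above; the index names, file paths, batch size and fork count are example values only:

# Export one index with larger batches and on-the-fly compression
elasticdump --input=http://localhost:9200/netflow-2019.01.01 --output=/home/backup/netflow-2019.01.01.json.gz --limit=1000 --fsCompress

# Import it elsewhere with index refresh disabled for faster indexing
elasticdump --input=/home/backup/netflow-2019.01.01.json.gz --output=http://localhost:9200/netflow-2019.01.01 --limit=1000 --fsCompress --noRefresh

# Dump everything with an explicit fork count, skipping analyzers and aliases
multielasticdump --direction=dump --input=http://localhost:9200 --output=/home/backup --parallel=4 --ignoreType=analyzer,alias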

If these tweaks do not give satisfying results, there is one dirty fix, although it is not recommended on low-RAM systems. The setting below raises the maximum result window, i.e. the limit on how many objects can be retrieved from Elasticsearch in one request. After applying it to all indices, you should be able to export and import faster, at the cost of higher RAM consumption.

curl -H "Content-Type: application/json" -XPUT 'http://localhost:9200/_all/_settings?preserve_existing=true' -d '{"index.max_result_window" : "100000"}'

Using Elasticdump with all the mentioned flags can sometimes crash your Elasticsearch instance if you don't have enough RAM, so you need to be careful when exporting data this way.