Running Elasticsearch and Kibana locally using Docker

In this post, I will show how to run Elasticsearch and Kibana in Docker containers on your local machine, which can be helpful when you need to set up a quick test environment.

References

Most of what I’m about to describe came from the official Elasticsearch and Kibana reference documentation. Version 6.4 is the current version as of the date of this blog post.

Prerequisites

The only prerequisites are Docker and Docker Compose installed on your local machine.

Defining Docker Containers

docker-compose.yml
version: '3'

services:

  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
    volumes:
      # Persist index data in a named volume so it survives container restarts
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      # Run as a single node; skips cluster discovery and production bootstrap checks
      - discovery.type=single-node

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:6.4.0
    ports:
      - 5601:5601

volumes:

  esdata:
    driver: local

This file defines two Docker containers, one for Elasticsearch and another for Kibana. Once you have this file, you can run docker-compose up to start Elasticsearch and Kibana in Docker on your machine. It might take a minute for the containers to fully launch, but once they do, you should be able to open a browser and navigate to Kibana at http://localhost:5601.
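To confirm that Elasticsearch itself is up before opening Kibana, you can hit its root endpoint as a quick sanity check:

curl -s http://localhost:9200

A healthy instance responds with a small JSON document that includes the cluster name and the version number (6.4.0 here).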

Loading Some Sample Data

The Kibana user guide provides a tutorial with some sample data sets. We can use the following scripts to download those data sets, create indexes for them, and load them into Elasticsearch.

One thing to note: since this is a non-production environment, and in order to keep things as simple as possible, I set the number of replicas to 0 (the default is 1) for each of the indexes. This keeps Elasticsearch from reporting the health of the indexes as yellow, since there is no second Elasticsearch node for the replicas to live on.
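If you want to see the effect of this setting, you can ask Elasticsearch for its cluster health once the data is loaded; with zero replicas on a single node, the status should come back green:

curl -s "localhost:9200/_cluster/health?pretty"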

load-bank
#!/bin/bash
# Create the bank index, then bulk-load the sample account data into it
temp=$(mktemp -d)
cd "$temp"
curl -s -X PUT "localhost:9200/bank" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0}}'
curl -s -O https://download.elastic.co/demos/kibana/gettingstarted/accounts.zip
unzip accounts.zip
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/bank/account/_bulk?pretty' --data-binary @accounts.json > /dev/null
cd - > /dev/null
rm -r "$temp"

load-logstash
#!/bin/bash
# Create one logstash index per day, mapping geo.coordinates as a geo_point,
# then bulk-load the sample web log data
temp=$(mktemp -d)
cd "$temp"
for day in 18 19 20; do
  curl -s -X PUT "localhost:9200/logstash-2015.05.$day" -H 'Content-Type: application/json' -d'{"settings":{"number_of_replicas":0},"mappings":{"log":{"properties":{"geo":{"properties":{"coordinates":{"type":"geo_point"}}}}}}}'
done
curl -s https://download.elastic.co/demos/kibana/gettingstarted/logs.jsonl.gz | gunzip > logs.jsonl
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/_bulk?pretty' --data-binary @logs.jsonl > /dev/null
cd - > /dev/null
rm -r "$temp"

load-shakespeare
#!/bin/bash
# Create the shakespeare index with explicit field mappings,
# then bulk-load the complete works of Shakespeare
temp=$(mktemp -d)
cd "$temp"
curl -s -X PUT "localhost:9200/shakespeare" -H 'Content-Type: application/json' -d'
{
 "settings": {"number_of_replicas":0},
 "mappings": {
  "doc": {
   "properties": {
    "speaker": {"type": "keyword"},
    "play_name": {"type": "keyword"},
    "line_id": {"type": "integer"},
    "speech_number": {"type": "integer"}
   }
  }
 }
}'
curl -s -O https://download.elastic.co/demos/kibana/gettingstarted/shakespeare_6.0.json
curl -s -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/shakespeare/doc/_bulk?pretty' --data-binary @shakespeare_6.0.json > /dev/null
cd - > /dev/null
rm -r "$temp"

With these scripts in place, we can download the sample data and load it into our Elasticsearch instance by running ./load-bank, ./load-logstash, and ./load-shakespeare.
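For example, assuming the three scripts are saved under those names in the current directory:

chmod +x load-bank load-logstash load-shakespeare
./load-bank
./load-logstash
./load-shakespeare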

You can verify the newly created indexes using curl -X GET "localhost:9200/_cat/indices?v" or by opening a browser and navigating to the Index Management screen in Kibana at http://localhost:5601/app/kibana#/management/elasticsearch/index_management, where you should see the new indexes listed with a green health status.
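As an additional spot check, you can ask for a document count on one of the indexes using the standard _count API; the bank sample data should come back with a non-zero count:

curl -s "localhost:9200/bank/_count?pretty"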

Resetting the environment

All of the data from your dockerized Elasticsearch instance is stored in a Docker volume called esdata. This volume persists even after you stop and remove the containers. If you want to start from a clean slate, you can run docker-compose down -v, which will delete the esdata volume. The next time you run docker-compose up, the volume will be recreated empty.
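In other words, a full reset is just the following two commands, run from the directory containing docker-compose.yml:

docker-compose down -v
docker-compose up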

Conclusion

In this post, I showed how you can run Elasticsearch and Kibana locally using Docker. While the configuration I presented is not suitable for a production environment, it does offer a simple way to quickly spin up an environment for testing or experimentation.
