This is a guide for starting a multi-node Elasticsearch 2.3 cluster from Docker containers residing on different hosts. This is not a guide for creating a production-worthy ES cluster; it's more for edification (perhaps another guide will follow with some production best practices).

In this example, we'll start a three-node cluster. This assumes that you have Docker installed on all three hosts. It also assumes that the hosts can reach each other on ports 9200 and 9300.

Once these pre-reqs are met, you will need to collect the IP addresses for each host. In my example, the following host/IP combinations are available:

es1: 172.30.84.45
es2: 172.30.84.24
es3: 172.30.84.35
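
If you need to look up a host's address, something like the following works on most Linux hosts (assuming eth0 is the primary interface, as it is in this example):

$ ip -4 addr show eth0   # shows the IPv4 address bound to eth0
$ hostname -I            # alternatively, lists all addresses assigned to the host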

Next we will start a container on each machine:

$ docker run -d -p 9200:9200 -p 9300:9300 elasticsearch \
-Des.node.name="es1" \
-Des.cluster.name="mycluster" \
-Des.network.host=_eth0_ \
-Des.network.publish_host=172.30.84.45 \
-Des.discovery.zen.ping.unicast.hosts=172.30.84.45,172.30.84.24,172.30.84.35 \
-Des.discovery.zen.minimum_master_nodes=1
$ docker run -d -p 9200:9200 -p 9300:9300 elasticsearch \
-Des.node.name="es2" \
-Des.cluster.name="mycluster" \
-Des.network.host=_eth0_ \
-Des.network.publish_host=172.30.84.24 \
-Des.discovery.zen.ping.unicast.hosts=172.30.84.45,172.30.84.24,172.30.84.35 \
-Des.discovery.zen.minimum_master_nodes=1
$ docker run -d -p 9200:9200 -p 9300:9300 elasticsearch \
-Des.node.name="es3" \
-Des.cluster.name="mycluster" \
-Des.network.host=_eth0_ \
-Des.network.publish_host=172.30.84.35 \
-Des.discovery.zen.ping.unicast.hosts=172.30.84.45,172.30.84.24,172.30.84.35 \
-Des.discovery.zen.minimum_master_nodes=1
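
Once these are up, a quick sanity check on each host confirms the container actually started (the container ID is whatever docker ps reports on your machine):

$ docker ps                        # the elasticsearch container should show as "Up"
$ docker logs -f <container_id>    # watch the startup output until the node reports it has started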

So let's explain these a bit by picking apart the first command.

We use the standard docker run arguments to run the container in detached mode with -d.

We also map ports 9200 and 9300 to the host with -p 9200:9200 -p 9300:9300.

After we specify the image to run, elasticsearch, we provide runtime options to elasticsearch via the -Des options. These options are defined in the documentation as follows:

  • node.name - You may also want to change the default node name for each node to something like the display hostname. By default Elasticsearch will randomly pick a Marvel character name from a list of around 3000 names when your node starts up.

  • cluster.name - The cluster.name allows you to create clusters that are separated from one another. The default value for the cluster name is elasticsearch, though it is recommended to change this to reflect the logical group name of the cluster running.

  • network.host - The node will bind to this hostname or IP address and publish (advertise) this host to other nodes in the cluster. Accepts an IP address, hostname, a special value, or an array of any combination of these.

    In this case we bind it to the first Ethernet interface by using _eth0_. The default is _local_, which isn't going to work for us since the nodes need to communicate with each other across different hosts.

  • network.publish_host - The publish host is the single interface that the node advertises to other nodes in the cluster, so that those nodes can connect to it. Currently an elasticsearch node may be bound to multiple addresses, but only publishes one. If not specified, this defaults to the “best” address from network.host, sorted by IPv4/IPv6 stack preference, then by reachability.

    In this case we use the actual IP address of the host machine. Docker binds the container's internal eth0 address to the host's IP address and NATs the traffic for us. So with the previous option we configure Elasticsearch to listen on Docker's internal network, and with this one we advertise an address that the other nodes in the cluster can actually reach.
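
    To see the two addresses in play, the container's internal address versus the host address we publish, you can inspect the running container (a0da here is just the short container ID reported by docker ps on my host):

    $ docker inspect -f '{{ .NetworkSettings.IPAddress }}' a0da   # the container's internal bridge address, e.g. 172.17.0.2
    $ docker port a0da 9300                                       # the host-side binding created by -p 9300:9300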

The remaining options deal with cluster discovery. In this case, we use the unicast mode of zen discovery to find the other nodes in the cluster. The following options are configured, and the unicast hosts list is the complete set of node IP addresses:

  • discovery.zen.ping.unicast.hosts - The comma-delimited list of hosts belonging to the cluster.
  • discovery.zen.minimum_master_nodes - The minimum number of master-eligible nodes that must be visible to elect a master. To avoid split-brain, this is usually set to (master-eligible nodes / 2) + 1, which would be 2 for this cluster; we use 1 here since this isn't a production setup.
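
Once discovery succeeds, you can confirm from any node that all three members found each other; the _cat/nodes API ships with Elasticsearch and needs no plugins:

$ curl 'http://172.30.84.45:9200/_cat/nodes?v'   # should list es1, es2 and es3 once they have joined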

Cluster Health

Now that things are running, you'll likely want to verify that the cluster is behaving. You can check the health of your cluster via a simple GET request:

http://172.30.84.45:9200/_cluster/health
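
From the command line, the same check looks like this (the ?pretty flag just formats the JSON). With all three nodes joined you should see "number_of_nodes": 3, and with no unassigned shards the status will be green:

$ curl 'http://172.30.84.45:9200/_cluster/health?pretty'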

More than likely you'll want a better instrument to view your cluster.

ElasticHQ

A simple cluster manager is ElasticHQ.

To install, first run docker ps to display your container ID. In this case it's a0dafdeff031.

Then run the install command with your container ID:

$ docker exec a0da plugin install royrusso/elasticsearch-HQ

You can access it from a browser via one of your nodes:

http://172.30.84.45:9200/_plugin/hq

Kopf

Another popular cluster viewer is Kopf.

To install, first run docker ps to display your container ID. In this case it's a0dafdeff031.

Then run the install command with your container ID:

$ docker exec a0da plugin install lmenezes/elasticsearch-kopf

You can access it from a browser via one of your nodes:

http://172.30.84.45:9200/_plugin/kopf

Conclusion

So you've seen how to get a cluster running with Docker. This is all rather brute force. To ease this configuration, you can use a tool such as Weave or Consul to help automate the IP address configuration.


Addendum - checking the network interfaces on a Docker container

As mentioned in the network.host section above, we're binding to the first network adapter. We can inspect a container's network interfaces from the command line by dropping into its shell, in this case container 98a9:

$ docker exec -it 98a9 /bin/bash

Once you run this command you drop into bash inside container 98a9 and can run ip addr show to view the IP information.

root@98a92c567426:/usr/share/elasticsearch# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
40: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever