About a week ago I set up an ELK stack as part of a home project. I wanted network usage metrics, latency numbers, log aggregation, everything, and the ELK stack seemed promising. Since then it's been unstable, to say the least.

While it was running, Kibana and Elasticsearch were each taking up two full cores. Kibana stayed up the whole time, while Elasticsearch would randomly crash complaining that the heap was full. Even after following Elastic's guidelines and setting the heap size, it kept crashing with the same error after a couple of hours. Once I noticed the high CPU usage, I wondered whether whatever was causing it was also behind the memory issues. Turns out, it was.
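For reference, the heap settings live in /etc/elasticsearch/jvm.options on a Debian package install. Elastic's guidance is to set the minimum and maximum to the same value and keep it at no more than half the machine's RAM. On my 8 GB VM that looks something like this (the 2g value here is just an example, not a recommendation):

```
-Xms2g
-Xmx2g
```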

Someone had suggested that X-Pack might be the thing eating the CPU, so I set out to disable it, since I don't have a license for it anyway.

While following Elastic's directions to remove it, I found that the module wasn't even installed. While searching for how to disable it, I read that you can disable individual X-Pack features in the .yml files. Since the modules weren't installed, I was pretty skeptical that disabling them would do anything, but, what the heck, I tried it. It worked! No more high CPU usage; the VM sits around 1% CPU when it's not pulling in a bunch of logs. The Elasticsearch process has now been running for about a day, so good there so far.

A small note: don't put all of the disable flags in all of the .yml files. Parts of the stack don't support some of the X-Pack features and won't start if the corresponding disable setting is in their config. Kibana was forgiving about extra flags; the others were not.

Here's what worked for me, per .yml file.

In /etc/elasticsearch/elasticsearch.yml

xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false

In /etc/logstash/logstash.yml

xpack.monitoring.enabled: false

In /etc/kibana/kibana.yml

xpack.graph.enabled: false
xpack.ml.enabled: false
xpack.monitoring.enabled: false
xpack.reporting.enabled: false
xpack.security.enabled: false
xpack.watcher.enabled: false

Just for reference, I'm running this on a 4-core VM with 8 GB of memory on Debian 9.

Now to add more metrics.