1. Overview
In this quick tutorial, we’re going to have a look at how to send JMX data from our Tomcat server to the Elastic Stack (formerly known as ELK).
We’ll discuss how to configure Logstash to read data from JMX and send it to Elasticsearch.
2. Install the Elastic Stack
First, we need to install the Elastic Stack (Elasticsearch, Logstash, and Kibana).
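As an illustration, on a Debian-based system the whole stack can be installed from Elastic’s APT repository – the 5.x repository is assumed here, so adjust the path for other versions:
# add Elastic's signing key and the 5.x package repository (5.x assumed)
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-5.x.list
# install the three components
sudo apt-get update && sudo apt-get install elasticsearch logstash kibana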
Then, to make sure everything is connected and working properly, we’ll send some sample data through Logstash and check that it shows up in Elasticsearch and Kibana.
2.1. Test Logstash
First, we will go to the Logstash installation directory, which varies by operating system (in our case Ubuntu):
cd /opt/logstash
We can pass a simple configuration to Logstash from the command line:
bin/logstash -e 'input { stdin { } } output { elasticsearch { hosts => ["localhost:9200"] } }'
Then, we can simply type some sample data into the console – and press CTRL-D to close the pipeline when we’re done.
2.2. Test Elasticsearch
After adding the sample data, a Logstash index should be available on Elasticsearch – which we can check as follows:
curl -X GET 'http://localhost:9200/_cat/indices'
Sample Output:
yellow open logstash-2017.11.10 5 1 3531 0 506.3kb 506.3kb
yellow open .kibana 1 1 3 0 9.5kb 9.5kb
yellow open logstash-2017.11.11 5 1 8671 0 1.4mb 1.4mb
2.3. Test Kibana
Kibana runs by default on port 5601 – we can access the homepage at:
http://localhost:5601/app/kibana
We should be able to create a new index pattern “logstash-*” – and see our sample data there.
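If the page doesn’t come up, a quick sanity check is Kibana’s status API (assuming Kibana 5.x or later):
curl http://localhost:5601/api/status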
3. Configure Tomcat
Next, we need to enable JMX by adding the following to CATALINA_OPTS:
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9000
-Dcom.sun.management.jmxremote.ssl=false
-Dcom.sun.management.jmxremote.authenticate=false
Note that:
- We can configure CATALINA_OPTS by modifying setenv.sh – see the sketch after this list
- For Ubuntu users, setenv.sh can be found in ‘/usr/share/tomcat8/bin’
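Here’s a minimal setenv.sh sketch with these options – assuming a plain Tomcat setup with no other CATALINA_OPTS; merge it into your existing file if you already have one:
#!/bin/sh
# setenv.sh - picked up by catalina.sh at startup
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.port=9000"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.ssl=false"
CATALINA_OPTS="$CATALINA_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
export CATALINA_OPTS
After restarting Tomcat, the JMX endpoint will be listening on port 9000.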
4. Connect JMX and Logstash
Now, let’s connect our JMX metrics to Logstash – for this, we’ll need the JMX input plugin installed there (more on that below).
4.1. Configure JMX Metrics
First, we need to configure the JMX metrics we want to stash; we’ll provide the configuration in JSON format.
Here’s our jmx_config.json:
{
  "host" : "localhost",
  "port" : 9000,
  "alias" : "reddit.jmx.elasticsearch",
  "queries" : [
    {
      "object_name" : "java.lang:type=Memory",
      "object_alias" : "Memory"
    }, {
      "object_name" : "java.lang:type=Threading",
      "object_alias" : "Threading"
    }, {
      "object_name" : "java.lang:type=Runtime",
      "attributes" : [ "Uptime", "StartTime" ],
      "object_alias" : "Runtime"
    }
  ]
}
Note that:
- We used the same JMX port (9000) that we configured in CATALINA_OPTS
- We can provide as many configuration files as we want, but they all need to be in the same directory (in our case, we saved jmx_config.json in ‘/monitor/jmx/’)
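Before wiring Logstash to it, we can quickly verify that the JMX endpoint is reachable – for example with jconsole, which ships with the JDK:
jconsole localhost:9000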
4.2. JMX Input Plugin
Next, let’s install the JMX input plugin by running the following command in the Logstash installation directory:
bin/logstash-plugin install logstash-input-jmx
Then, we need to create a Logstash configuration file (jmx.conf), where the input is the JMX metrics and the output is directed to Elasticsearch:
input {
  jmx {
    path => "/monitor/jmx"
    polling_frequency => 60
    type => "jmx"
    nb_thread => 3
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
  }
}
Finally, we need to run Logstash and specify our configuration file:
bin/logstash -f jmx.conf
Note that our Logstash configuration file jmx.conf is saved in the Logstash home directory (in our case /opt/logstash).
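Once Logstash is running, we can check that the JMX documents are actually reaching Elasticsearch by searching for the type we set in jmx.conf:
curl 'http://localhost:9200/logstash-*/_search?q=type:jmx&size=1&pretty'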
5. Visualize JMX Metrics
Finally, let’s create a simple visualization of our JMX metrics data, over on Kibana. We’ll create a simple chart – to monitor the heap memory usage.
5.1. Create New Search
First, we’ll create a new search to get metrics related to heap memory usage:
- Click on the “New Search” icon in the search bar
- Type the following query:
metric_path:reddit.jmx.elasticsearch.Memory.HeapMemoryUsage.used
- Press Enter
- Make sure to add the ‘metric_path’ and ‘metric_value_number’ fields from the sidebar
- Click on the ‘Save Search’ icon in the search bar
- Name the search ‘used memory’
In case any of the fields from the sidebar are marked as unindexed, go to the ‘Settings’ tab and refresh the field list of the ‘logstash-*’ index.
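For reference, the saved search is roughly equivalent to the following raw Elasticsearch query – shown here only to illustrate what Kibana runs under the hood:
curl -H 'Content-Type: application/json' 'http://localhost:9200/logstash-*/_search?pretty' -d '
{
  "query" : {
    "match" : {
      "metric_path" : "reddit.jmx.elasticsearch.Memory.HeapMemoryUsage.used"
    }
  }
}'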
5.2. Create Line Chart
Next, we’ll create a simple line chart to monitor our heap memory usage over time:
- Go to ‘Visualize’ tab
- Choose ‘Line Chart’
- Choose ‘From saved search’
- Choose ‘used memory’ search that we created earlier
For Y-Axis, make sure to choose:
- Aggregation: Average
- Field: metric_value_number
For the X-Axis, choose ‘Date Histogram’ – then save the visualization.
5.3. Use Scripted Field
As the memory usage is reported in raw bytes, it’s not very readable. We can fix that by adding a scripted field in Kibana with a human-readable format:
- From ‘Settings’, go to indices and choose the ‘logstash-*’ index
- Go to ‘Scripted fields’ tab and click ‘Add Scripted Field’
- Name: metric_value_formatted
- Format: Bytes
- For the script, we will simply use the value of ‘metric_value_number’:
doc['metric_value_number'].value
Now, we can change our search and visualization to use the ‘metric_value_formatted’ field instead of ‘metric_value_number’ – and the data will be displayed properly.
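Alternatively – purely as a variant, not part of the setup above – the scripted field itself could do the unit conversion, for example to megabytes with a plain Number format:
doc['metric_value_number'].value / 1048576.0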
6. Conclusion
And we’re done. As you can see, the configuration isn’t particularly difficult, and getting the JMX data to be visible in Kibana allows us to do a lot of interesting visualization work to create a fantastic production monitoring dashboard.