Publish WSO2 APIM Logs to Elasticsearch

WSO2 API Manager is a fully open-source, full-lifecycle API management solution that can run anywhere. It can be deployed on-prem or on a private cloud, consumed as a service on the cloud, or deployed in a hybrid fashion where its components are distributed across multiple cloud and on-prem infrastructures.

Even though WSO2 API Manager comes with its own Analytics components, it is sometimes useful to have a central analytics dashboard for all the components in the enterprise, including the WSO2 components. Whenever there is a requirement like this, the ELK Stack is one of the best solutions available these days. Analytics statistics can also be published to the ELK Stack, in which case the API Manager's analytics component could be dropped entirely, but this post covers publishing only the logs to the ELK Stack.

Components of the ELK Stack

  • Beats –
    Beats is a free and open platform for single-purpose data shippers. They send data from hundreds or thousands of machines and systems to Logstash or Elasticsearch.
  • Logstash –
    Logstash is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite “stash.”
  • Elasticsearch –
    Elasticsearch is a distributed, open source search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. Elasticsearch is built on Apache Lucene and was first released in 2010 by Elasticsearch N.V. (now known as Elastic). Known for its simple REST APIs, distributed nature, speed, and scalability, Elasticsearch is the central component of the Elastic Stack, a set of open source tools for data ingestion, enrichment, storage, analysis, and visualization. Commonly referred to as the ELK Stack (after Elasticsearch, Logstash, and Kibana), the Elastic Stack now includes a rich collection of lightweight shipping agents known as Beats for sending data to Elasticsearch.
  • Kibana –
    Kibana is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps.

Components Diagram

WSO2 APIM ELK Integration

The logging framework of WSO2 products is built on top of Log4j, and all the log configurations can be changed in the ‘repository/conf/log4j2.properties’ file. Please refer to the WSO2 documentation for more information on setting up logging in WSO2 API Manager.

As shown in the above diagram, the API Manager publishes logs to a predefined directory, and that directory is mounted into Filebeat. Filebeat then streams the new logs to Logstash. In Logstash we can define filters, and based on those predefined filters it decides which logs should be fed into Elasticsearch. When passing logs to Elasticsearch, Logstash can extract fields from the data, and those fields can then be used to query the logs accordingly. Kibana fetches data from Elasticsearch and visualizes it according to the user's needs. Any custom dashboard can be built there using the extracted fields.


Download the components

  • WSO2 API Manager
    WSO2 API Manager can be downloaded from the website and started as it is. Please refer to the documentation for more information. For the moment, let’s go ahead with the product binary.
  • ELK Stack
    Filebeat, Logstash, Elasticsearch, and Kibana can be downloaded from the Elastic website.

Configuring and starting the components

For WSO2 API Manager, there is nothing to configure. Please note that the logs will be persisted in the ‘wso2am-3.1.0/repository/logs/’ directory. Start the product using the startup script.

rnavagamuwa@randikan bin % ./

After extracting the Filebeat binary, open the ‘filebeat.yml’ file. All the configuration required for this integration is done here. As a Filebeat input, let’s configure the log type and point the path to the ‘wso2carbon.log’ file. If there is a requirement to feed all the log files, then ‘*’ can be used instead of specifying the log file name. The configured ‘filebeat.yml’ should look like this; change the path according to your file system. Then start the component by executing the ‘filebeat‘ script.
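As a sketch, a minimal ‘filebeat.yml’ along these lines should work. The installation path is a placeholder, and the Logstash host and port are assumptions (5044 is the default Beats port); adjust both for your environment.

```yaml
# filebeat.yml -- minimal sketch for shipping wso2carbon.log to Logstash
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /path/to/wso2am-3.1.0/repository/logs/wso2carbon.log
      # to ship every log file, use a wildcard instead:
      # - /path/to/wso2am-3.1.0/repository/logs/*

# send events to Logstash rather than directly to Elasticsearch
output.logstash:
  hosts: ["localhost:5044"]
```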

rnavagamuwa@randikan filebeat-7.7.1-darwin-x86_64 % ./filebeat

A GROK filter has to be configured in Logstash, and the following is the complete GROK filter I have used. Save it as ‘logstash-beat.conf’. Here we are directly passing all the logs, but you can write your own GROK filter according to your requirements.
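A minimal ‘logstash-beat.conf’ could look like the sketch below. This is not the exact filter from the post: the Beats port (5044), the Elasticsearch address (localhost:9200), and the grok pattern are all assumptions. The pattern targets the typical wso2carbon.log line layout, but the exact format depends on your log4j2 layout, so verify it against real log lines before deploying.

```conf
# logstash-beat.conf -- minimal sketch, adjust ports, hosts, and pattern
input {
  beats {
    port => 5044
  }
}

filter {
  # parse a typical wso2carbon.log line, e.g.:
  # TID: [-1234] [2020-06-20 12:00:00,123]  INFO {org.wso2.carbon...} - message
  grok {
    match => {
      "message" => "TID:%{SPACE}\[%{INT:tenant_id}\]%{SPACE}\[%{TIMESTAMP_ISO8601:log_timestamp}\]%{SPACE}%{LOGLEVEL:level}%{SPACE}\{%{JAVACLASS:class}\}%{SPACE}-%{SPACE}%{GREEDYDATA:log_message}"
    }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash-%{+YYYY.MM.dd}"
  }
}
```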

The Grok Debugger is a very handy tool for writing proper GROK filters; you can debug them even before deploying. When starting Logstash, don’t forget to pass the created GROK filter config file.

rnavagamuwa@randikan bin % ./logstash -f ../config/logstash-beat.conf

Since we are using all the default configurations, there is nothing for us to do in Elasticsearch.

rnavagamuwa@randikan bin % ./elasticsearch

Finally let’s start the Kibana server too.

rnavagamuwa@randikan bin % ./kibana

Visualizing in Kibana

Once all the above steps are completed, you can open Kibana by visiting http(s)://your_kibana_host:5601 in your browser. Make sure that you have pushed some data (logs and statistics) to Elasticsearch beforehand. Create your first index pattern, “logstash-*”, by clicking the Create button. All the available fields, including the custom ones, will then be available on the Kibana dashboard.

Available fields on Kibana

Then create the required dashboard by choosing the needed variables.
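For example, once the fields extracted by your Logstash filter appear in the index pattern, they can be used in Kibana’s KQL search bar to narrow the dashboard down. The field names below (‘level’, ‘class’) are assumptions that depend on what your own grok filter extracts.

```text
level : "ERROR" and class : org.wso2.carbon.apimgt*
```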

This is a very short blog post on configuring ELK Stack with WSO2 APIM and I’m planning to write another post on having distributed tracing across multiple WSO2 products using the ELK stack.

Please comment below or email me if you have any concerns regarding this blog post.
