Containers: How to run ELK Stack on Linux on IBM Z with filebeat

by Alice Frosi

This is a follow-up post on how to use the ELK stack on IBM Z. Check the previous post to see how to build the ELK stack.

This setup uses docker-compose to start the containers. The configuration aims to be a working example of how to set up ELK: filebeat is used to collect file logs and to enable the syslog module.

The repo s390x-container-logging contains the configurations and the docker-compose.yaml file.

$ git clone https://github.com/s390x-container-samples/s390x-container-logging
$ cd s390x-container-logging
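
The actual docker-compose.yaml lives in the repo; as a rough, illustrative sketch, its shape could look like the following (service names and published ports match the docker-compose ps output below, while the build entries are placeholders for whatever the repo actually defines):
version: '3'
services:
  elasticsearch:
    build: elasticsearch/        # placeholder: the repo provides the s390x Elasticsearch image
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    build: logstash/             # placeholder
    ports:
      - "5000:5000"
      - "5044:5044"
      - "9600:9600"
    depends_on:
      - elasticsearch
  kibana:
    build: kibana/               # placeholder
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
  beats:
    build: beats/                # placeholder: runs filebeat with beats/config/filebeat.yaml
    depends_on:
      - logstash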

Now you can start the containers using docker-compose:
$ docker-compose start
Starting elasticsearch ... done
Starting logstash      ... done
Starting beats         ... done
Starting kibana        ... done
$ docker-compose ps
                Name                               Command               State                     Ports                  
--------------------------------------------------------------------------------------------------------------------------
s390xcontainerlogging_beats_1           /bin/sh -c $BEATSNAME -e - ...   Up                                               
s390xcontainerlogging_elasticsearch_1   elasticsearch                    Up      0.0.0.0:9200->9200/tcp,                  
                                                                                 0.0.0.0:9300->9300/tcp                   
s390xcontainerlogging_kibana_1          kibana -H 0.0.0.0                Up      0.0.0.0:5601->5601/tcp                   
s390xcontainerlogging_logstash_1        logstash                         Up      0.0.0.0:5000->5000/tcp, 5043/tcp,        
                                                                                 0.0.0.0:5044->5044/tcp, 514/tcp,         
                                                                                 9292/tcp, 0.0.0.0:9600->9600/tcp   

You can browse to the Kibana UI at http://<IP>:5601/app/kibana.
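
If the UI is not reachable right away, Kibana may still be starting up or waiting for Elasticsearch. Its standard status API (not specific to this setup) gives a quick check from the command line; it returns a JSON document whose overall state should be green once Kibana is ready:
$ curl -s http://<IP>:5601/api/status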

If you run into trouble, you can use docker logs <container-id> to check the container output.
$ docker logs s390xcontainerlogging_logstash_1
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-09-30T15:09:49,257][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2019-09-30T15:09:49,326][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2019-09-30T15:09:52,056][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-09-30T15:09:52,104][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-09-30T15:09:52,382][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"7c500aa0-82a1-4b8c-b807-15c7750db17d", :path=>"/usr/share/logstash/data/uuid"}
[2019-09-30T15:10:15,337][INFO ][org.reflections.Reflections] Reflections took 112 ms to scan 1 urls, producing 19 keys and 39 values 
[2019-09-30T15:10:25,953][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elastic:xxxxxx@elasticsearch:9200/]}}
[2019-09-30T15:10:27,852][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://elastic:xxxxxx@elasticsearch:9200/"}
[2019-09-30T15:10:28,066][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>7}
[2019-09-30T15:10:28,084][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>7}
[2019-09-30T15:10:28,260][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//elasticsearch:9200"]}
[2019-09-30T15:10:28,335][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-filter-geoip-6.0.1-java/vendor/GeoLite2-City.mmdb"}
[2019-09-30T15:10:29,116][INFO ][logstash.outputs.elasticsearch] Index Lifecycle Management is set to 'auto', but will be disabled - Index Lifecycle management is not installed on your Elasticsearch cluster
[2019-09-30T15:10:33,342][WARN ][org.logstash.instrument.metrics.gauge.LazyDelegatingGauge] A gauge metric of an unknown type (org.jruby.specialized.RubyArrayOneObject) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[2019-09-30T15:10:33,359][INFO ][logstash.javapipeline    ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>32, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>4000, :thread=>"#<Thread:0x554d95ea run>"}
[2019-09-30T15:10:33,926][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2019-09-30T15:10:34,037][INFO ][logstash.javapipeline    ] Pipeline started {"pipeline.id"=>"main"}
[2019-09-30T15:10:34,736][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2019-09-30T15:10:34,847][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2019-09-30T15:10:36,866][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

Elasticsearch needs to be reachable at http://localhost:9200/. A quick check is to query the Elasticsearch cluster:
$ curl -X GET http://localhost:9200/
{
  "name" : "elastisearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "w87vepkPTNqlAxrZIZA1NA",
  "version" : {
    "number" : "7.3.0-SNAPSHOT",
    "build_flavor" : "oss",
    "build_type" : "tar",
    "build_hash" : "de777fa",
    "build_date" : "2019-09-25T11:24:55.832607Z",
    "build_snapshot" : true,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
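
Once filebeat starts shipping data through Logstash, an index whose name starts with filebeat should appear in Elasticsearch (the exact name depends on the output configuration in the repo). The standard _cat API lists the indices:
$ curl -X GET "http://localhost:9200/_cat/indices?v"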

The example uses filebeat to monitor the log files in the /var/log/ directory. You can use the scripts in the demo directory to generate some samples (you need sudo privileges):
$ demo/generate_kernel_msg.sh 
+ LOG=/var/log/demo.log
+ '[' '!' -f /var/log/demo.log ']'
+ for i in {1..100}
+ su root -c 'echo Some kernel message for demo > /dev/kmsg'
+ sleep 1
+ for i in {1..100}
+ su root -c 'echo Some kernel message for demo > /dev/kmsg'
[...]

$ demo/generate_logs.sh 
+ LOG=/var/log/demo.log
+ '[' '!' -f /var/log/demo.log ']'
+ for i in {1..1000}
++ date +%d.%m.%y
++ date +%T
+ echo '30.09.19 17:17:36' 'Some logs for elk demo'
+ sleep 1
+ for i in {1..1000}
++ date +%d.%m.%y
++ date +%T
[...]
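
The trace already reveals most of what the script does; reconstructed from it, demo/generate_logs.sh is roughly equivalent to the sketch below (the append into $LOG and the handling of a missing file are not visible in the trace and are assumed here):
#!/bin/bash
# run with sudo so that writing to /var/log/demo.log is permitted
set -x
LOG=/var/log/demo.log
# create the demo log file if it does not exist yet (assumed behaviour)
[ ! -f "$LOG" ] && touch "$LOG"
for i in {1..1000}; do
    # append a timestamped demo line; the redirection into $LOG is assumed
    echo "$(date +%d.%m.%y) $(date +%T)" "Some logs for elk demo" >> "$LOG"
    sleep 1
done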

The first time you access Kibana, you will be asked to create an index pattern. You can choose the filebeat* index.
After that, you will be able to see the logs that have been generated by the scripts.

This configuration is achieved by adding the following to beats/config/filebeat.yaml:
filebeat.inputs:
  - type: log
    enabled: true
    paths:
     - /var/log/*.log

and in the docker-compose.yaml file:
      - type: bind
        source: /var/log/
        target: /var/log/

In the same way, you can enable additional filebeat modules; an overview of the available modules is given in the Filebeat documentation.
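
For instance, the syslog module mentioned at the beginning of this post is provided by Filebeat's system module; a minimal sketch of enabling it in beats/config/filebeat.yaml (not necessarily identical to the repo's configuration) would be:
filebeat.modules:
  - module: system
    syslog:
      enabled: true
      # var.paths can be set to override the default syslog locations,
      # e.g. /var/log/syslog* or /var/log/messages* depending on the distribution
      # var.paths: ["/var/log/syslog*"]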

Another example is enabling the docker module to collect container logs by adding the following to beats/config/filebeat.yaml:
filebeat.inputs:
  - type: docker
    enabled: true
    containers.ids: '*'

and in the docker-compose.yaml file:
      - type: bind
        source: /var/lib/docker/containers
        target: /var/lib/docker/containers

Now you are able to collect container logs. You can test the module by generating some container logs with:

$ docker run -ti alpine sh -c 'while true; do echo "Printing something" && sleep 5; done'
Printing something
Printing something
Printing something
^C
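
Back in Kibana, the new entries show up under the filebeat* index pattern; an easy way to find them in Discover is to search for the message text, for example with the query:
message : "Printing something"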
