SCSB Logging
SCSB uses centralized logging for efficient log analysis. SCSB has opted for the Elastic stack, with Elasticsearch, Kibana, and Filebeat running on Docker.
The Elastic stack used in SCSB is composed of the following components:
• Elasticsearch, a database and search engine that stores and indexes data in a scalable way.
• Kibana, a user interface that makes it easy to access the data in Elasticsearch.
• Filebeat, a lightweight data collector that collects and forwards data from the SCSB applications to Elasticsearch.
To deploy the stack, SCSB uses a pre-installed Ubuntu Linux 18.04 LTS server with Docker 19.03.8 and docker-compose 1.25.5 (build 8a1c60f6), along with Elasticsearch 7.8.0, Kibana 7.8.0, and Filebeat 7.8.0.
The setup meets the following requirements:
• All the docker container logs (available with the docker logs command) must be searchable in the Kibana interface.
• Even after being imported into Elasticsearch, the logs must remain available with the docker logs command.
• It should be as efficient as possible in terms of resource consumption (CPU and memory).
• It should be able to decode logs encoded in JSON.
Architecture:
Flow Diagram
SCSB uses a single Filebeat agent on the server where the SCSB middleware applications are running to collect all the microservice logs and send them to Elasticsearch.
SCSB applications use Logback to convert the required fields of SLF4J’s Mapped Diagnostic Context into JSON for every log event occurring in the application. Logback is configured in the logback-spring.xml file, located under the resources folder of the application.
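The exact JSON layout of a log event depends on the Logback configuration; the field names below are illustrative assumptions rather than the actual SCSB layout, but they show the kind of single-line JSON event that Filebeat later decodes:

{
  "@timestamp": "2020-07-15T10:15:30.001Z",
  "level": "INFO",
  "logger_name": "com.example.SampleController",
  "thread_name": "http-nio-8080-exec-1",
  "message": "Item request processed successfully"
}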
In Linux, the Docker container log files are located at /var/lib/docker/containers/<container-id>/<container-id>-json.log, so all SCSB application Docker logs are stored in this location by default. Filebeat collects these Docker logs, enriches them with important Docker metadata, and sends them to Elasticsearch. A single Filebeat container is installed on the Docker host.
Applications do not need to know any details about the logging architecture and do not have to worry about organizing log files. Filebeat is solely responsible for sending log entries to Elasticsearch.
The setup works as shown in the following diagram:
Logical Architecture:
Docker JSON File Logging Driver with Filebeat as a docker container
Installation:
There are many ways to install Filebeat, Elasticsearch, and Kibana. To keep things as simple as possible, SCSB uses Docker Compose to set them up, using the official Docker images and a single Elasticsearch node.
The Filebeat container is installed on the server where the SCSB applications are running, whereas Elasticsearch and Kibana are installed on a separate server dedicated to logging.
A docker-compose.yml file is used for both the Filebeat installation and the Elasticsearch-Kibana installation; both are shown below.
Filebeat docker-compose.yml on the server where the applications are running:
version: '2.2'
services:
  filebeat:
    image: docker.elastic.co/beats/filebeat:7.8.0
    hostname: test
    container_name: filebeat
    volumes:
      # Configuration file
      - ./filebeat/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:rw
      # Docker logs - needed to access all docker logs (read only)
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
      # Additional information about containers
      - /var/run/docker.sock:/var/run/docker.sock:rw
      # Persistence data - needed to persist filebeat tracking data
      - ./filebeat/data:/usr/share/filebeat/data:rw
    # Allow access to log files and docker.sock
    user: root
    restart: on-failure
The Filebeat configuration file is shared with the container through the volume mounted at /usr/share/filebeat/filebeat.yml.
Filebeat also needs access to the Docker log files. They can usually be found in /var/lib/docker/containers, but that may depend on your Docker installation. The Docker socket /var/run/docker.sock is also shared with the container, which allows Filebeat to query the Docker daemon and enrich the logs with information that is not directly in the log files, such as the name of the image or the name of the container.
The user running Filebeat needs to be able to access all these shared elements. Unfortunately, the filebeat user used in the official Docker image does not have the privileges to access them, which is why the user is changed to root in the docker-compose file.
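The Elasticsearch-Kibana docker-compose.yml on the dedicated logging server is analogous. The following is only a minimal single-node sketch, assuming the official 7.8.0 images and the default ports (9200 for Elasticsearch, 5601 for Kibana); the volume path and other settings are placeholders:

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      # Run Elasticsearch as a single node (no cluster discovery)
      - discovery.type=single-node
    ports:
      - "9200:9200"
    volumes:
      # Persist index data outside the container (placeholder path)
      - ./elasticsearch/data:/usr/share/elasticsearch/data:rw
    restart: on-failure

  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      # Point Kibana at the Elasticsearch node defined above
      - ELASTICSEARCH_HOSTS=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    restart: on-failure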
Filebeat configuration file (filebeat.docker.yml)
filebeat.autodiscover:
  providers:
    - type: docker
      labels.dedot: true
      templates:
        - condition:
            contains:
              container.labels.collect_logs_with_filebeat: "true"
          config:
            - type: container
              format: docker
              paths:
                - "/var/lib/docker/containers/${data.docker.container.id}/*.log"
              processors:
                - decode_json_fields:
                    when.equals:
                      docker.container.labels.decode_log_event_to_json_object: "true"
                    fields:
                      - message
                      - stack_trace
                    target: ""
                    overwrite_keys: true
                    process_array: true
                    add_error_key: true

logging.metrics.enabled: false

output.elasticsearch:
  hosts:
    - "elasticsearch-host-goes-here:port-goes-here"
  index: "filebeat-%{+yyyy.MM.dd}"

setup.template.name: filebeat
setup.template.pattern: filebeat-*
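For reference, a container is only picked up by the autodiscover template above if it carries the collect_logs_with_filebeat label, and its JSON log events are only decoded if it also carries decode_log_event_to_json_object. A minimal, hypothetical sketch of how an SCSB application service could set these labels in its own docker-compose.yml (the service name and image below are placeholders):

version: '2.2'
services:
  scsb-sample-service:                 # placeholder service name
    image: scsb/sample-service:latest  # placeholder image
    labels:
      # Matches the autodiscover condition, so Filebeat collects this container's logs
      collect_logs_with_filebeat: "true"
      # Enables the decode_json_fields processor for this container's log events
      decode_log_event_to_json_object: "true"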
Running everything
Now that the configuration part is done, start the containers by running docker-compose on both the Filebeat docker-compose.yml and the Elasticsearch-Kibana docker-compose.yml:
# in the directory containing the docker-compose.yml file
docker-compose up -d
Once the Filebeat stack and the microservice stack are deployed in Docker, log entries are sent to Elasticsearch, Docker metadata is added by default thanks to Filebeat's Docker autodiscover configuration, and all functional JSON log fields are decoded.
Filebeat automatically creates an index in Elasticsearch (following the filebeat-%{+yyyy.MM.dd} pattern, e.g. filebeat-2020.07.15) once everything is in place and connected.
In Kibana, you can then explore the logs in its dashboards.
 Access Kibana in your web browser: http://localhost:kibana-port-goes-here.
The first thing to do is to configure the Elasticsearch index pattern that Kibana displays. You can use the pattern filebeat-* to include all the logs coming from Filebeat. You also need to define the field used as the log timestamp; use @timestamp.