This article explains how to configure Fluent Bit for monitoring application logs in a Kubernetes environment.
Welcome to this lesson. In the previous lesson, we successfully installed our Event Generator App. In this session, we configure Fluent Bit to monitor application logs. All configuration files are located in the current working directory, and each file plays a crucial role in deploying Fluent Bit on your Kubernetes cluster.
First, let’s list the files related to Fluent Bit:
```
controlplane efk-stack/event-generator on ⎈ main ➜ ls -lrt
total 24
-rw-r--r-- 1 root root  372 Jun 29 13:49 webapp-fluent-bit.yaml
-rw-r--r-- 1 root root 2245 Jun 29 13:49 fluent-bit.yaml
-rw-r--r-- 1 root root  112 Jun 29 13:49 fluent-bit-sa.yaml
-rw-r--r-- 1 root root 1400 Jun 29 13:49 fluent-bit-configmap.yaml
-rw-r--r-- 1 root root  181 Jun 29 13:49 fluent-bit-clusterrole.yaml
-rw-r--r-- 1 root root  260 Jun 29 13:49 fluent-bit-clusterrolebinding.yaml
```
Each file is responsible for different configuration aspects of Fluent Bit—from deploying a DaemonSet to specifying the ServiceAccount, ConfigMap, and RBAC permissions.
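If you want to deploy all of these pieces at once, the manifests can be applied together. The command below is a minimal sketch that assumes you run it from this directory and that the efk namespace already exists:

```bash
# Apply the ServiceAccount, RBAC objects, ConfigMap, and DaemonSet in one go.
kubectl apply \
  -f fluent-bit-sa.yaml \
  -f fluent-bit-clusterrole.yaml \
  -f fluent-bit-clusterrolebinding.yaml \
  -f fluent-bit-configmap.yaml \
  -f fluent-bit.yaml
```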
The fluent-bit.yaml file creates a DaemonSet in the efk namespace. The DaemonSet runs a Fluent Bit pod on every node, ensuring that logs are collected consistently across the cluster.
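As a rough sketch, a Fluent Bit DaemonSet for this setup generally resembles the manifest below. The image tag, volume mounts, and ServiceAccount reference shown here are assumptions for illustration, not the literal contents of fluent-bit.yaml:

```yaml
# Illustrative sketch only: the image tag, mounts, and other field values
# are assumptions, not the exact contents of fluent-bit.yaml.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: efk
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2.0
          ports:
            - containerPort: 2020   # HTTP server used for health checks
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: config
              mountPath: /fluent-bit/etc/conf
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: config
          configMap:
            name: fluent-bit
```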
The ConfigMap (fluent-bit-configmap.yaml) defines how Fluent Bit processes logs: it sets service parameters and configures inputs, filters, and outputs. A custom parser, docker_no_time, is defined to correctly handle Docker JSON logs. Here is the consolidated configuration:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit
  namespace: efk
data:
  custom_parsers.conf: |
    [PARSER]
        Name          docker_no_time
        Format        json
        Time_Keep     Off
        Time_Key      time
        Time_Format   %Y-%m-%dT%H:%M:%S.%L
  fluent-bit.conf: |
    [SERVICE]
        Daemon        Off
        Flush         1
        Log_Level     info
        Parsers_File  /fluent-bit/etc/parsers.conf
        Parsers_File  /fluent-bit/etc/conf/custom_parsers.conf
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020
        Health_Check  On

    [INPUT]
        Name              tail
        Path              /var/log/containers/app-event-simulator*.log
        Multiline.parser  docker, cri
        Tag               kube.*
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On

    [INPUT]
        Name            systemd
        Tag             host.*
        Systemd_Filter  _SYSTEMD_UNIT=kubelet.service
        Read_From_Tail  On

    [FILTER]
        Name                 kubernetes
        Match                kube.*
        Merge_Log            On
        Keep_Log             Off
        K8S-Logging.Parser   On
        K8S-Logging.Exclude  On

    [OUTPUT]
        Name                es
        Match               kube.*
        Host                elasticsearch
        Logstash_Format     On
        Retry_Limit         False
        Suppress_Type_Name  On

    [OUTPUT]
        Name                es
        Match               host.*
        Host                elasticsearch
        Logstash_Format     On
        Logstash_Prefix     node
        Retry_Limit         False
        Suppress_Type_Name  On
```
Key points in this configuration:
- The [SERVICE] section sets global properties such as the flush interval, log level, and the HTTP server settings used for health checks (see the quick check after this list).
- The [INPUT] sections define two sources: one tailing the event generator's container logs and one collecting systemd logs filtered for kubelet.service.
- The [FILTER] section enriches records with Kubernetes metadata and controls log merging and parsing.
- The [OUTPUT] sections forward logs to Elasticsearch with the appropriate formatting and connection parameters.
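Once the DaemonSet pods are running, the built-in HTTP server provides a quick way to confirm that Fluent Bit is healthy. The commands below are a sketch; the pod name fluent-bit-xxxxx is a placeholder you would replace with an actual pod from your cluster:

```bash
# Forward the Fluent Bit HTTP port from one of the DaemonSet pods.
# Replace fluent-bit-xxxxx with a real pod name from `kubectl -n efk get pods`.
kubectl -n efk port-forward pod/fluent-bit-xxxxx 2020:2020 &

# With Health_Check On, the health endpoint reports whether Fluent Bit is healthy.
curl http://localhost:2020/api/v1/health

# The root endpoint returns general build and service information.
curl http://localhost:2020/
```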
To grant Fluent Bit the necessary permissions to access Kubernetes resources, a ClusterRole is defined in fluent-bit-clusterrole.yaml. This role gives Fluent Bit the cluster-wide read access its Kubernetes filter needs to look up pod and namespace metadata.
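A minimal sketch of such a role, assuming only the read-only verbs the Kubernetes filter needs on pods and namespaces, is shown below; the actual rules in fluent-bit-clusterrole.yaml may differ slightly:

```yaml
# Sketch of a read-only ClusterRole for Fluent Bit's kubernetes filter.
# The resources and verbs are assumptions; check fluent-bit-clusterrole.yaml
# for the exact rules used in this setup.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit
rules:
  - apiGroups: [""]
    resources:
      - namespaces
      - pods
    verbs:
      - get
      - list
      - watch
```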
In the next lesson, we will explore how to visualize the log data in Kibana, confirming that logs are successfully reaching Elasticsearch via Fluent Bit.

Happy logging!