Welcome to this detailed guide on scaling Elasticsearch and Kibana within a Kubernetes environment. In this tutorial, you will learn how to deploy a highly scalable Elasticsearch and Kibana stack using YAML manifests obtained from a GitHub repository.
First, clone the repository containing the required YAML manifests. Then prepare the cluster by removing the control-plane taint (so pods can be scheduled on that node), create the "efk" namespace, and set the current context to use that namespace. Execute the following commands:
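(The repository URL is not reproduced in this guide, so replace the placeholder below with the actual URL. The taint key assumes a kubeadm-provisioned cluster, and the node name controlplane matches the nodes referenced later; adjust both to your environment.)

git clone <repository-url>
# Allow workloads to be scheduled on the control-plane node (kubeadm-style taint key assumed)
kubectl taint nodes controlplane node-role.kubernetes.io/control-plane-
# Create the namespace and make it the default for the current context
kubectl create namespace efk
kubectl config set-context --current --namespace=efk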
Next, change into the cloned repository and move to the elasticsearch-kibana/scaling-ek-stack directory. This folder contains four essential files, which you can list with:
ls -lrt
total 16
-rw-r--r-- 1 root 697 Aug 8 14:06 pv.yml
-rw-r--r-- 1 root 791 Aug 8 14:06 kibana.yml
-rw-r--r-- 1 root 1619 Aug 8 14:06 es.yml
-rw-r--r-- 1 root 207 Aug 8 14:06 config-map.yml
Next, review the pv.yml file. This manifest creates three PersistentVolumes (PVs), each pinned to a different node in the cluster (e.g., controlplane, node01, and node02). In enterprise-scale environments, dedicating specific nodes entirely to Elasticsearch storage can optimize performance and scalability.
These persistent volumes ensure that each Elasticsearch pod has its dedicated storage, which is crucial for data persistence in a distributed database architecture.
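As a rough sketch of what one such PersistentVolume can look like (the name, capacity, storage class, and host path below are assumptions; the repository's pv.yml is authoritative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv-node01              # hypothetical name; one PV per node
spec:
  capacity:
    storage: 5Gi                  # illustrative size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage # must match the StatefulSet's claim template
  local:
    path: /mnt/elasticsearch      # directory must already exist on the node
  nodeAffinity:                   # pins this PV to a specific node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node01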
The es.yml file defines a StatefulSet for Elasticsearch. This configuration deploys three replicas across different nodes using node affinity rules:
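(The authoritative manifest is in the repository; the sketch below only illustrates its likely shape. Names, the image tag, and heap settings are assumptions, and pod anti-affinity on the hostname label is used here as one common way to spread replicas across distinct nodes.)

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es                        # hypothetical name
spec:
  serviceName: es
  replicas: 3                     # one pod per node, one PV per pod
  selector:
    matchLabels:
      app: es
  template:
    metadata:
      labels:
        app: es
    spec:
      affinity:
        podAntiAffinity:          # keep replicas on distinct nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: es
              topologyKey: kubernetes.io/hostname
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0  # illustrative tag
          env:
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"  # keep the JVM heap within the memory limit
          ports:
            - containerPort: 9200
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          # Resource requests/limits omitted here; see the vertical-scaling note
          # near the end. Mounting the ConfigMap from config-map.yml over
          # elasticsearch.yml is also omitted for brevity.
  volumeClaimTemplates:           # one claim (and thus one PV) per replica
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: local-storage
        resources:
          requests:
            storage: 5Gi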
By configuring three replicas, you ensure that Elasticsearch pods are distributed across distinct nodes. The accompanying persistent volumes from the previous step guarantee data persistence for each replica. If you plan to scale out further, remember that each additional replica must be paired with its own persistent volume.
When scaling horizontally, ensure that your infrastructure can support the increased number of persistent volumes and consider adjusting resource limits to prevent issues such as out-of-memory errors.
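As a sketch, assuming the StatefulSet is named es and runs in the efk namespace (both assumptions), scaling out to a fourth replica looks like this; add a matching PersistentVolume to pv.yml first, or the new pod's claim will stay unbound:

kubectl -n efk scale statefulset es --replicas=4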
The kibana.yml file in the same directory defines the Kibana deployment, and the config-map.yml file sets up the ConfigMap consumed by the Elasticsearch cluster.
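For orientation, minimal sketches of what these two manifests typically contain follow; the names, image tag, and settings are assumptions (the repository files are authoritative), and Kibana is pointed at a hypothetical es Service on port 9200:

apiVersion: v1
kind: ConfigMap
metadata:
  name: es-config                 # hypothetical name
data:
  elasticsearch.yml: |
    cluster.name: es-cluster      # illustrative settings
    network.host: 0.0.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana:7.17.0  # illustrative tag
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://es:9200   # assumes an "es" Service in the same namespace
          ports:
            - containerPort: 5601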
Deploy the stack by applying all four manifests (for example, with kubectl apply -f . from this directory). During the rollout, some Elasticsearch pods may initially fail for reasons such as out-of-memory errors; because each Elasticsearch pod is tied to its own persistent volume, data integrity is preserved even across restarts. The Kibana service runs as an independent pod. After allowing time for all services to initialize, verify that each pod is running properly:
kubectl get pods
A correctly deployed stack should return a result similar to:
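(Pod names below are illustrative; the exact names depend on the StatefulSet and Deployment names in the manifests. A non-zero restart count on an Elasticsearch pod is consistent with the transient failures noted above.)

NAME                     READY   STATUS    RESTARTS   AGE
es-0                     1/1     Running   1          3m
es-1                     1/1     Running   0          2m
es-2                     1/1     Running   0          2m
kibana-7f9c6b7d9-abcde   1/1     Running   0          3m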
Scaling Considerations for Elasticsearch and Kibana
This demonstration emphasizes two critical aspects of scaling:
Horizontal Scaling
  Description: Increase or decrease the number of replicas in the Elasticsearch StatefulSet to add or remove nodes from the cluster.
  Consideration: Each replica needs an associated persistent volume.

Vertical Scaling
  Description: Adjust resource allocations (CPU and memory) in the YAML manifests to meet workload demands or address performance issues such as out-of-memory errors.
  Consideration: Update resource requests and limits accordingly.
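For vertical scaling, the container spec in es.yml is the place to adjust. A hypothetical resources block follows, with values that should be tuned to your workload (the Elasticsearch JVM heap is conventionally kept at roughly half the container's memory limit):

resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 2Gi

After editing the manifest, reapply it (for example, kubectl apply -f es.yml) so the StatefulSet rolls the change out to each pod.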
Utilizing these scaling strategies provides a robust framework for adapting Elasticsearch and Kibana deployments to growing data and query workloads. That concludes our guide on scaling Elasticsearch and Kibana in Kubernetes. Happy deploying!