In this guide, we explore a common Kubernetes issue: missing pods. You will learn how to deploy applications in the staging namespace and troubleshoot why the expected pods fail to start, focusing on deployment issues and resource quotas.
We begin by creating a deployment for a simple web application named “api” in the staging namespace. The deployment manifest specifies that five replicas should run.

Before applying the new deployment, inspect the current resources in the staging namespace. At this point, only one deployment, “data-processor,” exists with three running pods:
controlplane ~ ➜ cat api-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: staging
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: kodekloud/webapp-color
        ports:
        - containerPort: 8080
controlplane ~ ➜ k get all -n staging
NAME                                READY   STATUS    RESTARTS   AGE
pod/data-processor-75597df6-6kkst   1/1     Running   0          2m41s
pod/data-processor-75597df6-bzd1q   1/1     Running   0          2m41s
pod/data-processor-75597df6-gnthx   1/1     Running   0          2m41s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/data-processor   3/3     3            3           2m41s

NAME                                      DESIRED   CURRENT   READY   AGE
replicaset.apps/data-processor-75597df6   3         3         3       2m41s
Apply the API deployment with the following command:
controlplane ~ ➜ k apply -f api-deployment.yml
deployment.apps/api created
Even though five replicas were specified for the API deployment, only two pods are running. There are no pods in a pending or container-creating state, which suggests that node resource unavailability or taints are not the issue. The deployment is attempting to create additional pods, but three replicas remain unavailable.
controlplane ~ ➜ k describe deployment -n staging api
Name:                   api
Namespace:              staging
CreationTimestamp:      Sat, 18 May 2024 00:08:41 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=api
Replicas:               5 desired | 2 updated | 2 total | 2 available | 3 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=api
  Containers:
   api:
    Image:        kodekloud/webapp-color
    Port:         8080/TCP
    Host Port:    0/TCP
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
  Progressing      True    ReplicaSetUpdated
OldReplicaSets:  <none>
NewReplicaSet:   api-7548899bdb (2/5 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  85s   deployment-controller  Scaled up replica set api-7548899bdb to 5
The events show that the deployment controller scaled the replica set to five, yet the ReplicaFailure condition reports a FailedCreate error. Describing the replica set itself (for example, “k describe rs -n staging api-7548899bdb”) reveals that pod creation is being rejected by a resource quota.
This error indicates that the “pod-quota” resource quota is capping the number of pods in the staging namespace at five. Since there are already five pods running (including those from the data processor deployment), the API deployment cannot create additional pods.
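One way to clear the bottleneck is to raise the quota’s pod limit. The exact contents of the quota are not shown above, so the manifest below is a minimal sketch that assumes “pod-quota” restricts only the pod count; the new limit of 10 is an illustrative choice that leaves room for all five API replicas:

```yaml
# Hypothetical reconstruction of the "pod-quota" ResourceQuota,
# with the pod limit raised from 5 to 10 (the new value is an assumption).
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pod-quota
  namespace: staging
spec:
  hard:
    pods: "10"
```

After applying the updated quota (for example, “k apply -f pod-quota.yml”), the deployment controller should create the three missing API pods on its own; no restart of the deployment is needed.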
Next, deploy another web application, “analytics,” with a single replica. With the updated namespace quota, this deployment should not encounter quota issues.
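The analytics manifest itself is not reproduced here, so the following is a sketch that follows the same conventions as the API deployment; the image and container port are assumptions carried over from the earlier example:

```yaml
# Hypothetical analytics Deployment; image and port mirror the api example.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: analytics
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: analytics
  template:
    metadata:
      labels:
        app: analytics
    spec:
      containers:
      - name: analytics
        image: kodekloud/webapp-color
        ports:
        - containerPort: 8080
```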
In this guide, we addressed two common issues that can lead to missing pods in a Kubernetes cluster:
A resource quota that restricts the creation of new pods in a namespace.
A missing dependency—in this case, a required service account.
Check resource quotas imposed on the namespace if pods are not being created as expected.
Verify that all required service accounts and other dependencies are present.
Use “kubectl describe” to access detailed event logs and error messages.
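The checklist above maps to a handful of kubectl commands. This is a sketch against a live cluster; names such as “pod-quota” follow the example in this guide, and the service account name is a placeholder:

```
# Inspect quotas in the namespace (compare Used vs. Hard limits)
kubectl get resourcequota -n staging
kubectl describe resourcequota pod-quota -n staging

# Confirm required service accounts exist; create one if missing
kubectl get serviceaccounts -n staging
kubectl create serviceaccount <name> -n staging

# Drill into deployment and replica set events for error messages
kubectl describe deployment api -n staging
kubectl describe replicaset -n staging
```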
By increasing the pod quota and creating the missing service account, the deployments functioned as intended, ensuring proper pod creation in the staging namespace. For more in-depth Kubernetes troubleshooting, consider reviewing the Kubernetes Documentation for additional best practices.