This article demonstrates configuring node affinity in Kubernetes to manage pod scheduling based on node labels.
In this lesson, we work through a practical exercise in node affinity configuration. You will learn how to identify node labels, assign a new label to a node, and configure deployments with node affinity rules that restrict pod scheduling to specific nodes.
```
root@controlplane:~# kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
controlplane   Ready    control-plane,master   15m   v1.20.0
node01         Ready    <none>                 14m   v1.20.0
```
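Before the affinity rule can take effect, the target node must carry the matching label. A minimal sketch, assuming node01 is the intended worker node:

```shell
# Label node01 so that pods requiring color=blue can be scheduled on it
kubectl label node node01 color=blue

# Confirm the label is present
kubectl get node node01 --show-labels
```

If the label is ever applied incorrectly, it can be removed again with `kubectl label node node01 color-`.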
Since there are no taints on the nodes, pods can be scheduled on any node by default.

Now, update the blue deployment to enforce node affinity. Edit the deployment’s pod specification to include a node affinity rule that restricts pods to nodes labeled with color=blue. Integrate the following YAML snippet under the spec.template.spec section:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
  labels:
    app: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blue
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
  template:
    metadata:
      labels:
        app: blue
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: Always
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
```
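There are two common ways to apply this change (a sketch; either works for this lab):

```shell
# Option 1: edit the live deployment object directly in your editor
kubectl edit deployment blue

# Option 2: save the full manifest above as blue.yaml and apply it
kubectl apply -f blue.yaml
```

Either way, changing the pod template triggers a rolling update, so the existing pods are replaced by new ones that honor the affinity rule.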
Save your changes. To verify that the pods are scheduled on node01, execute:
```
root@controlplane:~# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
blue-566c768bd6-f8xzm   1/1     Running   0          16s   10.244.1.5   node01   <none>           <none>
blue-566c768bd6-jsz95   1/1     Running   0          9s    10.244.1.7   node01   <none>           <none>
blue-566c768bd6-sf9dk   1/1     Running   0          13s   10.244.1.6   node01   <none>           <none>
```
All blue deployment pods are now correctly placed on node01, as dictated by the node affinity rule.
Step 4: Create the “red” Deployment with Node Affinity for the Control Plane
In the next step, we create a deployment named red using the nginx image with two replicas. The deployment is configured to run its pods exclusively on the control plane node by leveraging the node-role.kubernetes.io/master label.

First, generate the deployment YAML file using a dry run:
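The dry-run command itself is not shown in the capture; a typical invocation (assuming a kubectl version recent enough to support `--replicas` on `create deployment`) would be:

```shell
# Generate a starting manifest without creating anything on the cluster
kubectl create deployment red --image=nginx --replicas=2 \
  --dry-run=client -o yaml > red.yaml
```

You can then open red.yaml and add the node affinity rule under spec.template.spec before creating the deployment.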
Save the changes and create the deployment by running:
```
root@controlplane:~# kubectl create -f red.yaml
deployment.apps/red created
```
Verify that the pods of the red deployment are scheduled on the control plane by checking their node assignments:
```
root@controlplane:~# kubectl get pods -o wide
```
Using a dry run to generate deployment YAML files allows you to safely modify pod specifications—such as adding node affinity—before applying the changes to your cluster.
The image below illustrates the process for creating the red deployment with the nginx image, two replicas, and node affinity targeting the control plane node by checking for the label node-role.kubernetes.io/master:
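Because the affinity rule checks for the presence of the node-role.kubernetes.io/master label rather than a specific value, the `Exists` operator is used instead of `In`. A sketch of the relevant portion of red.yaml, under spec.template.spec:

```yaml
# Node affinity for the red deployment: schedule only on nodes that
# carry the node-role.kubernetes.io/master label (any value)
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: Exists
```

Note that because this lab's nodes carry no taints, as observed earlier, no toleration is needed for the pods to land on the control plane; on a default kubeadm cluster, a matching toleration would also be required.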
In this lab, you learned how to:

- Identify node labels with the `kubectl describe node` command.
- Apply custom labels to nodes using the `kubectl label` command.
- Configure node affinity in a deployment to restrict pod scheduling based on node labels.
- Generate and modify deployment YAML files using a dry run to enforce node affinity settings for both worker nodes and control plane nodes.
This completes the lab on node affinity configuration in Kubernetes. For further reading and more advanced scenarios, you may refer to the Kubernetes documentation.