Google Kubernetes Engine (GKE) delivers a high-performance networking stack built on Google’s global backbone. By integrating the Container Network Interface (CNI) plugin, GKE provisions virtual networks, assigns IP addresses to pods, and ensures low-latency communication both inside and across clusters.
The image is an overview of GKE Networking, highlighting components like Global Network Infrastructure and Container Network Interface, and features such as low-latency communication, virtual network creation, IP address assignment, and pod communication.

Core Networking Features in GKE

GKE includes built-in load balancing, network policies, and Ingress controllers to manage traffic flow:
| Feature | Description | Example |
| --- | --- | --- |
| Service Load Balancing | Automatically provisions internal/external load balancers | `kubectl expose deployment nginx --port=80` |
| Network Policies | Define pod-to-pod and pod-to-external rules | `kubectl apply -f network-policy.yaml` |
| Ingress Controllers | HTTP(S) routing and host/path-based traffic rules | `kubectl apply -f ingress-controller.yaml` |
The image is an overview of GKE Networking, highlighting components such as Load Balancing, Network Policies, Ingress Controllers, and Traffic Management.
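The contents of network-policy.yaml are not shown in this lesson; as a hedged sketch, a minimal policy that restricts ingress to the nginx pods above might look like the following (the role=frontend label is an assumption used only for illustration):

```yaml
# network-policy.yaml — illustrative sketch; pod labels are assumed, not taken from this lesson
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx            # the pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend   # only pods carrying this label may connect
      ports:
        - protocol: TCP
          port: 80
```

Apply it with `kubectl apply -f network-policy.yaml`; note that enforcement requires network policy to be enabled on the cluster (GKE Dataplane V2 or Calico).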

Cluster Connectivity Requirements

GKE clusters run within a Google Cloud VPC, providing network isolation and direct access to Google services such as BigQuery and Cloud Storage. You can deploy:
  • Public clusters: Nodes have public IP addresses.
  • Private clusters: Nodes use only private IPs and require Cloud NAT or a proxy for internet egress.
The image illustrates GKE networking requirements, showing a diagram with public and private network components, and mentions domains like .googleapis.com and .gcr.io along with egress and firewall rules.
If you add high-priority firewall rules that block egress, you must explicitly allow:
  • *.googleapis.com
  • *.gcr.io
  • The control plane IP address
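For reference, a private cluster is typically created with flags along these lines (a hedged sketch; the cluster name, region, and control plane CIDR are placeholder values, not taken from this lesson):

```bash
# Illustrative sketch — name, region, and CIDR are placeholders
gcloud container clusters create my-private-cluster \
  --region us-central1 \
  --enable-ip-alias \
  --enable-private-nodes \
  --master-ipv4-cidr 192.168.0.0/28
```

Because the nodes receive only internal IPs, outbound traffic to the public internet needs Cloud NAT, while access to Google APIs can remain on private paths via Private Google Access.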

IP Address Allocation in GKE

Proper IP planning ensures each component has a unique address space. GKE allocates addresses for:
The image illustrates networking inside a cluster, showing different types of IP addresses (Node, Pod, Service, and Control Plane) associated with GKE (Google Kubernetes Engine).
  1. Node IP Addresses: Assigned from the VPC to enable kubelet, kube-proxy, and system components to communicate with the API server.
  2. Pod IP Addresses:
    • By default, each node gets a /24 CIDR block for pod IPs.
    • Use the flexible pod range feature to adjust the CIDR size per node pool (see the sketch after this list).
The image illustrates networking inside a cluster, showing a diagram with "GKE Standard," "Pod IP Addresses," "CIDR: 23," and "Pods: 256."
A /23 block yields 512 addresses (up to 256 pods), though GKE Standard limits pods per node to 110 by default.
  3. Service IP Addresses: Each Service receives a stable ClusterIP from a dedicated pool.
  4. Control Plane IP Address: May be public or private, depending on cluster settings and version.
The image is a diagram titled "Networking Inside the Cluster," showing control plane IP addresses with options for public and private IPs, and a reference to GKE (Google Kubernetes Engine).
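As a hedged example of how these ranges are chosen up front (the cluster name and all CIDR values below are assumptions for illustration), the pod range, Service range, and per-node pod limit can be set at cluster creation:

```bash
# Illustrative sketch — cluster name and CIDR ranges are assumed values
# --cluster-ipv4-cidr         : pod IP range
# --services-ipv4-cidr        : Service (ClusterIP) range
# --default-max-pods-per-node : drives the per-node pod CIDR size
gcloud container clusters create ip-demo-cluster \
  --enable-ip-alias \
  --cluster-ipv4-cidr 10.16.0.0/14 \
  --services-ipv4-cidr 172.16.0.0/20 \
  --default-max-pods-per-node 110
```

Lowering the per-node pod limit lets GKE hand each node a smaller pod CIDR (for example, a /25 for up to 64 pods), which is the flexible pod range behavior described above.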

Pod Networking: A Conference Analogy

Think of a large conference with multiple breakout sessions. Each session has a dedicated speaker, and every participant has a unique badge number.
The image is an analogy for POD networking, depicting a diagram with a "Dedicated Speaker" at the top, "Attendees" on the left, and "Unique IDs" on the right, connected by arrows.
  • Sessions (Pods): Units of work; each gets a unique IP “badge.”
  • Rooms (Nodes): Physical hosts for sessions.
  • Badges (IP Addresses): Ensure messages reach the correct session.
The image illustrates POD networking in Google Kubernetes Engine (GKE), comparing it to a conference and breakout room setup, with elements like containers and a network interface.
Within each pod, all containers share (see the sketch below):
  • A pod IP from the node’s CIDR block.
  • A network namespace with a virtual Ethernet (veth) pair linked to the node’s eth0.
  • Common volumes for storage.
The image illustrates pod networking in Google Kubernetes Engine (GKE), showing two pods with network interfaces, containers, and volumes, connected for effective exchange.
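A short sketch makes this sharing concrete: the two containers below (names and images are illustrative choices, not from the lesson) reach each other over localhost because they share the pod's network namespace, and both mount the same volume:

```yaml
# Illustrative Pod sketch — container names and images are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: shared-namespace-demo
spec:
  volumes:
    - name: shared-data
      emptyDir: {}          # common volume visible to both containers
  containers:
    - name: web
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: sidecar
      image: busybox
      # Reaches the web container over localhost — same network namespace
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 30; done"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

From outside the pod, both containers are reached through the single pod IP; inside, the port space is shared, so two containers cannot bind the same port.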
When Kubernetes schedules a pod:
  1. It creates a network namespace on the node.
  2. Attaches the pod’s veth interface to the node network.
  3. Routes traffic seamlessly to and from the pod.
The image illustrates a diagram of pod networking in Google Kubernetes Engine (GKE), showing the connection between containers, pod network interfaces, node interfaces, and the internet.
GKE’s CNI implementation orchestrates this networking; your choice of CNI can influence intra-cluster performance and features.
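To see the outcome of this process on a running cluster, a couple of read-only commands (standard kubectl, nothing lesson-specific) show each pod's IP and the pod CIDR its node was given:

```bash
# Pod IPs and the nodes hosting them
kubectl get pods -o wide

# Per-node pod CIDR ranges (exposed on the Node spec)
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
```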

Service Networking

Kubernetes Services group pods using label selectors, providing:
  • A stable Cluster IP.
  • A DNS entry for easy discovery.
  • Built-in load balancing across healthy pods.
| Service Type | Description | Example |
| --- | --- | --- |
| ClusterIP | Internal load balancing within the cluster | `kubectl expose deployment app --port=80` |
| NodePort | Exposes the Service on a port on each node | `type: NodePort` |
| LoadBalancer | Provisions a GCP external load balancer | `type: LoadBalancer` |
The image is a diagram illustrating service networking in Google Kubernetes Engine (GKE), showing components like ClusterIP, load balancer, pods, and control plane, emphasizing high availability and fault tolerance.
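As a hedged sketch of the LoadBalancer case (the Service name, label, and ports are assumptions), a manifest equivalent to the table's examples looks like this:

```yaml
# Illustrative Service sketch — name, selector label, and ports are assumed values
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer        # GKE provisions an external load balancer
  selector:
    app: web                # groups pods by label
  ports:
    - port: 80              # stable Service port on the ClusterIP
      targetPort: 8080      # container port on the selected pods
```

Omitting `type` (or setting `ClusterIP`) keeps the Service internal; `NodePort` additionally exposes it on a high port of every node.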

kube-proxy and Traffic Flow

GKE deploys kube-proxy as a DaemonSet so each node runs an instance that:
  1. Watches the Kubernetes API for Service-to-pod endpoint mappings.
  2. Updates iptables rules (DNAT) on the node.
  3. Routes Service IP traffic to healthy pod IPs.
When a client pod connects to 172.16.0.100:80 (the Service's ClusterIP), kube-proxy:
  • Selects a healthy endpoint (e.g., 10.16.2.102:8080).
  • Applies a DNAT rule to forward the packet.
Clients remain unaware of pod IPs or node topology—kube-proxy handles routing transparently.
The image illustrates the networking flow of Kube-Proxy in Google Kubernetes Engine (GKE), showing how traffic is routed through nodes, IP tables, and pod network interfaces. It includes details like source and destination IP addresses and ports.
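In practice kube-proxy programs dedicated chains (KUBE-SERVICES, KUBE-SEP-*) and selects endpoints probabilistically, but the net effect for the flow above is equivalent to a DNAT rule along these lines (a simplified illustration, not the literal rule set):

```bash
# Simplified illustration of the DNAT effect — not kube-proxy's actual chains
iptables -t nat -A PREROUTING -p tcp -d 172.16.0.100 --dport 80 \
  -j DNAT --to-destination 10.16.2.102:8080
```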