Why can't my VPC-native pods connect to my Droplets?
Validated on 5 Feb 2025 • Last edited on 17 Apr 2025
VPC-native DOKS clusters communicate with outside resources differently than DOKS clusters using legacy networking.
VPC-native networking allows for communication directly from pods to Droplets and other resources within VPC networks. For this to work, the Droplets need to have the necessary routes enabled.
If your pods in VPC-native clusters are unable to connect to Droplets created on or before 2 October 2024, but can connect with `hostNetwork: true`, you may need to enable route injection for the Droplets.
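One way to confirm this symptom is to retry the connection from a temporary pod that uses the node's network stack. If the Droplet is reachable from this pod but not from a normal pod, missing routes on the Droplet are the likely cause. The pod name (`hostnet-test`), image, and target IP below are placeholders; substitute your own Droplet's VPC IP:

```yaml
# Hypothetical test pod; replace 10.2.0.4 with your Droplet's VPC IP.
apiVersion: v1
kind: Pod
metadata:
  name: hostnet-test
spec:
  hostNetwork: true        # use the node's network stack, so traffic leaves with the node IP
  restartPolicy: Never
  containers:
    - name: curl
      image: curlimages/curl
      args: ["--connect-timeout", "1", "10.2.0.4"]
```

Apply it with `kubectl apply -f hostnet-test.yaml`, then inspect the result with `kubectl logs hostnet-test`.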
Explanation of VPC-Native vs Legacy Cluster Networking
VPC-native DOKS clusters work differently than DOKS clusters on legacy networking.
One key difference between VPC-native and legacy DOKS cluster networking is that outbound connections from a VPC-native cluster to VPC resources do not have their source IP rewritten to the node IP. In practice, this means that outside resources see the pod IP rather than the node IP, and outside connections can be made directly to resources inside a cluster, such as a connection from a Droplet to a pod.
Networking Example
The following example illustrates these networking differences in more detail.
This example VPC contains two Droplets: one with route injection configured, and one without.
| Droplet Name | Route Injection | Droplet IP |
|---|---|---|
| `native-vpc-droplet` | Yes | `10.2.0.2` |
| `legacy-vpc-droplet` | No | `10.2.0.4` |
The following output of `ip route` shows the routing table of `native-vpc-droplet`:

```
default via <GATEWAY-IP> dev eth0 proto static
10.0.0.0/8 via 10.2.0.1 dev eth1 metric 101 mtu 1500
10.2.0.0/24 dev eth1 proto kernel scope link src 10.2.0.2
10.16.0.0/16 dev eth0 proto kernel scope link src 10.16.0.5
167.99.80.0/20 dev eth0 proto kernel scope link src <GATEWAY-IP>
172.16.0.0/12 via 10.2.0.1 dev eth1 metric 101 mtu 1500
192.168.0.0/16 via 10.2.0.1 dev eth1 metric 101 mtu 1500
```
The routes for `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16` (lines 2, 6, and 7) are the routes configured by route injection. `legacy-vpc-droplet` does not have these routes.
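The injected routes cover the three RFC 1918 private ranges. As a rough sketch of which destinations those routes match, the prefix check can be expressed in plain bash (the helper name `matches_injected_route` is ours, not a DigitalOcean tool):

```shell
#!/usr/bin/env bash
# Sketch: check whether a destination IPv4 address falls inside one of the
# three prefixes added by route injection: 10.0.0.0/8, 172.16.0.0/12,
# and 192.168.0.0/16.
matches_injected_route() {
  local a b c d
  IFS=. read -r a b c d <<< "$1"
  [ "$a" -eq 10 ] && return 0
  [ "$a" -eq 172 ] && [ "$b" -ge 16 ] && [ "$b" -le 31 ] && return 0
  [ "$a" -eq 192 ] && [ "$b" -eq 168 ] && return 0
  return 1
}

matches_injected_route 10.150.0.103 && echo "routed over eth1 (VPC)" \
  || echo "falls through to the default route"
# prints "routed over eth1 (VPC)"
```

This is why a pod IP such as `10.150.0.103` is sent over `eth1` into the VPC on `native-vpc-droplet`, while a Droplet without these routes sends the same traffic to its default gateway.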
This example also has two DOKS clusters in the same VPC: one configured with VPC-native networking and one without.
| Cluster Name | VPC-Native | Pod Subnet | Node Subnet |
|---|---|---|---|
| `vpc-native` | Yes | `10.150.0.0/16` | `10.2.0.0/24` |
| `vpc-legacy` | No | `10.244.0.0/16` | `10.2.0.0/24` |
Finally, each cluster contains one pod: one on the `vpc-native` cluster and one on the `vpc-legacy` cluster.
| Pod Name | Cluster Name | Pod IP | Node IP |
|---|---|---|---|
| `vpc-native-pod` | `vpc-native` | `10.150.0.103` | `10.2.0.3` |
| `vpc-legacy-pod` | `vpc-legacy` | `10.244.0.86` | `10.2.0.5` |
If `vpc-native-pod` (`10.150.0.103`) attempts to reach `native-vpc-droplet` (`10.2.0.2`) with `curl --connect-timeout 1 native-vpc-droplet`, the connection succeeds and `native-vpc-droplet` sees an inbound connection from `10.150.0.103` (the pod IP):

```
I am a Droplet
10.150.0.103 - - [03/Feb/2025 12:29:12] "GET / HTTP/1.1" 200 -
```
However, if the pod attempts to reach `legacy-vpc-droplet` with `curl --connect-timeout 1 legacy-vpc-droplet`, the connection fails instead:

```
curl: (28) Failed to connect to legacy-vpc-droplet port 80 after 1000 ms: Timeout was reached
```
The output of `tcpdump -i eth1 -s 0 'tcp port http'` on `legacy-vpc-droplet` shows that the packet does reach the Droplet:

```
12:35:51.865393 IP 10.150.0.103.44260 > legacy-vpc-droplet.http
```
The connection fails because `legacy-vpc-droplet` does not have the necessary routes to handle a connection from `10.150.0.103`. Instead, it sends the reply via its default gateway (towards the internet, rather than back through the VPC).
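You can confirm this from the Droplet itself with `ip route get`, which reports the route the kernel would choose for a given destination. On a Droplet without route injection, a pod IP such as `10.150.0.103` resolves to the default route on the public interface rather than to `eth1`:

```shell
# On legacy-vpc-droplet: ask the kernel which route it would use to reply
# to the pod. Without injected routes, the lookup falls through to the
# default gateway on eth0 instead of the VPC gateway on eth1.
ip route get 10.150.0.103
```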
Related Topics
- Enable DNS caching, use non-shared machine types for the cluster, and scale out or reduce DNS traffic.
- Edit the ConfigMap which nginx uses to enable PROXY protocol.