# Why can't my VPC-native pods connect to my Droplets?

VPC-native DOKS clusters communicate with outside resources differently than DOKS clusters using legacy networking. [VPC-native networking](https://docs.digitalocean.com/products/kubernetes/details/features/index.html.md#VPC-Native-networking) allows pods to communicate directly with Droplets and other resources within VPC networks. For this to work, the Droplets need to have the necessary routes enabled.

If your pods in VPC-native clusters are unable to connect to Droplets created on or before 2 October 2024, but can connect with `hostNetwork: true`, you may need to [enable route injection for the Droplets](https://docs.digitalocean.com/products/networking/vpc/how-to/update-peering-routes/index.html.md).

## Explanation of VPC-Native vs Legacy Cluster Networking

VPC-native DOKS clusters work differently than DOKS clusters on legacy networking. One key difference is that outbound connections from a VPC-native cluster to VPC resources do not have the source IP replaced with the node IP. In practice, this means that outside resources see the pod IP rather than the node IP, and outside connections can be made directly to resources inside a cluster, such as a connection from a Droplet to a pod.

### Networking Example

The following example illustrates these networking differences in more detail. This example VPC contains two Droplets: one with [route injection configured](https://docs.digitalocean.com/products/networking/vpc/how-to/update-peering-routes/index.html.md), and one without.
| Droplet Name | Route Injection | Droplet IP |
|---|---|---|
| `native-vpc-droplet` | Yes | `10.2.0.2` |
| `legacy-vpc-droplet` | No | `10.2.0.4` |

The following output of `ip route` shows the routing table of `native-vpc-droplet`:

`native-vpc-droplet` routing table

```text
default via dev eth0 proto static
10.0.0.0/8 via 10.2.0.1 dev eth1 metric 101 mtu 1500
10.2.0.0/24 dev eth1 proto kernel scope link src 10.2.0.2
10.16.0.0/16 dev eth0 proto kernel scope link src 10.16.0.5
167.99.80.0/20 dev eth0 proto kernel scope link src
172.16.0.0/12 via 10.2.0.1 dev eth1 metric 101 mtu 1500
192.168.0.0/16 via 10.2.0.1 dev eth1 metric 101 mtu 1500
```

Lines 2, 6, and 7 (the `10.0.0.0/8`, `172.16.0.0/12`, and `192.168.0.0/16` routes via `10.2.0.1`) are the routes configured by route injection. `legacy-vpc-droplet` does not have these routes.

This example also has two DOKS clusters in the same VPC: one configured with VPC-native networking and one without.

| Cluster Name | VPC-Native | Pod Subnet | Node Subnet |
|---|---|---|---|
| `vpc-native` | Yes | `10.150.0.0/16` | `10.2.0.0/24` |
| `vpc-legacy` | No | `10.244.0.0/16` | `10.2.0.0/24` |

Finally, each cluster has one pod, located on the `vpc-native` and `vpc-legacy` cluster respectively.

| Pod Name | Cluster Name | Pod IP | Node IP |
|---|---|---|---|
| `vpc-native-pod` | `vpc-native` | `10.150.0.103` | `10.2.0.3` |
| `vpc-legacy-pod` | `vpc-legacy` | `10.244.0.86` | `10.2.0.5` |

If `vpc-native-pod` (`10.150.0.103`) attempts to reach `native-vpc-droplet` (`10.2.0.2`) with `curl --connect-timeout 1 native-vpc-droplet`, the connection succeeds and `native-vpc-droplet` sees an inbound connection from `10.150.0.103` (the pod IP).
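The route selection behind this behavior is a longest-prefix match against the Droplet's routing table. The following Python sketch is a simplified illustration only (not the actual kernel algorithm), with routes taken from the example `ip route` output above; it shows why a reply to the pod IP `10.150.0.103` leaves a Droplet with route injection via the VPC gateway, but leaves a Droplet without those routes via the default gateway:

```python
import ipaddress

def next_hop(dest, routes):
    """Pick the most specific (longest-prefix) route matching dest,
    falling back to the default route. Simplified illustration only."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, hop) for net, hop in routes
               if dest in ipaddress.ip_network(net)]
    if not matches:
        return "default gateway (public internet)"
    # Longest prefix wins, as in a real routing table lookup.
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

# Routes added by route injection on native-vpc-droplet
# (from the example routing table above).
injected_routes = [
    ("10.0.0.0/8", "10.2.0.1 (VPC gateway)"),
    ("172.16.0.0/12", "10.2.0.1 (VPC gateway)"),
    ("192.168.0.0/16", "10.2.0.1 (VPC gateway)"),
]

pod_ip = "10.150.0.103"  # vpc-native-pod

# With route injection, the reply to the pod matches 10.0.0.0/8:
print(next_hop(pod_ip, injected_routes))  # 10.2.0.1 (VPC gateway)

# Without those routes (as on legacy-vpc-droplet), nothing matches, so
# the reply heads toward the internet and the connection times out:
print(next_hop(pod_ip, []))  # default gateway (public internet)
```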
Output of `curl` to `native-vpc-droplet` on VPC-native pod

```text
I am a Droplet
```

VPC-native Droplet log

```text
10.150.0.103 - - [03/Feb/2025 12:29:12] "GET / HTTP/1.1" 200 -
```

However, if the pod attempts to reach `legacy-vpc-droplet` with `curl --connect-timeout 1 legacy-vpc-droplet`, the connection fails instead:

Output of `curl` to `legacy-vpc-droplet` on VPC-native pod

```text
curl: (28) Failed to connect to legacy-vpc-droplet port 80 after 1000 ms: Timeout was reached
```

The output of `tcpdump -i eth1 -s 0 'tcp port http'` shows that the packet does reach the Droplet:

Output of `tcpdump` command

```text
12:35:51.865393 IP 10.150.0.103.44260 > legacy-vpc-droplet.http
```

The connection fails because `legacy-vpc-droplet` does not have the necessary routes to handle a connection from `10.150.0.103`. Instead, it sends its reply via the default gateway (towards the internet rather than back through the VPC), so the connection never completes.

## Related Topics

[How to Troubleshoot CoreDNS Issues in DOKS Clusters](https://docs.digitalocean.com/support/how-to-troubleshoot-coredns-issues-in-doks-clusters/index.html.md): Gather information to resolve CoreDNS-related DNS problems in DOKS clusters.

[How to Troubleshoot Load Balancer Health Check Issues](https://docs.digitalocean.com/support/how-to-troubleshoot-load-balancer-health-check-issues/index.html.md): Health checks often fail due to firewalls or misconfigured backend server software.

[How can I improve the performance of cluster DNS?](https://docs.digitalocean.com/support/how-can-i-improve-the-performance-of-cluster-dns/index.html.md): Enable DNS caching, use non-shared machine types for the cluster, and scale out or reduce DNS traffic.