
Private Google Access: a DNS Adjustment That Helped Reduce Egress Costs

Published on February 07, 2026, 14:00 UTC · 4 minutes

I enabled Private Google Access (PGA) so our private GKE workloads could reach Google APIs without public node IPs. Everything “worked,” but when I dug into billing and flow logs, I kept seeing Google API traffic (notably storage.googleapis.com) classified as PUBLIC_IP connectivity, and I was getting surprisingly high “carrier peering / egress” charges.

This is my personal cluster, which I pay for and experiment with, so I want to cut any excess charges. The increase in costs correlated with an increase in GCS uploads, which did not make sense. I enabled flow logs for the VPC and started looking at the traffic.
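For reference, turning on flow logs is a small change on the subnetwork itself. A Terraform sketch (the subnet name, CIDR, region, and sampling rate below are placeholders/assumptions; tune sampling to your own cost tolerance, since flow logs themselves aren't free):

```hcl
# Hypothetical subnet with VPC flow logs enabled
resource "google_compute_subnetwork" "gke_subnet" {
  name          = "gke-subnet"       # placeholder
  ip_cidr_range = "10.10.0.0/20"     # placeholder
  region        = "us-central1"
  network       = google_compute_network.vpc.self_link

  log_config {
    aggregation_interval = "INTERVAL_5_MIN"
    flow_sampling        = 0.5                     # sample half the flows
    metadata             = "INCLUDE_ALL_METADATA"  # keeps connectivity classification
  }
}
```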

vpc-flow-logs-1

The above image shows that traffic from the cluster to storage.googleapis.com was going out over the public internet, which was not what I expected with PGA enabled. I expected it to route privately over Google's internal network.

vpc-flow-logs-2

The detailed view of the flow logs confirmed that the traffic was indeed going to Google's public VIPs.
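If you want to reproduce this in Logs Explorer, a filter along these lines surfaces the relevant flows (field names follow the VPC Flow Logs record format; restricting to destination port 443 is my assumption, to focus on HTTPS API calls):

```
resource.type="gce_subnetwork"
log_id("compute.googleapis.com/vpc_flows")
jsonPayload.connection.dest_port=443
```

From there, checking `jsonPayload.connection.dest_ip` against Google's public VIPs versus 199.36.153.8–11 tells you which path each flow took.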

The key realization: PGA doesn’t change DNS. By default, *.googleapis.com resolves to Google’s public VIPs. PGA can still route that privately in many cases, but if you’re trying to reduce costs and make routing more predictable, it helps to use Google’s private API VIPs via private.googleapis.com and point DNS there.

The nice part: you don’t have to change every pod or app. You can do this once, at the VPC level, using Cloud DNS private zones. After applying the change, a quick in-cluster check showed:

  • storage.googleapis.com canonical name → private.googleapis.com
  • IPs → 199.36.153.8–11
  • API calls still worked as expected

Below is the minimal Terraform snippet I used.

Terraform: Private DNS override for *.googleapis.com to private.googleapis.com

This is VPC-wide: all workloads using the VPC resolver (including GKE pods) will pick it up automatically.

# Private zone for googleapis.com visible only inside your VPC
resource "google_dns_managed_zone" "googleapis_private" {
  name        = "googleapis-private"
  dns_name    = "googleapis.com."
  visibility  = "private"
  description = "Route *.googleapis.com to private.googleapis.com VIPs for Private Google Access"

  private_visibility_config {
    networks {
      # If you're using terraform-google-network module:
      # network_url = module.gcp_vpc.network_self_link
      network_url = google_compute_network.vpc.self_link
    }
  }
}


# private.googleapis.com -> Private Google Access VIPs (IPv4)
resource "google_dns_record_set" "private_googleapis_a" {
  managed_zone = google_dns_managed_zone.googleapis_private.name
  name         = "private.googleapis.com."
  type         = "A"
  ttl          = 300
  rrdatas      = ["199.36.153.8", "199.36.153.9", "199.36.153.10", "199.36.153.11"]
}


# Make *.googleapis.com resolve via private.googleapis.com
# This includes storage.googleapis.com, www.googleapis.com, etc.
resource "google_dns_record_set" "wildcard_googleapis_cname" {
  managed_zone = google_dns_managed_zone.googleapis_private.name
  name         = "*.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["private.googleapis.com."]
}

If you want a smaller blast radius

Instead of the wildcard, start with only what you care about:

resource "google_dns_record_set" "storage_googleapis_cname" {
  managed_zone = google_dns_managed_zone.googleapis_private.name
  name         = "storage.googleapis.com."
  type         = "CNAME"
  ttl          = 300
  rrdatas      = ["private.googleapis.com."]
}

Quick verification from GKE:

kubectl run gcs-check --rm -it --restart=Never --image=curlimages/curl --command -- \
  sh -c "nslookup storage.googleapis.com && curl -I https://storage.googleapis.com"

You should see storage.googleapis.com resolve as a CNAME to private.googleapis.com, returning IPs in 199.36.153.8–11.
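If you'd rather script the check than eyeball nslookup output, a small Python sketch (my own helper, not part of any Google SDK) can assert that a hostname only resolves to the Private Google Access range; 199.36.153.8–11 is exactly the /30 network 199.36.153.8/30:

```python
import ipaddress
import socket

# The private.googleapis.com VIPs (199.36.153.8-11) form the /30 below.
PRIVATE_GOOGLEAPIS = ipaddress.ip_network("199.36.153.8/30")

def resolves_privately(hostname: str) -> bool:
    """True if every IPv4 A record for hostname is a Private Google Access VIP."""
    addrs = {info[4][0] for info in socket.getaddrinfo(hostname, 443, family=socket.AF_INET)}
    return all(ipaddress.ip_address(a) in PRIVATE_GOOGLEAPIS for a in addrs)

# Sanity-check the range itself: exactly the four documented VIPs.
assert ipaddress.ip_address("199.36.153.8") in PRIVATE_GOOGLEAPIS
assert ipaddress.ip_address("199.36.153.11") in PRIVATE_GOOGLEAPIS
assert PRIVATE_GOOGLEAPIS.num_addresses == 4
```

Run `resolves_privately("storage.googleapis.com")` from a pod inside the cluster; from outside the VPC it will (correctly) return False, since the private zone is only visible to the VPC's resolver.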

The images below show DNS resolution after the change: before, storage.googleapis.com resolved to Google's public VIPs; after, it resolves to private.googleapis.com and the private VIPs.

vpc-flow-logs-3

The detailed view of the flow logs after the change shows that traffic is now going to the private VIPs, which should reduce egress costs and make routing more predictable.

vpc-flow-logs-4

Notes

This doesn't replace PGA: you still need PGA enabled on the subnet (private_ip_google_access on the subnetwork resource, or subnet_private_access = true in the network module) and private nodes.
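For completeness, here's what that looks like with the terraform-google-network module referenced earlier (a sketch; the subnet name, CIDR, and region are placeholders, and I've elided the module's other required inputs):

```hcl
module "gcp_vpc" {
  source  = "terraform-google-modules/network/google"
  # ... project_id, network_name, and other inputs elided

  subnets = [
    {
      subnet_name           = "gke-subnet"    # placeholder
      subnet_ip             = "10.10.0.0/20"  # placeholder
      subnet_region         = "us-central1"
      subnet_private_access = "true"          # PGA on the subnet; the DNS zone alone is not enough
    },
  ]
}
```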

That’s it - a small DNS adjustment, but it made our Google API routing more predictable and helped reduce egress surprises.
