Lately, at Koyeb, I’ve been playing a bit more with Kubernetes, more specifically with ingresses and Google Cloud Platform load balancers.

Kubernetes was much more powerful than I knew

Until playing with the Nginx and GCE (Google Cloud Load Balancing) ingress classes, I had not realized that one of the key features of Kubernetes is its ability to interoperate with external APIs.

For example, consider this manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-world
            port:
              number: 60000

The annotation kubernetes.io/ingress.class: "gce" instructs Kubernetes to talk to Google APIs to provision an external L7 load balancer on Google Cloud.

What blew my mind at the time was realizing that I could apply native Kubernetes resources to the cluster and that it would do all kinds of non-Kubernetes plumbing in the background.

Exploring Google Cloud Load Balancing

With that in mind, I started to leverage more GCE features and found some handy ones.

“Container Native Load Balancing” allows GCE to route more finely

What Google calls “Container Native Load Balancing” is a fancy term to say that the load balancer has knowledge of pods. Given that each pod of a cluster has a dedicated IP address, the load balancer can manage targets more effectively: routing is direct, and health checks can be performed on a per-pod basis. This can be enabled by adding the annotation cloud.google.com/neg: '{"ingress": true}' to the Kubernetes Services where one wants it, as in the sketch below.
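
For instance, a minimal sketch of a Service carrying the annotation (the names and ports are illustrative; a fuller example appears later in the post):

apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # Ask GCE to create Network Endpoint Groups that target pods directly
    cloud.google.com/neg: '{"ingress": true}'
spec:
  # With NEGs, the Service does not need to be a NodePort; ClusterIP works
  type: ClusterIP
  selector:
    greeting: hello
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000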

Direct routing to pods, as presented by the Google Cloud documentation

Policies can be applied to GCE with Kubernetes manifests

All kinds of policies we usually set up on load balancers can be configured with Google-defined CRDs (Custom Resource Definitions): health checks, HTTP-to-HTTPS redirection, TLS certificates…

For example, an annotation and a FrontendConfig object can enable HTTP-to-HTTPS redirection for incoming requests:

The FrontendConfig custom resource:

apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT  # i.e., a 301 redirect

Annotation to put on an Ingress:

networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
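
Putting the two together, a sketch of an Ingress referencing the FrontendConfig above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"
    # Attach the redirection policy defined by the FrontendConfig above
    networking.gke.io/v1beta1.FrontendConfig: "my-frontend-config"
spec:
  defaultBackend:
    service:
      name: hello-world
      port:
        number: 60000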

A use case: simple OAuth2 with Identity-Aware Proxy

I noticed that there was a policy to put a Google OAuth wall in front of web applications. I tried it and found it pretty easy to set up. Here is an example configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # Ask Google to spawn a public load balancer
    kubernetes.io/ingress.class: "gce"
    # Expose the load balancer on the static IP referenced by the name "my-public-ip"
    # It must already be reserved on Google Cloud
    kubernetes.io/ingress.global-static-ip-name: "my-public-ip"
spec:
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-world
            port:
              number: 60000

---
# BackendConfig: a Google CRD which defines policies for services.
# This one ensures that a Google OAuth2 wall is enforced if clients do
# not present a valid token with sufficient permissions
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      # This Kubernetes secret must exist, see below
      secretName: iap-client-secret
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
  annotations:
    # Require Google OAuth2 to access that service
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
    # Enable container native load balancing (optional)
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  selector:
    greeting: hello
    version: one
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 50000
# There must, of course, be some kind of pod that we can route to
(...)
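
For reference, the iap-client-secret referenced by the BackendConfig holds the OAuth2 client credentials. It can be created along these lines, with the client ID and secret obtained when configuring the OAuth consent screen (the values here are placeholders):

kubectl create secret generic iap-client-secret \
  --from-literal=client_id="${OAUTH_CLIENT_ID}" \
  --from-literal=client_secret="${OAUTH_CLIENT_SECRET}"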

This ensures that users who are not logged in hit the infamous authentication wall when trying to access the hello-world webapp:

Google OAuth2 wall for requests which do not embed an ID token

I believe that achieving this behavior with only a few YAML manifests is pretty powerful. It is a very simple setup for a feature that can usually be cumbersome to configure and maintain. In particular, the integrations with both Google Identity and Access Management (IAM) and Kubernetes make it very flexible at a low complexity overhead:

  • The authentication policy is applied based on an annotation on Kubernetes Services. As such, it is easy to enable or disable the authentication wall on a service, and easy to know which services are protected.
  • We can assign permissions on a per-member and per-service basis, which allows for flexible access control inside an organization. For example, we could give access to the very privileged Service vault to admin@myorg.com only, and access to the less privileged Service backoffice to both admin@myorg.com and engineering@myorg.com, as in the sketch below.
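
For instance, with the gcloud CLI, such a binding can be granted along these lines (a sketch; the backend service name must first be fetched from Google Cloud, as discussed in the limitations below):

gcloud iap web add-iam-policy-binding \
  --resource-type=backend-services \
  --service="k8s1-11ae7c3d-k8snamespace-my-service-8-1e7c3861" \
  --member="user:engineering@myorg.com" \
  --role="roles/iap.httpsResourceAccessor"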

Limitations encountered

I am starting to realize that this article sounds like a big advertisement for Google Cloud. It is not: below, I go through the limitations I encountered using the GCE ingress class and Identity-Aware Proxy (IAP) on Kubernetes.

Some stuff needs to be done outside of Kubernetes

If you read the example closely, you may have noticed the need for a public IP address and OAuth2 credentials. Those are examples of resources that we need but cannot create inside of Kubernetes. The more advanced the features we want to use, the more likely it is that we cannot configure everything they need through Kubernetes manifests.

Most issues I encountered are solved by a few lines of Terraform. However, this definitely chips away at the simplicity of the setup.
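
For instance, the static IP referenced by kubernetes.io/ingress.global-static-ip-name earlier can be reserved with a couple of lines (a sketch; the resource label is arbitrary):

resource "google_compute_global_address" "my_public_ip" {
  # Reserve a global external static IP named "my-public-ip"
  name = "my-public-ip"
}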

Debugging configuration mistakes is difficult

When we set up a GCE Ingress, we can mostly sit back and watch Google do all sorts of plumbing. The caveat is that it is sometimes difficult to understand what is going on.

I sometimes had to juggle between the gcloud and kubectl CLIs and Google’s console to debug misconfigurations, which was cumbersome.
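
For illustration, a debugging session would typically alternate between the two worlds (my-ingress being the Ingress from the examples above):

# Kubernetes side: events on the Ingress surface synchronization errors
kubectl describe ingress my-ingress

# Google side: inspect what was actually provisioned
gcloud compute backend-services list
gcloud compute health-checks list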

Advanced features may lack maturity: IAP example

I had difficulties reconciling a simple, automated setup with the use of advanced features.

Fine-grained permissions are difficult to automate

Let’s take the fine-grained access control setup mentioned in the previous section. While it is powerful to be able to grant specific members access to specific services through Google IAM, the way to automate it is the google_iap_web_backend_service_iam_member Terraform resource:

resource "google_iap_web_backend_service_iam_member" "john_access_to_my_service" {
  #                     This name is partly randomly generated by Google
  web_backend_service = "k8s1-11ae7c3d-k8snamespace-my-service-8-1e7c3861"
  role                = "roles/iap.httpsResourceAccessor"
  member              = "user:john@mycompany.com"
}

As you can see, we give John access to a web_backend_service whose name is partly randomly generated. The web_backend_service here is the backend service that the ingress controller automatically creates on Google Cloud to front the Network Endpoint Groups.

Since we cannot know that name in advance, this chunk of code is hard to automate.

Authentication by machines is not widely supported

Machines can also go through the authentication wall, given the right credentials. However, making an application authenticate correctly is not always easy.

GCP IAP relies on the JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants, as defined in RFC 7523. According to the Internet, “this RFC is profiled (i.e. further specified) by OpenID Connect”.

In practice, while it is easy to integrate this authentication method into an application you control, most open-source programs will not expose a way to do it without changes to their source code.
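
For ad-hoc calls, though, a machine can get through from a shell. A sketch, assuming gcloud is authenticated as a service account holding roles/iap.httpsResourceAccessor on the backend service, and with placeholder client ID and URL:

# Mint an OpenID Connect ID token whose audience is the IAP OAuth client ID
TOKEN=$(gcloud auth print-identity-token --audiences="${OAUTH_CLIENT_ID}")

# Present it to the protected application
curl -H "Authorization: Bearer ${TOKEN}" https://my-app.example.com/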


Google Cloud Load Balancing is pretty exciting! I like that we can have a load balancer up and running with just a few lines of YAML, while knowing it will scale correctly. I hope that the feature set will keep expanding and that the pain points I encountered get fixed.