Dapr and Azure Functions: Part 5c – Deploying to Google with GKE

Previously, we’ve deployed our Dapr Hello World project to both AWS and Azure.

This time, we’re going to deploy to Google GKE.

Step 1: Get Google Cloud Set up

Start by following the instructions here to get your Google Cloud account set up.

Set your default region to simplify some of the commands we’ll use later on.
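For example, assuming us-east1 (substitute your own region):

```shell
# Set the default compute region for subsequent gcloud commands
gcloud config set compute/region us-east1
```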

Do not use something like us-east1-c, since that is a “zone” and not a “region”; you will get an error later on.

Step 2: First Attempt – Set up GKE Autopilot

Note: this will fail, but I left it here for posterity.

For our first attempt, we’ll try Google GKE Autopilot.

Follow the instructions and we should see our cluster created.
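The create command likely looks something like this (the cluster name here is a placeholder; the region matches the one set earlier):

```shell
# Create an Autopilot cluster; GKE manages the nodes automatically
gcloud container clusters create-auto helloworld-dapr-cluster \
    --region us-east1
```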

Conveniently, the kubectl context is automatically set.

Step 3: Set up Google Artifact Registry

Next, we’ll set up the Google Artifact Registry where we can store our container images.

From the Google Cloud web console, create a Docker repository named helloworld-dapr-func.

You’ll need to run run:

to authorize your machine to push to the Artifact Registry (replace us-east1 with your region).
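The command likely looks like this, with us-east1 assumed as the region:

```shell
# Register gcloud as a Docker credential helper for the regional registry
gcloud auth configure-docker us-east1-docker.pkg.dev
```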

There are multiple ways of doing this but the command above seems the simplest.

Step 4: Tag and Push

Like before, we now need to tag and push our local images to the Artifact Registry. The image name format differs a bit from AWS and Azure, so pay attention to the naming.
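As a sketch, the Artifact Registry convention is REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG. The project ID and local image name below are placeholders; substitute your own:

```shell
# Tag the local image with the full Artifact Registry path:
#   <region>-docker.pkg.dev/<project-id>/<repository>/<image>:<tag>
docker tag helloworld-dapr-func:latest \
    us-east1-docker.pkg.dev/my-project/helloworld-dapr-func/helloworld-dapr-func:latest

# Push it to the registry
docker push \
    us-east1-docker.pkg.dev/my-project/helloworld-dapr-func/helloworld-dapr-func:latest
```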

Once we push it up, we can see the images in our Artifact Registry:

Step 5: Install Dapr

As we did with Azure, we can now install Dapr into the cluster by running dapr init -k.

But as it turns out, this runs into a limitation of GKE Autopilot: mutating webhooks are not supported, so it’s not possible to install Dapr using this mechanism.

So back to the drawing board.  I deleted my GKE Autopilot cluster and set up a regular cluster:
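The commands likely look like this (cluster name, region, and node count are assumptions; adjust to taste):

```shell
# Delete the Autopilot cluster
gcloud container clusters delete helloworld-dapr-cluster --region us-east1

# Create a standard (non-Autopilot) cluster with a small node pool
gcloud container clusters create helloworld-dapr-cluster \
    --region us-east1 \
    --num-nodes 1
```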

If we run kubectl get pods --all-namespaces -o wide, we can see:

Then, after dapr init -k:

We’ll need to make some minor modifications to the .yaml file for GKE:

Run kubectl apply -f deploy/gke-deployment.yaml and we can see our pods now:

Run kubectl get ingress/helloworld-ingress -n helloworld to get the IP address, and we can use that to hit the API endpoint:
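Putting it together, this can be scripted; the /api/Hello route below is a placeholder, so substitute whatever route your function app actually exposes:

```shell
# Grab the external IP from the ingress status, then hit the endpoint
INGRESS_IP=$(kubectl get ingress/helloworld-ingress -n helloworld \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${INGRESS_IP}/api/Hello"
```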

Closing Thoughts

If I had to rank the experience of working with all three, considering documentation, ergonomics, and ease of use, I’d rank them as:

  1. Microsoft Azure
  2. Google
  3. Amazon

Google has the most coherent user interface, but the documentation was not quite as good as Microsoft’s, in my opinion (is it personal bias? I’m not sure). All three are functionally very similar, but the AWS ergonomics and documentation left a lot to be desired.

Google also offers $300 of credits for 90 days, so it’s relatively risk-free to try it out!

What I still struggle with a bit is: why Kubernetes? If anything, this experience has reinforced the case for serverless options that let teams focus on delivering value rather than managing yet another layer. Sure, it’s better than VMs, but I have a hard time justifying this additional layer of architecture when serverless options exist and, if your application code is segregated the right way, are just as portable (treat the serverless interface as an eventing source and keep your actual domain logic separate from your eventing logic).
