Dapr and Azure Functions: Part 5b – Deploying to Azure with ACR and AKS

In our last exciting episode, we deployed the solution to AWS EKS.

This time, we’ll deploy to Azure AKS!

AWS EKS with Fargate turned out to be a lot more difficult to get up and running than I expected, even following along with the AWS-provided guides.  Hopefully, the AKS documentation is better.

I’m working from the Azure AKS deployment quickstart and using the Azure CLI.

Step 1: Create the Cluster

  1. Log in to Azure via az login
  2. Run az group create --name helloworldk8s --location eastus to create the resource group that will hold all of our assets
  3. Run az aks create --resource-group helloworldk8s --name helloworld-cluster --node-count 1 --enable-addons monitoring --generate-ssh-keys to create the cluster
  4. Configure kubectl to connect to the cluster by running az aks get-credentials --resource-group helloworldk8s --name helloworld-cluster
  5. Running kubectl get nodes will display the cluster's nodes

At this point, we should have a set of kube-system pods which have been initialized for the cluster.
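For reference, the steps above can be consolidated into a single script.  This is a sketch using the names from this walkthrough; running it requires the Azure CLI and an active subscription, and you should substitute your own resource names:

```shell
#!/usr/bin/env bash
set -euo pipefail

RESOURCE_GROUP=helloworldk8s
CLUSTER_NAME=helloworld-cluster
LOCATION=eastus

# Authenticate with Azure
az login

# Create the resource group that will hold all of our assets
az group create --name "$RESOURCE_GROUP" --location "$LOCATION"

# Create a single-node AKS cluster with the monitoring add-on enabled
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --node-count 1 \
  --enable-addons monitoring \
  --generate-ssh-keys

# Merge the cluster's credentials into ~/.kube/config for kubectl
az aks get-credentials --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME"

# Verify connectivity by listing the cluster's nodes
kubectl get nodes
```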

Step 2: Deploy Dapr to the Cluster

We’ll switch up the order and deploy Dapr to the cluster first this time.

Run dapr init -k to deploy to the cluster.

If we run kubectl get pods --all-namespaces -o wide, we can see the Dapr control plane pods (dapr-operator, dapr-placement-server, dapr-sentry, and dapr-sidecar-injector) running in the dapr-system namespace alongside the kube-system pods.

Step 3: Publish to Azure Container Registry

Azure Container Registry is the Azure equivalent of ECR.  We’ll provision the registry and push our images into the registry.

  1. Provision the registry by running: az acr create --resource-group helloworldk8s --name helloworldregistry808626 --sku basic (registry names are globally unique, so you will need to pick your own)
  2. Tag the images as we did with the AWS example.
    1. docker tag helloworldfuncdapr/helloworld.api:linux-latest helloworldregistry808626.azurecr.io/hello-world:helloworld.api-latest
    2. docker tag helloworldfuncdapr/date.api:linux-latest helloworldregistry808626.azurecr.io/hello-world:date.api-latest
  3. Log into ACR using az acr login --name helloworldregistry808626
  4. Now push the images to ACR:
    1. docker push helloworldregistry808626.azurecr.io/hello-world:helloworld.api-latest
    2. docker push helloworldregistry808626.azurecr.io/hello-world:date.api-latest
  5. Connect ACR to our AKS instance: az aks update --resource-group helloworldk8s --name helloworld-cluster --attach-acr helloworldregistry808626 (alternatively, if you created the ACR first, the same --attach-acr flag can be passed to az aks create at deployment time).
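The ACR steps above can likewise be collected into one script.  Again, a sketch using this walkthrough's names; the registry name must be replaced with your own globally unique one, and the script assumes the images from the earlier parts of this series have already been built locally:

```shell
#!/usr/bin/env bash
set -euo pipefail

RESOURCE_GROUP=helloworldk8s
CLUSTER_NAME=helloworld-cluster
# ACR names are globally unique; substitute your own
REGISTRY_NAME=helloworldregistry808626
REGISTRY="$REGISTRY_NAME.azurecr.io"

# Provision the registry
az acr create --resource-group "$RESOURCE_GROUP" --name "$REGISTRY_NAME" --sku basic

# Tag the local images for ACR
docker tag helloworldfuncdapr/helloworld.api:linux-latest "$REGISTRY/hello-world:helloworld.api-latest"
docker tag helloworldfuncdapr/date.api:linux-latest "$REGISTRY/hello-world:date.api-latest"

# Authenticate docker against ACR and push both images
az acr login --name "$REGISTRY_NAME"
docker push "$REGISTRY/hello-world:helloworld.api-latest"
docker push "$REGISTRY/hello-world:date.api-latest"

# Grant the AKS cluster pull access to the registry
az aks update --resource-group "$RESOURCE_GROUP" --name "$CLUSTER_NAME" --attach-acr "$REGISTRY_NAME"
```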

Step 4: Add HTTP Routing

To provide external routing to our cluster, we need to enable the http_application_routing add-on, which is intended for testing purposes (it is not recommended for production use):

az aks enable-addons --resource-group helloworldk8s --name helloworld-cluster --addons http_application_routing

This operates similarly to the aws-load-balancer-controller that we installed in Part 5a.

Once this is deployed, run:

az aks show --resource-group helloworldk8s --name helloworld-cluster --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o table

This returns the DNS zone name for the cluster (a96a3353f3f44c9ca52e.eastus.aksapp.io in my case).

Step 5: Deploy HelloWorld

We can copy the eks-deployment.yaml file that we used for the EKS deployment and name it aks-deployment.yaml.

The image paths have to be updated to point at our ACR registry: helloworldregistry808626.azurecr.io

The Ingress specification has to be updated:
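A minimal sketch of what the updated Ingress looks like with the HTTP application routing add-on — the host combines the service name with the DNS zone name returned in Step 4 (substitute your own zone), and the helloapp-ingress name, service port, and path are assumptions carried over from the EKS deployment in Part 5a:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: helloapp-ingress
  namespace: helloworld
  annotations:
    # Routes through the add-on's ingress controller instead of a cloud load balancer
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    # Host is <service>.<HTTPApplicationRoutingZoneName> from Step 4
    - host: helloapp-svc.a96a3353f3f44c9ca52e.eastus.aksapp.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: helloapp-svc
                port:
                  number: 80
```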

Now we can deploy to AKS using:

kubectl apply -f aks-deployment.yaml

Then monitor the deployment with kubectl get pods --all-namespaces and kubectl describe pod/helloapp-57766888c4-nlg48 -n helloworld

Hitting the URL http://helloapp-svc.a96a3353f3f44c9ca52e.eastus.aksapp.io/ returns a response, and our API endpoint http://helloapp-svc.a96a3353f3f44c9ca52e.eastus.aksapp.io/api/HelloWorld returns the output from the Functions app.

🎉🎊🎆

Thoughts: AWS EKS vs Azure AKS

After the absolute throwdown with EKS, I was pleasantly surprised by AKS.  I was up and running with AKS in less than an hour after spending close to 12 hours working through issues with EKS and Fargate.

Here’s my take purely from an operational standpoint:

  1. The AKS documentation is clearer; I absolutely sailed through it.  Every step is clear, well organized, and well annotated.
  2. The AKS documentation that I came across provides two sets of steps depending on whether you provisioned a resource with a particular setting up front or are updating the configuration after the fact.  This is incredibly helpful compared to EKS, where often the only recourse was to undo the previous step or start over.  For example, the instructions for connecting ACR to AKS cover both deploying the ACR before AKS and attaching it afterward.  Adding Fargate to an EKS cluster after the fact is messy (and to be fair, this may not be an apples-to-apples comparison).
  3. The Azure CLI has all of the tooling that you need whereas AWS really wants you to use eksctl on top of kubectl, AWS CLI, and docker CLI.
  4. The AKS web console is much more cohesive compared to EKS, where you are jumping between EKS, EC2, and VPC configuration.
  5. The cleanup on Azure is at least 100x easier; just delete the resource group!  There's a GitHub thread on adding an --all-dependencies flag to the AWS CLI to make cleanup easier, and it's gone nowhere.

If I were running ops, I’d choose AKS in a heartbeat!

As an example of how broken the AWS documentation is, here’s a screenshot of the ALB configuration for Fargate:

Notice anything missing? For some reason, there is no contextual navigation to related topics.  Compare this to the Azure documentation for adding the HTTP application routing and there is no contest.

As far as actually getting a managed Kubernetes cluster up and running, Azure AKS is significantly easier and more cohesive from an operations standpoint.

Up Next?

No, I’m not going to deploy to GKE; I think two clouds is enough for me.  I’ve read that GKE provides some of the best functionality and pricing around Kubernetes, but I’m all set with EKS and AKS 😁

For me, what’s more interesting is a comment I read on an /r/devops thread:

On the macro level, I’m sort of disappointed in the emergence and widespread adoption of Kubernetes. It’s like being excited about an electric typewriter; it’s a great improvement to the typewriter…but it’s still a typewriter. In that sense, the advanced features of container orchestration via Kubernetes is hindering cloud adoption strategies because companies now feel empowered by Kubernetes to keep their on-premise, hybrid strategies. This enterprise attention makes providers like AWS/GCP focus their innovation and resources towards the best managed K8s service.

As cloud providers mature and differentiate their serverless offerings, we can realize the full value. Sticking with a container first strategy, no matter how you dress it up, is a larger investment into operations than is necessary. I’m not saying serverless is completely no-ops or appropriate for every scenario, but it saddens me that CTOs everywhere (and for lots of clients we do consulting for) are empowered to keep traditional application models and strategies.

I absolutely love this analogy.

If you’ve seen my post on Containers vs Functions as a Service, then you know that I could not agree more with this comment.  Unless your use case has a very compelling reason to deploy as a container, why not choose a “low-ops” or “no-ops” approach and forget about the routing, cluster, and resource management?

But if your project cannot compromise on the 8 facets I outlined, then Dapr is a great foundation to build from.

At the end of all this, I really feel this tweet:

🤣
