Dapr and Azure Functions: Part 5b – Deploying to Azure with ACR and AKS
This time, we’ll deploy to Azure AKS!
AWS EKS with Fargate turned out to be a lot more difficult to get up and running than I expected, even following along with the AWS provided guides. Hopefully, the AKS documentation is better.
I’m working from the Azure AKS deployment quickstart and using the Azure CLI.
Step 1: Create the Cluster
- Run `az group create --name helloworldk8s --location eastus` to create the resource group that will hold all of our assets
- Run `az aks create --resource-group helloworldk8s --name helloworld-cluster --node-count 1 --enable-addons monitoring --generate-ssh-keys` to create the cluster
- Connect `kubectl` to the cluster by running `az aks get-credentials --resource-group helloworldk8s --name helloworld-cluster`
- `kubectl get nodes` will display the running nodes
At this point, we should have a set of kube-system pods which have been initialized for the cluster.
Step 2: Deploy Dapr to the Cluster
We’ll switch up the order and deploy Dapr to the cluster first this time.
Run `dapr init -k` to deploy Dapr to the cluster.
If we run `kubectl get pods --all-namespaces -o wide`, we can see the Dapr control-plane pods up and running alongside the kube-system pods.
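As a quick sanity check, the control-plane services that `dapr init -k` typically installs (`dapr-operator`, `dapr-sentry`, `dapr-sidecar-injector`, and the placement service) can be grepped out of the pod listing. A minimal sketch, with the live `kubectl` output stubbed in so the check is illustrated end to end:

```shell
# Expected Dapr control-plane services after `dapr init -k`.
expected="dapr-operator dapr-sentry dapr-sidecar-injector dapr-placement"

# In a live cluster, replace this stub with:
#   pods=$(kubectl get pods -n dapr-system -o name)
pods="pod/dapr-operator-abc pod/dapr-sentry-def pod/dapr-sidecar-injector-ghi pod/dapr-placement-server-jkl"

for p in $expected; do
  case "$pods" in
    *"$p"*) echo "$p: running" ;;
    *)      echo "$p: MISSING" ;;
  esac
done
```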
Step 3: Publish to Azure Container Registry
Azure Container Registry is the Azure equivalent of ECR. We’ll provision the registry and push our images into the registry.
- Provision the registry by running `az acr create --resource-group helloworldk8s --name helloworldregistry808626 --sku basic` (registry names are globally unique, so you will need to pick your own random name)
- Tag the images as we did with the AWS example.
docker tag helloworldfuncdapr/helloworld.api:linux-latest helloworldregistry808626.azurecr.io/hello-world:helloworld.api-latest
docker tag helloworldfuncdapr/date.api:linux-latest helloworldregistry808626.azurecr.io/hello-world:date.api-latest
- Log in to ACR using `az acr login --name helloworldregistry808626`
- Now push the images to ACR:
docker push helloworldregistry808626.azurecr.io/hello-world:helloworld.api-latest
docker push helloworldregistry808626.azurecr.io/hello-world:date.api-latest
- Connect ACR to our AKS instance by running `az aks update --resource-group helloworldk8s --name helloworld-cluster --attach-acr helloworldregistry808626` (alternatively, if you created your ACR first, this can be done at AKS creation time using the same `--attach-acr` flag)
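The tag/push pairs above all follow one convention: `<registry>.azurecr.io/<repository>:<image>-latest`. A small sketch that composes those commands from variables (using the registry name from this walkthrough; substitute your own):

```shell
# Compose the docker tag/push commands for each service image.
REGISTRY=helloworldregistry808626.azurecr.io
REPO=hello-world

for svc in helloworld.api date.api; do
  local_image="helloworldfuncdapr/${svc}:linux-latest"
  acr_image="${REGISTRY}/${REPO}:${svc}-latest"
  echo "docker tag ${local_image} ${acr_image}"
  echo "docker push ${acr_image}"
done
```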
Step 4: Add HTTP Routing
To provide external routing to our cluster, we need to enable the `http_application_routing` add-on, which can be used for testing purposes:
az aks enable-addons --resource-group helloworldk8s --name helloworld-cluster --addons http_application_routing
This operates similarly to the `aws-load-balancer-controller` that we installed in Part 5a.
Once this is deployed, run:
az aks show --resource-group helloworldk8s --name helloworld-cluster --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o table
This will provide the DNS zone name of the endpoint (`a96a3353f3f44c9ca52e.eastus.aksapp.io` in my case).
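Each Ingress host for the add-on is a subdomain of that zone. A sketch of how the final URL is composed (the `helloapp` host prefix here is hypothetical; use whatever host you set in your Ingress spec):

```shell
# DNS zone reported by the http_application_routing add-on.
ZONE=a96a3353f3f44c9ca52e.eastus.aksapp.io
# Hypothetical host prefix; must match the `host` field in your Ingress.
HOST=helloapp

URL="http://${HOST}.${ZONE}/"
echo "$URL"
```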
Step 5: Deploy HelloWorld
We can copy the `eks-deployment.yaml` file that we used for the EKS deployment and name it `aks-deployment.yaml`. The image paths have to be updated to:
```yaml
- name: helloapp
  # AWS ECR
  # image: YOUR_ACCOUNT_NUMBER_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
  # Azure ACR
  image: helloworldregistry808626.azurecr.io/hello-world:helloworld.api-latest
  ports:
    - name: http
```
The Ingress specification has to be updated:
```yaml
# The host value is from the http_application_routing add-on
- path: /
```
Now we can deploy to AKS using:
kubectl apply -f aks-deployment.yaml
Then monitor the deployment with `kubectl get pods --all-namespaces` and `kubectl describe pod/helloapp-57766888c4-nlg48 -n helloworld`.
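The rollout can also be watched with a simple poll on the pod phase. A sketch with the `kubectl` call stubbed out so the loop is illustrated end to end (the `app=helloapp` label selector is an assumption; adjust it to your deployment's labels):

```shell
# In a live cluster, replace the stubbed assignment inside the loop with:
#   status=$(kubectl get pods -n helloworld -l app=helloapp \
#     -o jsonpath='{.items[0].status.phase}')
attempt=0
status=Pending
while [ "$status" != "Running" ] && [ "$attempt" -lt 5 ]; do
  attempt=$((attempt + 1))
  # Stub: simulate the pod reaching Running on the third check.
  if [ "$attempt" -ge 3 ]; then status=Running; fi
  echo "check $attempt: $status"
done
```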
If we hit the URL, we get a response from the app, and our API endpoint responds as well.
Thoughts: AWS EKS vs Azure AKS
After the absolute throwdown with EKS, I was pleasantly surprised by AKS. I was up and running with AKS in less than an hour after spending close to 12 hours working through issues with EKS and Fargate.
Here’s my take purely from an operational standpoint:
- The AKS documentation is clearer; I absolutely sailed through it. Every step is well organized and well annotated.
- The AKS documentation that I came across had two sets of steps depending on whether you provisioned a resource with a particular setting up front or were updating the configuration after the fact. This is incredibly helpful compared to EKS, where often the only recourse was to undo the previous step or start over. For example, the instructions for connecting ACR to AKS account for whether you deploy ACR before or after AKS. Adding Fargate to an EKS cluster after the fact is messy (and, to be fair, this may not be an apples-to-apples comparison).
- The Azure CLI has all of the tooling that you need, whereas AWS really wants you to use `eksctl` on top of `kubectl`, the AWS CLI, and the Docker CLI.
- The AKS web console is much more cohesive compared to EKS, where you are jumping between EKS, EC2, and VPC configuration.
- The cleanup on Azure is at least 100x easier; just delete the resource group! There's a GitHub thread on adding an `--all-dependencies` flag to the AWS CLI to make cleanup easier, and it's gone nowhere.
If I were running ops, I’d choose AKS in a heartbeat!
As an example of how broken the AWS documentation is, here’s a screenshot of the ALB configuration for Fargate:
Notice anything missing? For some reason, there is no contextual navigation to related topics. Compare this to the Azure documentation for adding the HTTP application routing and there is no contest.
As far as actually getting a managed Kubernetes cluster up and running, there is no contest: Azure AKS is significantly easier and more cohesive from an operations standpoint.
No, I’m not going to deploy to GKE; I think two clouds is enough for me. I’ve read that GKE provides some of the best functionality and pricing around Kubernetes, but I’m all set with EKS and AKS 😁
On the macro level, I'm sort of disappointed in the emergence and widespread adoption of Kubernetes. It's like being excited about an electric typewriter; it's a great improvement to the typewriter…but it's still a typewriter. In that sense, the advanced features of container orchestration via Kubernetes are hindering cloud adoption strategies, because companies now feel empowered by Kubernetes to keep their on-premise, hybrid strategies. This enterprise attention makes providers like AWS/GCP focus their innovation and resources on building the best managed K8s service.
As cloud providers mature and differentiate their serverless offerings, we can realize their full value. Sticking with a container-first strategy, no matter how you dress it up, is a larger investment in operations than is necessary. I'm not saying serverless is completely no-ops or appropriate for every scenario, but it saddens me that CTOs everywhere (including at many of the clients we consult for) are empowered to keep traditional application models and strategies.
I absolutely love this analogy.
If you’ve seen my post on Containers vs Functions as a Service, then you know that I could not agree more with this comment. Unless your use case has a very compelling reason to deploy as a container, why not choose a “low-ops” or “no-ops” approach and forget about the routing, cluster, and resource management?
But if your project cannot compromise on the 8 facets I outlined, then Dapr is a great foundation to build from.
At the end of all this, I really feel this tweet: