Dapr and Azure Functions: Part 5a – Deploying to AWS with ECR and EKS Fargate
If you’ve seen my post on choosing Containers + Dapr vs Functions as a Service, you know that one of the benefits of Dapr is that it allows your code to plug into AWS, Azure, or GCP, which gives you a great deal of flexibility in how you deploy the solution.
Parts 5a and 5b will examine how we operationalize this in AWS and Azure respectively. Keep in mind that I’m trying to keep these articles to the bare minimum level of complexity; you will undoubtedly need to consider best practices and security as you move forward!
I’m going to assume that if you made it this far, you already have an AWS account and are somewhat familiar with AWS operations.
If not, you can easily sign up for one and then download the AWS CLI, which we’ll be using throughout. The overall cost of running this is just a few pennies (assuming you clean everything up correctly!).
Step 1: Prepare AWS
If you are doing this as a sandbox, then using your root account is fine. But if you want to follow best practices, start from this AWS documentation and add an IAM user that is NOT your root user.
When working with the CLI, we will need to have an account ID. Once you’ve decided which account you’ll be using, you can find the ID in the AWS console via the account menu at the top right:
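If you prefer to stay in the CLI, `aws sts get-caller-identity` returns the account ID for the active credentials. A quick sketch (the fallback value here is just a placeholder for when no credentials are configured):

```shell
# Ask STS for the account ID of the active credentials; fall back to a
# placeholder value when no credentials are configured:
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text 2>/dev/null || echo "123456789012")

# Every ECR command below is built from this registry hostname:
ECR_REGISTRY="${ACCOUNT_ID}.dkr.ecr.us-east-1.amazonaws.com"
echo "${ECR_REGISTRY}"
```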
Step 2: Publish the Images to AWS Elastic Container Registry (ECR)
AWS ECR is a managed container registry where we will publish the images for our apps. We can perform this activity via the CLI.
First, using the account ID, we configure Docker to access this endpoint:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com
If you get an error like this:
error during connect: Post "http://%2F%2F.%2Fpipe%2FdockerDesktopLinuxEngine/v1.24/auth"
start Docker Desktop and retry the login.
Once logged in, we will create a repository:
aws ecr create-repository --repository-name hello-world --image-scanning-configuration scanOnPush=true --region us-east-1
Now we can publish our images. To do that, we’ll need the identifiers for our images, which you can find in Docker Desktop or by running docker images at the command line.
We’re going to tag our two images:
docker tag helloworldfuncdapr/helloworld.api:linux-latest YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
docker tag helloworldfuncdapr/date.api:linux-latest YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:date.api-latest
And then push both into AWS ECR:
docker push YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
docker push YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:date.api-latest
In the ECR repositories, we can see the following:
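You can verify the same thing from the CLI; this sketch lists the tags in the repository and should show both `helloworld.api-latest` and `date.api-latest` (it requires configured AWS credentials, with a fallback message for when there are none):

```shell
# List the tags now present in the hello-world repository:
aws ecr describe-images \
  --repository-name hello-world \
  --region us-east-1 \
  --query 'imageDetails[].imageTags[]' \
  --output text 2>/dev/null || echo "aws CLI not configured"
```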
Step 3: Create EKS Cluster
For creating the cluster, I strongly recommend using the eksctl tool; it is really, really difficult to get the cluster into an operational state with Fargate by doing so manually via the console. I constantly ran into an issue when provisioning the AWS Application Load Balancer, even after following the AWS documentation to a T…multiple times.
I started by creating a YAML file (eks-init.yaml) to customize the deployment:
```yaml
# An example of ClusterConfig with Fargate profiles.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: helloworld-cluster
  region: us-east-1

# Pin the AZs explicitly (you need at least two); see the capacity note below:
availabilityZones: ["us-east-1a", "us-east-1b"]

fargateProfiles:
  - name: fargate-profile
    selectors:
      # All workloads in the "default" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: default
      # All workloads in the "kube-system" Kubernetes namespace will be
      # scheduled onto Fargate:
      - namespace: kube-system
  - name: helloworld-profile
    selectors:
      # All workloads in the "helloworld" Kubernetes namespace matching the following
      # label selectors will be scheduled onto Fargate:
      - namespace: helloworld
```
And this gets executed as:
eksctl create cluster -f eks-init.yaml
The main reason for using a config file is twofold:
- I want all of my containers in Fargate.
- On the day I deployed to us-east-1, there was an issue with AWS EKS capacity, so I needed to be able to specify the exact availability zones to use (you need at least two). On that day, us-east-1d was the one lacking capacity; if I did not specify the exact AZs, the deployment would always try to use us-east-1d for one of them and fail with an error. If you run into this, just try different AZs in your region or use a different region.
Next, we need to follow the instructions in this AWS documentation to install the AWS Application Load Balancer controller which we will need to route Internet traffic to our cluster by processing the annotations in our application deployment.
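Once the prerequisite IAM policy and service account from that documentation are in place, the controller itself is installed with Helm. A sketch with assumed placeholder values for the cluster name and VPC ID (on Fargate you must pass the region and VPC ID explicitly, since the controller has no EC2 instance metadata endpoint to query):

```shell
# Placeholder values -- substitute your own cluster name and VPC ID:
CLUSTER_NAME="helloworld-cluster"
VPC_ID="vpc-0123456789abcdef0"

# Register the AWS charts repo, then install the controller into kube-system,
# reusing the IRSA service account created per the AWS documentation:
helm repo add eks https://aws.github.io/eks-charts
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName="${CLUSTER_NAME}" \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId="${VPC_ID}"
```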
If you didn’t use eksctl to create the EKS Fargate deployment, you may run into an issue where the aws-load-balancer-controller is stuck in a pending state. When you run a command like kubectl describe pod/aws-load-balancer-controller-77b9b48bff-fd5lw -n kube-system you will encounter the following:
Manually removing the taint on the nodes doesn’t solve the issue; save yourself the pain and just use eksctl, and if you need to, use the configuration file to drop your deployment into an existing VPC. No point in fighting EKS!
(In retrospect, this probably could have been fixed during my first manual passes by adding a Fargate profile for kube-system and then restarting the deployment to force all pods onto Fargate nodes.)
Step 4: Operationalize the Cluster
Back in our codebase, we need to create another K8s deployment file, this one called eks-deployment.yaml, and modify the images to point to our ECR images (update both images):
```yaml
# eks-deployment.yaml (abbreviated; only the lines that change are shown)
...
      containers:
        - name: helloapp
          image: YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
          ports:
            - name: http
...
    ports:
      - port: 80
...
      # This gets processed by the aws-load-balancer-controller and creates the ALB endpoint
      - path: /*
...
      containers:
        - name: dateapp
          image: YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:date.api-latest
          ports:
            - name: http
```
Now when we run this from the command line with:
kubectl apply -f eks-deployment.yaml
We will see the following in the AWS EKS console:
Alternatively, from the command line:
kubectl get pods --all-namespaces -o wide
Now the moment of truth. To get the public URL of the ALB, use the command:
kubectl get ingress/helloworld-ingress -n helloworld
If all has gone well, you should see:
In the AWS console, you can find it under EC2/Load Balancers:
If we hit the URL, we’ll see the Azure Functions default page:
But if you hit the API endpoint at /api/HelloWorld at this point, you’ll get an HTTP 500 error. This is because, without the Dapr controllers that we saw in Part 4, there is no mechanism to inject the Dapr sidecar into the pods.
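For reference, the sidecar injector acts on Dapr annotations in the pod template of the deployment; they look roughly like this (the app-id and port values here are illustrative):

```yaml
# Pod template metadata inside the Deployment spec:
annotations:
  dapr.io/enabled: "true"     # opt this pod into sidecar injection
  dapr.io/app-id: "helloapp"  # illustrative app id
  dapr.io/app-port: "80"      # port the app container listens on
```

With no Dapr control plane in the cluster, these annotations are simply ignored, which is why the service-to-service call fails.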
Step 5: Install Dapr and Redeploy
Before deploying Dapr, we need to create another Fargate profile so that the Dapr components are also deployed into Fargate (otherwise, the deployments will be stuck on pending).
I named mine dapr-profile and pointed it at the namespace that the Dapr control plane deploys into.
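For context: by default, dapr init -k installs the Dapr control plane into the dapr-system namespace, so that is the namespace the new profile needs to select. In eksctl config form (the profile name is my own choice), it would look like:

```yaml
# Additional Fargate profile so the Dapr control plane pods can schedule:
- name: dapr-profile
  selectors:
    - namespace: dapr-system
```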
The Deploy Dapr on a Kubernetes cluster guide contains instructions on how to deploy the components. Read it carefully, especially the part about selecting the correct context.
Once we verify we are in the correct context, simply run:
dapr init -k
Running dapr status -k will show the current status of the deployment. Here are all of our pieces in the AWS console:
Once it’s all deployed, we’ll redeploy our app:
kubectl delete -f eks-deployment.yaml
kubectl apply -f eks-deployment.yaml
kubectl get ingress/helloworld-ingress -n helloworld to get the new endpoint
If you’re doing this from scratch again, you can just add the Fargate profile for Dapr to your cluster configuration at the point of deployment.
Before continuing, though, be sure to clean up the resources in AWS!
The gist of it is to simply kubectl delete deployment on all of the deployments and then delete the cluster:
dapr uninstall -k
kubectl delete deployment dateapp -n helloworld
kubectl delete deployment helloapp -n helloworld
kubectl delete deployment aws-load-balancer-controller -n kube-system
kubectl delete deployment coredns -n kube-system
Then delete the profiles, clean up the cluster, and remove the VPC.
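If you created the cluster from the config file earlier, eksctl can handle most of that teardown in one step; a sketch assuming the same eks-init.yaml:

```shell
# Removes the cluster, its Fargate profiles, and the VPC/subnets that
# eksctl provisioned alongside it:
eksctl delete cluster -f eks-init.yaml
```

Double-check in the console afterwards that the load balancer and VPC are actually gone, since those are the resources that quietly keep billing.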
Once you’re done, you can use kubectl config to delete the AWS contexts if you are no longer using them and switch your context back to the default.
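The relevant subcommands look like this; the EKS context name is a placeholder (yours will typically be the cluster’s ARN), and docker-desktop assumes you are running Docker Desktop locally:

```shell
kubectl config get-contexts                     # list the contexts kubectl knows about
kubectl config delete-context YOUR_EKS_CONTEXT  # placeholder; use your context's name
kubectl config use-context docker-desktop       # switch back to the local default
```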