Dapr and Azure Functions: Part 5a – Deploying to AWS with ECR and EKS Fargate

In Part 4, we deployed our simple Hello World solution via Kubernetes locally.

If you’ve seen my post on choosing Containers + Dapr vs Functions as a Service, one of the benefits of Dapr is that it allows your code to plug into AWS, Azure, or GCP and thus provides a great deal of flexibility in terms of how you can deploy the solution.

Parts 5a and 5b will examine how we operationalize this in AWS and Azure respectively.  Keep in mind that I’m trying to keep these articles to the bare minimum level of complexity; you will undoubtedly need to consider best practices and security as you move forward!

I’m going to assume that if you’ve made it this far, you already have an AWS account and are somewhat familiar with AWS operations.

If not, you can easily sign up for one and then download the AWS CLI, which we’ll be using for this.  The overall cost for running this is just a few pennies (assuming you clean everything up correctly!).

Step 1: Prepare AWS

If you are doing this as a sandbox, then using your root account is fine.  But if you want to follow best practices, start from this AWS documentation and add an IAM user that is NOT your root user.

When working with the CLI, we will need an account ID.  Once you’ve decided which account you’ll be using, you can find the ID in the AWS console under your account name in the top navigation bar:

Step 2: Publish the Images to AWS Elastic Container Registry (ECR)

AWS ECR is a managed container registry where we will publish the images for our apps.  We can perform this activity via the CLI.

First, using the account ID, we configure Docker to access this endpoint:

aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com

If you get an error like this:

error during connect: Post "http://%2F%2F.%2Fpipe%2FdockerDesktopLinuxEngine/v1.24/auth"

it means the Docker daemon isn’t running; start Docker Desktop.

Once logged in, we will create a repository:

aws ecr create-repository --repository-name hello-world --image-scanning-configuration scanOnPush=true --region us-east-1

Now we can publish our images.  To do that, we’ll need the identifiers for our images, which you can find in Docker Desktop:

Or via docker images at the command line.

We’re going to tag our two images:

  1. docker tag helloworldfuncdapr/helloworld.api:linux-latest YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
  2. docker tag helloworldfuncdapr/date.api:linux-latest YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:date.api-latest

And then push both into AWS ECR:

  1. docker push YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
  2. docker push YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:date.api-latest

In the ECR repositories, we can see the following:

Step 3: Create EKS Cluster

For creating the cluster, I strongly recommend using the eksctl tool; it is really, really difficult to get the cluster into an operational state with Fargate by doing it manually via the console.  I constantly ran into an issue when provisioning the AWS Application Load Balancer, even when following the AWS documentation to a T…multiple times.

I started by creating a YAML file (eks-init.yaml) to customize the deployment:
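The original file isn’t reproduced here, so here is a minimal sketch of what an eks-init.yaml along these lines could look like; the cluster name is a placeholder I made up, and the AZs and namespaces reflect the choices discussed below (adjust them for your region and apps):

```yaml
# Sketch only — cluster name is hypothetical; AZs and namespaces are
# assumptions based on the discussion in this post
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: helloworld-cluster     # hypothetical name; use your own
  region: us-east-1
# Pin the AZs explicitly to avoid capacity issues (see the reasons below)
availabilityZones:
  - us-east-1a
  - us-east-1b
# Run everything on Fargate: cover the namespaces our pods will land in
fargateProfiles:
  - name: fp-default
    selectors:
      - namespace: default
      - namespace: kube-system
      - namespace: helloworld
```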

And this gets executed as:

eksctl create cluster -f eks-init.yaml

(See the specific documentation on Fargate support).

The main reasons for using a configuration file are twofold:

  1. I want all of my containers in Fargate
  2. In us-east-1, there was an issue with AWS EKS capacity, so I needed to be able to specify the exact availability zones to use (you need at least two).  On that day, us-east-1d was the one lacking capacity, so if I did not specify the exact AZs, the deployment would always try to use us-east-1d for one of them and fail with an error.  If you run into this, try different AZs in your region or use a different region.

Next, we need to follow the instructions in this AWS documentation to install the AWS Application Load Balancer controller which we will need to route Internet traffic to our cluster by processing the annotations in our application deployment.

If you didn’t use eksctl to create the EKS Fargate deployment, you may run into an issue where the aws-load-balancer-controller is stuck in a pending state.  When you run a command like kubectl describe pod/aws-load-balancer-controller-77b9b48bff-fd5lw -n kube-system, you will see the following:

Events:
  Type     Reason            Age                From               Message
  Warning  FailedScheduling  8s (x15 over 15m)  default-scheduler  0/2 nodes are available: 2 node(s) had taint {eks.amazonaws.com/compute-type: fargate}, that the pod didn't tolerate.

Manually removing the taint on the nodes doesn’t solve the issue; save yourself the pain and just use eksctl, and if you need to, use the configuration file to drop your deployment into an existing VPC.  No point in fighting EKS!

(In retrospect, this probably could have been fixed during my first manual passes by adding a Fargate profile for kube-system and then restarting the deployment to force all pods to run on Fargate nodes.)

Step 4: Operationalize the Cluster

Back in our codebase, we need to create another K8s deployment file, this one called eks-deployment.yaml, in which we modify the images to point to our ECR images (update both images):
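For reference, here is a trimmed sketch of what one of the two Deployment blocks could look like after the image swap — the replica count, labels, and app port are assumptions (match them to your own Part 4 manifest), and only the image line actually changes:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloapp
  namespace: helloworld
spec:
  replicas: 1                    # assumption; keep your existing value
  selector:
    matchLabels:
      app: helloapp
  template:
    metadata:
      labels:
        app: helloapp
      annotations:
        dapr.io/enabled: "true"  # Dapr sidecar injection (requires Dapr installed; see Step 5)
        dapr.io/app-id: "helloapp"
        dapr.io/app-port: "80"   # assumed container port for the Functions host
    spec:
      containers:
        - name: helloapp
          # This is the line that changes: point at the ECR image pushed in Step 2
          image: YOUR_ACCOUNT_ID_HERE.dkr.ecr.us-east-1.amazonaws.com/hello-world:helloworld.api-latest
```

Repeat the same image change for the dateapp Deployment using the date.api-latest tag.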

Now when we run this from the command line with:

kubectl apply -f eks-deployment.yaml

We will see the following in the AWS EKS console:

Alternatively, from the command line:

kubectl get pods --all-namespaces -o wide

Now the moment of truth.  To get the public URL of the ALB, use the command:

kubectl get ingress/helloworld-ingress -n helloworld

If all has gone well, you should see:

In the AWS console, you can find it under EC2/Load Balancers:

If we hit the URL, we’ll see the Azure Functions default page:

But if you hit the API endpoint at /api/HelloWorld at this point, you’ll get an HTTP 500 error.  This is because without the Dapr controllers that we saw in Part 4, there is no mechanism to inject Dapr into the pods.

Step 5: Install Dapr and Redeploy

Before deploying Dapr, we need to create another Fargate profile so that the Dapr components are also deployed into Fargate (otherwise, the deployments will be stuck on pending).

I named mine dapr-profile and set the namespace to dapr-system.
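If you’d rather declare this up front than click through the console, the equivalent fragment in the eksctl configuration file looks like this (the profile name mirrors the one above):

```yaml
fargateProfiles:
  - name: dapr-profile
    selectors:
      - namespace: dapr-system
```

For an existing cluster, eksctl also has a create fargateprofile subcommand that accomplishes the same thing.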

The Deploy Dapr on a Kubernetes cluster guide contains instructions on how to deploy the components.  Read it carefully, especially on how to select the correct context for kubectl.

Once we verify we are in the correct context, simply run dapr init -k.

dapr status -k will show the current status of the deployment.  Here are all of our pieces in the AWS console:

Once it’s all deployed, we’ll redeploy our app:

  1. kubectl delete -f eks-deployment.yaml
  2. kubectl apply -f eks-deployment.yaml
  3. kubectl get ingress/helloworld-ingress -n helloworld to get the new endpoint

And finally:

🎉🎊🎆

If you’re doing this from scratch again, you can just start by adding the Fargate profile for Dapr at the point of deployment in the eks-init.yaml file.

Next Part

In Part 5b, we’ll examine how to do the same with Azure AKS.

Before continuing, though, be sure to clean up the resources in AWS!

The AWS EKS Workshop guide has an article on this.

The gist of it is to simply kubectl delete deployment on all of the deployments and then delete the cluster:

  1. dapr uninstall -k
  2. kubectl delete deployment dateapp -n helloworld
  3. kubectl delete deployment helloapp -n helloworld
  4. kubectl delete deployment aws-load-balancer-controller -n kube-system
  5. kubectl delete deployment coredns -n kube-system

Then delete the profiles, clean up the cluster, and remove the VPC.

Once you’re done, you can use kubectl config to delete the AWS contexts if you are no longer using them and switch your context back to the default docker-desktop.
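Putting the last two cleanup steps together, the cluster and context teardown can be sketched as follows — assuming the cluster was created from eks-init.yaml (in which case eksctl delete cluster also removes the Fargate profiles and the VPC that eksctl provisioned), and with the AWS context name as a placeholder:

```shell
# Delete the cluster; for an eksctl-created cluster this also tears down
# its Fargate profiles and the VPC that eksctl provisioned
eksctl delete cluster -f eks-init.yaml

# Tidy up local kubectl contexts (the AWS context name is a placeholder)
kubectl config delete-context YOUR_AWS_CONTEXT_NAME
kubectl config use-context docker-desktop
```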
