Containers + Dapr vs Functions as a Service
In this video, I want to talk about two different approaches to building highly scalable, modern web applications and APIs.
I’ve been a big fan of Function as a Service technologies such as AWS Lambda, Azure Functions, and Google Cloud Functions, mainly because they take the next logical step from “DevOps” to “NoOps”…with some tradeoffs. Those tradeoffs are what we’ll talk about in this video: which facets should you consider when evaluating containers versus Functions as a Service?
Gregor Hohpe — the author of Enterprise Integration Patterns — has an article with a great take on the evolution of compute away from bare metal to modern Function as a Service that provides more nuance if you’re interested. I recommend checking it out if you’re curious.
This video is based on my own exploration of the topic and I would love to hear back from you on any perspectives, ideas, and experiences that you would like to share.
I recently came across a project from Microsoft called Dapr which stands for Distributed Application Runtime. The project aims to simplify the process of adopting a container-based approach and adds a lot of additional capabilities for building microservices.
One of the reasons I am a big fan of Functions as a Service is because of the productivity that your team can gain by handing off a lot of the complexity of connecting services, managing scalability, and managing other aspects of operations.
Dapr brings many of these benefits to a container-based approach by imbuing your application with a layer of pluggable components that provide many of the same benefits that you would get with a Functions as a Service approach.
This video isn’t intended to dive into the details of Dapr, so please check out dapr.io if you want to learn more. It is a fantastic project and definitely worth your time to investigate if you are building container-based microservices.
What got me thinking about the topic of this video is the eShop reference project on GitHub, which provides a starting point for exploring how to build a real-world application with Dapr.
Looking through this project, I could not get over just how complex it is, despite the fact that Dapr is supposed to make working with containerized workloads easier. Compared to a Functions as a Service approach, there’s a ton of added complexity, from environment setup to running, packaging, and deploying the application.
If you are evaluating Dapr or any containerized approach versus Functions as a Service for your next app, then the question is: when does this complexity make sense?
I think there are at least 8 facets we can use to help make this decision.
1) Portability

Let’s start with the obvious: Functions as a Service are proprietary and lock you into the platform one way or another.
Whether you’re building your solution for Azure Functions, AWS Lambda, or Google Cloud Functions, that runtime layer locks you in with a vendor specific implementation.
Now, there are ways to work around this.
For example, in Lambda, you could write your domain logic as a layer that would be platform agnostic and the actual Lambda function would primarily be used as an interface to the proprietary AWS runtime.
In Azure Functions, you could think of this like separating your business logic into a discrete library and the actual Functions code becomes the interface layer to the runtime that only handles reacting to an input and sending messages to an output.
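As a minimal sketch of that separation (the function names and payload shape here are hypothetical), the domain logic is a pure function and the handler is only a thin adapter to the runtime:

```python
# domain logic: platform agnostic, knows nothing about any FaaS runtime
def apply_discount(order_total: float, percent: float) -> float:
    """Pure business rule that could run anywhere: Lambda, Functions, or a container."""
    return round(order_total * (1 - percent / 100), 2)


# adapter: the only code that knows the Lambda event/context shape
def lambda_handler(event, context):
    discounted = apply_discount(event["orderTotal"], event["discountPercent"])
    return {"statusCode": 200, "body": {"discountedTotal": discounted}}
```

Porting this to Azure Functions would mean rewriting only the adapter; `apply_discount` moves over as-is.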
On the other hand, the container is portable: it doesn’t matter whether you run it in AWS, Azure, Google Cloud, or an on-premises container runtime. Not only that, it will be consistent. Even if we write our Functions as a Service implementation to be platform agnostic, the platforms themselves have different capabilities, such as the maximum duration for a function.
So portability is a big win if you want to have the option of running your solution in any cloud provider.
Which leads us to our next point.
2) Multi-Cloud Scenarios

This is really an extension of portability, but having the option to run your solution in any cloud with minimal change might be a critical facet of your evaluation.
For mission critical systems, the ability to operate on multiple cloud providers could insulate against an outage in any single cloud provider.
In some cases, it can also be a competitive advantage allowing you to deploy to a region where one provider has a data center but another does not.
Multi-cloud is one of the areas where Dapr really shines because its pluggable component architecture means that you can replace any of its building blocks with an equivalent platform specific service without changing your application code.
For example, the state management building block can interface with AWS DynamoDB, Azure Cosmos DB, or a MySQL database without changing the code; you change the configuration and deploy your solution to any cloud provider.
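To make that concrete, here is a sketch of a Dapr state store component definition. The component types shown (`state.aws.dynamodb`, `state.azure.cosmosdb`, `state.mysql`) are real Dapr building blocks, but the metadata fields and names below are illustrative and vary by store:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore            # the name your application code references
spec:
  type: state.aws.dynamodb    # swap to state.azure.cosmosdb or state.mysql
  version: v1
  metadata:
    - name: table             # store-specific settings; fields differ per component
      value: orders
```

Your application keeps calling the same state API against `statestore`; only this file changes between clouds.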
3) Long Running App Logic
One problem with Functions as a Service is that they generally impose short runtime limits.
For example, AWS Lambda has a limit of 15 minutes.
Azure Functions are limited by default to 5 minutes on the Consumption plan and 30 minutes on all other plans. Functions also lets you raise the maximum timeout, up to unbounded, except on the Consumption plan.
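For reference, the Azure Functions timeout is controlled by the `functionTimeout` setting in host.json; a minimal sketch:

```json
{
  "version": "2.0",
  "functionTimeout": "00:30:00"
}
```

On Premium and Dedicated plans you can set the value to "-1" to remove the limit entirely; on the Consumption plan the platform still enforces its cap.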
As we discussed in our first point on portability: even if you write your core business logic behind an abstraction over the function runtime, you simply cannot avoid some of the inconsistencies you face when moving across platforms, whereas containers offer much, much more consistency.
4) Persistent or Consistent Throughput Workloads
Somewhat related to the previous point, Functions may not be the most economical option when you have persistent or consistent workloads.
This is particularly true if your application has a low, consistent, predictable load. You can benefit from reserving the capacity with a container-based approach versus paying for the consumption based approach with Lambda or Functions.
For example, an application which experiences consistent load 24×7 may have a cutover point where it is more economical to pay for the container versus implementing it as a function.
Take a case where the code is responding to sensor readings. The compute and resource load may be low, but the high frequency of invocation may ultimately be quite expensive in a function runtime.
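A back-of-envelope sketch of that cutover. The rate constants and the flat container price below are illustrative assumptions, not any provider’s actual list prices:

```python
# Illustrative break-even sketch; all prices are made-up assumptions.
def faas_monthly_cost(invocations: int, gb_seconds_per_invocation: float,
                      req_price_per_million: float = 0.20,
                      gb_second_price: float = 0.0000166667) -> float:
    """Consumption cost = per-request charge + memory-time (GB-second) charge."""
    request_charge = (invocations / 1_000_000) * req_price_per_million
    compute_charge = invocations * gb_seconds_per_invocation * gb_second_price
    return request_charge + compute_charge

# A sensor workload: 100 messages/sec, 24x7, each handled in 100 ms at 128 MB.
invocations = 100 * 60 * 60 * 24 * 30               # ~259M invocations/month
cost = faas_monthly_cost(invocations, 0.125 * 0.1)  # 0.0125 GB-seconds each

container_cost = 60.0  # hypothetical flat monthly price for a small container
```

Under these assumptions the high-frequency sensor workload costs roughly $106 per month on the consumption model versus $60 flat for a container, so the cutover favors the container.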
5) Memory or Compute Sensitive Workloads
While AWS Lambda, Azure Functions, and Google Cloud Functions provide some options for tuning capacity through memory and virtual CPU settings, container runtimes give you richer, more fine-grained control.
For example, with an EC2 deployment model on AWS, you can pick any EC2 configuration to match your workload. That includes GPU instances for running machine learning workloads.
Because Function as a Service offerings are priced by a memory-time component, compute operations which have a high memory-time profile may benefit from a container pricing model.
6) Legacy Apps and Lift and Shift Scenarios
With Functions as a Service, it’s always going to be a re-write if you have legacy applications.
Containers offer a middle ground between virtual machines and serverless and can offer a much easier path to deploying and scaling legacy apps in the cloud, especially for daemons.
In many cases, you may be able to simply lift-and-shift your legacy applications into the cloud.
If you have code that runs as a daemon, it’s probably easier and more cost effective to containerize it than to rebuild it.
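A lift-and-shift for a daemon can be as simple as copying the existing binary and its configuration into an image; a hypothetical sketch (the paths and names are made up):

```dockerfile
# Package an existing daemon unchanged; no rewrite of the app itself
FROM debian:bookworm-slim
COPY ./legacy-daemon /usr/local/bin/legacy-daemon
COPY ./daemon.conf /etc/legacy-daemon/daemon.conf
# The daemon runs in the foreground as the container's main process
CMD ["/usr/local/bin/legacy-daemon", "--config", "/etc/legacy-daemon/daemon.conf"]
```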
7) Broader Programming Language Support
Function as a Service platforms are generally restricted to a few programming languages and often have different levels of support for these languages.
So if you are using a language like C++, Rust, or Erlang, you’re out of luck!
With containers, you can write your application logic in any language.
And it’s not just limited to programming languages. Want to use a specialized database that doesn’t have a managed offering on your cloud provider? Deploy it in a container!
8) Simplified Ops…In Some Cases
Because you can deploy nearly anything in a container, it can be easier to build your entire system architecture using containers rather than mucking around with platform specific CLIs or infrastructure as code or markup.
I’ve met very few people who actually like ARM templates or CloudFormation templates.
If you’re not using platform managed services for your database, for example, you can deploy the database fully pre-packaged and pre-configured in a container rather than instantiating and managing a VM instance.
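For example, a database can ship as a container definition next to the rest of the system; a hypothetical Compose sketch (the image and settings are illustrative):

```yaml
services:
  db:
    image: postgres:16            # pre-packaged database, no VM to manage
    environment:
      POSTGRES_PASSWORD: example  # illustrative only; use a secret in practice
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```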
To summarize, I think that there are at least 8 facets we can use when deciding between a container based approach or a Function as a Service approach for your applications.
Let’s review the 8 facets:
- Portability
- Multi-cloud scenarios
- Long running application logic
- Persistent or consistent throughput workloads
- Memory or CPU intensive workloads
- Legacy apps or lift and shift scenarios
- Broader programming language support
- Simplified operations
Some of these could be instant deal-breakers: portability and multi-cloud, for example.
Some of these you could compromise on. For example, programming language support: you could just pick another language. Legacy apps: maybe it’s time for a rewrite anyway.
It may also be the case that you adopt a hybrid approach: run particularly bursty logic on Functions as a Service and lift-and-shift parts of your legacy apps as containers.
That’s my conclusion.
If you have other thoughts, opinions, or ideas, share your comments below. I would love to hear about your experiences with containers versus Functions as a Service for building modern, scalable, distributed web apps and APIs.