.NET 6 Web APIs with OpenAPI TypeScript Client Generation
It’s the year 2022. You’re building a new web API from the ground up.
Which approach should you pick for building your API?
- gRPC?
- GraphQL?
- REST?
Some would make the case that it’s time to put REST…out to rest, but I beg to differ.
(If you just want to see the code, jump to my sample application on GitHub)
Why Not gRPC?
REST is a style of accessing remote server resources using HTTP semantics. As such, REST itself enforces no schema, unlike technologies such as SOAP and WSDL. While this provides great flexibility in building APIs, it can be challenging in terms of productivity.
gRPC, on the other hand, takes a schema-driven approach and creates strong contracts that can increase productivity.
The problem with gRPC APIs for the web is that it feels like it’s still a year or two away. Namely, browsers don’t yet expose the low-level HTTP/2 control that gRPC requires. As a result, building web APIs with gRPC currently requires middleware or a proxy that translates the browser’s HTTP/1.1 traffic into HTTP/2 so it can be consumed by a server-side gRPC endpoint.
This is from the gRPC blog…in 2019:
It is currently impossible to implement the HTTP/2 gRPC spec in the browser, as there is simply no browser API with enough fine-grained control over the requests. For example: there is no way to force the use of HTTP/2, and even if there was, raw HTTP/2 frames are inaccessible in browsers.
The blog itself isn’t encouraging either: the number of posts dropped off significantly in 2021. So make of that what you will.
My take is that in 2022, I would not choose gRPC for a front-end API (it’s a great choice for a back-end API).
Why Not GraphQL?
Like gRPC, GraphQL provides a schema-driven approach to building APIs. On top of that, GraphQL provides much richer capabilities for interacting with your back-end APIs.
The problem with GraphQL really boils down to one thing in my opinion: the complexity cliff. As your application approaches a certain level of complexity, your GraphQL layer’s complexity does not scale linearly and you’re quickly facing a cliff that is difficult for a small team to manage. The initial productivity afforded by the schema-driven approach starts to drop off as your team starts to grapple with the challenges around managing performance, security, and scalability in GraphQL.
I think that for large enterprises, GraphQL is incredibly powerful as a federation layer over APIs and internal endpoints. AWS AppSync is a great example, as it provides a single entry point to access nearly any resource sitting in your AWS deployment. To me, GraphQL makes the most sense for large enterprises with sprawling systems developed by a myriad of discrete teams. A well-architected, centrally managed GraphQL interface can be the layer that unifies these otherwise disparate systems and endpoints.
For small teams, the operational considerations for getting it right at scale are very challenging based on my experience.
Why REST?
If you’ve read my previous post on accidental complexity and YAGNI, then you know that I have a penchant for simple, stupid, mature technologies that are hard to get wrong and have complexity curves that scale linearly with the application.
To that end, REST is:
- Proven, battle-tested, and very mature at this point
- Widely adopted, well known, and easy to hire for
- Easy to reason about
- Relatively transparent as far as flow of data and control; it’s just HTTP request/response with little fanfare which makes it easy to profile performance, trace, and secure
- Well supported in terms of tooling with no specialized tools needed to debug or interact with REST endpoints
- Not going away any time soon
While there are a variety of places where REST comes up short against GraphQL and gRPC, one of the biggest is that REST is a style of interaction with a remote resource over HTTP; it does not prescribe any particular schema mechanism. Without a schema, developer productivity suffers when interacting with a REST API.
Enter OpenAPI, a Linux Foundation project. It layers a schema on top of REST web services and brings many of the developer-productivity benefits associated with GraphQL and gRPC. Specifically, it exposes a schema file that allows tooling to automatically generate strongly typed clients. That same schema file can also be used to automatically generate documentation using tools like ReDoc, Widdershins, and RapiDoc.
And one extra nice thing about starting with REST is that if you find yourself needing GraphQL in the future, you can always add resolvers to your REST endpoints or generate GraphQL schemas from OpenAPI schemas.
Working with .NET 6 Web APIs and OpenAPI Tooling
To extract the productivity benefits of working with REST APIs, we need some tooling support to:
- Automatically generate a schema from our REST APIs
- Use that schema to automatically generate client code
The .NET 6 Web API project template ships with OpenAPI support already built in. Our goal is to extend that to first generate a schema file at build time. To do that, we can follow this guide from Khalid Abuhakmeh.
(For a full walkthrough, see my sample GitHub project with .NET 6 and Svelte)
First, install the tooling to generate the schema at build time:
```bash
dotnet new tool-manifest
dotnet tool install Swashbuckle.AspNetCore.Cli
```
Then we update the `.csproj` file to execute the CLI on build:
```xml
<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Swashbuckle.AspNetCore" Version="6.2.3" />
  </ItemGroup>

  <Target Name="OpenAPI" AfterTargets="Build" Condition="$(Configuration)=='Debug'">
    <Exec Command="dotnet swagger tofile --output ../web/references/swagger.yaml --yaml $(OutputPath)$(AssemblyName).dll v1" WorkingDirectory="$(ProjectDir)" />
    <Exec Command="dotnet swagger tofile --output ../web/references/swagger.json $(OutputPath)$(AssemblyName).dll v1" WorkingDirectory="$(ProjectDir)" />
  </Target>

</Project>
```
In this case, the `--output ../web/references/swagger.yaml` argument references a top-level directory in a mono-repo setup where our static web front-end client is located.
Now when we build our project, the schema gets automatically generated from our codebase.
Next, we want to be able to generate a TypeScript client and strongly typed data model automatically from this schema.
To do so, we’ll need to use the OpenAPI TypeScript Codegen project.
Using `yarn` (or `npm`), we simply install the tooling and can automatically generate our client code:
```bash
yarn add --dev openapi-typescript-codegen
yarn openapi --input references/swagger.json \
  --output references/codegen \
  --client axios \
  --postfix Service \
  --useOptions \
  --useUnionTypes
```
And if we add this to our `package.json`, we can automatically generate our strongly typed front-end client and data model in one go:
```json
{
  "scripts": {
    "codegen": "cd ../api && dotnet build && cd ../web && yarn openapi --input references/swagger.json --output references/codegen --client axios --postfix Service --useOptions --useUnionTypes"
  }
}
```
Now when we run a command like `yarn run codegen`, it will automatically rebuild our API, generate a new schema, and generate a new front-end client and data model.
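For the default WeatherForecast endpoint, the generated data model looks roughly like this (an abbreviated sketch; the exact fields, optionality, and nullability depend on your C# models and the codegen version):

```typescript
// references/codegen/models/WeatherForecast.ts (generated file, sketch only; do not edit by hand)
// Each property mirrors the corresponding member of the C# WeatherForecast type exposed in the schema.
export type WeatherForecast = {
  date?: string;
  temperatureC?: number;
  temperatureF?: number;
  summary?: string | null;
};
```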
We can use our client like so:
```typescript
// Import our client
import {
  OpenAPI,
  WeatherForecast,
  WeatherForecastService,
} from "../references/codegen/index";

OpenAPI.BASE = "https://localhost:7277"; // Set this to match your local API endpoint.

// Async function
async function loadForecast(): Promise<WeatherForecast[]> {
  return await WeatherForecastService.getWeatherForecast();
}
```
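From there, the typed results can be consumed anywhere in the front end. A hypothetical, framework-agnostic sketch (in the sample project this would live in a Svelte component, and it assumes `OpenAPI.BASE` has been configured as above):

```typescript
import { WeatherForecastService } from "../references/codegen/index";

// Fetch and log the forecast using the generated, strongly typed service.
async function logForecast(): Promise<void> {
  const forecasts = await WeatherForecastService.getWeatherForecast();
  for (const forecast of forecasts) {
    // Property names and types come straight from the generated WeatherForecast model.
    console.log(`${forecast.date}: ${forecast.temperatureC}°C (${forecast.summary ?? "no summary"})`);
  }
}

logForecast().catch((err) => console.error("Failed to load forecast", err));
```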
This brings the productivity of a code-first approach while providing the benefits of a schema-based approach, such as strongly typed client and data model generation that can boost front-end development productivity. Incorporating documentation tools such as ReDoc or RapiDoc (or the out-of-the-box Swagger UI that ships with .NET Web APIs) further boosts productivity when interacting with the API.
I would argue that REST’s explicitness also aids productivity: because REST APIs tend to be flatter than GraphQL, for example, consumers can easily see the capabilities of the API.
These days, REST doesn’t quite have the cachet of gRPC or GraphQL; however, it is more productive than ever while remaining dead simple for teams of any experience level to build solutions of any size. My favorite part about REST is that it’s very difficult to screw up while still being relatively easy to layer complexity on over time as necessary (e.g. proxying through a GraphQL resolver in the future).