Blogger is terrific. However, after nine great years, I decided to replatform my blog to Docusaurus. You can find it at: https://johnnyreilly.com/. See you there!
Google APIs: authentication with TypeScript
Google has a wealth of APIs which we can interact with. At the time of writing, there are more than two hundred available, including YouTube, Google Calendar and GMail (alongside many others). To integrate with these APIs, it's necessary to authenticate and then use that credential with the API. This post will take you through how to do just that using TypeScript. It will also demonstrate how to use one of those APIs: the Google Calendar API.
Creating an OAuth 2.0 Client ID on the Google Cloud Platform
The first thing we need to do is go to the Google Cloud Platform to create a project. The name of the project doesn't matter particularly, although it can be helpful to name the project to align with the API you're intending to consume. That's what we'll do here as we plan to integrate with the Google Calendar API:
The project is the container in which the OAuth 2.0 Client ID will be housed. Now we've created the project, let's go to the credentials screen and create an OAuth Client ID using the Create Credentials dropdown:
You'll likely have to create an OAuth consent screen before you can create the OAuth Client ID. Going through the journey of doing that feels a little daunting as many questions have to be answered. This is because the consent screen can be used for a variety of purposes beyond the API authentication we're looking at today.
When challenged, you can generally accept the defaults and proceed. The user type you'll require will be "External":
You'll also be required to create an app registration - all that's really required here is a name (which can be anything) and your email address:
You don't need to worry about scopes. You can either plan to publish the app, or alternatively set yourself up to be a test user - you'll need to do one of these in order to authenticate with the app. Continuing to the end of the journey should provide you with the OAuth consent screen, which you need before you can then create the OAuth Client ID.
Creating the OAuth Client ID is slightly confusing as the "Application type" required is "TVs and Limited Input devices".
We're using this type of application as we want to acquire a refresh token, which we'll be able to use in future to acquire access tokens, which in turn will be used to access the Google APIs.
Once it's created, you'll be able to download the Client ID from the Google Cloud Platform:
When you download it, it should look something like this:
{
"installed": {
"client_id": "CLIENT_ID",
"project_id": "PROJECT_ID",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "CLIENT_SECRET",
"redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
}
}
You'll need the `client_id`, `client_secret` and `redirect_uris` - but keep them in a safe place and don't commit `client_id` and `client_secret` to source control!
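One way to keep them out of source control is to read them from environment variables at runtime. Here's a minimal sketch (the `GOOGLE_CLIENT_ID` / `GOOGLE_CLIENT_SECRET` variable names are my own, not a Google convention):

```ts
// a sketch: pull the secrets from the environment rather than hard-coding them
// (GOOGLE_CLIENT_ID / GOOGLE_CLIENT_SECRET are assumed names, not a Google convention)
const clientId = process.env.GOOGLE_CLIENT_ID;
const clientSecret = process.env.GOOGLE_CLIENT_SECRET;

if (!clientId || !clientSecret) {
  throw new Error('Set GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET before running');
}
```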
Acquiring a refresh token
Now we've got our `client_id` and `client_secret`, we're ready to write a simple Node command line application which we can use to obtain a refresh token. This is actually a multi-stage process that will end up looking like this:
- Provide the Google authentication provider with the `client_id` and `client_secret`; in return it will provide an authentication URL.
- Open the authentication URL in the browser and grant consent; the provider will hand over a code.
- Provide the Google authentication provider with the `client_id`, `client_secret` and the code; it will acquire and provide you with a refresh token.
Let's start coding. We'll initialise a TypeScript Node project like so:
mkdir src
cd src
npm init -y
npm install googleapis ts-node typescript yargs @types/yargs @types/node
npx tsc --init
We've added a number of dependencies that will allow us to write a TypeScript Node command line application. We've also added a dependency on the `googleapis` package, which describes itself as:
Node.js client library for using Google APIs. Support for authorization and authentication with OAuth 2.0, API Keys and JWT tokens is included.
We're going to make use of the OAuth 2.0 part. We'll start our journey by creating a file called `google-api-auth.ts`:
import { getArgs, makeOAuth2Client } from './shared';
async function getToken() {
const { clientId, clientSecret, code } = await getArgs();
const oauth2Client = makeOAuth2Client({ clientId, clientSecret });
if (code) await getRefreshToken(code);
else getAuthUrl();
async function getAuthUrl() {
const url = oauth2Client.generateAuthUrl({
// 'online' (default) or 'offline' (gets refresh_token)
access_type: 'offline',
// scopes are documented here: https://developers.google.com/identity/protocols/oauth2/scopes#calendar
scope: [
'https://www.googleapis.com/auth/calendar',
'https://www.googleapis.com/auth/calendar.events',
],
});
console.log(`Go to this URL to acquire a refresh token:\n\n${url}\n`);
}
async function getRefreshToken(code: string) {
const token = await oauth2Client.getToken(code);
console.log(token);
}
}
getToken();
And a common file named `shared.ts`, which `google-api-auth.ts` imports and which we'll re-use later:
import { google } from 'googleapis';
import yargs from 'yargs/yargs';
const { hideBin } = require('yargs/helpers');
export async function getArgs() {
const argv = await Promise.resolve(yargs(hideBin(process.argv)).argv);
const clientId = argv['clientId'] as string;
const clientSecret = argv['clientSecret'] as string;
const code = argv.code as string | undefined;
const refreshToken = argv.refreshToken as string | undefined;
const test = argv.test as boolean;
if (!clientId) throw new Error('No clientId ');
console.log('We have a clientId');
if (!clientSecret) throw new Error('No clientSecret');
console.log('We have a clientSecret');
if (code) console.log('We have a code');
if (refreshToken) console.log('We have a refreshToken');
return { code, clientId, clientSecret, refreshToken, test };
}
export function makeOAuth2Client({
clientId,
clientSecret,
}: {
clientId: string;
clientSecret: string;
}) {
return new google.auth.OAuth2(
/* YOUR_CLIENT_ID */ clientId,
/* YOUR_CLIENT_SECRET */ clientSecret,
/* YOUR_REDIRECT_URL */ 'urn:ietf:wg:oauth:2.0:oob'
);
}
The `getToken` function above does these things:
- If given a `client_id` and `client_secret`, it will obtain an authentication URL.
- If given a `client_id`, `client_secret` and `code`, it will obtain a refresh token (scoped to access the Google Calendar API).
We'll add an entry to our `package.json` which will allow us to run our console app:
"google-api-auth": "ts-node google-api-auth.ts"
Now we're ready to acquire the refresh token. We'll run the following command (substituting in the appropriate values):
npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET
Click on the URL that is generated in the console; it should open up a consent screen in the browser which looks like this:
Authenticate and grant consent and you should get a code:
Then (quickly) paste the acquired code into the following command:
npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --code THISISTHECODE
The `refresh_token` (alongside much else) will be printed to the console. Grab it and put it somewhere secure. Again, no storing in source control!
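Incidentally, if you'd rather log just the token than the whole response, `getToken` resolves to an object whose `tokens` property carries the credentials. A sketch of a tweaked `getRefreshToken`, assuming the response shape of the `googleapis` version used above:

```ts
async function getRefreshToken(code: string) {
  // the response is shaped { tokens, res }; tokens.refresh_token is the part to keep safe
  const { tokens } = await oauth2Client.getToken(code);
  console.log(`refresh_token: ${tokens.refresh_token}`);
}
```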
It's worth taking a moment to reflect on what we've done. We've acquired a refresh token, which involved a certain amount of human interaction. We've had to run a console command, do some work in a browser and run another command. You wouldn't want to do this repeatedly because it involves human interaction. Intentionally, it cannot be automated. However, once you've acquired the refresh token, you can use it repeatedly until it expires (which may be never, or at least years in the future). So once you have the refresh token, and you've stored it securely, you have what you need to be able to automate an API interaction.
Accessing the Google Calendar API
Let's test out our refresh token by attempting to access the Google Calendar API. We'll create a `calendar.ts` file:
import { google } from 'googleapis';
import { getArgs, makeOAuth2Client } from './shared';
async function makeCalendarClient() {
const { clientId, clientSecret, refreshToken } = await getArgs();
const oauth2Client = makeOAuth2Client({ clientId, clientSecret });
oauth2Client.setCredentials({
refresh_token: refreshToken,
});
const calendarClient = google.calendar({
version: 'v3',
auth: oauth2Client,
});
return calendarClient;
}
async function getCalendar() {
const calendarClient = await makeCalendarClient();
const { data: calendars, status } = await calendarClient.calendarList.list();
if (status === 200) {
console.log('calendars', calendars);
} else {
console.log('there was an issue...', status);
}
}
getCalendar();
The `getCalendar` function above uses the `client_id`, `client_secret` and `refresh_token` to access the Google Calendar API and retrieve the list of calendars.
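Having proved the credential works, we could exercise the same client further. As a sketch (my own addition, relying on the `calendar.events` scope we requested earlier), listing upcoming events on the primary calendar might look like this:

```ts
async function getUpcomingEvents() {
  const calendarClient = await makeCalendarClient();
  // fetch the next 10 events on the primary calendar, soonest first
  const { data } = await calendarClient.events.list({
    calendarId: 'primary',
    timeMin: new Date().toISOString(),
    maxResults: 10,
    singleEvents: true,
    orderBy: 'startTime',
  });
  for (const event of data.items ?? []) {
    console.log(event.summary, event.start?.dateTime ?? event.start?.date);
  }
}
```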
We'll add an entry to our `package.json` which will allow us to run this function:
"calendar": "ts-node calendar.ts",
Now we're ready to test `calendar.ts`. We'll run the following command (substituting in the appropriate values):
npm run calendar -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --refreshToken REFRESH_TOKEN
When we run for the first time, we may encounter a self-explanatory message which tells us that we need to enable the Calendar API for our application:
(node:31563) UnhandledPromiseRejectionWarning: Error: Google Calendar API has not been used in project 77777777777777 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/calendar-json.googleapis.com/overview?project=77777777777777 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Once enabled, we can run successfully for the first time. Consequently we should see something like this showing up in the console:
This demonstrates that we're successfully integrating with a Google API using our refresh token.
Today the Google Calendar API, tomorrow the (Google API) world!
What we've demonstrated here is integrating with the Google Calendar API. However, that is not the limit of what we can do. As we discussed earlier, Google has more than two hundred APIs we can interact with, and the key to that interaction is following the same steps for authentication that this post outlines.
Let's imagine that we want to integrate with the YouTube API or the GMail API. We'd be able to follow the steps in this post, using different scopes for the refresh token appropriate to the API, and build an integration against that API. Take a look at the available APIs here.
The approach outlined by this post is the key to integrating with a multitude of Google APIs. Happy integrating!
The idea for this post was sparked by Martin Fowler's post on the topic, which comes from a Ruby angle.
Publish Azure Static Web Apps with Bicep and Azure DevOps
This post demonstrates how to deploy Azure Static Web Apps using Bicep and Azure DevOps. It includes a few workarounds for the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.
Bicep template
The first thing we're going to do is create a folder where our Bicep file for deploying our Azure Static Web App will live:
mkdir infra/static-web-app -p
Then we'll create a `main.bicep` file:
param repositoryUrl string
param repositoryBranch string
param location string = 'westeurope'
param skuName string = 'Free'
param skuTier string = 'Free'
param appName string
resource staticWebApp 'Microsoft.Web/staticSites@2020-12-01' = {
name: appName
location: location
sku: {
name: skuName
tier: skuTier
}
properties: {
// The provider, repositoryUrl and branch fields are required for successive deployments to succeed
// for more details see: https://github.com/Azure/static-web-apps/issues/516
provider: 'DevOps'
repositoryUrl: repositoryUrl
branch: repositoryBranch
buildProperties: {
skipGithubActionWorkflowGeneration: true
}
}
}
output deployment_token string = listSecrets(staticWebApp.id, staticWebApp.apiVersion).properties.apiKey
There are some things to draw attention to in the code above:
- The `provider`, `repositoryUrl` and `branch` fields are required for successive deployments to succeed. In our case we're deploying via Azure DevOps and so our provider is `'DevOps'`. For more details, look at this issue.
- We're creating a `deployment_token` output which we'll need in order that we can deploy into the Azure Static Web App resource.
Static Web App
In order that we can test out Azure Static Web Apps, what we need is a static web app. You could use pretty much anything here; we're going to use Docusaurus. We'll execute this single command:
npx @docusaurus/init@latest init static-web-app classic
Which will scaffold a Docusaurus site in a folder named `static-web-app`. We don't need to change it any further; let's just see if we can deploy it.
Azure Pipeline
We're going to add an `azure-pipelines.yml` file which Azure DevOps can use to power a pipeline:
trigger:
- main
pool:
vmImage: ubuntu-latest
steps:
- checkout: self
submodules: true
- bash: az bicep build --file infra/static-web-app/main.bicep
displayName: 'Compile Bicep to ARM'
- task: AzureResourceManagerTemplateDeployment@3
name: DeployStaticWebAppInfra
displayName: Deploy Static Web App infra
inputs:
deploymentScope: Resource Group
azureResourceManagerConnection: $(serviceConnection)
subscriptionId: $(subscriptionId)
action: Create Or Update Resource Group
resourceGroupName: $(azureResourceGroup)
location: $(location)
templateLocation: Linked artifact
csmFile: 'infra/static-web-app/main.json' # created by bash script
overrideParameters: >-
-repositoryUrl $(repo)
-repositoryBranch $(Build.SourceBranchName)
-appName $(staticWebAppName)
deploymentMode: Incremental
deploymentOutputs: deploymentOutputs
- task: PowerShell@2
name: 'SetDeploymentOutputVariables'
displayName: 'Set Deployment Output Variables'
inputs:
targetType: inline
script: |
$armOutputObj = '$(deploymentOutputs)' | ConvertFrom-Json
$armOutputObj.PSObject.Properties | ForEach-Object {
$keyname = $_.Name
$value = $_.Value.value
# Creates a standard pipeline variable
Write-Output "##vso[task.setvariable variable=$keyName;issecret=true]$value"
# Display keys in pipeline
Write-Output "output variable: $keyName"
}
pwsh: true
- task: AzureStaticWebApp@0
name: DeployStaticWebApp
displayName: Deploy Static Web App
inputs:
app_location: 'static-web-app'
# api_location: 'api' # we don't have an API
output_location: 'build'
azure_static_web_apps_api_token: $(deployment_token) # captured from deploymentOutputs
When the pipeline is run, it does the following:
- Compiles our Bicep into an ARM template
- Deploys the compiled ARM template to Azure
- Captures the deployment outputs (essentially the `deployment_token`) and converts them into variables to use in the pipeline
- Deploys our Static Web App using the `deployment_token`
The pipeline depends upon a number of variables:
- `azureResourceGroup` - the name of your resource group in Azure where the app will be deployed
- `location` - where your app is deployed, eg `northeurope`
- `repo` - the URL of your repository in Azure DevOps, eg https://dev.azure.com/johnnyreilly/_git/azure-static-web-apps
- `serviceConnection` - the name of your AzureRM service connection in Azure DevOps
- `staticWebAppName` - the name of your static web app, eg `azure-static-web-apps-johnnyreilly`
- `subscriptionId` - your Azure subscription id from the Azure Portal
A successful pipeline looks something like this:
What you might notice is that the `AzureStaticWebApp` task is itself installing and building our application. This is handled by Microsoft Oryx. The upshot of this is that we don't need to manually run `npm install` and `npm run build` ourselves; the `AzureStaticWebApp` task will take care of it for us.
Finally, let's see if we've deployed something successfully…
We have! It's worth noting that you'll likely want to give your Azure Static Web App a lovelier URL, and perhaps even put it behind Azure Front Door as well.
"Provider is invalid" workaround 2
Shane Neff was attempting to follow the instructions in this post and encountered issues. He shared his struggles with me as he encountered the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.
He was good enough to share his solution as well, which is inserting this task at the start of the pipeline (before the `az bicep build` step):
- task: AzureCLI@2
inputs:
azureSubscription: '<name of your service connection>'
scriptType: 'bash'
scriptLocation: 'inlineScript'
inlineScript: 'az staticwebapp disconnect -n <name of your app>'
I haven't had the problems that Shane has had myself, but I wanted to share his fix for the people out there who almost certainly are bumping into this.
TypeScript, abstract classes, and constructors
TypeScript has the ability to define classes as abstract. This means they cannot be instantiated directly, only non-abstract subclasses can be. Let's take a look at what this means when it comes to constructor usage.
Making a scratchpad
In order that we can dig into this, let's create ourselves a scratchpad project to work with. We're going to create a node project and install TypeScript as a dependency.
mkdir ts-abstract-constructors
cd ts-abstract-constructors
npm init --yes
npm install typescript @types/node --save-dev
We now have a `package.json` file set up. We need to initialise a TypeScript project as well:
npx tsc --init
This will give us a `tsconfig.json` file that will drive configuration of TypeScript. By default TypeScript transpiles to an older version of JavaScript that predates classes. So we'll update the config to target a newer version of the language that does include them:
"target": "es2020",
"lib": ["es2020"],
Let's create ourselves a TypeScript file called `index.ts`. The name is not significant; we just need a file to develop in.
Finally we'll add a script to our `package.json` that compiles our TypeScript to JavaScript and then runs the JS with node:
"start": "tsc --project \".\" && node index.js"
Making an abstract class
Now we're ready. Let's add an abstract class with a constructor to our `index.ts` file:
abstract class ViewModel {
id: string;
constructor(id: string) {
this.id = id;
}
}
Consider the `ViewModel` class above. Let's say we're building some kind of CRUD app; we'll have different views. Each of those views will have a corresponding viewmodel, which is a subclass of the `ViewModel` abstract class. The `ViewModel` class has a mandatory `id` parameter in the constructor. This is to ensure that every viewmodel has an `id` value. If this were a real app, `id` would likely be the value with which an entity was looked up in some kind of database.
Importantly, all subclasses of `ViewModel` should either:
- not implement a constructor at all, leaving the base class constructor to become the default constructor of the subclass, or
- implement their own constructor which invokes the `ViewModel` base class constructor.
Taking our abstract class for a spin
Now we have it, let's see what we can do with our abstract class. First of all, can we instantiate our abstract class? We shouldn't be able to do this:
const viewModel = new ViewModel('my-id');
console.log(`the id is: ${viewModel.id}`);
And sure enough, running `npm start` results in the following error (which is also being reported by our editor, VS Code):
index.ts:9:19 - error TS2511: Cannot create an instance of an abstract class.
const viewModel = new ViewModel('my-id');
Tremendous. However, it's worth remembering that `abstract` is a TypeScript concept. When we compile our TS, although it's throwing a compilation error, it still transpiles an `index.js` file that looks like this:
'use strict';
class ViewModel {
constructor(id) {
this.id = id;
}
}
const viewModel = new ViewModel('my-id');
console.log(`the id is: ${viewModel.id}`);
As we can see, there's no mention of `abstract`; it's just a straightforward `class`. In fact, if we directly execute the file with `node index.js` we can see an output of:
the id is: my-id
So the transpiled code is valid JavaScript even if the source code isn't valid TypeScript. This all reminds us that `abstract` is a TypeScript construct.
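If we wanted the restriction to hold at runtime as well, one option (my own addition; not something the `abstract` keyword gives us) is a `new.target` guard in the base constructor:

```ts
abstract class ViewModel {
  id: string;
  constructor(id: string) {
    // new.target is the constructor that was directly invoked with `new`;
    // if it's ViewModel itself, someone has side-stepped the compile-time check
    if (new.target === ViewModel) {
      throw new Error('ViewModel is abstract; instantiate a subclass instead');
    }
    this.id = id;
  }
}
```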
Subclassing without a new constructor
Let's now create our first subclass of `ViewModel` and attempt to instantiate it:
class NoNewConstructorViewModel extends ViewModel {}
// error TS2554: Expected 1 arguments, but got 0.
const viewModel1 = new NoNewConstructorViewModel();
const viewModel2 = new NoNewConstructorViewModel('my-id');
As the TypeScript compiler tells us, the second of these instantiations is legitimate, as it relies upon the constructor from the base class, as we'd hope. The first is not, as there is no parameterless constructor.
Subclassing with a new constructor
Having done that, let's try subclassing and implementing a new constructor which has two parameters (to differentiate from the constructor we're overriding):
class NewConstructorViewModel extends ViewModel {
data: string;
constructor(id: string, data: string) {
super(id);
this.data = data;
}
}
// error TS2554: Expected 2 arguments, but got 0.
const viewModel3 = new NewConstructorViewModel();
// error TS2554: Expected 2 arguments, but got 1.
const viewModel4 = new NewConstructorViewModel('my-id');
const viewModel5 = new NewConstructorViewModel('my-id', 'important info');
Again, only one of the attempted instantiations is legitimate. `viewModel3` is not, as there is no parameterless constructor. `viewModel4` is not, as we have overridden the base class constructor with our new one that has two parameters. Hence `viewModel5` is our "Goldilocks" instantiation; it's just right!
It's also worth noting that we're calling `super` in the `NewConstructorViewModel` constructor. This invokes the constructor of the `ViewModel` base (or "super") class. TypeScript enforces that we pass the appropriate arguments (in our case a single `string`).
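TypeScript polices the reverse case too: if a subclass constructor forgets to call `super` entirely, compilation fails. A quick sketch (the exact error text below is what recent TypeScript versions report):

```ts
class ForgetfulViewModel extends ViewModel {
  data: string;
  constructor(id: string, data: string) {
    // error TS2377: Constructors for derived classes must contain a 'super' call.
    this.data = data;
  }
}
```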
Wrapping it up
We've seen that TypeScript ensures correct usage of constructors when we have an abstract class. Importantly, all subclasses of abstract classes either:
- do not implement a constructor at all, leaving the base class constructor (the abstract constructor) to become the default constructor of the subclass, or
- implement their own constructor which invokes the base (or "super") class constructor with the correct arguments.
C# 9 in-process Azure Functions
C# 9 has some amazing features. Azure Functions have two modes: isolated and in-process. Whilst isolated supports .NET 5 (and hence C# 9), in-process supports .NET Core 3.1 (C# 8). This post shows how we can use C# 9 with in-process Azure Functions running on .NET Core 3.1.
Azure Functions: in-process and isolated
Historically .NET Azure Functions have been in-process. This changed with .NET 5 where a new model was introduced named "isolated". To quote from the roadmap:
Running in an isolated process decouples .NET functions from the Azure Functions host—allowing us to more easily support new .NET versions and address pain points associated with sharing a single process.
However, the initial launch of isolated functions does not have the full level of functionality enjoyed by in-process functions. This will happen, according to the roadmap:
Long term, our vision is to have full feature parity out of process, bringing many of the features that are currently exclusive to the in-process model to the isolated model. We plan to begin delivering improvements to the isolated model after the .NET 6 general availability release.
In the future, in-process functions will be retired in favour of isolated functions. However, it will be .NET 7 (scheduled to ship in November 2022) before that takes place:
As the image taken from the roadmap shows, when .NET 5 shipped, it did not support in-process Azure Functions. When .NET 6 ships in November, it should.
In the meantime, we would like to use C# 9.
Setting up a C# 8 project
Assuming we have the Azure Functions Core Tools installed, let's create a new function project:
func new --worker-runtime dotnet --template "Http Trigger" --name "HelloRecord"
The above command scaffolds out a .NET Core 3.1 Azure Functions project which contains a single Azure function. The `--worker-runtime dotnet` parameter is what causes an in-process .NET Core 3.1 function to be created. You should have a `.csproj` file that looks like this:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
<AzureFunctionsVersion>v3</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
</ItemGroup>
<ItemGroup>
<None Update="host.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Update="local.settings.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
<CopyToPublishDirectory>Never</CopyToPublishDirectory>
</None>
</ItemGroup>
</Project>
We're running with C# 8 and .NET Core 3.1 at this point. What does it take to get us to C# 9?
What does it take to get to C# 9?
There's a great post on Reddit addressing using C# 9 with .NET Core 3.1 which says:
You can use `<LangVersion>9.0</LangVersion>`, and VS even includes support for suggesting a language upgrade. However, there are three categories of features in C#:
1. features that are entirely part of the compiler. Those will work.
2. features that require BCL additions. Since you're on the older BCL, those will need to be backported. For example, to use init and record, you can use https://github.com/manuelroemer/IsExternalInit.
3. features that require runtime additions. Those cannot be added at all. For example, default interface members in C# 8, and covariant return types in C# 9.
Of the above, 1 and 2 add a tremendous amount of value. The features of 3 are great, but more niche. Speaking personally, I care a great deal about Record types. So let's apply this.
Adding C# 9 to the in-process function
To get C# 9 into the mix, we want to make two changes:
- add a `<LangVersion>9.0</LangVersion>` to the `<PropertyGroup>` element of our `.csproj` file
- add a package reference to the `IsExternalInit` package
The applied changes look like this:
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>netcoreapp3.1</TargetFramework>
+ <LangVersion>9.0</LangVersion>
<AzureFunctionsVersion>v3</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
+ <PackageReference Include="IsExternalInit" Version="1.0.1" PrivateAssets="all" />
</ItemGroup>
<ItemGroup>
<None Update="host.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
</None>
<None Update="local.settings.json">
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
<CopyToPublishDirectory>Never</CopyToPublishDirectory>
</None>
</ItemGroup>
</Project>
If we used `dotnet add package IsExternalInit`, we might be using a different syntax in the `.csproj`. Be not afeard - that won't affect usage.
Making a C# 9 program
Now we can theoretically use C# 9…. Let's use C# 9. We'll tweak our HelloRecord.cs
file, add in a simple record
named MessageRecord
and tweak the Run
method to use it:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
namespace tmp
{
public record MessageRecord(string message);
public static class HelloRecord
{
[FunctionName("HelloRecord")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
ILogger log)
{
log.LogInformation("C# HTTP trigger function processed a request.");
string name = req.Query["name"];
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
name = name ?? data?.name;
var responseMessage = new MessageRecord(string.IsNullOrEmpty(name)
? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
: $"Hello, {name}. This HTTP triggered function executed successfully.");
return new OkObjectResult(responseMessage);
}
}
}
If we kick off our function with `func start`, we can see that it compiles, and the output is as we might expect and hope. Likewise, if we try to debug in VS Code, we can:
Best before…
So, we've now a way to use C# 9 (or most of it) with in-process .NET Core 3.1 apps. This should serve until .NET 6 ships in November 2021 and we're able to use C# 9 by default.
The Service Now API and TypeScript Conditional Types
The Service Now REST API is an API which allows you to interact with Service Now. It produces different shaped results based upon the `sysparm_display_value` query parameter. This post looks at how we can model these API results with TypeScript's conditional types, the aim being to minimise repetition whilst remaining strongly typed. This post is specifically about the Service Now API, but the principles around conditional type usage are generally applicable.
The power of a query parameter
There is a query parameter named `sysparm_display_value` which many endpoints in Service Now's Table API support. The docs describe it thus:
Data retrieval operation for reference and choice fields. Based on this value, retrieves the display value and/or the actual value from the database.
Valid values:
- `true`: Returns the display values for all fields.
- `false`: Returns the actual values from the database.
- `all`: Returns both actual and display values.
Let's see what that looks like when it comes to loading a Change Request. Consider the following curls:
# sysparm_display_value=all
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=all" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'
# sysparm_display_value=true
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=true" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'
# sysparm_display_value=false
curl "https://ourcompanyinstance.service-now.com/api/now/table/change_request?sysparm_query=number=CHG0122585&sysparm_limit=1&sysparm_display_value=false" --request GET --header "Accept:application/json" --user 'API_USERNAME':'API_PASSWORD' | jq '.result[0] | { state, sys_id, number, requested_by, reason }'
When executed, they each load the same Change Request from Service Now with a different value for `sysparm_display_value`. You'll notice there's some `jq` in the mix as well. This is because there's a lot of data in a Change Request. Rather than display everything, we're displaying a subset of fields. The first curl has a `sysparm_display_value` value of `all`, the second `true` and the third `false`. What do the results look like?
`sysparm_display_value=all`:
{
"state": {
"display_value": "Closed",
"value": "3"
},
"sys_id": {
"display_value": "4d54d7481b37e010d315cbb5464bcb95",
"value": "4d54d7481b37e010d315cbb5464bcb95"
},
"number": {
"display_value": "CHG0122595",
"value": "CHG0122595"
},
"requested_by": {
"display_value": "Sally Omer",
"link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999",
"value": "b15cf3ebdbe11300f196f3651d961999"
},
"reason": {
"display_value": null,
"value": ""
}
}
`sysparm_display_value=true`:
{
"state": "Closed",
"sys_id": "4d54d7481b37e010d315cbb5464bcb95",
"number": "CHG0122595",
"requested_by": {
"display_value": "Sally Omer",
"link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999"
},
"reason": null
}
`sysparm_display_value=false`:
{
"state": "3",
"sys_id": "4d54d7481b37e010d315cbb5464bcb95",
"number": "CHG0122595",
"requested_by": {
"link": "https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999",
"value": "b15cf3ebdbe11300f196f3651d961999"
},
"reason": ""
}
As you can see, we have the same properties being returned each time, but with a different shape. Let's call out some interesting highlights:
- `requested_by` is always an object which contains `link`. It may also contain `value` and `display_value`, depending upon `sysparm_display_value`.
- `state`, `sys_id`, `number` and `reason` are objects containing `value` and `display_value` when `sysparm_display_value` is `all`. Otherwise, the value of `value` or `display_value` is surfaced up directly; not in an object.
- Most values are strings, even if they represent another data type. So `state.value` is always a stringified number. The only exception to this rule is `reason.display_value`, which can be `null`.
Type Definition time
We want to create type definitions for these API results. We could of course create three different results, but that would involve duplication. Boo! It's worth bearing in mind we're looking at a subset of five properties in this example. In reality, there are many, many properties on a Change Request. Whilst this example is for a subset, if we wanted to go on to create the full type definition the duplication would become very impractical.
What can we do? Well, if all of the underlying properties were of the same type, we could use a generic and be done. But given the underlying types can vary, that's not going to work. We can achieve this, though, by using a combination of generics and conditional types.
Let's begin by creating a string literal type of the possible values of `sysparm_display_value`:
export type DisplayValue = 'all' | 'true' | 'false';
Making a `PropertyValue` type
Next we need to create a type that models the object with `display_value` and `value` properties.
:::info a type for state, sys_id, number and reason
- `state`, `sys_id`, `number` and `reason` are objects containing `value` and `display_value` when `sysparm_display_value` is `'all'`. Otherwise, the value of `value` or `display_value` is surfaced up directly; not in an object.
- Most values are strings, even if they represent another data type. So `state.value` is always a stringified number. The only exception to this rule is `reason.display_value`, which can be `null`.
:::
export interface ValueAndDisplayValue<TValue = string, TDisplayValue = string> {
display_value: TDisplayValue;
value: TValue;
}
Note that this is a generic interface, with a default type of `string` for both `display_value` and `value`. Most of the time `string` is the type in question, so it's great that TypeScript allows us to cut down on the amount of syntax we use.
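To illustrate the defaults in action (these examples are mine, based on the API responses above):

```ts
// relying on the defaults: both value and display_value are strings
const sysIdExample: ValueAndDisplayValue = {
  display_value: '4d54d7481b37e010d315cbb5464bcb95',
  value: '4d54d7481b37e010d315cbb5464bcb95',
};

// reason needs TDisplayValue widened to string | null
const reasonExample: ValueAndDisplayValue<string, string | null> = {
  display_value: null,
  value: '',
};
```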
Now we're going to create our first conditional type:
export type PropertyValue<
TAllTrueFalse extends DisplayValue,
TValue = string,
TDisplayValue = string
> = TAllTrueFalse extends 'all'
? ValueAndDisplayValue<TValue, TDisplayValue>
: TAllTrueFalse extends 'true'
? TDisplayValue
: TValue;
The `PropertyValue` will either be a `ValueAndDisplayValue`, a `TDisplayValue` or a `TValue`, depending upon whether `TAllTrueFalse` is `'all'`, `'true'` or `'false'` respectively. That's hard to grok. Let's look at an example of each of those cases using the `reason` property, which allows a `TValue` of `string` and a `TDisplayValue` of `string | null`:
const reasonAll: PropertyValue<'all', string, string | null> = {
display_value: null,
value: '',
};
const reasonTrue: PropertyValue<'true', string, string | null> = null;
const reasonFalse: PropertyValue<'false', string, string | null> = '';
Consider the type on the left and the value on the right. We're successfully modelling our `PropertyValue`s. I've deliberately picked an edge case example to push our conditional type to its limits.
Service Now Change Request States
Let's look at another usage. We'll create a type that represents the possible values of a Change Request's `state` in Service Now. Do take a moment to appreciate these values. Many engineers were lost in the numerous missions to obtain these rare and secret enums. Alas, the Service Now API docs have some significant gaps.
/** represents the possible Change Request "State" values in Service Now */
export const STATE = {
NEW: '-5',
ASSESS: '-4',
SENT_FOR_APPROVAL: '-3',
SCHEDULED: '-2',
APPROVED: '-1',
WAITING: '1',
IN_PROGRESS: '2',
COMPLETE: '3',
ERROR: '4',
CLOSED: '7',
} as const;
export type State = typeof STATE[keyof typeof STATE];
By combining `State` and `PropertyValue`, we can strongly type the `state` property of Change Requests. Consider:
const stateAll: PropertyValue<'all', State> = {
display_value: 'Closed',
value: '3',
};
const stateTrue: PropertyValue<'true', State> = 'Closed';
const stateFalse: PropertyValue<'false', State> = '3';
With that in place, let's turn our attention to the other natural type, which the `requested_by` property demonstrates.
Making a `LinkValue` type
:::info a type for requested_by
`requested_by` is always an object which contains `link`. It may also contain `value` and `display_value`, depending upon `sysparm_display_value`.
:::
interface Link {
link: string;
}
/** when TAllTrueFalse is 'false' */
export interface LinkAndValue extends Link {
value: string;
}
/** when TAllTrueFalse is 'true' */
export interface LinkAndDisplayValue extends Link {
display_value: string;
}
/** when TAllTrueFalse is 'all' */
export interface LinkValueAndDisplayValue
extends LinkAndValue,
LinkAndDisplayValue {}
The three types above model the different scenarios. Now we need a conditional type to make use of them:
export type LinkValue<TAllTrueFalse extends DisplayValue> =
TAllTrueFalse extends 'all'
? LinkValueAndDisplayValue
: TAllTrueFalse extends 'true'
? LinkAndDisplayValue
: LinkAndValue;
This is hopefully simpler to read than the `PropertyValue` type, and if you look at the examples below you can see what usage looks like:
const requested_byAll: LinkValue<'all'> = {
display_value: 'Sally Omer',
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
value: 'b15cf3ebdbe11300f196f3651d961999',
};
const requested_byTrue: LinkValue<'true'> = {
display_value: 'Sally Omer',
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
};
const requested_byFalse: LinkValue<'false'> = {
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
value: 'b15cf3ebdbe11300f196f3651d961999',
};
Making our complete type
With these primitives in place, we can now build ourselves a (cut-down) type that models a Change Request:
export interface ServiceNowChangeRequest<TAllTrueFalse extends DisplayValue> {
state: PropertyValue<TAllTrueFalse, State>;
sys_id: PropertyValue<TAllTrueFalse>;
number: PropertyValue<TAllTrueFalse>;
requested_by: LinkValue<TAllTrueFalse>;
reason: PropertyValue<TAllTrueFalse, string, string | null>;
// there are *way* more properties in reality
}
This is a generic type which will accept `'all'`, `'true'` or `'false'`, and will use that type to drive the types of the properties inside the object. And now we have successfully typed our Service Now Change Request, thanks to TypeScript's conditional types.
To test it out, let's take the JSON responses we got back from our curls at the start, and see if we can make `ServiceNowChangeRequest`s with them.
const changeRequestFalse: ServiceNowChangeRequest<'false'> = {
state: '3',
sys_id: '4d54d7481b37e010d315cbb5464bcb95',
number: 'CHG0122595',
requested_by: {
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
value: 'b15cf3ebdbe11300f196f3651d961999',
},
reason: '',
};
const changeRequestTrue: ServiceNowChangeRequest<'true'> = {
state: 'Closed',
sys_id: '4d54d7481b37e010d315cbb5464bcb95',
number: 'CHG0122595',
requested_by: {
display_value: 'Sally Omer',
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
},
reason: null,
};
const changeRequestAll: ServiceNowChangeRequest<'all'> = {
state: {
display_value: 'Closed',
value: '3',
},
sys_id: {
display_value: '4d54d7481b37e010d315cbb5464bcb95',
value: '4d54d7481b37e010d315cbb5464bcb95',
},
number: {
display_value: 'CHG0122595',
value: 'CHG0122595',
},
requested_by: {
display_value: 'Sally Omer',
link: 'https://ourcompanyinstance.service-now.com/api/now/table/sys_user/b15cf3ebdbe11300f196f3651d961999',
value: 'b15cf3ebdbe11300f196f3651d961999',
},
reason: {
display_value: null,
value: '',
},
};
We can! Do take a look at this in the TypeScript playground.
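Where this really pays off is at the call site: a fetch helper can thread the `sysparm_display_value` argument through to its return type. Here's a sketch (the `getChangeRequest` helper, the URL and the auth handling are my own illustrations, not from the Service Now docs):

```ts
async function getChangeRequest<TAllTrueFalse extends DisplayValue>(
  changeNumber: string,
  displayValue: TAllTrueFalse
): Promise<ServiceNowChangeRequest<TAllTrueFalse> | undefined> {
  const params = new URLSearchParams({
    sysparm_query: `number=${changeNumber}`,
    sysparm_limit: '1',
    sysparm_display_value: displayValue,
  });
  const response = await fetch(
    `https://ourcompanyinstance.service-now.com/api/now/table/change_request?${params}`,
    { headers: { Accept: 'application/json' } } // auth omitted for brevity
  );
  const json = (await response.json()) as {
    result: ServiceNowChangeRequest<TAllTrueFalse>[];
  };
  return json.result[0];
}

// the compiler now knows the shape of each result:
// const all = await getChangeRequest('CHG0122595', 'all');   // state is ValueAndDisplayValue
// const raw = await getChangeRequest('CHG0122595', 'false'); // state is State ('3', etc.)
```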
Sunday, 12 December 2021
Google APIs: authentication with TypeScript
Google has a wealth of APIs which we can interact with. At the time of writing, there's more than two hundred available; including YouTube, Google Calendar and GMail (alongside many others). To integrate with these APIs, it's necessary to authenticate and then use that credential with the API. This post will take you through how to do just that using TypeScript. It will also demonstrate how to use one of those APIs: the Google Calendar API.
Creating an OAuth 2.0 Client ID on the Google Cloud Platform
The first thing we need to do is go to the Google Cloud Platform to create a project. The name of the project doesn't matter particularly; although it can be helpful to name the project to align with the API you're intending to consume. That's what we'll do here as we plan to integrate with the Google Calendar API:
The project is the container in which the OAuth 2.0 Client ID will be housed. Now we've created the project, let's go to the credentials screen and create an OAuth Client ID using the Create Credentials dropdown:
You'll likely have to create an OAuth consent screen before you can create the OAuth Client ID. Going through the journey of doing that feels a little daunting as many questions have to be answered. This is because the consent screen can be used for a variety of purposes beyond the API authentication we're looking at today.
When challenged, you can generally accept the defaults and proceed. The user type you'll require will be "External":
You'll also be required to create an app registration - all that's really required here is a name (which can be anything) and your email address:
You don't need to worry about scopes. You can either plan to publish the app, or alternately set yourself up to be a test user - you'll need to do one of these in order that you can authenticate with the app. Continuing to the end of the journey should provide you with the OAuth consent screen which you need in order that you may then create the OAuth Client ID.
Creating the OAuth Client ID is slightly confusing as the "Application type" required is "TVs and Limited Input devices".
We're using this type of application as we want to acquire a refresh token which we'll be able to use in future to aquire access tokens which will be used to access the Google APIs.
Once it's created, you'll be able to download the Client ID from the Google Cloud Platform:
When you download it, it should look something like this:
{
"installed": {
"client_id": "CLIENT_ID",
"project_id": "PROJECT_ID",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "CLIENT_SECRET",
"redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
}
}
You'll need the client_id
, client_secret
and redirect_uris
- but keep them in a safe place and don't commit client_id
and client_secret
to source control!
Acquiring a refresh token
Now we've got our client_id
and client_secret
, we're ready to write a simple node command line application which we can use to obtain a refresh token. This is actually a multi-stage process that will end up looking like this:
- Provide the Google authentication provider with the
client_id
andclient_secret
, in return it will provide an authentication URL. - Open the authentication URL in the browser and grant consent, the provider will hand over a code.
- Provide the Google authentication provider with the
client_id
,client_secret
and the code, it will acquire and provide users with a refresh token.
Let's start coding. We'll initialise a TypeScript Node project like so:
mkdir src
cd src
npm init -y
npm install googleapis ts-node typescript yargs @types/yargs @types/node
npx tsc --init
We've added a number of dependencies that will allow us to write a TypeScript Node command line application. We've also added a dependency to the googleapis
package which describes itself as:
Node.js client library for using Google APIs. Support for authorization and authentication with OAuth 2.0, API Keys and JWT tokens is included.
We're going to make use of the OAuth 2.0 part. We'll start our journey by creating a file called google-api-auth.ts
:
import { getArgs, makeOAuth2Client } from './shared';
async function getToken() {
const { clientId, clientSecret, code } = await getArgs();
const oauth2Client = makeOAuth2Client({ clientId, clientSecret });
if (code) await getRefreshToken(code);
else getAuthUrl();
async function getAuthUrl() {
const url = oauth2Client.generateAuthUrl({
// 'online' (default) or 'offline' (gets refresh_token)
access_type: 'offline',
// scopes are documented here: https://developers.google.com/identity/protocols/oauth2/scopes#calendar
scope: [
'https://www.googleapis.com/auth/calendar',
'https://www.googleapis.com/auth/calendar.events',
],
});
console.log(`Go to this URL to acquire a refresh token:\n\n${url}\n`);
}
async function getRefreshToken(code: string) {
const token = await oauth2Client.getToken(code);
console.log(token);
}
}
getToken();
And a common file named shared.ts
which google-api-auth.ts
imports and which we'll re-use later:
import { google } from 'googleapis';
import yargs from 'yargs/yargs';
const { hideBin } = require('yargs/helpers');
export async function getArgs() {
const argv = await Promise.resolve(yargs(hideBin(process.argv)).argv);
const clientId = argv['clientId'] as string;
const clientSecret = argv['clientSecret'] as string;
const code = argv.code as string | undefined;
const refreshToken = argv.refreshToken as string | undefined;
const test = argv.test as boolean;
if (!clientId) throw new Error('No clientId ');
console.log('We have a clientId');
if (!clientSecret) throw new Error('No clientSecret');
console.log('We have a clientSecret');
if (code) console.log('We have a code');
if (refreshToken) console.log('We have a refreshToken');
return { code, clientId, clientSecret, refreshToken, test };
}
export function makeOAuth2Client({
clientId,
clientSecret,
}: {
clientId: string;
clientSecret: string;
}) {
return new google.auth.OAuth2(
/* YOUR_CLIENT_ID */ clientId,
/* YOUR_CLIENT_SECRET */ clientSecret,
/* YOUR_REDIRECT_URL */ 'urn:ietf:wg:oauth:2.0:oob'
);
}
The getToken
function above does these things:
- If given a
client_id
andclient_secret
it will obtain an authentication URL. - If given a
client_id
,client_secret
andcode
it will obtain a refresh token (scoped to access the Google Calendar API).
We'll add an entry to our package.json
which will allow us to run our console app:
"google-api-auth": "ts-node google-api-auth.ts"
Now we're ready to acquire the refresh token. We'll run the following command (substituting in the appropriate values):
npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET
Click on the URL that is generated in the console, it should open up a consent screen in the browser which looks like this:
Authenticate and grant consent and you should get a code:
Then (quickly) paste the acquired code into the following command:
npm run google-api-auth -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --code THISISTHECODE
The refresh_token
(alongside much else) will be printed to the console. Grab it and put it somewhere secure. Again, no storing in source control!
It's worth taking a moment to reflect on what we've done. We've acquired a refresh token which involved a certain amount of human interaction. We've had to run a console command, do some work in a browser and run another commmand. You wouldn't want to do this repeatedly because it involves human interaction. Intentionally it cannot be automated. However, once you've acquired the refresh token, you can use it repeatedly until it expires (which may be never or at least years in the future). So once you have the refresh token, and you've stored it securely, you have what you need to be able to automate an API interaction.
Accessing the Google Calendar API
Let's test out our refresh token by attempting to access the Google Calendar API. We'll create a calendar.ts
file
import { google } from 'googleapis';
import { getArgs, makeOAuth2Client } from './shared';
async function makeCalendarClient() {
const { clientId, clientSecret, refreshToken } = await getArgs();
const oauth2Client = makeOAuth2Client({ clientId, clientSecret });
oauth2Client.setCredentials({
refresh_token: refreshToken,
});
const calendarClient = google.calendar({
version: 'v3',
auth: oauth2Client,
});
return calendarClient;
}
async function getCalendar() {
const calendarClient = await makeCalendarClient();
const { data: calendars, status } = await calendarClient.calendarList.list();
if (status === 200) {
console.log('calendars', calendars);
} else {
console.log('there was an issue...', status);
}
}
getCalendar();
The getCalendar
function above uses the client_id
, client_secret
and refresh_token
to access the Google Calendar API and retrieve the list of calendars.
We'll add an entry to our package.json
which will allow us to run this function:
"calendar": "ts-node calendar.ts",
Now we're ready to test calendar.ts
. We'll run the following command (substituting in the appropriate values):
npm run calendar -- --clientId CLIENT_ID --clientSecret CLIENT_SECRET --refreshToken REFRESH_TOKEN
When we run for the first time, we may encounter a self explanatory message which tells us that we need enable the calendar API for our application:
(node:31563) UnhandledPromiseRejectionWarning: Error: Google Calendar API has not been used in project 77777777777777 before or it is disabled. Enable it by visiting https://console.developers.google.com/apis/api/calendar-json.googleapis.com/overview?project=77777777777777 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
Once enabled, we can run successfully for the first time. Consequently we should see something like this showing up in the console:
This demonstrates that we're successfully integrating with a Google API using our refresh token.
Today the Google Calendar API, tomorrow the (Google API) world!
What we've demonstrated here is integrating with the Google Calendar API. However, that is not the limit of what we can do. As we discussed earlier, Google has more than two hundred APIs we can interact with, and the key to that interaction is following the same steps for authentication that this post outlines.
Let's imagine that we want to integrate with the YouTube API or the GMail API. We'd be able to follow the steps in this post, using different scopes for the refresh token appropriate to the API, and build an integration against that API. Take a look at the available APIs here.
The approach outlined by this post is the key to integrating with a multitude of Google APIs. Happy integrating!
The idea of this was sparked by Martin Fowler's post on the topic which comes from a Ruby angle.
Publish Azure Static Web Apps with Bicep and Azure DevOps
This post demonstrates how to deploy Azure Static Web Apps using Bicep and Azure DevOps. It includes a few workarounds for the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.
Bicep template
The first thing we're going to do is create a folder where our Bicep file for deploying our Azure Static Web App will live:
mkdir infra/static-web-app -p
Then we'll create a main.bicep
file:
param repositoryUrl string
param repositoryBranch string
param location string = 'westeurope'
param skuName string = 'Free'
param skuTier string = 'Free'
param appName string
resource staticWebApp 'Microsoft.Web/staticSites@2020-12-01' = {
name: appName
location: location
sku: {
name: skuName
tier: skuTier
}
properties: {
// The provider, repositoryUrl and branch fields are required for successive deployments to succeed
// for more details see: https://github.com/Azure/static-web-apps/issues/516
provider: 'DevOps'
repositoryUrl: repositoryUrl
branch: repositoryBranch
buildProperties: {
skipGithubActionWorkflowGeneration: true
}
}
}
output deployment_token string = listSecrets(staticWebApp.id, staticWebApp.apiVersion).properties.apiKey
There's some things to draw attention to in the code above:
- The
provider
,repositoryUrl
andbranch
fields are required for successive deployments to succeed. In our case we're deploying via Azure DevOps and so our provider is'DevOps'
. For more details, look at this issue. - We're creating a
deployment_token
which we'll need in order that we can deploy into the Azure Static Web App resource.
Static Web App
In order that we can test out Azure Static Web Apps, what we need is a static web app. You could use pretty much anything here; we're going to use Docusaurus. We'll execute this single command:
npx @docusaurus/init@latest init static-web-app classic
Which will scaffold a Docusaurus site in a folder named static-web-app
. We don't need to change it any further; let's just see if we can deploy it.
Azure Pipeline
We're going to add an azure-pipelines.yml
file which Azure DevOps can use to power a pipeline:
trigger:
- main
pool:
vmImage: ubuntu-latest
steps:
- checkout: self
submodules: true
- bash: az bicep build --file infra/static-web-app/main.bicep
displayName: 'Compile Bicep to ARM'
- task: AzureResourceManagerTemplateDeployment@3
name: DeployStaticWebAppInfra
displayName: Deploy Static Web App infra
inputs:
deploymentScope: Resource Group
azureResourceManagerConnection: $(serviceConnection)
subscriptionId: $(subscriptionId)
action: Create Or Update Resource Group
resourceGroupName: $(azureResourceGroup)
location: $(location)
templateLocation: Linked artifact
csmFile: 'infra/static-web-app/main.json' # created by bash script
overrideParameters: >-
-repositoryUrl $(repo)
-repositoryBranch $(Build.SourceBranchName)
-appName $(staticWebAppName)
deploymentMode: Incremental
deploymentOutputs: deploymentOutputs
- task: PowerShell@2
name: 'SetDeploymentOutputVariables'
displayName: 'Set Deployment Output Variables'
inputs:
targetType: inline
script: |
$armOutputObj = '$(deploymentOutputs)' | ConvertFrom-Json
$armOutputObj.PSObject.Properties | ForEach-Object {
$keyname = $_.Name
$value = $_.Value.value
# Creates a standard pipeline variable
Write-Output "##vso[task.setvariable variable=$keyName;issecret=true]$value"
# Display keys in pipeline
Write-Output "output variable: $keyName"
}
pwsh: true
- task: AzureStaticWebApp@0
name: DeployStaticWebApp
displayName: Deploy Static Web App
inputs:
app_location: 'static-web-app'
# api_location: 'api' # we don't have an API
output_location: 'build'
azure_static_web_apps_api_token: $(deployment_token) # captured from deploymentOutputs
When the pipeline is run, it does the following:
- Compiles our Bicep into an ARM template
- Deploys the compiled ARM template to Azure
- Captures the deployment outputs (essentially the
deployment_token
) and converts them into variables to use in the pipeline - Deploys our Static Web App using the
deployment_token
The pipeline depends upon a number of variables:
azureResourceGroup
- the name of your resource group in Azure where the app will be deployedlocation
- where your app is deployed, egnortheurope
repo
- the URL of your repository in Azure DevOps, eg https://dev.azure.com/johnnyreilly/_git/azure-static-web-appsserviceConnection
- the name of your AzureRM service connection in Azure DevOpsstaticWebAppName
- the name of your static web app, egazure-static-web-apps-johnnyreilly
subscriptionId
- your Azure subscription id from the Azure Portal
A successful pipeline looks something like this:
What you might notice is that the AzureStaticWebApp task is itself installing and building our application. This is handled by Microsoft Oryx. The upshot is that we don't need to manually run npm install and npm run build ourselves; the AzureStaticWebApp task takes care of that for us.
Finally, let's see if we've deployed something successfully…
We have! It's worth noting that you'll likely want to give your Azure Static Web App a lovelier URL, and perhaps even put it behind Azure Front Door as well.
"Provider is invalid" workaround 2
Shane Neff was attempting to follow the instructions in this post and encountered issues. He shared his struggles with me as he encountered the "Provider is invalid. Cannot change the Provider. Please detach your static site first if you wish to use to another deployment provider." issue.
He was good enough to share his solution as well, which is inserting this task at the start of the pipeline (before the az bicep build step):
- task: AzureCLI@2
  inputs:
    azureSubscription: '<name of your service connection>'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az staticwebapp disconnect -n <name of your app>'
I haven't had the problems that Shane encountered myself, but I wanted to share his fix for the people out there who are almost certainly bumping into this.
TypeScript, abstract classes, and constructors
TypeScript has the ability to define classes as abstract. This means they cannot be instantiated directly; only non-abstract subclasses can be. Let's take a look at what this means when it comes to constructor usage.
Making a scratchpad
In order that we can dig into this, let's create ourselves a scratchpad project to work with. We're going to create a node project and install TypeScript as a dependency.
mkdir ts-abstract-constructors
cd ts-abstract-constructors
npm init --yes
npm install typescript @types/node --save-dev
We now have a package.json file set up. We need to initialise a TypeScript project as well:
npx tsc --init
This will give us a tsconfig.json file that drives the configuration of TypeScript. By default TypeScript transpiles to an older version of JavaScript that predates classes, so we'll update the config to target a newer version of the language that does include them:
"target": "es2020",
"lib": ["es2020"],
Let's create ourselves a TypeScript file called index.ts. The name is not significant; we just need a file to develop in. Finally, we'll add a script to our package.json that compiles our TypeScript to JavaScript and then runs the JS with node:
"start": "tsc --project \".\" && node index.js"
Making an abstract class
Now we're ready. Let's add an abstract class with a constructor to our index.ts
file:
abstract class ViewModel {
  id: string;

  constructor(id: string) {
    this.id = id;
  }
}
Consider the ViewModel class above. Let's say we're building some kind of CRUD app; we'll have different views, and each of those views will have a corresponding viewmodel which is a subclass of the ViewModel abstract class. The ViewModel class has a mandatory id parameter in the constructor, to ensure that every viewmodel has an id value. If this were a real app, id would likely be the value with which an entity was looked up in some kind of database.
Importantly, all subclasses of ViewModel should either:
- not implement a constructor at all, leaving the base class constructor to become the default constructor of the subclass, or
- implement their own constructor which invokes the ViewModel base class constructor.
Taking our abstract class for a spin
Now we have it, let's see what we can do with our abstract class. First of all, can we instantiate our abstract class? We shouldn't be able to do this:
const viewModel = new ViewModel('my-id');
console.log(`the id is: ${viewModel.id}`);
And sure enough, running npm start results in the following error (which is also reported by our editor, VS Code):
index.ts:9:19 - error TS2511: Cannot create an instance of an abstract class.
const viewModel = new ViewModel('my-id');
Tremendous. However, it's worth remembering that abstract is a TypeScript concept. When we compile our TS, although it's throwing a compilation error, it still transpiles an index.js file that looks like this:
'use strict';
class ViewModel {
  constructor(id) {
    this.id = id;
  }
}
const viewModel = new ViewModel('my-id');
console.log(`the id is: ${viewModel.id}`);
As we can see, there's no mention of abstract; it's just a straightforward class. In fact, if we directly execute the file with node index.js we can see an output of:
the id is: my-id
So the transpiled code is valid JavaScript even if the source code isn't valid TypeScript. This all reminds us that abstract is a TypeScript construct.
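Incidentally, if we wanted the same protection at runtime (say, for consumers calling our transpiled JavaScript directly), one option is a new.target check in the base class constructor. This is a sketch of a hand-rolled guard, not something TypeScript generates for us:

abstract class ViewModel {
  id: string;

  constructor(id: string) {
    // new.target is the constructor that was actually invoked with `new`;
    // if it's ViewModel itself, the abstract class is being instantiated
    // directly (only possible from plain JavaScript) and we throw
    if (new.target === ViewModel) {
      throw new Error('ViewModel is abstract; instantiate a subclass instead');
    }
    this.id = id;
  }
}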
Subclassing without a new constructor
Let's now create our first subclass of ViewModel and attempt to instantiate it:
class NoNewConstructorViewModel extends ViewModel {}
// error TS2554: Expected 1 arguments, but got 0.
const viewModel1 = new NoNewConstructorViewModel();
const viewModel2 = new NoNewConstructorViewModel('my-id');
As the TypeScript compiler tells us, the second of these instantiations is legitimate as it relies upon the constructor from the base class as we'd hope. The first is not as there is no parameterless constructor.
Subclassing with a new constructor
Having done that, let's try subclassing and implementing a new constructor which has two parameters (to differentiate from the constructor we're overriding):
class NewConstructorViewModel extends ViewModel {
  data: string;

  constructor(id: string, data: string) {
    super(id);
    this.data = data;
  }
}
// error TS2554: Expected 2 arguments, but got 0.
const viewModel3 = new NewConstructorViewModel();
// error TS2554: Expected 2 arguments, but got 1.
const viewModel4 = new NewConstructorViewModel('my-id');
const viewModel5 = new NewConstructorViewModel('my-id', 'important info');
Again, only one of the attempted instantiations is legitimate. viewModel3 is not, as there is no parameterless constructor. viewModel4 is not, as we have overridden the base class constructor with our new one that has two parameters. Hence viewModel5 is our "Goldilocks" instantiation; it's just right!
It's also worth noting that we're calling super in the NewConstructorViewModel constructor. This invokes the constructor of the ViewModel base (or "super") class. TypeScript enforces that we pass the appropriate arguments (in our case a single string).
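To see that enforcement in action, here's a hypothetical subclass (not part of the example above) that forgets to pass the id along:

class ForgetfulViewModel extends ViewModel {
  constructor() {
    // error TS2554: Expected 1 arguments, but got 0.
    super();
  }
}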
Wrapping it up
We've seen that TypeScript ensures correct usage of constructors when we have an abstract class. Importantly, all subclasses of abstract classes either:
- do not implement a constructor at all, leaving the base class constructor (the abstract constructor) to become the default constructor of the subclass, or
- implement their own constructor which invokes the base (or "super") class constructor with the correct arguments.
C# 9 in-process Azure Functions
C# 9 has some amazing features. Azure Functions have two modes: isolated and in-process. Whilst isolated supports .NET 5 (and hence C# 9), in-process supports .NET Core 3.1 (C# 8). This post shows how we can use C# 9 with in-process Azure Functions running on .NET Core 3.1.
Azure Functions: in-process and isolated
Historically .NET Azure Functions have been in-process. This changed with .NET 5 where a new model was introduced named "isolated". To quote from the roadmap:
Running in an isolated process decouples .NET functions from the Azure Functions host—allowing us to more easily support new .NET versions and address pain points associated with sharing a single process.
However, the initial launch of isolated functions does not have the full level of functionality enjoyed by in-process functions. That parity will come, according to the roadmap:
Long term, our vision is to have full feature parity out of process, bringing many of the features that are currently exclusive to the in-process model to the isolated model. We plan to begin delivering improvements to the isolated model after the .NET 6 general availability release.
In the future, in-process functions will be retired in favour of isolated functions. However, it will be .NET 7 (scheduled to ship in November 2022) before that takes place:
As the image taken from the roadmap shows, when .NET 5 shipped, it did not support in-process Azure Functions. When .NET 6 ships in November, it should.
In the meantime, we would like to use C# 9.
Setting up a C# 8 project
We have the Azure Functions Core Tools installed, so let's create a new function project:
func new --worker-runtime dotnet --template "Http Trigger" --name "HelloRecord"
The above command scaffolds out a .NET Core 3.1 Azure Functions project which contains a single Azure function. The --worker-runtime dotnet parameter is what causes an in-process .NET Core 3.1 function to be created. You should have a .csproj file that looks like this:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>
We're running with C# 8 and .NET Core 3.1 at this point. What does it take to get us to C# 9?
What does it take to get to C# 9?
There's a great post on Reddit about using C# 9 with .NET Core 3.1, which says:
You can use <LangVersion>9.0</LangVersion>, and VS even includes support for suggesting a language upgrade. However, there are three categories of features in C#:
1. features that are entirely part of the compiler. Those will work.
2. features that require BCL additions. Since you're on the older BCL, those will need to be backported. For example, to use init and record, you can use https://github.com/manuelroemer/IsExternalInit.
3. features that require runtime additions. Those cannot be added at all. For example, default interface members in C# 8, and covariant return types in C# 9.
Of the above, 1 and 2 add a tremendous amount of value. The features of 3 are great, but more niche. Speaking personally, I care a great deal about Record types. So let's apply this.
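Before we do, a quick sketch of why records are worth the effort; the type and values below are made up for illustration, but the behaviour (value-based equality and with expressions) is what C# 9 records give us:

using System;

public record Message(string Text);

public static class RecordDemo
{
    public static void Run()
    {
        // records holding the same data compare equal; classes compare by reference
        var first = new Message("hello");
        var second = new Message("hello");
        Console.WriteLine(first == second); // True

        // `with` copies the record, changing only the named properties; this
        // relies on init-only setters, which is exactly why the IsExternalInit
        // backport is needed on .NET Core 3.1
        var third = first with { Text = "goodbye" };
        Console.WriteLine(third.Text); // goodbye
    }
}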
Adding C# 9 to the in-process function
To get C# 9 into the mix, we want to make two changes:
- add <LangVersion>9.0</LangVersion> to the <PropertyGroup> element of our .csproj file
- add a package reference to the IsExternalInit package
The applied changes look like this:
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp3.1</TargetFramework>
+   <LangVersion>9.0</LangVersion>
    <AzureFunctionsVersion>v3</AzureFunctionsVersion>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.11" />
+   <PackageReference Include="IsExternalInit" Version="1.0.1" PrivateAssets="all" />
  </ItemGroup>
  <ItemGroup>
    <None Update="host.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
    <None Update="local.settings.json">
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
      <CopyToPublishDirectory>Never</CopyToPublishDirectory>
    </None>
  </ItemGroup>
</Project>
If we used dotnet add package IsExternalInit, we might end up with a slightly different syntax in the .csproj. Be not afeard - that won't affect usage.
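In case you're wondering what IsExternalInit actually contains: it's essentially a single empty marker type which the C# 9 compiler looks for when it emits init-only setters (which records depend upon). Roughly speaking, the package amounts to this:

// a sketch of what the IsExternalInit package supplies: the marker type lives
// in .NET 5's BCL but is missing from .NET Core 3.1's, so the package backports it
namespace System.Runtime.CompilerServices
{
    internal static class IsExternalInit { }
}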
Making a C# 9 program
Now we can theoretically use C# 9… so let's use C# 9. We'll tweak our HelloRecord.cs file, add in a simple record named MessageRecord, and tweak the Run method to use it:
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

namespace tmp
{
    public record MessageRecord(string message);

    public static class HelloRecord
    {
        [FunctionName("HelloRecord")]
        public static async Task<IActionResult> Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request.");

            string name = req.Query["name"];

            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);
            name = name ?? data?.name;

            var responseMessage = new MessageRecord(string.IsNullOrEmpty(name)
                ? "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response."
                : $"Hello, {name}. This HTTP triggered function executed successfully.");

            return new OkObjectResult(responseMessage);
        }
    }
}
If we kick off our function with func start, we can see that it compiles, and the output is as we might expect and hope. Likewise, if we try to debug in VS Code, we can.
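If you'd rather test from the terminal than a browser, something along these lines should work once func start is running (7071 is the default local port; adjust if yours differs):

# hypothetical local smoke test; the route comes from the function name
curl "http://localhost:7071/api/HelloRecord?name=John"
# expected response shape, given the MessageRecord above:
# {"message":"Hello, John. This HTTP triggered function executed successfully."}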
Best before…
So, we now have a way to use C# 9 (or most of it) with in-process .NET Core 3.1 apps. This should serve until .NET 6 ships in November 2021 and we're able to use C# 9 by default.