Tuesday, 23 March 2021

Bicep meet Azure Pipelines 2

Last time I wrote about how to use the Azure CLI to run Bicep within the context of an Azure Pipeline. The solution was relatively straightforward, and involved using az deployment group create in a task. There's an easier way.

Bicep meet Azure Pipelines

The easier way

The target reader of the previous post was someone who was already using AzureResourceManagerTemplateDeployment@3 in an Azure Pipeline to deploy an ARM template. Rather than replacing your existing AzureResourceManagerTemplateDeployment@3 tasks, all you need do is insert a prior bash step that compiles the Bicep to ARM, which your existing task can then process. It looks like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'

This will take your Bicep template, azuredeploy.bicep, and transpile it into an ARM template named azuredeploy.json, which a subsequent AzureResourceManagerTemplateDeployment@3 task can then process. Since this is just exercising the Azure CLI, using bash isn't required; PowerShell and the like would be fine too; all that's required is that the Azure CLI is available to the pipeline agent.
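
For instance, the same compile step expressed with PowerShell Core rather than bash might look like this (a sketch; as with the bash variant, it assumes the Azure CLI is present on the agent):

- pwsh: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'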

In fact this simple task could even be a one-liner if you didn't fancy using the displayName. (Though I say keep it; optimising for readability is generally a good shout.) A full pipeline could look like this:

- bash: az bicep build --file infra/app-service/azuredeploy.bicep
  displayName: 'Compile Bicep to ARM'

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json' # created by bash script
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
        Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

And when it's run, it may result in something along these lines:

Bicep in an Azure Pipeline

So if you want to get going with Bicep right now with minimal effort, this is an on-ramp that could work for you! Props to Jamie McCrindle for suggesting this.

Saturday, 20 March 2021

Bicep meet Azure Pipelines

Bicep is a terser and more readable alternative language to ARM templates. Running ARM templates in Azure Pipelines is straightforward. However, there isn't yet a first class experience for running Bicep in Azure Pipelines. This post demonstrates an approach that can be used until a Bicep task is available.

Bicep meet Azure Pipelines

Bicep: mostly ARMless

If you've been working with Azure and infrastructure as code, you'll likely have encountered ARM templates. They're a domain-specific language that lives inside JSON, used to define the infrastructure that is deployed to Azure: App Services, Key Vaults and the like.

ARM templates are quite verbose and not the easiest thing to read. This is a consequence of being effectively a language nestled inside another language. Bicep is an alternative language which is far more readable. Bicep transpiles down to ARM templates, in the same way that TypeScript transpiles down to JavaScript.

Bicep is quite new, but already it enjoys feature parity with ARM templates (as of v0.3) and ships as part of the Azure CLI. However, as Bicep is new, it doesn't yet have a dedicated Azure Pipelines task for deployment. This should exist in future, perhaps as soon as the v0.4 release. In the meantime there's an alternative way to achieve this which we'll go through.
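
Since Bicep ships as part of the Azure CLI, you can check what you have locally before going any further. A quick sanity check might look like this (assuming Azure CLI 2.20 or later):

az bicep version
# and, if Bicep isn't installed yet:
az bicep install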

App Service with Bicep

Let's take a simple Bicep file, azuredeploy.bicep, which is designed to deploy an App Service resource to Azure. It looks like this:

@description('Tags that our resources need')
param tags object = {
  costCenter: 'todo: replace'
  environment: 'todo: replace'
  application: 'todo: replace with app name'
  description: 'todo: replace'
  managedBy: 'ARM'
}

@minLength(2)
@description('Base name of the resource such as web app name and app service plan')
param applicationName string

@description('Location for all resources.')
param location string = resourceGroup().location

@description('The SKU of App Service Plan')
param sku string

var appServicePlanName_var = 'plan-${applicationName}-${tags.environment}'
var linuxFxVersion = 'DOTNETCORE|5.0'
var fullApplicationName_var = 'app-${applicationName}-${uniqueString(applicationName)}'

resource appServicePlanName 'Microsoft.Web/serverfarms@2019-08-01' = {
  name: appServicePlanName_var
  location: location
  sku: {
    name: sku
  }
  kind: 'linux'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    reserved: true
  }
}

resource fullApplicationName 'Microsoft.Web/sites@2018-11-01' = {
  name: fullApplicationName_var
  location: location
  kind: 'app'
  tags: {
    CostCenter: tags.costCenter
    Environment: tags.environment
    Description: tags.description
    ManagedBy: tags.managedBy
  }
  properties: {
    serverFarmId: appServicePlanName.id
    clientAffinityEnabled: true
    siteConfig: {
      appSettings: []
      linuxFxVersion: linuxFxVersion
      alwaysOn: false
      ftpsState: 'Disabled'
      http20Enabled: true
      minTlsVersion: '1.2'
      remoteDebuggingEnabled: false
    }
    httpsOnly: true
  }
  identity: {
    type: 'SystemAssigned'
  }
}

output fullApplicationName string = fullApplicationName_var

When transpiled down to an ARM template, this Bicep file more than doubles in size:

  • azuredeploy.bicep - 1782 bytes
  • azuredeploy.json - 3863 bytes
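
The byte counts above came from a quick local check; something along these lines (a sketch, and sizes will vary with your own template) reproduces it:

az bicep build --file azuredeploy.bicep
wc -c azuredeploy.bicep azuredeploy.json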

This tells you something of the advantage of Bicep. The template comes with an associated azuredeploy.parameters.json file:

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "tags": {
      "value": {
        "costCenter": "8888",
        "environment": "stg",
        "application": "hello-azure",
        "description": "App Service for hello-azure",
        "managedBy": "ARM"
      }
    },
    "sku": {
      "value": "B1"
    }
  }
}

It's worth remembering that you can use the same parameters files with Bicep that you can use with ARM templates. This is great for minimising friction when it comes to migrating.
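
For example, deploying locally you can point az deployment group create at the Bicep file and the existing parameters file side by side (a sketch; the resource group name here is an assumption):

az deployment group create \
  --resource-group my-resource-group \
  --template-file azuredeploy.bicep \
  --parameters azuredeploy.parameters.json \
  --parameters applicationName=hello-azure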

Bicep in azure-pipelines.yml

Now we have our Bicep file, we want to execute it from the context of an Azure Pipeline. If we were working directly with the ARM template we'd likely have something like this in place:

- task: AzureResourceManagerTemplateDeployment@3
  displayName: 'Deploy Hello Azure ARM'
  inputs:
    azureResourceManagerConnection: '$(azureSubscription)'
    action: Create Or Update Resource Group
    resourceGroupName: '$(resourceGroupName)'
    location: 'North Europe'
    templateLocation: Linked artifact
    csmFile: 'infra/app-service/azuredeploy.json'
    csmParametersFile: 'infra/app-service/azuredeploy.parameters.json'
    deploymentMode: Incremental
    deploymentOutputs: resourceGroupDeploymentOutputs
    overrideParameters: -applicationName $(Build.Repository.Name)

- pwsh: |
    $outputs = ConvertFrom-Json '$(resourceGroupDeploymentOutputs)'
    foreach ($output in $outputs.PSObject.Properties) {
        Write-Host "##vso[task.setvariable variable=RGDO_$($output.Name)]$($output.Value.value)"
    }
  displayName: 'Turn ARM outputs into variables'

There are two tasks above. The first is the native task for ARM deployments, which takes our ARM template and our parameters and deploys them. The second task takes the output variables from the first task and converts them into Azure Pipeline variables so that they can be referenced later in the pipeline. In this case that variablifies our fullApplicationName output.
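
A later step can then read such a variable with the usual macro syntax. For example (a sketch; RGDO_fullApplicationName is the variable the script above would produce from our fullApplicationName output):

- bash: echo "Deployed app service: $(RGDO_fullApplicationName)"
  displayName: 'Use a deployment output'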

There is, as yet, no BicepTemplateDeployment@1. Though it's coming. In the meantime, the marvellous Alex Frankel advised:

I'd recommend using the Azure CLI task to deploy. As long as that task is updated to Az CLI version 2.20 or later, it will automatically install the bicep CLI when calling az deployment group create -f main.bicep.

Let's give it a go!

- task: AzureCLI@2
  displayName: 'Deploy Hello Azure Bicep'
  inputs:
    azureSubscription: '$(azureSubscription)'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az --version

      echo "az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy"
      az deployment group create --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --template-file infra/app-service/azuredeploy.bicep \
        --parameters infra/app-service/azuredeploy.parameters.json \
        --parameters applicationName='$(Build.Repository.Name)'

      echo "az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy"
      deploymentoutputs=$(az deployment group show --resource-group '$(resourceGroupName)' --name appservicedeploy \
        --query properties.outputs)

      echo 'convert outputs to variables'
      echo $deploymentoutputs | jq -c '. | to_entries[] | [.key, .value.value]' |
        while IFS=$'\n' read -r c; do
          outputname=$(echo "$c" | jq -r '.[0]')
          outputvalue=$(echo "$c" | jq -r '.[1]')
          echo "setting variable RGDO_$outputname=$outputvalue"
          echo "##vso[task.setvariable variable=RGDO_$outputname]$outputvalue"
        done

The above is just a single Azure CLI task (as advised). It invokes az deployment group create passing the relevant parameters. It then acquires the output properties using az deployment group show. Finally it once again converts these outputs to Azure Pipeline variables with some jq smarts.

This works right now, and running it results in something like the output below. So if you're excited about Bicep and don't want to wait for 0.4 to start moving on this, then this can get you going. To track the progress of the custom task, keep an eye on this issue.

Bicep in an Azure Pipeline

Update: an even simpler alternative

There is an even simpler way to do this, which I discovered subsequent to writing this. Have a read.

Wednesday, 17 March 2021

RSS update; we moved to Docusaurus

My blog lived happily on Blogger for the past decade. It's now built with Docusaurus and hosted on GitHub Pages. To understand the why, read my last post. This post serves purely to share details of feed updates for RSS / Atom subscribers.

The Atom feed at this location no longer exists: https://blog.johnnyreilly.com/feeds/posts/default

The following feeds are new and different:

  • RSS - https://blog.johnnyreilly.com/rss.xml
  • Atom - https://blog.johnnyreilly.com/atom.xml

The new format might mess with any feed reader you have set up. I do apologise for the friction; hopefully it shouldn't cause you too much drama.

Finally, all historic links should continue to work with the new site; redirects have been implemented.

Monday, 15 March 2021

From Blogger to Docusaurus

Docusaurus is, amongst other things, a Markdown powered blogging platform. My blog has lived happily on Blogger for the past decade. I'm considering moving, but losing my historic content as part of the move was never an option. This post goes through what it would look like to move from Blogger to Docusaurus without losing your content.

It is imperative that the world never forgets what I was doing with jQuery in 2012.

Blog as code

Everything is better when it's code. Infrastructure as code. Awesome right? So naturally "blog as code" must be better than just a blog. More seriously, Markdown is a tremendous documentation format. Simple, straightforward and, like Goldilocks, "just right". For a long time I've written everything as Markdown. My years of toil down the Open Source mines have preconditioned me to be very MD-disposed.

I started out writing this blog a long time ago as pure HTML. Not the smoothest of writing formats. At some point I got into the habit of spinning up a new repo in GitHub for a new blogpost, writing it in Markdown and piping it through a variety of tools to convert it into HTML for publication on Blogger. As time passed I felt I'd be a lot happier if I wasn't creating a repo each time. What if I did all my blogging in a single repo and used that as the code that represented my blog?

Just having that thought laid the seeds for what was to follow:

  1. An investigation into importing my content from Blogger into a GitHub repo
  2. An experimental port to Docusaurus
  3. The automation of publication to Docusaurus and Blogger

We're going to go through 1 and 2 now. But before we do that, let's create ourselves a Docusaurus site for our blog:

npx @docusaurus/init@latest init blog-website classic

I want everything

The first thing to do was obtain my blog content. This is a mass of HTML that lived inside Blogger's database. (One assumes they have a database; I haven't actually checked.) There's a "Back up content" option inside Blogger to allow this:

Download content from Blogger

It provides you with an XML file of dispiritingly small size. Ten years of blogging? You'll get change out of 4MB, it turns out.

From HTML in XML to Markdown

We now want to take that XML and:

  • Extract each blog post (and its associated metadata: title, tags and whatnot)
  • Convert the content of each blog post from HTML to Markdown and save it as a .md file
  • Download the images used in each blog post so they can be stored in the repo alongside the Markdown

To do this we're going to whip up a smallish TypeScript console app. Let's initialise it with the packages we're going to need:

mkdir from-blogger-to-docusaurus
cd from-blogger-to-docusaurus
yarn init
yarn add @types/axios @types/he @types/jsdom @types/node @types/showdown axios fast-xml-parser he jsdom showdown ts-node typescript
npx tsc --init

We're using:

  • axios - to download the images referenced in each post
  • fast-xml-parser - to parse the Blogger XML export
  • he - to decode HTML entities in the exported content
  • jsdom - to parse and query the HTML of each post
  • showdown - to convert the HTML into Markdown
  • ts-node and typescript - to run the script itself

Now we have all the packages we need, it's time to write our script.

import fs from 'fs';
import path from 'path';
import showdown from 'showdown';
import he from 'he';
import jsdom from 'jsdom';
import axios from 'axios';
import fastXmlParser from 'fast-xml-parser';

const bloggerXmlPath = './blog-03-13-2021.xml';
const docusaurusDirectory = '../blog-website';
const notMarkdownable: string[] = [];

async function fromXmlToMarkDown() {
  const posts = await getPosts();

  for (const post of posts) {
    await makePostIntoMarkDownAndDownloadImages(post);
  }
  if (notMarkdownable.length)
    console.log(
      'These blog posts could not be turned into MarkDown - go find out why!',
      notMarkdownable
    );
}

async function getPosts(): Promise<Post[]> {
  const xml = await fs.promises.readFile(bloggerXmlPath, 'utf-8');

  const options = {
    attributeNamePrefix: '@_',
    attrNodeName: 'attr', //default is 'false'
    textNodeName: '#text',
    ignoreAttributes: false,
    ignoreNameSpace: false,
    allowBooleanAttributes: true,
    parseNodeValue: true,
    parseAttributeValue: true,
    trimValues: true,
    cdataTagName: '__cdata', //default is 'false'
    cdataPositionChar: '\\c',
    parseTrueNumberOnly: false,
    arrayMode: true, //"strict"
    attrValueProcessor: (val: string, attrName: string) =>
      he.decode(val, { isAttributeValue: true }), //default is a=>a
    tagValueProcessor: (val: string, tagName: string) => he.decode(val), //default is a=>a
  };

  const traversalObj = fastXmlParser.getTraversalObj(xml, options);
  const blog = fastXmlParser.convertToJson(traversalObj, options);

  const postsRaw = blog.feed[0].entry.filter(
    (entry: any) =>
      entry.category.some(
        (category: any) =>
          category.attr['@_term'] ===
          'http://schemas.google.com/blogger/2008/kind#post'
      ) &&
      entry.link.some(
        (link: any) =>
          link.attr['@_href'] && link.attr['@_type'] === 'text/html'
      ) &&
      entry.published < '2021-03-07'
  );

  const posts: Post[] = postsRaw.map((entry: any) => {
    return {
      title: entry.title[0]['#text'],
      content: entry.content[0]['#text'],
      published: entry.published,
      link: entry.link.find(
        (link: any) =>
          link.attr['@_href'] && link.attr['@_type'] === 'text/html'
      )
        ? entry.link.find(
            (link: any) =>
              link.attr['@_href'] && link.attr['@_type'] === 'text/html'
          ).attr['@_href']
        : undefined,
      tags:
        Array.isArray(entry.category) &&
        entry.category.some(
          (category: any) =>
            category.attr['@_scheme'] === 'http://www.blogger.com/atom/ns#'
        )
          ? entry.category
              .filter(
                (category: any) =>
                  category.attr['@_scheme'] ===
                    'http://www.blogger.com/atom/ns#' &&
                  category.attr['@_term'] !== 'constructor'
              ) // 'constructor' will make docusaurus choke
              .map((category: any) => category.attr['@_term'])
          : [],
    };
  });

  for (const post of posts) {
    const { content, ...others } = post;
    console.log(others, content.length);
    if (!content || !others.title || !others.published)
      throw new Error('No content');
  }

  return posts.filter((post) => post.link);
}

async function makePostIntoMarkDownAndDownloadImages(post: Post) {
  const converter = new showdown.Converter({
    ghCodeBlocks: true,
  });
  const linkSections = post.link.split('/');
  const linkSlug = linkSections[linkSections.length - 1];
  const filename =
    post.published.substr(0, 10) + '-' + linkSlug.replace('.html', '.md');

  const contentProcessed = post.content
    // remove stray <br /> tags
    .replace(/<br\s*\/?>/gi, '\n')
    // translate <code class="lang-cs" into <code class="language-cs"> to be showdown friendly
    .replace(/code class="lang-/gi, 'code class="language-');

  const images: string[] = [];
  const dom = new jsdom.JSDOM(contentProcessed);
  let markdown = '';
  try {
    markdown = converter
      .makeMarkdown(contentProcessed, dom.window.document)
      // bigger titles
      .replace(/###### /g, '#### ')

      // <div style="width:100%;height:0;padding-bottom:56%;position:relative;"><iframe src="https://giphy.com/embed/l7JDTHpsXM26k" width="100%" height="100%" style="position:absolute" frameBorder="0" class="giphy-embed" allowfullscreen=""></iframe></div>

      // The mechanism below extracts the underlying iframe
      .replace(/<div.*(<iframe.*">).*<\/div>/g, (replacer) => {
        const dom = new jsdom.JSDOM(replacer);
        const iframe = dom?.window?.document?.querySelector('iframe');
        return iframe?.outerHTML ?? '';
      })

      // The mechanism below strips class and style attributes from iframes - react hates them
      .replace(/<iframe.*<\/iframe>/g, (replacer) => {
        const dom = new jsdom.JSDOM(replacer);
        const iframe = dom?.window?.document?.querySelector('iframe');
        iframe?.removeAttribute('class');
        iframe?.removeAttribute('style');
        return iframe?.outerHTML ?? '';
      })

      // capitalise appropriately
      .replace(/frameborder/g, 'frameBorder')
      .replace(/allowfullscreen/g, 'allowFullScreen')
      .replace(/charset/g, 'charSet')

      // Deals with these:
      // [![null](<https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh70dz990U-UPGIzBSL9S5xETMYMhirrxfxgG9dR04ZW7DHR1ygnaRU4mUy2uewHqx66SlNuI-Y2MNIVjJWMWm05RZF1DjPpSiOcbrGK97r6jwR8JtbT0SQS0vMDyKmOaxLt91Ul_FJEjo/s640/hello_world_idb_keyval.png> =640x484)](<https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh70dz990U-UPGIzBSL9S5xETMYMhirrxfxgG9dR04ZW7DHR1ygnaRU4mUy2uewHqx66SlNuI-Y2MNIVjJWMWm05RZF1DjPpSiOcbrGK97r6jwR8JtbT0SQS0vMDyKmOaxLt91Ul_FJEjo/s1600/hello_world_idb_keyval.png>)We successfully wrote something into IndexedDB, read it back and printed that value to the console. Amazing!
      .replace(
        /\[!\[null\]\(<(.*?)>\)/g,
        (match) =>
          `![](${match.slice(match.indexOf('<') + 1, match.indexOf('>'))})\n\n`
      )

      // Blogger tends to put images in HTML that looks like this:
      // <div class="separator" style="clear: both;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg36t2vAklaM8K8H9-OvqozrFAVNEFTXERHkV4CFRnjVszoGTJDAhJd7mxFCgsuJOOjYznro7SCNuKrf2wCjQsJE9h-mc1XgXAtSZNT-85dvFWwEtUmruBww1SUMQ6fLHyyvUaQM4QM4l0/s783/traffic-to-app-service.png" style="display: block; padding: 1em 0; text-align: center; "><img alt="traffic to app service" border="0" width="600" data-original-height="753" data-original-width="783" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg36t2vAklaM8K8H9-OvqozrFAVNEFTXERHkV4CFRnjVszoGTJDAhJd7mxFCgsuJOOjYznro7SCNuKrf2wCjQsJE9h-mc1XgXAtSZNT-85dvFWwEtUmruBww1SUMQ6fLHyyvUaQM4QM4l0/s600/traffic-to-app-service.png"></a></div>

      // The mechanism below extracts the underlying image path and it's alt text
      .replace(/<div.*(<img.*">).*<\/div>/g, (replacer) => {
        const div = new jsdom.JSDOM(replacer);
        const img = div?.window?.document?.querySelector('img');
        const alt = img?.getAttribute('alt') ?? '';
        const src = img?.getAttribute('src') ?? '';

        if (src) images.push(src);

        return `![${alt}](${src})`;
      });
  } catch (e) {
    console.log(post.link);
    console.log(e);
    notMarkdownable.push(post.link);
    return;
  }

  const imageDirectory = filename.replace('.md', '');
  for (const url of images) {
    try {
      const localUrl = await downloadImage(url, imageDirectory);
      markdown = markdown.replace(url, 'https://raw.githubusercontent.com/johnnyreilly/blog.johnnyreilly.com/main/blog-website/static/blog/' + localUrl);
    } catch (e) {
      console.error(`Failed to download ${url}`);
    }
  }

  const content = `---
title: "${post.title}"
author: John Reilly
author_url: https://github.com/johnnyreilly
author_image_url: https://avatars.githubusercontent.com/u/1010525?s=400&u=294033082cfecf8ad1645b4290e362583b33094a&v=4
tags: [${post.tags.join(', ')}]
hide_table_of_contents: false
---
${markdown}
`;

  await fs.promises.writeFile(
    path.resolve(docusaurusDirectory, 'blog', filename),
    content
  );
}

async function downloadImage(url: string, directory: string) {
  console.log(`Downloading ${url}`);
  const pathParts = new URL(url).pathname.split('/');
  const filename = pathParts[pathParts.length - 1];
  const directoryTo = path.resolve(
    docusaurusDirectory,
    'static',
    'blog',
    directory
  );
  const pathTo = path.resolve(
    docusaurusDirectory,
    'static',
    'blog',
    directory,
    filename
  );

  if (!fs.existsSync(directoryTo)) {
    fs.mkdirSync(directoryTo);
  }

  const writer = fs.createWriteStream(pathTo);

  const response = await axios({
    url,
    method: 'GET',
    responseType: 'stream',
  });

  response.data.pipe(writer);

  return new Promise<string>((resolve, reject) => {
    writer.on('finish', () => resolve(directory + '/' + filename));
    writer.on('error', reject);
  });
}

interface Post {
  title: string;
  content: string;
  published: string;
  link: string;
  tags: string[];
}

// do it!
fromXmlToMarkDown();

To summarise what the script does, it:

  • parses the blog XML into an array of Posts
  • each post is then converted from HTML into Markdown, a Docusaurus header is created and prepended, then the file is saved to the blog-website/blog directory
  • the images of each post are downloaded with Axios and saved to the blog-website/static/blog/{POST NAME} directory

Bringing it all together

To run the script, we add the following script to the package.json:

  "scripts": {
    "start": "ts-node index.ts"
  },

And have ourselves a merry little yarn start to kick off the process. In a very short period of time, if you crack open the blog directory of your Docusaurus site you'll see a collection of Markdown files which represent your blog and are ready to power Docusaurus:

Markdown files

I have slightly papered over some details here. For my own case I discovered that I hadn't always written perfect HTML when blogging. I had to go in and fix the HTML in a number of historic blogs such that the mechanism would work. I also learned that a number of my screenshots that I use to illustrate posts have vanished from Blogger at some point. This makes me all the more convinced that storing your blog in a repo is a good idea. Things should not "go missing".

Congratulations! We're now the proud owners of a Docusaurus blog site based upon our Blogger content that looks something like this:

Blog in Docusaurus

Making the move?

Now that I've got the content, I'm theoretically safe to migrate from Blogger to Docusaurus. I'm pondering this now and I have come up with a checklist of criteria to satisfy before I do. You can have a read of the criteria here.

Odds are, I'm likely to make the move; it's probably just a matter of time.

Wednesday, 10 March 2021

Managed Identity, Azure SQL and Entity Framework

Managed Identity offers a very secure way for applications running in Azure to connect to Azure SQL databases. It's an approach that does not require code changes; merely configuration of the connection string and associated resources. Hence it has a good developer experience. Importantly, it allows us to avoid exposing our database to username / password authentication, making it a tougher target for bad actors.

This post talks us through using managed identity for connecting to Azure SQL.

Integrated Security=true

Everyone is deploying to the cloud. Few are the organisations that view deployment to data centres they manage as the future. This is generally a good thing; however, in the excitement of the new, it's possible to forget some of the good properties that "on premise" deployment afforded when it came to connectivity and authentication.

I speak of course, of our old friend Integrated Security=true. When you seek to connect a web application to a database, you'll typically use some kind of database connection string. And back in the day, it may have looked something like this:

Data Source=myServer;Initial Catalog=myDB;Integrated Security=true;

The above provides a database server, a database and also Integrated Security=true. When you see Integrated Security=true, what you're essentially looking at is an instruction to use the identity that an application is running under (typically called a "service account") as the authentication credential to secure access to the database. Under the covers, this amounts to Windows Authentication.

The significant thing about this approach is that it is more secure than using usernames and passwords in the connection string. If you have to use username and password to authenticate, then you need to persist them somewhere - so you need to make sure that's secure. Also, if someone manages to acquire that username and password, they're free to get access to the database and do malicious things.

Bottom line: the less you share authentication credentials, the better your security. Integrated Security is a harder nut to crack than a username and password. The thing to note about the phrase above is "Windows Authentication". Web apps in Azure / AWS etc do not typically use Windows Authentication when it comes to connecting to the database. Connecting with username / password is far more common.

What if there was a way to have the developer experience of Integrated Security=true without needing to use Windows Authentication? There is.

Managed Identity

The docs express the purpose of managed identity well:

A common challenge for developers is the management of secrets and credentials to secure communication between different services. On Azure, managed identities eliminate the need for developers having to manage credentials by providing an identity for the Azure resource in Azure AD and using it to obtain Azure Active Directory (Azure AD) tokens

Historically, a certain amount of ceremony was required to use managed identity to connect to a database; it could involve augmenting a DbContext like so:

public MyDbContext(DbContextOptions options) : base(options) {
    var conn = (Microsoft.Data.SqlClient.SqlConnection)Database.GetDbConnection();
    var credential = new DefaultAzureCredential();
    var token = credential
        .GetToken(
            new Azure.Core.TokenRequestContext(new[] { "https://database.windows.net/.default" })
        );
    conn.AccessToken = token.Token;
}

This mechanism works, and has the tremendous upside of no longer requiring that credentials be passed in a connection string. However, as you can see, this isn't the simplest of setups. And also, what if you don't want to use managed identity when you're developing locally? This approach has baggage and forces us to make code changes.

Connection String alone

The wonderful aspect of the original Integrated Security=true approach was that no code changes were required; one need only supply the connection string. Just configuration.

This is now possible with Azure SQL thanks to this PR to the Microsoft.Data.SqlClient nuget package. (Incidentally, Microsoft.Data.SqlClient is the successor to System.Data.SqlClient.)

Support for managed identity in connection strings shipped with v2.1. Connection strings can look slightly different depending on the type of managed identity you're using:

// For System Assigned Managed Identity
"Server={serverURL}; Authentication=Active Directory MSI; Initial Catalog={db};"

// For System Assigned Managed Identity
"Server={serverURL}; Authentication=Active Directory Managed Identity; Initial Catalog={db};"

// For User Assigned Managed Identity
"Server={serverURL}; Authentication=Active Directory MSI; User Id={ObjectIdOfManagedIdentity}; Initial Catalog={db};"

// For User Assigned Managed Identity
"Server={serverURL}; Authentication=Active Directory Managed Identity; User Id={ObjectIdOfManagedIdentity}; Initial Catalog={db};"

Regardless of the approach, you can see that none of the connection strings have credentials in them. And that's special.
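
To underline the point that this is configuration rather than code, the application itself can stay completely vanilla. A minimal sketch (assuming Microsoft.EntityFrameworkCore.SqlServer is referenced and the connection string is exposed to the app as OURDBCONNECTION, as in the ARM snippet further down):

// In Startup.ConfigureServices - note there is no token acquisition code anywhere
services.AddDbContext<MyDbContext>(options =>
    options.UseSqlServer(Configuration.GetConnectionString("OURDBCONNECTION")));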

Usage with Entity Framework Core 5

If you're using Entity Framework Core, you might be struggling to get this working and encountering strange error messages. In my ASP.NET project I had a dependency on Microsoft.EntityFrameworkCore.SqlServer@5.

Microsoft.EntityFrameworkCore.SqlServer@5 in NuGet

If you look closely above, you'll see that the package has a dependency on Microsoft.Data.SqlClient, but crucially on 2.0.1 or greater. So if dotnet has resolved a version of Microsoft.Data.SqlClient lower than 2.1, the functionality required will not be available. The resolution is simple: ensure that the required version is installed explicitly:

dotnet add package Microsoft.Data.SqlClient --version 2.1.2

The version which we want to use is 2.1 (or greater) and fortunately that is compatible with Entity Framework Core 5. Incidentally, when Entity Framework Core 6 ships it will no longer be necessary to manually specify this dependency as it already requires Microsoft.Data.SqlClient@2.1 as a minimum.
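
If you're unsure which version has actually been resolved in your project, the dotnet CLI can tell you (the grep is just there to trim the output):

dotnet list package --include-transitive | grep Microsoft.Data.SqlClient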

User Assigned Managed Identity

If you're using user assigned managed identity, you'll need to supply the object id of your managed identity, which you can find in the Azure Portal:

Managed Identity object id

You can configure this in ARM as well, but cryptically, the object id goes by the nom de plume of principalId (thanks to my partner in crime John McCormick for puzzling that out):

"CONNECTIONSTRINGS__OURDBCONNECTION": "[concat('Server=tcp:', parameters('sqlServerName') , '.database.windows.net,1433;Initial Catalog=', parameters('sqlDatabaseName'),';Authentication=Active Directory MSI',';User Id=', reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities/', parameters('managedIdentityName')), '2018-11-30').principalId)]"

That's it! With managed identity handling your authentication you can sleep easy, knowing you should be in a better place security wise.

Saturday, 6 March 2021

Generate TypeScript and CSharp clients with NSwag based on an API

Generating clients for APIs is a tremendous way to reduce the amount of work you have to do when you're building a project. Why handwrite that code when it can be auto-generated for you quickly and accurately by a tool like NSwag? To quote the docs:

The NSwag project provides tools to generate OpenAPI specifications from existing ASP.NET Web API controllers and client code from these OpenAPI specifications. The project combines the functionality of Swashbuckle (OpenAPI/Swagger generation) and AutoRest (client generation) in one toolchain.

There are some great posts out there that show you how to generate the clients with NSwag using an nswag.json file directly from a .NET project.

However, what if you want to use NSwag purely for its client generation capabilities? You may have an API written with another language / platform that exposes a Swagger endpoint, that you simply wish to create a client for. How do you do that? Also, if you want to do some special customisation of the clients you're generating, you may find yourself struggling to configure that in nswag.json. In that case, it's possible to hook into NSwag directly to do this with a simple .NET console app.

This post will:

  • Create a .NET API which exposes a Swagger endpoint. (Alternatively, you could use any other Swagger endpoint; for example an Express API.)
  • Create a .NET console app which can create both TypeScript and CSharp clients from a Swagger endpoint.
  • Create a script which, when run, creates a TypeScript client.
  • Consume the API using the generated client in a simple TypeScript application.

You will need both Node.js and the .NET SDK installed.

Create an API

We'll now create an API which exposes a Swagger / Open API endpoint. Whilst we're doing that we'll create a TypeScript React app which we'll use later on. We'll drop to the command line and enter the following commands which use the .NET SDK, node and the create-react-app package:

mkdir src
cd src
npx create-react-app client-app --template typescript
mkdir server-app
cd server-app
dotnet new webapi -o API
cd API
dotnet add package NSwag.AspNetCore

We now have a .NET API with a dependency on NSwag. We'll start to use it by replacing the Startup.cs that's been generated with the following:

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

namespace API
{
    public class Startup
    {
        const string ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY = "AllowDevelopmentSpecificOrigins";
        const string LOCAL_DEVELOPMENT_URL = "http://localhost:3000";

        public Startup(IConfiguration configuration)
        {
            Configuration = configuration;
        }

        public IConfiguration Configuration { get; }

        // This method gets called by the runtime. Use this method to add services to the container.
        public void ConfigureServices(IServiceCollection services)
        {

            services.AddControllers();

            services.AddCors(options => {
                options.AddPolicy(name: ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY,
                    builder => {
                        builder.WithOrigins(LOCAL_DEVELOPMENT_URL)
                            .AllowAnyMethod()
                            .AllowAnyHeader()
                            .AllowCredentials();
                    });
            });

            // Register the Swagger services
            services.AddSwaggerDocument();
        }

        // This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
        public void Configure (IApplicationBuilder app, IWebHostEnvironment env)
        {
            if (env.IsDevelopment())
            {
                app.UseDeveloperExceptionPage();
            } 
            else
            {
                app.UseExceptionHandler("/Error");
                app.UseHsts ();
                app.UseHttpsRedirection();
            }

            app.UseDefaultFiles();
            app.UseStaticFiles();

            app.UseRouting();

            app.UseAuthorization();

            // Register the Swagger generator and the Swagger UI middlewares
            app.UseOpenApi();
            app.UseSwaggerUi3();

            if (env.IsDevelopment())
                app.UseCors(ALLOW_DEVELOPMENT_CORS_ORIGINS_POLICY);

            app.UseEndpoints(endpoints =>
            {
                endpoints.MapControllers();
            });
        }
    }
}

The significant changes in the above Startup.cs are:

  1. Exposing a Swagger endpoint with UseOpenApi and UseSwaggerUi3. NSwag will automagically create Swagger endpoints in your application for all your controllers. The .NET template ships with a WeatherForecastController.
  2. Allowing cross-origin resource sharing (CORS), which is useful during development (and will facilitate a demo later).

Back in the root of our project we're going to initialise an npm project. We're going to use this to put in place a number of handy npm scripts that will make our project easier to work with. So we'll npm init and accept all the defaults.

Now we're going to add some dependencies which our scripts will use: npm install cpx cross-env npm-run-all start-server-and-test

We'll also add ourselves some scripts to our package.json:

  "scripts": {
    "postinstall": "npm run install:client-app && npm run install:server-app",
    "install:client-app": "cd src/client-app && npm install",
    "install:server-app": "cd src/server-app/API && dotnet restore",
    "build": "npm run build:client-app && npm run build:server-app",
    "build:client-app": "cd src/client-app && npm run build",
    "postbuild:client-app": "cpx \"src/client-app/build/**/*.*\" \"src/server-app/API/wwwroot/\"",
    "build:server-app": "cd src/server-app/API && dotnet build --configuration release",
    "start": "run-p start:client-app start:server-app",
    "start:client-app": "cd src/client-app && npm start",
    "start:server-app": "cross-env ASPNETCORE_URLS=http://*:5000 ASPNETCORE_ENVIRONMENT=Development dotnet watch --project src/server-app/API run --no-launch-profile"
  }

Let's walk through what the above scripts provide us with:

  • Running npm install in the root of our project will not only install dependencies for our root package.json, thanks to our postinstall, install:client-app and install:server-app scripts it will install the React app and .NET app dependencies as well.
  • Running npm run build will build our client and server apps.
  • Running npm run start will start both our React app and our .NET app. Our React app will be started at http://localhost:3000. Our .NET app will be started at http://localhost:5000 (some environment variables are passed to it with cross-env).

Once npm run start has been run, you will find a Swagger endpoint at http://localhost:5000/swagger:

swagger screenshot
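
If you prefer the command line, you can also sanity check the raw Open API document that's being served (assuming curl is available):

curl http://localhost:5000/swagger/v1/swagger.json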

The client generator project

Now we've scaffolded our Swagger-ed API, we want to put together the console app that will generate our typed clients.

cd src/server-app
dotnet new console -o APIClientGenerator
cd APIClientGenerator
dotnet add package NSwag.CodeGeneration.CSharp
dotnet add package NSwag.CodeGeneration.TypeScript
dotnet add package NSwag.Core

We now have a console app with dependencies on the code generation portions of NSwag. Now let's change up Program.cs to make use of this:

using System;
using System.IO;
using System.Threading.Tasks;
using NJsonSchema;
using NJsonSchema.CodeGeneration.TypeScript;
using NJsonSchema.Visitors;
using NSwag;
using NSwag.CodeGeneration.CSharp;
using NSwag.CodeGeneration.TypeScript;

namespace APIClientGenerator
{
    class Program
    {
        static async Task Main(string[] args)
        {
            if (args.Length != 3)
                throw new ArgumentException("Expecting 3 arguments: URL, generatePath, language");

            var url = args[0];
            var generatePath = Path.Combine(Directory.GetCurrentDirectory(), args[1]);
            var language = args[2];

            if (language != "TypeScript" && language != "CSharp")
                throw new ArgumentException("Invalid language parameter; valid values are TypeScript and CSharp");

            if (language == "TypeScript") 
                await GenerateTypeScriptClient(url, generatePath);
            else
                await GenerateCSharpClient(url, generatePath);
        }

        async static Task GenerateTypeScriptClient(string url, string generatePath) =>
            await GenerateClient(
                document: await OpenApiDocument.FromUrlAsync(url),
                generatePath: generatePath,
                generateCode: (OpenApiDocument document) =>
                {
                    var settings = new TypeScriptClientGeneratorSettings();

                    settings.TypeScriptGeneratorSettings.TypeStyle = TypeScriptTypeStyle.Interface;
                    settings.TypeScriptGeneratorSettings.TypeScriptVersion = 3.5M;
                    settings.TypeScriptGeneratorSettings.DateTimeType = TypeScriptDateTimeType.String;

                    var generator = new TypeScriptClientGenerator(document, settings);
                    var code = generator.GenerateFile();

                    return code;
                }
            );

        async static Task GenerateCSharpClient(string url, string generatePath) =>
            await GenerateClient(
                document: await OpenApiDocument.FromUrlAsync(url),
                generatePath: generatePath,
                generateCode: (OpenApiDocument document) =>
                {
                    var settings = new CSharpClientGeneratorSettings
                    {
                        UseBaseUrl = false
                    };

                    var generator = new CSharpClientGenerator(document, settings);
                    var code = generator.GenerateFile();
                    return code;
                }
            );

        private async static Task GenerateClient(OpenApiDocument document, string generatePath, Func<OpenApiDocument, string> generateCode)
        {
            Console.WriteLine($"Generating {generatePath}...");

            var code = generateCode(document);

            await System.IO.File.WriteAllTextAsync(generatePath, code);
        }
    }
}

We've created ourselves a simple .NET console application that creates TypeScript and CSharp clients for a given Swagger URL. It expects three arguments:

  • url - the url of the swagger.json file to generate a client for.
  • generatePath - the path where the generated client file should be placed, relative to this project.
  • language - the language of the client to generate; valid values are "TypeScript" and "CSharp".

To create a TypeScript client with it then we'd use the following command:

dotnet run --project src/server-app/APIClientGenerator http://localhost:5000/swagger/v1/swagger.json src/client-app/src/clients.ts TypeScript

However, for this to run successfully, we'll first have to ensure the API is running. It would be great if we had a single command we could run that would:

  • bring up the API
  • generate a client
  • bring down the API

Let's make that.

Building a "make a client" script

In the root of the project we're going to add the following scripts:

    "generate-client:server-app": "start-server-and-test generate-client:server-app:serve http-get://localhost:5000/swagger/v1/swagger.json generate-client:server-app:generate",
    "generate-client:server-app:serve": "cross-env ASPNETCORE_URLS=http://*:5000 ASPNETCORE_ENVIRONMENT=Development dotnet run --project src/server-app/API --no-launch-profile",
    "generate-client:server-app:generate": "dotnet run --project src/server-app/APIClientGenerator http://localhost:5000/swagger/v1/swagger.json src/client-app/src/clients.ts TypeScript",

Let's walk through what's happening here. Running npm run generate-client:server-app will:

  • Use the start-server-and-test package to spin up our server-app by running the generate-client:server-app:serve script.
  • start-server-and-test waits for the Swagger endpoint to start responding to requests. When it does start responding, start-server-and-test runs the generate-client:server-app:generate script, which runs our APIClientGenerator console app and provides it with the URL where our swagger can be found, the path of the file to generate and the language, "TypeScript".

If you wanted to generate a C# client (say, if you were writing a Blazor app), you could change the generate-client:server-app:generate script as follows:

   "generate-client:server-app:generate": "dotnet run --project src/server-app/ApiClientGenerator http://localhost:5000/swagger/v1/swagger.json clients.cs CSharp",

Consume our generated API client

Let's run the npm run generate-client:server-app command. It creates a clients.ts file which nestles nicely inside our client-app. We're going to exercise that in a moment. First of all, let's enable proxying from our client-app to our server-app by following the instructions in the Create React App docs and adding the following to our client-app/package.json:

  "proxy": "http://localhost:5000"

Now let's start our apps with npm run start. We'll then replace the contents of App.tsx with:

import React from "react";
import "./App.css";
import { WeatherForecast, WeatherForecastClient } from "./clients";

function App() {
  const [weather, setWeather] = React.useState<WeatherForecast[] | null>();
  React.useEffect(() => {
    async function loadWeather() {
      const weatherClient = new WeatherForecastClient(/* baseUrl */ "");
      const forecast = await weatherClient.get();
      setWeather(forecast);
    }
    loadWeather();
  }, [setWeather]);

  return (
    <div className="App">
      <header className="App-header">
        {weather ? (
          <table>
            <thead>
              <tr>
                <th>Date</th>
                <th>Summary</th>
                <th>Centigrade</th>
                <th>Fahrenheit</th>
              </tr>
            </thead>
            <tbody>
              {weather.map(({ date, summary, temperatureC, temperatureF }) => (
                <tr key={date}>
                  <td>{new Date(date).toLocaleDateString()}</td>
                  <td>{summary}</td>
                  <td>{temperatureC}</td>
                  <td>{temperatureF}</td>
                </tr>
              ))}
            </tbody>
          </table>
        ) : (
          <p>Loading weather...</p>
        )}
      </header>
    </div>
  );
}

export default App;

Inside the React.useEffect above you can see we create a new instance of the auto-generated WeatherForecastClient. We then call weatherClient.get() which sends the GET request to the server to acquire the data and provides it in a strongly typed fashion (get() returns an array of WeatherForecast). This is then displayed on the page like so:

load data from server

As you can see, we're loading data from the server using our auto-generated client. We're reducing the amount of code we have to write and we're reducing the likelihood of errors.
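
For reference, the shapes involved in clients.ts look roughly like this. This is a sketch inferred from the usage above rather than NSwag's verbatim output (which also contains the full fetch plumbing):

export interface WeatherForecast {
  date: string; // a string because DateTimeType.String was specified in the generator settings
  temperatureC: number;
  temperatureF: number;
  summary?: string | undefined;
}

export declare class WeatherForecastClient {
  constructor(baseUrl?: string);
  get(): Promise<WeatherForecast[]>;
}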

This post was originally posted on LogRocket.