Deploying .NET Applications To Azure

Overview of my notes for deploying .NET applications to Azure. For a guided tutorial I highly recommend Dometrain - From Zero to Hero: Deploying .NET Applications to Azure by Mohamad Lawand.

Technologies Used

.NET Web API

I built this simple CRUD(ish) app based on the Microsoft templates and adapted it to use EF Core and Postgres - https://github.com/carlpaton/deploying-dotnet-azure

It's .NET 8.0, so it still has Swagger, which I enabled for all environments so I can use the Swagger UI when deployed to Azure; normally this is only enabled for local development or demo purposes.

Commands

Overview of the commands to run in the various CLIs.

Docker

docker compose up                         ~ local database
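
For reference, a minimal sketch of what a compose file for the local Postgres database could look like; the actual file lives in the repo linked above, so the service name, image tag, port mapping and credentials here are assumptions:

services:
  postgres:
    image: postgres:16            # assumed tag
    container_name: demo-postgres # hypothetical name
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: demo
      POSTGRES_PASSWORD: demo
      POSTGRES_DB: demo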

Run EF migrations locally

dotnet tool install --global dotnet-ef                                               ~ globally install the EF tooling
dotnet build                                                                         ~ build the solution

dotnet ef migrations add Initial_Migration --project .\src\DemoApi\DemoApi.csproj   ~ add a migration based on the application models
dotnet ef database update --project .\src\DemoApi\DemoApi.csproj                    ~ apply the migration

Terraform

terraform version
terraform init    ~ local initialization, tracked with `.terraform.lock.hcl`
                  ~ this is not the backend, it's to record the provider selections

terraform plan    ~ test run, compare local tf with Azure, show the difference

terraform apply   ~ execute, should create the resources described in the plan
                  ~ this will then create `terraform.tfstate`, which is used to compare local state to Azure

Azure

az login --tenant TENANT_ID            ~ auth the console to Entra so you can create resources in Azure
az group list --output table           ~ list resource groups
az account list-locations -o table     ~ list the Azure regions

~ create an Entra `Service Principal`, this is used to deploy to Container Apps
~ here we copy the entire JSON object and save it as the `AZ_CREDENTIALS` GitHub secret

az ad sp create-for-rbac --name github-auth --role contributor --scopes /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/demo-rg --json-auth --output json

~ create an Entra `Service Principal`, this is used to deploy IaC
~ here we copy individual values out of the JSON and save them as separate GitHub secrets

az ad sp create-for-rbac --name iac-terraform-auth

Terraform

I created vars.tf, setup.tf and resource-group.tf and updated the vars with subscription_id (you get this from az login), sql_pass and sql_user. If you don't set a default, Terraform will prompt for a value. I then ran terraform plan/apply, which created my Resource Group (demo-rg).

  • vars.tf
    • variables to be reused in the other .tf files
    • set the subscription id; you get this when you log in with az login
  • setup.tf
    • sets the cloud provider, API versions and the account to connect to by subscription id
    • see azurerm
  • resource-group.tf
    • defines the demo-rg Resource Group

I then added the following .tf files, one at a time in the order listed here, again running terraform plan/apply after each.

The end result seen from Azure showed the resources in my Resource Group:

Resources created in Azure

Backend (Long Term Storage)

When I ran the terraform commands locally they created the state files listed below; these need to be pushed to blob storage if we want to run the commands in a CI/CD pipeline. The files also contain GUIDs like the subscription ID, which I don't think should be pushed to source control.

terraform.tfstate
terraform.tfstate.backup

Note that the file .terraform.lock.hcl can be committed to source control. I created my backend manually in the Azure portal:

  1. Resource groups -> create
  • Name -> reference-rg
  • Region -> (Asia Pacific) Australia East
  • Tags -> environment=dev, source=azure-portal
  • Review and create -> create
  2. Select reference-rg -> create -> search -> storage account -> select Storage account by Microsoft | Azure Service
  • Plan -> Storage account
  • Storage account name -> demoiac
  • Region -> (Asia Pacific) Australia East
  • Primary service -> Azure Blob Storage or Azure Data Lake Storage Gen 2
  • Performance -> Standard
  • Redundancy -> Locally-redundant storage (LRS)
  • Review and create -> create
  • this will take a minute or so to create
  • Go to resource -> Data storage -> Containers -> + Container (New Container)
  • Name -> terraform -> Create
  3. Update setup.tf to include the backend:
terraform {
  required_providers {
    ...
  }

  backend "azurerm" {
    resource_group_name  = "reference-rg"
    storage_account_name = "demoiac"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
  4. To migrate the Terraform state from local to the new blob storage:
  • run terraform init
  • Do you want to copy existing state to the new backend? -> yes

tfstate was copied to blob storage

  5. Add a testing tag to the existing demo-rg resource group
tags = {
  ...
  test1 = "test1"
}
  6. Run terraform plan; this will show the changes. There is no need to apply them here, as that will be done next in a pipeline.

… WIP!

Connect To SQL

The username and DNS shown here were just for the demo; best practice is to never commit or share any secrets.

I then added my own IP address to the firewall by navigating to Resource Group (demo-rg) -> SQL server (demo-sql-dev) -> Networking (Show networking settings) -> Firewall rules -> Add your client IPv4 address (xxx.xxx.xxx.xxx). Azure automagically filled mine in; I named the rule Carl home IP and clicked Save. Your IP won't be static, so it will probably need an update later.

I then needed the server address for SQL in Azure, found by navigating to the SQL server. My server name was demo-sql-dev.database.windows.net; the demo-sql-dev prefix comes from mssql.tf where we set the server name, and Azure appends the .database.windows.net suffix.

Using the sql_pass/sql_user values from vars.tf, I then connected with DBeaver.

Test sql connection

Github Secrets and Variables

GitHub Settings -> Secrets and variables -> Actions ->

  • Secrets tab -> Repository secrets -> New repository secret
  • Variables tab -> Repository variables -> New repository variable

For Azure Container Registry

Container Registry -> Settings -> Access keys

${{ vars.ACR_SERVER }}
${{ secrets.ACR_USER }}
${{ secrets.ACR_PASSWORD }}
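
A sketch of how these values might be wired into the build-and-push job; the step names mirror the CI/CD outline below, but the image repository name (demoacrdev) and the use of docker/login-action are assumptions rather than the repo's exact workflow:

- name: Configure Azure Container Registry (ACR)
  uses: docker/login-action@v3
  with:
    registry: ${{ vars.ACR_SERVER }}
    username: ${{ secrets.ACR_USER }}
    password: ${{ secrets.ACR_PASSWORD }}

- name: Build and push image to Azure Container Registry (ACR)
  run: |
    # github.sha is the full commit SHA; the notes further down use a short SHA as the tag
    docker build -t ${{ vars.ACR_SERVER }}/demoacrdev:${{ github.sha }} .
    docker push ${{ vars.ACR_SERVER }}/demoacrdev:${{ github.sha }}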

For Azure Container App

Create a service principal and configure its access to Azure resources. See az ad sp create-for-rbac above; save the whole JSON output as the GitHub secret.

${{ secrets.AZ_CREDENTIALS }}
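
A sketch of the deploy job steps that consume this secret; the Container App (demo-acadev), resource group (demo-rg) and image name match the resources mentioned later in these notes, but the az containerapp update call is an assumption, not necessarily how the course or repo does it:

- name: Login to Azure
  uses: azure/login@v1
  with:
    creds: ${{ secrets.AZ_CREDENTIALS }}

- name: Deploy to Azure Container Apps (ACA)
  run: |
    az containerapp update \
      --name demo-acadev \
      --resource-group demo-rg \
      --image demoacrdev.azurecr.io/demoacrdev:${{ github.sha }}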

For IAC

Create a service principal and configure its access to Azure resources. See az ad sp create-for-rbac above; save individual keys out of the JSON as separate GitHub secrets.

${{ secrets.ARM_CLIENT_ID }}         ~ appId
${{ secrets.ARM_CLIENT_SECRET }}     ~ password
${{ secrets.ARM_SUBSCRIPTION_ID }}   ~ you get this from `az login` or from any resource in the Azure portal
${{ secrets.ARM_TENANT_ID }}         ~ existing tenant
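
The azurerm provider picks these up as environment variables, so a Terraform job can be sketched roughly like this (job and step names are hypothetical):

jobs:
  terraform-plan:
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init   # uses the azurerm backend configured earlier
      - run: terraform plan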

CI/CD

  • WORKFLOW (triggers; a workflow skeleton is sketched after this list)

    • push: When you commit
    • pull_request: When you create a pull request
    • workflow_dispatch: When you manually trigger a job, useful for pull request job re-runs
  • JOB:

    • build-and-push-image
      • STEPS:
        • Checkout repository
        • Setup WebApi .NET
        • Configure Azure Container Registry (ACR)
        • Get commit SHA
        • Build and push image to Azure Container Registry (ACR)
    • deploy-image-to-container-service
      • STEPS
        • Login to Azure
        • Deploy to Azure Container Apps (ACA)
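
Putting the outline together, the workflow file looks roughly like this; it's a sketch, not the repo's actual workflow, and the workflow name and dotnet-version are assumptions:

name: build-and-deploy
on:
  push:
  pull_request:
  workflow_dispatch:

jobs:
  build-and-push-image:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup WebApi .NET
        uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      # Configure ACR, get commit SHA, build and push - sketched in the ACR secrets section above

  deploy-image-to-container-service:
    needs: build-and-push-image
    runs-on: ubuntu-latest
    steps:
      - name: Login to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZ_CREDENTIALS }}
      # Deploy to Azure Container Apps (ACA) - sketched in the Container App secrets section above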

Finding Logs in Azure

  • Resource group -> demo-rg
    • Container App -> demo-acadev -> Activity Log -> Create or Update Container App -> Create or Update Container App -> Change history -> properties.provisioningState

Look for image; here the tag 965d7c3 is my commit SHA, so I can see what code I changed.

old: "image": "demoacrdev.azurecr.io/demoacrdev:478d482"
new: "image": "demoacrdev.azurecr.io/demoacrdev:965d7c3"

Then look for latestRevisionName; here kj2aaos is the revision running in Azure.

old: "latestRevisionName": "demo-acadev--gw4qzwk"
new: "latestRevisionName": "demo-acadev--kj2aaos"

Under Monitoring -> Log stream, select the revision matching kj2aaos; this gives you the running container logs, so you can see why your code sucks :D

Under Monitoring -> Logs you can write and execute your own log queries; this is helpful when you have heaps of logs and need to diagnose problems.

References