| title | MSAL Java Workshop: Entra ID Authentication with Angular SPA + Spring Boot API |
|---|---|
| description | Workshop and sample applications demonstrating Microsoft Entra ID authentication with Angular 19 SPA and Spring Boot 3.4 API |
| ms.date | 2026-04-21 |
The Justice Evidence Portal is a three-tier application that uses Microsoft Entra ID across every layer: the SPA authenticates users with Auth Code + PKCE, the API enforces JWT v2 + scope and role authorisation, and the API talks to Azure Data Lake Storage Gen2 with its system-assigned Managed Identity over a Private Endpoint. There are no storage account keys, no SAS tokens, and no anonymous endpoints anywhere in the data path.
The next four diagrams break the system down by concern: the high-level topology, the identity and token flow, the network and DNS plane, and the anatomy of a single download request. A fifth diagram further down covers the GitHub Actions deployment pipeline.
```mermaid
flowchart LR
User["End user<br/>(browser)"]
EntraID["Microsoft Entra ID<br/>App regs · Scopes · App roles"]
subgraph Azure["Azure subscription · Resource group rg-evidence-workshop"]
SPA["App Service: SPA<br/>app-evidence-spa-workshop<br/>Angular 19 + MSAL.js"]
API["App Service: API<br/>app-evidence-api-workshop<br/>Spring Boot 3.4 / Java 17<br/>System-assigned MI"]
Plan["App Service Plan S1 Linux"]
AI["Application Insights<br/>+ Log Analytics"]
subgraph VNet["VNet 10.20.0.0/16"]
SnetApp["snet-app 10.20.1.0/24<br/>delegated Microsoft.Web/serverFarms<br/>SE: Microsoft.Storage"]
SnetPe["snet-pe 10.20.2.0/24<br/>private-endpoint policies disabled"]
PE["Private Endpoint<br/>storage 'dfs' sub-resource<br/>NIC IP 10.20.2.x"]
DNS["Private DNS Zone<br/>privatelink.dfs.core.windows.net"]
end
Storage["ADLS Gen2 (HNS)<br/>stevpworkshopXXXXXXXX<br/>shared keys: DISABLED<br/>defaultAction: Deny<br/>VNet rule: snet-app"]
end
User -- "1. Auth Code + PKCE" --> EntraID
EntraID -- "id + access tokens" --> User
User -- "static assets" --> SPA
User -- "2. XHR + Bearer JWT v2" --> API
API -- "3. validates JWT (issuer, aud, scp)" --> EntraID
API -- "4. requests MI token" --> EntraID
API -- "5. ADLS read via MI" --> SnetApp
SnetApp -. "VNet integration" .- Plan
SnetApp --> PE
PE --> Storage
DNS -. "resolves *.dfs.core.windows.net" .- PE
API -. telemetry .-> AI
SPA -. JS telemetry .-> AI
```
Two distinct OAuth 2.0 flows are happening at the same time, and they share the same identity provider but never share tokens. The user-facing Auth Code + PKCE flow gets the SPA a delegated access token that the API can validate. Independently, the API's Managed Identity flow gets a separate token that lets it call ADLS Gen2 as itself.
```mermaid
sequenceDiagram
autonumber
actor U as User (browser)
participant SPA as Angular SPA<br/>MSAL.js
participant Entra as Entra ID
participant API as Spring Boot API<br/>(System MI)
participant IMDS as App Service<br/>Managed Identity endpoint
participant ST as ADLS Gen2<br/>storage account
U->>SPA: Open SPA URL
SPA->>Entra: Auth Code + PKCE (login)
Entra-->>U: Sign-in UI / consent
U-->>Entra: credentials (+ MFA)
Entra-->>SPA: id_token + access_token<br/>aud=api://{apiClientId}<br/>scp=Evidence.Read<br/>roles=[CaseReader, CaseAdmin]
U->>SPA: Click Download EV-001
SPA->>API: GET /api/evidence/EV-001/download<br/>Authorization: Bearer {user JWT}
API->>API: Spring Security validates JWT<br/>issuer = login.microsoftonline.com/{tenant}/v2.0<br/>aud, scp, roles
Note over API,IMDS: First call after deploy:<br/>MI token cache is empty
API->>IMDS: GET /metadata/identity (MSAL4J)
IMDS-->>API: access_token for resource=https://storage.azure.com<br/>~15s on first call, sub-second when cached
API->>ST: GET /evidence/(unknown) (DFS endpoint)<br/>Authorization: Bearer MI token
ST-->>API: 200 OK + bytes
API-->>SPA: 200 OK · application/pdf · attachment
SPA-->>U: Save dialog
```
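Exercise 2 has you inspect the user JWT's claims. As a minimal stdlib-only sketch (inspection only — it does **not** verify the signature, so it must never substitute for the API's server-side validation; the claim values below are hypothetical), the `aud`, `scp`, and `roles` claims can be read by Base64-decoding the token's middle segment:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    // Decode the payload (second) segment of a JWT for inspection.
    // The signature is NOT checked here; Spring Security does the real validation.
    static String payloadJson(String jwt) {
        String[] parts = jwt.split("\\.");
        if (parts.length < 2) throw new IllegalArgumentException("not a JWT");
        byte[] raw = Base64.getUrlDecoder().decode(parts[1]);
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical unsigned token carrying the claims this workshop cares about.
        String payload = "{\"aud\":\"api://api-client-id\",\"scp\":\"Evidence.Read\",\"roles\":[\"CaseReader\"]}";
        String token = "eyJhbGciOiJub25lIn0."
                + Base64.getUrlEncoder().withoutPadding()
                        .encodeToString(payload.getBytes(StandardCharsets.UTF_8))
                + ".";
        System.out.println(payloadJson(token));
    }
}
```

The same decoding is what browser tools like jwt.ms do; the API itself never trusts a decoded claim until the signature, issuer, and audience have been verified.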
This is the topology the Bicep modules in infra/modules/ provision. Two subtleties are worth calling out. First, ADLS Gen2 uses the dfs sub-resource on the Private Endpoint, not the blob one, and the SDK call must therefore go through the DataLake client (see AzureBlobStorageService.java and AzureStorageConfig.java). Second, the storage account keeps publicNetworkAccess=Enabled on purpose — that is the only mode in which virtualNetworkRules are honoured. The default action is still Deny, so the only public callers that can reach the account are those that match the temporary deployer-IP allow-list during sample-evidence seeding (see the comment block at the top of storage-account.bicep).
```mermaid
flowchart TB
subgraph RG["Resource group · rg-evidence-workshop · canadacentral"]
Plan["asp-evidence-workshop<br/>App Service Plan · S1 Linux"]
SPA["app-evidence-spa-workshop"]
API["app-evidence-api-workshop<br/>WEBSITE_VNET_ROUTE_ALL=1"]
subgraph VNet["vnet-evidence-workshop · 10.20.0.0/16"]
direction LR
subgraph SnetApp["snet-app · 10.20.1.0/24"]
direction TB
Delegation["Delegation<br/>Microsoft.Web/serverFarms"]
SE["Service endpoint<br/>Microsoft.Storage"]
end
subgraph SnetPe["snet-pe · 10.20.2.0/24"]
direction TB
PE["pe-storage-dfs<br/>NIC 10.20.2.x"]
end
DNS["Private DNS Zone<br/>privatelink.dfs.core.windows.net<br/>linked to VNet"]
end
Storage["Storage account (HNS)<br/>publicNetworkAccess=Enabled<br/>networkAcls.defaultAction=Deny<br/>virtualNetworkRules=[snet-app]<br/>ipRules=[deployerIp during seed]<br/>allowSharedKeyAccess=false"]
end
SPA -. hosted on .- Plan
API -. hosted on .- Plan
API -- "regional VNet integration" --> SnetApp
SnetApp -- "VirtualNetworkRule grants access" --> Storage
SnetApp -- "DNS query for *.dfs.* resolves to PE NIC" --> DNS
DNS --> PE
PE -- "Private Link to" --> Storage
```
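The storage posture above can be sketched in Bicep. This is an illustrative fragment, not the actual `storage-account.bicep` module — the parameter names `storageAccountName`, `location`, and `appSubnetId` are assumptions, and the real module also handles the temporary deployer-IP rule:

```bicep
resource storage 'Microsoft.Storage/storageAccounts@2023-05-01' = {
  name: storageAccountName
  location: location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
  properties: {
    isHnsEnabled: true              // ADLS Gen2 hierarchical namespace
    allowSharedKeyAccess: false     // OAuth + RBAC only, no keys or SAS
    publicNetworkAccess: 'Enabled'  // required so virtualNetworkRules are honoured
    networkAcls: {
      defaultAction: 'Deny'         // nothing public gets in by default
      virtualNetworkRules: [
        { id: appSubnetId }         // snet-app, via its Microsoft.Storage service endpoint
      ]
    }
  }
}
```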
This is the sequence to keep in mind when something goes wrong: a 502 Bad Gateway on the very first download after deploy, mysterious CORS errors in the browser, or a 401 instead of a 403. Each decision diamond is enforced by a different component and produces a different failure mode.
```mermaid
flowchart TB
Start([User clicks Download]) --> SPAreq["SPA fetch<br/>GET /api/evidence/EV-001/download<br/>Authorization: Bearer {user JWT}"]
SPAreq -->|"CORS preflight"| Preflight{"OPTIONS allowed?<br/>SecurityConfig.corsConfigurationSource"}
Preflight -->|"no"| CORSfail[["Browser blocks · CORS error"]]
Preflight -->|"yes"| GET["GET reaches Spring Security filter chain"]
GET --> JWTcheck{"JWT valid?<br/>issuer-uri, aud, signature"}
JWTcheck -->|"no"| R401[["401 Unauthorized"]]
JWTcheck -->|"yes"| Scope{"Has SCOPE_Evidence.Read?"}
Scope -->|"no"| R403[["403 Forbidden"]]
Scope -->|"yes"| Ctrl["EvidenceController.downloadEvidence(id)"]
Ctrl --> Lookup["caseService.getFilenameForEvidenceId(id)"]
Lookup --> Storage["AzureBlobStorageService.downloadEvidence<br/>dataLakeServiceClient.read(outputStream)"]
Storage --> MI{"MI token cached?"}
MI -->|"no · first call"| TokenAcq["AppServiceManagedIdentitySource<br/>fetches token (~15s observed)"]
MI -->|"yes · warm"| ReadBytes["DFS GET to storage account via PE"]
TokenAcq --> Risk[["Risk: App Service 230s gateway can return 502<br/>before token + read complete"]]
TokenAcq --> ReadBytes
ReadBytes --> Resp[["200 OK · application/pdf · attachment"]]
classDef bad fill:#fecaca,stroke:#991b1b,color:#111
classDef warn fill:#fde68a,stroke:#b45309,color:#111
classDef good fill:#bbf7d0,stroke:#15803d,color:#111
class CORSfail,R401,R403 bad
class Risk warn
class Resp good
```
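The `SCOPE_Evidence.Read` check in the diagram comes from Spring Security's default JWT authority mapping: each space-separated value in the `scp` claim becomes a `SCOPE_`-prefixed granted authority. A stdlib-only sketch of that convention (the real mapping is done by Spring's `JwtGrantedAuthoritiesConverter`; this just mirrors the rule):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class ScopeAuthorities {
    // Mirror of the resource-server default: "scp" is a space-separated string,
    // and each scope becomes a "SCOPE_"-prefixed authority that @PreAuthorize
    // expressions like hasAuthority('SCOPE_Evidence.Read') match against.
    static List<String> fromScpClaim(String scp) {
        if (scp == null || scp.isBlank()) return List.of();
        return Arrays.stream(scp.trim().split("\\s+"))
                .map(s -> "SCOPE_" + s)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(fromScpClaim("Evidence.Read"));
    }
}
```

App roles arrive in the separate `roles` claim and are mapped differently, which is why the 403 in Exercise 3 is fixed by a role assignment rather than a new scope.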
The workshop centers on a Justice Evidence Portal: a secure application for managing case evidence files. Users authenticate through Entra ID, and the API enforces role-based access (CaseReader, CaseAdmin) before serving evidence documents from Azure Storage. External partners access the system as B2B guest users within the organization's tenant.
| Tool | Version | Purpose |
|---|---|---|
| Node.js | 20 LTS or later | Angular SPA build and development |
| Java JDK | 17 or later | Spring Boot API compilation and runtime |
| Maven | 3.9 or later (auto-installed by start script) | Java dependency management and build |
| Azure CLI | 2.60 or later | Azure resource provisioning and deployment |
| VS Code | Latest | Recommended editor with extensions |
The sample apps work immediately without any Azure or Entra ID configuration. The dev profile serves 5 mock cases and permits all API requests so you can explore the code before setting up authentication.
1. Clone the repository (or click "Use this template" on GitHub):

   ```bash
   git clone https://github.com/devopsabcs-engineering/msal-java.git
   cd msal-java
   ```

2. Start both apps with a single command:

   PowerShell (Windows):

   ```powershell
   .\scripts\start.ps1
   ```

   Bash (macOS/Linux):

   ```bash
   ./scripts/start.sh
   ```

   The start script automatically:

   - Downloads and installs Maven if it is not found on your PATH
   - Installs SPA npm dependencies if `node_modules` is missing
   - Kills any previous instances on ports 4200 and 8080
   - Starts the Spring Boot API (`http://localhost:8080`)
   - Starts the Angular SPA (`http://localhost:4200`)

3. Verify the API returns mock data:

   ```bash
   curl http://localhost:8080/api/cases
   ```

   You should see 5 JSON case objects.

4. Open the SPA at http://localhost:4200 to see the Justice Evidence Portal landing page.

Note: The "Sign In" button will fail until you complete Exercise 1 (Entra ID app registration). The API endpoints are fully functional without authentication in dev mode.
scripts/setup-entra-apps.ps1 is an idempotent PowerShell helper that creates and fully configures the SPA and API app registrations against the tenant you are currently logged in to with the Azure CLI. It is the fastest path through Exercise 1 if you prefer scripting over the Azure Portal.
What it does (every call is a no-op if the resource is already configured the right way):
- Verifies `az` is installed and you are signed in (`az login`).
- Acquires a Microsoft Graph access token and calls Graph directly via `Invoke-RestMethod` (no `az rest` quoting issues on Windows).
- Creates the API app, sets its Application ID URI to `api://<appId>`, exposes the `Evidence.Read` OAuth2 scope, and defines the `CaseReader` and `CaseAdmin` app roles.
- Creates the SPA app, configures its SPA platform redirect URI(s), grants the delegated `Evidence.Read` permission, and pre-authorizes the SPA on the API.
- Creates service principals for both apps if they don't exist yet.
- (Optional, default on) Grants tenant admin consent for the SPA's delegated permission and self-assigns the signed-in user to both `CaseReader` and `CaseAdmin` so you can sign in immediately.
- (Optional, default on) Patches the local `environment.ts`, `environment.prod.ts`, and `application.properties` files with the resulting client/tenant IDs and scope URI.
Usage:

```powershell
# Sign in to the tenant where the apps should live
az login --tenant <tenantId>

# Bootstrap both app registrations and patch local config
.\scripts\setup-entra-apps.ps1 `
  -SpaName "Evidence Portal SPA" `
  -ApiName "Evidence Portal API"

# Re-run later with a production redirect URI (idempotent)
.\scripts\setup-entra-apps.ps1 `
  -SpaName "Evidence Portal SPA" `
  -ApiName "Evidence Portal API" `
  -ProductionRedirectUri "https://my-spa.azurewebsites.net" `
  -OutputFile ".\.entra-apps.json"
```

The script returns and prints `tenantId`, `apiAppId`, `apiObjectId`, `apiServicePrincipalId`, `apiScopeId`, `apiScopeUri`, `roleReaderId`, `roleAdminId`, `spaAppId`, `spaObjectId`, `spaServicePrincipalId`, plus the redirect URIs and consent/role-assignment status. With `-OutputFile` it also writes a JSON state file that `scripts/deploy.ps1` consumes on its next run, so you don't need to re-run setup before every deployment.
Skip the patching or admin consent with `-UpdateLocalConfig:$false`, `-GrantAdminConsent:$false`, or `-AssignCurrentUserToRoles:$false` if you would rather wire those up by hand.
If you want to see the deployed end-state in Azure as quickly as possible — without going through the four guided exercises — run the one-stop deployment script. It chains every step of Exercises 1 and 4 into a single idempotent run.
```powershell
# Sign in once to the tenant where the apps and Azure resources should live
az login --tenant <tenantId>
az account set --subscription <subscriptionIdOrName>

# Deploy everything (Entra ID + Bicep + SPA + API + evidence files)
.\scripts\deploy.ps1
```

What `deploy.ps1` does end-to-end:
- Verifies `az`, `node`, and `mvn` (auto-installs Maven into `%LOCALAPPDATA%\Maven` if missing).
- Calls `setup-entra-apps.ps1` to create/reuse both app registrations, expose the scope and roles, force the API token version to v2, grant admin consent, and assign your user to `CaseReader` + `CaseAdmin`.
- Creates the resource group `rg-evidence-workshop` in `canadacentral` and a deterministic, globally unique storage account name.
- Detects your public IP and Entra principal objectId so the seed step can run over OAuth without ever using a shared key.
- Deploys the Bicep stack: VNet (`snet-app` delegated to `Microsoft.Web/serverFarms` with the `Microsoft.Storage` service endpoint, `snet-pe` for endpoints), App Service Plan (S1 Linux — minimum SKU for VNet integration), two App Services with system-assigned Managed Identity and regional VNet integration, hardened ADLS Gen2 storage (`isHnsEnabled=true`, `allowSharedKeyAccess=false`, `publicNetworkAccess=Enabled` with `networkAcls.defaultAction=Deny` and a `VirtualNetworkRule` for `snet-app` — see Findings for why `Disabled` is wrong here), a Private Endpoint on the storage `dfs` sub-resource, the Private DNS Zone `privatelink.dfs.<storage suffix>`, Application Insights, and a `Storage Blob Data Contributor` role assignment for the API Managed Identity.
- Patches `environment.prod.ts` with the deployed SPA/API URLs and the App Insights connection string.
- Re-runs `setup-entra-apps.ps1` to add the production SPA URL as a SPA-platform redirect URI on the SPA app registration.
- Builds the Angular SPA in production mode (with the Ontario Design System assets fetched into `public/vendor/`) and the Spring Boot API as an executable JAR.
- Deploys the SPA zip and the API JAR with `az webapp deploy`.
- Uploads the five sample PDFs over OAuth (`--auth-mode login`, no shared keys) using the temporary deployer-IP allow-list and `Storage Blob Data Contributor` RBAC.
- Re-deploys storage with `deployerIp=''` so the temporary deployer-IP allow-list is removed; the App Services keep working via the `snet-app` VNet rule and the Private Endpoint.
- Smoke-tests the result: the SPA URL must return `200`, and the API `/api/cases` must return `401` (proving JWT validation is enforced).
When it finishes you'll see something like:
```text
Deployment complete
Resource Group : rg-evidence-workshop
Region         : canadacentral
SPA URL        : https://app-evidence-spa-workshop.azurewebsites.net
API URL        : https://app-evidence-api-workshop.azurewebsites.net
Storage        : stevpworkshopXXXXXXXX (container: evidence)
```
Open the SPA URL, sign in with the same account you ran the script as, and you should land on the case list with all five sample cases — files served from Blob Storage through the API's Managed Identity.
Common flags:
| Flag | Default | Purpose |
|---|---|---|
| `-ResourceGroup` | `rg-evidence-workshop` | Target resource group (created if missing). |
| `-Location` | `canadacentral` | Azure region. |
| `-Environment` | `workshop` | Suffix used for App Service names (`app-evidence-spa-<env>`, `app-evidence-api-<env>`). |
| `-SkipEntraSetup` | off | Reuse a previous `.entra-apps.json` and skip the Graph calls. |
| `-SkipBuild` | off | Reuse the existing `dist/` and `target/` artifacts. |
| `-SkipUpload` | off | Skip the sample-evidence blob upload. |
When you're done with the workshop, remove everything with:
```bash
az group delete --name rg-evidence-workshop --yes --no-wait
```

After the first manual deploy with `deploy.ps1`, every push to `main` that touches `sample-app/**` is built and deployed automatically by `.github/workflows/deploy.yml`. The workflow uses OIDC federated credentials, so there is no client secret stored anywhere — GitHub mints a short-lived OIDC token and Azure exchanges it for an access token scoped to the workshop service principal. The workflow is intentionally narrow: it builds the SPA and API, deploys both artefacts, and runs a smoke test. It never touches infrastructure, app registrations, or the storage seed (those remain operator responsibilities driven from `deploy.ps1` / `setup-entra-apps.ps1`).
```mermaid
flowchart LR
Dev["Developer"]
GH["GitHub repo<br/>devopsabcs-engineering/msal-java<br/>branch main"]
subgraph WF["GitHub Actions · .github/workflows/deploy.yml"]
direction TB
BuildJob["build job<br/>· setup-java 17 (Temurin) + maven cache<br/>· setup-node 20 + npm cache<br/>· mvn package -DskipTests<br/>· fetch Ontario Design System assets<br/>· npm ci + ng build --configuration production<br/>· upload api-jar + spa-dist artefacts"]
DeployJob["deploy job · environment: workshop<br/>· azure/login@v2 (OIDC)<br/>· webapps-deploy@v3 (api jar)<br/>· webapps-deploy@v3 (spa zip)<br/>· smoke test: SPA 200, API /api/cases 401"]
BuildJob --> DeployJob
end
OIDC["Entra ID app reg<br/>msal-java-github-actions<br/>FIC: branch main + env workshop<br/>RBAC: Website Contributor on RG"]
Azure["Azure<br/>app-evidence-api-workshop<br/>app-evidence-spa-workshop"]
Dev -- "git push" --> GH
GH -- "trigger" --> WF
DeployJob -- "OIDC token<br/>repo:devopsabcs-engineering/msal-java:..." --> OIDC
OIDC -- "AAD access token" --> DeployJob
DeployJob -- "deploy artefacts" --> Azure
```
Bootstrap the OIDC trust once, idempotently:
```powershell
.\scripts\setup-github-oidc.ps1
```

What `scripts/setup-github-oidc.ps1` does:
- Verifies you are signed in to both the `az` and `gh` CLIs.
- Creates (or reuses) the `msal-java-github-actions` Entra ID app registration and its service principal.
- Adds two Federated Identity Credentials so OIDC tokens issued for the `main` branch and for the `workshop` deployment environment can both be exchanged for an Azure AD access token. No secret leaves Azure.
- Grants the SP `Website Contributor` on `rg-evidence-workshop` (and optionally `Storage Blob Data Contributor` on the storage account with `-GrantStorageContributor`).
- Writes the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_SUBSCRIPTION_ID` repository secrets via `gh secret set`.
Re-run it any time — every step is a no-op if the resource is already configured the right way. The workflow's smoke test asserts that /api/cases returns 401, which is the canonical proof that JWT validation is enforced (a 200 would mean the API is wide open; a 5xx would mean it failed to start or storage is unreachable).
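That 200/401/5xx reasoning can be written down as a tiny verdict function. This is an illustrative plain-Java sketch (names are hypothetical; the real check lives in the workflow's shell steps), but it encodes exactly the semantics described above:

```java
public class SmokeVerdict {
    // Interpret the two smoke-test status codes: the SPA must serve content (200)
    // and the API must *reject* anonymous calls (401) to prove JWT enforcement.
    static String verdict(int spaStatus, int apiCasesStatus) {
        if (spaStatus != 200) return "FAIL: SPA not serving (" + spaStatus + ")";
        if (apiCasesStatus == 200) return "FAIL: API is wide open (JWT validation not enforced)";
        if (apiCasesStatus >= 500) return "FAIL: API failed to start or storage is unreachable";
        if (apiCasesStatus == 401) return "PASS: SPA up, API enforcing JWT validation";
        return "WARN: unexpected API status " + apiCasesStatus;
    }

    public static void main(String[] args) {
        System.out.println(verdict(200, 401)); // healthy deployment
    }
}
```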
Follow these exercises in order for the full 3-hour workshop experience. Already saw the Fast-Track land everything in Azure? You can still use these guides as a step-by-step walkthrough of what `deploy.ps1` automated.
| Exercise | Duration | Description |
|---|---|---|
| Exercise 1: Configure App Registrations | 30 min | Create Entra ID app registrations for the SPA and API, configure scopes, roles, and update the SPA environment. (Automated end-to-end by setup-entra-apps.ps1.) |
| Exercise 2: Run SPA + API Locally | 30 min | Sign in through the SPA, browse cases, download evidence, and inspect JWT tokens. |
| Exercise 3: Add Role-Protected Endpoint | 20 min | Experience the RBAC cycle: 403 Forbidden, assign CaseAdmin role, re-authenticate, 201 Created. |
| Exercise 4: Deploy to Azure | 20 min | Deploy both apps and infrastructure to Azure using Bicep, verify Managed Identity storage access. (Automated end-to-end by deploy.ps1.) |
For the full instructor delivery guide with 9-module schedule and presentation notes, see workshop/README.md.
```text
msal-java/
├── .github/
│   └── workflows/
│       └── deploy.yml           # CI/CD: OIDC -> Azure App Service deployment
├── sample-app/
│   ├── api/                     # Spring Boot 3.4 REST API (Java 17)
│   │   ├── src/main/java/       # Controllers, services, security config
│   │   ├── src/main/resources/  # Properties, sample data, evidence PDFs
│   │   ├── Dockerfile
│   │   └── pom.xml
│   └── spa/                     # Angular 19 Single Page Application
│       ├── src/app/             # Components, services, MSAL config
│       ├── src/environments/    # Dev and prod environment configs
│       └── package.json
├── workshop/
│   ├── guides/                  # 4 hands-on exercise guides
│   ├── solutions/               # Exercise 3 solution files
│   └── README.md                # Instructor delivery guide
├── infra/                       # Bicep IaC (App Service, Storage, monitoring, VNet, PE)
│   ├── main.bicep
│   ├── main.bicepparam
│   └── modules/                 # 8 Bicep modules incl. vnet + private-endpoint-storage
├── scripts/
│   ├── start.ps1                # Start both apps locally (Windows)
│   ├── start.sh                 # Start both apps locally (macOS/Linux)
│   ├── deploy.ps1               # Full Azure deployment (PowerShell)
│   ├── deploy.sh                # Full Azure deployment (Bash)
│   ├── setup-entra-apps.ps1     # Idempotent app registration bootstrap (PowerShell, Graph API)
│   ├── setup-entra-apps.sh      # Automate app registrations (Bash, az CLI)
│   ├── setup-github-oidc.ps1    # Idempotent OIDC trust + repo secrets for CI/CD
│   ├── configure-app-settings.sh        # Post-deploy configuration
│   ├── fetch-ontario-design-system.ps1  # Pulls Ontario DS assets into the SPA build
│   └── generate-sample-evidence.ps1     # Regenerates the sample PDFs
├── docs/
│   └── production-hardening.md  # Front Door, WAF, CMK, multi-region next-steps
└── README.md
```
| Layer | Technology | Version | Purpose |
|---|---|---|---|
| Frontend | Angular | 19.2 | Single Page Application framework |
| Frontend Auth | MSAL Angular | 5.2 | Entra ID authentication (Auth Code + PKCE) |
| Backend | Spring Boot | 3.4.4 | REST API framework |
| Backend Auth | Spring Security OAuth2 Resource Server | 6.2 | JWT validation with scope and role enforcement |
| Storage | Azure Data Lake Storage Gen2 | `azure-storage-file-datalake` 12.23.0 | Evidence file storage via Managed Identity over Private Endpoint |
| Identity | Azure Identity | 1.18.2 SDK | ManagedIdentityCredential in App Service, DefaultAzureCredential locally |
| Monitoring | Application Insights | 3.7.8 Agent | Telemetry for SPA (JS SDK) and API (runtime-attach) |
| Infrastructure | Bicep | Latest | Azure resource provisioning (App Service, Storage, monitoring, VNet, PE) |
| CI/CD | GitHub Actions + OIDC | `azure/login@v2` + `azure/webapps-deploy@v3` | Secret-less deploy to App Service on push to `main` |
- Dev profile is open: No authentication is required to explore the API locally. JWT validation and `@PreAuthorize` enforcement activate in non-dev profiles. The `LocalStorageService` bean (active under `@Profile("dev")`) serves embedded PDFs from the classpath; `AzureBlobStorageService` (`@Profile("!dev")`) is the production bean.
- Pinned `ManagedIdentityCredential` in App Service: `AzureStorageConfig.java` detects the `IDENTITY_ENDPOINT` env var and uses `ManagedIdentityCredentialBuilder` directly instead of the `DefaultAzureCredential` chain, which has been observed to fail silently when one of its earlier providers (e.g. `EnvironmentCredential`) reports "unavailable" without throwing.
- No storage keys, no SAS tokens, no anonymous access: Shared keys are disabled at the storage account; all data-plane access is Entra ID OAuth + RBAC (`Storage Blob Data Contributor` on the API Managed Identity). The seed step uploads sample PDFs the same way (`az storage blob upload-batch --auth-mode login`) under a temporary deployer-IP allow-list that is removed at the end of the deployment.
- `publicNetworkAccess=Enabled`, but with `defaultAction=Deny`: Counter-intuitively, the storage account must keep its public-access flag set to `Enabled` so that `virtualNetworkRules` are honoured by the Storage resource provider. Setting it to `Disabled` causes the account to refuse every request that does not arrive over a Private Endpoint, which silently breaks the regional-VNet-integration path. The default ACL action is still `Deny`, so only the App Service subnet (`snet-app`, granted via a `VirtualNetworkRule`) and any temporary deployer IP can reach the data plane.
- `Microsoft.Storage` service endpoint on `snet-app` is required: Empirically, traffic arriving via regional VNet integration is not trusted by the storage `networkAcls` through the Private Endpoint alone. The subnet must explicitly enable the `Microsoft.Storage` service endpoint, and the account must list that subnet in its `virtualNetworkRules`. Without both halves, the API gets `403 AuthorizationFailure` from storage even though the Private Endpoint resolves correctly.
- Hardened-by-default network: The App Services run with `WEBSITE_VNET_ROUTE_ALL=1` and `WEBSITE_DNS_SERVER=168.63.129.16` so all storage traffic resolves through the `privatelink.dfs.<storage-suffix>` Private DNS zone to the Private Endpoint NIC IP.
- Secret-less CI/CD: GitHub Actions authenticates to Azure via OIDC federated credentials. The repo secrets (`AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, `AZURE_SUBSCRIPTION_ID`) are public IDs only; there is no client secret to rotate.
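The credential-pinning decision can be reduced to one branch. A stdlib-only sketch (the real `AzureStorageConfig.java` builds actual azure-identity credential objects; here the method just reports which one would be built):

```java
import java.util.Map;

public class CredentialSelector {
    // Sketch of the AzureStorageConfig decision: inside App Service the platform
    // injects IDENTITY_ENDPOINT, so pin ManagedIdentityCredential directly rather
    // than walking the DefaultAzureCredential provider chain.
    static String chooseCredential(Map<String, String> env) {
        String identityEndpoint = env.get("IDENTITY_ENDPOINT");
        if (identityEndpoint != null && !identityEndpoint.isBlank()) {
            return "ManagedIdentityCredential";  // App Service, system-assigned MI
        }
        return "DefaultAzureCredential";         // local dev: az login, IDE, env vars
    }

    public static void main(String[] args) {
        // Hypothetical App Service environment (endpoint value is illustrative).
        System.out.println(chooseCredential(
                Map.of("IDENTITY_ENDPOINT", "http://169.254.129.2:8081/msi/token")));
    }
}
```

In production code the two branches would call `new ManagedIdentityCredentialBuilder().build()` and `new DefaultAzureCredentialBuilder().build()` respectively.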
These are the rough edges this codebase hit on the way to a working production deploy. Each is worth knowing before they bite you in your own environment.
The first authenticated GET /api/evidence/{id}/download after a fresh deploy can return 502 Bad Gateway with no entry in the application log. Subsequent downloads complete in well under a second.
Why: The first call into dataLakeServiceClient.read(...) triggers AppServiceManagedIdentitySource to fetch a Managed Identity token from the App Service IMDS endpoint. In the trace below, that token acquisition took ~18 seconds (caching was cold, the token endpoint was warming up, and MSAL4J does its own discovery handshake). App Service's front-end gateway has a hard 230-second request timeout, but the Linux container appears to surface a 502 well before that when the Java thread is blocked on a downstream call during the first request after start-up.
```text
2026-05-01T04:42:38.992Z INFO ... AppServiceManagedIdentitySource : ... Creating App Service managed identity.
2026-05-01T04:42:56.765Z INFO ... HttpHelper : Sent (null) Correlation Id ...
2026-05-01T04:42:56.794Z INFO ... AbstractManagedIdentitySource : Successful response received.
2026-05-01T04:42:56.961Z INFO ... ManagedIdentityCredential : Azure Identity => Managed Identity environment: Managed Identity
```
Workaround today: ignore the first failed request — every subsequent download works while the JVM is alive, because MSAL4J caches the token. Permanent fix (not yet applied): warm the credential at startup with a @PostConstruct or ApplicationRunner that calls dataLakeServiceClient.getProperties() once, so the token cache is populated before the first user request arrives.
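The proposed warm-up is a generic memoize-at-startup pattern. A hedged stdlib sketch follows — the real fix would be a Spring `@PostConstruct` or `ApplicationRunner` calling `dataLakeServiceClient.getProperties()` once, and MSAL4J (not this class) owns the actual token cache; the slow `Supplier` below merely stands in for the first token acquisition:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

public class WarmCache<T> {
    private final Supplier<T> loader;
    private final AtomicReference<T> cached = new AtomicReference<>();

    WarmCache(Supplier<T> loader) { this.loader = loader; }

    // Kick off the expensive acquisition in the background at startup,
    // so the first user request usually finds the cache already populated.
    void warmAsync() {
        CompletableFuture.runAsync(() -> cached.compareAndSet(null, loader.get()));
    }

    T get() {
        T value = cached.get();
        if (value == null) {                  // cold path: pay the cost inline
            cached.compareAndSet(null, loader.get());
        }
        return cached.get();
    }

    public static void main(String[] args) throws Exception {
        WarmCache<String> token = new WarmCache<>(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) { }
            return "mi-token";                // stands in for the ~15s MSAL4J acquisition
        });
        token.warmAsync();                    // at application startup
        Thread.sleep(300);                    // app finishes booting meanwhile
        System.out.println(token.get());      // first "user request" no longer pays the cost
    }
}
```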
When the 502 above hits, the browser DevTools console shows a CORS error on the same request. That is misleading. App Service's built-in 502 response page is generated by the front-end gateway before it ever reaches the application, and the gateway does not reflect any of the application's CORS headers. The browser then sees a response with no Access-Control-Allow-Origin and reports the only thing it knows how to: a CORS violation.
Diagnostic that confirmed it: a synthetic OPTIONS preflight from the SPA origin returned Access-Control-Allow-Origin: https://app-evidence-spa-workshop.azurewebsites.net with status 200, while a GET with a fake bearer token returned 401 (which is the correct Spring Security response). So the application's CORS filter and JWT filter were both healthy — the real bug was the 502 on the warm path. Lesson: when you see a CORS error in the browser, sanity-check the actual HTTP status with curl -v from outside the browser before going down the CORS rabbit hole.
An earlier iteration of the Bicep set publicNetworkAccess: 'Disabled' on the storage account — intuitively the most secure setting — and the API immediately started returning 403 AuthorizationFailure from storage. The Private Endpoint was in place and DNS was resolving to the PE NIC IP, but the App Service VNet integration path goes through the platform's regional NAT before hitting storage, and that path is governed by virtualNetworkRules, not by the Private Endpoint. With the public flag flipped to Disabled, the Storage RP rejects every request that does not arrive over a Private Endpoint NIC, including the legitimate VNet-integrated ones. Keeping it Enabled while leaving defaultAction=Deny is the correct posture: nothing public can reach the account, but the explicit virtualNetworkRules for snet-app are honoured.
scripts/fetch-ontario-design-system.ps1 is invoked from the GitHub Actions workflow under pwsh on ubuntu-latest, where $env:TEMP is undefined. The original Windows-only Join-Path $env:TEMP <name> returned \<name> and the script failed with a path error. The cross-platform fix is Join-Path ([System.IO.Path]::GetTempPath()) <name>, which returns the right thing on Windows, Linux, and macOS.
ADLS Gen2 (HNS-enabled) accounts expose two endpoints: *.blob.core.windows.net and *.dfs.core.windows.net. The Private Endpoint in this stack is bound to the dfs sub-resource and the Private DNS zone is privatelink.dfs.core.windows.net. The API therefore must use the DataLake SDK and target the .dfs endpoint — using the older BlobServiceClient against the same account would resolve to the public .blob endpoint, bypass the Private Endpoint entirely, and either be denied by the network ACL or take the slow public path.
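The endpoint split is easy to see when written out (the account name below is hypothetical; the SDKs derive these URLs from the endpoint you configure):

```java
public class StorageEndpoints {
    // HNS-enabled accounts expose both endpoints, but only the dfs one sits behind
    // the Private Endpoint + privatelink.dfs.core.windows.net zone in this stack.
    static String dfsEndpoint(String account)  { return "https://" + account + ".dfs.core.windows.net"; }
    static String blobEndpoint(String account) { return "https://" + account + ".blob.core.windows.net"; }

    public static void main(String[] args) {
        String account = "stevpworkshop12345678";  // hypothetical account name
        System.out.println(dfsEndpoint(account));  // DataLakeServiceClient target: resolves to the PE NIC
        System.out.println(blobEndpoint(account)); // BlobServiceClient target: public path, bypasses the PE
    }
}
```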
The workshop already ships with a hardened-by-default network and identity posture: ADLS Gen2 with shared keys disabled, public network access still Enabled but defaultAction=Deny with a VirtualNetworkRule for the App Service subnet, App Service Regional VNet integration, a Private Endpoint on the storage dfs sub-resource, and Managed Identity + RBAC end-to-end. For the optional next-step controls (Front Door + WAF, App Service Private Endpoints, customer-managed keys, multi-region failover), see the Production Hardening Guide.
Important
This is a secure workshop architecture and a strong production baseline, but it is not the maximum-security production pattern yet. The API App Service is still publicly reachable and relies on Entra ID JWT validation, scopes, roles, and CORS for request-level protection. A higher-security production design should also restrict API ingress with App Service access restrictions, an API private endpoint, Azure Front Door Premium or Application Gateway with WAF, or API Management depending on the deployment model. Keep the storage deployer-IP allow-list temporary and narrow; the steady-state storage path should be Managed Identity over the App Service subnet rule and the ADLS Gen2 dfs Private Endpoint.
This project is licensed under the MIT License. See LICENSE for details.