Let’s now assume we want to dockerise a new ASP .NET Core 6.0 web app project that we have locally.
Now, when we build and run the project, we should be able to view it on localhost as shown below.
The default homepage of a new ASP .NET Core web app.
Before adding the .NET app to the Docker image, first it must be published with the following command.
dotnet publish --configuration Release
Build, run, and publish our ASP .NET Core web app.
Create the Dockerfile
Now, we will create a file named Dockerfile in the directory containing the .csproj and open it in VS Code. The content of the Dockerfile is as follows.
FROM mcr.microsoft.com/dotnet/aspnet:6.0
COPY bin/Release/net6.0/publish/ App/
WORKDIR /App
ENTRYPOINT ["dotnet", "Lunar.Dashboard.dll"]
A Dockerfile must begin with a FROM instruction. It specifies the parent image from which we are building. Here, we are using mcr.microsoft.com/dotnet/aspnet:6.0, an image that contains the ASP.NET Core 6.0 and .NET 6.0 runtimes for running ASP.NET Core web apps.
The COPY command tells Docker to copy the publish folder from our computer to the App folder in the container. Then the current directory inside of the container is changed to App with the WORKDIR command.
Finally, the ENTRYPOINT command tells Docker to configure the container to run as an executable.
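As a side note, a multi-stage build is a common alternative that runs dotnet publish inside the image itself, so the manual publish step on our machine is no longer needed. The sketch below is an assumption on my part rather than the approach used in this article; the assembly name Lunar.Dashboard is taken from the ENTRYPOINT above.

```dockerfile
# Build stage: restore and publish inside the image using the SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish --configuration Release --output /app/publish

# Runtime stage: keep only the published output on the slimmer ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /App
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Lunar.Dashboard.dll"]
```

The resulting image is smaller because the SDK and intermediate build output stay in the discarded build stage.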
Docker Build
Now that we have the Dockerfile, we can build an image from it.
In order to perform docker build, we first need to navigate to our project root folder and issue the docker build command, as shown below.
docker build -t lunar-dashboard -f Dockerfile .
We assign a tag lunar-dashboard to the image name using -t. We then specify the name of the Dockerfile using -f. The . in the command tells Docker to use the current folder, i.e. our project root folder, as the context.
Once the build is successful, we can locate the newly created image with the docker images command, as highlighted in the screenshot below.
The default docker images command will show all top level images.
Create a Container
Now that we have an image lunar-dashboard that contains our ASP .NET Core web app, we can create a container with the docker run command.
docker run -d -p 8080:80 --name lunar-dashboard-app lunar-dashboard
When we start a container, we must decide whether it should run in detached mode, i.e. background mode, or in foreground mode. By default, the container runs in the foreground.
In foreground mode, the console that we use to execute docker run is attached to standard input, output, and error. This is not what we want: after we start the container, we still want to use the console for other commands. Hence, we use the -d option to start the container in detached mode.
We then publish a port of the container to the host with -p 8080:80, where 8080 is the host port and 80 is the container port.
Finally, we name our container lunar-dashboard-app with the --name option. If we do not assign a name with the --name option, the daemon will generate a random string name for us. Auto-generated names are usually quite hard to remember, so it’s better to give the container a meaningful name so that we can easily refer to it later.
After we run the docker run command, we should be able to find our newly created container lunar-dashboard with the docker ps command, as shown in the following screenshot. The -a option shows all containers, because by default docker ps shows only running containers.
Our container lunar-dashboard is now running.
Now, if we visit the localhost at port 8080, we shall be able to see our web app running smoothly.
Hence, I have no choice but to use WSL, which runs a Linux kernel inside of a lightweight utility VM. WSL provides a mechanism for running Docker (with Linux containers) on the Windows machine.
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
Recently, I was asked to cut down the cost of hosting an ASP .NET Core website on Azure. The website was originally hosted on an Azure Web App, so there is a fixed cost that we need to pay per month. Hence, the first solution that came to my mind was to move it from Web App to Function, because the website is a static website and it is not expecting a large number of visitors at any given point in time.
So why did I choose Azure Functions? Unlike Web Apps, Functions provide the Consumption Plan, where instances of the Functions host are dynamically added and removed based on the number of incoming events. This serverless plan scales automatically, and we are billed only when the Functions are running. Hence, when we switch to serving the static website from an Azure Function on the Consumption Plan, we will be able to save significantly.
Serve Static Content with Azure Function
How do we serve a static website with Azure Functions?
There are many online tutorials about this, but none of the ones I found are based on the latest Azure Portal GUI in 2020. So hopefully my article here will help people out there who are using the latest Azure Portal.
The following screenshot shows the setup of my Azure Function.
[Image Caption: Azure Function with .NET Core 3.1.]
After that, we will create an HTTP triggered function in the Azure Function app.
Then for the Function itself, we can add the following code in run.csx.
using Microsoft.AspNetCore.Mvc;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    string pathValue = req.Path.Value;

    ...

    return new FileStreamResult(
        File.OpenRead(@"d:\home\site\wwwroot\web-itself\website\index.html"),
        "text/html; charset=UTF-8");
}
The pathValue helps the Function to be able to serve different web pages based on different value in the URL path. For example, /page1 will load page1.html and /page2 will load page2.html.
If the Function you build is only to serve a single HTML file, then you can just directly return the FileStreamResult without relying on the pathValue.
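To make the elided part of run.csx concrete, here is one possible sketch of how pathValue could map URL paths to HTML files. The index fallback, the NotFoundResult, and the file naming convention are my assumptions, not the original code; in C# script the System.IO and logging namespaces are auto-imported by the Functions host.

```csharp
using Microsoft.AspNetCore.Mvc;

public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    // With the route template "web-itself/{page?}", req.Path looks like
    // "/api/web-itself/page1", so the last segment names the page.
    string pathValue = req.Path.Value;
    string page = pathValue.Substring(pathValue.LastIndexOf('/') + 1);

    // Fall back to the homepage when no page segment is supplied
    if (string.IsNullOrEmpty(page) || page == "web-itself")
    {
        page = "index";
    }

    string file = $@"d:\home\site\wwwroot\web-itself\website\{page}.html";
    if (!File.Exists(file))
    {
        return new NotFoundResult();
    }

    return new FileStreamResult(
        File.OpenRead(file),
        "text/html; charset=UTF-8");
}
```

With this sketch, /page1 serves page1.html and /page2 serves page2.html, matching the behaviour described above.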
Configure the Route Template
To have the pathValue working as expected, we first need to configure the route template of the Function. To do so, we can head to the Integration tab of the Function, as shown in the screenshot below.
[Image Caption: Editing the Route Template.]
For the Route Template, we set it to be “web-itself/{page?}” because web-itself is the name of our Function in this case. The question mark in the “{page?}” means that the page is an optional argument in the URL.
So why do we have to include the Function name “web-itself” in the Route Template? According to the documentation, the value should be a relative path. Since, by default, the Function URL is “xxx.azurewebsites.net/api/web-itself”, the relative path needs to start from “web-itself”.
Also, since this is going to be a URL of our website, we can change the authorisation level to “Anonymous” and set GET as the only accepted HTTP method.
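For reference, the same settings can be expressed directly in the Function’s function.json. The sketch below follows the standard HTTP trigger binding schema, assuming defaults for everything we did not change in the portal.

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "methods": [ "get" ],
      "route": "web-itself/{page?}"
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```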
Upload the Static Files
So where do we upload the static files to? As the code above shows, the files actually sit in d:\home\site\wwwroot. How do we upload the static files to this directory?
We need to head to the Kudu console of the Azure Function and click on the CMD menu item, as shown below. By the way, the Kudu console can be found under Development Tools > Advanced Tools > Go of the Azure Function on the Portal.
[Image Caption: Kudu console of the Azure Function.]
We then navigate to the folder which keeps the run.csx of the Function (which is web-itself in my case here). Then we can create a folder called website, for example, to host our static content. All we need to do after this is upload the HTML files to this website folder.
Handle JavaScript, CSS, and Other Static Files
How about other static files such as JavaScript, CSS, and images?
Yes, we can serve these files the same way. However, that might be too troublesome, because each of them has a different MIME type that we need to specify.
Another way is to store all these files on Azure Storage, so that the links in the HTML are absolute URLs to the files on Azure Storage.
Finally, we can enable Azure CDN for our Azure Function, so that if we later need to move back to hosting our web pages on an Azure Web App, or even Azure Storage, we don’t have to change our CNAME again.
[Image Caption: Both Azure CDN and Azure Front Door are available for Azure Functions.]
In our previous article, we successfully imported realistic but not real patient data into the database of our Azure API for FHIR. Hence, the next step we would like to go through in this article is how to build a user-friendly dashboard to display that healthcare data.
For frontend, there are currently many open-source web frontend frameworks for us to choose from. For example, in our earlier project of doing COVID-19 dashboard, we used the Material Design from Google.
In this project, in order to make our healthcare dashboard consistent with other Microsoft web apps, we will be following the Microsoft design system.
In web app development projects, we always come across situations where we need to add a button, a dropdown, or a checkbox to our web apps. If we are working in a large team, then issues like UI consistency across different web apps, which might be built using different frontend frameworks, are problems that we need to solve. What excites me about the FAST Framework is that it solves this problem with Web Components, which can be used with any frontend framework.
🎨 Rob Eisenberg’s (left most) first appearance on Jon Galloway’s (right most) .NET Community Standup with Daniel Roth (top most) and Steve Sanderson from the Blazor team. (Source: YouTube) 🎨
Web Components integrate well with major frontend frameworks, such as Angular, Blazor, and Vue. We can easily drop Web Components into ASP .NET web projects too, and we are going to do that in our healthcare dashboard project.
In the FAST Framework, the Web Component that corresponds to the design system provider is called the FASTDesignSystemProvider. Its design system properties can easily be overridden by setting the value of the corresponding property on the provider. For example, by simply changing the background of the FASTDesignSystemProvider from light to dark, it will automatically switch from light mode to dark mode, and the corresponding colour scheme will be applied.
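As an illustrative sketch of this, the markup below switches the colour scheme by overriding the provider’s background colour. The element and attribute names follow the early FAST Components releases and may have changed since, so treat this as an assumption rather than the exact API.

```html
<!-- Default (light) design system values -->
<fast-design-system-provider use-defaults>
  <fast-button appearance="accent">Light mode</fast-button>
</fast-design-system-provider>

<!-- Overriding background-color flips the provider to a dark colour scheme -->
<fast-design-system-provider use-defaults background-color="#181818">
  <fast-button appearance="accent">Dark mode</fast-button>
</fast-design-system-provider>
```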
🎨 FAST Framework allows our web apps to easily switch between light and dark modes. 🎨
UI Fabric and Fluent UI Core
In August 2015, Microsoft released the GA of Office UI Fabric on GitHub. The goal of Office UI Fabric is to provide frontend developers with a mobile-first, responsive frontend framework, similar to Bootstrap, for creating web experiences.
The Office UI Fabric speaks the Office Design Language. If you use any Office-powered web app, such as Outlook or OneDrive, the Office web layout should be very familiar to you. So by using Office UI Fabric, we can easily give our web apps an Office-like user interface and user experience.
🎨 OneDrive web with the Office design. 🎨
In order to deliver a more coherent and productive experience, Microsoft later released the Fluent Design System, another cross-platform design system. Also, to move towards the goal of a simplified developer ecosystem, Office UI Fabric evolved into Fluent UI in March 2020.
Fluent UI can be used in both web and mobile apps. For the web platform, it comes with two options, i.e. Fluent UI React and Fabric Core.
Fluent UI React is meant for React applications, while Fabric Core is provided primarily for non-React web apps or static web pages. Since our healthcare dashboard will be built on top of ASP .NET Core 3.1, Fabric Core is sufficient for our project.
However, because some components, such as ms-Navbar and ms-Table, are still only available in Office UI Fabric but not in Fabric Core, our healthcare dashboard will use both CSS libraries.
Azure CDN
A CDN (Content Delivery Network) is a distributed network of servers that work together to deliver Internet content quickly. Normally, the servers are distributed across the globe so that content is served to users based on their geographic location, and users around the world can view the same high-quality content without slow loading times. Hence, it is normally recommended to use a CDN to serve all our static files.
Another reason not to host static files on our web servers is that we would like to avoid extra HTTP requests hitting our web servers just to load static files such as images and CSS.
To use Azure CDN, firstly, we need to store all the necessary static files in a container of our Storage account. We will reuse the same Storage account that we are using to store the realistic but not real patient data generated by Synthea(TM).
Secondly, we proceed to create Azure CDN.
Thirdly, we add an endpoint to the Azure CDN, as shown in the following screenshot, to point to the container that stores all our static files.
🎨 The origin path of the CDN endpoint is pointing to the container storing the static files. 🎨
Finally, we can access the static files with the Azure CDN endpoint. For example, to get the Office Fabric UI css, we will use the following URL.
Similar to the Azure Function we deployed in the previous article, we will send GET requests to different endpoints of the Azure API for FHIR to request different resources. However, before we are able to do that, we need to get an access token from Azure Active Directory first. The steps to do so have been summarised in the same previous article as well.
Since we need application settings such as Authority, Audience, Client ID, and Client Secret to retrieve the access token, we will store them in appsettings.Development.json for local debugging purposes. When we later deploy the dashboard web app to an Azure Web App, we will store the settings in its Application Settings instead.
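A sketch of what the relevant section of appsettings.Development.json could look like; the section name matches the binding set up below, and all values are placeholders, not real settings.

```json
{
  "AzureApiForFhirSetting": {
    "Authority": "https://login.microsoftonline.com/<tenant-id>",
    "Audience": "<url-of-azure-api-for-fhir>",
    "ClientId": "<client-id>",
    "ClientSecret": "<client-secret>"
  }
}
```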
We will then create a class which will be used to bind to the AzureApiForFhirSetting section.
public class AzureApiForFhirSetting
{
    public string Authority { get; set; }
    public string Audience { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}
Finally, to set up the binding, we will need to add the following lines in Startup.cs.
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddOptions();
    services.Configure<AzureApiForFhirSetting>(Configuration.GetSection("AzureApiForFhirSetting"));

    ...

    services.AddControllersWithViews();
}
After that, we will apply dependency injection of IOptions to the classes that need to use the configuration, as shown in the following example.
public class AzureApiForFhirService : IAzureApiForFhirService
{
    private readonly AzureApiForFhirSetting _azureApiForFhirSetting;

    public AzureApiForFhirService(IOptions<AzureApiForFhirSetting> azureApiForFhirSettingAccessor)
    {
        _azureApiForFhirSetting = azureApiForFhirSettingAccessor.Value;
    }

    ...
}
Once we have the access token, we will be able to access the Azure API for FHIR. Let’s look at some of the endpoints the API offers.
Azure API for FHIR: The Patient Endpoint
Retrieving all the patients in the database is very easy. We simply need to send a GET request to the /Patient endpoint. By default, the API returns at most 10 records. To retrieve the next 10 records, we need to send another GET request to the URL returned by the API, as highlighted in the red box in the following example screenshot.
🎨 The “self” URL is the API link for the current 10 records. 🎨
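As a sketch of how our dashboard code could consume this paging, the loop below keeps following the bundle’s “next” link until no further page is returned. The entry, link, relation, and url fields follow the FHIR Bundle format; the method name and the use of Json.NET here are my assumptions.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public async Task<List<JToken>> GetAllPatientsAsync(string fhirServerUrl, string accessToken)
{
    var patients = new List<JToken>();
    var client = new HttpClient();
    client.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Bearer", accessToken);

    string nextUrl = $"{fhirServerUrl}/Patient";
    while (!string.IsNullOrEmpty(nextUrl))
    {
        var bundle = JObject.Parse(await client.GetStringAsync(nextUrl));

        // Each entry in the bundle wraps one Patient resource
        foreach (var entry in bundle["entry"] as JArray ?? new JArray())
        {
            patients.Add(entry["resource"]);
        }

        // The "next" link, when present, points at the following page of 10 records
        nextUrl = null;
        foreach (var link in bundle["link"] as JArray ?? new JArray())
        {
            if ((string)link["relation"] == "next")
            {
                nextUrl = (string)link["url"];
                break;
            }
        }
    }

    return patients;
}
```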
Once we have all the patients, we then can list them out in a nice table designed with Office UI Fabric, as shown in the following screenshot.
🎨 Listing all the patients in the Azure API for FHIR database. 🎨
When we click on the “View Profile” link of a record, we can get more details about the selected patient. To retrieve the info of a particular patient, we need to pass the ID to the /Patient endpoint, as highlighted in a red box in the following screenshot.
🎨 Getting the info of a patient with his/her ID. 🎨
Where can we get the patient’s ID? The ID is returned, for example, when we get the list of all patients.
So after we click on “View Profile”, we will be able to reach a Patient page which shows more details about the selected patient, as shown in the following screenshot.
🎨 We can also get the address of a patient, together with its geolocation data. 🎨
Azure API for FHIR: The Other Endpoints
There are many resources available in the Azure API for FHIR. Patient is one of them. Besides Patient, we also have Condition, Encounter, Observation, and so on.
Getting entries from the endpoints corresponding to the resources listed above is quite straightforward. However, if we directly send a GET request to, let’s say, /Condition, what we will get is all the Condition records of all the patients in the database.
In order to filter by patient, we need to add a query string called patient to the endpoint URL, for example /Condition?patient=, and then append the patient ID to the URL.
Then we will be able to retrieve the resources of that particular patient, as shown in the following screenshot.
🎨 It’s interesting to see COVID-19 test record can already be generated in Synthea. 🎨
So far I have only tried the four endpoints. The /Observation endpoint is the trickiest, because the value it returns is most of the time a single number for a measurement. However, it can also return two numbers, or text, for some other measurements. Hence, I have to do some if-else checks on the values returned from the /Observation endpoint.
🎨 The SARS-CoV-2 testing result is returned as text instead of a number. 🎨
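The if-else checks mentioned above could be sketched as follows. The field names follow the FHIR R4 Observation resource (valueQuantity for a single number, valueString for text, component for multiple numbers such as blood pressure); the helper itself and its display format are hypothetical.

```csharp
using System.Collections.Generic;
using Newtonsoft.Json.Linq;

public static string FormatObservationValue(JToken observation)
{
    // Single numeric measurement, e.g. body weight
    if (observation["valueQuantity"] != null)
    {
        return $"{observation["valueQuantity"]["value"]} {observation["valueQuantity"]["unit"]}";
    }

    // Textual result, e.g. a SARS-CoV-2 test outcome
    if (observation["valueString"] != null)
    {
        return (string)observation["valueString"];
    }

    // Multiple numbers, e.g. systolic and diastolic blood pressure components
    if (observation["component"] != null)
    {
        var parts = new List<string>();
        foreach (var component in observation["component"])
        {
            var quantity = component["valueQuantity"];
            if (quantity != null)
            {
                parts.Add($"{quantity["value"]} {quantity["unit"]}");
            }
        }
        return string.Join(", ", parts);
    }

    return "(no value)";
}
```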
Source Code of the Dashboard
That’s all for the healthcare dashboard that I have built so far. There are still many exciting features we can expect to see after we have integrated with the Azure API for FHIR.
In the previous article, we talked about how to generate realistic but not real patient data using Synthea(TM) and then also how to store them securely in Azure Storage.
Setup Azure API for FHIR®
Today, we will continue the journey. The first step we need to do is to setup the Azure API for FHIR®.
🎨 The architecture of the system we are going to setup in this article. 🎨
The Azure API for FHIR® is a managed, standards-based healthcare data platform available on Azure. It enables organisations to bring their clinical health data into the cloud based on the interoperable data standard FHIR®. The reason why we choose to use it is that security and privacy features are embedded into the service. As customers, we own and control the patient data, and we know how it is stored and accessed. Hence, it’s a PaaS that enables us to build healthcare data solutions easily.
When we are setting up the Azure API for FHIR®, we need to specify the version of FHIR® we are going to use. Currently there are only four milestone releases of FHIR®. The latest version, R4, was released in December 2018. On Azure, we can only choose either R4 or STU3 (which is the third release). We will go for R4.
🎨 Default values of authentication and database settings when we’re creating the API. 🎨
For the Authentication of the API service, we will be using Azure Access Control (IAM), which is the default option. Hence, we will use the default Authority and Audience values.
When we are setting up this API service, we also need to specify the throughput of the database which will be used to store the imported patient data later.
After we click on the button to create the API service, it will take about 5 minutes to successfully deploy it on Azure.
The following screenshot shows how we register the client application with a redirect URI pointing to https://www.getpostman.com/oauth2/callback which will help us to test the connectivity via Postman later.
🎨 Registering a client application. 🎨
Once the client application is created, we need to proceed to create a client secret, as shown in the following screenshot, so that we can later use it to request a token.
🎨 Creating a client secret which will expire one year later. 🎨
Then we have to allow this client application to access our Azure API for FHIR®. There are two things we need to do.
Firstly, we need to grant the client application a permission called user_impersonation from the Azure Healthcare APIs, as shown in the screenshot below.
🎨 Granting API permissions. 🎨
Secondly, we need to head back to our Azure API for FHIR® to enable this client application to access it, as shown in the following screenshot.
🎨 Adding the client application to have the role FHIR® Data Writer. 🎨
The reason we choose only the “FHIR Data Writer” role is that this role enables both read and write access to the API. Once the role is successfully added, we shall see something similar to the screenshot below.
🎨 The client application can now read and write FHIR® data. 🎨
Test the API with Postman
To make sure our Azure API for FHIR® is running well, we can visit its metadata link without any authentication. If it is running smoothly, we shall see something as shown in the following screenshot.
🎨 Yay, our Azure API for FHIR® is running! 🎨
To access the patient data, we need to authenticate ourselves. In order to do so, we first need to get an access token for the client application from Azure Active Directory. We do so by making a POST request to the following URL: https://login.microsoftonline.com/<tenant-id>/oauth2/token.
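For reference, the body of that POST request is sent as x-www-form-urlencoded form fields. A sketch with placeholder values, assuming the client credentials flow of this v1.0 token endpoint:

```
grant_type=client_credentials
client_id=<client-id>
client_secret=<client-secret>
resource=<url-of-azure-api-for-fhir>
```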
As shown in the following screenshot, the Tenant ID (and also Client ID) can be found at the Overview page of the client application. The resource is basically the URL of the Azure API for FHIR®.
🎨 Successfully retrieved the access_token! 🎨
Once we have the access token, we then can access the Patient endpoint, as shown in the following screenshot.
🎨 Michael Hansen on Azure Friday with Scott Hanselman to talk about Azure API for FHIR®. (Source: YouTube) 🎨
Import Data from Azure Storage
Now we have the realistic but not real patient data in Azure Storage, and we have the Azure API for FHIR® with a SQL database. So the next step is to pump the data into the SQL database so that other clients can consume it through the Azure API for FHIR®. In order to do so, we will need a data importer.
Firstly, we will create an Azure Function which will do the data import. There is an official sample on how to write this Function. I didn’t really follow the deployment steps given in the README of the project. Instead, I created a new Azure Function project in Visual Studio and published it to Azure. Interestingly, if I used VS Code, the deployment would fail.
🎨 I could not publish Azure Function from local to the cloud via VS Code. 🎨
In Visual Studio, we will create a C# function which runs whenever new patient data is uploaded to the container. The same function will then remove the patient data from the Azure Storage once the data is fully imported.
🎨 Publish successful on Visual Studio 2019. 🎨
When we are creating the new Azure Function project in Visual Studio, for convenience later, it’s better to reuse the Azure Storage account that we use for storing the realistic but not real patient data as our Azure Function app storage as well, as shown in the following screenshot. Thus, the Connection Setting Name will be AzureWebJobsStorage and the Path will point to the container storing our patient data (I recreated the syntheadata container used in the previous article as fhirimport in this article).
🎨 Creating a new Azure Functions application. 🎨
After the deployment is successful, we need to add the following application settings to the Azure Function.
Audience: <found in Authentication of Azure API for FHIR®>
Authority: <found in Authentication of Azure API for FHIR®>
ClientId: <found in the Overview of the Client App registered>
ClientSecret: <found in the Certificates & secrets of the Client App>
FhirServerUrl: <found in the Overview of Azure API for FHIR®>
🎨 We need to add these five application settings correctly. 🎨
After that, in order to help us diagnose problems happening in each data import, it’s recommended to integrate Application Insights into our Azure Function. We can then use ILogger to log information, warnings, or errors in our Azure Function, for example:
log.LogWarning($"Request failed for {resource_type} (id: {id}) with {result.Result.StatusCode}.");
Then with Application Insights, we can easily get the log information from the Azure Function in its Monitor section.
🎨 Invocation details of the Azure Function. 🎨
From the official sample code, I made a small change to the waiting time between each retry of the request to the Azure API for FHIR®, as shown in the following screenshot.
In FhirBundleBlobTrigger.cs, I increased the waiting time by an extra 30 seconds, because the original waiting time is so short that the data import will sometimes fail.
In the following screenshot, the Observation data could only be uploaded after 5 attempts. In the meantime, our request rate had exceeded the maximum API request rate and had thus been throttled. So we cannot call the Azure API for FHIR® too frequently.
🎨 Five attempts of the same request, with throttling happening. 🎨
Now, when we make a GET request to the Patient endpoint of Azure API for FHIR® with a certain ID, we will be able to get the corresponding patient data back on Postman.
🎨 Successfully retrieved the patient data from the API service. 🎨
Yup, so at this stage, we have successfully imported data generated by Synthea(TM) to the Azure API for FHIR® database.