Headless Raspberry Pi 3 Model B SSH Setup on macOS

Image Source: Circuit Board

Two years ago, when I visited the Microsoft Singapore office, their engineers Chun Siong and Say Bin gave me Satya Nadella’s book, Hit Refresh, as well as a Raspberry Pi 3 as gifts.

In fact, I also got another Raspberry Pi from my brother a few years ago, and it was running Windows 10 IoT. It is now in my office together with a monitor, keyboard, and mouse. Due to the COVID-19 lockdown, I am not allowed to go back to the office yet. Hence, I think it’s time to see how we can set up this new Raspberry Pi for headless SSH access from my MacBook.

The method I use here is the headless approach, which is suitable if you don’t have access to a GUI to set up the wireless LAN on the Raspberry Pi. Hence, you don’t need to connect a monitor, external keyboard, or mouse to the Raspberry Pi at all.

[Image Caption: Enjoying tea break with Chun Siong (Left 1) and Say Bin (Right 2) in October 2018.]

Step 0: Things to Purchase

Besides the Raspberry Pi, we also need to get some other things ready before we can proceed to set up the device. Most of the items here I bought from Challenger.

Item 1: Toshiba microSDHC UHS-I Card 32GB with SD Adapter

The Raspberry Pi uses a microSD card as its hard drive. We can use either a Class 10 microSD card or a UHS (Ultra High Speed) card; here we are using UHS Class 1. The reason why we do not choose anything larger than 32GB is that, according to the SD specifications, any SD card larger than 32GB is an SDXC card and has to be formatted with the exFAT filesystem, which is not supported by the Raspberry Pi bootloader. There are of course solutions for this if more SD space is needed for your use case; please read more about it in the Raspberry Pi documentation. The microSD card we choose here is a 32GB SDHC card using the FAT32 file system, which is supported by the Raspberry Pi, so we are safe.

Item 2: USB Power Cable

All models up to the Raspberry Pi 3 require a micro USB power connector (the Raspberry Pi 4 uses a USB-C connector). If your Raspberry Pi doesn’t come with a power supply, you can get one from the official website, which sells the official universal micro USB power supply recommended for the Raspberry Pi 1, 2, and 3.

Item 3: Cat6 Ethernet Cable (Optional)

It’s possible to set up the Raspberry Pi over WiFi because the Model 3 B comes with WiFi support. However, to play safe, I also prepared an Ethernet cable. In the market now, we can easily find Cat6 Ethernet cables, which are suitable for transferring heavy files and communicating over a local network.

Item 4: USB 2.0 Ethernet Adapter (Optional)

Again, this item is needed only if you plan to set up the Raspberry Pi through an Ethernet cable and you are using a machine like a MacBook, which doesn’t have an Ethernet port.

Step 1: Flash Raspbian Image to SD Card

With the SD card ready, we can now proceed to burn the OS image to it. Raspberry Pi OS (previously called Raspbian) is the official operating system for all models of the Raspberry Pi. There are three types of Raspberry Pi OS to choose from. Since I don’t need the desktop version, I go ahead with Raspberry Pi OS Lite, which is also the smallest in size. Feel free to choose the type that suits your use case best.

[Image Caption: There are three types of Raspberry Pi OS for us to choose.]

Take note that the Raspberry Pi OS here is already based on Debian Buster, the development codename for Debian 10, which was released in July 2019.

After downloading the zip file to the MacBook, we need to burn the image to the SD card.

Since the microSD card we bought above comes with an adapter, we can easily slot it into the MacBook (which has an SD card slot). To burn the OS image to the SD card, we can use Etcher for macOS.

[Image Caption: Using Etcher is pretty straightforward.]

The first step in Etcher is to select the Raspberry Pi OS zip file we downloaded earlier. Then we select the microSD card as the target. Finally we just need to click on the “Flash!” button.

After it’s done, we may need to pull the SD card out of our machine and then plug it back in in order to see the image that we flashed. On a MacBook, it appears as a volume called “boot” in the Finder.

Step 2: Enabling Raspberry Pi SSH

To enable SSH, we need to place an empty file called “ssh” (without any extension) in the root of the “boot” volume with the following command.

touch /Volumes/boot/ssh

This will later allow us to log in to the Raspberry Pi over SSH with the username pi and password raspberry.

Step 3: Adding WiFi Network Info

Again, we need to place a file, this time called “wpa_supplicant.conf”, in the root of the “boot” volume. Then we put the following in the file as its content.

network={
    ssid="NETWORK-NAME"
    psk="NETWORK-PASSWORD"
}

The WPA in the file name stands for WiFi Protected Access, a security standard and certification program developed by the Wi-Fi Alliance® to secure wireless computer networks.

The wpa_supplicant is a free software implementation of an IEEE 802.11i supplicant (to understand what a supplicant is, please read here). Using wpa_supplicant to configure the WiFi connection on the Raspberry Pi is straightforward.

Hence, in order to set up the WiFi connection on the Raspberry Pi, we now just need to specify our WiFi network name (SSID) and its password in the wpa_supplicant configuration.

Step 3a: Raspberry Pi OS Buster Onwards

[Image Caption: The new version of Raspberry Pi OS, Buster, was released in June 2019.]

However, with the latest Raspberry Pi OS release, Buster, we must also add a few more lines at the top of wpa_supplicant.conf, as shown below.

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="NETWORK-NAME"
    psk="NETWORK-PASSWORD"
}

The “ctrl_interface” setting is for the control interface. When it’s specified, wpa_supplicant allows external programs to manage it. For example, we can use wpa_cli, a WPA command line client, to interact with wpa_supplicant. Here, “/var/run/wpa_supplicant” is the recommended directory for the sockets, and by default wpa_cli will use it when trying to connect to wpa_supplicant.

In the Troubleshooting section near the end of this article, I will show how we can use wpa_cli to scan and list network names.

In addition, access control for the control interface can be configured by restricting the sockets to members of a group. This way, it is possible to run wpa_supplicant as root (since it needs to change the network configuration and open raw sockets) and still allow GUI/CLI components to run as non-root users. Here we allow only members of the “netdev” group, who can manage network interfaces through the network manager and the Wireless Interface Connection Daemon (WICD).

Finally, we have “update_config”. This option allows wpa_supplicant to overwrite the configuration file whenever the configuration is changed. This is required for wpa_cli to be able to store configuration changes permanently, so we set it to 1.
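Both the ssh file from Step 2 and this configuration can be prepared from the Mac while the boot volume is still mounted. Here is a sketch; the SSID and password are placeholders to replace with your own, and /Volumes/boot is the mount point from Step 1:

```shell
# Generate the file locally first; replace the ssid/psk placeholders.
cat > wpa_supplicant.conf <<'EOF'
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="NETWORK-NAME"
    psk="NETWORK-PASSWORD"
}
EOF
# Then copy it to the root of the mounted boot volume:
# cp wpa_supplicant.conf /Volumes/boot/
```

Keeping the copy as a separate step makes it easy to double-check the generated file before it goes onto the card.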

Step 3b: Raspberry Pi 3 B+ and 4 B

According to the Raspberry Pi documentation, if you are using the Raspberry Pi 3 B+ or Raspberry Pi 4 B, you will also need to set the country code so that the 5GHz networking can choose the correct frequency bands. With the country code, the file looks something like this.

country=SG
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1
network={
    ssid="NETWORK-NAME"
    psk="NETWORK-PASSWORD"
}

The first line sets the country in which the device is used. Its value is the ISO 3166 alpha-2 country code. Here I put SG because I am now in Singapore.

Take note that the Raspberry Pi 3 B doesn’t support 5GHz WiFi networks. It can only work with 2.4GHz WiFi networks.

In the place where I am staying, we have a dual-band router that uses two bands: 2.4GHz and 5GHz. So my Raspberry Pi can only connect to the 2.4GHz WiFi network.

By the way, 5GHz WiFi or 5G WiFi has nothing to do with the 5G mobile network. =)

[Image Caption: 5GHz WiFi, or normally referred to as “5G WiFi”, has nothing to do with the new 5G mobile network that is widely discussed now. The 5G in mobile network stands for “5th Generation” instead. (Image Source: CityNews News 1130)]

Step 4: Boot the Raspberry Pi

Now we can eject the SD card from our MacBook and plug it into the Raspberry Pi. After that, we power on the Raspberry Pi by plugging the USB power cable into it.

By default, the hostname of a new Raspberry Pi is raspberrypi. Hence, before we proceed, we remove all the keys belonging to raspberrypi.local with the following command, just so that we have a clean start.

ssh-keygen -R raspberrypi.local

Don’t worry if you get a host not found error for this command.

Of course, if you know the Raspberry Pi IP address, you can use its IP address as well instead of “raspberrypi.local”.

Next, let’s log in to the Raspberry Pi with the following command, using the username and password from Step 2 above.

ssh pi@raspberrypi.local

Step 5: Access Raspberry Pi Successfully

Now we should be able to access the Raspberry Pi (If not, please refer to the Troubleshooting section near the end of this article).

Now we have to do a few additional configuration steps for our Raspberry Pi using the following command.

sudo raspi-config
[Image Caption: Configuration Tool of our Raspberry Pi.]

Firstly, we need to change the password by selecting the “1 Change User Password” item as shown in the screenshot above. Don’t use the default password, for security reasons.

Secondly, we also need to change the hostname of the device, which is under the “2 Network Options” item. Don’t keep the default hostname, else we will end up with many Raspberry Pis using the same “raspberrypi” hostname.

Thirdly, under “4 Localisation Options”, please make sure the Locale (UTF-8 should be chosen by default), Time Zone, and WLAN Country (which should be the same as the one we set in Step 3b) are correct.

[Image Caption: Since I am in Singapore, I choose “Asia” here.]

Finally, if you are also on a new image, it is recommended to expand the file system under the “7 Advanced Options” item.

[Image Caption: Ensuring all of the SD card storage is available to the OS.]

Now we can proceed to reboot our Raspberry Pi with the following command.

sudo reboot

Step 6: Get Updates

After the Raspberry Pi has successfully rebooted, please log in to the device again to get updates with the following commands.

sudo apt-get update -y
sudo apt-get upgrade -y

After this, if you would like to switch off the Raspberry Pi, please shut it down properly with the following command, else it may corrupt the SD card.

sudo shutdown -h now
[Image Caption: It’s important that our Raspberry Pi gets a clean shutdown else there may be a variety of issues including corruption of our SD card and file system. (Image Source: BC Robotics)]

Troubleshooting WiFi Connection

When I first tried connecting to the Raspberry Pi over WiFi, the connection always failed. That’s why, in the end, I chose to connect the Raspberry Pi to my laptop through an Ethernet cable first so that I could do some troubleshooting.

Connecting with Ethernet Cable

To connect to the Raspberry Pi via an Ethernet cable, we just need to make sure that, in the Network settings of our MacBook, the Ethernet connection status is connected, as shown in the following screenshot.

[Image Caption: Connected!]

We also have to make sure that “Using DHCP” is selected for the “Configure IPv4” option. Finally, we also need to check that the “Location” at the top of the dialog box has “Automatic” selected for this Ethernet network configuration.

That’s all it takes to set up the Ethernet connection between the Raspberry Pi and our MacBook.

Troubleshooting the WiFi Connection

If you also have problems with the WiFi connection even after rebooting the Raspberry Pi, you can try the following methods once you can access the Raspberry Pi through the Ethernet cable.

  1. To get the network interfaces:
    ip link show
  2. To list network names (SSID):
    iwlist wlan0 scan | grep ESSID
  3. To edit or review the WiFi settings on the Raspberry Pi:
    sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
  4. To show wireless devices and their configuration:
    iw list

In the output of the iwlist wlan0 frequency command, we can see that all the channels the interface supports have frequencies in the 2.4GHz range, as shown in the following screenshot. Hence we know that the Raspberry Pi 3 Model B can only do 2.4GHz.

[Image Caption: Raspberry Pi 3 Model B does not support 5GHz WiFi networks.]
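As a side note, for the 2.4GHz band the channel numbers map to the centre frequencies reported by iwlist through a simple formula: channel n is centred at 2407 + 5n MHz (for channels 1 to 13; channel 14 is a special case). A quick check in the shell:

```shell
# 2.4GHz WiFi centre frequency of channel n is 2407 + 5 * n MHz (n = 1..13).
for n in 1 6 11; do
    echo "channel $n: $((2407 + 5 * n)) MHz"
done
```

Channels 1, 6, and 11 come out at 2412, 2437, and 2462 MHz, the familiar non-overlapping channels in the 2.4GHz band.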

This is a very crucial thing to take note of, because I had wrongly set the SSID of a 5GHz WiFi network in the wpa_supplicant configuration file, and the Raspberry Pi could not connect to the WiFi network at all.

We can also use wpa_cli to scan and list the network names, as shown in the following screenshot.

[Image Caption: Using wpa_cli to scan for the network.]

In the above scan results, you can see the 12 networks the Raspberry Pi can pick up, the frequency each of them is broadcasting on (again, in the 2.4GHz range), the signal strength, the security type, and the network name (SSID).
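For reference, the scan in the screenshot boils down to two wpa_cli calls. This is a sketch to run on the Pi itself; the guard simply makes it a harmless no-op on machines without wpa_cli installed:

```shell
# Run on the Raspberry Pi; wlan0 is the WiFi interface from `ip link show`.
if command -v wpa_cli >/dev/null 2>&1; then
    wpa_cli -i wlan0 scan            # ask wpa_supplicant to start a scan
    sleep 3                          # give the driver a moment to finish
    wpa_cli -i wlan0 scan_results    # frequency, signal, flags, and SSID
fi
```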

You may ask why I need to specify the interface in the wpa_cli command. This is because the default interface of wpa_cli is not the WiFi interface the Raspberry Pi actually uses. As shown by the “ip link show” command, we are using wlan0 for the WiFi connection. If we don’t specify the interface in wpa_cli, we will get the following issue.

[Image Caption: Oh no, no network results found in the scan_result.]

This is the same problem discussed on the Arch Linux forum and solved in February 2018.

Conclusion

That’s all it takes to get my Raspberry Pi up and running. Please let me know in the comment section if you have any doubts about the steps listed above.

Here, I’d like to thank Mitch Allen for writing a great tutorial on how to set up a Raspberry Pi over WiFi. His tutorial also includes instructions for Windows users, so if you are on Windows, please refer to it as well.

Smart Contracts and Azure Blockchain Service

On 30th July, I was glad to attend the webinar “Percolation Framework for Loss Distribution of Smart Contract Risks”, organised by the National University of Singapore.

Assistant Prof. Petar Jevtić from Arizona State University was invited as the webinar’s key speaker. Prof Jevtić’s research focuses on the modelling of risk, with primary applications in Actuarial Science and Mathematical Finance. During the webinar, he shared with us the work he has done together with Nicolas Lanchier. Since the topic is related to Smart Contracts, at the beginning of his talk he gave us a clear definition of a Smart Contract from Nick Szabo.

For those who have been following Blockchain news, the name “Nick Szabo” should sound familiar, because Nick Szabo is the computer scientist known for his research in digital currency. Most notably, although he has repeatedly denied it, people suspect that he is the Bitcoin founder, Satoshi Nakamoto.

🎨 Topic and synopsis of the webinar. 🎨

Smart Contracts

So, what is a Smart Contract?

The contract, a set of promises agreed to in a “meeting of the minds”, is the traditional way to formalise a relationship. We sign many different types of contracts in our lives, for example work contracts, sale agreements, tenancy agreements, etc. Some of the major problems with traditional contracts are that drafting them takes a long time and that we have to go through the hassle of hiring lawyers and detailing every term and condition of the agreement. Hence, what if the contract comes in a digital form, where we can code it so that it can understand and execute its own terms?

A Smart Contract, according to Nick Szabo, was first defined in the 1990s as “a set of promises, specified in digital form, including protocols within which the parties perform on these promises”. So a Smart Contract is a computer program or a transaction protocol which is able to automate a good deal of the contracting process without anyone intervening.

Since a Smart Contract can map legal obligations into an automated process, it helps us reduce the need for trusted intermediaries, enforcement costs, and fraud losses by formalising and securing relationships over computer networks.

Azure Blockchain Service (ABS)

When we talk about Smart Contracts, there is one thing that we definitely have to discuss as well, i.e. Blockchain technology. Both of them are of intense interest to businesses. Smart Contracts are stored on a blockchain. It is extremely difficult for a blockchain system to be corrupted, as that would require enormous computing power to override the whole network.

🎨 Taking Singapore Airlines flight from Japan to Singapore. 🎨

Singapore Airlines is one of the businesses that have adopted blockchain technology. In 2018, Singapore Airlines used Azure to convert customers’ airline miles into blockchain-based tokens which can then be spent across a network of retail partners. They are currently also evaluating Azure Blockchain Service (ABS), the new Microsoft managed service offering for blockchain.

ABS is a fully-managed ledger service in Azure announced in May 2019. With ABS, customers do not need to worry about developing and maintaining the underlying blockchain infrastructure.

🎨 Azure Blockchain Service is still in preview in July 2020. 🎨

There are a few things that I’d like to share regarding the ABS, even though it is still in preview now (as of July 2020).

Firstly, the ABS is currently supported only in a few regions, such as Southeast Asia, West Europe, East US, and so on, as shown in the following screenshot.

🎨 The supported regions in Azure Blockchain Service. 🎨

However, if we would like to capture, transform, and deliver transaction data using the Blockchain Data Manager, then we have to be extra careful in choosing the region. As of July 2020, Blockchain Data Manager is only available in the East US and West Europe regions. Hence, if we choose any region outside of these two, we will not be able to access the Blockchain Data Manager, as shown in the screenshot below.

🎨 Blockchain Data Manager is only available in the East US and West Europe regions. 🎨

Secondly, as shared by Mark Russinovich, the Azure CTO, in his announcement of ABS, Microsoft is partnering with J.P. Morgan to make Quorum the first ledger available in the service. What is Quorum? Quorum is an Ethereum-based distributed ledger protocol with transaction/contract privacy and new consensus mechanisms.

Hence, we need to know that ABS currently supports only Quorum as the ledger protocol.

Thirdly, if we would like some sample code on how to interact with the transaction nodes on the blockchain, there are a few samples available on the Azure Portal that we can refer to, as shown in the following screenshot.

🎨 Sample codes on how to connect to the transaction node. 🎨

Currently, it takes about 10 minutes to create an ABS instance successfully on the Azure Portal.

Local Deployment of Smart Contracts, Problems Encountered, and GitHub Issues

With the Azure Blockchain Development Kit for Ethereum extension, we can connect to the ABS consortium from Visual Studio Code. With the development kit, developers can easily create, connect, build, and deploy smart contracts locally.

There is a very good quick-start guide on how to do all of this on Microsoft Docs, so I will not repeat it here. Instead, I will share the problems that I encountered during the development and deployment phases.

As mentioned in the Microsoft Docs, the programming language used to implement Smart Contracts is Solidity. The first problem comes from here. In the new Solidity project created by the development kit, the sample HelloBlockchain file as well as the Migrations file have some issues.

🎨 Error happens in the line having pragma keyword. 🎨

Firstly, there is an issue regarding the compiler version, as highlighted in the first line starting with the pragma keyword in the screenshot above.

pragma is a keyword used to enable certain compiler features or checks. As shared by Alex Beregszaszi, the Solidity Co-Lead at the Ethereum Foundation, ^0.5.0 means >=0.5.0 and <0.6.0. Thus, in order to support both 0.5.x and 0.6.x, we need to change it to >=0.5.0, as recommended on Stack Overflow.
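As an aside, the caret range can be sanity-checked in the shell with GNU sort -V (version sort). This is only an illustration of what ^0.5.0 means, not how the Solidity compiler itself parses the pragma:

```shell
# A version v satisfies ^0.5.0 when v >= 0.5.0 and v < 0.6.0.
for v in 0.4.26 0.5.0 0.5.17 0.6.1; do
    # v >= 0.5.0 when 0.5.0 is the lower of the two under version sort
    low=$(printf '%s\n%s\n' "$v" 0.5.0 | sort -V | head -n1)
    # v < 0.6.0 when v sorts before 0.6.0 and is not equal to it
    if [ "$low" = "0.5.0" ] && [ "$v" != "0.6.0" ] \
       && [ "$(printf '%s\n%s\n' "$v" 0.6.0 | sort -V | head -n1)" = "$v" ]; then
        echo "$v satisfies ^0.5.0"
    else
        echo "$v does not satisfy ^0.5.0"
    fi
done
```

Only 0.5.0 and 0.5.17 fall inside the range; 0.4.26 and 0.6.1 do not.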

Secondly, by default there is no SPDX license identifier in the Solidity files. What is an SPDX license identifier? SPDX stands for Software Package Data Exchange, an open standard for communicating the licenses under which a piece of software is distributed. So, why do we need SPDX here? The Solidity documentation explains it well as follows.

Trust in smart contracts can be better established if their source code is available. Since making source code available always touches on legal problems with regards to copyright, the Solidity compiler encourages the use of machine-readable SPDX license identifiers. Every source file should start with a comment indicating its license…

Layout of a Solidity Source File

Hence, we need to add the SPDX license identifier at the top of Solidity files, for example

// SPDX-License-Identifier: MIT
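Putting the two fixes together, the top of each Solidity source file, for example the sample HelloBlockchain contract, would then start like this (contract body elided):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity >=0.5.0;

contract HelloBlockchain {
    // ... unchanged contract body ...
}
```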

Now, we can proceed to build and deploy the Smart Contract.

🎨 Deploying contracts to the cloud. 🎨

According to the Microsoft Docs, we should be able to use the Smart Contract UI provided in Visual Studio Code to call the contract functions. However, this is not possible at all if we have deployed the Smart Contracts to Azure.

So the third problem that I’d like to share is that, according to Cale Teeter, Microsoft Senior Software Engineer in Blockchain Engineering, currently there is “no version of the extension support Smart Contract Interaction with remote contracts (including ABS). This will only work for locally deployed contracts”. Hence, I submitted a new Pull Request to Microsoft Docs to add this notice to the documentation. Pat Altimore, Microsoft Senior Content Developer, later agreed with me and was planning to remove this part of the documentation from the Microsoft Docs.

🎨 The Smart Contract UI. It will show “method handler crashed” if the Smart Contract is on ABS. 🎨

Now here comes the fourth problem. When I launched VS Code again, I could no longer get the Smart Contract UI back, even though I redeployed the Smart Contract locally instead of to ABS. VS Code simply keeps showing the message “Loading dapp…”, as shown in the screenshot below.

🎨 I raised an issue regarding this on GitHub. 🎨

I thus highlighted this issue on GitHub, and later, in a discussion with Cale Teeter, we both agreed that the Smart Contract UI page really needs to be supported in a different manner, because this page is actually a React app and, as Cale Teeter himself explains, “supporting the react app inside the IDE requires some IPC magic that hides too much of what is going on”.

Future Work

Currently, the issues highlighted above are new. The last one was raised just about two weeks before I wrote this article. So, if you have any new findings or doubts, please let us know on the GitHub Issues page of the project.

That’s all for my journey with the Azure Blockchain Service, which is still in preview and very young.

By the way, Mark Russinovich did a very good introductory presentation on what blockchain is. If you are interested in blockchain and would like to have a quick overview about this cool technology, please watch the YouTube video below.

Building a Healthcare Dashboard with FAST Framework and Azure API for FHIR

In our previous article, we successfully imported realistic-but-not-real patient data into the database of our Azure API for FHIR. Hence, the next step we would like to go through in this article is how to write a user-friendly dashboard to show that healthcare data.

For the frontend, there are currently many open-source web frontend frameworks to choose from. For example, in our earlier COVID-19 dashboard project, we used Material Design from Google.

In this project, in order to make our healthcare dashboard consistent with other Microsoft web apps, we will be following the Microsoft design system.

For the backend of the dashboard, we will be using ASP .NET Core 3.1 because, even though Michael Hansen provided a sample on GitHub showing how to write a client app that consumes the Azure API for FHIR, it is a JavaScript app. So I think another sample on how to do it with ASP .NET Core will be helpful to other developers as well.

🎨 The architecture of the system we are going to setup in this article. 🎨

FAST Framework and Web Components

Last week, on 7th July, Rob Eisenberg from Microsoft introduced the FAST Framework during the .NET Community Standup. FAST, which stands for Functional, Adaptive, Secure, and Timeless, is an adaptive interface system for modern web experiences.

In web app development projects, we always come to situations where we need to add a button, a dropdown, or a checkbox to our web apps. If we are working in a large team, then issues like UI consistency across different web apps, which might be built using different frontend frameworks, are problems that we need to solve. So what excites me about the FAST Framework is that it solves this problem with Web Components, which can be used with any frontend framework.

🎨 Rob Eisenberg’s (left most) first appearance on Jon Galloway’s (right most) .NET Community Standup with Daniel Roth (top most) and Steve Sanderson from the Blazor team. (Source: YouTube) 🎨

All modern browsers now support Web Components. The term Web Components refers to a suite of different technologies allowing us to create reusable custom elements, with their functionality encapsulated away from the rest of our code, and utilise them in our web apps. Hence, using Web Components in our web app increases the reusability, testability, and reliability of our code.

Web Components integrate well with major frontend frameworks, such as Angular, Blazor, Vue, etc. We can easily drop Web Components into ASP .NET web projects too, and we are going to do that in our healthcare dashboard project.

FAST Design System Provider

Another cool thing in the FAST Framework is that it comes with a component known as the Design System Provider, which provides us a convenient mechanism to surface design system values to UI components and change those values where desired.

In the FAST Framework, the Web Component that corresponds to the design system provider is called the FASTDesignSystemProvider. Its design system properties can be easily overridden by setting the value of the corresponding property on the provider. For example, by simply changing the background of the FASTDesignSystemProvider from light to dark, it will automatically switch from light mode to dark mode, and the corresponding colour scheme will be applied.

🎨 FAST Framework allows our web apps to easily switch between light and dark modes. 🎨

UI Fabric and Fluent UI Core

In August 2015, Microsoft released the GA of Office UI Fabric on GitHub. The goal of Office UI Fabric is to provide frontend developers with a mobile-first, responsive frontend framework, similar to Bootstrap, for creating web experiences.

The Office UI Fabric speaks the Office Design Language. If you use any Office-powered web app, such as Outlook or OneDrive, the Office web layout should be very familiar to you. So by using the Office UI Fabric, we can easily make our web apps have an Office-like user interface and user experience.

🎨 OneDrive web with the Office design. 🎨

In order to deliver a more coherent and productive experience, Microsoft later released Fluent, another cross-platform design system. Also, to move towards the goal of a simplified developer ecosystem, Office UI Fabric evolved into Fluent UI in March 2020.

Fluent UI can be used in both web and mobile apps. For the web platform, it comes with two options, i.e. Fluent UI React and Fabric Core.

Fluent UI React is meant for React applications, while Fabric Core is provided primarily for non-React web apps or static web pages. Since our healthcare dashboard will be built on top of ASP .NET Core 3.1, Fabric Core is sufficient for our project.

However, since some components, such as ms-Navbar and ms-Table, are still only available in Office UI Fabric but not in Fabric Core, our healthcare dashboard will use both CSS libraries.

Azure CDN

A CDN (Content Delivery Network) is a distributed network of servers that work together to deliver Internet content quickly. Normally, the servers are distributed across the globe so that content can be served to users based on their geographic locations, and users around the world can view the same high-quality content without slow loading times. Hence, it is normally recommended to use a CDN to serve all our static files.

Another reason not to host static files on our web servers is that we would like to avoid extra HTTP requests hitting our web servers just to load static files such as images and CSS.

Fortunately, Azure has a service called Azure CDN, which offers us a global CDN that stores cached content on edge servers in point-of-presence locations close to end users, to minimise latency.

To use Azure CDN, firstly, we need to store all the necessary static files in a container of our Storage account. We will be using the same Storage account that we used to store the realistic-but-not-real patient data generated by Synthea(TM).

Secondly, we proceed to create Azure CDN.

Thirdly, we add an endpoint to the Azure CDN, as shown in the following screenshot, to point to the container that stores all our static files.

🎨 The origin path of the CDN endpoint is pointing to the container storing the static files. 🎨

Finally, we can access the static files with the Azure CDN endpoint. For example, to get the Office Fabric UI css, we will use the following URL.

https://lunar-website-statics.azureedge.net/fabric.min.css

There is already a very clear quick-start tutorial on Microsoft Docs that you can refer to if you are interested in finding out more about the integration of Azure CDN with Azure Storage.

Options Pattern in ASP .NET Core

Similar to the Azure Function we deployed in the previous article, we will send GET requests to different endpoints of the Azure API for FHIR to request different resources. However, before we are able to do that, we first need to get an access token from Azure Active Directory. The steps to do so are summarised in that same article as well.

Since we need application settings such as the Authority, Audience, Client ID, and Client Secret to retrieve the access token, we will store them in appsettings.Development.json for local debugging purposes. When we later deploy the dashboard web app to an Azure Web App, we will store these settings in its Application Settings.

Then, instead of reading from the settings file directly using IConfiguration, we choose to use the Options pattern, which enables us to provide strongly typed access to groups of related settings and thus also provides a mechanism to validate the configuration data. For example, the settings we have in our appsettings.Development.json are as follows.

{ 
    "Logging": { 
        ...
    }, 
    "AzureApiForFhirSetting": { 
        "Authority": "...", 
        "Audience": "...", 
        "ClientId": "...", 
        "ClientSecret": "..." 
    }, 
    ...
}

We will then create a class which will be used to bind to the AzureApiForFhirSetting section.

public class AzureApiForFhirSetting { 
    public string Authority { get; set; }
    public string Audience { get; set; }
    public string ClientId { get; set; }
    public string ClientSecret { get; set; } 
}

Finally, to set up the binding, we will need to add the following lines in Startup.cs.

// This method gets called by the runtime. Use this method to add services to the container. 
public void ConfigureServices(IServiceCollection services) 
{ 
    services.AddOptions();
    services.Configure<AzureApiForFhirSetting>(Configuration.GetSection("AzureApiForFhirSetting")); 
    ...
    services.AddControllersWithViews(); 
}

After that, we will inject IOptions<AzureApiForFhirSetting> into the classes that need the configuration, as shown in the following example.

public class AzureApiForFhirService : IAzureApiForFhirService 
{ 
    private readonly AzureApiForFhirSetting _azureApiForFhirSetting; 

    public AzureApiForFhirService(IOptions<AzureApiForFhirSetting> azureApiForFhirSettingAccessor) 
    { 
        _azureApiForFhirSetting = azureApiForFhirSettingAccessor.Value; 
    }

    ...
}

Once we have the access token, we will be able to access the Azure API for FHIR. Let's look at some of the endpoints the API offers.
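As a sketch of how that token retrieval could be wired up with the setting class above, the following helper uses the OAuth 2.0 client credentials flow against the Azure AD token endpoint. Note that the AccessTokenProvider class name and the exact request shape here are my own illustration for this article, not necessarily the code in the actual project.

```csharp
using System.Collections.Generic;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Options;

// Hypothetical helper: exchanges the client credentials in
// AzureApiForFhirSetting for an access token from Azure AD.
public class AccessTokenProvider
{
    private readonly AzureApiForFhirSetting _setting;
    private readonly HttpClient _httpClient;

    public AccessTokenProvider(IOptions<AzureApiForFhirSetting> settingAccessor, HttpClient httpClient)
    {
        _setting = settingAccessor.Value;
        _httpClient = httpClient;
    }

    public async Task<string> GetAccessTokenAsync()
    {
        // The Authority is the Azure AD token endpoint for our tenant; the
        // Audience is the resource (the Azure API for FHIR URL) the token
        // is requested for.
        var response = await _httpClient.PostAsync(
            $"{_setting.Authority}/oauth2/token",
            new FormUrlEncodedContent(new Dictionary<string, string>
            {
                ["grant_type"] = "client_credentials",
                ["client_id"] = _setting.ClientId,
                ["client_secret"] = _setting.ClientSecret,
                ["resource"] = _setting.Audience
            }));

        response.EnsureSuccessStatusCode();

        // The token endpoint returns a JSON body with an "access_token" field.
        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return json.RootElement.GetProperty("access_token").GetString();
    }
}
```

The returned token then goes into the Authorization header (as a Bearer token) of every request we send to the Azure API for FHIR.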

Azure API for FHIR: The Patient Endpoint

Retrieving all the patients in the database is very easy: we simply send a GET request to the /Patient endpoint. By default, the API returns at most 10 records. To retrieve the next 10 records, we need to send another GET request to the URL returned by the API, as highlighted in the red box in the following example screenshot.
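For context, a FHIR search response is a Bundle resource, and the paging links live in its link array; the entry with relation next is the URL for the next page. A trimmed example, with placeholder URLs of my own, might look like the following.

```json
{
    "resourceType": "Bundle",
    "type": "searchset",
    "link": [
        { "relation": "self", "url": "https://<your-fhir-server>/Patient" },
        { "relation": "next", "url": "https://<your-fhir-server>/Patient?ct=<continuation-token>" }
    ],
    "entry": [ "..." ]
}
```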

🎨 The “self” URL is the API link for the current 10 records. 🎨

Once we have all the patients, we can list them in a nice table designed with Office UI Fabric, as shown in the following screenshot.

🎨 Listing all the patients in the Azure API for FHIR database. 🎨

When we click the "View Profile" link of a record, we can then get more details about the selected patient. To retrieve the info of a particular patient, we pass the ID to the /Patient endpoint, as highlighted in the red box in the following screenshot.

🎨 Getting the info of a patient with his/her ID. 🎨

Where can we get the patient’s ID? The ID is returned, for example, when we get the list of all patients.

So after we click "View Profile", we will reach a Patient page which shows more details about the selected patient, as shown in the following screenshot.

🎨 We can also get the address together with its geolocation data of a patient. 🎨

Azure API for FHIR: The Other Endpoints

There are many resources available in the Azure API for FHIR, and Patient is just one of them. Besides Patient, we also have Condition, Encounter, Observation, and so on.

Getting entries from the endpoints corresponding to the three resources listed above is quite straightforward. However, if we directly send a GET request to, let's say, /Condition, what we will get is the Condition records of all the patients in the database.

In order to filter by patient, we need to add a query string called patient to the endpoint URL, for example /Condition?patient=, and then append the patient ID to the URL.

Then we will be able to retrieve the resources of that particular patient, as shown in the following screenshot.

🎨 It’s interesting to see that a COVID-19 test record can already be generated in Synthea. 🎨

So far I have only tried the four endpoints. The /Observation endpoint is very tricky because the value it returns is, most of the time, a single number for a measurement. However, it also returns two numbers or text for some other measurements. Hence, I have to do some if-else checks on the values returned from the /Observation endpoint.
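Those if-else checks can be sketched roughly as follows. The property names here follow the FHIR Observation resource (valueQuantity for a single number, component for multi-part measurements such as blood pressure, and valueCodeableConcept for coded or textual results), but the method itself is a simplified illustration of mine, not the exact code in the project.

```csharp
using System.Collections.Generic;
using System.Text.Json;

// Simplified illustration of branching on the different shapes an
// Observation value can take (parsed here with System.Text.Json).
public static class ObservationValueReader
{
    public static string ReadValue(JsonElement observation)
    {
        // Case 1: a single numeric measurement, e.g. body weight.
        if (observation.TryGetProperty("valueQuantity", out var quantity))
        {
            return $"{quantity.GetProperty("value")} {quantity.GetProperty("unit").GetString()}";
        }

        // Case 2: a coded or textual result, e.g. a SARS-CoV-2 test outcome.
        if (observation.TryGetProperty("valueCodeableConcept", out var concept))
        {
            return concept.GetProperty("text").GetString();
        }

        // Case 3: multiple numbers, e.g. systolic and diastolic blood
        // pressure, which arrive as separate entries under "component".
        if (observation.TryGetProperty("component", out var components))
        {
            var parts = new List<string>();
            foreach (var component in components.EnumerateArray())
            {
                var q = component.GetProperty("valueQuantity");
                parts.Add($"{q.GetProperty("value")} {q.GetProperty("unit").GetString()}");
            }
            return string.Join(" / ", parts);
        }

        return "(no value)";
    }
}
```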

🎨 The SARS-CoV-2 testing result is returned as text instead of a number. 🎨

Source Code of the Dashboard

That’s all for the healthcare dashboard that I have built so far. There are still many exciting features we can expect to see after we have integrated with the Azure API for FHIR.

So, if you would like to check out my code, the dashboard project is now available on GitHub at https://github.com/goh-chunlin/Lunar.HealthcareDashboard. Feel free to raise an issue or a PR to the project.

Together, we learn better.