Controlling Raspberry Pi with Surface Dial

Last week, I got a Surface Dial from Riza Marhaban because the device looks cool.

Microsoft always produces cool products, such as the Zune music player, Windows Mobile, the Nokia Lumia phones, etc. However, most of them were discontinued and are now difficult to find on the market. Even though I hope that the Surface Dial won't share their fate, I still decided to get one before anything happens to it.

If you are not sure what the Surface Dial is, I previously shared in another blog post how we can use the Surface Dial in our UWP applications on Windows 10. I have also found a video, which I attach below, about how people make amazing music with the Surface Dial on Windows.

Using Surface Dial Outside of Windows

Many exciting examples online seem to suggest that we can only use the Surface Dial on Windows. Well, that is not true.

Starting from Linux kernel 4.19, released on 22nd October 2018, the Surface Dial is supported. Fortunately, if we are using the standard Raspberry Pi OS update/upgrade process, the Linux kernel on our Raspberry Pi should be the latest stable version. That means we can connect the Surface Dial to, for example, our Raspberry Pi 3 Model B too.
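
We can verify the kernel version on the Raspberry Pi with the following command; it should report 4.19 or later.

$ uname -r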

I use the Raspberry Pi 3 Model B as the example here because it is the only one I have now and I have previously set it up with the latest Raspberry Pi OS. Another reason for using the Raspberry Pi 3 Model B is that it comes with built-in Bluetooth 4.1.

Pair and Connect Surface Dial to Raspberry Pi

After we SSH into the Raspberry Pi, we will use the bluetoothctl command to pair the Raspberry Pi with a Bluetooth device, which in this case is the Surface Dial.

$ sudo bluetoothctl

On a smartphone, for example, when we pair the phone with a Bluetooth device, we always go through an authentication step where we are prompted for a passcode or asked whether we would like to connect to the Bluetooth device. So how is that done using bluetoothctl?

Within bluetoothctl, we first need to register something called an Agent. The Agent is what manages the Bluetooth pairing code, and we set our Agent to be the Default Agent.

[bluetooth]# agent on
Agent registered
[bluetooth]# default-agent 
Default agent request successful

Next, we will scan for nearby Bluetooth devices with the following command. Since the Surface Dial is discoverable by default, the device name "Surface Dial" shows up together with its Bluetooth address.

[bluetooth]# scan on
Discovery started
[NEW] Device XX:XX:XX:XX:XX:XX Surface Dial

We can then pair and connect to the Surface Dial using its Bluetooth address.

[bluetooth]# pair XX:XX:XX:XX:XX:XX
...
[bluetooth]# connect XX:XX:XX:XX:XX:XX
Attempting to connect to XX:XX:XX:XX:XX:XX
[CHG] Device XX:XX:XX:XX:XX:XX Connected: yes 

Now we have the Surface Dial successfully connected to our Raspberry Pi. Yay!

Exploring /dev/input

The /dev directory contains the device files for all the devices connected to our Raspberry Pi. Within it, the input subdirectory holds the files for the various input devices, such as the mouse and keyboard, or in our case, the Surface Dial.
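
We can list them with the following command.

$ ls /dev/input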

[Image Caption: The content of /dev/input on my Raspberry Pi.]

With so many event files, how do we know which events are for which input devices? To find that out, we just need to use the following command.

$ cat /proc/bus/input/devices

We will then see something like the following, which is the output on my Raspberry Pi.

 I: Bus=0003 Vendor=0d8c Product=000c Version=0100 
 N: Name="C-Media USB Headphone Set  "
 ...
 H: Handlers=kbd event0
 ...

 I: Bus=0005 Vendor=045e Product=091b Version=0108
 N: Name="Surface Dial System Multi Axis"
 ...
 H: Handlers=event1 
 ...
  
 I: Bus=0005 Vendor=045e Product=091b Version=0108
 N: Name="Surface Dial System Control"
 ...
 H: Handlers=kbd event2  
 ...

So based on the Handlers, we know that event1 and event2 bind to the Surface Dial.

Input Event Messages

The event messages are binary C structs, so we need to be able to delimit individual messages and decode them. Unless you are, for example, using the beta version of the 64-bit Raspberry Pi OS, the recommendation is normally to use a 32-bit OS on the Raspberry Pi. On 32-bit platforms, each event message is 16 bytes. On 64-bit platforms, the event timestamp alone is 16 bytes, and thus each event message is 24 bytes.

[Image Caption: Event message size in 32-bit and 64-bit platforms.]
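
To make the layout concrete, on a 32-bit platform the message can be modelled with a Go struct like the sketch below; the field names are mine, and the layout follows the Linux input_event struct.

type inputEvent32 struct {
	Sec   uint32 // timestamp, seconds (8 bytes instead of 4 on 64-bit platforms)
	Usec  uint32 // timestamp, microseconds (8 bytes instead of 4 on 64-bit platforms)
	Type  uint16 // event type, e.g. a relative-movement or key event
	Code  uint16 // event code within that type
	Value int32  // event value, e.g. the rotation delta
}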

Reading Surface Dial Input Events with Golang

There are three basic actions we can perform on the Surface Dial, i.e. turn left, turn right, and click. We now know that the events generated by these actions can be found in event1, so what we need to do is simply find a way to read device input events from /dev/input/event1.

There are many ways of doing that. There is online documentation on how to do it in Python with evdev and in C# on .NET Core. I didn't try those out. Instead, what I tried was to write a simple Golang application to process the event messages. In 2017, Janczer successfully read and parsed such event messages with Golang.

However, Janczer was working on a 64-bit platform, so today I'm going to share how to change the code to work on a Raspberry Pi running a 32-bit OS.

package main

import (
	"bytes"
	"encoding/binary"
	"fmt"
	"os"
	"time"
)

func main() {
	f, err := os.Open("/dev/input/event1")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// On a 32-bit OS, each input event message is 16 bytes.
	b := make([]byte, 16)

	for {
		if _, err := f.Read(b); err != nil {
			panic(err)
		}
		fmt.Printf("%b\n", b)

		// Bytes 0-7 hold the timestamp: 32-bit seconds and 32-bit microseconds.
		sec := binary.LittleEndian.Uint32(b[0:4])
		t := time.Unix(int64(sec), 0)
		fmt.Println(t)

		// Bytes 8-15 hold the event type, code, and value.
		typ := binary.LittleEndian.Uint16(b[8:10])
		code := binary.LittleEndian.Uint16(b[10:12])

		var value int32
		binary.Read(bytes.NewReader(b[12:]), binary.LittleEndian, &value)

		fmt.Printf("type: %x\ncode: %d\nvalue: %d\n", typ, code, value)
	}
}

Now, when I turn or press the Surface Dial, I can see the output as shown in the screenshot below. I ran the program three times so that I can easily demonstrate the outputs for turning the Surface Dial clockwise, turning it counter-clockwise, and clicking it, respectively.

[Image Caption: Event messages sent from the Surface Dial to Raspberry Pi.]
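
As a sketch of where to go from here, inside the for loop above we could branch on the decoded type to react to the three actions. EV_REL (0x02) and EV_KEY (0x01) are the standard Linux input event types; the sign convention of value for the turn direction below is an assumption you should verify against your own output.

		switch typ {
		case 0x02: // EV_REL: relative movement, i.e. the dial is being turned
			if value > 0 {
				fmt.Println("turned clockwise") // assumed sign convention; verify on your device
			} else {
				fmt.Println("turned counter-clockwise")
			}
		case 0x01: // EV_KEY: a button event, i.e. the dial is pressed or released
			if value == 1 {
				fmt.Println("pressed")
			} else if value == 0 {
				fmt.Println("released")
			}
		}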

With this, now it’s up to us to make the best out of the combination of Raspberry Pi and Surface Dial.

Building COVID-19 Dashboard in Golang with Google BigQuery

It has been almost half a year since the first case relating to the COVID-19 pandemic was confirmed in Singapore, the country where I am now working. Two days after the first case was confirmed in Singapore, eight travellers entering Malaysia, my home country, from Singapore were confirmed to be infected as well.

Since then, we have been asked to work from home as travel restrictions are applied in both countries. While the situation is not getting better, it's quite disappointing to know that there are still people who believe that COVID-19 is a hoax.

Fortunately, there are still a lot more people working hard in this tough period. Earlier on, my friend who is doing research in Colorado told me that she’s working hard with a group of scientists to educate the public about the virus.

🎨  People endured hours-long queues to enter Singapore from Malaysia before the travel restrictions to curb the spread of COVID-19 came into force. 🎨 

In addition, to aid researchers and data scientists in the effort to combat the pandemic, Google BigQuery decided to host a repository of public datasets from JHU CSSE (Johns Hopkins Center for Systems Science and Engineering). As with other public datasets, we can query up to 1TB for free each month, and queries over the COVID-19 data do not count towards that quota until 15th September 2020.

In my previous article, I talked about how Google BigQuery can work together with Google Data Studio to render beautiful reports without any coding. In this article, I will show how we can write a simple web client in Golang to fetch data from BigQuery via its API.

BigQuery Public Datasets Programme

There is a huge number of datasets hosted by Google that we can access and integrate into our applications, and Google pays for the storage. Using the public datasets, we only need to pay for the queries we perform on the data.

🎨  There are a lot of public datasets available in the GCP Marketplace. 🎨 

In order to access the public datasets, we first need to enable them through the Google BigQuery documentation (I find this quite funny because Google makes the enabling link so hidden). In the "Using the Web UI" page, as shown in the screenshot below, we can find a URL which lets us open the public datasets project manually in the browser (remember to update the &page=project part to your own project in GCP).

There are also detailed steps written in the documentation of the Data Analytics Products (yes, the same info is spread across different places).

🎨  The link to enable the public datasets in the web UI. 🎨 

The COVID-19 Dataset

Once we have done the steps above, we shall see the public datasets, including the COVID-19 datasets, available in our Google BigQuery. The dataset that I will be using in this article is the covid19_jhu_csse, a daily updated data repository for COVID-19 from JHU CSSE.

🎨  The covid19_jhu_csse dataset. 🎨 

There are four tables under the dataset, where the first three record the number of confirmed cases, the number of reported deaths, and the number of recovered cases, respectively, for each country or region.

The interesting thing about the first three tables is that they record the numbers for each day in a separate column. Hence, every day, one new column is added to each of the three tables. I'm not sure why they do so, but this actually requires us to write our own client in order to get the data, because Google Data Studio cannot work well with dynamic column names.

🎨  A column for each of the day. 🎨 

Luckily, there is a fourth table called summary which has just one date column, and every record for each day is one row instead of one column. This is a more SQL-friendly table and can be integrated with Google Data Studio easily.

🎨  The summary table is more SQL-friendly because the date is stored in just one column. 🎨 

In this article, I will demonstrate using the 1st, 2nd, and 4th tables in order to show how we can programmatically get the data through the BigQuery API.

BigQuery Client Library for Golang

There are Google BigQuery client libraries for many programming languages, including C#. In this article, we choose to use Golang.

Before we proceed, we need to make sure that we have already enabled the BigQuery API for our project in GCP. From the GCP Cloud Console, we will get a credential file which allows us to connect to Google BigQuery, and thus we must keep this credential file in a safe and secret place.
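
A common way to make the credential file available to the client library is through the GOOGLE_APPLICATION_CREDENTIALS environment variable, for example as follows (the path here is just an illustration).

$ export GOOGLE_APPLICATION_CREDENTIALS="/path/to/credential.json"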

Now we can proceed to build our Golang client.

Firstly, we need to install the client library using go get command.

go get -u cloud.google.com/go/bigquery

Secondly, we need to initialise a Google BigQuery client.

ctx := context.Background()
client, err := bigquery.NewClient(ctx, projectID)

if err != nil {
    log.Fatalf("bigquery.NewClient: %v", err)
}

defer client.Close()
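
For completeness, the snippets in this article assume imports along these lines; the iterator package is used later when we read the rows.

import (
	"context"
	"log"

	"cloud.google.com/go/bigquery"
	"google.golang.org/api/iterator"
)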

Querying the Tables

Next, we can start to query the data in the BigQuery.

rows, err := queryData1(ctx, client)
if err != nil {
    log.Fatal(err) 
}

queryResult := processQueryResult1(rows)

If we have other different queries for different tables or even datasets, we can continue to query in the same way as above.

So what does queryData1 look like? It is basically as simple as follows.

func queryData1(ctx context.Context, client *bigquery.Client) (*bigquery.RowIterator, error) { 
    query := client.Query("<SQL here>")
    
    return query.Read(ctx)
}

For example, if we are fetching the date as well as the numbers of confirmed cases and deaths, we will be using the following SQL.

`SELECT 
    CAST(date as STRING) as date, 
    IFNULL(confirmed, 0) as confirmed_cases, 
    IFNULL(deaths, 0) as deaths 
FROM ` + "`bigquery-public-data.covid19_jhu_csse.summary`" + ` ORDER BY date;`

There are a few things to take note of here. The first is the use of CAST.

CAST converts the date field to a string; otherwise, we may encounter errors such as "schema field date of type DATE is not assignable to struct field date of type time.Time" when we unmarshal the rows returned from BigQuery in Golang later. The reason why I chose CAST is that casting from a date type to a string is independent of time zone and produces the form YYYY-MM-DD.

In addition, we also use IFNULL to make sure that the values in confirmed_cases and deaths are always non-negative integers, because in the original tables the numbers can be null.

Now, we just need a struct that we can load each row into with RowIterator.Next(). The struct corresponding to the SQL above is as follows.

type QueryResultDataRow struct { 
    Date           string `bigquery:"date"` 
    ConfirmedCases int64  `bigquery:"confirmed_cases"` 
    Deaths         int64  `bigquery:"deaths"`
}

To iterate, we can use the code below.

func processQueryResult1(iter *bigquery.RowIterator) []QueryResultDataRow { 
    var result []QueryResultDataRow

    for { 
        var row QueryResultDataRow

        err := iter.Next(&row)

        if err == iterator.Done { 
            break 
        }

        if err != nil { 
            log.Print(err) 
            continue 
        }

        result = append(result, row) 
    }
    
    return result
}

Here, I'd like to share a mistake I made when I first wrote the code above. I forgot that I should end the for loop when the iterator is done, i.e. when err == iterator.Done. Without that check, the loop never exits and the return statement is never reached. Please take note of this when you are writing this type of iteration.

Challenge: The Tables Having Dates as Columns

If you would like to challenge yourself to retrieve the data from the tables having dates as their columns, it is possible too, just with a few challenges.

The first challenge is that we are not sure when the dataset will be updated, so we can never be sure what the name of the last column is. Since the dataset is updated daily, to be safe, we can use the date from two days ago as the last column in our query.

The second challenge is the format of the date. We cannot use the Golang magical reference date (Mon, Jan 2 15:04:05 MST 2006) to format the column name because the underscores in the column name clash with their special meaning in Go time layouts. There is a very interesting discussion about the origin of the magical reference date on Stack Overflow, in case you are interested, but it's not important here. Hence, we will build the column name with fmt.Sprintf instead. The following code gives us the name of the second latest, if not the latest, column.

latestDate := time.Now().AddDate(0, 0, -2)
latestDateInQuery := fmt.Sprintf("_%v_%v_%v", int(latestDate.Month()), latestDate.Day(), latestDate.Year()-2000)

Once we get the column name, we can then use it in the following query.

`SELECT 
    IFNULL(province_state, "") AS place, 
    country_region, 
    latitude, 
    longitude, 
    (` + latestDateInQuery + `) AS count 
FROM ` + "`bigquery-public-data.covid19_jhu_csse.confirmed_cases`;"
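
A struct matching this query could look like the following sketch; the field names are mine, and the numeric types are assumptions based on the table schema.

type PlaceCountRow struct {
	Place         string  `bigquery:"place"`
	CountryRegion string  `bigquery:"country_region"`
	Latitude      float64 `bigquery:"latitude"`
	Longitude     float64 `bigquery:"longitude"`
	Count         int64   `bigquery:"count"`
}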

Visualising the Data

With the queries above, we can then easily generate results with Google Charts. Here, I use the Line Chart and the GeoChart.

🎨  The COVID-19 dashboard powered by Golang and Google BigQuery. 🎨 

There is an interesting feature in GeoChart: by default, when we use latitude and longitude instead of an address to identify the places, the text shown in the map tooltip is the latitude and longitude, which is not user friendly. However, we can actually change the text by putting a description column right after the longitude column, as discussed over here on Google Groups. It's interesting because this is said to be undocumented support for such a column, so we're not sure when it will stop working.

Next, I am using a web page built with Material Design to display the charts. Please enjoy the following screenshots.

🎨  Charts showing the situation in both of my beloved countries. 🎨 
🎨  Top 10 countries having the most confirmed cases. 🎨 
🎨  The global situation where we locate the places with latitudes and longitudes. 🎨 

That's all for the COVID-19 dashboard done using Golang and Google BigQuery. Thanks to JHU CSSE and Google, we are able to access such important data for free.

Finally, I’d like to wish all of you and your loved ones to stay safe and healthy.

🎨  A nurse checks the temperature of a visitor as part of the COVID-19 screening procedure. (Photo Credit: The Straits Times) 🎨 

Leveraging Golang Concurrency in Web App Development

Continue from the previous topic

I first learned about goroutines and channels when I was attending a Golang meetup at GoJek Singapore. In the talk "Orchestrating Concurrency in Go" delivered by the two GoJek engineers Abhinav Suryavanshi and Douglas Vaz, they highlighted the point that "concurrency is not the same as parallelism" at the very beginning of their talk.

Goroutines and channels are the two main things that provide concurrency support in Golang. To someone from a .NET and C# background, threads are familiar. Goroutines are similar to threads, and in fact, goroutines are multiplexed onto threads. Threads are managed by the kernel and have hardware dependencies, but goroutines are managed by the Golang runtime and have no hardware dependencies.

Using goroutines is very simple because we only need to add the keyword go in front of any function call. This reminds me of async/await in C#, which is also about concurrency. Async/await in C# implies that if we make a chain of function calls and the last function is an async function, then all the functions before it have to be async too. On the contrary, there is no such constraint in Golang. So when we do concurrency in Golang, we don't really have to plan ahead of time what's going to be asynchronous.
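
For readers new to these primitives, here is a minimal sketch (not taken from the web app itself) showing a goroutine started with the go keyword and a channel used to pass its result back.

package main

import "fmt"

func square(n int, out chan<- int) {
	out <- n * n // send the result into the channel
}

func main() {
	out := make(chan int)
	for i := 1; i <= 3; i++ {
		go square(i, out) // each call runs concurrently
	}
	for i := 0; i < 3; i++ {
		fmt.Println(<-out) // receive results as they arrive (order not guaranteed)
	}
}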

Applying Goroutines

In the web application we built earlier, we introduce a new feature where users can pick the music categories that they are interested in, and the app will then insert relevant videos into the playlist.

We have six different music categories to choose from.

The code to do the suggestion is as follows.

for i := 0; i < len(request.PostForm["MusicType"]); i++ {
    if request.PostForm["MusicType"][i] != "" {
        go retrieveSuggestedVideos(request.PostForm["MusicType"][i], video, user)
    }
}

Now, if we select all the categories and press the Submit button, we will see that the videos are not added in the selection order. For example, as shown in the following screenshot, the Anime related videos actually come after the Piano videos.

A lot of suggested videos added to the list.
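
Note that the loop above fires the goroutines without waiting for them. If we needed all the suggestions to be retrieved before responding, one sketch using sync.WaitGroup (assuming the same retrieveSuggestedVideos signature and the sync package imported) could look like this.

var wg sync.WaitGroup
for _, musicType := range request.PostForm["MusicType"] {
    if musicType != "" {
        wg.Add(1)
        go func(mt string) {
            defer wg.Done()
            retrieveSuggestedVideos(mt, video, user)
        }(musicType)
    }
}
wg.Wait() // block until every goroutine has called Done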

YouTube Data API

If you are interested in how the relevant videos are retrieved, it is actually done with the YouTube Data API as follows.

...
youtubeDataURL := "https://www.googleapis.com/youtube/v3/search?"
youtubeDataURL += "part=snippet&"
youtubeDataURL += "maxResults=25&"
youtubeDataURL += "topicId=%2Fm%2F04rlf&"
youtubeDataURL += "type=video&"
youtubeDataURL += "videoCategoryId=10&"
youtubeDataURL += "videoDuration=medium&"
youtubeDataURL += "q=" + area + "&"
youtubeDataURL += "key=" + os.Getenv("YOUTUBE_API_KEY")

request, err := http.NewRequest("GET", youtubeDataURL, nil)

checkError(err)

request.Header.Set("Accept", "application/json")

client := &http.Client{}
response, err := client.Do(request)

checkError(err)

defer response.Body.Close()

var youtubeResponse YoutubeResponse
err = json.NewDecoder(response.Body).Decode(&youtubeResponse)
checkError(err)

...
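
The string concatenation above works because most of the parameter values are already URL-safe, but a sketch using the standard net/url package would handle the escaping of, say, the q parameter automatically.

params := url.Values{}
params.Set("part", "snippet")
params.Set("maxResults", "25")
params.Set("topicId", "/m/04rlf") // url.Values escapes the slashes for us
params.Set("type", "video")
params.Set("videoCategoryId", "10")
params.Set("videoDuration", "medium")
params.Set("q", area)
params.Set("key", os.Getenv("YOUTUBE_API_KEY"))
youtubeDataURL := "https://www.googleapis.com/youtube/v3/search?" + params.Encode()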

Containerize Golang Code and Deploy to Azure Web App

Continue from the previous topic

Containers are essentially a huge topic, but beginners need something small to help them get started. Hence, in this article, we will focus only on the key concepts of containers and the steps to containerize our program and deploy it to Azure Web App.

In 2017, Azure introduced Web Apps for Containers. Before that, Azure Web Apps ran on Windows VMs managed by Microsoft. With this new feature, we can build a custom Docker image containing all the binaries and files and then run a Docker container based on that image on Azure Web Apps. Hence, we can now bring our own Docker container images supporting Golang to Azure with its PaaS option.

As explained in the book "How to Containerize Your Go Code", a container isolates an application so that the application thinks it's running on its own private machine. So, a container is similar to a VM, but it uses the OS kernel of the host rather than having its own.

Firstly, we need to prepare the Dockerfile, a text file containing the instructions that tell Docker how to build the image automatically, i.e. all the commands a user could otherwise call on the command line to assemble the image. Traditionally, the Dockerfile is named Dockerfile and located in the root of the build context.

The Dockerfile we have for our project is as follows.

FROM scratch

EXPOSE 80

COPY GoLab /
COPY public public
COPY templates templates

ENV APPINSIGHTS_INSTRUMENTATIONKEY='' \
    CONNECTION_STRING='' \
    OAUTH_CLIENT_ID='' \
    OAUTH_CLIENT_SECRET='' \
    COOKIE_STORE_SECRET='' \
    OAUTH2_CALLBACK=''

CMD [ "/GoLab" ]

The Dockerfile starts with a FROM command that specifies the starting point for the image to build. For our project, we don’t have any dependencies, so we can start from scratch. So what is scratch? Scratch is basically a special Docker image that is empty (0B). That means there will be nothing else in our container later aside from what we put in with the rest of the Dockerfile.

The reason why we build from scratch is that not only can we build a smaller image, but our container will also have a smaller attack surface: the less code there is within our container, the less likely it is to include a vulnerability.

The EXPOSE 80 command tells Docker that the web server inside the container listens on port 80. Note that EXPOSE by itself does not publish the port; it documents which port should be published when the container is run (we will do that later with the -P flag), so that our program can be accessed from outside the container through HTTP.

The next three COPY commands copy, firstly, the GoLab executable into the root directory of the container and, secondly, the two directories public and templates into the container. Without the HTML, CSS, and JavaScript, our web app would not work.

Now you may wonder why the first COPY command says GoLab instead of GoLab.exe. We shall discuss it later in this article.

After that, we use the ENV command to set the environment variables that we will be using in the app.

Finally, we have the line CMD ["/GoLab"], which directs the container as to which command to execute when the container is run.

Since the container is not a Windows container, the code that runs inside it needs to be a Linux binary. Fortunately, this is really simple to obtain with the cross-compilation support in Go, using the following commands in PowerShell.

$env:GOOS = "linux"
go build -o GoLab .

Thus, in the Dockerfile, we use the GoLab file instead of GoLab.exe.
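
If you build on Linux or macOS instead of Windows PowerShell, the equivalent one-liner would be the following.

$ GOOS=linux go build -o GoLab .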

We can now proceed to build the container image with the following command (take note of the dot at the end of the line, which tells Docker to use the current directory as the build context).

$ docker image build -t chunlindocker/golab:v1 .

The -t flag is for us to specify the name and tag of the image. In this case, I call it chunlindocker/golab:v1, where chunlindocker is the Docker ID of my Docker Hub account. Naming it in such a way later helps me push it to a registry, i.e. Docker Hub.

My Docker Hub profile.

If we want to build the image with another Dockerfile, for example Dockerfile.development, we can do it as follows.

$ docker image build -t chunlindocker/golab:v1 -f Dockerfile.development .

Once the Docker image is built, we can see it listed when we run the docker image ls command, as shown in the screenshot below.

Created docker images.

Now the container image is "deployable", meaning we can run it anywhere with a running Docker engine. Since our laptop has Docker installed, we can proceed to run it locally with the following command.

$ docker container run -P chunlindocker/golab:v1

If you run the command above in the Terminal window inside VS Code, you will see that the command line is "stuck". This is because the container is running in the foreground on the local machine. So what we need to do is simply open another terminal window and view all the running containers there.

The docker ps command by default only shows running containers.

To help humans, Docker auto-generates a random two-word name and assigns it to the container. We can see that the container we created is given the random name "nifty_elgama", lol. So now our container has a "human" name to call it by. Note that Ctrl+C only stops the container; to totally remove it, you need to use the rm command as follows.

$ docker container rm nifty_elgama

The PORTS column shown in the screenshot is important because it tells us how the ports exposed in the container can be accessed from the host. So to test it locally, we shall visit http://localhost:32768.
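
As a side note, the -P flag asks Docker to publish every exposed port to a random high port on the host, which is why we get 32768 here. If we prefer a fixed port, we can map it ourselves with the lowercase -p flag, for example as follows, and then visit http://localhost:8080 instead.

$ docker container run -p 8080:80 chunlindocker/golab:v1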

Our next step is to upload the image to a container registry so that later it can be pulled onto any machine, including Azure Web Apps, that will run it. To do so, we push the image we built above to Docker Hub with the following command.

$ docker push chunlindocker/golab:v1

Successfully pushed our new container image to Docker Hub.

So, now how do we deploy the container to Azure?

Firstly, we need to create a Web App for Containers on the Azure Portal, as shown in the screenshot below.

Creating Web App for Containers.

The last item in the configuration is "Configure Container". Clicking on that, we will be brought to the following screen, where we can specify the container image we want to use and pull it from Docker Hub.

We will be deploying a single container called chunlindocker/golab:v4 from Docker Hub.

You can of course deploy a private container image from Docker Hub by choosing "Private" as the Repository Access. Azure Portal will then prompt you for your Docker Hub login credentials so that it can pull the image from Docker Hub.

Once the App Service is created, we can proceed to read the logs under "Container Settings", where we can see the container initialization process.

Logs about the container in App Service.

After that, we can proceed to fill up the Application Settings with the environment variables we use in the web application, and then we are good to go.

The website is up and running on Azure Web App for Containers.
