Kubernetes CronJob to Send Email via Azure Communication Services

In March 2021, Azure Communication Services became generally available after being showcased at Microsoft Ignite. Initially, it offered only services such as SMS and voice and video calling. A year later, in May 2022, it added a way to facilitate high-volume transactional emails. However, this email capability is currently still in public preview, so the email-related APIs and SDKs are provided without an SLA and are not recommended for production workloads.

Currently, our Azure account has a set of limitations on the number of email messages that we can send. For all developers, email sending is limited to 10 emails per minute, 25 emails per hour, and 100 emails per day.

Set Up Azure Communication Services

To begin, we need to create a new Email Communication Services resource from the marketplace, as shown in the screenshot below.

US is the only option for the Data Location now in Email Communication Services.

Take note that currently we can only choose United States as the Data Location, which determines where the data will be stored at rest. This cannot be changed after the resource has been created. As a result, the Azure Communication Services resource that we need to configure next must also store its data in the United States. We will come back to this later.

Once the Email Communication Service is created, we can begin by adding a free Azure subdomain. With the “1-click add” function, as shown in the following screenshot, Azure will automatically configure the required email authentication protocols based on email authentication best practices.

Click “1-click add” to provision a free Azure managed domain for sending emails.

We will then have a MailFrom address in the format of donotreply@xxxx.azurecomm.net which we can use to send email. We are allowed to modify the MailFrom address and From display name to more user-friendly values.

After getting the domain, we need to connect Azure Communication Services to it to send emails.

As mentioned earlier, we need to make sure that our Azure Communication Services resource also has United States as its Data Location. Otherwise, we will not be able to link the email domain for email sending.

Successfully connected our email domain. =)

A Simple Console App for Sending Email

Now, we need to create the console app which will be used in our Kubernetes CronJob later to send the emails with the Azure Communication Services Email client library.

Before we begin, we have to get the connection string of the Azure Communication Services resource.

Getting the connection string of the Azure Communication Services resource.

Here I have the following code to send a sample email to myself.

using Azure.Communication.Email.Models;
using Azure.Communication.Email;

string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING") ?? string.Empty;
string emailFrom = Environment.GetEnvironmentVariable("EMAIL_FROM") ?? string.Empty;

if (connectionString != string.Empty)
{
    EmailClient emailClient = new EmailClient(connectionString);

    // Subject and plain-text body of the email.
    EmailContent emailContent = new EmailContent("Welcome to Azure Communication Service Email APIs.");
    emailContent.PlainText = "This email message is sent from Azure Communication Service Email using .NET SDK.";

    // Recipient list; more addresses can be added here if needed.
    List<EmailAddress> emailAddresses = new List<EmailAddress> {
            new EmailAddress("gclin009@hotmail.com") { DisplayName = "Goh Chun Lin" }
        };
    EmailRecipients emailRecipients = new EmailRecipients(emailAddresses);

    // Compose and send the email from the MailFrom address configured earlier.
    EmailMessage emailMessage = new EmailMessage(emailFrom, emailContent, emailRecipients);
    SendEmailResult emailResult = emailClient.Send(emailMessage, CancellationToken.None);
}
Setting environment variables for local debugging purposes.
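For local debugging, one simple way to provide these two values is to set them in the terminal session before launching the app. A minimal sketch in PowerShell, with placeholder values:

# Set the environment variables for the current PowerShell session only
$env:COMMUNICATION_SERVICES_CONNECTION_STRING = "endpoint=https://xxxx.communication.azure.com/;accesskey=xxxx"
$env:EMAIL_FROM = "DoNotReply@xxxx.azurecomm.net"
dotnet run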

Tada, there should be an email successfully sent out as instructed.

Email is successfully sent and received. =)

Containerise the Console App

Next, we need to containerise the console app above.

Assuming that our console app is called MyConsoleApp, we can prepare a Dockerfile as follows.

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyMedicalEmailSending.csproj", "."]
RUN dotnet restore "./MyConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyConsoleApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyConsoleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]

We can then publish it to Docker Hub for consumption later.
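For example, assuming the Dockerfile above sits in the project folder and we tag the image the same way it is referenced in the CronJob YAML later, the build and push steps would look like this:

docker build -t chunlindocker/emailsender:v2023-01-25-1600 .
docker push chunlindocker/emailsender:v2023-01-25-1600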

If you prefer to use Azure Container Registry, you can refer to the documentation on how to do it on Microsoft Learn.

Create the CronJob

In Kubernetes, pods are the smallest deployable units of computing we can create and manage. A pod can have one or more related containers, with shared storage and network resources. Here, we will schedule a job that creates pods running the container image we built above, which in our case sends the emails.

The schedule of the CronJob is defined according to the cron syntax described in the Kubernetes documentation:

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (sun, mon, tue, wed, thu, fri, sat)
# │ │ │ │ │
# * * * * *

Hence, if we would like the email scheduler to be triggered at 8 AM every Friday, we can create a CronJob in the namespace my-namespace with the following YAML file.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
          restartPolicy: OnFailure
  schedule: "0 8 * * fri"

After the CronJob is created, we can proceed to annotate it with the command below.

kubectl annotate cj email-scheduler -n my-namespace jobtype=scheduler frequency=weekly

This helps us to query the CronJobs with jsonpath easily in the future. For example, we can list all annotated CronJobs, together with their jobtype and frequency, using the following command.

kubectl get cj -A -o=jsonpath="{range .items[?(@.metadata.annotations.jobtype)]}{.metadata.namespace},{.metadata.name},{.metadata.annotations.jobtype},{.metadata.annotations.frequency}{'\n'}{end}"

Create ConfigMap

In our email sending programme, we have two environment variables. The non-sensitive one, EMAIL_FROM, can be stored as a key-value pair in a ConfigMap.

apiVersion: v1
kind: ConfigMap
metadata:
  name: email-sending
  namespace: my-namespace
data:
  EMAIL_FROM: DoNotReply@xxxxxx.azurecomm.net

The connection string of the Azure Communication Services resource, on the other hand, is sensitive data, so we will store it in a Secret. Secrets are similar to ConfigMaps but are specifically intended to hold confidential data. We will create a Secret with the command below.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o yaml

It should generate a YAML which is similar to the following.

apiVersion: v1
kind: Secret
metadata:
  name: azure-communication-service
  namespace: my-namespace
data:
  CONNECTION_STRING: yyyyyyyyyy

The Pods created by the CronJob can then consume the ConfigMap and the Secret above as environment variables. To do so, we update the CronJob YAML file as follows.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
            env:
              - name: EMAIL_FROM
                valueFrom:
                  configMapKeyRef:
                    name: email-sending
                    key: EMAIL_FROM
              - name: COMMUNICATION_SERVICES_CONNECTION_STRING
                valueFrom:
                  secretKeyRef:
                    name: azure-communication-service
                    key: CONNECTION_STRING
          restartPolicy: OnFailure
  schedule: "0 8 * * fri"

Using SealedSecret

The problem with using Secrets is that we can’t really commit them to our code repository, because the data is only encoded, not encrypted. Hence, in order to store our Secrets safely, we can use SealedSecret, which encrypts our Secrets. A SealedSecret can only be decrypted by the controller running in the target cluster.

Currently, the SealedSecret Helm Chart is officially supported and hosted on GitHub.

Helm is the package manager for Kubernetes. Helm uses a packaging format called Chart, a collection of files describing a related set of Kubernetes resources. Each Chart comprises one or more Kubernetes manifests. With Charts, developers are able to configure, package, version, and share apps with their dependencies and sensible defaults.

To install Helm on a Windows 11 machine, we can execute the following commands in an Ubuntu on Windows console.

  1. Download the desired version of the Helm release, for example, to download version 3.11.0:
    wget https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
  2. Unpack it:
    tar -zxvf helm-v3.11.0-linux-amd64.tar.gz
  3. Move the Helm binary to the desired location:
    sudo mv linux-amd64/helm /usr/local/bin/helm

Once we have successfully downloaded Helm and have it ready, we can add a Chart repository. In our case, we need to add the repo of SealedSecret Helm Chart.

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets

We should then be able to locate the SealedSecret chart with the following command.

helm search repo sealed-secrets
The chart sealed-secrets/sealed-secrets is one of the charts we can install.

To install the SealedSecret Helm Chart, we will use the following command.

helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets

Once we have done this, we should be able to locate a new service called sealed-secrets-controller under Kubernetes services.

The sealed-secrets-controller service is under the kube-system namespace.

Before we can proceed to use kubeseal to create an encrypted secret, for me at least, there was a need to edit the sealed-secrets-controller service. Otherwise, there will be an error message saying “cannot fetch certificate: no endpoints available for service”. If you encounter the same issue, simply follow the steps mentioned by ghostsquad to edit the service YAML accordingly.

My final edit of the sealed-secrets-controller service YAML.

Next, we can proceed to encrypt our secret, as instructed in the SealedSecret GitHub README.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o json > mysecret-acs.json

kubeseal < mysecret-acs.json > mysealedsecret-acs.json

The generated file mysealedsecret-acs.json should look something as shown below.

The connection string is now encrypted.

To create the Secret resource, we will simply create it based on the file mysealedsecret-acs.json.
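For example, assuming kubectl is already pointed at the target cluster:

kubectl create -f mysealedsecret-acs.json

The controller in the cluster will then decrypt the SealedSecret and create the corresponding Secret in the same namespace.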

This generated file mysealedsecret-acs.json is thus safe to be committed to our code repository.

Going Zero-Trust: Using Kamus and InitContainer

Besides SealedSecret, there is also another open-source solution known as Kamus, a zero-trust secrets encryption and decryption solution for Kubernetes apps. We can also use Kamus to encrypt our secrets and make sure that the secrets can only be decrypted by the desired Kubernetes apps.

Similarly, we can also install Kamus using Helm Chart with the commands below.

helm repo add soluto https://charts.soluto.io
helm upgrade --install kamus soluto/kamus

Kamus will encrypt secrets for only a specific application represented by a ServiceAccount. A service account provides an identity for processes that run in a Pod, and maps to a ServiceAccount object. Hence, we need to create a Service Account with the YAML below.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-kamus-sa

After creating the ServiceAccount, we can update our CronJob YAML to mount it on the pods.
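A minimal sketch of the change is shown below; only the serviceAccountName line is new, and the rest follows the CronJob YAML we already have.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: my-kamus-sa   # mount the ServiceAccount on the pods
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
          restartPolicy: OnFailure
  schedule: "0 8 * * fri"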

Next, we can proceed to download and install Kamus CLI which we can use to encrypt our secret with the following command.

kamus-cli encrypt \
  --secret xxxxxxxx \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url <Kamus URL>

The Kamus URL can be found after we have installed Kamus, as shown in the screenshot below.

Kamus URL in localhost

We need to follow the instructions printed on the screen to get the Kamus URL. To do so, we need to forward a local port to the pod, as shown in the following screenshot.

Successfully forwarded the port, so we can use the URL as the Kamus URL.
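For reference, the port-forward command looks roughly like the following. The service name kamus-encryptor is an assumption based on the default Helm release name; check the actual name with kubectl get svc.

kubectl port-forward svc/kamus-encryptor 8080:80

The Kamus URL would then be http://localhost:8080.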

Hence, let’s say we want to encrypt a secret “alamak”; we can do so as follows.

Since our localhost Kamus URL uses HTTP, we have to specify "--allow-insecure-url".
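Putting it together, the command for this example looks like the following (the local port is assumed to be the one we forwarded above):

kamus-cli encrypt \
  --secret alamak \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url http://localhost:8080 \
  --allow-insecure-url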

After we have encrypted our secret successfully, we need to configure our pod so that it can decrypt the value with the Kamus Decrypt API. The simplest way is to store the encrypted value in a ConfigMap; since it is already encrypted, it is safe to keep it there.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-encrypted-secret
  namespace: my-namespace
data:
  data: rADEn4o8pdN8Zcw40vFS/g==:zCPnDs8AzcTwqkvuu+k8iQ==

Then we can include an init container in our pod. Init containers run to completion before the app containers start, so the app containers only run after the init containers have exited successfully. We can therefore use the Kamus init container to decrypt the secret with the Kamus Decryptor API and write it to a file to be consumed by our app. There is an official demo from the Kamus team on how to do this on GitHub. Please take note that one of their YAML files is outdated, so there is a need to update their deployment.yaml to use "apiVersion: apps/v1" with a proper selector.

Updated deployment.yaml.
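Below is a rough sketch of the pod spec for this pattern, not the exact manifest from the demo; the init container image, its arguments, and the mount paths are placeholders here and should be taken from the official Kamus example.

spec:
  template:
    spec:
      serviceAccountName: my-kamus-sa
      volumes:
      - name: encrypted-secrets
        configMap:
          name: my-encrypted-secret        # the ConfigMap holding the encrypted value
      - name: decrypted-secrets
        emptyDir:
          medium: Memory                   # keep the decrypted file in memory only
      initContainers:
      - name: kamus-init
        image: <Kamus init container image here>
        # The init container also needs the Kamus decryptor URL and the input/output
        # locations; take the exact env/args from the official Kamus demo.
        volumeMounts:
        - name: encrypted-secrets
          mountPath: /encrypted-secrets    # input: encrypted value from the ConfigMap
        - name: decrypted-secrets
          mountPath: /decrypted-secrets    # output: decrypted file for the app
      containers:
      - name: kamus-example
        image: <app image here>
        volumeMounts:
        - name: decrypted-secrets
          mountPath: /secrets              # the app reads the decrypted secret from here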

After the deployment is successful, we can forward port 8081 to the pod in the deployment as shown below.

kubectl port-forward deployment/kamus-example 8081:80

If the deployment is successful, we should be able to see the following when we visit localhost:8081 in our browser, as shown in the screenshot.

Yay, the original text “alamak” is successfully decrypted and displayed.

Deploy Our CronJob

Now that we have everything set up, we can create our Kubernetes CronJob with the YAML file we prepared earlier. For local testing, I have edited the schedule to be “*/2 * * * *”, which means an email will be sent to me every 2 minutes.
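Assuming the manifest is saved as cronjob.yaml (a file name used here just for illustration), the deployment and a quick check can be done as follows:

kubectl apply -f cronjob.yaml
kubectl get cronjob email-scheduler -n my-namespace
kubectl get jobs -n my-namespace --watch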

After waiting for a couple of minutes, I have received a few emails sent via the Azure Communication Services, as shown below.

Now the emails are received every 2 minutes. =)

Hooray, this is how we build a simple Kubernetes CronJob and send emails with Azure Communication Services Email.

Running Our Own NuGet Server on Azure Container Instance (ACI)

In software development, it is a common practice for developers from different teams to create, share, and consume useful code bundled into packages. For .NET, the way we share code is through NuGet packages: a NuGet package is a single ZIP file with the .nupkg extension that contains compiled code (DLLs).

Besides the public nuget.org host, which acts as the central repository of over 100,000 unique packages, NuGet also supports private hosts. A private host is useful, for example, when developers in a team want to produce NuGet packages and share them only with other teams in the same organisation.

There are many open-source NuGet servers available, and BaGet is one of them. BaGet is built with .NET Core, so it can run behind IIS or via Docker.

In this article, we will find out how to host our own NuGet server on Azure using BaGet.

Hosting Locally

Before we talk about hosting NuGet server on the cloud, let’s see how we could do it in our own machine with Docker.

Fortunately, there is an official image for BaGet on Docker Hub. Hence, we can pull it easily with the following command.

docker pull loicsharma/baget

Before we run a new container from the image, we need to create a file named baget.env to store BaGet’s configurations, as shown below.

# The following config is the API Key used to publish packages.
# We should change this to a secret value to secure our own server.
ApiKey=NUGET-SERVER-API-KEY

Storage__Type=FileSystem
Storage__Path=/var/baget/packages
Database__Type=Sqlite
Database__ConnectionString=Data Source=/var/baget/baget.db
Search__Type=Database

Then we also need to have a new folder named baget-data in the same directory as the baget.env file. This folder will be used by BaGet to persist its state.

The folder structure.

As shown in the screenshot above, we have the configuration file and baget-data at the C:\Users\gclin\source\repos\Lunar.NuGetServer directory. So, let’s execute the docker run command from there.

docker run --rm --name nuget-server -p 5000:80 --env-file baget.env -v "C:\Users\gclin\source\repos\Lunar.NuGetServer\baget-data:/var/baget" loicsharma/baget:latest

In the command, we also mount the baget-data folder on our host machine into the container. This is necessary so that data generated by and used by the container, such as package information, can be persisted.

We can browse our own local NuGet server by visiting the URL http://localhost:5000.

Now, let’s assume that we have our packages to publish in the folder named packages. We can publish it easily with dotnet nuget push command, as shown in the screenshot below.

Oops, we are not authorised to publish the package to our own NuGet server.

The push will be rejected, as shown in the screenshot above, if we do not provide the NUGET-SERVER-API-KEY that we defined earlier. Hence, the complete command is as follows.

dotnet nuget push -s http://localhost:5000/v3/index.json -k <NUGET-SERVER-API-KEY here> WordPressRssFeed.1.0.0.nupkg

Once we have done that, we should be able to see the first package on our own NuGet server, as shown below.

Yay, we have our first package in our own local NuGet server!

Moving on to the Cloud

Instead of hosting the NuGet server locally, we can also host it on the cloud so that other developers can access it too. Here, we will be using Azure Container Instances (ACI).

ACI allows us to run Docker containers on-demand in a managed, serverless Azure environment. ACI is currently the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.

The first thing we need to do is create a resource group (in this demo, we will be using a new resource group named resource-group-lunar-nuget) which will contain the ACI, File Share, etc. for this project.
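For example, with the Azure CLI (the location below is just an assumption; pick the region nearest to you):

az group create --name resource-group-lunar-nuget --location southeastasia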

Secondly, we need a way to retrieve and persist state with ACI because, by default, ACI is stateless. When the container is restarted, all of its state will be lost, and the packages we have uploaded to our NuGet server in the container will be lost as well. Fortunately, we can make use of Azure services such as Azure SQL Database and Azure Blob Storage to store the metadata and packages.

For example, we can create a new Azure SQL database called lunar-nuget-db. Then we create an empty Container named nuget under the Storage Account lunarnuget.

Created a new Container nuget under lunarnuget Storage Account.
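If we prefer the Azure CLI over the portal, a rough equivalent for the storage part would be the following (the names reuse the ones above; the SKU and location are assumptions):

az storage account create --name lunarnuget --resource-group resource-group-lunar-nuget --location southeastasia --sku Standard_LRS
az storage container create --account-name lunarnuget --name nuget --auth-mode login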

Thirdly, we need to deploy our Docker container above to ACI. To do so, we first need to log into Azure with the following command.

docker login azure

Once we have logged in, we proceed to create a Docker context associated with ACI so that we can deploy containers into our resource group, resource-group-lunar-nuget.

Creating a new ACI context called lunarnugetacicontext.
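For reference, the command in the screenshot looks roughly like this; the --resource-group flag pre-selects the resource group, otherwise the CLI will prompt for it:

docker context create aci lunarnugetacicontext --resource-group resource-group-lunar-nuget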

After the context is created, we can use the following command to see the current available contexts.

docker context ls
We should be able to see the context we just created in the list.

Next, we need to switch to the new context with the following command because currently, as shown in the screenshot above, the context being used is default (the one with an asterisk).

docker context use lunarnugetacicontext

Fourthly, we can now proceed to create our ACI, which connects to the Azure SQL database and Azure Blob Storage above.

az container create \
    --resource-group resource-group-lunar-nuget \
    --name lunarnuget \
    --image loicsharma/baget \
    --environment-variables <Environment Variables here> \
    --dns-name-label lunar-nuget-01 \
    --ports 80

The environment variables include the following:

  • ApiKey=<NUGET-SERVER-API-KEY here>
  • Database__Type=SqlServer
  • Database__ConnectionString="<Azure SQL connection string here>"
  • Storage__Type=AzureBlobStorage
  • Storage__AccountName=lunarnuget
  • Storage__AccessKey=<Azure Storage key here>
  • Storage__Container=nuget

If there is no issue, after 1 to 2 minutes, the ACI named lunarnuget will be created. Otherwise, we can always use docker ps to get the container ID first and then use the following command to find out the issue, if any.

docker logs <Container ID here>
Printing the logs from one of our containers with docker logs.

Now, if we visit the given FQDN of the ACI, we shall be able to browse the packages on our NuGet server.

That’s all for a quick setup of our own NuGet server on Microsoft Azure. =)


Learning to Learn

The fast pace of change in today’s world means we must understand and quickly respond to changes. Hence, in order to survive and be successful in today’s VUCA world, we need to constantly scan for growth opportunities and be willing to learn new skills.

Working in the software industry has helped me realise that, with all the disruptions in the modern world, especially in technology, ongoing skill acquisition is critical to persistent professional relevance. We shall always look for ways to stretch ourselves to get ahead.

Even though I have been dealing with cloud computing, especially Microsoft Azure, for more than 10 years in my career and study, I still would like to find out how well I compare with my peers instead of assuming that I'm already fine in this area. Hence, with that in mind, I focused on learning Microsoft Azure development skills on Microsoft Learn during the holiday.

Make the Most of Our Limited Learning Time

So much to learn, so little time.

We all have very little time for learning outside of our work. Combining the time we have for learning with the importance of the skills, we get a simple 2×2 matrix with four quadrants.

2×2 matrix to help prioritizing skills to learn (Reference: Marc Zao-Sanders)

I don’t have much time to keep my cloud computing knowledge relevant because nowadays I focus more on desktop application development. Hence, I decided to give myself a one-week break from work and schedule 6 to 7 hours each day for learning during the holiday.

In order to make sure we’re investing our time wisely, we shall focus on learning what is needed. Unless we need the skill for our job or a future position, it’s better not to spend time and money for training on that skill because learning is an investment and we shall figure out what the return will be. This is why I choose to learn more about developing cloud apps on Microsoft Azure because that has been what I’m doing at work in the past decade.

To better achieve my goals in self learning, I also identified the right learning materials before getting started. Since I already had experience developing modern cloud applications earlier in my career, I chose to focus only on going through all the 43 relevant modules available on Microsoft Learn.

Make Learning a Lifelong Habit

No matter which technology era we are in, the world will always belong to people who keep themselves up to date. Hence, lifelong learning is a habit many of us would like to cultivate.

Before we start our learning journey, we need to set realistic goals, i.e. goals that are attainable, because there are limits to what we can learn. In addition, as we discussed earlier, we need to ask ourselves how much time and energy we can give to our self learning. We have to understand that learning a skill takes extreme commitment, so we can’t get very far on the journey of self learning if we don’t plan it properly.

Learning is hard work, but it can also be fun, especially when we are learning together with like-minded people. Don’t try to learn alone; otherwise, self learning can feel overwhelming. For example, besides learning from online tutorials, I also join local software development groups where members are mostly developers who love to share and learn from each other.

Azure Community Singapore, for all who are interested in cloud technology.

Finally, to improve our ability to learn, we also have to unlearn, i.e. choose an alternative mental model or paradigm. We should acknowledge that an old mental model is not always relevant or effective. When we fail, we should also avoid defending ourselves and instead capture the lessons we've learned.

Certification and Exam

I’m now a Microsoft certified Azure Developer Associate after I passed their exam AZ-204 in November 2021.

The exam is not difficult, but it’s definitely not easy either.

The exam tests not only our knowledge in developing cloud solutions with Azure services such as Azure Compute and Storage Account, but also our understanding of cloud security and Azure services troubleshooting.

Clearing all the relevant modules on Microsoft Learn does not guarantee that one will pass the exam easily. In fact, it was the skills and knowledge I gained from work and personal projects that helped me a lot in the exam, for example the Service Bus implementation I learnt last year while building a POC for a container trailer tracking system.

How Microsoft Learn helps in my self learning is that it provides an opportunity for me to learn in a free sandbox environment. In addition, the learning materials on the platform normally reflect best practices. Hence, by learning on Microsoft Learn, I found out some of the mistakes I’ve made in the past and things that I can improve, for example resource management with tags, RBAC, VNet setup, etc.

Notes taken when I was going through the learning materials on Microsoft Learn.

I use Notion to take notes. Notion is a great tool to keep our notes clean and organised. Taking notes helps me to do a last-minute quick revision.

Conclusion

In a fast-moving world, being able to learn new skills helps in our life. There are many ways to learn continuously in our life. Earning certificates by going through challenging exams is just one of the methods. You know what works for yourself, do more of it.

Stay hungry. Stay foolish.


My Vaccination Journey: 2nd Jab

On 3rd of September 2021, Singapore announced that it would offer a third COVID-19 shot to senior citizens. Only one day after that, on 4th of September, I went to the vaccination centre for my second jab of the vaccine.

I took one week of leave the following week to have some rest. I felt tired, and thus I slept as much as I could in the first three days after the vaccination. To maintain my hydration level, I also drank about 2 litres of plain water per day. On top of that, since the weather in Singapore was extremely warm in September, starting from three days before my vaccination day, I also bought a cup of coconut water every day.

Fortunately, for me, there were no other major side effects from the vaccine. Hence, I spent my one-week leave doing many things that I didn’t have the time to do on normal working days.

Activity 1: Microsoft Virtual Training Day

There are many virtual training sessions available currently. The sessions are all offered by Microsoft for free. You can browse the available training sessions on the Training Days website.

On 6th of September, there was a session about Artificial Intelligence (AI) Fundamentals.

AI Fundamental virtual training session.

In the training session, we learnt about concepts such as AI in Azure, common AI workloads, challenges and risks with AI, and the principles of responsible AI.

After that, we learnt how to create predictive models by finding relationships in data using Machine Learning (ML). Using Azure ML Designer, we can visually create an ML pipeline in a drag-and-drop manner.

Creating a predictive pricing model with Azure ML Designer.

Finally, we also learnt how to use Azure Cognitive Services to analyse images, recognise faces, and perform OCR.

Activity 2: Learning PyQt

Before my leave, I was asked by our Senior IT Architect to learn how to build a dashboard as a desktop application using Python. Hence, I also read tutorials about PyQt5 during my leave.

Using the knowledge I learnt from Azure Virtual Training mentioned above, I built a sample PyQt desktop application to perform face detection in a photo. The source code of the application is currently available on my GitHub repo.

Detecting faces in a photo using the Face API in Azure Cognitive Services.

In this learning exercise, I also found out how to apply a Material Design theme to a PyQt5 application using a library such as Qt-Material. In addition, I also learnt how to draw charts using PyQtChart. For example, the emotions of the faces detected in the screenshot above can be drawn as a chart, shown in the following screenshot, using the Face API.

One of the faces in the photo above looks a bit sad.

Activity 3: Playing Games

Besides coding, I also took some time to play computer games. Since version 2.1 of Genshin Impact was released just a few days before my leave, I had more time to clear the new story and have fun fishing with my friends as well.

Let’s fish together!

One-Week Leave

Yup, that’s all I did during my leave. Now I am thinking about how to clear the remaining annual leave I brought over from the previous year.