Recently, we have been working on a project that needs a long-running service to process CPU-intensive data. We chose to build a .NET worker service because .NET allows us to make the service cross-platform and run it on Amazon ECS, for example.
Setup
To keep things simple, in this article we will be running the following code as a worker service.
using Microsoft.Extensions.Hosting;
using NLog;
using NLog.Extensions.Logging;

Console.WriteLine("Hello, World!");

var builder = Host.CreateApplicationBuilder(args);

// Obtain an NLog logger early so that startup errors can also be logged
var logger = LogManager.Setup().GetCurrentClassLogger();

try
{
    builder.Logging.AddNLog();

    logger.Info("Starting");

    using var host = builder.Build();
    await host.RunAsync();
}
catch (Exception e)
{
    logger.Error(e, "Fatal error on start");
    throw;
}
finally
{
    // Ensure to flush and stop internal timers/threads before application exit (avoid segmentation fault on Linux)
    LogManager.Shutdown();
}
So, if we run the code above locally, we should see the following output.
The output of our simplified .NET worker service.
In this project, we are using the NuGet library NLog.Extensions.Logging, so the NLog configuration is by default read from appsettings.json, which is provided below.
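In case the configuration is hard to read from the screenshot, a minimal NLog section along these lines would produce two file targets; the target names, file paths, and layouts here are placeholders, not the exact configuration used in the project.

{
  "NLog": {
    "targets": {
      "allFile": {
        "type": "File",
        "fileName": "logs/nlog-all-${shortdate}.log",
        "layout": "${longdate}|${level:uppercase=true}|${logger}|${message}${exception:format=tostring}"
      },
      "ownFile": {
        "type": "File",
        "fileName": "logs/nlog-own-${shortdate}.log",
        "layout": "${longdate}|${level:uppercase=true}|${logger}|${message}${exception:format=tostring}"
      }
    },
    "rules": [
      { "logger": "*", "minLevel": "Trace", "writeTo": "allFile" },
      { "logger": "Program", "minLevel": "Info", "writeTo": "ownFile" }
    ]
  }
}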
So, we should have two log files generated, with one showing content similar to the console output earlier.
The log file generated by NLog.
Containerisation and the Issue
Since we will be running this worker service on Amazon ECS, we need to containerise it first. The Dockerfile we use is simplified as follows.
Simplified version of the Dockerfile we use.
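While the exact Dockerfile is in the screenshot above, a typical multi-stage Dockerfile for a worker service follows this pattern; the project name and .NET version here are placeholders, and the detail that matters for the discussion below is that the final image is based on the plain .NET runtime, not the ASP.NET Core runtime.

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish "MyWorkerService.csproj" -c Release -o /app/publish

# Final image uses the plain .NET runtime, which does not include ASP.NET Core
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "MyWorkerService.dll"]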
However, when we run the Docker image locally, we receive an error, as shown in the screenshot below, saying "You must install or update .NET to run this application." Aren't we already using the .NET runtime, as stated in our Dockerfile?
No framework is found.
In fact, if we read the error message carefully, it is ASP.NET Core that it could not find. This confused us for a moment because this is a worker service project, not an ASP.NET Core project. So why does it complain about ASP.NET Core?
It turns out we had accidentally included the NLog.Web.AspNetCore NuGet package, which targets only the ASP.NET Core platform. This library is not used in our worker service at all.
NLog.Web.AspNetCore supports only the ASP.NET Core platform.
So, after removing the reference, we can run the Docker image successfully.
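For reference, removing such a stray package reference can be done from the project folder with the .NET CLI.

dotnet remove package NLog.Web.AspNetCore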
Wrap-Up
That's all for how we solved the issue we encountered while developing our .NET worker service.
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
Docker Desktop also includes a standalone Kubernetes server running locally within our Docker instance. It is thus very convenient for developers to perform local testing using Docker Desktop.
Although Docker Desktop remains free for small businesses, personal use, education, and non-commercial open-source projects, it now requires a paid subscription for professional use in larger businesses. Consequently, my friend asked me to suggest a fast and free alternative for development that does not rely on Docker Desktop.
Install Docker Engine on WSL
Before we continue, we need to understand that Docker Engine is the fundamental runtime that powers Docker containers, while Docker Desktop is a higher-level application that includes Docker Engine. Hence, Docker Engine can also be used independently, without Docker Desktop, on a local machine.
In order to install Docker Engine on Windows without using Docker Desktop, we need to utilise the WSL (Windows Subsystem for Linux) to run it.
Step 1: Enable WSL
We have to enable WSL from the Windows Features by checking the option “Windows Subsystem for Linux”, as shown in the screenshot below.
After that, we can press “OK” and wait for the operation to be completed. We will then be asked to restart our computer.
If we already have WSL installed earlier, we can update the built-in WSL to the latest Microsoft version of WSL using the "wsl --update" command in Command Prompt.
Later, if we want to shut down WSL, we can run the command "wsl --shutdown".
Step 2: Install Linux Distribution
After restarting our machine, we can use the Microsoft Store app to look for the Linux distribution we want to use, for example Ubuntu 20.04 LTS, as shown below.
We can then launch Ubuntu 20.04 LTS from our Start Menu. To find out the version of Linux we are using, we can run the command "wslfetch", as shown below.
For first-time users, we will be asked to set a Linux username and password.
Step 3: Install Docker
Firstly, we need to update the Ubuntu APT repository using the “sudo apt update” command.
After we see the message saying that the apt repository has been successfully updated, we can proceed to install Docker. Here, the "-y" option grants permission to install the required packages automatically.
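On Ubuntu, the installation command is typically something like the following; the docker.io package is the Docker Engine packaged by Ubuntu, and the exact command in the original screenshot may differ.

sudo apt install -y docker.io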
Once Docker is installed, we need to create a new user group named "docker" using the command mentioned below.
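The usual commands are along these lines; adding the current user to the new group, so that docker can later be used without sudo, typically follows the group creation.

sudo groupadd docker
sudo usermod -aG docker $USER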
Docker Engine acts as a client-server application with a server that has a long-running daemon process dockerd. dockerd is the command used to start the Docker daemon on Linux systems. The Docker daemon is a background process that manages the Docker environment and is responsible for creating, starting, stopping, and managing Docker containers.
Before we can build images using Docker, we need to start dockerd, as shown in the screenshot below.
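For instance, the daemon can be started in a dedicated WSL terminal and left running there.

sudo dockerd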
Step 4: Using Docker on WSL
Now, we simply need to open another WSL terminal and execute docker commands, such as docker ps, docker build, etc.
With this, we can now push our image to Docker Hub from our local Windows machine.
Configure a local Kubernetes
Now, if we try to run the command-line tool kubectl, we will find that the command is not yet available.
We can use the following commands to install kubectl.
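The exact commands are in the screenshot below, but the standard way from the Kubernetes documentation to install kubectl on x86-64 Linux is along these lines.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client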
The following screenshot shows what we can see after running the commands above.
After we have kubectl, we need to make Kubernetes available on our local machine. To do so, we need to install minikube, a local Kubernetes. minikube can set up a local Kubernetes cluster on macOS, Linux, and Windows.
To install the latest minikube stable release on x86-64 Linux using binary download:
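Per the minikube documentation, the commands are roughly as follows.

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# Start a local cluster once minikube is installed
minikube start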
The high concentration of talented individuals in Samsung SDS is remarkable. I have worked alongside amazing colleagues who are not only friendly but also intelligent and dedicated to their work.
Before attempting the CKAD exam, I received advice on how demanding and challenging the assessment could be. Self-learning can also be daunting, particularly in a stressful work environment. However, I seized the opportunity to embark on my journey towards getting certified and committed myself to the process of kaizen, continuous improvement. It was a journey that required a lot of effort and dedication, but it was worth it.
I took the CKAD certification exam while I was working in Seoul in March 2023. The lovely weather had a soothing impact on my stress levels.
August 2022: Learning Docker Fundamentals
To embark on a successful Kubernetes learning journey, I acknowledged the significance of first mastering the fundamentals of Docker.
Docker is a tool that helps developers build, package, and run applications in a consistent way across different environments. Docker allows us to package our app and its dependencies into a Docker container, and then run it on any computer that has Docker installed.
Docker serves as the foundation for many container-based technologies, including Kubernetes. Hence, understanding Docker fundamentals provides a solid groundwork for comprehending Kubernetes.
I borrowed the free Pluralsight account from my friend, Marvin Heng.
The learning path helped me gain essential knowledge and skills that are directly applicable to Kubernetes. For example, it showed me best practices for optimising Docker images by carefully ordering the Dockerfile instructions and making use of the layer caching mechanism.
In the learning path, we learnt about Docker Swarm. Docker Swarm is a tool that helps us manage and orchestrate multiple Docker containers across multiple machines or servers, making it easier to deploy and scale our apps.
A simple architecture diagram of a system using Kubernetes. (Source: Pluralsight)
After getting a basic understanding of Docker Swarm, we moved on to learning Kubernetes. Kubernetes is similar to Docker Swarm because they are both tools for managing and orchestrating containerised apps. However, Kubernetes has a larger and more mature ecosystem, with more third-party tools and plugins available for tasks like monitoring, logging, and service discovery.
The Linux Foundation provides a neutral and collaborative environment for open-source projects like Kubernetes to thrive, and the CNCF is able to leverage this environment to build a strong community of contributors and users around Kubernetes.
In addition, the Linux Foundation offers a variety of certification exams that allow individuals to demonstrate their knowledge and skills in various areas of open-source technology. CKAD is one of them.
The CKAD exam costs USD 395.00.
The Linux Foundation also offers Kubernetes-related training courses.
The CKAD course is self-paced and can be completed online, making it accessible to learners around the world. It is designed for developers who have some experience with Kubernetes and want to deepen their knowledge and skills in preparation for the CKAD certification exam.
The CKAD course includes a combination of lectures, hands-on exercises, and quizzes to reinforce the concepts covered. It covers a wide range of topics related to Kubernetes, including:
Kubernetes architecture;
Build and design;
Deployment configuration;
Application exposure;
Application troubleshooting;
Security in Kubernetes;
Helm.
Kubectl, the command-line client used to interact with Kubernetes clusters. (Image Credit: The Linux Foundation Training)
January 2023: Going through CKAD Exercises and Killer Shell
Following approximately one month of dedicated effort, I successfully completed the online course and proudly received my course completion certificate on 7th of January 2023. So, throughout the remainder of January, I directed my attention towards exam preparation by diligently working through the various online exercises.
The exercises comprise numerous questions; therefore, my suggestion would be to devote one week to going through them thoroughly, allocating an hour each day to tackle a subset of the questions.
During my 10-day Chinese New Year holiday, I dedicated my time towards preparing for the exam. (Image Credit: Global Times)
Furthermore, upon purchasing the CKAD exam, we are entitled to two complimentary simulator sessions on Killer Shell (killer.sh), both containing the same set of questions. Therefore, it is advisable to plan how to make the best use of them.
After going through all the questions in the CKAD exercises mentioned above, I proceeded to take my first killer.sh session. The simulator features an interface that closely resembles the new remote desktop exam UI, giving me invaluable insights into how the actual exam would be conducted.
The killer.sh session allocates a total of 2 hours for a set of 22 questions. Similar to the actual exam, the session tests our hands-on experience and practical knowledge of Kubernetes, so we are expected to demonstrate our proficiency by completing a series of tasks in a given Kubernetes environment.
The simulator questions are comparatively more challenging than those in the actual exam. In my first session, I was able to score only 50%. Upon analysing and rectifying my errors, I resolved to invest an additional month to study and prepare more comprehensively.
Scenario-based questions like this are expected in the CKAD exam.
February 2023: Working on Cloud Migration Project
Upon my return from the Chinese New Year holiday, to my dismay, I discovered that I had been assigned to a cloud migration project at work.
The project presented me with an exceptional chance to deploy an ASP.NET solution on Kubernetes on Google Cloud Platform, allowing me to put into practice what I had learned and thereby fortify my knowledge of Kubernetes-related topics.
Furthermore, I was lucky to have had the opportunity to engage in fruitful discussions with my colleagues, through which I was able to learn more about Kubernetes by presenting my work to them.
March 2023: The Exam
In early March, I was assigned to visit Samsung SDS in Seoul until the end of the month. Therefore, I decided to seize the opportunity to complete my second killer.sh simulation session. This time, I managed to score more than the passing score, which is 66%.
After that, I dedicated an extra week to reviewing the questions in the CKAD exercises on GitHub before proceeding to take the actual CKAD exam.
The actual CKAD exam consists of 16 questions that need to be completed within 2 hours. Even though the exam is online and open book, we are not allowed to refer to any resources other than the Kubernetes documentation and the Helm documentation during the exam.
In addition, the exam has been updated to use the PSI Bridge, where we get access to a remote desktop instead of just a remote terminal. There is an article about it. This should not be unfamiliar to you if you have gone through the killer.sh sessions.
Unlike the previous exam UI, the new exam UI provides us access to a full remote XFCE desktop, enabling us to run the terminal application and Firefox to open the approved online documentation. Thus, having multiple monitors and bookmarking the documentation pages on our personal Internet browser before the exam are no longer helpful.
Even though I was 30 minutes early for the exam, I faced a technical issue with Chrome on my laptop that caused me to be 5 minutes late for the online exam. Fortunately, my exam time was not reduced because of the delay.
The issue was related to the need to end the "remoting_host.exe" process used by Chrome Remote Desktop in order to use a specific browser for the exam. Despite trying to locate it in Task Manager, I was unable to do so. After searching on Google, I found a solution for Windows users: we need to execute the command "net stop chromoting" to stop "remoting_host.exe".
During my stay in Seoul, my room at Shilla Stay Seocho served as my exam location.
CKAD certification exam is an online proctored exam. This means that it can be taken remotely but monitored by a proctor via webcam and microphone to ensure the integrity of the exam. Hence, to ensure a smooth online proctored exam experience, it is crucial to verify that our webcam is capable of capturing the text on our ID, such as our passport, and that we are using a stable, high-speed Internet connection.
During the exam, the first thing I did was to create a few aliases and environment variables, as listed below.
alias k="kubectl "
alias kn="kubectl config set-context --current --namespace"
export dry="--dry-run=client -o yaml"
export now="--force --grace-period 0"
These aliases helped me complete the commands quicker. In addition, whenever possible, I also used an imperative command to create a YAML file using kubectl.
By working on the solution based on the generated YAML file, I am able to save a significant amount of time as opposed to writing the entire YAML file from scratch.
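For example, with the aliases above, a Pod manifest can be generated imperatively and then edited, instead of being written by hand; the names here are placeholders.

# Generate a Pod manifest without creating the Pod
k run nginx-pod --image=nginx $dry > pod.yaml

# Edit pod.yaml as required by the question, then apply it
k apply -f pod.yaml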
I completed only 15 questions, leaving 1 unanswered. I chose to forgo a 9-mark question that I was not confident in answering correctly, in order to have more time to focus on the other questions. In the end, I still managed to score 78%.
The passing score for CKAD is 66% out of 100%.
Moving Forward: Beyond the Certification
In conclusion, obtaining certification in one’s chosen field can be a valuable asset for personal and professional development. In my experience, it has helped me feel more confident in my abilities and given me a sense of purpose in my career.
However, it is essential to continue learning and growing, both through practical experience and ongoing education, in order to stay up to date with the latest developments in the field. The combination of certification, practical experience, and ongoing learning can help us achieve our career goals and excel in our roles as software engineers.
To begin, we need to create a new Email Communication Services resource from the marketplace, as shown in the screenshot below.
US is the only option for the Data Location now in Email Communication Services.
Take note that currently we can only choose United States as the Data Location, which determines where the data will be stored at rest. This cannot be changed after the resource has been created. It also means that the Azure Communication Services resource, which we need to configure next, has to store its data in the United States as well. We will talk about this later.
After getting the domain, we need to connect Azure Communication Services to it to send emails.
As mentioned earlier, we need to make sure that the Azure Communication Services resource also has United States as its Data Location. Otherwise, we will not be able to link the email domain for email sending.
Before we begin, we have to get the connection string for the Azure Communication Services resource.
Getting connection string of the Azure Communication Service.
Here I have the following code to send a sample email to myself.
using Azure.Communication.Email;
using Azure.Communication.Email.Models;

// Read the connection string and sender address from environment variables
string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING") ?? string.Empty;
string emailFrom = Environment.GetEnvironmentVariable("EMAIL_FROM") ?? string.Empty;

if (connectionString != string.Empty)
{
    EmailClient emailClient = new EmailClient(connectionString);

    // Compose the email subject and plain-text body
    EmailContent emailContent = new EmailContent("Welcome to Azure Communication Service Email APIs.");
    emailContent.PlainText = "This email message is sent from Azure Communication Service Email using .NET SDK.";

    List<EmailAddress> emailAddresses = new List<EmailAddress> {
        new EmailAddress("gclin009@hotmail.com") { DisplayName = "Goh Chun Lin" }
    };
    EmailRecipients emailRecipients = new EmailRecipients(emailAddresses);

    EmailMessage emailMessage = new EmailMessage(emailFrom, emailContent, emailRecipients);
    SendEmailResult emailResult = emailClient.Send(emailMessage, CancellationToken.None);
}
Setting environment variables for local debugging purpose.
Tada, there should be an email successfully sent out as instructed.
Email is successfully sent and received. =)
Containerise the Console App
Next, what we need to do is containerise our console app above.
Assume that our console app is called MyConsoleApp, then we will prepare a Dockerfile as follows.
# Base runtime image for the final container
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

# Build stage: restore and compile with the full .NET SDK
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyConsoleApp.csproj", "."]
RUN dotnet restore "./MyConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyConsoleApp.csproj" -c Release -o /app/build

# Publish stage: produce the deployable output
FROM build AS publish
RUN dotnet publish "MyConsoleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

# Final image: copy the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]
We can then publish it to Docker Hub for consumption later.
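For example, with a placeholder Docker Hub account and image name:

docker build -t mydockerhubid/myconsoleapp:1.0 .
docker push mydockerhubid/myconsoleapp:1.0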
In Kubernetes, pods are the smallest deployable units of computing that we can create and manage. A pod can have one or more related containers, with shared storage and network resources. Here, we will schedule a job so that it creates pods running the container image we built above to do the actual work, which in our case is to send emails.
Hence, if we would like the email scheduler to be triggered at 8am every Friday, we can create a CronJob in the namespace my-namespace with a YAML file like the following.
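A sketch of such a manifest is shown below; the container image is a placeholder, while the name email-scheduler and the namespace my-namespace follow the commands used later in this article. In cron syntax, 8am every Friday is "0 8 * * 5".

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  schedule: "0 8 * * 5"          # 8am every Friday
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: email-sender
              image: mydockerhubid/myconsoleapp:1.0   # placeholder image on Docker Hub
          restartPolicy: OnFailure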
After the CronJob is created, we can proceed to annotate it with the command below.
kubectl annotate cj email-scheduler jobtype=scheduler frequency=weekly
This helps us to query the cron jobs with jsonpath easily in the future. For example, to list all cron jobs together with their jobtype and frequency annotations, we can use the following command.
kubectl get cj -A -o=jsonpath="{range .items[?(@.metadata.annotations.jobtype)]}{.metadata.namespace},{.metadata.name},{.metadata.annotations.jobtype},{.metadata.annotations.frequency}{'\n'}{end}"
Create ConfigMap
In our email sending programme, we have two environment variables. Hence, we can create a ConfigMap to store the non-sensitive value as a key-value pair, and a Secret for the connection string.
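For example, something along these lines; the resource names are illustrative, while the keys match the environment variables used in the C# code earlier, and the sender address is a placeholder.

kubectl -n my-namespace create configmap email-config \
  --from-literal=EMAIL_FROM="DoNotReply@ourdomain.azurecomm.net"

kubectl -n my-namespace create secret generic email-secret \
  --from-literal=COMMUNICATION_SERVICES_CONNECTION_STRING="<connection string>"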
The Pods created by the CronJob can then consume the ConfigMap and Secret above as environment variables, so we need to update the CronJob YAML file as follows.
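For instance, the container section of the CronJob could reference them with envFrom; this is a sketch using the illustrative names above.

          containers:
            - name: email-sender
              image: mydockerhubid/myconsoleapp:1.0   # placeholder image
              envFrom:
                - configMapRef:
                    name: email-config    # provides EMAIL_FROM
                - secretRef:
                    name: email-secret    # provides COMMUNICATION_SERVICES_CONNECTION_STRING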
The problem with using Secrets is that we cannot really commit them to our code repository, because the data is only encoded, not encrypted. Hence, in order to store our Secrets safely, we need to use SealedSecret, which helps us encrypt our Secrets. A SealedSecret can only be decrypted by the controller running in the target cluster.
Unpack it: tar -zxvf helm-v3.2.0-linux-amd64.tar.gz
Move the Helm binary to the desired location: sudo mv linux-amd64/helm /usr/local/bin/helm
Once we have successfully downloaded Helm and have it ready, we can add a chart repository. In our case, we need to add the repo of the SealedSecret Helm chart.
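Assuming the upstream sealed-secrets chart is used, the commands typically look like this.

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets -n kube-system sealed-secrets/sealed-secrets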
The Kamus URL can be found after we install Kamus, as shown in the screenshot below.
Kamus URL in localhost
We need to follow the instructions printed on the screen to get the Kamus URL. To do so, we need to forward a local port to the pod, as shown in the following screenshot.
Successfully forwarded the port, so the URL can be used as the Kamus URL.
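As a rough example, assuming the Kamus API is exposed by a service named kamus on port 80 (the actual service name and port are printed in the Helm release notes), the port-forward command would be something like the following, giving a Kamus URL of http://localhost:8888.

kubectl port-forward svc/kamus 8888:80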
Hence, let's say we want to encrypt a secret "alamak"; we can do so as follows.
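Using the Kamus CLI, the encryption command is roughly as follows; the service account and namespace are placeholders for whichever pod identity will decrypt the value, and the Kamus URL is the one obtained from the port-forward above.

kamus-cli encrypt \
  --secret alamak \
  --service-account default \
  --namespace default \
  --kamus-url http://localhost:8888 \
  --allow-insecure-url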
Since our localhost Kamus URL is using HTTP, we have to specify "--allow-insecure-url".
After we have encrypted our secret successfully, we need to configure our pod accordingly so that it can decrypt the value with the Kamus Decrypt API. The simplest way is to store the encrypted value in a ConfigMap; since it is already encrypted, it is safe to keep it there.
If the deployment is successful, we should see the following when we visit localhost:8081 in our Internet browser.
Yay, the original text “alamak” is successfully decrypted and displayed.
Deploy Our CronJob
Now, since we have everything set up, we can create our Kubernetes CronJob with the YAML file we prepared earlier. For local testing, I edited the schedule to be "*/2 * * * *", which means an email will be sent to me every 2 minutes.
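Applying the manifest and checking that Jobs are being created can be done roughly like this; the file name is a placeholder.

kubectl apply -f email-scheduler-cronjob.yaml
kubectl -n my-namespace get cronjob,job,pod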
After waiting for a couple of minutes, I received a few emails sent via Azure Communication Services, as shown below.
Now the emails are received every 2 minutes. =)
Hooray, this is how we build a simple Kubernetes CronJob and send emails with the Azure Email Communication Services.