[KOSD] Learning from Issues: Troubleshooting Containerisation for .NET Worker Service

Recently, we have been working on a project that needs a long-running service for processing CPU-intensive data. We chose to build a .NET worker service because, with .NET, we can make our service cross-platform and run it on Amazon ECS, for example.

Setup

To simplify, in this article, we will be running the following code as a worker service.

using Microsoft.Extensions.Hosting;

using NLog;
using NLog.Extensions.Logging;

Console.WriteLine("Hello, World!");

var builder = Host.CreateApplicationBuilder(args);

var logger = LogManager.Setup()
    .GetCurrentClassLogger();

try
{
    builder.Logging.AddNLog();

    logger.Info("Starting");

    using var host = builder.Build();
    await host.RunAsync();
}
catch (Exception e)
{
    logger.Error(e, "Fatal error to start");
    throw;
}
finally
{
    // Ensure to flush and stop internal timers/threads before application exit (avoids segmentation fault on Linux)
    LogManager.Shutdown();
}

So, if we run the code above locally, we should see the following output.

The output of our simplified .NET worker service.

In this project, we are using the NuGet library NLog.Extensions.Logging, so the NLog configuration is by default read from appsettings.json, which is provided below.

{
  "NLog": {
    "internalLogLevel": "Info",
    "internalLogFile": "Logs\\internal-nlog.txt",
    "extensions": [
      { "assembly": "NLog.Extensions.Logging" }
    ],
    "targets": {
      "allfile": {
        "type": "File",
        "fileName": "C:\\Users\\gclin\\source\\repos\\Lunar.AspNetContainerIssue\\Logs\\nlog-all-${shortdate}.log",
        "layout": "${longdate}|${event-properties:item=EventId_Id}|${uppercase:${level}}|${logger}|${message} ${exception:format=tostring}"
      }
    },
    "rules": [
      {
        "logger": "*",
        "minLevel": "Trace",
        "writeTo": "allfile"
      },
      {
        "logger": "Microsoft.*",
        "maxLevel": "Info",
        "final": true
      }
    ]
  }
}

So, we should have two log files generated, with one showing something similar to the output on the console earlier.

The log file generated by NLog.

Containerisation and the Issue

Since we will be running this worker service on Amazon ECS, we need to containerise it first. The Dockerfile we use is simplified as follows.

Simplified version of the Dockerfile we use.
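
The original Dockerfile screenshot is not reproduced here. A minimal sketch of such a Dockerfile, assuming .NET 8 and a project named Lunar.AspNetContainerIssue (the name that appears in the log path above), would be along these lines.

# A sketch only, not the exact Dockerfile from the screenshot.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Note: this is the base .NET runtime image, not the ASP .NET Core runtime image.
FROM mcr.microsoft.com/dotnet/runtime:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Lunar.AspNetContainerIssue.dll"]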

However, when we run the Docker image locally, we receive an error, as shown in the screenshot below, saying “You must install or update .NET to run this application.” But aren’t we already using the .NET runtime, as stated in our Dockerfile?

No framework is found.

In fact, if we read the error message carefully, it is the ASP .NET Core runtime that could not be found. This confused us for a moment because this is a worker service project, not an ASP .NET project. So why does it complain about ASP .NET Core?

Solution

This problem happens because one of the NuGet packages in our project relies on the ASP .NET Core runtime being present, as discussed in one of the StackOverflow threads.

We had accidentally included the NLog.Web.AspNetCore NuGet package, which supports only the ASP .NET Core platform. This library is not used in our worker service at all.

NLog.Web.AspNetCore supports only ASP .NET platform.
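
The reference can be deleted from the .csproj file directly, or removed with the .NET CLI:

dotnet remove package NLog.Web.AspNetCore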

So, after we remove the reference, we can now run the Docker image successfully.

Wrap-Up

That’s all for how we solved the issue we encountered when developing our .NET worker service.


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

Using Docker and Kubernetes without Docker Desktop on Windows 11

Last week, my friend who is working on a microservice project at work suddenly messaged me saying that he realised Docker Desktop is no longer free.

Docker Desktop is basically an app that can be installed on our Windows machine to build and share containerised apps and microservices. It provides a straightforward GUI to manage our containers and images directly from our local machine.

Docker Desktop also includes a standalone Kubernetes server running locally within our Docker instance. It is thus very convenient for the developers to perform local testing easily using Docker Desktop.

Despite Docker Desktop remaining free for small businesses, personal use, education, and non-commercial open source projects, it now requires a paid subscription for professional use in larger businesses. Consequently, my friend expressed a desire for me to suggest a fast and free alternative for development without relying on Docker Desktop.

Install Docker Engine on WSL

Before we continue, we need to understand that Docker Engine is the fundamental runtime that powers Docker containers, while Docker Desktop is a higher-level application that includes Docker Engine. Hence, Docker Engine can also be used independently, without Docker Desktop, on a local machine.

Fortunately, Docker Engine is licensed under the Apache License, Version 2.0. Thus, we are allowed to use it in our commercial products for free.

In order to install Docker Engine on Windows without using Docker Desktop, we need to utilise the WSL (Windows Subsystem for Linux) to run it.

Step 1: Enable WSL

We have to enable WSL from the Windows Features by checking the option “Windows Subsystem for Linux”, as shown in the screenshot below.

After that, we can press “OK” and wait for the operation to be completed. We will then be asked to restart our computer.

If we already have WSL installed earlier, we can update the built-in WSL to Microsoft’s latest version of WSL using the “wsl --update” command in Command Prompt.

Later, if we want to shut down WSL, we can run the command “wsl --shutdown”.

Step 2: Install Linux Distribution

After restarting our machine, we can use the Microsoft Store app to look for the Linux distribution we want to use, for example Ubuntu 20.04 LTS, as shown below.

We can then launch Ubuntu 20.04 LTS from our Start Menu. To find out the version of Linux we are using, we can run the command “wslfetch”, as shown below.

When launching the distribution for the first time, we will be asked to set a Linux username and password.

Step 3: Install Docker

Firstly, we need to update the Ubuntu APT repository using the “sudo apt update” command.

After we see the message saying that the APT repository has been successfully updated, we can proceed to install Docker, as shown below. Here, the “-y” option is used to grant permission to install the required packages automatically.
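
The exact command from the original screenshot is not shown here; on Ubuntu, installing Docker from the default APT repository would look like this:

# Install the Docker package available in Ubuntu's own repository.
sudo apt install -y docker.io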

When Docker is installed, we need to create a new user group with the name “docker” by utilising the commands mentioned below.
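
The command itself is not captured in the text; the standard way to create the group and add the current user to it is:

# Create the docker group and add our user so docker can be run without sudo.
sudo groupadd docker
sudo usermod -aG docker $USER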

Docker Engine acts as a client-server application with a server that has a long-running daemon process dockerd. dockerd is the command used to start the Docker daemon on Linux systems. The Docker daemon is a background process that manages the Docker environment and is responsible for creating, starting, stopping, and managing Docker containers.

Before we can build images using Docker, we need to use dockerd, as shown in the screenshot below.
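
The screenshot is not reproduced here; starting the daemon manually in a WSL terminal looks like this (it stays running in the foreground, so we leave this terminal open):

sudo dockerd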

Step 4: Using Docker on WSL

Now, we simply need to open another WSL terminal and execute docker commands, such as docker ps, docker build, etc.

With this, we can now push our image to Docker Hub from our local Windows machine.
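
For example, with a hypothetical image name (replace the Docker Hub username and tag with our own):

docker login
docker build -t <Docker Hub username>/hello-app:1.0 .
docker push <Docker Hub username>/hello-app:1.0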

Configure a local Kubernetes

Now, if we try to run the command-line tool kubectl, we will find that the command is not yet available.

We can use the following commands to install kubectl.

$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
$ kubectl version --client

The following screenshot shows what we can see after running the commands above.

After we have kubectl, we need to make Kubernetes available on our local machine. To do so, we need to install minikube, a local Kubernetes. minikube can set up a local Kubernetes cluster on macOS, Linux, and Windows.

To install the latest minikube stable release on x86-64 Linux using binary download:

$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

$ sudo install minikube-linux-amd64 /usr/local/bin/minikube

The following are the results of installing minikube. We then run minikube by executing the command “minikube start”.

We can now run some basic kubectl commands, as shown below.
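
The screenshot is not reproduced here; basic commands along these lines confirm that the local cluster is up:

kubectl get nodes
kubectl get pods --all-namespaces
kubectl cluster-info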


Getting Certified as Kubernetes App Developer: My Kaizen Journey

The high concentration of talented individuals in Samsung SDS is remarkable. I have worked alongside amazing colleagues who are not only friendly but also intelligent and dedicated to their work.

In July 2022, I had numerous discussions with my visionary and supportive seniors about the future of cloud computing. They eventually encouraged me to continue my cloud certification journey by taking the Certified Kubernetes Application Developer (CKAD) certification exam.

Before attempting the CKAD exam, I received advice on how demanding and challenging the assessment could be. Self-learning can also be daunting, particularly in a stressful work environment. However, I seized the opportunity to embark on my journey towards getting certified and committed myself to the process of kaizen, continuous improvement. It was a journey that required a lot of effort and dedication, but it was worth it.

I took the CKAD certification exam while I was working in Seoul in March 2023. The lovely weather had a soothing impact on my stress levels.

August 2022: Learning Docker Fundamentals

To embark on a successful Kubernetes learning journey, I acknowledged the significance of first mastering the fundamentals of Docker.

Docker is a tool that helps developers build, package, and run applications in a consistent way across different environments. Docker allows us to package our app and its dependencies into a Docker container, and then run it on any computer that has Docker installed.

Docker serves as the foundation for many container-based technologies, including Kubernetes. Hence, understanding Docker fundamentals provides a solid groundwork for comprehending Kubernetes.

There is a learning path on Pluralsight specially designed for app developers who are new to Docker so that they can learn more about developing apps with Docker.

I borrowed the free Pluralsight account from my friend, Marvin Heng.

The learning path helped me gain essential knowledge and skills that are directly applicable to Kubernetes. For example, it showed me the best practices of optimising Docker images by carefully placing the Docker instructions and making use of the caching mechanism, as sketched below.
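
As an illustration only (not taken from the course material), placing rarely-changing instructions first lets Docker reuse cached layers on subsequent builds:

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src

# Copy only the project file and restore first; this layer is rebuilt
# only when package references change.
COPY *.csproj ./
RUN dotnet restore

# Copy the rest of the source; code edits do not invalidate the restore layer above.
COPY . .
RUN dotnet publish -c Release -o /app/publish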

In the learning path, we learnt about Docker Swarm. Docker Swarm is a tool that helps us manage and orchestrate multiple Docker containers across multiple machines or servers, making it easier to deploy and scale our apps.

A simple architecture diagram of a system using Kubernetes. (Source: Pluralsight)

After getting a basic understanding of Docker Swarm, we moved on to learning Kubernetes. Kubernetes is similar to Docker Swarm because they are both tools for managing and orchestrating containerised apps. However, Kubernetes has a larger and more mature ecosystem, with more third-party tools and plugins available for tasks like monitoring, logging, and service discovery.

December 2022: Attending LXF Kubernetes Course

Kubernetes is a project that was originally developed by Google, but it is now maintained by the Cloud Native Computing Foundation (CNCF), which is a sub-foundation of the Linux Foundation.

The Linux Foundation provides a neutral and collaborative environment for open-source projects like Kubernetes to thrive, and the CNCF is able to leverage this environment to build a strong community of contributors and users around Kubernetes.

In addition, the Linux Foundation offers a variety of certification exams that allow individuals to demonstrate their knowledge and skills in various areas of open-source technology. CKAD is one of them.

The CKAD exam costs USD 395.00.

The Linux Foundation also offers Kubernetes-related training courses.

The CKAD course is self-paced and can be completed online, making it accessible to learners around the world. It is designed for developers who have some experience with Kubernetes and want to deepen their knowledge and skills in preparation for the CKAD certification exam.

The CKAD course includes a combination of lectures, hands-on exercises, and quizzes to reinforce the concepts covered. It covers a wide range of topics related to Kubernetes, including:

  • Kubernetes architecture;
  • Build and design;
  • Deployment configuration;
  • App exposing;
  • App troubleshooting;
  • Security in Kubernetes;
  • Helm.
Kubectl, the command-line client used to interact with Kubernetes clusters. (Image Credit: The Linux Foundation Training)

January 2023: Going through CKAD Exercises and Killer Shell

Following approximately one month of dedicated effort, I successfully completed the online course and proudly received my course completion certificate on the 7th of January 2023. Throughout the remainder of January, I then directed my attention towards exam preparation by diligently working through various online exercises.

The initial series of exercises that I went through was the CKAD exercises thoughtfully curated by a skilled software developer, dgkanatsios, and made available on GitHub. The exercises cover the following areas:

  • Core concepts;
  • Multi-container pods;
  • Pod design;
  • Configuration;
  • Observability;
  • Services and networking;
  • State persistence;
  • Helm;
  • Custom Resource Definitions.

The exercises comprise numerous questions; my suggestion would be to devote one week to thoroughly delving into them, allocating an hour each day to tackle a subset of the questions.

During my 10-day Chinese New Year holiday, I dedicated my time towards preparing for the exam. (Image Credit: Global Times)

Furthermore, upon purchasing the CKAD exam, we are entitled to two complimentary simulator sessions for the exam on Killer Shell (killer.sh), both containing the same set of questions. Therefore, it is advisable to strategise and plan our approach towards making optimal use of them.

After going through all the questions in the CKAD exercises mentioned above, I proceeded to undertake the first killer.sh exam session. The simulator features an interface that closely resembles the new remote desktop Exam UI, thereby providing me with invaluable insights into how the actual exam would be conducted.

The killer.sh session is allocated a total of 2 hours, encompassing a set of 22 questions. Similar to the actual exam, the session tests our hands-on experience and practical knowledge of Kubernetes, so we are expected to demonstrate our proficiency by completing a series of tasks in a given Kubernetes environment.

The simulator questions are comparatively more challenging than those in the actual exam. In my initial session, I was able to score only 50%. Upon analysing and rectifying my errors, I resolved to invest an additional month studying and preparing more comprehensively.

Scenario-based questions like this are expected in the CKAD exam.

February 2023: Working on Cloud Migration Project

Upon my return from the Chinese New Year holiday, to my dismay, I discovered that I had been assigned to a cloud migration project at work.

The project presented me with an exceptional chance to deploy an ASP .NET solution on Kubernetes on Google Cloud Platform, allowing me to put into practice what I have learned and thereby fortify my knowledge of Kubernetes-related topics.

Furthermore, I was lucky to have had the opportunity to engage in fruitful discussions with my colleagues, through which I was able to learn more from them about Kubernetes by presenting my work.

March 2023: The Exam

In early March, I was assigned to visit Samsung SDS in Seoul until the end of the month. Therefore, I decided to seize the opportunity to complete my second killer.sh simulator session. This time, I managed to score more than the passing score, which is 66%.

After that, I dedicated an extra week to reviewing the questions in the CKAD exercises on GitHub before proceeding to take the actual CKAD exam.

The actual CKAD exam consists of 16 questions that need to be completed within 2 hours. Even though the exam is online and open book, we are not allowed to refer to any resources other than the Kubernetes documentation and the Helm documentation during the exam.

In addition, the exam has been updated to use the PSI Bridge, where we get access to a remote desktop instead of just a remote terminal. There is an article about it. This should not be unfamiliar to you if you have gone through the killer.sh exams.

The new exam UI now provides us access to a full remote XFCE desktop, enabling us to run the terminal application and Firefox to open the approved online documentation, unlike the previous exam UI. Thus, having multiple monitors and bookmarking the documentation pages on our personal Internet browser before the exam are no longer helpful.

The PSI Bridge™ (Image Credit: YouTube)

Before taking the exam, there are many more key points mentioned in the Candidate Handbook, the Important Instructions, and the PSI Bridge System Requirements that can help ensure success. Please make sure you have gone through them and have your machine and environment ready for the exam.

Even though I was 30 minutes early for the exam, I faced a technical issue with Chrome on my laptop that caused me to be 5 minutes late for the online exam. Fortunately, my exam time was not reduced due to the delay.

The issue was related to the need to end the “remoting_host.exe” application used by Chrome Remote Desktop in order to use the designated browser for the exam. Despite trying to locate it in Task Manager, I was unable to do so. After searching on Google, I found a solution for Windows users: executing the command “net stop chromoting” stops the “remoting_host.exe” process.

During my stay in Seoul, my room at Shilla Stay Seocho served as my exam location.

CKAD certification exam is an online proctored exam. This means that it can be taken remotely but monitored by a proctor via webcam and microphone to ensure the integrity of the exam. Hence, to ensure a smooth online proctored exam experience, it is crucial to verify that our webcam is capable of capturing the text on our ID, such as our passport, and that we are using a stable, high-speed Internet connection.

During the exam, the first thing I did was to create a few aliases, as listed below.

alias k="kubectl "
alias kn="kubectl config set-context --current --namespace"
export dry="--dry-run=client -o yaml"
export now="--force --grace-period 0"

These aliases helped me to complete the commands more quickly. In addition, whenever possible, I also used an imperative command to create a YAML file using kubectl, as shown in the example below.
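
For instance, with the aliases and variables above, a Deployment manifest can be generated rather than written from scratch (the deployment name and image here are hypothetical):

k create deployment web --image=nginx $dry > web-deployment.yaml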

By working on the solution based on the generated YAML file, I am able to save a significant amount of time as opposed to writing the entire YAML file from scratch.

I completed only 15 questions with 1 not answered. I chose to forgo a 9-mark question that I was not confident in answering correctly, in order to have more time to focus on other questions. In the end, I still managed to score 78% out of 100%.

The passing score for CKAD is 66% out of 100%.

Moving Forward: Beyond the Certification

In conclusion, obtaining certification in one’s chosen field can be a valuable asset for personal and professional development. In my experience, it has helped me feel more confident in my abilities and given me a sense of purpose in my career.

However, it is essential to remember that it is crucial to continue learning and growing, both through practical experience and ongoing education, in order to stay up-to-date with the latest developments in the field. The combination of certification, practical experience, and ongoing learning can help us to achieve our career goals and excel in our role as a software engineer.

Together, we learn better.

Running Our Own NuGet Server on Azure Container Instance (ACI)

In software development, it is a common practice for developers from different teams to create, share, and consume useful code which is bundled into packages. For .NET, the way we share code is using NuGet packages: a NuGet package is a single ZIP file with the .nupkg extension that contains compiled code (DLLs).

Besides the public nuget.org host, which acts as the central repository of over 100,000 unique packages, NuGet also supports private hosts. A private host is useful because, for example, it allows developers working in a team to produce NuGet packages and share them with other teams in the same organisation.

There are many open-source NuGet servers available, and BaGet is one of them. BaGet is built using .NET Core, thus it is able to run behind IIS or via Docker.

In this article, we will find out how to host our own NuGet server on Azure using BaGet.

Hosting Locally

Before we talk about hosting a NuGet server on the cloud, let’s see how we could do it on our own machine with Docker.

Fortunately, there is an official image for BaGet on Docker Hub. Hence, we can pull it easily with the following command.

docker pull loicsharma/baget

Before we run a new container from the image, we need to create a file named baget.env to store BaGet’s configurations, as shown below.

# The following config is the API Key used to publish packages.
# We should change this to a secret value to secure our own server.
ApiKey=NUGET-SERVER-API-KEY

Storage__Type=FileSystem
Storage__Path=/var/baget/packages
Database__Type=Sqlite
Database__ConnectionString=Data Source=/var/baget/baget.db
Search__Type=Database

Then we also need to have a new folder named baget-data in the same directory as the baget.env file. This folder will be used by BaGet to persist its state.

The folder structure.

As shown in the screenshot above, we have the configuration file and baget-data at the C:\Users\gclin\source\repos\Lunar.NuGetServer directory. So, let’s execute the docker run command from there.

docker run --rm --name nuget-server -p 5000:80 --env-file baget.env -v "C:\Users\gclin\source\repos\Lunar.NuGetServer\baget-data:/var/baget" loicsharma/baget:latest

In the command, we also mount the baget-data folder on our host machine into the container. This is necessary so that data generated by and used by the container, such as package information, can be persisted.

We can browse our own local NuGet server by visiting the URL http://localhost:5000.

Now, let’s assume that we have our packages to publish in the folder named packages. We can publish them easily with the dotnet nuget push command, as shown in the screenshot below.

Oops, we are not authorised to publish the package to our own NuGet server.

Our push will be rejected, as shown in the screenshot above, if we do not provide the NUGET-SERVER-API-KEY that we defined earlier. Hence, the complete command is as follows.

dotnet nuget push -s http://localhost:5000/v3/index.json -k <NUGET-SERVER-API-KEY here> WordPressRssFeed.1.0.0.nupkg

Once we have done that, we should be able to see the first package on our own NuGet server, as shown below.

Yay, we have our first package in our own local NuGet server!

Moving on to the Cloud

Instead of hosting the NuGet server locally, we can also host it on the cloud so that other developers can access it too. Here, we will be using Azure Container Instances (ACI).

ACI allows us to run Docker containers on-demand in a managed, serverless Azure environment. ACI is currently the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.

The first thing we need to do is create a resource group (in this demo, we will be using a new resource group named resource-group-lunar-nuget), which will contain the ACI, File Share, etc. for this project.

Secondly, we need a way to retrieve and persist state with ACI because, by default, ACI is stateless. When the container is restarted, all of its state will be lost, and the packages we have uploaded to our NuGet server on the container will be lost too. Fortunately, we can make use of Azure services, such as Azure SQL and Azure Blob Storage, to store the metadata and packages.

For example, we can create a new Azure SQL database called lunar-nuget-db. Then we create an empty Container named nuget under the Storage Account lunarnuget.

Created a new Container nuget under lunarnuget Storage Account.

Thirdly, we need to deploy our Docker container above on ACI. To do so, we first need to log in to Azure with the following command.

docker login azure

Once we have logged in, we proceed to create a Docker context associated with ACI so that we can deploy containers to ACI in our resource group, resource-group-lunar-nuget.

Creating a new ACI context called lunarnugetacicontext.
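
The command behind this screenshot is not shown in the text; with Docker’s ACI integration, it would be along these lines:

docker context create aci lunarnugetacicontext --resource-group resource-group-lunar-nuget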

After the context is created, we can use the following command to see the current available contexts.

docker context ls
We should be able to see the context we just created in the list.

Next, we need to switch to the new context with the following command because currently, as shown in the screenshot above, the context being used is default (the one with an asterisk).

docker context use lunarnugetacicontext

Fourthly, we can now proceed to create our ACI, which connects to the Azure SQL database and Azure Blob Storage above.

az container create \
    --resource-group resource-group-lunar-nuget \
    --name lunarnuget \
    --image loicsharma/baget \
    --environment-variables <Environment Variables here> \
    --dns-name-label lunar-nuget-01 \
    --ports 80

The environment variables include the following:

  • ApiKey=<NUGET-SERVER-API-KEY here>
  • Database__Type=SqlServer
  • Database__ConnectionString="<Azure SQL connection string here>"
  • Storage__Type=AzureBlobStorage
  • Storage__AccountName=lunarnuget
  • Storage__AccessKey=<Azure Storage key here>
  • Storage__Container=nuget

If there are no issues, after 1 to 2 minutes, the ACI named lunarnuget will be created. Otherwise, we can always use docker ps to get the container ID first and then use the following command to find out the issues, if any.

docker logs <Container ID here>
Printing the logs from one of our containers with docker logs.

Now, if we visit the given FQDN of the ACI, we shall be able to browse the packages on our NuGet server.

That’s all for a quick setup of our own NuGet server on Microsoft Azure. =)
