A few days ago, my teammate wanted to learn how to use MS SQL Server. However, he only has a MacBook, and MS SQL Server doesn't run on macOS. Hence, I decided to write him a quick setup guide on how to do that with the help of containers.
🎨 Microsoft introduced SQL Server on Linux in 2016. 🎨
Docker
We need to run Docker on our Mac machine. Since my teammate is new to Docker, he can simply choose a rather straightforward path for this, which is to use Docker Desktop on Mac. Kindly take note of the system requirements before proceeding to install it.
Once Docker is up and running, we can proceed to pull the image of SQL Server 2019 from Docker Hub.
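Assuming the official image name and tag below (check the listing on Docker Hub for the current tags), the pull command looks like this:

$ docker pull mcr.microsoft.com/mssql/server:2019-latest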
We can run the command shown below, after the password guidelines, in a Terminal window to start the database server. Here we are using 1501 as the port. Take note that we need to replace the password with our own password, which must meet the following guidelines:
at least 8 characters;
includes uppercase letters, lowercase letters, base-10 digits, and/or non-alphanumeric symbols.
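A minimal sketch of the command (the container name sql2019 is just an example; substitute your own password):

$ docker run -d --name sql2019 \
    -e "ACCEPT_EULA=Y" \
    -e "SA_PASSWORD=<OurStrongPassw0rd>" \
    -p 1501:1433 \
    mcr.microsoft.com/mssql/server:2019-latest

Inside the container, SQL Server listens on its default port 1433; the -p flag maps it to port 1501 on our Mac.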
In the command above, there are two environment variables.
The first is the environment variable "ACCEPT_EULA". Setting it to Y means that we accept the End-User Licensing Agreement of the product. So far I still couldn't find the EULA of Microsoft SQL Server 2019. If you know where it is, please drop me a message in the comment section. Thanks!
The second is "SA_PASSWORD", which sets the password that we will later use to connect to the SQL Server as the database admin (userid = "sa").
Actually, there is another environment variable which is not set here: MSSQL_PID, i.e. the product ID of the SQL Server. By default, it is the Developer edition. If we would like to use the Express or Enterprise edition, we can specify it here.
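For example, to run the Express edition instead, we would add one more -e flag to the docker run command above:

-e "MSSQL_PID=Express"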
To visualise and manage our data in the databases, we need tools such as SQL Server Management Studio (SSMS). However, SSMS is only for Windows (AMD or Intel). So, on macOS, we have to choose a cross-platform alternative, which is Azure Data Studio. Azure Data Studio is usable on Windows and Linux too.
Now we can connect to the SQL Server from Azure Data Studio as shown below. Take note that the Server is "localhost,1501" and it uses a comma, not a dot, between the word localhost and the port number.
🎨 Connecting to the Microsoft SQL Server from Azure Data Studio. 🎨
If the connection is successful, we shall be able to see the Server Dashboard as shown below.
🎨 Server Dashboard in Azure Data Studio. 🎨
That’s all. Now we can have MS SQL Server running on our Mac machine for local testing and development.
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
Dapr is an open-source, portable, event-driven runtime which makes the development of resilient microservice applications easier.
In addition, Dapr is lightweight and can run alongside our application either as a sidecar process or as a container. It offers capabilities such as state management (which will be demonstrated in this article), pub-sub, and service discovery, which are useful in building distributed applications.
🎨 Dapr building blocks which can be called over standard HTTP or gRPC APIs. (Image Credit: Dapr GitHub Project) 🎨
Dapr makes developers' lives easier when building microservice applications by providing best-practice building blocks. In addition, since the building blocks communicate over HTTP or gRPC, another advantage of Dapr is that we can use it with our favourite languages and frameworks. In this article, we will be using NodeJS.
🎨 Yaron explains how developers can choose which building blocks in Dapr to use. (Image Source: Azure Friday) 🎨
In this article, we will be using only the state management building block in Dapr; using one of the building blocks doesn't mean we have to use them all.
Getting Started
We will first run Dapr locally. Dapr can run in either Standalone or Kubernetes mode. For local development, we will run it in Standalone mode first; later, we can deploy our Dapr applications to a Kubernetes cluster.
In order to set up Dapr on our machine locally and manage the Dapr instances, we need to have the Dapr CLI installed too.
Before we begin, we need to make sure we have Docker installed on our machine, and since the application we are going to build is a NodeJS RPG game, we will also need NodeJS (version 8 or greater).
After having Docker, we can then proceed to install the Dapr CLI. The machine that I am using is a MacBook. On macOS, the installation is quite straightforward with the following command.
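A sketch of the steps, using the install script from the dapr/cli repository (verify the URL against the current Dapr docs), followed by dapr init, which sets up the Dapr runtime and its default Redis containers locally:

$ curl -fsSL https://raw.githubusercontent.com/dapr/cli/master/install/install.sh | /bin/bash
$ dapr init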
For the project, we have two folders in the project root, which are backend and game.
🎨 Project structure. 🎨
The game project is just a normal NodeJS project where all the relevant code of the html-rpg is located in the public folder. Then in app.js, we have the following line.
app.use(express.static('public'))
🎨 Four character types (from top to bottom): King, player, soldier, and minister. 🎨
We also update the code of html-rpg so that whenever the player encounters the soldier or the minister face-to-face, the player HP will drop by 10 points. To do so, we simply send an HTTP POST request to the Dapr instance which is listening on port 4001 (we will explain where this port number comes from later).
...
var data = {};
data["data"] = {};
data["data"]["playerHp"] = map.playerHp;
// construct an HTTP request
var xhr = new XMLHttpRequest();
xhr.open("POST", "http://localhost:4001/v1.0/invoke/backend/method/updatePlayerHp", true);
xhr.setRequestHeader('Content-Type', 'application/json; charset=UTF-8');
// send the collected data as JSON
xhr.send(JSON.stringify(data));
...
In the backend project, we will have the code to handle the /updatePlayerHp request, as shown in the code below.
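The handler below is a minimal sketch modeled on the early Dapr NodeJS samples; the Express and node-fetch setup, the app port 4000, and the state key "player" are assumptions. Also, depending on the Dapr version, the state endpoint may be /v1.0/state/<storename> rather than /v1.0/state.

// backend/app.js -- a sketch of the /updatePlayerHp handler.
const express = require('express');
const bodyParser = require('body-parser');
const fetch = require('node-fetch');

const app = express();
app.use(bodyParser.json());

// When launched with `dapr run`, the sidecar injects its HTTP port here.
const daprPort = process.env.DAPR_HTTP_PORT || 3500;
const stateUrl = `http://localhost:${daprPort}/v1.0/state`;
const appPort = 4000;

app.post('/updatePlayerHp', (req, res) => {
    const data = req.body.data;
    // Persist the player HP to the state store via the Dapr sidecar.
    const state = [{ key: 'player', value: { playerHp: data.playerHp } }];
    fetch(stateUrl, {
        method: 'POST',
        body: JSON.stringify(state),
        headers: { 'Content-Type': 'application/json' }
    }).then((response) => {
        if (!response.ok) {
            throw 'Failed to persist state.';
        }
        res.status(200).send();
    }).catch((error) => {
        console.log(error);
        res.status(500).send({ message: error });
    });
});

app.listen(appPort, () => console.log(`Backend listening on port ${appPort}.`));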
The code above gets the incoming request and then persists the player HP to the state store.
CosmosDB as State Store
By default, when we run Dapr locally, a Redis state store is used. The two files in the components directory in the backend folder, i.e. redis_messagebus.yaml and redis.yaml, are automatically created when we run Dapr with the Dapr CLI. If we delete the two files and run Dapr again, the two files will simply be re-generated. However, that does not mean we cannot choose another storage as the state store.
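To switch to CosmosDB, we add a component file of our own to the components directory. Below is a sketch based on the Dapr CosmosDB state store component (the component name statestore is an assumption); its four metadata values are url, masterKey, database, and collection.

apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: statestore
spec:
  type: state.azure.cosmosdb
  metadata:
  - name: url
    value: <CosmosDB account URL>
  - name: masterKey
    value: <CosmosDB master key>
  - name: database
    value: <database name>
  - name: collection
    value: <collection name>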
The four required values above can be retrieved from the CosmosDB page on the Azure Portal. There is, however, one thing that we need to be careful about, i.e. the Partition Key of the container in CosmosDB.
🎨 Partition Key is a mandatory field during the container creation step. 🎨
When I was working on this project, I always received the following error log from Dapr.
== APP == Failed to persist state: Internal Server Error
I had a great discussion with Yaron on GitHub and he agreed to update the documentation to highlight the fact that we must use “/id” as the partition key.
So, after correcting the partition key, I can finally see the state stored in CosmosDB.
🎨 CosmosDB reflects the current HP of the player which has dropped from 100 to 60. 🎨
Learning about containers is a huge topic, but beginners need something small to help them get started. Hence, in this article, we will focus only on the key concepts of containers and the steps to containerize the program and deploy it to Azure Web App.
As explained in the book "How to Containerize Your Go Code", containers isolate an application so that the container thinks it's running on its own private machine. So, a container is similar to a VM, but it uses the OS kernel of the host rather than having its own.
The Dockerfile starts with a FROM command that specifies the starting point for the image to build. For our project, we don’t have any dependencies, so we can start from scratch. So what is scratch? Scratch is basically a special Docker image that is empty (0B). That means there will be nothing else in our container later aside from what we put in with the rest of the Dockerfile.
The reason we build from scratch is that not only do we get a smaller image, but our container will also have a smaller attack surface. This is because the less code there is within our container, the less likely it is to include a vulnerability.
The EXPOSE 80 command tells Docker that the web server inside the container listens on port 80. Hence, in order to access our program from outside the container through HTTP, we declare port 80 in the Dockerfile so that it can be published later.
The next three COPY commands copy, firstly, the GoLab executable into the root directory of the container, and secondly, the two directories public and templates into the container. Without the HTML, CSS, and JavaScript, our web app will not work.
Now you may wonder why the first COPY command says GoLab instead of GoLab.exe. We shall discuss it later in this article.
After that, we use the ENV command to set the environment variables that we will be using in the app.
Finally, we have the line CMD ["/GoLab"], which directs the container as to which command to execute when the container is run.
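Putting the commands above together, the Dockerfile looks roughly like this (the ENV variable name and value are placeholders for whatever the app actually reads):

FROM scratch
EXPOSE 80
COPY GoLab /GoLab
COPY public public
COPY templates templates
ENV SOME_CONFIG_KEY="some value"
CMD ["/GoLab"]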
Since the container is not a Windows container, the code that runs inside it needs to be a Linux binary. Fortunately, this is really simple to obtain with Go's cross-compilation support, using the following command.
$ $env:GOOS = "linux"
$ go build -o GoLab .
Thus, in the Dockerfile, we use the GoLab file instead of GoLab.exe.
We can now proceed to build the container image with the following command (take note of the dot at the end of the line).
$ docker image build -t chunlindocker/golab:v1 .
The -t flag is for us to specify the name and tag of the image. In this case, I call it chunlindocker/golab:v1, where chunlindocker is my Docker ID on Docker Hub. Naming it in such a way later helps me push it to a registry, i.e. Docker Hub.
My Docker Hub profile.
If we want to build the image with another Dockerfile, for example Dockerfile.development, we can do it as follows.
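For example (the dev tag here is just an illustration; the -f flag tells Docker which Dockerfile to use):

$ docker image build -t chunlindocker/golab:dev -f Dockerfile.development .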
Once the Docker image is built, we can see it listed when we perform the image list command, as shown in the screenshot below.
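The list command is:

$ docker image ls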
Created docker images.
Now the container image is "deployable". That means we can run it anywhere with a running Docker engine. Since our laptop has Docker installed, we can proceed to run it locally with the following command.
$ docker container run -P chunlindocker/golab:v1
If you run the command above in the Terminal window inside VS Code, you will see that the command line is "stuck". This is because the container is running in the foreground. So what we need to do is just open another terminal window and view all the running containers.
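In the new terminal window, we can list them with:

$ docker ps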
The docker ps command by default only shows running containers.
To help humans, Docker auto-generates a random two-word name and assigns it to the container. We can see that the container we created is given the random name "nifty_elgama", lol. So now our container has a "human" name. If you want to remove the container later, Ctrl+C only stops it; to totally remove it, you need to use the rm command as follows.
$ docker container rm nifty_elgama
The PORTS column shown in the screenshot is important because it tells us how ports exposed on the container can be accessed from the host. So to test it locally, we shall visit http://localhost:32768.
So our next step is to upload it to a container registry so that it can later be pulled onto any machine, including Azure Web Apps, that will run it. To do so, we push the image we built above to Docker Hub with the following command.
$ docker push chunlindocker/golab:v1
Successfully pushed our new container image to Docker Hub.
So, now how do we deploy the container to Azure?
Firstly, we need to create a Web App for Containers on the Azure Portal, as shown in the screenshot below.
Creating Web App for Containers.
The last item in the configuration is the “Configure Container”. Clicking on that, we will be brought to the following screen where we can then specify the container image we want to use and pull it from Docker Hub.
We will be deploying a single container called chunlindocker/golab:v4 from Docker Hub.
You can of course deploy a private container from Docker Hub by choosing "Private" as the Repository Access. The Azure Portal will then prompt you for your Docker Hub login credentials so that it can pull the image from Docker Hub.
Once the App Service is created, we can proceed to read the logs under "Container Settings". There we can see the container initialization process.
Logs about the container in App Service.
After that, we can proceed to fill in the Application Settings with the environment variables we use in the web application, and then we are good to go.
The website is up and running on Azure Web App for Containers.
When we are dockerizing an ASP .NET Core application, there will be a file called Dockerfile. For example, the Dockerfile in my previous project, Changshi, has the following content.
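The exact file isn't reproduced here, but the Dockerfile generated by Visual Studio's Docker support in that era looked roughly like the sketch below (the DLL name is assumed from the project name):

FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "Changshi.dll"]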
Please switch off Hyper-V before running CentOS 64-bit OS on VirtualBox.
The funny thing is that after switching off Hyper-V, Docker for Windows will complain that it needs Hyper-V to be enabled to work properly. So currently I have to keep switching the Hyper-V feature on and off depending on which tool I'm going to use.
We can then get the IP address which is allocated to the VM. Sometimes, I need to wait for about 5 minutes before it displays the IP address successfully.
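On the CentOS guest, a command like the following shows the assigned address (assuming the iproute2 tools are installed, which they are by default on CentOS 7):

# ip addr show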
We now need to create a folder called scripts in the root and then create a file called createimage.sh in the folder. This step can be summarized as the following commands.
# mkdir scripts
# cd scripts
# vim createimage.sh
We then need to copy-and-paste the script from Moby Project to createimage.sh.
After that, we need to make createimage.sh executable with the following command.
# chmod +x createimage.sh
To run this script now, we do as follows, where centos7base is the name of the image.
# ./createimage.sh centos7base
After it is done, we will see the centos7base image added in Docker. The image is very small, with a size of only 271MB.
First docker image!
Step 5: Add Something (.NET Core SDK) to Container
Now that we have our first Docker image, we can proceed to create a container from it with the following command.
# docker run -i -t centos7base /bin/bash
We will be brought into the container. Now we can simply add something, such as the .NET Core SDK, to the container by following the .NET Core installation steps for CentOS 7.1 (64-bit), which can be summarized as the commands shown below.
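A sketch of those steps (the SDK download URL changes between releases, so it is elided here; grab the current link from the .NET downloads page):

# yum install -y libunwind libicu
# curl -sSL -o dotnet.tar.gz <.NET Core SDK download URL>
# mkdir -p /opt/dotnet && tar zxf dotnet.tar.gz -C /opt/dotnet
# ln -s /opt/dotnet/dotnet /usr/local/bin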
We can then create a new image from the changes we have made in the container using the following command, where centos_netcore is the repository name and 1.0 is its tag (replace <container-id> with the ID of our container, which docker ps -a shows).
# docker commit <container-id> centos_netcore:1.0
We will then realize the new image is quite big, with 1.7GB as its size. Thanks to the .NET Core SDK.
Step 6: Moving the New Image to PC
The next step is to export the new image as a .tar file using the following command.
docker save centos_netcore:1.0 > /tmp/centos_netcore.tar
Now, we can launch WinSCP to retrieve the .tar file via SCP (Secure Copy Protocol) to the local host.
Ready to access the VM via SCP.
Step 7: Load Docker Image
So now we can shut down the VM and re-enable Hyper-V because the subsequent steps need Docker for Windows to work.
After restarting our local computer with Hyper-V enabled, we can launch Docker for Windows. After that, we load the image into Docker using the following command in the directory where we keep the .tar file on the local host.
docker load < centos_netcore.tar
Step 8: Running ASP .NET Core Web App on the Docker Image
Now, we can change the Dockerfile to use the new image we created.
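The key change is the FROM line at the top of the Dockerfile (a sketch, assuming the rest of the file stays the same):

FROM centos_netcore:1.0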
When we hit F5 to make it run in Docker, yup, we will get back the website.
No, just kidding. We will actually get an error message saying that localhost didn't send any data.
Localhost did not send any data. Why?
So if we read the messages in the Visual Studio Output Window, we will see one line saying that it's unable to bind to http://localhost:5000 on the IPv6 loopback interface.
Explanation of Error -99 EADDRNOTAVAIL by Microsoft engineer. (Link)
Then I switched to view the output from Docker in the Output Window.
Output from Docker
It turns out that the port exposed in Docker is port 80. So I added the following line in Program.cs.
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseUrls("http://0.0.0.0:80") // Added this line
        .UseStartup<Startup>()
        .Build();
Now, it works again with the beautiful web page.
Success!
Containers, Containers Everywhere
The whole concept of Docker images, containers, and microservices is still very new to me. Hence, if you spot any problems in my post, feel free to point them out. Thanks in advance!