I like to explore interesting new technologies. I also love to learn more from the materials available on Microsoft Virtual Academy, Google Developers channel, and several other tech/dev events.
Last week, a developer on our team encountered an interesting issue in his SQL script on SQL Server 2019. For the convenience of discussion, I’ve simplified his script as follows.
DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'
SELECT CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
ELSE -1
END AS Result
The result he expected was 20.50 because @NUM equals 0, so the first result expression should be the one evaluated. However, it actually returned 20.5000, as if the second result expression, which casts @VAL into a decimal value with a scale of 4, had been run.
All data type conversions allowed for SQL Server system-supplied data types (Image Source: Microsoft Learn)
Data Type Precedence
While the above chart illustrates all the possible explicit and implicit conversions, we still do not know the resulting data type of the conversion. For our case above, the resulting data type depends on the rules of data type precedence.
Since DECIMAL has a higher precedence than INT, we can be sure that the script above results in a DECIMAL output with the highest scale among the branches, i.e. DECIMAL(10, 4). This explains why the result of his script is 20.5000.
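If each branch genuinely needs a different scale, one workaround (a sketch I am adding here, not from his original script) is to convert every branch to a string, so that the data type precedence rules no longer apply and each branch keeps its own formatting. Note that the result column then becomes VARCHAR rather than DECIMAL.

```sql
DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'

-- Every branch now returns VARCHAR, so no branch is implicitly
-- widened to a common DECIMAL type by data type precedence
SELECT CASE @NUM WHEN 0 THEN CAST(CAST(@VAL AS DECIMAL(10, 2)) AS VARCHAR(20))
                 WHEN 1 THEN CAST(CAST(@VAL AS DECIMAL(10, 4)) AS VARCHAR(20))
                 ELSE '-1'
       END AS Result
```

With @NUM = 0, this returns the string '20.50' as he originally expected.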
Conclusion
Now, if we change the script above to the following, we should receive an error saying “Error converting data type varchar to numeric”, because the string 'A' will be implicitly converted to the DECIMAL type that the CASE expression resolves to.
DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'
SELECT CASE @NUM WHEN 0 THEN 'A'
WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
ELSE -1
END AS Result
Yup, that wraps up our discussion of the little bug he found in his script. Hope you find it useful. =)
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
To build applications that utilise the scalability, flexibility, and resilience of cloud computing, applications are nowadays normally developed with a microservice architecture using containers. A microservice architecture enables our applications to be composed of small, independent backend services that communicate with each other over the network.
In general, when applying a microservice architecture, the backend is split up into microservices while the frontend is still often developed as a monolith. This is not a problem when our application is small and we have a strong frontend team working on it. However, when the application grows to a larger scale, a monolithic frontend starts to become inefficient and unmaintainable for the following reasons.
Firstly, it is challenging to keep the frontend technologies used in a large application up-to-date. With micro frontends, we can upgrade the frontend on a function-by-function basis. It also allows developers to apply different frontend technologies to different functions based on their needs.
Secondly, since the source code of each micro frontend is separated, each individual frontend component has far less code than the monolithic version. This improves the maintainability of the frontend because a smaller codebase is easier to understand and distribute.
Thirdly, with micro frontend, we can split the frontend development team into smaller teams so that each team only needs to focus on relevant business functions.
Introduction of single-spa
In a micro frontend architecture, we need a framework to bring together multiple JavaScript micro frontends in our application. The framework we’re going to discuss here is called single-spa.
We choose single-spa because it is a framework that enables micro frontend implementations on top of many popular JavaScript UI frameworks, such as Angular and Vue. By leveraging single-spa, we can register micro frontends so that they are mounted and unmounted correctly for different URLs.
In this article, single-spa will work as an orchestrator to handle the micro frontend switch so that individual micro frontend does not need to worry about the global routing.
The Orchestrator
The orchestrator is nothing but a project holding single-spa which is responsible for global routing, i.e. determining which micro frontends get loaded.
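At its core, the global routing decision is just a predicate on the current location. As a rough sketch (the app name and path prefix below are made up for illustration; registerApplication and start are the standard single-spa APIs):

```javascript
// Activity function factory: returns a predicate that decides whether
// a micro frontend should be mounted for the current location.
const pathPrefix = (prefix) => (location) =>
  location.pathname.startsWith(prefix);

// In the real root config, each micro frontend is registered with
// single-spa using such a predicate, e.g.:
//
//   import { registerApplication, start } from "single-spa";
//   registerApplication({
//     name: "app1",
//     app: () => System.import("app1"),
//     activeWhen: pathPrefix("/app1"),
//   });
//   start();

// The predicate itself can be exercised without single-spa:
console.log(pathPrefix("/app1")({ pathname: "/app1/home" })); // true
console.log(pathPrefix("/app1")({ pathname: "/app2" }));      // false
```

This is why the individual micro frontends do not need to know about global routing: the orchestrator alone decides, per URL, which of them is active.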
We will be loading different micro frontends into the two placeholders which consume the same custom styles.
We can install the create-single-spa tool globally with the following command.
npm install --global create-single-spa
Once it is installed, we will create our project folder containing another empty folder called “orchestrator”, as shown in the following screenshot.
We have now initialised our project.
We will now create the single-spa root config, which is the core of our orchestrator, with the following command.
create-single-spa
Then we will need to answer a few questions, as shown in the screenshots below, in order to generate our orchestrator.
We’re generating orchestrator using the single-spa root config type.
That’s all for now for our orchestrator. We will come back to it after we have created our micro frontends.
Micro Frontends
We will again use create-single-spa to create the micro frontends. Instead of choosing the root config type, this time we will choose to generate a single-spa application/parcel, as shown in the following screenshot.
We will be creating Vue 3.0 micro frontends.
To have our orchestrator import the micro frontends, the micro frontend app needs to be exposed as a System.register module, which we do by editing its vue.config.js file with the following configuration.
Here we also force the generated output file name to be app.js for import convenience in the orchestrator.
Now, we can proceed to build this app with the following command so that the app.js file can be generated.
npm run build
The app.js file is generated after we run the build script that is defined in package.json file.
We can then serve this micro frontend app with http-server for local testing later. We will run the following command in its dist directory to specify that we’re using port 8011 for the app1 micro frontend.
http-server . --port 8011 --cors
This is what we will be seeing if we navigate to the micro frontend app now.
Link Orchestrator with Micro Frontend Apps
Now, we can return to the index.ejs file to specify the URL of our micro frontend app as shown in the screenshot below.
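As a sketch, the relevant part of index.ejs is a SystemJS import map listing each micro frontend URL (the port 8011 comes from the http-server command above; the module name "app1" is illustrative):

```html
<script type="systemjs-importmap">
  {
    "imports": {
      "app1": "http://localhost:8011/app.js"
    }
  }
</script>
```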
We can now launch our orchestrator with the following command in the orchestrator directory.
npm start
Based on the package.json file, our orchestrator will be hosted at port 9000.
Now, if we repeat what we have done for app1 for another Vue 3.0 app called app2 (which we will deploy on port 8012), we can achieve something as follows.
Finally, to have the images shown properly, we simply need to update the Content-Security-Policy to be as follows.
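The root config template ships with a fairly strict Content-Security-Policy meta tag in index.ejs. A sketch of a relaxed version that also allows images (treat the exact policy as an assumption; adjust the sources to your own hosts):

```html
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self' https: localhost:*;
               img-src * data:;
               script-src 'unsafe-inline' 'unsafe-eval' https: localhost:*;
               connect-src https: localhost:* ws://localhost:*;
               style-src 'unsafe-inline' https:;
               object-src 'none';">
```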
Also, in order to make sure the orchestrator indeed loads two different micro frontends, we can edit the content of the two apps to look different, as shown below.
Design System
In a micro frontend architecture, every team builds its part of the frontend. With this drastic expansion of the frontend development work, there is a need for us to streamline the design work by having a complete set of frontend UI design standards.
In addition, in order to maintain the consistency of the look-and-feel of our application, it is important to make sure that all our relevant micro frontends are adopting the same design system which also enables developers to replicate designs quickly by utilising premade UI components.
Here in single-spa, we can host our common CSS in a shared micro frontend app and have it contain only the CSS shared across micro frontends.
However, micro frontends are not suitable for all projects, especially when the development team is small or when the project is just starting out. Micro frontends are only recommended when the backend is already on microservices and the team finds that scaling is getting more and more challenging. Hence, please plan carefully before migrating to a micro frontend architecture.
If you’d like to find out more about the single-spa framework that we are using in this article, please visit the following useful links.
Beginning with Windows 2000, Microsoft Windows operating systems have shipped with a data protection interface known as DPAPI (Data Protection Application Programming Interface). DPAPI is a simple cryptographic API. It doesn’t store any persistent data for itself; instead, it simply receives plaintext and returns ciphertext.
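In an ASP.NET Core application, the location of the Data Protection key ring is configured at service registration time. A minimal sketch using the standard PersistKeysToFileSystem API (the share path is illustrative):

```csharp
using System.IO;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Persist the key ring to a UNC share instead of the
        // default %LOCALAPPDATA% location
        services.AddDataProtection()
            .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\shared\directory\"));
    }
}
```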
In the code above, instead of storing the key at the default location, which is %LOCALAPPDATA%, we store it on a network drive by specifying the path to the UNC share.
The code above shows how we can store keys on a UNC share. If we head to the directory \\server\shared\directory\, we will see an XML file with content similar to what is shown below.
As we can see, the key <masterKey> itself is in an unencrypted form.
Hence, in order to protect the data protection key ring, we need to make sure that the storage location is protected as well. Normally, we can use file system permissions to ensure that only the identity under which our web app runs has access to the storage directory. Now with Azure, we can also protect our keys using Azure Key Vault, a cloud service for securely storing and accessing secrets.
The approach we will take is to first create an Azure Key Vault called lunar-dpkeyvault with a key named dataprotection, as shown in the screenshot below.
Created a key called dataprotection on Azure Key Vault.
The credential can be a ClientSecretCredential object or DefaultAzureCredential object.
The Tenant Id, Client Id, and Client Secret can be retrieved from the App Registrations page of the app that has access to the Azure Key Vault above. We can use these three values to create a ClientSecretCredential object.
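Putting it together, protecting the key ring with the Key Vault key might look like the sketch below, using the Azure.Extensions.AspNetCore.DataProtection.Keys package (the tenant/client values are placeholders; the vault and key names come from the earlier screenshot):

```csharp
using System;
using System.IO;
using Azure.Identity;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Placeholder credentials; take the real values from App Registrations
        var credential = new ClientSecretCredential(
            "<tenant-id>", "<client-id>", "<client-secret>");

        services.AddDataProtection()
            .PersistKeysToFileSystem(new DirectoryInfo(@"\\server\shared\directory\"))
            // Wrap each key in the key ring with the Key Vault key
            .ProtectKeysWithAzureKeyVault(
                new Uri("https://lunar-dpkeyvault.vault.azure.net/keys/dataprotection/"),
                credential);
    }
}
```

A DefaultAzureCredential object can be passed in place of the ClientSecretCredential when running under a managed identity.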
Now, if we check the newly generated XML file again, we shall see that there is no longer a <masterKey> element. Instead, it is replaced with the content shown below.
<encryptedKey xmlns="">
<!-- This key is encrypted with Azure Key Vault. -->
<kid>https://lunar-dpkeyvault.vault.azure.net/keys/dataprotection/...</kid>
<key>HSCJsnAtAmf...RHXeeA==</key>
<iv>...</iv>
<value>...</value>
</encryptedKey>
Key Lifetime
Remember that, by default, a generated key has a 90-day lifetime. This means that the app automatically generates a new active key when the current active key expires. However, the retired keys can still be used to decrypt any data previously protected with them.
To create a protector, we need to specify a Purpose String. A Purpose String provides isolation between consumers so that a protector cannot decrypt ciphertext encrypted by another protector with a different purpose.
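As a sketch of how purpose strings isolate protectors (the class and purpose value here are made up for illustration; CreateProtector, Protect, and Unprotect are the standard IDataProtectionProvider/IDataProtector APIs):

```csharp
using Microsoft.AspNetCore.DataProtection;

public class OrderService
{
    private readonly IDataProtector _protector;

    public OrderService(IDataProtectionProvider provider)
    {
        // A protector created with a different purpose string
        // cannot read payloads produced by this one
        _protector = provider.CreateProtector("Lunar.OrderService.v1");
    }

    public string ProtectOrderId(string orderId) => _protector.Protect(orderId);

    public string UnprotectOrderId(string payload) => _protector.Unprotect(payload);
}
```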
One of the main reasons why we can easily identify objects in our daily life is that we can tell the boundaries of objects easily with our eyes: whenever we see an object, we can tell the edge between it and the background behind it. Hence, some images can play tricks on our eyes and confuse our brains with edge-based optical illusions.
Sobel-Feldman Operator in Computer Vision
Similarly, if a machine would like to understand what it sees, edge detection needs to be implemented in its computer vision. Edge detection, one of the image processing techniques, refers to algorithms for detecting points in an image at which the brightness changes sharply.
There are many methods for edge detection. One of the methods is using a derivative kernel known as the Sobel-Feldman Operator which can emphasise edges in a given digital image. The operator is based on convolving the image with filters in both horizontal and vertical directions to calculate approximations of the Image Derivatives which will tell us the strength of edges.
An example of how edge strength can be computed with Image Derivative with respect to x and y. (Image Credit: Chris McCormick)
The Kernels
The operator uses two 3×3 kernels which are convolved with the original image to calculate approximations of the derivatives for both horizontal and vertical changes.
We define the two 3×3 kernels as follows. Firstly, the one for calculating the horizontal changes.
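The kernel values are the standard Sobel-Feldman coefficients, expressed here as the C# arrays that the convolution code further below expects (sign conventions vary between references, but since we only use the gradient magnitude, the sign does not matter):

```csharp
// Horizontal-change kernel (responds to vertical edges)
double[,] xkernel =
{
    { -1, 0, 1 },
    { -2, 0, 2 },
    { -1, 0, 1 }
};

// Vertical-change kernel (responds to horizontal edges)
double[,] ykernel =
{
    { -1, -2, -1 },
    {  0,  0,  0 },
    {  1,  2,  1 }
};
```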
Let’s say we have our image in a Bitmap variable sourceImage, then we can perform the following.
int width = sourceImage.Width;
int height = sourceImage.Height;
//Lock source image bits into system memory
BitmapData srcData = sourceImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
//The stride is only known after the bits are locked
int bytes = srcData.Stride * srcData.Height;
byte[] pixelBuffer = new byte[bytes];
//Get the address of the first pixel data
IntPtr srcScan0 = srcData.Scan0;
//Copy image data to one of the byte arrays
Marshal.Copy(srcScan0, pixelBuffer, 0, bytes);
//Unlock bits from system memory
sourceImage.UnlockBits(srcData);
Converting to Grayscale Image
Since our purpose is to identify edges of objects within the image, it is standard practice to first convert the original image to grayscale so that we can simplify our problem by ignoring colours and other noise. Only then do we perform edge detection on this grayscale image.
However, how do we convert colour to grayscale?
GIMP is a cross-platform image editor available for GNU/Linux, macOS, Windows and more operating systems. (Credit: GIMP)
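A common answer is a weighted sum of the RGB channels, known as the luminosity method. A sketch operating in place on the pixelBuffer from earlier, where Format32bppArgb stores bytes in BGRA order (the weights are an assumption: roughly GIMP's luminosity weights of 0.21R + 0.72G + 0.07B; the classic ITU-R BT.601 weights 0.299R + 0.587G + 0.114B are another common choice):

```csharp
// Convert the BGRA pixel buffer to grayscale in place using the
// luminosity method (approximate weights; see GIMP's documentation)
for (int i = 0; i < pixelBuffer.Length; i += 4)
{
    double gray = 0.07 * pixelBuffer[i]       // blue
                + 0.72 * pixelBuffer[i + 1]   // green
                + 0.21 * pixelBuffer[i + 2];  // red
    pixelBuffer[i] = pixelBuffer[i + 1] = pixelBuffer[i + 2] = (byte)gray;
    // alpha (pixelBuffer[i + 3]) is left untouched
}
```

After this pass all three colour channels hold the same value, which is why the convolution code later only reads a single channel.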
Now we can finally calculate the approximations of the derivatives. Given S as the grayscale version of sourceImage, and Gx and Gy as two images whose points contain the horizontal and vertical derivative approximations respectively, we have Gx = xkernel ∗ S and Gy = ykernel ∗ S, where ∗ denotes the 2D convolution operation.
Given such estimates of the Image Derivatives, the gradient magnitude is then computed as G = √(Gx² + Gy²).
Translating to C#, the formulae above look like the following code. Since S is grayscale, we only need to read one colour channel instead of all three RGB channels.
//Create variable for pixel data for each kernel
double xg = 0.0;
double yg = 0.0;
double gt = 0.0;
//This is how much our center pixel is offset from the border of our kernel
//Sobel is 3x3, so center is 1 pixel from the kernel border
int filterOffset = 1;
int calcOffset = 0;
int byteOffset = 0;
byte[] resultBuffer = new byte[bytes];
//Start with the pixel that is offset 1 from top and 1 from the left side
//this is so entire kernel is on our image
for (int offsetY = filterOffset; offsetY < height - filterOffset; offsetY++)
{
for (int offsetX = filterOffset; offsetX < width - filterOffset; offsetX++)
{
//reset rgb values to 0
xg = yg = 0;
gt = 0.0;
//position of the kernel center pixel
byteOffset = offsetY * srcData.Stride + offsetX * 4;
//kernel calculations
for (int filterY = -filterOffset; filterY <= filterOffset; filterY++)
{
for (int filterX = -filterOffset; filterX <= filterOffset; filterX++)
{
calcOffset = byteOffset + filterX * 4 + filterY * srcData.Stride;
xg += (double)(pixelBuffer[calcOffset + 1]) * xkernel[filterY + filterOffset, filterX + filterOffset];
yg += (double)(pixelBuffer[calcOffset + 1]) * ykernel[filterY + filterOffset, filterX + filterOffset];
}
}
//total rgb values for this pixel
gt = Math.Sqrt((xg * xg) + (yg * yg));
if (gt > 255) gt = 255;
else if (gt < 0) gt = 0;
//set new data in the other byte array for output image data
resultBuffer[byteOffset] = (byte)(gt);
resultBuffer[byteOffset + 1] = (byte)(gt);
resultBuffer[byteOffset + 2] = (byte)(gt);
resultBuffer[byteOffset + 3] = 255;
}
}
Output Image
With the resultBuffer, we can now generate the output image using the following code.
//Create new bitmap which will hold the processed data
Bitmap resultImage = new Bitmap(width, height);
//Lock bits into system memory
BitmapData resultData = resultImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
//Copy from byte array that holds processed data to bitmap
Marshal.Copy(resultBuffer, 0, resultData.Scan0, resultBuffer.Length);
//Unlock bits from system memory
resultImage.UnlockBits(resultData);
So, let’s say the image below is our sourceImage,
A photo of Taipei that I took when I was in Taiwan.
then the algorithm above should return us an image which contains only the detected edges as shown below.
Successful edge detection on the Taipei photo above.