How to Bind an Enum to a ComboBox Control in UWP?

One of the ways to develop a desktop application for Windows 10/11 is UWP (Universal Windows Platform). A UWP app has two fundamental parts, i.e. the front-end and the back-end. The front-end is defined in XAML (Extensible Application Markup Language), while the back-end can be coded in C# (or even JavaScript in the early days).

To decouple the front-end and back-end code, a UI architectural design pattern called MVVM (Model-View-ViewModel) was introduced. With MVVM, we define our UI declaratively in XAML and use data binding markup to link it to the layers containing data and commands.

In order to implement MVVM in our UWP app, we can use Prism, which is an implementation of a collection of design patterns that are helpful in writing well-structured and maintainable XAML applications, including MVVM.

Even though the Prism maintainers decided back in 2019 to drop support for non-Xamarin.Forms UWP projects, the Uno team announced in 2020 that they had stepped up to the plate and committed to providing ongoing support for the library.

In this article, we will figure out how to set up data binding of an enum to a ComboBox control in UWP with Prism.

PROJECT GITHUB REPOSITORY

The complete source code of this project can be found at https://github.com/goh-chunlin/gcl-boilerplate.csharp/tree/master/universal-windows-platform/WTS.Prism.EnumCombo.

Model: The Enum

Let’s say we have an enum, MyColors, whose values are six different colours, as shown below.

using System.ComponentModel;

public enum MyColors
{
    [Description("Red")] Red,
    [Description("Green")] Green,
    [Description("Blue")] Blue,
    [Description("Orange")] Orange,
    [Description("Pink")] Pink,
    [Description("Black")] Black
}

Each enum value is annotated with a Description attribute, which can be retrieved with the extension method GetDescription().

using System;
using System.ComponentModel;
using System.Reflection;

public static class EnumExtension
{
    public static string GetDescription(this Enum value)
    {
        FieldInfo fi = value.GetType().GetField(value.ToString());
        var attributes = (DescriptionAttribute[])fi.GetCustomAttributes(typeof(DescriptionAttribute), false);

        return attributes.Length > 0 ? attributes[0].Description : value.ToString();
    }
}

ViewModel

Next we will define the ViewModel of MainPage.xaml, which contains the ComboBox control. We will bind the property SelectedColor, whose type is the enum, to the ComboBox control, as shown below.

public class MainViewModel : ViewModelBase
{
    private MyColors _selectedColor = MyColors.Black;

    public MyColors SelectedColor
    {
        get => _selectedColor;
        set => SetProperty(ref _selectedColor, value);
    }
}

The SetProperty method sets the property and notifies listeners only when necessary: it checks whether the backing field differs from the value being set, and only if it does, it updates the backing field and raises the PropertyChanged event. This means the setter does not need its own equality check.
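This behaviour can be sketched as follows. Note that this is a simplified illustration of what a BindableBase-style SetProperty does, not Prism's actual implementation, and CounterViewModel is a hypothetical example class of mine.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Simplified sketch of a BindableBase-style SetProperty;
// Prism's real implementation differs in details.
public abstract class BindableSketch : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    protected bool SetProperty<T>(ref T storage, T value,
        [CallerMemberName] string propertyName = null)
    {
        // Skip the assignment and the notification when nothing changed.
        if (EqualityComparer<T>.Default.Equals(storage, value)) return false;

        storage = value;
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(propertyName));
        return true;
    }
}

// A tiny ViewModel using the sketch above (illustrative only).
public class CounterViewModel : BindableSketch
{
    private int _count;
    public int Count { get => _count; set => SetProperty(ref _count, value); }
}
```

Setting Count to the same value twice raises PropertyChanged only once, which is exactly the behaviour the ViewModel relies on.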

Value Conversion

Data binding is simple when the source and target properties are of the same type, or when one type can be converted to the other through an implicit conversion, for example binding a string variable to the Text property of a TextBlock control. However, to bind an enum to the item and text fields of a ComboBox, we need the help of value conversion.

The value conversion is done by a converter class, which implements the IValueConverter interface. It acts as a middleman, translating a value between the source and the destination.

Here, we will implement a converter, MyColorValueConverter, that takes an enum value and returns a string value to be used in the ComboBox fields, and vice versa.

public class MyColorValueConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, string language)
    {
        if (value is MyColors color) return color.GetDescription();

        return null;
    }

    public object ConvertBack(object value, Type targetType, object parameter, string language)
    {
        // Parsing by name works here only because every Description value
        // matches its enum member name; otherwise we would need to map
        // descriptions back to enum values explicitly.
        if (value is string s) return Enum.Parse(typeof(MyColors), s);

        return null;
    }

    ...

}

After this, to provide all available values of the enum as the ItemsSource of the ComboBox control, we need a Strings property on MyColorValueConverter.

public string[] Strings => GetStrings();

public static string[] GetStrings()
{
    List<string> list = new List<string>();

    foreach (MyColors color in Enum.GetValues(typeof(MyColors)))
    {
        list.Add(color.GetDescription());
    }

    return list.ToArray();
}

View: The Front-end

Now in MainPage.xaml, which contains the ComboBox control, we first need to instantiate the value converter in the resource dictionary of the page.

<Page
    x:Class="WTS.Prism.EnumCombo.Views.MainPage"
    xmlns:local="using:WTS.Prism.EnumCombo.Helpers"
    ...>
    <Page.Resources>
        <local:MyColorValueConverter x:Key="MyColorValueConverter" />
    </Page.Resources>
    ...
</Page>

We then can have our ComboBox control defined as follows.

<ComboBox 
    ItemsSource="{Binding Source={StaticResource MyColorValueConverter}, Path=Strings}"
    SelectedItem="{Binding SelectedColor, Converter={StaticResource MyColorValueConverter}, Mode=TwoWay}"/>

Conclusion

That’s all for the quickstart steps to bind an enum to a ComboBox control in a UWP app.

I have made the source code available on GitHub. The code there has one more feature: the colour of the text in a TextBlock follows the colour we pick from the ComboBox, as shown below.

Demo of the ComboBox on UWP.

Image Based CAPTCHA using Jigsaw Puzzle on Blazor

In this article, I will share how I deployed an image-based CAPTCHA as a Blazor app on Azure Static Web Apps.

PROJECT GITHUB REPOSITORY

The complete source code of this project can be found at https://github.com/goh-chunlin/Lunar.JigsawPuzzleCaptcha.

Project Motivation

CAPTCHA, which stands for “Completely Automated Public Turing test to tell Computers and Humans Apart”, is a type of challenge-response test used in computing to determine whether or not the user is human. Since the day CAPTCHA was invented by Luis von Ahn’s team at Carnegie Mellon University, it has been a reliable tool in separating machines from humans.

In 2007, Luis von Ahn’s team released a programme known as reCAPTCHA, which asked users to decipher hard-to-read text in order to distinguish between humans and bots.

The words shown in reCAPTCHA come directly from old books that are being digitised. Hence, it does not only stop spam, but also helps digitise books at the same time. (Source: reCAPTCHA)

A team led by Prof Gao Haichang from Xidian University realised that, with the development of automated computer vision techniques such as OCR, traditional text-based CAPTCHAs are no longer considered safe for authentication. At an IEEE conference in 2010, they thus proposed a new approach: an image-based CAPTCHA that involves solving a jigsaw puzzle. Their experiments and security analysis further showed that humans can complete the jigsaw puzzle CAPTCHA verification quickly and accurately, while bots rarely can. Hence, the jigsaw puzzle CAPTCHA can be a substitute for the text-based CAPTCHA.

Xidian University, one of the 211 Project universities and a high-level scientific research and innovation institution in China. (Image Source: Shulezetu)

In 2019, on CSDN (Chinese Software Developer Network), a developer, 不写BUG的瑾大大, shared his implementation of a jigsaw puzzle CAPTCHA in Java. It is a very detailed blog post, but there is still room for improvement in, for example, documenting the code and naming the variables. Hence, I would like to take this opportunity to implement this jigsaw puzzle CAPTCHA in .NET 5 with C# and Blazor. I also host the demo web app on Azure Static Web Apps so that you can access and play with the CAPTCHA: https://jpc.chunlinprojects.com/.

Today, jigsaw puzzle CAPTCHA is used in many places. (Image Credit: Hirabiki at HoYoLAB)

Jigsaw Puzzle CAPTCHA

In a jigsaw puzzle CAPTCHA, there is usually a jigsaw puzzle with at least one misplaced piece that users need to move to the correct place to complete the puzzle. In my demo, there is only one misplaced piece that needs to be moved.

Jigsaw puzzle CAPTCHA implementation on Blazor. (Try it here)

As shown in the screenshot above, there are two necessary images in the CAPTCHA. One of them is the misplaced piece of the puzzle. The other is the original image with a shaded area indicating where the misplaced piece should be dragged to. All users need to do is drag the slider to move the misplaced piece to the shaded area, completing the jigsaw puzzle within a time limit.

In addition, the CAPTCHA here only requires users to drag the missing piece horizontally. This is not only the popular implementation of the jigsaw puzzle CAPTCHA, but it also keeps the CAPTCHA from being too challenging for users to pass.

Now, let’s see how we can implement this in C# and later deploy the code to Azure.

Retrieve the Original Image

The first thing we need to do is get an image for the puzzle. We can keep a collection of images that make good jigsaw puzzles in Azure Blob Storage. After that, each time before generating the jigsaw puzzle, we simply fetch all the image URLs from Blob Storage with the following code and randomly pick one as the jigsaw puzzle image.

public async Task<List<string>> GetAllImageUrlsAsync() 
{
    var output = new List<string>();

    var container = new BlobContainerClient(_storageConnectionString, _containerName);

    var blobItems = container.GetBlobsAsync();

    await foreach (var blob in blobItems) 
    {
        var blobClient = container.GetBlobClient(blob.Name);
        output.Add(blobClient.Uri.ToString());
    }

    return output;
}

Define the Missing Piece Template

To increase the difficulty of the puzzle, we can have jigsaw pieces with different patterns, such as tabs appearing on different sides of the pieces. In this demo, I will stick to just one missing-piece pattern, which has tabs on the top and right sides, as shown below.

The missing piece template.

The tabs are basically two circles with the same radius, whose centres are positioned at the midpoints of the rectangle's sides. Hence, we can build a 2D matrix of pixels describing the missing piece template, where 1 means inside the piece and 0 means outside the piece.

In addition, we know that the general equation of a circle of radius r centred at (h, k) is as follows.

(x - h)^2 + (y - k)^2 = r^2

Hence, if there is a point (i, j) inside the circle above, then the following must be true.

(i - h)^2 + (j - k)^2 < r^2

If the point (i, j) is outside of the circle, then the following must be true.

(i - h)^2 + (j - k)^2 > r^2

With this information, we can build our missing piece 2D matrix as follows.

private int[,] GetMissingPieceData()
{
    int[,] data = new int[PIECE_WIDTH, PIECE_HEIGHT];

    double c1 = (PIECE_WIDTH - TAB_RADIUS) / 2;
    double c2 = PIECE_HEIGHT / 2;
    double squareOfTabRadius = Math.Pow(TAB_RADIUS, 2);

    double xBegin = PIECE_WIDTH - TAB_RADIUS;
    double yBegin = TAB_RADIUS;

    for (int i = 0; i < PIECE_WIDTH; i++)
    {
        for (int j = 0; j < PIECE_HEIGHT; j++)
        {
            double d1 = Math.Pow(i - c1, 2) + Math.Pow(j, 2);
            double d2 = Math.Pow(i - xBegin, 2) + Math.Pow(j - c2, 2);
            if ((j <= yBegin && d1 < squareOfTabRadius) || (i >= xBegin && d2 > squareOfTabRadius))
            {
                data[i, j] = 0;
            }
            else
            {
                data[i, j] = 1;
            }
        }
    }

    return data;
}

After that, we can also easily determine the border of the missing piece from just the template data above. We can then draw the border of the missing piece for a better user experience when we display it on screen.

private int[,] GetMissingPieceBorderData(int[,] d)
{
    int[,] borderData = new int[PIECE_WIDTH, PIECE_HEIGHT];

    for (int i = 0; i < d.GetLength(0); i++)
    {
        for (int j = 0; j < d.GetLength(1); j++)
        {
            if (d[i, j] == 0) continue;

            if (i - 1 < 0 || j - 1 < 0 || i + 1 >= PIECE_WIDTH || j + 1 >= PIECE_HEIGHT) 
            {
                borderData[i, j] = 1;

                continue;
            }

            int sumOfSurrounding = 
                d[i - 1, j - 1] + d[i, j - 1] + d[i + 1, j - 1] + 
                d[i - 1, j] + d[i + 1, j] + 
                d[i - 1, j + 1] + d[i, j + 1] + d[i + 1, j + 1];

            if (sumOfSurrounding != 8) 
            {
                borderData[i, j] = 1;
            }
        }
    }

    return borderData;
}

Define the Shaded Area

Next, we need to tell the user where the missing piece should be dragged to. We will use the template data above and apply it to the original image we get from the Azure Blob Storage.

Due to the shape of the missing piece, the top-left corner of the shaded area needs to lie in the region highlighted in green below. Otherwise, the shaded area would not be shown completely, giving users a bad experience. The yellow area would be acceptable too, but we do not allow the shaded area there, to avoid cases where the missing piece covers the shaded area when the images first load and confuses the users.

Random random = new Random();

int x = random.Next(originalImage.Width - 2 * PIECE_WIDTH) + PIECE_WIDTH;
int y = random.Next(originalImage.Height - PIECE_HEIGHT);

Green area is where the top left of the shaded area should be positioned at.

Let’s assume the shaded area is at the point (x, y) of the original image. Then, given the original image in a Bitmap variable called originalImage, we can use the following code to traverse the area and process the pixels in it.

...
int[,] missingPiecePattern = GetMissingPieceData();

for (int i = 0; i < PIECE_WIDTH; i++)
{
    for (int j = 0; j < PIECE_HEIGHT; j++)
    {
        int templatePattern = missingPiecePattern[i, j];
        int originalArgb = originalImage.GetPixel(x + i, y + j).ToArgb();

        if (templatePattern == 1)
        {
            ...
            originalImage.SetPixel(x + i, y + j, FilterPixel(originalImage, x + i, y + j));
        }
        else
        {
            missingPiece.SetPixel(i, j, Color.Transparent);
        }
    }
}
...

Now we can perform image convolution with a kernel, a 3×3 convolution matrix, inside the FilterPixel method. Here we use a box blur: a spatial-domain linear filter in which each pixel in the resulting image has a value equal to the average value of its neighbouring pixels in the input image. By the central limit theorem, repeated application of a box blur approximates a Gaussian blur.
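As a toy illustration of the plain box blur idea (separate from the project's code; the names here are mine), each output pixel is simply the average of its in-bounds 3×3 neighbourhood:

```csharp
using System;

public static class BoxBlurDemo
{
    // Average each pixel with its 3x3 neighbourhood, clamping at the edges
    // by averaging over only the in-bounds neighbours.
    public static double[,] BoxBlur(double[,] src)
    {
        int w = src.GetLength(0), h = src.GetLength(1);
        var dst = new double[w, h];
        for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            double sum = 0;
            int count = 0;
            for (int dx = -1; dx <= 1; dx++)
            for (int dy = -1; dy <= 1; dy++)
            {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                sum += src[nx, ny];
                count++;
            }
            dst[x, y] = sum / count;
        }
        return dst;
    }
}
```

Applying BoxBlur repeatedly makes the result increasingly Gaussian-like, which is the central limit theorem effect mentioned above.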

Kernels used in different types of image processing.

For the kernel, I don’t strictly follow the official box blur or Gaussian blur kernel. Instead, I dim the generated colour by forcing three pixels to always be black (when i = j). This makes sure the shaded area is not only blurred but also darkened.

private Color FilterPixel(Bitmap img, int x, int y)
{
    const int KERNEL_SIZE = 3;
    int[,] kernel = new int[KERNEL_SIZE, KERNEL_SIZE];

    ...

    int r = 0;
    int g = 0;
    int b = 0;
    int count = KERNEL_SIZE * KERNEL_SIZE;
    for (int i = 0; i < kernel.GetLength(0); i++)
    {
        for (int j = 0; j < kernel.GetLength(1); j++)
        {
            Color c = (i == j) ? Color.Black : Color.FromArgb(kernel[i, j]);
            r += c.R;
            g += c.G;
            b += c.B;
        }
    }

    return Color.FromArgb(r / count, g / count, b / count);
}

What happens when we are processing a pixel that does not have all 8 neighbouring pixels? To handle this, we take the value of the pixel at the opposite position, as described in the following diagram.

Applying kernel on edge pixels.

Now that we have the two images ready, i.e. an image of the missing piece and another image showing where the missing piece needs to go, we can convert them into base64 strings and send the string values to the web page.
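The conversion to a base64 data URI is a one-liner; here is a hedged sketch (the helper name is mine, not the repository's) producing the data:image/png;base64 format the frontend consumes:

```csharp
using System;

public static class ImageEncoding
{
    // Wrap raw PNG bytes into a data URI that the browser can use
    // directly as an <img> src or a CSS background-image value.
    public static string ToPngDataUri(byte[] pngBytes) =>
        "data:image/png;base64," + Convert.ToBase64String(pngBytes);
}
```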

Now, the next step will be displaying these two images on the Blazor web app.

API on Azure Function

When we publish our Blazor app to Azure Static Web Apps, we get fast hosting of our web app and scalable APIs. Azure Static Web Apps is designed to host applications whose API and frontend source code live on GitHub.

The purpose of the API in this project is to retrieve the jigsaw puzzle images and verify user submissions. We don't need a full server for our API because Azure Static Web Apps hosts APIs in Azure Functions. So we implement our API as Azure Functions here.

We will have two API methods. The first one retrieves the jigsaw puzzle images, as shown below.

[FunctionName("JigsawPuzzleGet")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "jigsaw-puzzle")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    var availablePuzzleImageUrls = await _puzzleImageService.GetAllImageUrlsAsync();

    var random = new Random();
    string selectedPuzzleImageUrl = availablePuzzleImageUrls[random.Next(availablePuzzleImageUrls.Count)];

    var jigsawPuzzle = _puzzleService.CreateJigsawPuzzle(selectedPuzzleImageUrl);
    _captchaStorageService.Save(jigsawPuzzle);

    return new OkObjectResult(jigsawPuzzle);
}

The Azure Function first retrieves all the image URLs from Azure Blob Storage and then randomly picks one to use in the jigsaw puzzle generation.

Before it returns the puzzle images in a jigsawPuzzle object, it also saves the puzzle into Azure Table Storage so that later, when users submit their answers, another Azure Function can verify whether they solved the puzzle correctly.

In Azure Table Storage, we generate a GUID and store it together with the randomly generated location of the shaded area, as well as an expiry date and time, so that users must solve the puzzle within a limited time.

...
var tableClient = new TableClient(...);

...

var entity = new JigsawPuzzleEntity
{
    PartitionKey = ...,
    RowKey = id,
    Id = id,
    X = x,
    Y = y,
    CreatedAt = createdAt,
    ExpiredAt = expiredAt
};

tableClient.AddEntity(entity);
...

Here, the GUID is used as the RowKey in Table Storage. Hence, later, when a user submits an answer, the GUID is sent back to the Azure Function to locate the corresponding record in Table Storage.

[FunctionName("JigsawPuzzlePost")]
public async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "jigsaw-puzzle")] HttpRequest req,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    var body = await new StreamReader(req.Body).ReadToEndAsync();
    var puzzleSubmission = JsonSerializer.Deserialize<PuzzleSubmissionViewModel>(body, new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase });

    var correspondingRecord = await _captchaStorageService.LoadAsync(puzzleSubmission.Id);

    ...

    bool isPuzzleSolved = _puzzleService.IsPuzzleSolved(...);

    var response = new Response 
    {
        IsSuccessful = isPuzzleSolved,
        Message = isPuzzleSolved ? "The puzzle is solved" : "Sorry, time runs out or you didn't solve the puzzle"
    };

    return new OkObjectResult(response);
}

Since our API is hosted as an Azure Function on the Consumption Plan, as shown in the screenshot below, note that our code in the Function runs in serverless mode, i.e. it effectively scales out to meet whatever load it is seeing and scales down when code isn't running.

The Azure Function managed by the Static Web App will be in Consumption Plan.

Since the Function runs in serverless mode, we have the issue of serverless cold start. There is a latency users must wait through, i.e. the period from when an event happens to when the function has started up and completed responding to the event. More precisely, a cold start is an increase in latency for Functions that haven't been called recently.

Latency will be there when Function is cold. (Image Source: Microsoft Azure Blog)

In this project, a friend of mine reported that he had encountered at least 15 seconds of latency before the jigsaw puzzle loaded.

Blazor Frontend

Now we can move on to the frontend.

To show the jigsaw puzzle images when the page is loaded, we have the following code.

protected override async Task OnInitializedAsync()
{
    // GetFromJsonAsync needs the DTO type to deserialise into.
    var jigsawPuzzle = await http.GetFromJsonAsync<JigsawPuzzle>("api/jigsaw-puzzle");
    id = jigsawPuzzle.Id;
    backgroundImage = "data:image/png;base64, " + jigsawPuzzle.BackgroundImage;
    missingPieceImage = jigsawPuzzle.MissingPieceImage;
    y = jigsawPuzzle.Y;
}

Take note that we get not only the two images but also the GUID of the jigsaw puzzle record in Azure Table Storage, so that later we can send this information back to the Azure Function for submission verification.

Here, we only return the y-axis value of the shaded area's location because users are only allowed to drag the missing piece horizontally, as discussed earlier. If you would like to increase the difficulty of the CAPTCHA by also allowing users to drag the missing piece vertically, you can choose not to return the y-axis value.

We then have the following HTML to display the two images.

<div style="margin: 0 auto; padding-left: @(x)px; padding-top: @(y)px; width: 696px; height: 442px; background-image: url('@backgroundImage'); background-size: contain;">
    <div style="width: 88px; height: 80px; background-image: url('data:image/png;base64, @missingPieceImage');">
            
    </div>
</div>

We also have a slider, which is bound to the x variable, and a button to submit both the value of x and the GUID back to the Azure Function.

<div style="margin: 0 auto; width: 696px; text-align: center;">
    <input type="range" min="0" max="608" style="width: 100%;" @bind="@x" @bind:event="oninput" />
    <button type="button" @onclick="@Submit">Submit</button>

</div>

The Submit method, shown below, reports to users whether they solved the jigsaw puzzle correctly. Here I use a toast library for Blazor written by Chris Sainty, a Microsoft MVP.

private async Task Submit()
{
    var submission = new PuzzleSubmissionViewModel
    {
        Id = id,
        X = x
    };
    
    var response = await http.PostAsJsonAsync("api/jigsaw-puzzle", submission);

    var responseMessage = await response.Content.ReadFromJsonAsync<Response>();

    if (responseMessage.IsSuccessful)
    {
        toastService.ShowSuccess(responseMessage.Message);
    }
    else
    {
        toastService.ShowError(responseMessage.Message);
    }
}

Now we can test how our app works!

Testing Locally

Before we can test locally, we need to provide the secrets and relevant settings to access Azure Blob Storage and Table Storage.

We first need a file called local.settings.json in the root of the Api project, with the following content. (Remember to set "Copy to Output Directory" to "Copy if newer" for this file.)

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "CaptchaStorageEndpoint": "...",
    "CaptchaStorageTableName": "...",
    "CaptchaStorageAccountName": "...",
    "CaptchaStorageAccessKey": "...",
    "ImageBlobStorageConnectionString": "...",
    "ImageBlobContainerName": "..."
  },
  "Host": {
    "LocalHttpPort": 7071,
    "CORS": "*"
  }
}

The CORS setting is necessary as well; otherwise our Blazor app cannot access the API when we test the web app locally. We don't have to worry about CORS after publishing to Azure Static Web Apps, because it automatically configures the app so that it can communicate with the API on Azure through a reverse proxy.

In addition, please remember to exclude local.settings.json from source control.

In the Client project, since we are going to run our Api at port 7071, we should let the Client know too. To do so, we first specify the base address for local development in the Program.cs of the Client project.

builder.Services.AddScoped(sp => new HttpClient { BaseAddress = new Uri(builder.Configuration["API_Prefix"] ?? builder.HostEnvironment.BaseAddress) });

Then we can specify the value for API_Prefix in the appsettings.Development.json in the wwwroot folder.

{
    "API_Prefix": "http://localhost:7071"
}

Finally, please also set both the Api and Client projects as startup projects in Visual Studio.

Setting multiple startup projects in Visual Studio.

Deploy to Azure Static Web App

After we have created an Azure Static Web Apps resource and bound it to a GitHub Actions workflow that monitors our GitHub repository, the workflow will automatically build and deploy our app and its API to Azure every time we commit or create pull requests to the watched branch. The steps are described in my previous blog post about Blazor on Azure Static Web Apps, so I won't repeat them here.

Since our API needs the secrets and connection settings for Azure Storage, we need to specify them under Application Settings of the Azure Static Web App as well. The values will then be accessible to the API methods in the Azure Functions.

Managing the secrets in Application Settings.

Yup, that’s all for implementing a jigsaw puzzle CAPTCHA in .NET. Feel free to try it out on my Azure Static Web App and let me know your thoughts about it. Thank you!

Jigsaw puzzle CAPTCHA implementation on Blazor. (Try it here)

References

The code of this Blazor project described in this article can be found in my GitHub repository: https://github.com/goh-chunlin/Lunar.JigsawPuzzleCaptcha.

Publish a Blazor Web App as Azure Static Web App

In 2018, the web framework Blazor was introduced. With Blazor, we can build web UI with C# instead of JavaScript. Blazor can run client-side C# code directly in the browser using WebAssembly.

When server-side rendering is not required, we can deploy our web app on platforms such as Azure Static Web Apps, a service that automatically builds and deploys full-stack web apps to Azure from a code repository such as GitHub.

In this article, I will share how the website for the Singapore .NET Developers Community and Azure Community was rebuilt as a Blazor web app and deployed to Azure.

PROJECT GITHUB REPOSITORY

The complete source code of this project can be found at https://github.com/sg-dotnet/website.

Blazor Web UI

The community website is very simple. It is merely a single-page website with some descriptions and photos about the community, plus a section showing a list of meetup videos from the community's YouTube channels.

We will build the website as a Blazor WebAssembly app.

Firstly, we have index.html defined as follows. Please note that the code snippet below references CSS files that are not shown in this post; the complete and updated project can be viewed in the GitHub repo.

<!DOCTYPE html>
<html>

<head>
    <title>Singapore .NET Developers Community + Azure Community</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no" />

    ...

    <link rel="icon" href="images/favicon.png" type="image/png">
    <link rel="stylesheet" href="css/main.css" />

    <base href="/" />
    <script src="_framework/blazor.webassembly.js"></script>
</head>

<body>
    <div id="app">
        <div style="position:absolute; top:30vh; width:100%; text-align:center">
            <h2>Welcome to dotnet.sg</h2>
            <div style="width: 50%; display: inline-block; height: 20px;">
                <div class="progress-line"></div>
            </div>
            
            <p>
                The website is loading...
            </p>
        </div>
    </div>

    <div id="blazor-error-ui">
        An unhandled error has occurred.
        <a href="" class="reload">Reload</a>
        <a class="dismiss">🗙</a>
    </div>

    <!-- Scripts -->
    <script src="javascript/jquery.min.js"></script>
</body>

</html>

Secondly, if we want a similar UI template across all the web pages of the website, we can define the HTML template in, for example, MainLayout.razor, as shown below. With this template, the header and footer sections are shared across different web pages.

@inherits LayoutComponentBase

<!-- Header -->
<header id="header" class="alt">
    <div class="logo"><a href="/">SG <span>.NET + Azure Dev</span></a></div>
</header>

@Body

<!-- Footer -->
<footer id="footer">
    <div class="container">
        <ul class="icons">
           ...
        </ul>
    </div>
    <div class="copyright">
        &copy; ...
    </div>
</footer>

Finally, we simply define the @Body of each web page in its own Razor file, for example Index.razor for the homepage.

In Index.razor, we fetch the data from a JSON file hosted on Azure Storage. The JSON file is periodically updated by an Azure Function that fetches the latest video list from the community's YouTube channel. Instead of using JavaScript, we can simply write C# code to do that directly in the Razor file of the homepage.

@code {
    private List<VideoFeed> videoFeeds = new List<VideoFeed>();

    protected override async Task OnInitializedAsync()
    {
        var allVideoFeeds = await Http.GetFromJsonAsync<VideoFeed[]>("...");

        videoFeeds = allVideoFeeds.ToList();
    }

    public class VideoFeed
    {
        public string VideoId { get; set; }

        public Channel Channel { get; set; }

        public string Title { get; set; }

        public string Description { get; set; }

        public DateTimeOffset PublishedAt { get; set; }
    }

    public class Channel
    {
        public string Name { get; set; }        
    }
}

Publish to Azure Static Web App from GitHub

We have our code ready in a GitHub repo with the following structure.

  • .github/workflows
  • DotNetCommunity.Singapore
    • Client
      • (Blazor client project here)

Next, we can proceed to create a new Azure Static Web App to host our website. In the first step, we can easily link it up with our GitHub account.

We need to specify the deployment details for the Azure Static Web App.

After that, we need to provide the Build details so that a GitHub Actions workflow, which builds and publishes our Blazor web app, can be automatically generated. Hence, we must specify the corresponding folder paths within our GitHub repo, as shown in the screenshot below.

In the Build Details, we must setup the folder path correctly.

The “App location” points to the location of the source code of our Blazor web app. For the “Api location”, although we are not using it in our Blazor project now, we can still set it so that in the future we can easily add the Api folder.

With this setup ready, whenever we update the code in our GitHub repo via commits or pull requests, our Blazor web app will be built and deployed.

Our Blazor web app is being built in GitHub Actions.

Custom Domains

For the free tier of Azure Static Web Apps, we are only allowed 2 custom domains per app. The good news is that Azure Static Web Apps automatically provides a free SSL/TLS certificate for the auto-generated domain name and any custom domains we add.

CNAME record validation is the recommended way to add a custom domain; however, it only works for subdomains, such as “www.dotnet.sg” in our case.

For the root domain, which is “dotnet.sg” in our case, in principle we can do it in Azure Static Web Apps by using TXT record validation and an ALIAS record.

Take note that we can only create an ALIAS record if our domain provider supports it.

However, since the domain provider that I am using currently does not support ALIAS or ANAME records, I have no choice but to use another Azure Function for binding “dotnet.sg”. This is because Azure Static Web Apps currently does not expose an IP address, whereas an Azure Function gives us both an IP address and a Custom Domain Verification ID. With these two pieces of information, we can easily map an A record to our root domain, i.e. “dotnet.sg”.

Please take note that A records are not supported for Consumption-based Function Apps. We must pay for an App Service Plan instead.

IP address and Custom Domain Verification ID on Azure Function. The root domain here is also SSL enabled.

After having the Azure Function ready, we need to perform a URL redirect from “dotnet.sg” to “www.dotnet.sg”. With just a Proxy, we can create a Response Override with Status Code = 302 and add a Location header of https://www.dotnet.sg, as shown in the following screenshot.
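Conceptually, what the portal generates for this Proxy is a proxies.json like the sketch below; the proxy name is my own placeholder, not what the portal actually produces.

```json
{
  "$schema": "http://json.schemastore.org/proxies",
  "proxies": {
    "RedirectToWww": {
      "matchCondition": {
        "route": "/{*path}"
      },
      "responseOverrides": {
        "response.statusCode": "302",
        "response.headers.Location": "https://www.dotnet.sg"
      }
    }
  }
}
```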

HTTP 302 on Azure Function Proxy.

With all these ready, we can finally get our community website up and running at dotnet.sg.

Welcome to the Singapore .NET/Azure Developer Community at dotnet.sg.

Export SSL Certificate For Azure Function

This step is optional. I needed to go through it because I have an Azure App Service managed certificate in one subscription but my Azure Function in another subscription. Hence, I need to export the SSL certificate and then import it into the other subscription.

We can export certificate from the Key Vault Secret.

In the Key Vault Secret screen, we then need to choose the correct secret version and download the certificate, as shown in the following screenshot.

Downloading the certificate as pfx.

After that, as mentioned in an online discussion about exporting and importing an Azure App Service Certificate which has no password, we shall use a tool such as OpenSSL to regenerate a pfx certificate with a password that Azure Function can accept, using the following commands.

> openssl pkcs12 -in .\old.pfx -out old.pem -nodes

> openssl pkcs12 -export -out .\new.pfx -in old.pem

We will be prompted for a password after executing the first command. We simply press Enter to proceed because, as mentioned above, the certificate has no password.

OpenSSL command prompt.
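To rehearse this flow end-to-end without a real Key Vault export, we can simulate it with a throwaway self-signed certificate. All file names and the new password below are placeholders; the last two commands are the same two steps as above, with the prompts replaced by explicit `-passin`/`-passout` flags.

```shell
# Generate a throwaway self-signed cert to stand in for the exported certificate
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=demo"

# Bundle it into a password-less pfx, like the one downloaded from Key Vault
openssl pkcs12 -export -out old.pfx -inkey key.pem -in cert.pem -passout pass:

# The article's two steps: unpack the pfx, then repack it with a real password
openssl pkcs12 -in old.pfx -out old.pem -nodes -passin pass:
openssl pkcs12 -export -out new.pfx -in old.pem -passout pass:MyNewPassword
```

The resulting new.pfx can then be imported where a password-protected certificate is required.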

With this step done, I can finally import the cert into the Azure Function in the other subscription.

Yup, that’s all for hosting our community website as a Blazor web app on Azure Static Web App!

References

The code of this Blazor project described in this article can be found in our community GitHub repository: https://github.com/sg-dotnet/website.

Implement OCR Feature in UWP for Windows 10 and Hololens 2

I have been in the logistics and port industry for more than 3 years. I have also been asked by different business owners and managers about implementing OCR in their business solutions for more than 3 years. This is because it’s not only a challenging topic, but also a very crucial feature in their daily jobs.

For example, truck drivers currently need to manually key container numbers into their systems, which sometimes leads to human errors. Hence, they always ask whether there could be a feature in their mobile app, for example, that extracts the container number directly from merely a photo of the container.

In 2019, I gave a talk about implementing OCR technology in the logistics industry during a tech meetup at Microsoft Tokyo. At that time, I demoed using Microsoft Cognitive Services. Since then many things have changed, so it’s now a good time to revisit this topic.

Pen and paper is still playing an important role in the logistics industry. So, can OCR help in digitalising the industry? (Image Source: Singapore .NET Developers Community YouTube Channel)

Performing OCR Locally with Tesseract

Tesseract is an open-source OCR engine currently developed and led by Ray Smith from Google. The reason I chose Tesseract is that no Internet connection is needed, so OCR can be done quickly without uploading images to the cloud for processing.

In 2016, Hewlett Packard Enterprise senior developer Yoisel Melis created a project which enables developers to use Tesseract in Windows Store apps. However, it was just a proof of concept and has not been updated for about 5 years. Fortunately, there is also a .NET wrapper for Tesseract, by Charles Weld, available on NuGet. With that package, we can now easily implement an OCR feature in our UWP apps.

Currently, I have tried out the following two features offered by Tesseract OCR engine.

  1. Reading text from an image, with a confidence level returned;
  2. Getting the coordinates of the recognised text within the image.
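The two features above can be sketched roughly as follows with Charles Weld’s Tesseract NuGet package; the tessdata folder and image file name are placeholders.

```csharp
using System;
using Tesseract;

// A minimal sketch using the Tesseract NuGet package by Charles Weld.
// "./tessdata" must contain eng.traineddata; file names are placeholders.
using (var engine = new TesseractEngine(@"./tessdata", "eng", EngineMode.Default))
using (var image = Pix.LoadFromFile("container.jpg"))
using (var page = engine.Process(image))
{
    // 1. Read the text together with its confidence level
    Console.WriteLine(page.GetText());
    Console.WriteLine($"Mean confidence: {page.GetMeanConfidence():P1}");

    // 2. Get the bounding box of each recognised word
    using (var iterator = page.GetIterator())
    {
        iterator.Begin();
        do
        {
            if (iterator.TryGetBoundingBox(PageIteratorLevel.Word, out Rect bounds))
            {
                Console.WriteLine(
                    $"{iterator.GetText(PageIteratorLevel.Word)} at ({bounds.X1}, {bounds.Y1})");
            }
        } while (iterator.Next(PageIteratorLevel.Word));
    }
}
```

To recognise another language, swap "eng" for e.g. "chi_sim" (or "eng+chi_sim") after adding the matching traineddata file to the project.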

The following screenshot shows that Tesseract is able to retrieve the container number out from a photo of a container.

Only the “45G1” is misread as “4561”, as highlighted by the orange rectangle. The main container number is correctly retrieved from the photo.

Generally, Tesseract is also good at recognising multiple fonts. However, sometimes we do need to train it on a particular font to improve the accuracy of text recognition. Bogusław Zaręba has written a very detailed tutorial on how to do this, so I won’t repeat the steps here.

Tesseract can also work with multiple languages. To recognise different languages, we simply need to download the corresponding language data files for Tesseract 4 and add them to our UWP project. The following screenshot shows the Chinese text that Tesseract extracted from a screenshot of a Chinese game. The OCR engine performs better on images with less noise, so in this case, some Chinese words are not recognised.

Many Chinese words are still not recognised.

So, how about doing OCR with Azure Cognitive Services? Will it perform better than Tesseract?

Performing OCR on Microsoft Azure

On Azure, Computer Vision is able to analyse content in images and videos. Similar to Tesseract, Azure Computer Vision can extract printed text written in different languages and styles from images. It currently offers a free tier which allows us 5,000 free transactions per month. Hence, if you would like to try out the Computer Vision APIs, you can start with the free tier.

So, let’s see how well the Azure OCR engine can recognise the container number shown in the container image above.

Our UWP app can run on the Hololens 2 Emulator.

As shown in the screenshot above, not only the container number but also the text “45G1” is correctly retrieved by the Computer Vision OCR API. The only downside of the API is that we need to upload the photo to the cloud first, and it may then take one to two minutes to process the image.
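For reference, a call to the Computer Vision Read API with the official client SDK looks roughly like the sketch below; the endpoint, key, and image file name are placeholders.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

// A minimal sketch using the Microsoft.Azure.CognitiveServices.Vision.ComputerVision
// NuGet package; endpoint, key, and file name are placeholders.
public static async Task ReadTextAsync(string imagePath)
{
    var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("<your-key>"))
    {
        Endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
    };

    using (var imageStream = System.IO.File.OpenRead(imagePath))
    {
        // Submit the image; the Read API processes it asynchronously in the cloud.
        var headers = await client.ReadInStreamAsync(imageStream);
        string operationId = headers.OperationLocation.Split('/').Last();

        // Poll until the cloud-side processing completes.
        ReadOperationResult result;
        do
        {
            await Task.Delay(1000);
            result = await client.GetReadResultAsync(Guid.Parse(operationId));
        } while (result.Status == OperationStatusCodes.Running ||
                 result.Status == OperationStatusCodes.NotStarted);

        foreach (var page in result.AnalyzeResult.ReadResults)
            foreach (var line in page.Lines)
                Console.WriteLine(line.Text);
    }
}
```

The polling loop is what makes the processing feel slow compared to Tesseract running locally: the image has to travel to the cloud and back before any text is returned.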

Computer Vision OCR can also recognise non-English words, such as Korean characters, as shown in the screenshot below. So next time we can travel the world without worry, with Hololens translating the local languages for us.

With Hololens, now I can know what I’m ordering in a Korean restaurant. I want 돼지갈비 (BBQ Pork)~

Conclusion

That’s all for my small little experiment with the two OCR engines, i.e. Tesseract and Azure Computer Vision. Depending on your use cases, you can further tune the engine and the UWP app above to make the app work smarter in your business.

Currently I am still having a problem using Tesseract on the Hololens 2 Emulator. If you know how to solve this problem, please let me know. Thanks in advance!

I have uploaded the project source code of the UWP app to GitHub. Feel free to contribute to the project.

Together, we learn better.

References