Migrate to TLS 1.2 for Azure Blob Storage

Objective

In November 2023, Azure announced through an email notification that, starting from 31 October 2024, all interactions with its services must be secured using Transport Layer Security (TLS) version 1.2 or later. After this date, support for TLS versions 1.0 and 1.1 will be discontinued.

By default, Azure Storage already supports TLS 1.2 on public HTTPS endpoints. However, some companies are still using TLS 1.0 or 1.1. To maintain their connections to Azure Storage, they have to update their OS and apps to support TLS 1.2.
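For example, a legacy .NET Framework app that is pinned to the older protocols can opt in to TLS 1.2 with a one-line change at startup. The sketch below is only an illustration; as we will see later, apps on modern .NET should instead rely on the OS defaults.

using System.Net;

// Opt in to TLS 1.2 while keeping any protocols that are already enabled.
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;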

About TLS

The history of TLS can be traced back to SSL.

SSL stands for “Secure Sockets Layer,” and it was developed by Netscape in the 1990s. SSL was one of the earliest cryptographic protocols developed to provide secure communication over a computer network.

SSL has been found to have several vulnerabilities over time, and these issues have led to its deprecation in favor of more secure protocols like TLS. In 1999, TLS 1.0 was introduced as an improvement over SSL. Nowadays, while the term “SSL” is still commonly used colloquially to refer to the broader category of secure protocols, it typically means TLS.

When we see “https://” in the URL and the padlock icon, it means that the website is using either TLS or SSL to encrypt the connection.

While TLS addressed some SSL vulnerabilities, it still had weaknesses, and over time, security researchers identified new threats and attacks. Subsequent versions of TLS, i.e. TLS 1.1, TLS 1.2, and TLS 1.3, were developed to further enhance security and address vulnerabilities.

Why TLS 1.2?

By the mid-2010s, it became increasingly clear that TLS 1.2 was a more secure choice, and we were encouraged to upgrade our systems to support it instead. TLS 1.2 introduced new and stronger cipher suites, including Advanced Encryption Standard (AES) cipher suites, providing better security compared to older algorithms.

Older TLS versions (1.0 and 1.1) are deprecated and removed to meet regulatory standards from NIST (National Institute of Standards and Technology). (Photo Credit: R. Jacobson/NIST)

Ten years after TLS 1.2 was officially released as a standardised protocol in 2008, TLS 1.3 was introduced by the Internet Engineering Task Force (IETF) in 2018.

The coexistence of TLS 1.2 and TLS 1.3 is currently part of a transitional approach, allowing organisations to support older clients that may not yet have adopted TLS 1.3.

For Microsoft Azure, if the services we are using still have a dependency on TLS 1.0 or 1.1, we are advised to migrate them to TLS 1.2 or 1.3 by 31 October 2024.

Monitoring TLS Version of Requests

Before we enforce a minimum TLS version, we should set up logging to make sure that our Azure policy is working as intended. Here, we will be using Azure Monitor.

For demonstration purposes, we will create a new Log Analytics workspace called “LunarTlsAzureStorage”.

In this article, we will only be logging requests for the Blob Storage; hence, we will set up the diagnostics of the Storage Account as shown in the screenshot below.

Adding new diagnostic settings for blob.

In the next step, we need to specify that we would like to collect logs of only the read and write requests of the Azure Blob Storage. After that, we will send the logs to the Log Analytics workspace we have just created above.

Creating a new diagnostic setting for our blob storage.

After we have created the diagnostic setting, requests to the storage account are subsequently logged according to that setting.

As demonstrated in the following screenshot, we use the query below to find out how many requests were made against our blob storage with different versions of TLS over the past seven days.
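The following is a minimal sketch in Kusto Query Language (KQL), assuming the diagnostic setting writes to the standard StorageBlobLogs table:

StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize RequestCount = count() by TlsVersion
| order by TlsVersion asc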

There are only TLS 1.2 requests for the “gclstorage” blob storage.

Verify with Telerik Fiddler

Fiddler is a popular web debugging proxy tool that allows us to monitor, inspect, and debug HTTP traffic between our machine and the Internet. Fiddler can thus be used to inspect and analyze both TLS and SSL requests.

We can refer to the Fiddler trace to confirm that the correct version of TLS 1.2 was used to send the request to the blob storage “gclstorage”, as shown in the following screenshot.

Internally, TLS 1.2 is versioned as SSL 3.3, which is why the trace states that the version is 3.3.

Enforce the Minimum Accepted TLS Version

Currently, before the November 2024 enforcement, the minimum TLS version accepted by a storage account is set to TLS 1.0 by default.

At most, we can only set version 1.2 as the minimum TLS version.

In advance of the deprecation date, we can enable an Azure policy to enforce TLS 1.2 as the minimum TLS version. Hence, we can now update the value to 1.2 so that we reject all requests from clients that are sending data to our Azure Storage with TLS 1.0 or 1.1.
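Besides the Azure portal, the same setting can also be applied from the Azure CLI; the sketch below assumes our storage account is “gclstorage”, and the resource group name is a placeholder:

az storage account update \
    --name gclstorage \
    --resource-group <resource-group> \
    --min-tls-version TLS1_2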

Change in Kestrel for ASP.NET Core

Meanwhile, Kestrel, the cross-platform web server for ASP.NET Core, now also uses the system default TLS protocol versions rather than restricting connections to the TLS 1.1 and TLS 1.2 protocols like it did previously.

Thus, if we are running our apps on the latest Windows servers, then the latest TLS should be automatically used by our apps without any configuration from our side.

In fact, according to the TLS best practices guide from Microsoft, we should not specify the TLS version. Instead, we should configure our code to let the OS decide on the TLS version for us.
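As a minimal sketch, an ASP.NET Core app can make this explicit in Kestrel by setting SslProtocols to None, which defers the protocol choice to the OS:

using System.Security.Authentication;

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(kestrel =>
{
    kestrel.ConfigureHttpsDefaults(https =>
    {
        // SslProtocols.None means "let the operating system pick",
        // in line with Microsoft's TLS best practices.
        https.SslProtocols = SslProtocols.None;
    });
});

var app = builder.Build();
app.MapGet("/", () => "Hello over TLS!");
app.Run();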

Wrap-Up

To enhance the security stance of Windows users, as of September 2023, the default configuration of the operating system deactivates TLS versions 1.0 and 1.1.

As developers, we should ensure that all apps and services running on Windows are using up-to-date versions that support TLS 1.2 or higher. Hence, prior to the enforcement of TLS updates, we must test our apps in a controlled environment to verify compatibility with TLS 1.2 or later.

While TLS 1.0 and 1.1 will be disabled by default, it is also good to confirm these settings and ensure they align with your security requirements.

By taking these proactive measures, we should be able to have a seamless transition to updated TLS versions, maintaining a secure computing environment while minimising any potential disruptions to applications or services.

Revisit Avalonia UI App Development

Back in April 2018, I had the privilege of sharing about Avalonia UI app development with the Singapore .NET Developers Community. At the time, Avalonia was still in its early stages, exclusively tailored for the creation of cross-platform desktop applications. Fast forward to the present, five years since my initial adventure with Avalonia, there has been a remarkable transformation in this technology landscape.

In July 2023, Avalonia v11 was announced. It is a big release with mobile development support for iOS and Android, and WebAssembly support to allow running directly in the browser.

In this article, I will share my new development experience with Avalonia UI.

About Avalonia UI

Avalonia UI, one of the .NET Foundation projects, is an open-source, cross-platform UI framework designed for building native desktop apps. It has been described as the spiritual successor to WPF (Windows Presentation Foundation), enabling our existing WPF apps to run on macOS and Linux without expensive and risky rewrites.

Platforms supported by Avalonia.

Like WPF and Xamarin.Forms, Avalonia UI also uses XAML for the UI. XAML is a declarative markup language that simplifies UI design and separates the UI layout from the application’s logic. As with WPF, Avalonia also encourages the Model-View-ViewModel (MVVM) design pattern for building apps.

Hence, WPF developers will find the transition to Avalonia relatively smooth because they can apply their knowledge of XAML and WPF design patterns to create UI layouts in Avalonia easily. With Avalonia, they can reuse a significant portion of their existing WPF code when developing cross-platform apps. This reusability can save time and effort in the development process.
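To illustrate, here is a minimal, hypothetical view model; the class and property names are made up, but the INotifyPropertyChanged plumbing is exactly what a WPF developer would expect, and a XAML binding such as {Binding Greeting} works the same way in Avalonia:

using System.ComponentModel;

// A hypothetical view model that can be bound from Avalonia XAML, just like in WPF.
public class GreetingViewModel : INotifyPropertyChanged
{
    private string _greeting = "Welcome to Avalonia!";

    public string Greeting
    {
        get => _greeting;
        set
        {
            if (_greeting == value) return;
            _greeting = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Greeting)));
        }
    }

    public event PropertyChangedEventHandler? PropertyChanged;
}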

Semi.Avalonia Theme

Theming is still a challenge, especially when it comes to developing line-of-business apps with Avalonia UI. According to the community, there are a few professional themes available.

Currently, I have only tried out Semi.Avalonia.

Semi.Avalonia is a theme inspired by Semi Design, a design system designed and currently maintained by Douyin. The reason why I chose Semi.Avalonia is that there is a demo app demonstrating all of the general controls and styles available to develop Avalonia apps.

There is a demo executable available for us to play around with Semi Avalonia Themes.

XAML Previewer for Avalonia

In September 2023, the .NET Foundation announced on the social network X that Avalonia UI also offers a live XAML previewer for Avalonia in Visual Studio Code through an extension.

The Avalonia XAML Previewer offers real-time visualisation of XAML code. With this capability, developers can deftly craft and refine user interfaces, swiftly pinpoint potential issues, and witness the immediate effects of their alterations.

Unlike Visual Studio, VS Code reuses a single preview window. Hence, the previewer refreshes every time we switch between multiple XAML files.

Besides, the Avalonia for Visual Studio Code Extension also contains support for Avalonia XAML autocomplete.

The Avalonia XAML Previewer somehow is not working perfectly on my Surface Go.

C# DevKit

In addition, there is also a new VS Code extension that needs our attention.

In October 2023, Microsoft announced the general availability of C# Dev Kit, a VS Code extension that brings an improved editor-first C# development experience to Linux, macOS, and Windows.

When we install this extension, three other extensions, i.e. the C# extension, the IntelliCode for C# Dev Kit, and the .NET Runtime Install Tool will automatically be installed together.

With C# Dev Kit, we can now manage our projects with the Solution Explorer that we have been very familiar with in Visual Studio.

Besides the normal file explorer, we can now have the Solution Explorer in VS Code too.

Since the IntelliCode for C# Dev Kit extension is installed together, on top of the basic IntelliSense code-completion found in the existing C# extension, we can also get powerful IntelliCode features such as whole-line completions and starred suggestions based on our personal codebase.

AI-assisted IntelliCode predicts the most likely correct method to use in VS Code.

Grafana Dashboard

Next, I would like to talk about the observability of an app.

I attended a Grafana workshop during the GrafanaLive event in Singapore in September 2023.

Observability plays a crucial role in system and app management, allowing us to gain insights into the inner workings of the system, understand its functions, and leverage the data it produces effectively.

In the realm of observability, our first concern is to assess how well the system can gauge its internal status merely by examining its external output. This aspect of observability is crucial for proactive issue detection and troubleshooting, as it allows us to gain a deeper insight into performance and potential problems of the system without relying on manual methods.

Effective observability not only aids in diagnosing problems but also in understanding the system behavior in various scenarios, contributing to better decision-making and system optimisation.

A Grafana engineer shared about the three pillars of observability.

There are three fundamental components of observability, i.e. metrics, logging, and tracing. Monitoring enhances the understanding of system actions by collecting, storing, searching, and analysing metrics from the system.

Prometheus and Grafana are two widely used open-source monitoring tools that, when used together, provide a powerful solution for monitoring and observability. Often, Prometheus collects metrics from various systems and services. Grafana then connects to Prometheus as a data source to fetch these metrics. Finally, we design customised dashboards in Grafana, incorporating the collected metrics.

A simple dashboard collecting metrics from the Avalonia app through HTTP metrics.

We can get started quickly with Grafana Cloud, a hosted version of Grafana, without the need to set up and manage infrastructure components.

On Grafana Cloud, using the “HTTP Metrics”, we are able to easily send metrics directly from our app over HTTP for storage in the Grafana Cloud using Prometheus. Prometheus uses a specific data model for organising and querying metrics, which includes the components as highlighted in the following image.

Prometheus metrics basic structure.

Thus, in our Avalonia project, we can easily send metrics to Grafana Cloud with the code below, where apiUrl, userId, and apiKey are given by the Grafana Cloud.

// Requires: using System.Linq; using System.Net.Http; using System.Text;
HttpClient httpClient = new();
httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + userId + ":" + apiKey);

// Flatten the label dictionary into comma-separated tags, e.g. "job=avalonia,env=dev".
string metricLabelsText = metricLabels.Select(kv => $"{kv.Key}={kv.Value}").Aggregate((a, b) => $"{a},{b}");

// Influx-style line protocol: <name>,<tags> <field>=<value>
string metricText = $"{metricName},{metricLabelsText} metric={metricValue}";

HttpContent content = new StringContent(metricText, Encoding.UTF8, "text/plain");

await httpClient.PostAsync(apiUrl, content);
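For instance, with hypothetical values such as metricName being “app_request_count”, a single label job=avalonia, and metricValue being 42, the posted payload would be the single line app_request_count,job=avalonia metric=42.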

Wrap-Up

The complete source code of this project can be found at https://github.com/goh-chunlin/Lunar.Avalonia1. In the Readme file, I have also included both the presentation slide and recording for my presentation in the Singapore .NET Developers Community meetup in October 2023.

My Avalonia app can run on WSLg without any major issues.

Serverless Web App on AWS Lambda with .NET 6

We have a static website for marketing purposes hosted on Amazon S3. S3 offers a pay-as-you-go model, which means we only pay for the storage and bandwidth used. This can be significantly cheaper than traditional web hosting providers, especially for websites with low traffic.

However, S3 is designed as a storage service, not a web server. Hence, it lacks many features found in common web hosting providers. We thus decided to use AWS Lambda to power our website.

AWS Lambda and .NET 6

AWS Lambda is a serverless service that runs backend code without the need to provision or manage servers. Building serverless apps means that we can focus on our web app business logic instead of worrying about managing and operating servers. Similar to S3, Lambda helps to reduce overhead and lets us reclaim time and energy that we can spend on developing our products and services.

Lambda natively supports several programming languages such as Node.js, Go, and Python. In February 2022, the AWS team announced that the .NET 6 runtime can be officially used to build Lambda functions. That means Lambda now also supports C# 10 natively.

To begin, we will set up the following simple architecture to retrieve website content from S3 via Lambda.

Simple architecture to host our website using Lambda and S3.

API Gateway

When we are creating a new Lambda function, we have the option to enable the function URL so that an HTTPS endpoint will be assigned to our Lambda function. With the URL, we can then invoke our function directly through, for example, an Internet browser.

The Function URL feature is an excellent choice when we seek rapid exposure of our Lambda function to the wider public on the Internet. However, if we are in search of a more comprehensive solution, then opting for API Gateway in conjunction with Lambda may prove to be the better choice.

We can configure API Gateway as a trigger for our Lambda function.

Using API Gateway also enables us to invoke our Lambda function with a secure HTTP endpoint. In addition, it can do a bit more, such as managing large volumes of calls to our function by throttling traffic and automatically validating and authorising API calls.

Keeping Web Content in S3

Now, we will create a new S3 bucket called “corewebsitehtml” to store our web content files.

We then can upload our HTML file for our website homepage to the S3 bucket.

We will store our homepage HTML in S3 for the Lambda function to retrieve later.
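For reference, assuming the AWS CLI is configured with suitable credentials, the same bucket setup could be done from the command line; the commands below are a sketch, not what we actually ran:

aws s3 mb s3://corewebsitehtml --region ap-southeast-1
aws s3 cp index.html s3://corewebsitehtml/index.html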

Retrieving Web Content from S3 with C# in Lambda

With our web content in S3, the next issue will be retrieving the content from S3 and returning it as response via the API Gateway.

According to performance evaluations, even though C# is the slowest on a cold start, it is one of the fastest languages when invocations arrive one after another on a warm instance.

The code editor on AWS console does not support the .NET 6 runtime. Thus, we have to install the AWS Toolkit for Visual Studio, so that we can easily develop, debug, and deploy .NET applications using AWS, including the AWS Lambda.

Here, we will use the AWS SDK for reading the file from S3 as shown below.

// Requires: Amazon.Lambda.Core, Amazon.Lambda.APIGatewayEvents, and the AWSSDK.S3 package.
public async Task<APIGatewayProxyResponse> FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
{
    try 
    {
        // Our S3 bucket lives in the Asia Pacific (Singapore) region.
        RegionEndpoint bucketRegion = RegionEndpoint.APSoutheast1;

        using AmazonS3Client client = new(bucketRegion);

        GetObjectRequest s3Request = new()
        {
            BucketName = "corewebsitehtml",
            Key = "index.html"
        };

        using GetObjectResponse s3Response = await client.GetObjectAsync(s3Request);

        using StreamReader reader = new(s3Response.ResponseStream);

        string content = await reader.ReadToEndAsync();

        // Return the HTML to API Gateway as a successful response.
        APIGatewayProxyResponse response = new()
        {
            StatusCode = (int)HttpStatusCode.OK,
            Body = content,
            Headers = new Dictionary<string, string> { { "Content-Type", "text/html" } }
        };

        return response;
    } 
    catch (Exception ex) 
    {
        // Log a warning to CloudWatch before rethrowing, so failures show up in the log stream.
        context.Logger.LogWarning($"{ex.Message} - {ex.InnerException?.Message} - {ex.StackTrace}");

        throw;
    }
}

As shown in the code above, we first need to specify the region of our S3 bucket, which is Asia Pacific (Singapore). After that, we also need to specify our bucket name “corewebsitehtml” and the key of the file from which we are going to retrieve the web content, i.e. “index.html”, as shown in the screenshot below.

Getting file key in S3 bucket.

Deploy from Visual Studio

After we have finished coding the function, we can right-click on our project in Visual Studio and then choose “Publish to AWS Lambda…” to deploy our C# code to the Lambda function, as shown in the screenshot below.

Publishing our function code to AWS Lambda from Visual Studio.

After that, we will be prompted to key in the name of the Lambda function as well as the handler in the format of <assembly>::<type>::<method>.
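For example, if the assembly were CoreWebsite, the class CoreWebsite.Function, and the method FunctionHandler (hypothetical names for illustration), the handler string would be CoreWebsite::CoreWebsite.Function::FunctionHandler.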

Then we are good to proceed to deploy our Lambda function.

Logging with .NET in Lambda Function

Now, when we hit the URL of the API Gateway, we will receive an HTTP 500 internal server error. To investigate, we need to check the error logs.

Lambda logs all requests handled by our function and automatically stores logs generated by our code through CloudWatch Logs. By default, info level messages or higher are written to CloudWatch Logs.

Thus, in our code above, we can use the Logger to write a warning message if the file is not found or there is an error retrieving the file.

context.Logger.LogWarning($"{ex.Message} - {ex.InnerException?.Message} - {ex.StackTrace}");

Hence, if we access our API Gateway URL now, we should find a warning log message in our CloudWatch, as shown in the screenshot below. The page can be accessed from the “View CloudWatch logs” button under the “Monitor” tab of the Lambda function.

Viewing the log streams of our Lambda function on CloudWatch.

From one of the log streams, we can filter the results to list only those with the keyword “warn”. From the log message, we then know that our Lambda function is denied access to our S3 bucket. So, next, we will set up the access accordingly.

Connecting Lambda and S3

Since both our Lambda function and S3 bucket are in the same AWS account, we can easily grant the access from the function to the bucket.

Step 1: Create IAM Role

By default, Lambda creates an execution role with minimal permissions when we create a function in the Lambda console. So, now we first need to create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket.

In the IAM homepage, we head to the Access Management > Roles section to create a new role, as shown in the screenshot below.

Click on the “Create role” button to create a new role.

In the next screen, we will choose “AWS service” as the Trusted Entity Type and “Lambda” as the Use Case so that Lambda function can call AWS services like S3 on our behalf.

Select Lambda as our Use Case.

Next, we need to select the AWS managed policies AWSLambdaBasicExecutionRole and AWSXRayDaemonWriteAccess.

Attaching two policies to our new role.

Finally, in Step 3, we simply need to key in a name for our new role and proceed, as shown in the screenshot below.

We will call our new role “CoreWebsiteFunctionToS3”.

Step 2: Configure the New IAM Role

After we have created this new role, we can head back to the IAM homepage. From the list of IAM roles, we should be able to see the role we have just created, as shown in the screenshot below.

Search for the new role that we have just created.

Since the Lambda function needs to assume the execution role, we need to add lambda.amazonaws.com as a trusted service. To do so, we simply edit the trust policy under the Trust Relationships tab.

Updating the Trust Policy of the new role.

The trust policy should be updated to be as follows.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

After that, we also need to add one new inline policy under the Permissions tab.

Creating new inline policy.

We need to grant this new role list and read access (s3:ListBucket and s3:GetObject) to our S3 bucket (arn:aws:s3:::corewebsitehtml) and its content (arn:aws:s3:::corewebsitehtml/*) with the following policy in JSON. The reason why we grant the list access is so that our .NET code later can tell whether the object exists or not; if we only grant this new role the read access, the AWS S3 SDK will always return 404.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::corewebsitehtml/*",
                "arn:aws:s3:::corewebsitehtml"
            ]
        }
    ]
}

You can switch to the JSON editor, as shown in the following screenshot, to easily paste the JSON above into the AWS console.

Creating inline policy for our new role to access our S3 bucket.

After giving this inline policy a name, for example “CoreWebsiteS3Access”, we can then proceed to create it in the next step. We should now be able to see the policy being created under the Permission Policies section.

We will now have three permission policies for our new role.

Step 3: Set New Role as Lambda Execution Role

So far, we have only set up the new IAM role. Now, we need to configure this new role as the Lambda function’s execution role. To do so, we have to edit the current Execution Role of the function, as shown in the screenshot below.

Edit the current execution role of a Lambda function.

Next, we need to change the execution role to the new IAM role that we have just created, i.e. CoreWebsiteFunctionToS3.

After saving the change above, when we visit the Execution Role section of this function again, we should see that it can already access Amazon S3, as shown in the following screenshot.

Yay, our Lambda function can access S3 bucket now.

Step 4: Allow Lambda Access in S3 Bucket

Finally, we also need to make sure that the S3 bucket policy doesn’t explicitly deny access to our Lambda function or its execution role. We can grant the access with the following policy.

{
    "Version": "2012-10-17",
    "Id": "CoreWebsitePolicy",
    "Statement": [
        {
            "Sid": "CoreWebsite",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::875137530908:role/CoreWebsiteFunctionToS3"
            },
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::corewebsitehtml/*",
                "arn:aws:s3:::corewebsitehtml"
            ]
        }
    ]
}

The JSON policy above can be entered in the Bucket Policy section, as demonstrated in the screenshot below.

Simply click on the Edit button to input our new bucket policy.

Setup Execution Role During Deployment

Since we have updated our Lambda function to use the new execution role, in our subsequent deployments of the function, we should remember to set the role to the correct one, i.e. CoreWebsiteFunctionToS3, as highlighted in the screenshot below.

Please remember to use the correct execution role during the deployment.

After we have done all these, we shall be able to see the web content stored in our S3 bucket displayed when we visit the API Gateway URL in our browser.

Kaizen Journey to be Microsoft Certified

In rapidly evolving fields like software development, staying static in terms of technical skills and knowledge can quickly lead to obsolescence. Hence, the ability to learn independently is a crucial skill in a rapidly changing world. Self-learning allows software developers to acquire new skills and deepen their knowledge in specific areas of interest.

Renew my Azure Developer Associate Certificate

In September, I was on a business trip to Hanoi, Vietnam. I thus decided to take the opportunity of my time staying in the hotel after work to prepare for my Microsoft certificate renewal assessment.

To Hanoi, from Singapore!

Well, it took me some time to hit refresh on the latest updates in Microsoft Azure because at Samsung, I don’t work with it daily. Fortunately, thanks to Microsoft Learn, I was able to quickly pick up the new knowledge after going through the online resources on the Microsoft Learn platform.

As usual, I took notes on what I learned from Microsoft Learn. This year, the exam focuses on the following topics.

  • Microsoft Identity Platform;
  • Azure Key Vault;
  • Azure App Configuration and Monitoring;
  • Azure Container Apps;
  • CosmosDB.

I did pretty well in all the topics above with the exception of Azure Container Apps, where my responses to questions related to Azure Container Registry were unfortunately incorrect. However, I am pleased to share that despite this challenge, I successfully passed the renewal assessment on my first attempt.

Achieving success in my Azure exam at midnight in Hanoi.

Participating in the AI Skills Challenge

Last month, I also participated in an online Microsoft event, the Microsoft Learn AI Skills Challenge, where we could choose to complete one of four challenges: the Machine Learning Challenge, the Cognitive Services Challenge, the Machine Learning Operations (MLOps) Challenge, and the AI Builder Challenge.

The AI Builder Challenge introduces us to AI Builder, a Microsoft Power Platform capability that provides AI models designed to optimise business processes.

The challenge shows us how to build models, and explains how we can use them in Power Apps and Power Automate. Throughout the online course, we can learn how to create topics, custom entities, and variables to capture, extract, and store information in a bot.

Why Take the Microsoft AI Challenge?

Users log in to the Samsung app using face recognition technology from Microsoft AI. (Image Credit: cyberlink.com)

Since last year, I have been working on the AI module in a Samsung app. I am proud to have the opportunity to learn about Microsoft AI and use it in our project to, for example, allow users to log in to our app using the face recognition feature in Microsoft AI.

Therefore, embracing this challenge provides me with a valuable opportunity to gain a deeper understanding of Microsoft AI, with a specific focus on the AI Builder. The AI Builder platform empowers us to create models tailored to our business requirements or to opt for prebuilt models designed to seamlessly address a wide array of common business scenarios.

In August, I finally completed the challenge and received my certificate from Microsoft.

Wrap-Up

By adopting a growth mindset, applying Kaizen principles, and following a structured learning plan, we can embark on our self-learning journey and emerge as a certified professional.

Besides Microsoft Learn, depending on what you’d like to learn, you can enroll in other online courses on platforms like Coursera, Udemy, and edX, which offer comprehensive courses with video lectures, quizzes, and labs.

Once you have chosen your certification, create a structured learning plan. You can then proceed to outline the topics covered in the exam objectives and allocate specific time slots for each.

Anyway, remember: continuous learning is the path to excellence, and getting certified is only one of the steps in that direction. Just as software development involves iterations, so does our learning journey. We shall continuously refine our technical skills and knowledge.