Setup and Access Private RDS Database via a Bastion Host

A common scenario requires cloud engineers to configure infrastructure that allows developers to connect safely and securely to an RDS or Aurora database residing in a private subnet.

For development purposes, some developers tend to assign a public IP address to their databases on AWS as part of the setup. This makes it easy for them to gain access to the database, but it is undoubtedly not a recommended method because it introduces a serious security vulnerability that can compromise sensitive data.

Architecture Design

In order to make our database secure, the approach recommended by AWS is to place the database in a private subnet. Since a private subnet cannot communicate with the public Internet directly, we can isolate our data from the outside world.

Then, to enable developers to connect remotely to our database instance, we will set up a bastion host that allows them to reach the database via SSH tunnelling.

The following diagram describes the overall architecture that we will be setting up for this scenario.

We will configure all of this with a CloudFormation template. We use CloudFormation because it provides a simple way to create and manage a collection of AWS resources by provisioning and updating them in a predictable way.

Step 1: Specify Parameters

In the CloudFormation template, we will be using the following parameters.

Parameters:
  ProjectName:
    Type: String
    Default: my-project
  EC2InstanceType:
    Type: String
    Default: t2.micro
  EC2AMI:
    Type: String
    Default: ami-020283e959651b381 # Amazon Linux 2023 AMI 2023.3.20240219.0 x86_64 HVM kernel-6.1
  EC2KeyPairName:
    Type: String
    Default: my-project-ap-northeast-1-keypair
  MasterUsername:
    Type: String
    Default: admin
  MasterUserPassword:
    Type: String
    AllowedPattern: "[a-zA-Z0-9]+"
    NoEcho: true
  EngineVersion:
    Type: String
    Default: 8.0
  MinCapacity:
    Type: String
    Default: 0.5
  MaxCapacity:
    Type: String
    Default: 1

As you may have noticed in the EC2 parameters, we choose the Amazon Linux 2023 AMI, which is shown in the following screenshot.

We can easily retrieve the AMI ID of an image in the AWS Console.

We are also using a key pair that we have already created, called “my-project-ap-northeast-1-keypair”.

We can locate existing key pairs in the EC2 console.

Step 2: Setup VPC

Amazon Virtual Private Cloud (VPC) is a foundational AWS networking service. It lets us provision a logically isolated section of the AWS cloud in which to launch our AWS resources. A VPC also allows the resources within it to access AWS services without needing to go over the Internet.

When we use a VPC, we have control over our virtual networking environment. We can choose our own IP address range, create subnets, and configure routing and access control lists.

VPC:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 38.0.0.0/16
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-vpc'
      - Key: Project
        Value: !Ref ProjectName

Step 3: Setup Public Subnet, IGW, and Bastion Host

A bastion host is a dedicated server that lets authorised users access a private network from an external network such as the Internet.

A bastion host, also known as a jump server, is used as a bridge between the public Internet and a private subnet in a network architecture. It acts as a gateway that allows secure access from external networks to internal resources without directly exposing those resources to the public.

This setup enhances security by providing a single point of entry that can be closely monitored and controlled, reducing the attack surface of the internal network.

In this step, we will launch an EC2 instance, which is also our bastion host, into our public subnet, defined as follows.

PublicSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    AvailabilityZone: !Select [0, !GetAZs '']
    VpcId: !Ref VPC
    CidrBlock: 38.0.0.0/20
    MapPublicIpOnLaunch: true
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-vpc-public-subnet1'
      - Key: AZ
        Value: !Select [0, !GetAZs '']
      - Key: Project
        Value: !Ref ProjectName

This public subnet will be able to receive connection requests from the Internet. However, we should make sure that our bastion host is only accessible via SSH on port 22.

BastionSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupName: !Sub '${AWS::StackName}-bastion-sg'
    GroupDescription: !Sub 'Security group for ${AWS::StackName} bastion host'
    VpcId: !Ref VPC

BastionAllowInboundSSHFromInternet:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref BastionSecurityGroup
    IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 0.0.0.0/0

CidrIp defines the IP address range that is permitted to send inbound traffic through the security group; 0.0.0.0/0 means the whole Internet. Alternatively, we can restrict connections to certain IP addresses, such as our home or workplace networks. Doing so reduces the risk of exposing our bastion host to unintended outside audiences.
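For example, a hypothetical ingress rule that only accepts SSH from an office network could look like the following sketch; the 203.0.113.0/24 range is a documentation placeholder, so replace it with your actual network.

BastionAllowInboundSSHFromOffice:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !Ref BastionSecurityGroup
    IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 203.0.113.0/24 # Hypothetical office network CIDR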

In order to enable resources in our public subnet, in this case our bastion host, to connect to the Internet, we also need to add an Internet Gateway (IGW). An IGW is a VPC component that allows communication between the VPC and the Internet.

InternetGateway:
  Type: AWS::EC2::InternetGateway
  Properties:
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-igw'
      - Key: Project
        Value: !Ref ProjectName

VPCGatewayAttachment:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    InternetGatewayId: !Ref InternetGateway
    VpcId: !Ref VPC

For outbound traffic, a route table with a route to the IGW is necessary. When resources within a subnet need to communicate with resources outside of the VPC, such as the public Internet or other AWS services, they need a route to the IGW.

PublicRouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-route-table'
      - Key: Project
        Value: !Ref ProjectName

InternetRoute:
  Type: AWS::EC2::Route
  DependsOn: VPCGatewayAttachment
  Properties:
    RouteTableId: !Ref PublicRouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway

SubnetRouteTableAssociationAZ1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PublicRouteTable
    SubnetId: !Ref PublicSubnet

A destination of 0.0.0.0/0 in the DestinationCidrBlock means that all traffic that is trying to access the Internet needs to flow through the target, i.e. the IGW.

Finally, we can define our bastion host EC2 instance with the following template.

BastionInstance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref EC2AMI
    InstanceType: !Ref EC2InstanceType
    KeyName: !Ref EC2KeyPairName
    SubnetId: !Ref PublicSubnet
    SecurityGroupIds:
      - !Ref BastionSecurityGroup
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-bastion'
      - Key: Project
        Value: !Ref ProjectName

Step 4: Configure Private Subnets and Subnet Group

The database instance, as shown in the diagram above, is hosted in a private subnet so that it is securely protected from direct public Internet access.

When we create a database instance, we need to provide something called a subnet group. A subnet group helps deploy our instances across multiple Availability Zones (AZs), providing high availability and fault tolerance. Hence, we need to create two private subnets in order to successfully set up our database cluster.

PrivateSubnet1:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [0, !GetAZs '']
    CidrBlock: 38.0.128.0/20
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-vpc-private-subnet1'
      - Key: AZ
        Value: !Select [0, !GetAZs '']
      - Key: Project
        Value: !Ref ProjectName

PrivateSubnet2:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    AvailabilityZone: !Select [1, !GetAZs '']
    CidrBlock: 38.0.144.0/20
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-vpc-private-subnet2'
      - Key: AZ
        Value: !Select [1, !GetAZs '']
      - Key: Project
        Value: !Ref ProjectName

Even though resources in private subnets should not be directly accessible from the Internet, they still need to communicate with other resources within the VPC. Hence, route tables are necessary to define the routes that enable this internal communication.

PrivateRouteTable1:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-route-table-private-1'
      - Key: Project
        Value: !Ref ProjectName

PrivateSubnetRouteTableAssociationAZ1:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable1
    SubnetId: !Ref PrivateSubnet1

PrivateRouteTable2:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC
    Tags:
      - Key: Name
        Value: !Sub '${AWS::StackName}-route-table-private-2'
      - Key: Project
        Value: !Ref ProjectName

PrivateSubnetRouteTableAssociationAZ2:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    RouteTableId: !Ref PrivateRouteTable2
    SubnetId: !Ref PrivateSubnet2

In this article, as shown in the diagram above, one of the private subnets is not used. The additional subnet makes it easier for us to switch to a Multi-AZ database instance deployment in the future.

After we have defined the two private subnets, we can proceed to configure the subnet group as follows.

DBSubnetGroup:
  Type: 'AWS::RDS::DBSubnetGroup'
  Properties:
    DBSubnetGroupDescription: !Sub 'Subnet group for ${AWS::StackName}-core-db DB Cluster'
    SubnetIds:
      - !Ref PrivateSubnet1
      - !Ref PrivateSubnet2
    Tags:
      - Key: Project
        Value: !Ref ProjectName

Step 5: Define Database Cluster and Instance

As mentioned earlier, we will be using Amazon Aurora. So what is Aurora?

Aurora was introduced to the public in 2014. It is a fully-managed, MySQL- and PostgreSQL-compatible RDBMS with up to 5x the throughput of MySQL and 3x the throughput of PostgreSQL, at 1/10th the cost of commercial databases.

Five years later, in 2019, Aurora Serverless became generally available in several regions across the US, EU, and Japan. Aurora Serverless is a flexible and cost-effective RDBMS option on AWS for apps with variable or unpredictable workloads because it offers an on-demand, auto-scaling way to run Aurora database clusters.

In 2022, Aurora Serverless v2 became generally available, with CloudFormation support.

RDSDBCluster:
  Type: 'AWS::RDS::DBCluster'
  Properties:
    Engine: aurora-mysql
    DBClusterIdentifier: !Sub '${AWS::StackName}-core-db'
    DBSubnetGroupName: !Ref DBSubnetGroup
    NetworkType: IPV4
    VpcSecurityGroupIds:
      - !Ref DatabaseSecurityGroup
    AvailabilityZones:
      - !Select [0, !GetAZs '']
    EngineVersion: !Ref EngineVersion
    MasterUsername: !Ref MasterUsername
    MasterUserPassword: !Ref MasterUserPassword
    ServerlessV2ScalingConfiguration:
      MinCapacity: !Ref MinCapacity
      MaxCapacity: !Ref MaxCapacity

RDSDBInstance:
  Type: 'AWS::RDS::DBInstance'
  Properties:
    Engine: aurora-mysql
    DBInstanceClass: db.serverless
    DBClusterIdentifier: !Ref RDSDBCluster

The ServerlessV2ScalingConfiguration property applies only to Aurora Serverless v2. Here, we configure the minimum and maximum capacities of our database cluster to be 0.5 and 1 ACUs, respectively.

We choose 0.5 for the minimum because that allows our database instance to scale down the most when it is completely idle. For the maximum, we choose the lowest possible value, i.e. 1 ACU, to avoid unexpected charges.

Step 6: Allow Connection from Bastion Host to the Database Instance

Finally, we need to allow the traffic from our bastion host to the database. Hence, our database security group template should be defined in the following manner.

DatabaseSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupName: !Sub '${AWS::StackName}-core-database-sg'
    GroupDescription: !Sub 'Security group for ${AWS::StackName} core database'
    VpcId: !Ref VPC

DatabaseAllowInboundFromBastion:
  Type: AWS::EC2::SecurityGroupIngress
  Properties:
    GroupId: !GetAtt DatabaseSecurityGroup.GroupId
    IpProtocol: tcp
    FromPort: 3306
    ToPort: 3306
    SourceSecurityGroupId: !GetAtt BastionSecurityGroup.GroupId

To connect to the database instance from the bastion host, we need to navigate to the folder containing the private key and perform the following.

ssh -i <private-key.pem> -f -N -L 3306:<db-instance-endpoint>:3306 ec2-user@<bastion-host-ip-address> -vvv

The -L option, in the format port:host:hostport, specifies that connections to the given TCP port on the local host are to be forwarded to the given host and port on the remote side.
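Once the tunnel is established, we can, for instance, point a MySQL client at the forwarded local port; the username below is the one defined in our MasterUsername parameter, and the client invocation is just an illustration.

mysql -h 127.0.0.1 -P 3306 -u admin -p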

We can get the endpoint and port of our DB instance from the AWS Console.

With the command above, we should be able to connect to our database instance via our bastion host, as shown in the screenshot below.

We can proceed to connect to our database instance after reaching this step.

Now, we are able to connect to our Aurora database on MySQL Workbench.

Connecting to our Aurora Serverless database on AWS!

WRAP-UP

That’s all for how we configure the infrastructure described in the diagram above so that we can connect to our RDS databases in private subnets through a bastion host.

I have also attached the complete CloudFormation template below for your reference.

# This is the complete template for our scenario discussed in this article.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Setup and Access Private RDS Database via a Bastion Host'

Parameters:
  ProjectName:
    Type: String
    Default: my-project
  EC2InstanceType:
    Type: String
    Default: t2.micro
  EC2AMI:
    Type: String
    Default: ami-020283e959651b381 # Amazon Linux 2023 AMI 2023.3.20240219.0 x86_64 HVM kernel-6.1
  EC2KeyPairName:
    Type: String
    Default: my-project-ap-northeast-1-keypair
  MasterUsername:
    Type: String
    Default: admin
  MasterUserPassword:
    Type: String
    AllowedPattern: "[a-zA-Z0-9]+"
    NoEcho: true
  EngineVersion:
    Type: String
    Default: 8.0
  MinCapacity:
    Type: String
    Default: 0.5
  MaxCapacity:
    Type: String
    Default: 1

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 38.0.0.0/16
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc'
        - Key: Project
          Value: !Ref ProjectName

  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      AvailabilityZone: !Select [0, !GetAZs '']
      VpcId: !Ref VPC
      CidrBlock: 38.0.0.0/20
      MapPublicIpOnLaunch: true
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc-public-subnet1'
        - Key: AZ
          Value: !Select [0, !GetAZs '']
        - Key: Project
          Value: !Ref ProjectName

  PrivateSubnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [0, !GetAZs '']
      CidrBlock: 38.0.128.0/20
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc-private-subnet1'
        - Key: AZ
          Value: !Select [0, !GetAZs '']
        - Key: Project
          Value: !Ref ProjectName

  PrivateSubnet2:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      AvailabilityZone: !Select [1, !GetAZs '']
      CidrBlock: 38.0.144.0/20
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-vpc-private-subnet2'
        - Key: AZ
          Value: !Select [1, !GetAZs '']
        - Key: Project
          Value: !Ref ProjectName

  InternetGateway:
    Type: AWS::EC2::InternetGateway
    Properties:
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-igw'
        - Key: Project
          Value: !Ref ProjectName

  VPCGatewayAttachment:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      InternetGatewayId: !Ref InternetGateway
      VpcId: !Ref VPC

  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-route-table'
        - Key: Project
          Value: !Ref ProjectName

  InternetRoute:
    Type: AWS::EC2::Route
    DependsOn: VPCGatewayAttachment
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway

  SubnetRouteTableAssociationAZ1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PublicRouteTable
      SubnetId: !Ref PublicSubnet

  PrivateRouteTable1:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-route-table-private-1'
        - Key: Project
          Value: !Ref ProjectName

  PrivateSubnetRouteTableAssociationAZ1:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable1
      SubnetId: !Ref PrivateSubnet1

  PrivateRouteTable2:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-route-table-private-2'
        - Key: Project
          Value: !Ref ProjectName

  PrivateSubnetRouteTableAssociationAZ2:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable2
      SubnetId: !Ref PrivateSubnet2

  BastionSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub '${AWS::StackName}-bastion-sg'
      GroupDescription: !Sub 'Security group for ${AWS::StackName} bastion host'
      VpcId: !Ref VPC

  BastionAllowInboundSSHFromInternet:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !Ref BastionSecurityGroup
      IpProtocol: tcp
      FromPort: 22
      ToPort: 22
      CidrIp: 0.0.0.0/0

  BastionInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref EC2AMI
      InstanceType: !Ref EC2InstanceType
      KeyName: !Ref EC2KeyPairName
      SubnetId: !Ref PublicSubnet
      SecurityGroupIds:
        - !Ref BastionSecurityGroup
      Tags:
        - Key: Name
          Value: !Sub '${AWS::StackName}-bastion'
        - Key: Project
          Value: !Ref ProjectName

  DatabaseSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupName: !Sub '${AWS::StackName}-core-database-sg'
      GroupDescription: !Sub 'Security group for ${AWS::StackName} core database'
      VpcId: !Ref VPC

  DatabaseAllowInboundFromBastion:
    Type: AWS::EC2::SecurityGroupIngress
    Properties:
      GroupId: !GetAtt DatabaseSecurityGroup.GroupId
      IpProtocol: tcp
      FromPort: 3306
      ToPort: 3306
      SourceSecurityGroupId: !GetAtt BastionSecurityGroup.GroupId

  DBSubnetGroup:
    Type: 'AWS::RDS::DBSubnetGroup'
    Properties:
      DBSubnetGroupDescription: !Sub 'Subnet group for ${AWS::StackName}-core-db DB Cluster'
      SubnetIds:
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
      Tags:
        - Key: Project
          Value: !Ref ProjectName

  RDSDBCluster:
    Type: 'AWS::RDS::DBCluster'
    Properties:
      Engine: aurora-mysql
      DBClusterIdentifier: !Sub '${AWS::StackName}-core-db'
      DBSubnetGroupName: !Ref DBSubnetGroup
      NetworkType: IPV4
      VpcSecurityGroupIds:
        - !Ref DatabaseSecurityGroup
      AvailabilityZones:
        - !Select [0, !GetAZs '']
      EngineVersion: !Ref EngineVersion
      MasterUsername: !Ref MasterUsername
      MasterUserPassword: !Ref MasterUserPassword
      ServerlessV2ScalingConfiguration:
        MinCapacity: !Ref MinCapacity
        MaxCapacity: !Ref MaxCapacity

  RDSDBInstance:
    Type: 'AWS::RDS::DBInstance'
    Properties:
      Engine: aurora-mysql
      DBInstanceClass: db.serverless
      DBClusterIdentifier: !Ref RDSDBCluster
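If you would like to try it out, one possible way to create the stack is with the AWS CLI; the file name and password below are placeholders.

aws cloudformation deploy \
    --template-file template.yaml \
    --stack-name my-project \
    --parameter-overrides MasterUserPassword=<your-password>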

[KOSD] Learning from Issues: Troubleshooting Containerisation for .NET Worker Service

Recently, we worked on a project that needs a long-running service for processing CPU-intensive data. We chose to build a .NET worker service because, with .NET, we can make our service cross-platform and run it on Amazon ECS, for example.

Setup

To simplify, in this article, we will be running the following code as a worker service.

using Microsoft.Extensions.Hosting;

using NLog;
using NLog.Extensions.Logging;

Console.WriteLine("Hello, World!");

var builder = Host.CreateApplicationBuilder(args);

var logger = LogManager.Setup()
    .GetCurrentClassLogger();

try
{
    builder.Logging.AddNLog();

    logger.Info("Starting");

    using var host = builder.Build();
    await host.RunAsync();
}
catch (Exception e)
{
    logger.Error(e, "Fatal error to start");
    throw;
}
finally
{
    // Ensure to flush and stop internal timers/threads before application-exit (Avoid segmentation fault on Linux)
    LogManager.Shutdown();
}

So, if we run the code above locally, we should be seeing the following output.

The output of our simplified .NET worker service.

In this project, we are using the NuGet library NLog.Extensions.Logging, thus the NLog configuration is by default read from appsettings.json, which is provided below.

{
  "NLog": {
    "internalLogLevel": "Info",
    "internalLogFile": "Logs\\internal-nlog.txt",
    "extensions": [
      { "assembly": "NLog.Extensions.Logging" }
    ],
    "targets": {
      "allfile": {
        "type": "File",
        "fileName": "C:\\Users\\gclin\\source\\repos\\Lunar.AspNetContainerIssue\\Logs\\nlog-all-${shortdate}.log",
        "layout": "${longdate}|${event-properties:item=EventId_Id}|${uppercase:${level}}|${logger}|${message} ${exception:format=tostring}"
      }
    },
    "rules": [
      {
        "logger": "*",
        "minLevel": "Trace",
        "writeTo": "allfile"
      },
      {
        "logger": "Microsoft.*",
        "maxLevel": "Info",
        "final": "true"
      }
    ]
  }
}

So, we should see two log files generated, with one showing something similar to the console output earlier.

The log file generated by NLog.

Containerisation and the Issue

Since we will be running this worker service on Amazon ECS, we need to containerise it first. The Dockerfile we use is simplified as follows.

Simplified version of the Dockerfile we use.
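Since the Dockerfile is only shown as a screenshot above, here is a minimal sketch of what it roughly looks like; the .NET version tags and the assembly name are assumptions.

FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . .
# Publish a release build of the worker service
RUN dotnet publish -c Release -o /app/publish

# Base .NET runtime image, without ASP.NET Core
FROM mcr.microsoft.com/dotnet/runtime:7.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "Lunar.AspNetContainerIssue.dll"]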

However, when we run the Docker image locally, we receive an error, as shown in the screenshot below, saying “You must install or update .NET to run this application.” But aren’t we already using the .NET runtime, as stated in our Dockerfile?

No framework is found.

In fact, if we read the error message carefully, it is ASP.NET Core that could not be found. This confused us for a moment because this is a worker service project, not an ASP.NET project. So why does it complain about ASP.NET Core?

Solution

This problem happens because one of the NuGet packages in our project relies on the ASP.NET Core runtime being present, as discussed in one of the StackOverflow threads.

We accidentally included the NLog.Web.AspNetCore NuGet package, which only supports the ASP.NET Core platform. This library is not used in our worker service at all.

NLog.Web.AspNetCore supports only ASP .NET platform.
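In other words, the fix is simply to drop that package reference from the project file. A hypothetical excerpt of the change (package versions are assumptions):

<ItemGroup>
  <PackageReference Include="NLog.Extensions.Logging" Version="5.*" />
  <!-- Removed: <PackageReference Include="NLog.Web.AspNetCore" Version="5.*" /> -->
</ItemGroup>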

So, after we remove the reference, we can now run the Docker image successfully.

WRAP-UP

That’s all for how we solved the issue we encountered while developing our .NET worker service.


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

Migrate to TLS 1.2 for Azure Blob Storage

Objective

In November 2023, Azure conveyed through an email notification that, starting from 31st October 2024, all interactions with their services must be safeguarded using Transport Layer Security (TLS) version 1.2 or later. Post this date, their support for TLS versions 1.0 and 1.1 will be discontinued.

By default, Azure Storage already supports TLS 1.2 on public HTTPS endpoints. However, some companies are still using TLS 1.0 or 1.1. Hence, to maintain their connections to Azure Storage, they have to update their OSes and apps to support TLS 1.2.

About TLS

The history of TLS can be traced back to SSL.

SSL stands for “Secure Sockets Layer,” and it was developed by Netscape in the 1990s. SSL was one of the earliest cryptographic protocols developed to provide secure communication over a computer network.

SSL has been found to have several vulnerabilities over time, and these issues have led to its deprecation in favour of more secure protocols like TLS. In 1999, TLS 1.0 was introduced as an improvement over SSL. Nowadays, while the term “SSL” is still commonly used colloquially to refer to the broader category of secure protocols, it typically means TLS.

When we see “https://” in the URL and the padlock icon, it means that the website is using either TLS or SSL to encrypt the connection.

While TLS addressed some SSL vulnerabilities, it still had weaknesses, and over time, security researchers identified new threats and attacks. Subsequent versions of TLS, i.e. TLS 1.1, TLS 1.2, and TLS 1.3, were developed to further enhance security and address vulnerabilities.

Why TLS 1.2?

By the mid-2010s, it became increasingly clear that TLS 1.2 was a more secure choice, and we were encouraged to upgrade our systems to support it instead. TLS 1.2 introduced new and stronger cipher suites, including Advanced Encryption Standard (AES) cipher suites, providing better security compared to older algorithms.

Older TLS versions (1.0 and 1.1) are deprecated and removed to meet regulatory standards from NIST (National Institute of Standards and Technology). (Photo Credit: R. Jacobson/NIST)

Ten years after TLS 1.2 was officially released as a standardised protocol, TLS 1.3 was introduced by the Internet Engineering Task Force (IETF).

The coexistence of TLS 1.2 and TLS 1.3 is currently part of a transitional approach, allowing organisations to support older clients that may not yet have adopted TLS 1.3.

For Microsoft Azure, if the services we are using still have a dependency on TLS 1.0 or 1.1, we are advised to migrate them to TLS 1.2 or 1.3 by 31 October 2024.

Monitoring TLS Version of Requests

Before we enforce that, we should set up logging to make sure that our Azure policy is working as intended. Here, we will be using Azure Monitor.

For demonstration purposes, we will create a new Log Analytics workspace called “LunarTlsAzureStorage”.

In this article, we will only be logging requests for the Blob Storage; hence, we will be setting up the diagnostics of the Storage Account as shown in the screenshot below.

Adding new diagnostic settings for blob.

In the next step, we need to specify that we would like to collect the logs of only read and write requests of the Azure Blob Storage. After that, we will send the logs to the Log Analytics workspace we have just created above.

Creating a new diagnostic setting for our blob storage.

After we have created the diagnostic setting, requests to the storage account are subsequently logged according to that setting.

As demonstrated in the following screenshot, we use the query below to find out how many requests were made against our blob storage with different versions of TLS over the past seven days.
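A query along the following lines should do the job, assuming the diagnostic logs flow into the standard StorageBlobLogs table.

StorageBlobLogs
| where TimeGenerated > ago(7d)
| summarize RequestCount = count() by TlsVersion
| order by TlsVersion desc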

There are only TLS 1.2 requests for the “gclstorage” blob storage.

Verify with Telerik Fiddler

Fiddler is a popular web debugging proxy tool that allows us to monitor, inspect, and debug HTTP traffic between our machine and the Internet. Fiddler can thus be used to inspect and analyze both TLS and SSL requests.

We can refer to the Fiddler trace to confirm that the correct version of TLS 1.2 was used to send the request to the blob storage “gclstorage”, as shown in the following screenshot.

TLS 1.2 is internally versioned as SSL 3.3, thus the trace states that it is version 3.3.

Enforce the Minimum Accepted TLS Version

Currently, before November 2024, the minimum TLS version accepted by a storage account is set to TLS 1.0 by default.

We can at most set version 1.2 as the minimum TLS version.

In advance of the deprecation date, we can enable an Azure policy to enforce a minimum TLS version of 1.2. Hence, we can now update the value to 1.2 so that we reject all requests from clients that send data to our Azure Storage using TLS 1.0 or 1.1.
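For example, with the Azure CLI, the minimum TLS version of a storage account can be updated as shown below; the resource group name is a placeholder.

az storage account update \
    --name gclstorage \
    --resource-group my-resource-group \
    --min-tls-version TLS1_2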

Change in Kestrel for ASP .NET Core

Meanwhile, Kestrel, the cross-platform web server for ASP.NET Core, now also uses the system default TLS protocol versions rather than restricting connections to the TLS 1.1 and TLS 1.2 protocols like it did previously.

Thus, if we are running our apps on the latest Windows servers, then the latest TLS should be automatically used by our apps without any configuration from our side.

In fact, according to the TLS best practices guide from Microsoft, we should not specify the TLS version. Instead, we shall configure our code to let the OS decide on the TLS version for us.
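In .NET, letting the OS decide looks roughly like the sketch below; SslProtocols.None is already the default, so the assignment is shown purely for illustration.

using System.Net.Http;
using System.Security.Authentication;

var handler = new HttpClientHandler
{
    // None means: defer the choice of TLS version to the operating system
    SslProtocols = SslProtocols.None
};
using var client = new HttpClient(handler);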

Wrap-Up

Enhancing the security stance of Windows users, as of September 2023, the default configuration of the operating system deactivates TLS versions 1.0 and 1.1.

As developers, we should ensure that all apps and services running on Windows are using up-to-date versions that support TLS 1.2 or higher. Hence, prior to the enforcement of TLS updates, we must test our apps in a controlled environment to verify compatibility with TLS 1.2 or later.

While TLS 1.0 and 1.1 will be disabled by default, it is also good to confirm these settings and ensure they align with your security requirements.

By taking these proactive measures, we should be able to have a seamless transition to updated TLS versions, maintaining a secure computing environment while minimising any potential disruptions to applications or services.

Revisit Avalonia UI App Development

Back in April 2018, I had the privilege of sharing about Avalonia UI app development with the Singapore .NET Developers Community. At the time, Avalonia was still in its early stages, exclusively tailored for the creation of cross-platform desktop applications. Fast forward to the present, five years since my initial adventure with Avalonia, there has been a remarkable transformation in this technology landscape.

In July 2023, Avalonia v11 was announced. It is a big release with mobile development support for iOS and Android, and WebAssembly support to allow running directly in the browser.

In this article, I will share my new development experience with Avalonia UI.

About Avalonia UI

Avalonia UI, one of the .NET Foundation projects, is an open-source, cross-platform UI framework designed for building native desktop apps. It has been described as the spiritual successor to WPF (Windows Presentation Foundation), enabling our existing WPF apps to run on macOS and Linux without expensive and risky rewrites.

Platforms supported by Avalonia. (Reference)

Like WPF and Xamarin.Forms, Avalonia UI uses XAML for the UI. XAML is a declarative markup language that simplifies UI design and separates the UI layout from the application’s logic. As with WPF, Avalonia also encourages the Model-View-ViewModel (MVVM) design pattern for building apps.

Hence, WPF developers will find the transition to Avalonia relatively smooth because they can apply their knowledge of XAML and WPF design patterns to create UI layouts in Avalonia easily. With Avalonia, they can reuse a significant portion of their existing WPF code when developing cross-platform apps. This reusability can save time and effort in the development process.
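For illustration, a minimal Avalonia window in XAML looks much like its WPF counterpart; the class name below is hypothetical.

<Window xmlns="https://github.com/avaloniaui"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        x:Class="MyApp.MainWindow"
        Title="Hello Avalonia">
    <TextBlock Text="Hello, Avalonia!"
               HorizontalAlignment="Center"
               VerticalAlignment="Center" />
</Window>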

Semi.Avalonia Theme

Theming is still a challenge, especially when it comes to developing line-of-business apps with Avalonia UI. According to the community, there are a few professional themes available.

Currently, I have only tried out Semi.Avalonia.

Semi.Avalonia is a theme inspired by Semi Design, a design system designed and currently maintained by Douyin. The reason why I chose Semi.Avalonia is that there is a demo app demonstrating all of the general controls and styles available for developing Avalonia apps.

There is a demo executable available for us to play around with Semi Avalonia Themes.

XAML Previewer for Avalonia

In September 2023, the .NET Foundation announced on the social network X that Avalonia UI also offers a live XAML previewer for Visual Studio Code through an extension.

The Avalonia XAML Previewer offers real-time visualisation of XAML code. With this capability, developers can deftly craft and refine user interfaces, swiftly pinpoint potential issues, and witness the immediate effects of their alterations.

Unlike Visual Studio, VS Code reuses a single preview window. Hence, the previewer refreshes every time we switch between multiple XAML files.

Besides, the Avalonia for Visual Studio Code Extension also contains support for Avalonia XAML autocomplete.

The Avalonia XAML Previewer somehow is not working perfectly on my Surface Go.

C# DevKit

In addition, there is also a new VS Code extension that needs our attention.

In October 2023, Microsoft announced the general availability of C# Dev Kit, a VS Code extension that brings an improved editor-first C# development experience to Linux, macOS, and Windows.

When we install this extension, three other extensions, i.e. the C# extension, the IntelliCode for C# Dev Kit, and the .NET Runtime Install Tool, are automatically installed together.

With C# Dev Kit, we can now manage our projects with the Solution Explorer that we have been very familiar with in Visual Studio.

Besides the normal file explorer, we can now have the Solution Explorer in VS Code too.

Since the IntelliCode for C# Dev Kit extension is installed together, on top of the basic IntelliSense code-completion found in the existing C# extension, we can also get powerful IntelliCode features such as whole-line completions and starred suggestions based on our personal codebase.

AI-assisted IntelliCode predicts the most likely correct method to use in VSCode.

Grafana Dashboard

Next, I would like to talk about the observability of an app.

I attended a Grafana workshop during the GrafanaLive event in Singapore in September 2023.

Observability plays a crucial role in system and app management, allowing us to gain insights into the inner workings of the system, understand its functions, and leverage the data it produces effectively.

In the realm of observability, our first concern is to assess how well the system can gauge its internal status merely by examining its external output. This aspect of observability is crucial for proactive issue detection and troubleshooting, as it allows us to gain a deeper insight into performance and potential problems of the system without relying on manual methods.

Effective observability not only aids in diagnosing problems but also in understanding the system behavior in various scenarios, contributing to better decision-making and system optimisation.

A Grafana engineer shared the three pillars of observability.

There are three fundamental components of observability, i.e. monitoring, logging, and tracing. Monitoring enhances the understanding of system actions by collecting, storing, searching, and analysing monitoring metrics from the system.

Prometheus and Grafana are two widely used open-source monitoring tools that, when used together, provide a powerful solution for monitoring and observability. Often, Prometheus collects metrics from various systems and services. Grafana then connects to Prometheus as a data source to fetch these metrics. Finally, we design customised dashboards in Grafana, incorporating the collected metrics.

A simple dashboard collecting metrics from the Avalonia app through HTTP metrics.

We can get started quickly with Grafana Cloud, a hosted version of Grafana, without the need to set up and manage infrastructure components.

On Grafana Cloud, using the “HTTP Metrics”, we are able to easily send metrics directly from our app over HTTP for storage in the Grafana Cloud using Prometheus. Prometheus uses a specific data model for organising and querying metrics, which includes the components as highlighted in the following image.

Prometheus metrics basic structure.
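As a made-up illustration of this structure in the Prometheus exposition format, a single sample consists of a metric name, a set of labels, and a value; the names below are hypothetical.

avalonia_http_requests_total{job="avalonia-demo", instance="surface-go"} 42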

Thus, in our Avalonia project, we can easily send metrics to Grafana Cloud with the code below, where apiUrl, userId, and apiKey are given by Grafana Cloud.

HttpClient httpClient = new();
httpClient.DefaultRequestHeaders.Add("Authorization", "Bearer " + userId + ":" + apiKey);

// Flatten the label dictionary into "key1=value1,key2=value2" form
string metricLabelsText = metricLabels.Select(kv => $"{kv.Key}={kv.Value}").Aggregate((a, b) => $"{a},{b}");

// Influx line protocol expected by the Grafana Cloud HTTP metrics endpoint: <name>,<labels> <field>=<value>
string metricText = $"{metricName},{metricLabelsText} metric={metricValue}";

HttpContent content = new StringContent(metricText, Encoding.UTF8, "text/plain");

await httpClient.PostAsync(apiUrl, content);

Wrap-Up

The complete source code of this project can be found at https://github.com/goh-chunlin/Lunar.Avalonia1. In the Readme file, I have also included both the presentation slides and the recording of my presentation at the Singapore .NET Developers Community meetup in October 2023.

My Avalonia app can run on WSLg without any major issues.