Serverless Web App on AWS Lambda with .NET 6

We have a static website for marketing purposes hosted on Amazon S3. S3 offers a pay-as-you-go model, which means we only pay for the storage and bandwidth used. This can be significantly cheaper than traditional web hosting providers, especially for websites with low traffic.

However, S3 is designed as a storage service, not a web server, so it lacks many features found in common web hosting providers. We thus decided to use AWS Lambda to power our website.

AWS Lambda and .NET 6

AWS Lambda is a serverless service that runs backend code without the need to provision or manage servers. Building serverless apps means that we can focus on our web app business logic instead of worrying about managing and operating servers. Similar to S3, Lambda helps to reduce overhead and lets us reclaim time and energy that we can spend on developing our products and services.

Lambda natively supports several programming languages such as Node.js, Go, and Python. In February 2022, the AWS team announced that the .NET 6 runtime can officially be used to build Lambda functions. That means Lambda now also supports C# 10 natively.

To begin, we will set up the following simple architecture to retrieve website content from S3 via Lambda.

Simple architecture to host our website using Lambda and S3.

API Gateway

When we create a new Lambda function, we have the option to enable the function URL so that an HTTP(S) endpoint is assigned to our Lambda function. We can then use this URL to invoke our function directly, for example from an Internet browser.

The Function URL feature is an excellent choice when we want to expose our Lambda function to the public Internet quickly. However, if we are looking for a more comprehensive solution, then using API Gateway in conjunction with Lambda may be the better choice.

We can configure API Gateway as a trigger for our Lambda function.

Using API Gateway also enables us to invoke our Lambda function with a secure HTTP endpoint. In addition, it can do a bit more, such as managing large volumes of calls to our function by throttling traffic and automatically validating and authorising API calls.

Keeping Web Content in S3

Now, we will create a new S3 bucket called “corewebsitehtml” to store our web content files.

We then can upload our HTML file for our website homepage to the S3 bucket.

We will store our homepage HTML in S3 for the Lambda function to retrieve later.

Retrieving Web Content from S3 with C# in Lambda

With our web content in S3, the next step is to retrieve the content from S3 and return it as a response via API Gateway.

According to performance evaluations, even though C# is among the slowest on a cold start, it is one of the fastest languages once the function is warm and invocations happen one after another.

The code editor on the AWS console does not support the .NET 6 runtime. Thus, we have to install the AWS Toolkit for Visual Studio so that we can easily develop, debug, and deploy .NET applications on AWS, including AWS Lambda.

Here, we will use the AWS SDK for reading the file from S3 as shown below.

// Requires using directives for Amazon, Amazon.Lambda.APIGatewayEvents, Amazon.Lambda.Core,
// Amazon.S3, Amazon.S3.Model, System.Net, System.IO, and System.Collections.Generic.
public async Task<APIGatewayProxyResponse> FunctionHandler(APIGatewayProxyRequest request, ILambdaContext context)
{
    try 
    {
        // Our S3 bucket lives in the Asia Pacific (Singapore) region.
        RegionEndpoint bucketRegion = RegionEndpoint.APSoutheast1;

        using AmazonS3Client client = new(bucketRegion);

        // Point the request at our bucket and the object key of the homepage file.
        GetObjectRequest s3Request = new()
        {
            BucketName = "corewebsitehtml",
            Key = "index.html"
        };

        GetObjectResponse s3Response = await client.GetObjectAsync(s3Request);

        // Read the whole HTML file into a string.
        using StreamReader reader = new(s3Response.ResponseStream);
        string content = await reader.ReadToEndAsync();

        // Return the HTML as the body of the API Gateway response.
        APIGatewayProxyResponse response = new()
        {
            StatusCode = (int)HttpStatusCode.OK,
            Body = content,
            Headers = new Dictionary<string, string> { { "Content-Type", "text/html" } }
        };

        return response;
    } 
    catch (Exception ex) 
    {
        // Log the failure to CloudWatch Logs before letting Lambda report the error.
        context.Logger.LogWarning($"{ex.Message} - {ex.InnerException?.Message} - {ex.StackTrace}");

        throw;
    }
}

As shown in the code above, we first need to specify the region of our S3 bucket, which is Asia Pacific (Singapore). After that, we also need to specify our bucket name “corewebsitehtml” and the key of the file from which we are going to retrieve the web content, i.e. “index.html”, as shown in the screenshot below.

Getting file key in S3 bucket.

Deploy from Visual Studio

After we have done the coding of the function, we can right-click on our project in Visual Studio and then choose “Publish to AWS Lambda…” to deploy our C# code to the Lambda function, as shown in the screenshot below.

Publishing our function code to AWS Lambda from Visual Studio.

After that, we will be prompted to key in the name of the Lambda function as well as the handler in the format of <assembly>::<type>::<method>.
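For example, assuming our assembly is named CoreWebsite, our function class is CoreWebsite.Function, and our method is the FunctionHandler shown earlier (these names are illustrative and depend on the actual project), the handler string would look like the following.

CoreWebsite::CoreWebsite.Function::FunctionHandler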

Then we are good to proceed to deploy our Lambda function.

Logging with .NET in Lambda Function

Now, when we hit the URL of the API Gateway, we will receive an HTTP 500 Internal Server Error. To investigate, we need to check the error logs.

Lambda logs all requests handled by our function and automatically stores logs generated by our code through CloudWatch Logs. By default, info level messages or higher are written to CloudWatch Logs.
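For reference, the ILambdaLogger on the context object also exposes methods for the other log levels; a minimal sketch (with placeholder messages) looks like the following.

context.Logger.LogInformation("Fetching index.html from S3...");
context.Logger.LogWarning("Object not found, falling back to a default page.");
context.Logger.LogError("Unable to reach the S3 bucket.");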

Thus, in our code above, we can use the Logger to write a warning message if the file is not found or there is an error retrieving the file.

context.Logger.LogWarning($"{ex.Message} - {ex.InnerException?.Message} - {ex.StackTrace}");

Hence, if we access our API Gateway URL now, we should find a warning log message in CloudWatch, as shown in the screenshot below. The page can be accessed from the “View CloudWatch logs” button under the “Monitor” tab of the Lambda function.

Viewing the log streams of our Lambda function on CloudWatch.

From one of the log streams, we can filter the results to list only those with the keyword “warn”. From the log message, we then learn that our Lambda function is denied access to our S3 bucket. So, next, we will set up the access accordingly.

Connecting Lambda and S3

Since both our Lambda function and S3 bucket are in the same AWS account, we can easily grant the access from the function to the bucket.

Step 1: Create IAM Role

By default, Lambda creates an execution role with minimal permissions when we create a function in the Lambda console. So, now we first need to create an AWS Identity and Access Management (IAM) role for the Lambda function that also grants access to the S3 bucket.

In the IAM homepage, we head to the Access Management > Roles section to create a new role, as shown in the screenshot below.

Click on the “Create role” button to create a new role.

In the next screen, we will choose “AWS service” as the Trusted Entity Type and “Lambda” as the Use Case so that the Lambda function can call AWS services like S3 on our behalf.

Select Lambda as our Use Case.

Next, we need to select the AWS managed policies AWSLambdaBasicExecutionRole and AWSXRayDaemonWriteAccess.

Attaching two policies to our new role.

Finally, in Step 3, we simply need to key in a name for our new role and proceed, as shown in the screenshot below.

We will call our new role “CoreWebsiteFunctionToS3”.

Step 2: Configure the New IAM Role

After we have created this new role, we can head back to the IAM homepage. From the list of IAM roles, we should be able to see the role we have just created, as shown in the screenshot below.

Search for the new role that we have just created.

Since the Lambda function needs to assume the execution role, we need to add lambda.amazonaws.com as a trusted service. To do so, we simply edit the trust policy under the Trust Relationships tab.

Updating the Trust Policy of the new role.

The trust policy should be updated to be as follows.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

After that, we also need to add one new inline policy under the Permissions tab.

Creating new inline policy.

We need to grant this new role list and read access (s3:ListBucket and s3:GetObject) to our S3 bucket (arn:aws:s3:::corewebsitehtml) and its contents (arn:aws:s3:::corewebsitehtml/*) with the following policy in JSON. We grant the list access so that our .NET code can later tell whether an object actually exists: if the role only has read access, S3 hides whether a key exists and reports a missing object as an access error instead of a not-found error. A sketch of how the code can react to this is shown at the end of this step.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::corewebsitehtml/*",
                "arn:aws:s3:::corewebsitehtml"
            ]
        }
    ]
}

You can switch to the JSON editor, as shown in the following screenshot, to easily paste the JSON above into the AWS console.

Creating inline policy for our new role to access our S3 bucket.

After giving this inline policy a name, for example “CoreWebsiteS3Access”, we can then proceed to create it in the next step. We should now be able to see the policy listed under the Permissions Policies section.

We will now have three permission policies for our new role.
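Before moving on, here is the sketch mentioned earlier. It is an illustrative addition rather than part of the deployed function, reusing the client, s3Request, and context objects from the handler above, and shows how the .NET code could distinguish a genuinely missing object from a permission problem once s3:ListBucket is granted.

try
{
    GetObjectResponse s3Response = await client.GetObjectAsync(s3Request);
    // ... read the response stream and build the APIGatewayProxyResponse as before ...
}
catch (AmazonS3Exception ex) when (ex.StatusCode == HttpStatusCode.NotFound)
{
    // With s3:ListBucket granted, a missing key is reported as 404 Not Found.
    context.Logger.LogWarning($"The object {s3Request.Key} does not exist in the bucket.");
    throw;
}
catch (AmazonS3Exception ex) when (ex.StatusCode == HttpStatusCode.Forbidden)
{
    // Without sufficient permissions, S3 reports 403 Access Denied instead.
    context.Logger.LogWarning("Access to the S3 bucket is denied; check the execution role.");
    throw;
}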

Step 3: Set New Role as Lambda Execution Role

So far, we have only set up the new IAM role. Now, we need to configure this new role as the Lambda function’s execution role. To do so, we have to edit the current execution role of the function, as shown in the screenshot below.

Edit the current execution role of a Lambda function.

Next, we need to change the execution role to the new IAM role that we have just created, i.e. CoreWebsiteFunctionToS3.

After saving the change above, when we visit the Execution Role section of this function again, we should see that it can now access Amazon S3, as shown in the following screenshot.

Yay, our Lambda function can access S3 bucket now.

Step 4: Allow Lambda Access in S3 Bucket

Finally, we also need to make sure that the S3 bucket policy doesn’t explicitly deny access to our Lambda function or its execution role. We can explicitly allow the execution role with the following bucket policy.

{
    "Version": "2012-10-17",
    "Id": "CoreWebsitePolicy",
    "Statement": [
        {
            "Sid": "CoreWebsite",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::875137530908:role/CoreWebsiteFunctionToS3"
            },
            "Action": "s3:GetObject",
            "Resource": [
                "arn:aws:s3:::corewebsitehtml/*",
                "arn:aws:s3:::corewebsitehtml"
            ]
        }
    ]
}

The JSON policy above can be entered in the Bucket Policy section, as demonstrated in the screenshot below.

Simply click on the Edit button to input our new bucket policy.

Set Up Execution Role During Deployment

Since we have switched our Lambda function to the new execution role, in subsequent deployments of the function we should remember to set the role to the correct one, i.e. CoreWebsiteFunctionToS3, as highlighted in the screenshot below.

Please remember to use the correct execution role during the deployment.
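This choice can also be persisted in the project so that it is not forgotten between deployments. As an illustrative sketch (the profile, function name, and other values below are placeholders that depend on the actual project), the aws-lambda-tools-defaults.json file used by the AWS Toolkit could record the role alongside the other deployment settings.

{
    "profile": "default",
    "region": "ap-southeast-1",
    "configuration": "Release",
    "function-runtime": "dotnet6",
    "function-memory-size": 256,
    "function-timeout": 30,
    "function-handler": "CoreWebsite::CoreWebsite.Function::FunctionHandler",
    "function-name": "CoreWebsiteFunction",
    "function-role": "arn:aws:iam::875137530908:role/CoreWebsiteFunctionToS3"
}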

After we have done all these steps, we should see the web content stored in our S3 bucket displayed when we visit the API Gateway URL in our browser.

Kaizen Journey to be Microsoft Certified

In rapidly evolving fields like software development, staying static in terms of technical skills and knowledge can quickly lead to obsolescence. Hence, the ability to learn independently is a crucial skill in a rapidly changing world. Self-learning allows software developers to acquire new skills and deepen their knowledge in specific areas of interest.

Renewing My Azure Developer Associate Certificate

In September, I was on a business trip to Hanoi, Vietnam. I thus decided to take the opportunity of my time staying in the hotel after work to prepare for my Microsoft certificate renewal test.

To Hanoi, from Singapore!

Well, it took me some time to hit refresh on the latest updates in Microsoft Azure because at Samsung I don’t work with it daily. Fortunately, thanks to Microsoft Learn, I was able to quickly pick up the new knowledge after going through the online resources on the Microsoft Learn platform.

As usual, I took notes on what I learned from Microsoft Learn. This year, the exam focuses on the following topics.

  • Microsoft Identity Platform;
  • Azure Key Vault;
  • Azure App Configuration and Monitoring;
  • Azure Container Apps;
  • CosmosDB.

I did pretty well in all the topics above with the exception of Azure Container Apps, where my responses to questions related to Azure Container Registry were unfortunately incorrect. However, I am pleased to share that despite this challenge, I successfully passed the renewal assessment on my first attempt.

Achieving success in my Azure exam at midnight in Hanoi.

Participating in the AI Skills Challenge

Last month, I also participated in an online Microsoft event, the Microsoft Learn AI Skills Challenge, where we could choose to complete one of four challenges: the Machine Learning Challenge, the Cognitive Services Challenge, the Machine Learning Operations (MLOps) Challenge, or the AI Builder Challenge.

The AI Builder Challenge introduces us to AI Builder, a Microsoft Power Platform capability that provides AI models designed to optimise business processes.

The challenge shows us how to build models, and explains how we can use them in Power Apps and Power Automate. Throughout the online course, we can learn how to create topics, custom entities, and variables to capture, extract, and store information in a bot.

Why Take the Microsoft AI Challenge?

Users log in to the Samsung app using face recognition technology from Microsoft AI (Image Credit: cyberlink.com)

Since last year, I have been working on the AI module in a Samsung app. I am proud to have the opportunity to learn about Microsoft AI and use it in our project, for example to allow users to log in to our app using the face recognition feature in Microsoft AI.

Therefore, embracing this challenge provides me with a valuable opportunity to gain a deeper understanding of Microsoft AI, with a specific focus on the AI Builder. The AI Builder platform empowers us to create models tailored to our business requirements or to opt for prebuilt models designed to seamlessly address a wide array of common business scenarios.

In August, I finally completed the challenge and received my certificate from Microsoft.

Wrap-Up

By adopting a growth mindset, applying Kaizen principles, and following a structured learning plan, we can embark on our self-learning journey and emerge as certified professionals.

Besides Microsoft Learn, depending on what you’d like to learn, you can enroll in other online courses on platforms like Coursera, Udemy, and edX, which offer comprehensive courses with video lectures, quizzes, and labs.

Once you have chosen your certification, create a structured learning plan. You can then proceed to outline the topics covered in the exam objectives and allocate specific time slots for each.

Anyway, remember, continuous learning is the path to excellence, and getting certification is only one of the steps in that direction. Just as software development involves iterations, so does our learning journey. We shall continuously refine our technical skills and knowledge.

[KOSD] Solving SQL File Encoding Issues on Git with PowerShell

A few days ago, some of our teammates discovered that the SQL files they tried to pull from our GitHub repo had an encoding issue. When they did git pull, there would be an error saying “fatal: failed to encode ‘…/xxxxx.sql’ from UTF-16-LE-BOM to UTF-8”.

In addition, on GitHub, the SQL files we committed were all marked as binary files. Thus, we couldn’t view the changes we made to those files in the commits.

Cause of the Issue

It turns out that those SQL files are generated from SQL Server Management Studio (SSMS).

Default file encoding of SSMS is Western European (Windows) – Codepage 1252.

By default, the encoding used to save SQL files in SSMS is UTF-16. In my case, the default encoding shown is “Western European (Windows) – Codepage 1252”, a single-byte character encoding of the Latin alphabet that was used in Windows for English and many Romance and Germanic languages. Files saved as UTF-16, however, contain null bytes, which causes Git to treat them as binary files.
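To see why, here is a small C# sketch (purely illustrative) comparing the bytes produced by UTF-16 and UTF-8 for the same SQL text; the UTF-16 output is full of null bytes, which is what trips Git’s binary-file detection.

using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        string sql = "SELECT 1;";

        // UTF-16 LE (Encoding.Unicode): every ASCII character is followed by a 0x00 byte.
        byte[] utf16Bytes = Encoding.Unicode.GetBytes(sql);

        // UTF-8: ASCII characters are encoded as single bytes, with no null bytes at all.
        byte[] utf8Bytes = Encoding.UTF8.GetBytes(sql);

        Console.WriteLine(BitConverter.ToString(utf16Bytes)); // 53-00-45-00-4C-00-...
        Console.WriteLine(BitConverter.ToString(utf8Bytes));  // 53-45-4C-45-43-54-...
    }
}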

Solution

The way to resolve this issue is to force the file to use UTF-8 encoding. We can run the following PowerShell script to change the encoding of all SQL files in a given directory and its subdirectories.

# Create a UTF-8 encoding that does not emit a BOM.
$Utf8NoBomEncoding = New-Object System.Text.UTF8Encoding $False

# Re-save every .sql file under the directory (and its subdirectories) as UTF-8 without BOM.
Get-ChildItem "<absolute directory path>" -Recurse *.sql | foreach {
    $FilePath = $_.FullName
    $FileContent = Get-Content $FilePath
    [System.IO.File]::WriteAllLines($FilePath, $FileContent, $Utf8NoBomEncoding)
}

The BOM (Byte Order Mark) is a sequence of bytes at the start of a text stream (0xEF, 0xBB, 0xBF for UTF-8) used to signal the byte order of an encoding. Since byte order is irrelevant to UTF-8, the BOM is unnecessary. This explains why we pass $False to the constructor of UTF8Encoding to indicate that a BOM is not needed.
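The same behaviour can be observed directly in .NET; the short sketch below (illustrative only) prints the preamble, i.e. the BOM bytes, that each UTF8Encoding variant would write.

using System;
using System.Text;

class BomDemo
{
    static void Main()
    {
        // Passing false means "do not emit a BOM", just like $False in the PowerShell script above.
        UTF8Encoding withoutBom = new UTF8Encoding(false);
        UTF8Encoding withBom = new UTF8Encoding(true);

        Console.WriteLine(BitConverter.ToString(withoutBom.GetPreamble())); // prints an empty string
        Console.WriteLine(BitConverter.ToString(withBom.GetPreamble()));    // EF-BB-BF
    }
}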

Wrap-Up

That’s all for a short little PowerShell script we used to solve the encoding issue of our SQL files.

There is an interesting discussion on StackOverflow about this issue; please give it a read too.

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

Improve Life with Codes

In the realm of software development and business practices, not automating processes when automation could bring significant benefits is normally considered a missed opportunity or an inefficient use of resources. It can lead to wasted time, increased chances of errors, and reduced productivity.

Background Story

My teammate encountered a strange issue where a third-party core component in the system, which runs as a Windows service, would stop randomly. The service listens on a certain TCP port, and when the service was down, telnetting to that port would show that the connection was not successful.

After weeks of intensive log investigation, my teammate still could not figure out the reason why it would stop working. However, a glimmer of insight emerged: restarting the Windows service would consistently bring the component back online.

Hence, his solution was to create an alert system which would email him and the team to restart the Windows service whenever it went down. The alert system is basically a scheduler checking the health of the TCP port that the service listens on.

Since my teammate was one of the few who could log in to the server, he had to be on standby during weekends too to restart the Windows service. Not long after that, he submitted his resignation and left the company. Other teammates thus had to take over this manual task of restarting the Windows service.

Auto Restart Windows Service with C#

In order to avoid teammates getting burnt out from manually restarting the Windows service frequently, even at night and during weekends, I decided to develop a C# programme which is executed every 10 minutes on the server. The programme makes a connection to the port the Windows service listens on to check whether the service is running. If it is not, the programme restarts it.

The code is as follows.

// Requires using directives for System, System.Net.Sockets, and System.ServiceProcess.
// serverIpAddress, port, and targetService are assumed to be defined elsewhere (e.g. read from configuration).
try
{
    // Probe the TCP port that the Windows service listens on; a successful connection means the service is healthy.
    using (TcpClient tcpClient = new())
    {
        tcpClient.Connect(serverIpAddress, port);
    }

    Console.WriteLine("No issue...");
}
catch (Exception)
{
    // The port is unreachable, so restart the Windows service within an overall budget of 2 minutes.
    int timeoutMilliseconds = 120000;

    ServiceController service = new(targetService);

    try
    {
        Console.WriteLine("Restarting...");
        int millisec1 = Environment.TickCount;

        TimeSpan timeout = TimeSpan.FromMilliseconds(timeoutMilliseconds);

        if (service.Status != ServiceControllerStatus.Stopped) 
        {
            Console.WriteLine("Stopping...");
            service.Stop();
            service.WaitForStatus(ServiceControllerStatus.Stopped, timeout);
        }

        Console.WriteLine("Stopped!");

        // Spend only the remaining portion of the 2-minute budget waiting for the service to start again.
        int millisec2 = Environment.TickCount;
        timeout = TimeSpan.FromMilliseconds(timeoutMilliseconds - (millisec2 - millisec1));

        Console.WriteLine("Starting...");
        service.Start();
        service.WaitForStatus(ServiceControllerStatus.Running, timeout);

        Console.WriteLine("Restarted!");
    }
    catch (Exception ex) 
    {
        Console.WriteLine(ex.Message);
    }
}

In the programme above, we implement a timeout of 2 minutes. After waiting for the Windows service to stop, we use whatever time remains to wait for the service to come back to the Running status.

After the team launched this programme as a scheduled task, no one had to wake up at midnight just to log in to the server to restart the Windows service anymore.

Converting Comma-Delimited CSV to Tab-Delimited CSV

Soon, we realised another issue. The input files sent to the Windows service for processing had invalid content: the service expects tab-delimited CSV files, but the actual content was comma-delimited. The problem had been there since the previous year, so there were hundreds of files that had not been processed.

In order to save my teammate’s time, I wrote a PowerShell script to do the conversion.

# Convert every comma-delimited CSV in the directory into a tab-delimited CSV.
Get-ChildItem "<directory contains the files>" -Filter *.csv | 
Foreach-Object {
    # Re-export with a temporary header (1 to 9) and a tab delimiter into a temporary file.
    Import-Csv -Path $_.FullName -Header 1,2,3,4,5,6,7,8,9 | Export-Csv -Path ('<output directory>' + $_.BaseName + '_out.tmp') -Delimiter `t -NoTypeInformation 

    # Strip the quotation marks added by Export-Csv and drop the temporary header row.
    Get-Content ('<output directory>' + $_.BaseName + '_out.tmp') | % {$_ -replace '"', ''} | Select-Object -Skip 1 | out-file -FilePath ('<output directory>' + $_.BaseName + '.csv')

    # Remove the intermediate temporary file.
    Remove-Item ('<output directory>' + $_.BaseName + '_out.tmp')
}

The CSV files do not have a header row and they all have 9 columns. That is why I use “-Header 1,2,3,4,5,6,7,8,9” to add a temporary header; otherwise, Import-Csv treats the first line in the file as the header, and it fails if that line has multiple columns with the same value. Adding a temporary header with unique column names avoids this.

When using Export-Csv, all fields in the CSV are enclosed in quotation marks. Hence, we need to remove the quotation marks and remove the temporary header before we generate a tab-delimited CSV file as the output.
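For illustration, assuming a hypothetical first line such as the one below, using it directly as a header would make Import-Csv fail because the value 100 appears in two columns, whereas with the temporary header 1 to 9 every column name is unique and the row is imported as ordinary data.

ID001,Alice,100,2023-01-05,100,SGD,Pending,Branch01,Remark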

With this, my teammate easily transformed all the files to the correct format in less than 5 minutes.

Searching File Content with PowerShell

A few days after that, I found out that another teammate was reading the log files manually to find the lines containing the keyword “access”. I was shocked by what he was doing because there were hundreds of logs every day, and that would mean he needed to spend hours or even days on the task.

Hence, I wrote him another simple PowerShell script just to do the job.

# Print every line in every .log file that contains the keyword "access".
Get-ChildItem "<directory contains the files>" -Filter *.log | 
Foreach-Object {
    Get-Content $_.FullName | % { if($_ -match "access") {write-host $_}}
}

With this, my teammate finally could finish his task early.

Wrap-Up

Automating software development processes is a common practice in the industry because of the benefits it offers. It saves time, reduces errors, improves productivity, and allows the team to focus on more challenging and creative tasks.

From a broader perspective, not automating the process but doing it manually might not be a tragic event in the traditional sense, as it does not involve loss of life or extreme suffering. However, it could be seen as a missed chance for improvement and growth.