Kubernetes CronJob to Send Email via Azure Communication Services

In March 2021, Azure Communication Services was made generally available after being showcased at Microsoft Ignite. Initially, it only provided services such as SMS and voice and video calling. A year later, in May 2022, it added a way to facilitate high-volume transactional emails. However, this email function is currently still in public preview. Hence, the email-related APIs and SDKs are provided without an SLA, and they are thus not recommended for production workloads.

Currently, our Azure account has a set of limitations on the number of email messages that we can send. For all developers, email sending is limited to 10 emails per minute, 25 emails per hour, and 100 emails per day.

Set Up Azure Communication Services

To begin, we need to create a new Email Communication Services resource from the marketplace, as shown in the screenshot below.

US is the only option for the Data Location now in Email Communication Services.

Take note that currently we can only choose United States as the Data Location, which determines where the data will be stored at rest. This cannot be changed after the resource has been created. Consequently, the Azure Communication Services resource, which we need to configure next, must store its data in the United States as well. We will come back to this later.

Once the Email Communication Services resource is created, we can begin by adding a free Azure subdomain. With the “1-click add” function, as shown in the following screenshot, Azure automatically configures the required email authentication protocols based on email authentication best practices.

Click “1-click add” to provision a free Azure managed domain for sending emails.

We will then have a MailFrom address in the format donotreply@xxxx.azurecomm.net which we can use to send emails. We are allowed to change the MailFrom address and From display name to more user-friendly values.

After getting the domain, we need to connect Azure Communication Services to it to send emails.

As mentioned earlier, we need to make sure that the Azure Communication Services resource also has United States as its Data Location. Otherwise, we will not be able to link the email domain to it for email sending.

Successfully connected our email domain. =)

A Simple Console App for Sending Email

Now, we need to create the console app which will be used in our Kubernetes CronJob later to send emails with the Azure Communication Services Email client library.

Before we begin, we have to get the connection string of the Azure Communication Services resource.

Getting the connection string of the Azure Communication Services resource.

Here I have the following code to send a sample email to myself.

using Azure.Communication.Email;
using Azure.Communication.Email.Models;

// Read the connection string and sender address from environment variables.
string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING") ?? string.Empty;
string emailFrom = Environment.GetEnvironmentVariable("EMAIL_FROM") ?? string.Empty;

if (!string.IsNullOrEmpty(connectionString))
{
    EmailClient emailClient = new EmailClient(connectionString);

    // The EmailContent constructor takes the email subject.
    EmailContent emailContent = new EmailContent("Welcome to Azure Communication Service Email APIs.");
    emailContent.PlainText = "This email message is sent from Azure Communication Service Email using .NET SDK.";

    List<EmailAddress> emailAddresses = new List<EmailAddress> {
            new EmailAddress("gclin009@hotmail.com") { DisplayName = "Goh Chun Lin" }
        };
    EmailRecipients emailRecipients = new EmailRecipients(emailAddresses);

    // Compose and send the email; the preview SDK returns a SendEmailResult.
    EmailMessage emailMessage = new EmailMessage(emailFrom, emailContent, emailRecipients);
    SendEmailResult emailResult = emailClient.Send(emailMessage, CancellationToken.None);
}
Setting environment variables for local debugging purposes.
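
If we are not using Visual Studio launch profiles, a quick way to set these two variables for a local test run is via the shell. This is just a sketch assuming a bash shell; the connection string value is a placeholder in the usual endpoint=...;accesskey=... format of Azure Communication Services.

# Placeholders; copy the real values from the Azure portal.
export COMMUNICATION_SERVICES_CONNECTION_STRING="endpoint=https://xxxxxx.communication.azure.com/;accesskey=yyyyyyyyyy"
export EMAIL_FROM="DoNotReply@xxxxxx.azurecomm.net"
dotnet run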

Tada, there should be an email successfully sent out as instructed.

Email is successfully sent and received. =)

Containerise the Console App

Next, we need to containerise the console app above.

Assuming that our console app is called MyConsoleApp, we can prepare a Dockerfile as follows.

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyMedicalEmailSending.csproj", "."]
RUN dotnet restore "./MyConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyConsoleApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyConsoleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]

We can then publish the image to Docker Hub for consumption later, for example with the commands below.
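
For example, assuming we name the image chunlindocker/emailsender with a date-based tag (the same image referenced in the CronJob YAML later), the build and push steps look like the following.

# Build the image from the Dockerfile in the current directory, then push it.
docker build -t chunlindocker/emailsender:v2023-01-25-1600 .
docker push chunlindocker/emailsender:v2023-01-25-1600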

If you prefer to use Azure Container Registry, you can refer to the documentation on how to do it on Microsoft Learn.

Create the CronJob

In Kubernetes, pods are the smallest deployable units of computing that we can create and manage. A pod can have one or more related containers, with shared storage and network resources. Here, we will schedule a job that creates pods running the container image we built above, which, in our case, sends the emails.

The schedule of the cronjob is defined as follows, according to the Kubernetes documentation on the schedule syntax.

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (sun, mon, tue, wed, thu, fri, sat)
# │ │ │ │ │
# * * * * *

Hence, if we would like the email scheduler to be triggered at 8 AM every Friday, we can create a CronJob in the namespace my-namespace with the following YAML file.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
          restartPolicy: OnFailure
  schedule: "0 8 * * fri"
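
Assuming we saved the YAML above as cronjob.yaml (the filename is arbitrary), we can create the CronJob with the standard kubectl apply command.

kubectl apply -f cronjob.yaml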

After the CronJob is created, we can proceed to annotate it with the command below.

kubectl annotate cj email-scheduler jobtype=scheduler frequency=weekly

This helps us to query the cron jobs with jsonpath easily in the future. For example, to list all the cron jobs that carry the jobtype annotation, together with their frequency, we can use the following command.

kubectl get cj -A -o=jsonpath="{range .items[?(@.metadata.annotations.jobtype)]}{.metadata.namespace},{.metadata.name},{.metadata.annotations.jobtype},{.metadata.annotations.frequency}{'\n'}{end}"

Create ConfigMap

In our email sending programme, we have two environment variables. For the non-sensitive one, EMAIL_FROM, we can create a ConfigMap to store it as a key-value pair.

apiVersion: v1
kind: ConfigMap
metadata:
  name: email-sending
  namespace: my-namespace
data:
  EMAIL_FROM: DoNotReply@xxxxxx.azurecomm.net

For the connection string of the Azure Communication Services resource, since it is sensitive data, we will store it in a Secret instead. Secrets are similar to ConfigMaps but are specifically intended to hold confidential data. We will create a Secret with the command below.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o yaml

It should generate a YAML similar to the following, where the value is the Base64-encoded connection string.

apiVersion: v1
kind: Secret
metadata:
  name: azure-communication-service
  namespace: my-namespace
data:
  CONNECTION_STRING: yyyyyyyyyy

Then, the pods created by the CronJob can consume the ConfigMap and Secret above as environment variables. To do so, we update the CronJob YAML file as follows.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
            env:
              - name: EMAIL_FROM
                valueFrom:
                  configMapKeyRef:
                    name: email-sending
                    key: EMAIL_FROM
              - name: COMMUNICATION_SERVICES_CONNECTION_STRING
                valueFrom:
                  secretKeyRef:
                    name: azure-communication-service
                    key: CONNECTION_STRING
          restartPolicy: OnFailure
  schedule: "0 8 * * fri"

Using SealedSecret

The problem with using Secrets is that we cannot really commit them to our code repository, because the data is only encoded, not encrypted. Hence, in order to store our Secrets safely, we can use SealedSecret, which encrypts our Secrets. A SealedSecret can only be decrypted by the controller running in the target cluster.
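
To see why committing a plain Secret is unsafe, note that anyone with the YAML can recover the original value by simply Base64-decoding the data field, for example with the placeholder value from earlier.

# Base64 is an encoding, not encryption; this prints the original connection string.
echo "yyyyyyyyyy" | base64 --decode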

Currently, the SealedSecret Helm Chart is officially supported and hosted on GitHub.

Helm is the package manager for Kubernetes. Helm uses a packaging format called Chart, a collection of files describing a related set of Kubernetes resources. Each Chart comprises one or more Kubernetes manifests. With Charts, developers are able to configure, package, version, and share their apps with their dependencies and sensible defaults.

To install Helm on a Windows 11 machine, we can execute the following commands in the Ubuntu on Windows console.

  1. Download the desired version of the Helm release, for example, version 3.11.0:
    wget https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
  2. Unpack it:
    tar -zxvf helm-v3.11.0-linux-amd64.tar.gz
  3. Move the Helm binary to the desired location:
    sudo mv linux-amd64/helm /usr/local/bin/helm

Once we have successfully downloaded Helm and have it ready, we can add a Chart repository. In our case, we need to add the repo of the SealedSecret Helm Chart.

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
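
After adding the repository, it is a good habit to refresh the local chart index so that the latest chart versions are visible.

helm repo update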

We should be able to locate the SealedSecret chart that we can install with the following command.

helm search repo sealed-secrets
The Chart sealed-secrets/sealed-secrets is one of the Charts we can install.

To install the SealedSecret Helm Chart, we will use the following command.

helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets

Once we have done this, we should be able to locate a new service called sealed-secrets-controller under the Kubernetes services.

The sealed-secrets-controller service is under the kube-system namespace.

Before we can proceed to use kubeseal to create an encrypted Secret, for me at least, there was a need to edit the sealed-secrets-controller service. Otherwise, there will be an error message saying “cannot fetch certificate: no endpoints available for service”. If you also encounter the same issue, simply follow the steps mentioned by ghostsquad to edit the service YAML accordingly.

My final edit of the sealed-secrets-controller service YAML.

Next, we can proceed to encrypt our Secret, as instructed in the SealedSecret GitHub readme.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o json > mysecret-acs.json

kubeseal < mysecret-acs.json > mysealedsecret-acs.json

The generated file mysealedsecret-acs.json should look something as shown below.

The connection string is now encrypted.

To create the SealedSecret resource in the cluster, we simply apply the generated file mysealedsecret-acs.json, as shown below.
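
Assuming kubectl is pointed at the target cluster, the command is as follows.

kubectl create -f mysealedsecret-acs.json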

This generated file mysealedsecret-acs.json is thus safe to be committed to our code repository.

Going Zero-Trust: Using Kamus and InitContainer

Besides SealedSecret, there is another open-source solution known as Kamus, a zero-trust secrets encryption and decryption solution for Kubernetes apps. We can use Kamus to encrypt our secrets and make sure that they can only be decrypted by the desired Kubernetes apps.

Similarly, we can install Kamus using its Helm Chart with the commands below.

helm repo add soluto https://charts.soluto.io
helm upgrade --install kamus soluto/kamus

Kamus encrypts secrets for one specific application only, represented by a ServiceAccount. A service account provides an identity for processes that run in a pod and maps to a ServiceAccount object. Hence, we need to create a ServiceAccount with the YAML below.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-kamus-sa
  namespace: my-namespace

After creating the ServiceAccount, we can update our CronJob YAML to mount it on the pods, as shown in the excerpt below.
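
Below is a minimal excerpt showing only the relevant change; the rest of the CronJob YAML from earlier stays the same.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    spec:
      template:
        spec:
          # Mount the identity that Kamus encrypted the secret for.
          serviceAccountName: my-kamus-sa
          containers:
          ...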

Next, we can proceed to download and install the Kamus CLI, which we can use to encrypt our secret with the following command.

kamus-cli encrypt \
  --secret xxxxxxxx \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url <Kamus URL>

The Kamus URL can be found after we install Kamus, as shown in the screenshot below.

Kamus URL in localhost

We need to follow the instructions printed on the screen to get the Kamus URL. To do so, we need to forward a local port to the pod, as shown in the following screenshot.

Successfully forwarded the port, so we can use the localhost URL as the Kamus URL.

Hence, let’s say we want to encrypt a secret “alamak”; we can do so as follows.

Since our localhost Kamus URL is using HTTP, we have to specify “--allow-insecure-url”.
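
Putting it together, the command looks something like the one below. The local port (8888 here) is an assumption; it should match whatever local port we forwarded to the Kamus encryptor service.

kamus-cli encrypt \
  --secret alamak \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url http://localhost:8888 \
  --allow-insecure-url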

After we have encrypted our secret successfully, we need to configure our pod so that it can decrypt the value with the Kamus Decrypt API. The simplest way is to store the encrypted value in a ConfigMap; since it is already encrypted, it is safe to keep it there.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-encrypted-secret
  namespace: my-namespace
data:
  data: rADEn4o8pdN8Zcw40vFS/g==:zCPnDs8AzcTwqkvuu+k8iQ==

Then we can include an initContainer in our pod. An initContainer must run and exit successfully before the app containers start, so we can make use of the Kamus init container to decrypt the secret using the Kamus Decryptor API and output it to a file to be consumed by our app. There is an official demo from the Kamus team on how to do that on GitHub. Please take note that one of their YAML files is outdated, and thus there is a need to update their deployment.yaml to use “apiVersion: apps/v1” with a proper selector.

Updated deployment.yaml.
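
For reference, here is a sketch of that fix. In apps/v1, the selector is mandatory and must match the pod template labels; the app: kamus-example label here is illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kamus-example
spec:
  # apps/v1 requires an explicit selector matching the pod template labels.
  selector:
    matchLabels:
      app: kamus-example
  template:
    metadata:
      labels:
        app: kamus-example
    spec:
      ...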

After the deployment is successful, we can forward port 8081 to the pod in the deployment as shown below.

kubectl port-forward deployment/kamus-example 8081:80

If the deployment is successful, we should be able to see the following when we visit localhost:8081 in our web browser.

Yay, the original text “alamak” is successfully decrypted and displayed.

Deploy Our CronJob

Now, since we have everything set up, we can create our Kubernetes CronJob with the YAML file we prepared earlier. For local testing, I edited the schedule to be “*/2 * * * *”, which means an email will be sent to me every 2 minutes.
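
Once the CronJob is applied, we can verify that jobs are being spawned on schedule with standard kubectl commands, for example as follows.

# Check the CronJob status and watch the jobs it creates.
kubectl get cronjob email-scheduler --namespace=my-namespace
kubectl get jobs --namespace=my-namespace --watch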

After waiting for a couple of minutes, I received a few emails sent via Azure Communication Services, as shown below.

Now the emails are received every 2 minutes. =)

Hooray, this is how we can build a simple Kubernetes CronJob and send emails with Azure Communication Services Email.

[KOSD] Fixed 0x800B0100 WACK Issue in VS2019 16.10.2 Onwards

I have been using Visual Studio 2019 to develop desktop and mobile applications with Xamarin. I could successfully deploy my Xamarin UWP app to the Microsoft Store until I upgraded my Visual Studio 2019 to 16.10.2.

Normally, before we can publish our UWP app to the Microsoft Store, we need to launch WACK (Windows App Certification Kit) to validate our app package. However, in VS2019 16.10.2 (and onwards), an error occurs, as shown in the screenshot below, and the validation cannot be completed.

Error 0x800B0100 in Windows App Certification Kit (WACK).

MSBuild Project Build Output

Since my code is the same, the first thing I suspected was that the new updates in Visual Studio 2019 were causing this issue. Hence, I changed the verbosity of the project build output to Diagnostic, as shown below. This helps us understand better what is happening during the build.

Setting MSBuild project build output verbosity.

By comparing the current build output with the one from the previous version of Visual Studio 2019, I realised that there is something new in the current build output: the parameter GenerateTemporaryStoreCertificate is set to false while BuildAppxUploadPackageForUap is true, as shown below.

1>Target "_RemoveDisposableSigningCertificate: (TargetId:293)" in file "C:\Program Files (x86)\Microsoft Visual Studio\2019\Preview\MSBuild\Microsoft\VisualStudio\v16.0\AppxPackage\Microsoft.AppXPackage.Targets" from project "...UWP.csproj" (target "_GenerateAppxPackage" depends on it):
1>Task "RemoveDisposableSigningCertificate" skipped, due to false condition; ('$(GenerateTemporaryStoreCertificate)' == 'true' and '$(BuildAppxUploadPackageForUap)' == 'true') was evaluated as ('false' == 'true' and 'true' == 'true').
1>Done building target "_RemoveDisposableSigningCertificate" in project "...UWP.csproj".: (TargetId:293)

Online Discussions

Meanwhile, there are only two discussion threads online about this issue.

On 22 June 2021, Nick Stevens first reported a problem that he encountered in publishing an app to the Microsoft Store after upgrading his Visual Studio 2019 to 16.10.2. However, his problem was about the package family name and publisher name being marked as invalid.

A few days later, on 1 July 2021, another developer, Tautvydas Zilys, reported a similar issue to Nick Stevens’. Interestingly, the same Microsoft engineer, James Parsons, replied to both of them with a similar answer, i.e. adding the following property to their project file to set GenerateTemporaryStoreCertificate to true.

<GenerateTemporaryStoreCertificate>true</GenerateTemporaryStoreCertificate>

As explained by James, GenerateTemporaryStoreCertificate mimics the old behavior of Visual Studio, where it generates a certificate for us that has the publisher name that Microsoft Partner Center expects.

Fixed

Thankfully, after adding this line to the UWP .csproj of my Xamarin solution as shown below, WACK works again without the error showing.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" ...>
    ...
    <PropertyGroup>
        ...
        <GenerateTemporaryStoreCertificate>True</GenerateTemporaryStoreCertificate>
        ...
    </PropertyGroup>
</Project>

That’s all to fix the issue. I hope this article, which is also only the 3rd in the world discussing this Visual Studio 2019 problem, is helpful to other Xamarin UWP developers who are running into the same problem.


KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

First Step into Orchard Core

This afternoon, I decided to take a look at Orchard Core, an open-source CMS (Content Management System) built on top of an ASP .NET Core application framework.

Since it is open-source, I easily forked its repository from GitHub and then checked out its dev branch.

After waiting for less than one minute for all the NuGet packages to be restored, I set OrchardCore.Cms.Web as the default project. Then I tried to run it, but it failed with tons of errors. One of the major errors is “Assembly location for Razor SDK Tasks was not specified”. According to online discussions, it turns out that the .NET Core 2.2 SDK is needed.

After downloading the correct SDK, the projects built successfully, with the following web page popping up as a result.

Take note that, as shown in the screenshot above, when I fill in the Table Prefix, it throws an exception saying “SqlException: Invalid object name ‘OrchardroadDocument’” during the setup stage, as shown in the following screenshot.

Hence, the best way to proceed is to not enter anything into the Table Prefix textbox. Then we will be able to set up our CMS successfully. Once we log in to the system as the super user, we can proceed to configure the CMS.

Yup, this concludes my first attempt with the new Orchard Core CMS. =)

Protecting Web API with User Password


In my previous post, I shared the way to connect an Android app with IdentityServer4 using AppAuth for Android. However, that approach pops up a login page in a web browser on the phone when users try to log in to our app. This may not be what the business people want. Sometimes, they are looking for a customised native login page in the app itself.

To do so, we can continue to make use of IdentityServer4.

IdentityServer4 has a grant type called the Resource Owner Password grant. It allows a client to send a username and password to the token service and get back an access token that represents that user. Generally speaking, this grant is not really recommended compared to the AppAuth way. However, since the mobile app is built by our own team, using the resource owner password grant is okay.

Identity Server Setup: Adding New API Resource

In this setup, I will be using in-memory configuration.

As a start, I need to introduce a new ApiResource with the following code in the Startup.cs of our IdentityServer project.

var availableResources = new List<ApiResource>();
...
availableResources.Add(new ApiResource("mobile-app-api", "Mobile App API Main Scope"));
...
services.AddIdentityServer()
    ...
    .AddInMemoryApiResources(availableResources)
    .AddInMemoryClients(new ClientStore(Configuration).GetClients())
    .AddAspNetIdentity<ApplicationUser>();

Identity Server Setup: Defining New Client

As the code above shows, there is a ClientStore to which we need to add a new client with the following code.

public class ClientStore : IClientStore
{
    ...

    public IEnumerable<Client> GetClients()
    {
        var availableClients = new List<Client>();
        
        ...
        
        availableClients.Add(new Client
        {
            ClientId = "mobile-app-api",
            ClientName = "Mobile App APIs",
            AllowedGrantTypes = GrantTypes.ResourceOwnerPassword,
            ClientSecrets = { new Secret(Configuration["MobileAppApi:ClientSecret"].Sha256()) },
            AllowedScopes = { "mobile-app-api" }
        });

        return availableClients;
    }
}

Configuring Services in Web API

In the Startup.cs of our Web API project, we need to update it as follows.

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();

    services.AddAuthorization();

    services.AddAuthentication("Bearer")
    .AddIdentityServerAuthentication(options =>
    {
        options.Authority = "<URL of the identity server>";
        options.RequireHttpsMetadata = true;
        options.ApiName = "mobile-app-api";
    });

    services.Configure<MvcOptions>(options =>
    {
        options.Filters.Add(new RequireHttpsAttribute());
    });
}

Configuring HTTP Request Pipeline in Web API

Besides the step above, we also need to make sure the line app.UseAuthentication() is present in the Startup.cs. Without it, authentication and authorization will not work in our Web API project.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    ...
    app.UseAuthentication();
    app.UseMvc();
}

Receiving Username and Password to Return Access Token

We also need to add a new controller to receive the username and password, which will in turn tell the mobile app whether the login is successful or not. If the user logs in successfully, an access token will be returned.

[Route("api/[controller]")]
public class AuthenticateController : Controller
{
    ...
    [HttpPost]
    [Route("login")]
    public async Task<ActionResult> Login([FromBody] string userName, string password)
    {
        var disco = await DiscoveryClient.GetAsync("<URL of the identity server>");
        var tokenClient = new TokenClient(disco.TokenEndpoint, "mobile-app-api", Configuration["MobileAppApi:ClientSecret"]);
        var tokenResponse = await tokenClient.RequestResourceOwnerPasswordAsync(userName, password, "mobile-app-api");

        if (tokenResponse.IsError)
        {
            return Unauthorized();
        }

        return new JsonResult(tokenResponse.Json);
    }
    ...
}
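
To quickly test this endpoint locally, we can post the credentials as JSON. This is a sketch: the URL assumes the Web API listens on https://localhost:5001, and the credentials are placeholders.

curl -X POST https://localhost:5001/api/authenticate/login \
  -H "Content-Type: application/json" \
  -d '{"userName":"alice","password":"P@ssw0rd"}'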

Securing our APIs

We can now proceed to protect our Web APIs with [Authorize] attribute. In the code below, I also try to return the available claims via the API. The claims will tell the Web API who is logging in and calling the API now via the mobile app.

[HttpGet]
[Authorize]
public IEnumerable<string> Get()
{
    var claimTypesAndValues = new List<string>();

    foreach (var claim in User.Claims)
    {
        claimTypesAndValues.Add($"{ claim.Type }: { claim.Value }");
    }

    return claimTypesAndValues.ToArray();
}
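
For a quick check, calling the protected endpoint without a token should return 401 Unauthorized, while attaching the access token obtained from the login endpoint should return the claims. The route below assumes this action lives in a ValuesController exposed at /api/values, and the token value is a placeholder.

curl https://localhost:5001/api/values \
  -H "Authorization: Bearer eyJhbGciOi..."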

Conclusion

This project took me two days because I had to figure out how to make the authentication work, since I misunderstood how IdentityServer4 works in this case. Hence, it is always important to fully understand the things on your hands before working on them.

Do not give up! (Source: A Good Librarian Like a Good Shepherd)
