Setting up SFTP on Kubernetes for EDI

EDI, or Electronic Data Interchange, is the computer-to-computer exchange of business documents between companies in a standard electronic format. Even though EDI has been around since the 1960s and there are alternatives such as APIs, EDI still remains a crucial business component in industries such as supply chain and logistics.

With EDI, there is no longer a need to manually enter data into the system or send paper documents through the mail. EDI replaces the traditional paper-based communication, which is often slow, error-prone, and inefficient.

EDI works by converting business documents into a standard electronic format that can be exchanged between different computer systems. The documents can then be transmitted securely over a network using a variety of protocols, such as SFTP. Upon receipt, the documents are automatically processed and integrated into the recipient's computer system.

EDI and SFTP

Many EDI software applications use SFTP as one of the methods for transmitting EDI files between two systems. This allows the EDI software to create EDI files, translate them into the appropriate format, and then transmit them securely via SFTP.

SFTP provides a secure and reliable method for transmitting EDI files between trading partners. It runs on top of SSH (Secure Shell), which encrypts the file transfer. This helps to ensure that EDI files are transmitted securely and that sensitive business data is protected from unauthorized access.

In this blog post, we will take a look at how to set up a simple SFTP server on Kubernetes.

Setup SFTP Server Locally with Docker

On our local machine, we can set up an SFTP server on Docker with an image known as "atmoz/sftp", which offers an easy-to-use SFTP server based on OpenSSH.

When we run it on Docker locally, we can mount a local directory into it with the command below.

docker run \
    --name my_sftp_server \
    -v C:\Users\gclin\Documents\atmoz-sftp:/home/user1/ftp-file-storage \
    -p 2224:22 \
    -d atmoz/sftp:alpine \
    user1:$6$Zax4...Me3Px/:e:::ftp-file-storage

This allows the user "user1" to log in to this SFTP server with the encrypted password "$6$Zax4…Me3Px" (the ":e" means the password is encrypted; here we are using SHA-512).
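In case we need to generate such an encrypted password ourselves, one possible way (an assumption on my part, not part of the original setup) is to use OpenSSL 1.1.1 or newer; the mkpasswd tool from the "whois" package works too.

# Generates an SHA-512 crypt hash (the "$6$..." format expected by atmoz/sftp).
# Replace 'my-secret-password' with the actual password.
openssl passwd -6 'my-secret-password'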

The directory name at the end of the command, i.e. "ftp-file-storage", will be created under the user's home directory with write permission. This allows files uploaded by the user to be stored in the "ftp-file-storage" folder. Hence, we choose to mount it to our local Documents sub-directory "atmoz-sftp".

The OpenSSH server listens on port 22 by default. Here, we are mapping port 22 of the container to host port 2224.

Currently, this Docker image comes in two variants, i.e. Debian and Alpine. According to the official documentation, the Alpine image is about 10 times smaller than the Debian one, but Debian is generally considered more stable: after each Debian release, which happens roughly every 2 years, only bug fixes and security fixes are added.

Yay, we have successfully created an SFTP server on Docker.

Moving to Kubernetes

Setting up SFTP on Kubernetes can be done using a Deployment object with a container running the SFTP server.

Firstly, we will reuse the same image "atmoz/sftp" for our container. We will then create a Kubernetes Deployment object using a YAML file which defines the container specification, including the SFTP server image, environment variables, and volume mounts for data storage:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sftp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        app: sftp
    spec:
      volumes:
      - name: sftp-data
        emptyDir: {}
      containers:
      - name: sftp
        image: atmoz/sftp
        ports:
        - containerPort: 22
        env:
        - name: SFTP_USERS
          value: "user1:$6$NsJ2.N...DTb1:e:::ftp-file-storage"
        volumeMounts:
        - name: sftp-data
          mountPath: /home/user1/ftp-file-storage
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"

To keep things simple, I mounted an emptyDir volume to the container. An emptyDir volume is created (and is initially empty) when a Pod is assigned to a Node, and it exists as long as that Pod is running on that Node.

In addition, in order to prevent our container from starving other processes, we add resource requests and limits to it. In the YAML definition above, our container requests 0.25 CPU and 64 MiB of memory, and is limited to 0.5 CPU and 128 MiB of memory.

Next, since I will be doing testing locally on my machine, I will be using the kubectl port-forward command.

kubectl port-forward deploy/sftp-deployment 22:22

After doing all this, we can now access the SFTP server on our Kubernetes cluster, as shown in the following screenshot.

We can also upload files to the SFTP server on Kubernetes.
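For example, assuming the port-forward above succeeds (binding local port 22 may require elevated privileges; a higher local port such as 2222 also works), a quick upload with the standard OpenSSH sftp client could look like this; the EDI file name is just a placeholder.

# Connect through the forwarded port, then upload a file into the writable folder.
sftp -P 22 user1@127.0.0.1
sftp> cd ftp-file-storage
sftp> put invoice-850.edi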

Conclusion

Yup, that's all it takes to have an SFTP server running on our Kubernetes cluster.

After we have configured our SFTP server to use secure authentication and encryption, we can proceed to create SFTP user accounts on the server and give each of the parties who will communicate with us via EDI the necessary permissions to access the directories where EDI files will be stored.
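For illustration only, assuming the space-separated multi-user syntax of the atmoz/sftp image, the SFTP_USERS value for two hypothetical trading partners could look like the sketch below; the hashes and directory names are placeholders.

env:
- name: SFTP_USERS
  # Each partner gets its own encrypted password and its own upload folder.
  value: "partner1:$6$<hash1>:e:::edi-partner1 partner2:$6$<hash2>:e:::edi-partner2"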

Finally, we also need to monitor file transfers for errors or issues and troubleshoot as needed. This may involve checking SFTP server logs, EDI software logs, or network connections.

Overall, this is the quick start of using SFTP for EDI to ensure the secure and reliable transfer of electronic business documents between different parties.

Kubernetes CronJob to Send Email via Azure Communication Services

In March 2021, Azure Communication Services was made generally available after being showcased at Microsoft Ignite. In the beginning, it only provided services such as SMS as well as voice and video calling. One year later, in May 2022, it also started offering a way to facilitate high-volume transactional emails. However, this email function is currently still in public preview. Hence, the email-related APIs and SDKs are provided without an SLA and are thus not recommended for production workloads.

Currently, our Azure account has a set of limitations on the number of email messages that we can send. For all developers, email sending is limited to 10 emails per minute, 25 emails per hour, and 100 emails per day.

Setup Azure Communication Services

To begin, we need to create a new Email Communication Services resource from the marketplace, as shown in the screenshot below.

US is the only option for the Data Location now in Email Communication Services.

Take note that currently we can only choose United States as the Data Location, which determines where the data will be stored at rest. This cannot be changed after the resource has been created. As a result, the Azure Communication Services resource which we need to configure next must store its data in the United States as well. We will talk about this later.

Once the Email Communication Services resource is created, we can begin by adding a free Azure subdomain. With the "1-click add" function, as shown in the following screenshot, Azure will automatically configure the required email authentication protocols based on email authentication best practices.

Click “1-click add” to provision a free Azure managed domain for sending emails.

We will then have a MailFrom address in the format of donotreply@xxxx.azurecomm.net which we can use to send email. We are allowed to modify the MailFrom address and From display name to more user-friendly values.

After getting the domain, we need to connect Azure Communication Services to it to send emails.

As mentioned earlier, we need to make sure that the Azure Communication Services resource also has United States as its Data Location. Otherwise, we will not be able to link the email domain for email sending.

Successfully connected our email domain. =)

A Simple Console App for Sending Email

Now, we need to create the console app which will be used in our Kubernetes CronJob later to send the emails with the Azure Communication Services Email client library.

Before we begin, we have to get the connection string for the Azure Communication Service resource.

Getting connection string of the Azure Communication Service.

Here I have the following code to send a sample email to myself.

using Azure.Communication.Email.Models;
using Azure.Communication.Email;

string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING") ?? string.Empty;
string emailFrom = Environment.GetEnvironmentVariable("EMAIL_FROM") ?? string.Empty;

if (connectionString != string.Empty)
{
    EmailClient emailClient = new EmailClient(connectionString);

    EmailContent emailContent = new EmailContent("Welcome to Azure Communication Service Email APIs.");
    emailContent.PlainText = "This email message is sent from Azure Communication Service Email using .NET SDK.";
    List<EmailAddress> emailAddresses = new List<EmailAddress> {
            new EmailAddress("gclin009@hotmail.com") { DisplayName = "Goh Chun Lin" }
        };
    EmailRecipients emailRecipients = new EmailRecipients(emailAddresses);
    EmailMessage emailMessage = new EmailMessage(emailFrom, emailContent, emailRecipients);
    SendEmailResult emailResult = emailClient.Send(emailMessage, CancellationToken.None);
}
Setting environment variables for local debugging purpose.

Tada, there should be an email successfully sent out as instructed.

Email is successfully sent and received. =)

Containerise the Console App

Next, we need to containerise our console app above.

Assuming that our console app is called MyConsoleApp, we will prepare a Dockerfile as follows.

FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app

FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyMedicalEmailSending.csproj", "."]
RUN dotnet restore "./MyConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyConsoleApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyConsoleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]

We then can publish it to Docker Hub for consumption later.
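For example, assuming we are already logged in to Docker Hub and the Dockerfile above sits in the project root, the image referenced later in this post can be built and pushed as follows.

# Build the image from the Dockerfile above and push it to Docker Hub.
docker build -t chunlindocker/emailsender:v2023-01-25-1600 .
docker push chunlindocker/emailsender:v2023-01-25-1600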

If you prefer to use Azure Container Registry, you can refer to the documentation on how to do it on Microsoft Learn.

Create the CronJob

In Kubernetes, pods are the smallest deployable units of computing that we can create and manage. A pod can have one or more related containers with shared storage and network resources. Here, we will schedule a job that creates pods running the container image we built above, which, in our case, send the emails.

The schedule of the cronjob is defined as follows, according to the Kubernetes documentation on the schedule syntax.

# ┌───────────── minute (0 - 59)
# │ ┌───────────── hour (0 - 23)
# │ │ ┌───────────── day of the month (1 - 31)
# │ │ │ ┌───────────── month (1 - 12)
# │ │ │ │ ┌───────────── day of the week (sun, mon, tue, wed, thu, fri, sat)
# │ │ │ │ │
# * * * * *

Hence, if we would like the email scheduler to be triggered at 8 AM every Friday, we can create a CronJob in the namespace my-namespace with the following YAML file.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
          restartPolicy: OnFailure
  schedule: 0 8 * * fri

After the CronJob is created, we can proceed to annotate it with the command below.

kubectl annotate cj email-scheduler jobtype=scheduler frequency=weekly

This helps us to query the cron jobs easily with jsonpath in the future. For example, we can list all annotated cron jobs together with their job type and frequency using the following command.

kubectl get cj -A -o=jsonpath="{range .items[?(@.metadata.annotations.jobtype)]}{.metadata.namespace},{.metadata.name},{.metadata.annotations.jobtype},{.metadata.annotations.frequency}{'\n'}{end}"

Create ConfigMap

In our email sending programme, we have two environment variables. We can create a ConfigMap to store the non-sensitive one, EMAIL_FROM, as a key-value pair.

apiVersion: v1
kind: ConfigMap
metadata:
  name: email-sending
  namespace: my-namespace
data:
  EMAIL_FROM: DoNotReply@xxxxxx.azurecomm.net

As for the connection string of the Azure Communication Services resource, since it is sensitive data, we will store it in a Secret. Secrets are similar to ConfigMaps but are specifically intended to hold confidential data. We will create a Secret with the command below.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o yaml

It should generate YAML similar to the following.

apiVersion: v1
kind: Secret
metadata:
  name: azure-communication-service
  namespace: my-namespace
data:
  CONNECTION_STRING: yyyyyyyyyy

Then, the Pods created by the CronJob can consume the ConfigMap and Secret above as environment variables. Hence, we need to update the CronJob YAML file as follows.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  jobTemplate:
    metadata:
      name: email-scheduler
    spec:
      template:
        spec:
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
            env:
              - name: EMAIL_FROM
                valueFrom:
                  configMapKeyRef:
                    name: email-sending
                    key: EMAIL_FROM
              - name: COMMUNICATION_SERVICES_CONNECTION_STRING
                valueFrom:
                  secretKeyRef:
                    name: azure-communication-service
                    key: CONNECTION_STRING
          restartPolicy: OnFailure
  schedule: 0 8 * * fri

Using SealedSecret

The problem with using Secrets is that we cannot really commit them to our code repository, because the data is only encoded, not encrypted. Hence, in order to store our Secrets safely, we can use SealedSecret, which encrypts our Secret. A SealedSecret can only be decrypted by the controller running in the target cluster.

Currently, the SealedSecret Helm Chart is officially supported and hosted on GitHub.

Helm is the package manager for Kubernetes. Helm uses a packaging format called a Chart, a collection of files describing a related set of Kubernetes resources. Each Chart comprises one or more Kubernetes manifests. With Charts, developers are able to configure, package, version, and share their apps with their dependencies and sensible defaults.

To install Helm on a Windows 11 machine, we can execute the following commands in the Ubuntu on Windows console.

  1. Download desired version of Helm release, for example, to download version 3.11.0:
    wget https://get.helm.sh/helm-v3.11.0-linux-amd64.tar.gz
  2. Unpack it:
    tar -zxvf helm-v3.11.0-linux-amd64.tar.gz
  3. Move the Helm binary to desired location:
    sudo mv linux-amd64/helm /usr/local/bin/helm

Once we have successfully downloaded Helm and have it ready, we can add a Chart repository. In our case, we need to add the repo of SealedSecret Helm Chart.

helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets

We should be able to locate the SealedSecret chart that we can install with the following command.

helm search repo bitnami
The Chart bitnami/sealed-secret is one of the Charts we can install.

To install the SealedSecret Helm Chart, we will use the following command.

helm install sealed-secrets -n kube-system --set-string fullnameOverride=sealed-secrets-controller sealed-secrets/sealed-secrets

Once we have done this, we should be able to locate a new service called sealed-secrets-controller under Kubernetes services.

The sealed-secrets-controller service is under the kube-system namespace.

Before we can proceed to use kubeseal to create an encrypted secret, for me at least, there is a need to edit the sealed-secrets-controller service. Otherwise, there will be an error message saying "cannot fetch certificate: no endpoints available for service". If you also encounter this issue, simply follow the steps mentioned by ghostsquad to edit the service YAML accordingly.

My final edit of the sealed-secrets-controller service YAML.

Next, we can proceed to encrypt our secret, as instructed in the SealedSecret GitHub readme.

kubectl create secret generic azure-communication-service --from-literal=CONNECTION_STRING=xxxxxx --dry-run=client --namespace=my-namespace -o json > mysecret-acs.json

kubeseal < mysecret-acs.json > mysealedsecret-acs.json

The generated file mysealedsecret-acs.json should look something as shown below.

The connection string is now encrypted.

To create the Secret resource, we will simply create it based on the file mysealedsecret-acs.json.
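For example, we can create it directly from the generated file; the controller in the cluster will then unseal it into a normal Secret named azure-communication-service.

# Create the SealedSecret resource from the sealed JSON file.
kubectl create -f mysealedsecret-acs.json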

This generated file mysealedsecret-acs.json is thus safe to be committed to our code repository.

Going Zero-Trust: Using Kamus and InitContainer

Besides SealedSecret, there is also another open-source solution known as Kamus, a zero-trust secrets encryption and decryption solution for Kubernetes apps. We can also use Kamus to encrypt our secrets and make sure that the secrets can only be decrypted by the desired Kubernetes apps.

Similarly, we can also install Kamus using Helm Chart with the commands below.

helm repo add soluto https://charts.soluto.io
helm upgrade --install kamus soluto/kamus

Kamus encrypts secrets for a specific application only, represented by a ServiceAccount. A service account provides an identity for processes that run in a Pod and maps to a ServiceAccount object. Hence, we need to create a ServiceAccount with the YAML below.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-kamus-sa

After creating the ServiceAccount, we can update our CronJob YAML to mount it on the pods.
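A minimal sketch of that change, based on the CronJob YAML we used earlier: we add serviceAccountName to the pod template spec so that the pods run under my-kamus-sa.

spec:
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: my-kamus-sa # the identity Kamus encrypts secrets for
          containers:
          - image: chunlindocker/emailsender:v2023-01-25-1600
            name: email-scheduler
          restartPolicy: OnFailure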

Next, we can proceed to download and install Kamus CLI which we can use to encrypt our secret with the following command.

kamus-cli encrypt \
  --secret xxxxxxxx \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url <Kamus URL>

The Kamus URL can be found after we install Kamus, as shown in the screenshot below.

Kamus URL in localhost

We need to follow the instructions printed on the screen to get the Kamus URL. To do so, we need to forward a local port to the pod, as shown in the following screenshot.

Successfully forward the port and thus can use the URL as the Kamus URL.

Hence, let’s say we want to encrypt a secret “alamak”, we can do so as follows.

Since our localhost Kamus URL uses HTTP, we have to specify "--allow-insecure-url".
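Putting it together, a sketch of the full command is shown below; the port 9999 is hypothetical and depends on how the local port-forward to Kamus was set up.

kamus-cli encrypt \
  --secret alamak \
  --service-account my-kamus-sa \
  --namespace my-namespace \
  --kamus-url http://localhost:9999 \
  --allow-insecure-url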

After we have encrypted our secret successfully, we need to configure our pod accordingly so that it can decrypt the value with the Kamus Decrypt API. The simplest way is to store the encrypted value in a ConfigMap; since it is already encrypted, it is safe to keep it there.

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-encrypted-secret
  namespace: my-namespace
data:
  data: rADEn4o8pdN8Zcw40vFS/g==:zCPnDs8AzcTwqkvuu+k8iQ==

Then we can include an init container in our pod. Init containers run and must exit successfully before the app containers of the pod start. Hence, we can make use of the Kamus init container to decrypt the secret using the Kamus Decryptor API and output it to a file to be consumed by our app. There is an official demo from the Kamus team on GitHub showing how to do that. Please take note that one of their YAML files is outdated, so there is a need to update their deployment.yaml to use "apiVersion: apps/v1" with a proper selector.

Updated deployment.yaml.

After the deployment is successful, we can forward the port 8081 to the pod in the deployment as shown below.

kubectl port-forward deployment/kamus-example 8081:80

If the deployment is successful, we should be able to see the following when we visit localhost:8081 in our Internet browser.

Yay, the original text “alamak” is successfully decrypted and displayed.

Deploy Our CronJob

Now, since we have everything set up, we can create our Kubernetes CronJob with the YAML file we prepared earlier. For local testing, I have edited the schedule to be "*/2 * * * *". This means that an email will be sent to me every 2 minutes.

After waiting for a couple of minutes, I have received a few emails sent via the Azure Communication Services, as shown below.

Now the emails are received every 2 minutes. =)

Hooray, this is how we build a simple Kubernetes CronJob and send emails with the Azure Communication Services email service.

[KOSD] Let’s Talk about CASE

Last week, a developer in our team encountered an interesting question in his SQL script on SQL Server 2019. For the convenience of discussion, I have simplified his script as follows.

DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'

SELECT CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
                 WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
                 ELSE -1 
       END AS Result

The result he expected was 20.50 because @NUM equals 0, so by right the first result expression should be executed. However, the query actually returned 20.5000, as if the second result expression, which casts @VAL into a decimal value with a scale of 4, had been run.

So, what is the cause of this issue here?

SQL Data Types Implicit Conversion

First of all, according to the Microsoft Learn documentation, the data types of all result expressions must be the same or must be an implicit conversion.

In the script above, we have two data types in the result expressions, i.e. DECIMAL and INT (-1 in the ELSE result expression). Hence, we need to understand the implicit data type conversions that are allowed for SQL Server system-supplied data types, as shown in the chart below. The chart shows that INT can be implicitly converted to DECIMAL and vice versa.

All data type conversions allowed for SQL Server system-supplied data types (Image Source: Microsoft Learn)

Data Precedence

While the above chart illustrates all the possible explicit and implicit conversions, we still do not know the resulting data type of the conversion. For our case above, the resulting data type depends on the rules of data type precedence.

According to the data type precedence in SQL Server, we have the following precedence order for data types.

  1. user-defined data types (highest)
  2. sql_variant
  3. xml
  4. datetimeoffset
  5. datetime2
  6. datetime
  7. smalldatetime
  8. date
  9. time
  10. float
  11. real
  12. decimal
  13. money
  14. smallmoney
  15. bigint
  16. int
  17. smallint
  18. tinyint
  19. bit
  20. ntext
  21. text
  22. image
  23. timestamp
  24. uniqueidentifier
  25. nvarchar (including nvarchar(max) )
  26. nchar
  27. varchar (including varchar(max) )
  28. char
  29. varbinary (including varbinary(max) )
  30. binary (lowest)

Since DECIMAL has a higher precedence than INT, we can be sure that the script above returns a DECIMAL output with the highest scale among the result expressions, i.e. DECIMAL(10, 4). This explains why the result of his script is 20.5000.
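If the goal is to always display the value with a scale of 2, one possible workaround (a sketch, not the only fix) is to cast the whole CASE expression so that the displayed type no longer depends on the precedence and scale rules.

DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'

-- Casting the entire CASE result fixes the output type at DECIMAL(10, 2),
-- so this returns 20.50 when @NUM = 0.
SELECT CAST(CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
                      WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
                      ELSE -1
            END AS DECIMAL(10, 2)) AS Result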

Conclusion

Now, if we change the script above to be something as follows, we should receive an error saying “Error converting data type varchar to numeric”.

DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'

SELECT CASE @NUM WHEN 0 THEN 'A'
                 WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
                 ELSE -1 
       END AS Result

Yup, that's all for our discussion of the little bug he found in his script. Hope you find it useful. =)

KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.

Micro Frontend with Single-SPA

In order to build applications that utilise the scalability, flexibility, and resilience of cloud computing, applications are nowadays normally developed with a microservice architecture using containers. Microservice architecture enables our applications to be composed of small, independent backend services that communicate with each other over the network.

Project GitHub Repository

The complete source code of this project can be found at https://github.com/goh-chunlin/Lunar.MicroFrontEnd.SingleSpa.

Why Micro Frontend?

In general, when applying a microservice architecture, the backend systems are split up into microservices while the frontend is still often developed as a monolith. This is not a problem when our application is small and we have a strong frontend team working on it. However, when the application grows to a larger scale, a monolithic frontend starts to become inefficient and unmaintainable for the following reasons.

Firstly, it is challenging to keep the frontend technologies used in a large application up-to-date. With micro frontends, we can upgrade the frontend on a function-by-function basis. It also allows developers to use different frontend technologies for different functions based on the needs.

Secondly, since the source code of each micro frontend is separated, each individual frontend component has a much smaller codebase than the monolithic version. This improves the maintainability of the frontend because smaller codebases are easier to understand and distribute.

Thirdly, with micro frontend, we can split the frontend development team into smaller teams so that each team only needs to focus on relevant business functions.

Introduction to single-spa

In a micro frontend architecture, we need a framework to bring together multiple JavaScript micro frontends in our application. The framework we are going to discuss here is called single-spa.

We choose single-spa because it enables the implementation of micro frontends while supporting many popular JavaScript UI frameworks such as Angular and Vue. By leveraging the single-spa framework, we are able to register micro frontends so that they are mounted and unmounted correctly for different URLs.

In single-spa, each micro frontend needs to implement its lifecycle functions by defining the actual implementation of how to bootstrap, mount, and unmount components to the DOM tree, either in plain JavaScript or with the JavaScript framework of its choice.

In this article, single-spa will work as an orchestrator to handle the micro frontend switch so that individual micro frontend does not need to worry about the global routing.

The Orchestrator

The orchestrator is nothing but a project holding single-spa which is responsible for global routing, i.e. determining which micro frontends get loaded.

We will be loading different micro frontends into the two placeholders which consume the same custom styles.

Fortunately, there is a very convenient way for us to get started quickly, i.e. using create-single-spa, a utility for generating starter code. We will use it to create the root config and our first single-spa application.

We can install the create-single-spa tool globally with the following command.

npm install --global create-single-spa

Once it is installed, we will create our project folder containing another empty folder called "orchestrator", as shown in the following screenshot.

We have now initialised our project.

We will now create the single-spa root config, which is the core of our orchestrator, with the following command.

create-single-spa

Then we will need to answer a few questions, as shown in the screenshots below in order to generate our orchestrator.

We’re generating orchestrator using the single-spa root config type.

That’s all for now for our orchestrator. We will come back to it after we have created our micro frontends.

Micro Frontends

We will again use create-single-spa to create the micro frontends. Instead of choosing root config as the type, this time we will choose to generate a single-spa application / parcel instead, as shown in the following screenshot.

We will be creating Vue 3.0 micro frontends.

To have our orchestrator import the micro frontends, the micro frontend app needs to be exposed as a System.register module. We do this by editing the vue.config.js file with the following configuration.

const { defineConfig } = require('@vue/cli-service')
module.exports = defineConfig({
  transpileDependencies: true,
  configureWebpack: {
    output: {
      libraryTarget: "system",
      filename: "js/app.js"
    }
  }
})
Here we also force the generated output file name to be app.js for import convenience in the orchestrator.

Now, we can proceed to build this app with the following command so that the app.js file can be generated.

npm run build
The app.js file is generated after we run the build script that is defined in package.json file.

We can then serve this micro frontend app with http-server for local testing later. We will run the following command in its dist directory to serve the app1 micro frontend on port 8011.

http-server . --port 8011 --cors
This is what we will be seeing if we navigate to the micro frontend app now.

Link Orchestrator with Micro Frontend Apps

Now, we can return to the index.ejs file to specify the URL of our micro frontend app as shown in the screenshot below.
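In text form, the relevant import map entry in index.ejs could look something like the sketch below, assuming app1 is served on port 8011 as set up earlier and registered under the name @Lunar/app1.

<script type="systemjs-importmap">
  {
    "imports": {
      "@Lunar/app1": "http://localhost:8011/js/app.js"
    }
  }
</script>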

Next, we need to define the place where we will display our micro frontend apps in the microfrontend-layout.js, as shown in the screenshot below.

<single-spa-router>
  <main>
    <route default>
      <div style="display: grid; column-gap: 50px; grid-template-columns: 30% auto; background-color: #2196F3; padding: 10px;">
        <div style="background-color: rgba(255, 255, 255, 0.8); padding: 20px;">
          <application name="@Lunar/app1"></application>
        </div>
        <div>

        </div>
      </div>
      
    </route>
  </main>
</single-spa-router>

We can now launch our orchestrator with the following command in the orchestrator directory.

npm start
Based on the package.json file, our orchestrator will be hosted at port 9000.

Now, if we repeat what we have done for app1 for another Vue 3.0 app called app2 (which we will serve on port 8012), we can achieve something as follows.
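For reference, here is a sketch of microfrontend-layout.js with both applications in place, assuming the second app is registered as @Lunar/app2.

<single-spa-router>
  <main>
    <route default>
      <div style="display: grid; column-gap: 50px; grid-template-columns: 30% auto; background-color: #2196F3; padding: 10px;">
        <div style="background-color: rgba(255, 255, 255, 0.8); padding: 20px;">
          <application name="@Lunar/app1"></application>
        </div>
        <div style="background-color: rgba(255, 255, 255, 0.8); padding: 20px;">
          <application name="@Lunar/app2"></application>
        </div>
      </div>
    </route>
  </main>
</single-spa-router>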

Finally, to have the images shown properly, we simply need to update the Content-Security-Policy to be as follows.

<meta http-equiv="Content-Security-Policy" content="default-src 'self' https: localhost:*; img-src data:; script-src 'unsafe-inline' 'unsafe-eval' https: localhost:*; connect-src https: localhost:* ws://localhost:*; style-src 'unsafe-inline' https:; object-src 'none';">

Also, in order to make sure the orchestrator indeed loads two different micro frontends, we can edit the content of the two apps to look different, as shown below.

Design System

In a micro frontend architecture, every team builds its part of the frontend. With this drastic expansion of the frontend development work, there is a need for us to streamline the design work by having a complete set of frontend UI design standards.

In addition, in order to maintain the consistency of the look-and-feel of our application, it is important to make sure that all our relevant micro frontends are adopting the same design system which also enables developers to replicate designs quickly by utilising premade UI components.

Here in single-spa, we can host our CSS in a shared micro frontend app and have it contain only the common CSS.

Both micro frontend apps are using the same design system Haneul (https://haneul-design.web.app/).

Closing

In 2016, ThoughtWorks introduced the idea of micro frontends. Since then, the term micro frontend has been widely hyped.

However, micro frontend is not suitable for all projects, especially when the development team is small or when the project is just starting off. Micro frontend is only recommended when the backend is already on microservices and the team finds that scaling is getting more and more challenging. Hence, please plan carefully before migrating to micro frontend.

If you’d like to find out more about the single-spa framework that we are using in this article, please visit the following useful links.