EDI, or Electronic Data Interchange, is the computer-to-computer exchange of business documents between companies in a standard electronic format. Even though EDI has been around since the 1960s and there are alternatives such as APIs, EDI still remains a crucial business component in industries such as supply chain and logistics.
With EDI, there is no longer a need to manually enter data into the system or send paper documents through the mail. EDI replaces the traditional paper-based communication, which is often slow, error-prone, and inefficient.
EDI works by converting business documents into a standard electronic format that can be exchanged between different computer systems. The documents later can be transmitted securely over a network using a variety of protocols, such as SFTP. Upon receipt, the documents are automatically processed and integrated into the computer system of the recipient.
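To give a feel for what such a standard format looks like, here is a tiny, abbreviated fragment of a purchase order in the ANSI X12 850 format (all values are made up for illustration); each line is a delimited segment rather than human-readable text:

```text
ST*850*0001~
BEG*00*SA*PO-1001**20230101~
PO1*1*10*EA*9.50**VP*WIDGET-01~
SE*4*0001~
```

Both trading partners' systems agree on this structure in advance, which is what makes automatic processing on receipt possible.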
EDI and SFTP
There are many EDI software applications using SFTP as one of the methods for transmitting EDI files between two systems. This allows the EDI software to create EDI files, translate them into the appropriate format, and then transmit them securely via SFTP.
SFTP provides a secure and reliable method for transmitting EDI files between trading partners. It runs over SSH (Secure Shell), which encrypts both authentication and the data in transit. This helps to ensure that EDI files are transmitted securely and that sensitive business data is protected from unauthorized access.
In this blog post, we will take a look at how to set up a simple SFTP server on Kubernetes.
This allows the user “user1” to login to this SFTP server with the encrypted password “$6$Zax4…Me3Px” (The “:e” flag tells the server the password is encrypted; here it is a SHA-512 crypt hash, as indicated by the “$6$” prefix).
The directory name at the end of the command, i.e. “ftp-file-storage”, will be created under the user’s home directory with write permission. This allows the files uploaded by the user to be stored at the “ftp-file-storage” folder. Hence, we choose to mount it to our local Documents sub-directory “atmoz-sftp”.
The OpenSSH server runs by default on port 22. Here, we are forwarding the port 22 of the container to the host port 2224.
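Putting the pieces above together, the docker run command for the atmoz/sftp image might look like the following sketch (the password hash is abbreviated exactly as in the text, and the local path is an assumption):

```shell
# user spec format: user:pass[:e][:uid[:gid[:dir]]]
# uid and gid are left empty here, so the image picks defaults
docker run -d -p 2224:22 \
    -v ~/Documents/atmoz-sftp:/home/user1/ftp-file-storage \
    atmoz/sftp \
    'user1:$6$Zax4...Me3Px:e:::ftp-file-storage'
```

The single quotes around the user spec prevent the shell from expanding the `$` characters in the hash.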
Yay, we have successfully created an SFTP server on Docker.
Moving to Kubernetes
Setting up SFTP on Kubernetes can be done using a Deployment object with a container running the SFTP server.
Firstly, we will reuse the same image “atmoz/sftp” for our container. We will then create a Kubernetes Deployment object using a YAML file which defines the container specification, including the SFTP server image, environment variables, and volume mounts for data storage.
In addition, to prevent our container from starving other processes, we will add resource limits to it. In the YAML definition, our container is given a request of 0.25 CPU and 64MiB of memory, and a limit of 0.5 CPU and 128MiB of memory.
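A minimal sketch of such a Deployment is shown below; the object name, labels, and volume choice are assumptions for illustration, and a real setup would likely use a PersistentVolumeClaim instead of emptyDir:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sftp-deployment          # assumed name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        app: sftp
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp
          args: ["user1::1001::ftp-file-storage"]  # user:pass:uid:gid:dir
          ports:
            - containerPort: 22
          resources:
            requests:
              cpu: 250m          # 0.25 CPU
              memory: 64Mi
            limits:
              cpu: 500m          # 0.5 CPU
              memory: 128Mi
          volumeMounts:
            - name: sftp-storage
              mountPath: /home/user1/ftp-file-storage
      volumes:
        - name: sftp-storage
          emptyDir: {}           # illustration only; use a PVC for real data
```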
After doing all these, we can now access the SFTP server on our Kubernetes cluster as shown in the following screenshot.
We can also upload files to the SFTP server on Kubernetes.
Conclusion
Yup, that’s all for getting an SFTP server running easily on Kubernetes.
After we have configured our SFTP server to use secure authentication and encryption, we can create an SFTP user account for each of the parties who will communicate with us using EDI, and grant them the necessary permissions to access the directories where EDI files will be stored.
Finally, we also need to monitor file transfers for errors or issues and troubleshoot as needed. This may involve checking SFTP server logs, EDI software logs, or network connections.
Overall, this is the quick start of using SFTP for EDI to ensure the secure and reliable transfer of electronic business documents between different parties.
To begin, we need to create a new Email Communication Services resource from the marketplace, as shown in the screenshot below.
US is the only option for the Data Location now in Email Communication Services.
Take note that currently we can only choose United States as the Data Location, which determines where the data will be stored at rest. This cannot be changed after the resource has been created. Consequently, the Azure Communication Services resource, which we will configure next, must also store its data in the United States. We will talk about this later.
After getting the domain, we need to connect Azure Communication Services to it to send emails.
As mentioned earlier, we need to make sure that the Azure Communication Services resource has United States as its Data Location as well. Otherwise, we will not be able to link the email domain for email sending.
Before we begin, we have to get the connection string for the Azure Communication Service resource.
Getting connection string of the Azure Communication Service.
Here I have the following code to send a sample email to myself.
using Azure.Communication.Email;
using Azure.Communication.Email.Models;

// Read the connection string and sender address from environment variables
string connectionString = Environment.GetEnvironmentVariable("COMMUNICATION_SERVICES_CONNECTION_STRING") ?? string.Empty;
string emailFrom = Environment.GetEnvironmentVariable("EMAIL_FROM") ?? string.Empty;

if (connectionString != string.Empty)
{
    EmailClient emailClient = new EmailClient(connectionString);

    // Subject is set in the constructor; body goes in PlainText
    EmailContent emailContent = new EmailContent("Welcome to Azure Communication Service Email APIs.");
    emailContent.PlainText = "This email message is sent from Azure Communication Service Email using .NET SDK.";

    List<EmailAddress> emailAddresses = new List<EmailAddress>
    {
        new EmailAddress("gclin009@hotmail.com") { DisplayName = "Goh Chun Lin" }
    };
    EmailRecipients emailRecipients = new EmailRecipients(emailAddresses);

    EmailMessage emailMessage = new EmailMessage(emailFrom, emailContent, emailRecipients);
    SendEmailResult emailResult = emailClient.Send(emailMessage, CancellationToken.None);
}
Setting environment variables for local debugging purpose.
Tada, there should be an email successfully sent out as instructed.
Email is successfully sent and received. =)
Containerise the Console App
Next, we need to containerise our console app above.
Assume that our console app is called MyConsoleApp, then we will prepare a Dockerfile as follows.
FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyConsoleApp.csproj", "."]
RUN dotnet restore "./MyConsoleApp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "MyConsoleApp.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "MyConsoleApp.csproj" -c Release -o /app/publish /p:UseAppHost=false
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyConsoleApp.dll"]
We then can publish it to Docker Hub for consumption later.
In Kubernetes, pods are the smallest deployable units of computing we can create and manage. A pod can have one or more related containers, with shared storage and network resources. Here, we will schedule a job that creates pods running the container image we built above; the pods’ task, in our case, is to send emails.
Hence, if we would like to have the email scheduler to be triggered at 8am of every Friday, we can create a CronJob in the namespace my-namespace with the following YAML file.
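A sketch of such a CronJob is shown below; the image name is an assumption for illustration, while the CronJob name matches the annotate command used later:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: email-scheduler
  namespace: my-namespace
spec:
  schedule: "0 8 * * 5"        # 8am every Friday
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: email-sender
              image: chunlin/my-console-app:latest   # assumed Docker Hub image
          restartPolicy: OnFailure
```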
After the CronJob is created, we can proceed to annotate it with the command below.
kubectl annotate cj email-scheduler jobtype=scheduler frequency=weekly
This helps us to query the cron jobs with jsonpath easily in the future. For example, to list all the annotated cron jobs together with their jobtype and frequency (from which we can pick out the weekly ones), we can run the following command.
kubectl get cj -A -o=jsonpath="{range .items[?(@.metadata.annotations.jobtype)]}{.metadata.namespace},{.metadata.name},{.metadata.annotations.jobtype},{.metadata.annotations.frequency}{'\n'}{end}"
Create ConfigMap
In our email sending programme, we have two environment variables. Hence, we can create a ConfigMap to store the data as key-value pairs.
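A sketch of such a ConfigMap, plus a companion Secret for the sensitive connection string, might look like the following (names and placeholder values are assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: email-config
  namespace: my-namespace
data:
  EMAIL_FROM: "DoNotReply@<your-verified-domain>"   # assumed sender address
---
apiVersion: v1
kind: Secret
metadata:
  name: email-secret
  namespace: my-namespace
type: Opaque
stringData:
  COMMUNICATION_SERVICES_CONNECTION_STRING: "<connection-string>"
```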
Then, the Pods created by the CronJob can thus consume the ConfigMap and Secret above as environment variables. So, we need to update the CronJob YAML file to be as follows.
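For instance, the container section of the CronJob spec can pull both in with envFrom; the names below are assumptions matching the sketch objects described in this post:

```yaml
          containers:
            - name: email-sender
              image: chunlin/my-console-app:latest   # assumed image
              envFrom:
                - configMapRef:
                    name: email-config     # exposes EMAIL_FROM
                - secretRef:
                    name: email-secret     # exposes the connection string
```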
The problem with using Secrets is that we can’t really commit them to our code repository, because the data is only Base64-encoded, not encrypted. Hence, in order to store our Secrets safely, we need to use SealedSecret, which helps us to encrypt our Secret. The SealedSecret can only be decrypted by the controller running in the target cluster.
Unpack it: tar -zxvf helm-v3.2.0-linux-amd64.tar.gz
Move the Helm binary to desired location: sudo mv linux-amd64/helm /usr/local/bin/helm
Once we have successfully downloaded Helm and have it ready, we can add a Chart repository. In our case, we need to add the repo of SealedSecret Helm Chart.
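The commands might look like the following (the release name and target namespace are choices, not requirements):

```shell
# Add the repo hosting the SealedSecret controller chart, then install it
helm repo add sealed-secrets https://bitnami-labs.github.io/sealed-secrets
helm repo update
helm install sealed-secrets -n kube-system sealed-secrets/sealed-secrets
```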
The Kamus URL could be found after we installed Kamus as shown in the screenshot below.
Kamus URL in localhost
We need to follow the instruction printed on the screen to get the Kamus URL. To do so, we need to forward local port to the pod, as shown in the following screenshot.
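The port-forward command might look like this, assuming the Helm release exposes a service named kamus-encryptor in the default namespace:

```shell
# Forward local port 8888 to the Kamus encryptor service in the cluster
kubectl port-forward -n default svc/kamus-encryptor 8888:80
```

With this running, http://localhost:8888 can be used as the Kamus URL.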
Successfully forward the port and thus can use the URL as the Kamus URL.
Hence, let’s say we want to encrypt a secret “alamak”, we can do so as follows.
Since our localhost Kamus URL is using HTTP, we have to specify “--allow-insecure-url”.
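A kamus-cli invocation might look like the following; the service account and namespace here are assumptions and must match the pod that will later decrypt the value:

```shell
kamus-cli encrypt \
  --secret alamak \
  --service-account default \
  --namespace default \
  --kamus-url http://localhost:8888 \
  --allow-insecure-url
```

The command prints the encrypted string, which is what we store in the ConfigMap.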
After we have encrypted our secret successfully, we need to configure our pod accordingly so that it can decrypt the value with the Kamus Decrypt API. The simplest way is to store the encrypted value in a ConfigMap; since it is already encrypted, it is safe to keep it there.
If the deployment is successful, we should be able to see the following when we visit localhost:8081 on our Internet browser, as shown in the following screenshot.
Yay, the original text “alamak” is successfully decrypted and displayed.
Deploy Our CronJob
Now, since we have everything set up, we can create our Kubernetes CronJob with the YAML file we have earlier. For local testing, I have edited the schedule to be “*/2 * * * *”. This means that an email will be sent to me every 2 minutes.
After waiting for a couple of minutes, I have received a few emails sent via the Azure Communication Services, as shown below.
Now the emails are received every 2 minutes. =)
Hooray, this is how we build a simple Kubernetes CronJob and how we can send emails with the Azure Email Communication Services.
Last week, a developer in our team encountered an interesting question in his SQL script on SQL Server 2019. For the convenience of discussion, I’ve simplified his script as follows.
DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'
SELECT CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
ELSE -1
END AS Result
The result he expected was 20.50 because @NUM equals 0, so the first result expression should have been the one returned. However, it actually returned 20.5000, as if the second result expression, which casts @VAL into a decimal value with a scale of 4, had been run.
All data type conversions allowed for SQL Server system-supplied data types (Image Source: Microsoft Learn)
Data Type Precedence
While the above chart illustrates all the possible explicit and implicit conversions, we still do not know the resulting data type of the conversion. For our case above, the resulting data type depends on the rules of data type precedence.
Since DECIMAL has a higher precedence than INT, we can be sure that the script above results in a DECIMAL output with the higher of the two scales, i.e. DECIMAL(10, 4). This explains why the result of his script is 20.5000.
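One way to confirm the resulting data type ourselves is to cast the CASE result to sql_variant and inspect its base type and scale with SQL_VARIANT_PROPERTY; this is a verification sketch, not part of the original script:

```sql
DECLARE @NUM AS TINYINT = 0;
DECLARE @VAL AS VARCHAR(MAX) = '20.50';

-- Wrap the CASE expression in sql_variant to query its runtime type
SELECT
    SQL_VARIANT_PROPERTY(CAST(CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
                                        WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
                                        ELSE -1
                              END AS sql_variant), 'BaseType') AS BaseType,
    SQL_VARIANT_PROPERTY(CAST(CASE @NUM WHEN 0 THEN CAST(@VAL AS DECIMAL(10, 2))
                                        WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
                                        ELSE -1
                              END AS sql_variant), 'Scale') AS Scale;
```

This should report a decimal base type with a scale of 4, matching the explanation above.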
Conclusion
Now, if we change the script above to be something as follows, we should receive an error saying “Error converting data type varchar to numeric”.
DECLARE @NUM AS TINYINT = 0
DECLARE @VAL AS VARCHAR(MAX) = '20.50'
SELECT CASE @NUM WHEN 0 THEN 'A'
WHEN 1 THEN CAST(@VAL AS DECIMAL(10, 4))
ELSE -1
END AS Result
Yup, that’s all about our discussion about the little bug he found in his script. Hope you find it useful. =)
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
In order to build applications which utilise the scalability, flexibility, and resilience of cloud computing, the applications are nowadays normally developed with microservice architecture using containers. Microservice architecture enables our applications to be composed of small independent backend services that communicate with each other over the network.
In general, when applying a microservice architecture, while backend systems are split up into microservices, the frontend is still often developed as a monolith. This is not a problem when our application is small and we have a strong frontend team working on it. However, when the application grows to a larger scale, a monolithic frontend will start to become inefficient and unmaintainable for the following reasons.
Firstly, it is challenging to keep the frontend technologies used in a large application up-to-date. With micro frontends, we can upgrade the frontend on a per-function basis. It also allows developers to use different frontend technologies for different functions based on the needs.
Secondly, since the source code of each micro frontend is separated, each individual frontend component has a much smaller codebase than the monolithic version. This improves the maintainability of the frontend because smaller codebases are easier to understand and distribute.
Thirdly, with micro frontend, we can split the frontend development team into smaller teams so that each team only needs to focus on relevant business functions.
Introduction of single-spa
In a micro frontend architecture, we need a framework to bring together multiple JavaScript micro frontends in our application. The framework we’re going to discuss here is called single-spa.
We chose single-spa because it enables the implementation of micro frontends with many popular JavaScript UI frameworks, such as Angular and Vue. By leveraging single-spa, we can register micro frontends so that they are mounted and unmounted correctly for different URLs.
In this article, single-spa will work as an orchestrator to handle the micro frontend switch so that individual micro frontend does not need to worry about the global routing.
The Orchestrator
The orchestrator is nothing but a project holding single-spa which is responsible for global routing, i.e. determining which micro frontends get loaded.
We will be loading different micro frontends into the two placeholders which consume the same custom styles.
We can install the create-single-spa tool globally with the following command.
npm install --global create-single-spa
Once it is installed, we will create our project folder containing another empty folder called “orchestrator”, as shown in the following screenshot.
We have now initialised our project.
We will now create the single-spa root config, which is the core of our orchestrator, with the following command.
create-single-spa
Then we will need to answer a few questions, as shown in the screenshots below in order to generate our orchestrator.
We’re generating orchestrator using the single-spa root config type.
That’s all for now for our orchestrator. We will come back to it after we have created our micro frontends.
Micro Frontends
We will again use the create-single-spa to create the micro frontends. Instead of choosing root config as the type, this time we will choose to generate the parcel instead, as shown in the following screenshot.
We will be creating Vue 3.0 micro frontends.
To have our orchestrator import the micro frontends, each micro frontend app needs to be exposed as a System.register module. We can do this by editing the vue.config.js file.
Here we also force the generated output file name to be app.js for import convenience in the orchestrator.
Now, we can proceed to build this app with the following command so that the app.js file can be generated.
npm run build
The app.js file is generated after we run the build script that is defined in package.json file.
We then can serve this micro frontend app with http-server for local testing later. We will be running the following command in its dist directory to specify that we’re using port 8011 for the app1 micro frontend.
http-server . --port 8011 --cors
This is what we will be seeing if we navigate to the micro frontend app now.
Link Orchestrator with Micro Frontend Apps
Now, we can return to the index.ejs file to specify the URL of our micro frontend app as shown in the screenshot below.
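In index.ejs, this mapping is done through a SystemJS import map; a sketch might look like the following, where the module names depend on how the micro frontends were registered:

```html
<script type="systemjs-importmap">
  {
    "imports": {
      "app1": "http://localhost:8011/app.js",
      "app2": "http://localhost:8012/app.js"
    }
  }
</script>
```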
We can now launch our orchestrator with the following command in the orchestrator directory.
npm start
Based on the package.json file, our orchestrator will be hosted at port 9000.
Now, if we repeat what we have done for app1 for another Vue 3.0 app called app2 (which we will deploy on port 8012), we can achieve something as follows.
Finally, to have the images shown properly, we simply need to update the Content-Security-Policy to be as follows.
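For example, the Content-Security-Policy meta tag in index.ejs can be relaxed to allow images from any source; this is a permissive sketch intended for local development only:

```html
<meta http-equiv="Content-Security-Policy"
      content="default-src 'self' https: localhost:*;
               img-src * data:;
               script-src 'unsafe-inline' 'unsafe-eval' https: localhost:*;
               connect-src https: localhost:* ws://localhost:*;
               style-src 'unsafe-inline' https:;">
```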
Also, in order to make sure the orchestrator indeed loads two different micro frontends, we can edit the content of the two apps to look different, as shown below.
Design System
In a micro frontend architecture, every team builds its part of the frontend. With this drastic expansion of the frontend development work, there is a need for us to streamline the design work by having a complete set of frontend UI design standards.
In addition, in order to maintain the consistency of the look-and-feel of our application, it is important to make sure that all our relevant micro frontends are adopting the same design system which also enables developers to replicate designs quickly by utilising premade UI components.
Here in single-spa, we can host our CSS in a shared micro frontend app and have it contain only the common CSS.
However, micro frontend is not suitable for all projects, especially when the development team is small or when the project is just starting off. Micro frontend is only recommended when the backend is already on microservices and the team finds that scaling is getting more and more challenging. Hence, please plan carefully before migrating to micro frontend.
If you’d like to find out more about the single-spa framework that we are using in this article, please visit the following useful links.