Observing Orchard Core: Traces with Grafana Tempo and ADOT

In the previous article, we discussed how to build a custom monitoring pipeline with Grafana running on Amazon EC2 to receive metrics and logs, two of the observability pillars, sent from our Orchard Core app on Amazon ECS. Today, we will move on to the third pillar of observability: traces.

Source Code

The CloudFormation templates and relevant C# source code discussed in this article are available on GitHub as part of the Orchard Core Basics Companion (OCBC) Project: https://github.com/gcl-team/Experiment.OrchardCore.Main.

Lisa Jung, senior developer advocate at Grafana, talks about the three pillars in observability (Image Credit: Grafana Labs)

About Grafana Tempo

To capture and visualise traces, we will use Grafana Tempo, an open-source, scalable, and cost-effective tracing backend developed by Grafana Labs. Unlike other tracing tools, Tempo does not require an index, making it easy to operate and scale.

We choose Tempo because it is fully compatible with OpenTelemetry, the open standard for collecting distributed traces, which ensures flexibility and vendor neutrality. In addition, Tempo seamlessly integrates with Grafana, allowing us to visualise traces alongside metrics and logs in a single dashboard.

Finally, being a Grafana Labs project means Tempo has strong community backing and continuous development.

About OpenTelemetry

With a solid understanding of why Tempo is our tracing backend of choice, let’s now dive deeper into OpenTelemetry, the open-source framework we use to instrument our Orchard Core app and generate the trace data Tempo collects.

OpenTelemetry is a Cloud Native Computing Foundation (CNCF) project and a vendor-neutral, open standard for collecting traces, metrics, and logs from our apps. This makes it an ideal choice for building a flexible observability pipeline.

OpenTelemetry provides SDKs for instrumenting apps across many programming languages, including C# via the .NET SDK, which we use for Orchard Core.

OpenTelemetry uses the standard OTLP (OpenTelemetry Protocol) to send telemetry data to any compatible backend, such as Tempo, allowing seamless integration and interoperability.

Both Grafana Tempo and OpenTelemetry are projects under the CNCF umbrella. (Image Source: CNCF Cloud Native Interactive Landscape)

Setup Tempo on EC2 With CloudFormation

It is straightforward to deploy Tempo on EC2.

Let’s walk through the EC2 UserData script that installs and configures Tempo on the instance.

First, we download the Tempo release binary, extract it, move it to a proper system path, and ensure it is executable.

wget https://github.com/grafana/tempo/releases/download/v2.7.2/tempo_2.7.2_linux_amd64.tar.gz
tar -xzvf tempo_2.7.2_linux_amd64.tar.gz
mv tempo /usr/local/bin/tempo
chmod +x /usr/local/bin/tempo

Next, we create a basic Tempo configuration file at /etc/tempo.yaml to define how Tempo listens for traces and where it stores trace data.

echo "
server:
http_listen_port: 3200
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
storage:
trace:
backend: local
local:
path: /tmp/tempo/traces
" > /etc/tempo.yaml

Let’s break down the configuration file above.

The http_listen_port sets the HTTP port (3200) for Tempo’s internal web server. This port is used for health checks and Prometheus metrics.

After that, we configure where Tempo listens for incoming trace data. In the configuration above, we enabled OTLP receivers via both gRPC and HTTP, the two protocols that OpenTelemetry SDKs and agents use to send data to Tempo. Here, the ports 4317 (gRPC) and 4318 (HTTP) are standard for OTLP.

Last but not least, for demonstration purposes, we use the simplest storage option, local storage, which writes trace data to the EC2 instance disk under /tmp/tempo/traces. This is fine for testing or small setups, but for production we will likely want to use a service like Amazon S3.

In addition, since we are using local storage on EC2, we can easily SSH into the EC2 instance and directly inspect whether traces are being written. This is incredibly helpful during debugging. We simply run the following command to see whether files are being generated when our Orchard Core app emits traces.

ls -R /tmp/tempo/traces

The configuration above is intentionally minimal. As our setup grows, we can explore advanced options like remote storage, multi-tenancy, or even scaling with Tempo components.

Each flushed trace block (folder with UUID) contains a data.parquet file, which holds the actual trace data.

Finally, we create a systemd unit file so that Tempo starts on boot and automatically restarts if it crashes.

cat <<EOF > /etc/systemd/system/tempo.service
[Unit]
Description=Grafana Tempo service
After=network.target

[Service]
ExecStart=/usr/local/bin/tempo -config.file=/etc/tempo.yaml
Restart=always
RestartSec=5
User=root
LimitNOFILE=1048576

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reexec
systemctl daemon-reload
systemctl enable --now tempo

This systemd service ensures that Tempo runs in the background and automatically starts up after a reboot or a crash. This setup is crucial for a resilient observability pipeline.

Did You Know: When we SSH into an EC2 instance running Amazon Linux 2023, we will be greeted by a cockatiel in ASCII art! (Image Credit: OMG! Linux)

Understanding OTLP Transport Protocols

In the previous section, we configured Tempo to receive OTLP data over both gRPC and HTTP. These are the two transport protocols supported by OTLP, and each comes with its own strengths and trade-offs. Let’s break them down.

Ivy Zhuang from Google gave a presentation on gRPC and Protobuf at gRPConf 2024. (Image Credit: gRPC YouTube)

Tempo has native support for gRPC, and many OpenTelemetry SDKs default to using it. gRPC is a modern, high-performance transport protocol built on top of HTTP/2. It is the preferred option when performance is critical. gRPC also supports streaming, which makes it ideal for high-throughput scenarios where telemetry data is sent continuously.

However, gRPC is not natively supported in browsers, so it is not ideal for frontend or web-based telemetry collection unless a proxy or gateway is used. In such scenarios, we will normally choose HTTP which is browser-friendly. HTTP is a more traditional request/response protocol that works well in restricted environments.

Since we are collecting telemetry from server-side workloads like Orchard Core running on ECS, gRPC is typically the better choice due to its performance benefits and native support in Tempo.

Please take note, however, that gRPC requires HTTP/2, and some environments, for example IoT devices and embedded systems, might not have mature gRPC client support. Hence, OTLP over HTTP is often preferred in simpler or constrained systems.

Daniel Stenberg, Senior Network Engineer at Mozilla, sharing about HTTP/2 at GOTO Copenhagen 2015. (Image Credit: GOTO Conferences YouTube)

gRPC allows multiplexing over a single connection using HTTP/2. Hence, in gRPC, all telemetry signals, i.e. logs, metrics, and traces, can be sent concurrently over one connection. However, with HTTP, each telemetry signal needs a separate POST request to its own endpoint as listed below to enforce clean schema boundaries, simplify implementation, and stay aligned with HTTP semantics.

  • Logs: /v1/logs;
  • Metrics: /v1/metrics;
  • Traces: /v1/traces.

In HTTP, since each signal has its own POST endpoint with its own protobuf schema in the body, there is no need for the receiver to guess what is in the body.
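As a rough illustration of the HTTP variant, here is a minimal sketch of how the OpenTelemetry .NET OTLP exporter could be switched from gRPC to HTTP/protobuf. The <tempo-ec2-host> placeholder and the builder calls mirror the direct-integration snippet shown later in this article, and depending on the SDK version the signal-specific path may need to be spelled out explicitly.

builder.Services
    .AddOpenTelemetry()
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(options =>
        {
            // OTLP over HTTP targets the traces-specific path on port 4318.
            options.Endpoint = new Uri("http://<tempo-ec2-host>:4318/v1/traces");
            options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.HttpProtobuf;
        }));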

AWS Distro for Open Telemetry (ADOT)

Now that we have Tempo running on EC2 and understand the OTLP protocols it supports, the next step is to instrument our Orchard Core app to generate and send trace data.

The following code snippet shows what a typical direct integration with Tempo might look like in an Orchard Core app.

builder.Services
    .AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService(serviceName: "cld-orchard-core"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(options =>
        {
            options.Endpoint = new Uri("http://<tempo-ec2-host>:4317");
            options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.Grpc;
        })
        .AddConsoleExporter());

This approach works well for simple use cases during the development stage, but it comes with trade-offs worth considering. Firstly, we couple our app directly to the observability backend, reducing flexibility. Secondly, central management becomes harder when we scale to many services or environments.

This is where AWS Distro for OpenTelemetry (ADOT) comes into play.

The ADOT collector. (Image credit: ADOT technical docs)

ADOT is a secure, AWS-supported distribution of the OpenTelemetry project that simplifies collecting and exporting telemetry data from apps running on AWS services, for example our Orchard Core on ECS now. ADOT decouples our apps from the observability backend, provides centralised configuration, and handles telemetry collection more efficiently.

Sidecar Pattern

We can deploy the ADOT collector in several ways, such as running it on a dedicated node or ECS service to receive telemetry from multiple apps. We can also take the sidecar approach, which cleanly separates concerns. Our Orchard Core app focuses on business logic, while a nearby ADOT sidecar handles telemetry collection and forwarding. This mirrors modern cloud-native patterns and gives us more flexibility down the road.

The sidecar pattern running in Amazon ECS. (Image Credit: AWS Open Source Blog)

The following CloudFormation template shows how we deploy ADOT as a sidecar in ECS. The collector config is stored in AWS Systems Manager Parameter Store under /myapp/otel-collector-config, and injected via the AOT_CONFIG_CONTENT environment variable. This keeps our infrastructure clean, decoupled, and secure.

ecsTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    Family: !Ref ServiceName
    NetworkMode: awsvpc
    ExecutionRoleArn: !GetAtt ecsTaskExecutionRole.Arn
    TaskRoleArn: !GetAtt iamRole.Arn
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref OrchardCoreImage
        ...

      - Name: adot-collector
        Image: public.ecr.aws/aws-observability/aws-otel-collector:latest
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Sub "/ecs/${ServiceName}-log-group"
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: adot
        Essential: false
        Cpu: 128
        Memory: 512
        HealthCheck:
          Command: ["/healthcheck"]
          Interval: 30
          Timeout: 5
          Retries: 3
          StartPeriod: 60
        Secrets:
          - Name: AOT_CONFIG_CONTENT
            ValueFrom: !Sub "arn:${AWS::Partition}:ssm:${AWS::Region}:${AWS::AccountId}:parameter/otel-collector-config"

Deploy an ADOT sidecar on ECS to collect observability data from Orchard Core.

There are several interesting and important details in the CloudFormation snippet above that are worth calling out. Let’s break them down one by one.

Firstly, we choose awsvpc as the NetworkMode of the ECS task. In awsvpc mode, the ECS task receives its own ENI (Elastic Network Interface), which is great for network-level isolation at the task level. The containers within the task, i.e. our Orchard Core container and the ADOT sidecar, share that ENI and the same network namespace, so our Orchard Core app can reach the sidecar over the loopback interface, i.e. http://localhost:4317.
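With the sidecar in place, the only change needed in the direct-integration snippet shown earlier is the exporter endpoint. A minimal sketch, assuming the same OpenTelemetry setup:

builder.Services
    .AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService(serviceName: "cld-orchard-core"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddOtlpExporter(options =>
        {
            // The ADOT sidecar shares the task's network namespace, so loopback reaches it.
            options.Endpoint = new Uri("http://localhost:4317");
            options.Protocol = OpenTelemetry.Exporter.OtlpExportProtocol.Grpc;
        }));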

Secondly, we include a health check for the ADOT container. ECS will use this health check to restart the container if it becomes unhealthy, improving reliability without manual intervention. In November 2022, Paurush Garg from AWS added the healthcheck component with the new ADOT collector release, so we can simply specify that we will be using this healthcheck component in the configuration that we will discuss next.

Yes, the configuration! Instead of hardcoding the ADOT configuration into the task definition, we inject it securely at runtime using the AOT_CONFIG_CONTENT secret. This environment variable AOT_CONFIG_CONTENT is designed to enable us to configure the ADOT collector. It will override the config file used in the ADOT collector entrypoint command.

The SSM Parameter for the environment variable AOT_CONFIG_CONTENT.

Wrap-Up

By now, we have completed the journey of setting up Grafana Tempo on EC2, exploring how traces flow through OTLP protocols like gRPC and HTTP, and understanding why ADOT is often the better choice in production-grade observability pipelines.

With everything connected, our Orchard Core app is now able to send traces into Tempo reliably. This will give us end-to-end visibility with OpenTelemetry and AWS-native tooling.

References

Observing Orchard Core: Metrics and Logs with Grafana and Amazon CloudWatch

I recently deployed an Orchard Core app on Amazon ECS and wanted to gain better visibility into its performance and health.

Instead of relying solely on basic Amazon CloudWatch metrics, I decided to build a custom monitoring pipeline that has Grafana running on Amazon EC2 receiving metrics and EMF (Embedded Metrics Format) logs sent from the Orchard Core on ECS via CloudFormation configuration.

In this post, I will walk through how I set this up from scratch, what challenges I faced, and how you can do the same.

Source Code

The CloudFormation templates and relevant C# source code discussed in this article are available on GitHub as part of the Orchard Core Basics Companion (OCBC) Project: https://github.com/gcl-team/Experiment.OrchardCore.Main.

Why Grafana?

In the previous post, where we set up Orchard Core on ECS, we talked about how we can send metrics and logs to CloudWatch. While CloudWatch offers out-of-the-box infrastructure metrics and AWS-native alarms and logs, the dashboards it provides are limited and not very customisable. Managing observability with just CloudWatch also gets tricky when our apps span multiple AWS regions, accounts, or other cloud environments.

The GrafanaLive event in Singapore in September 2023. (Event Page)

If we are looking for a solution that is not tied to a single vendor like AWS, Grafana can be one of the options. Grafana is an open-source visualisation platform that lets teams monitor real-time metrics from multiple sources, like CloudWatch, X-Ray, Prometheus, and so on, all in unified dashboards. It is lightweight, extensible, and ideal for observability in cloud-native environments.

Is Grafana the only solution? Definitely not! However, I personally still prefer Grafana because it is open-source and free to start with. In this blog post, we will also see how easy it is to host Grafana on EC2 and integrate it directly with CloudWatch, with no extra agents needed.

Three Pillars of Observability

In observability, there are three pillars, i.e. logs, metrics, and traces.

Lisa Jung, senior developer advocate at Grafana, talks about the three pillars in observability (Image Credit: Grafana Labs)

Firstly, logs are text records that capture events happening in the system.

Secondly, metrics are numeric measurements tracked over time, such as HTTP status code counts, response times, or ECS CPU and memory utilisation rates.

Finally, traces record the path a request takes as it travels through the different parts of the system. Together, these three pillars form a strong observability foundation which can help us identify issues faster, reduce downtime, and improve system reliability. This will ultimately support a better user experience for our apps.

This is where a tool like Grafana comes in, because Grafana helps us visualise, analyse, and alert based on our metrics, making observability practical and actionable.

Setup Grafana on EC2 with CloudFormation

It is straightforward to install Grafana on EC2.

Firstly, let’s define the security group that we will use for the EC2 instance.

ec2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow access to the EC2 instance hosting Grafana
    VpcId: {"Fn::ImportValue": !Sub "${CoreNetworkStackName}-${AWS::Region}-vpcId"}
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0 # Caution: SSH open to public, restrict as needed
      - IpProtocol: tcp
        FromPort: 3000
        ToPort: 3000
        CidrIp: 0.0.0.0/0 # Caution: Grafana open to public, restrict as needed
    Tags:
      - Key: Stack
        Value: !Ref AWS::StackName

The VPC ID is imported from the common network stack, cld-core-network, that we set up earlier. Please refer to the cld-core-network stack here.

For demo purposes, notice that both SSH (port 22) and Grafana (port 3000) are open to the world (0.0.0.0/0). It is important to protect access to the EC2 instance by adding a bastion host, VPN, or IP restriction later.

In addition, SSH should only be opened temporarily. The SSH access is for when we need to log in to the EC2 instance and troubleshoot the Grafana installation manually.

Now, we can proceed to set up the EC2 instance with Grafana installed using the CloudFormation resource below.

ec2Instance:
  Type: AWS::EC2::Instance
  Properties:
    InstanceType: !Ref InstanceType
    ImageId: !Ref Ec2Ami
    NetworkInterfaces:
      - AssociatePublicIpAddress: true
        DeviceIndex: 0
        SubnetId: {"Fn::ImportValue": !Sub "${CoreNetworkStackName}-${AWS::Region}-publicSubnet1Id"}
        GroupSet:
          - !Ref ec2SecurityGroup
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum update -y
        yum install -y wget unzip
        wget https://dl.grafana.com/oss/release/grafana-10.1.0-1.x86_64.rpm
        yum install -y grafana-10.1.0-1.x86_64.rpm
        systemctl enable --now grafana-server
    Tags:
      - Key: Name
        Value: "Observability-Instance"

In the CloudFormation template above, we expect our users to access the Grafana dashboard directly over the Internet. Hence, we put the EC2 instance in a public subnet and assign an Elastic IP (EIP) to it, as demonstrated below, so that we have a consistent, publicly accessible static IP for our Grafana.

ecsEip:
  Type: AWS::EC2::EIP

ec2EIPAssociation:
  Type: AWS::EC2::EIPAssociation
  Properties:
    AllocationId: !GetAtt ecsEip.AllocationId
    InstanceId: !Ref ec2Instance

For production systems, placing instances in public subnets and exposing them with a public IP requires strong security measures to be in place. Otherwise, it is recommended to place our Grafana EC2 instance in a private subnet and access it via an Application Load Balancer (ALB), with a NAT Gateway for outbound traffic, to reduce the attack surface.

Pump CloudWatch Metrics to Grafana

Grafana supports CloudWatch as a native data source.

With the appropriate AWS credentials and region, we can use an Access Key ID and Secret Access Key to grant Grafana access to CloudWatch. The user that the credentials belong to must have the AmazonGrafanaCloudWatchAccess policy.

The user that Grafana uses to access CloudWatch must have the AmazonGrafanaCloudWatchAccess policy.

However, using an AWS Access Key/Secret in the Grafana data source connection details is less secure and not ideal for EC2 setups. In addition, AmazonGrafanaCloudWatchAccess is a managed policy optimised for running Grafana as a managed service within AWS. Thus, it is recommended to create our own custom policy so that we can limit the permissions to only what is needed, as demonstrated in the following CloudFormation template.

ec2InstanceRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ec2.amazonaws.com
          Action: sts:AssumeRole

    Policies:
      - PolicyName: EC2MetricsAndLogsPolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Sid: AllowReadingMetricsFromCloudWatch
              Effect: Allow
              Action:
                - cloudwatch:ListMetrics
                - cloudwatch:GetMetricData
              Resource: "*"
            - Sid: AllowReadingLogsFromCloudWatch
              Effect: Allow
              Action:
                - logs:DescribeLogGroups
                - logs:GetLogGroupFields
                - logs:StartQuery
                - logs:StopQuery
                - logs:GetQueryResults
                - logs:GetLogEvents
              Resource: "*"

Again, using our own custom policy provides better control and follows the best practice of least privilege.

With an IAM role, we do not need to provide an AWS Access Key/Secret in the Grafana connection details for CloudWatch as a data source.

Visualising ECS Service Metrics

Now that Grafana is configured to pull data from CloudWatch, ECS metrics like CPUUtilization and MemoryUtilization are available. We can proceed to create a dashboard and select the right namespace as well as the right metric name.

Setting up the diagram for memory utilisation of our Orchard Core app in our ECS cluster.

As shown in the following dashboard, we display memory and CPU utilisation rates because they help us ensure that our ECS services are performing within safe limits, neither overusing nor underutilising resources.

Both ECS service metrics and container insights are displayed on Grafana dashboard.

Visualising ECS Container Insights Metrics

ECS Container Insights provides deeper metrics, such as task counts, network I/O, storage I/O, and so on.

In the dashboard above, we can also see the Task Count. Task Count helps us make sure our services are running the right number of instances at all times.

Task Count by itself is not a cost metric, but if we consistently see high task counts with low CPU/memory usage, it indicates we can potentially consolidate workloads and reduce costs.

Instrumenting Orchard Core to Send Custom App Metrics

Now that we have seen how ECS metrics are visualised in Grafana, let’s move on to instrumenting our Orchard Core app to send custom app-level metrics. This will give us deeper visibility into what our app is really doing.

Metrics should be tied to business objectives. It’s crucial that the metrics you collect align with KPIs that can drive decision-making.

Metrics should be actionable. The collected data should help identify where to optimise, what to improve, and how to make decisions. For example, by tracking app-level metrics such as response time and HTTP status codes, we gain insight into both the performance and reliability of our Orchard Core app. This allows us to catch slowdowns or failures early, improving user satisfaction.

SLA vs SLO vs SLI: Key Differences in Service Metrics (Image Credit: Atlassian)

By tracking response times and HTTP status code counts at the endpoint level, we are measuring SLIs that are necessary to monitor whether we are meeting our SLOs.

With clear SLOs and SLIs, we can then focus on what really matters from a performance and reliability perspective. For example, a common SLO could be “99.9% of requests to our Orchard Core API endpoints must be processed within 500ms.”

In terms of sending custom app-level metrics from our Orchard Core app to CloudWatch and then to Grafana, there are many approaches depending on our use case. If we are looking for simplicity and speed, the CloudWatch SDK and EMF are the easiest and most straightforward ways to get started with sending custom metrics from Orchard Core to CloudWatch and visualising them in Grafana.

Using CloudWatch SDK to Send Metrics

We will start by creating a middleware called EndpointStatisticsMiddleware, with the AWSSDK.CloudWatch NuGet package referenced. In the middleware, we create a MetricDatum object to define the metric that we want to send to CloudWatch.

var metricData = new MetricDatum
{
    MetricName = metricName,
    Value = value,
    Unit = StandardUnit.Count,
    Dimensions = new List<Dimension>
    {
        new Dimension
        {
            Name = "Endpoint",
            Value = endpointPath
        }
    }
};

var request = new PutMetricDataRequest
{
    Namespace = "Experiment.OrchardCore.Main/Performance",
    MetricData = new List<MetricDatum> { metricData }
};

In the code above, we see new concepts like Namespace, Metric, and Dimension. They are foundational in CloudWatch. We can think of them as ways to organise and label our data to make it easy to find, group, and analyse.

  • Namespace: A container or category for our metrics. It helps to group related metrics together;
  • Metric: A series of data points that we want to track, i.e. the thing we are measuring; in our example, it could be Http2xxCount and Http4xxCount;
  • Dimension: A key-value pair that adds context to a metric.

If we do not define the Namespace, Metric, and Dimensions carefully when we send data, Grafana later will not find them, or our charts on the dashboards will be very messy and hard to filter or analyse.

In addition, as shown in the code above, we are capturing the HTTP status codes for our Orchard Core endpoints. We then call PutMetricDataAsync to send the PutMetricDataRequest asynchronously to CloudWatch.
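Putting these pieces together, the following is a minimal sketch of what EndpointStatisticsMiddleware could look like. The metric name derivation and the injected IAmazonCloudWatch client are assumptions based on the snippets above, not the exact implementation in the OCBC repository.

using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;

public class EndpointStatisticsMiddleware(RequestDelegate next, IAmazonCloudWatch cloudWatch)
{
    public async Task InvokeAsync(HttpContext context)
    {
        await next(context);

        // Derive the metric name from the status code family, e.g. Http2xxCount.
        var metricName = $"Http{context.Response.StatusCode / 100}xxCount";

        var metricData = new MetricDatum
        {
            MetricName = metricName,
            Value = 1,
            Unit = StandardUnit.Count,
            Dimensions = new List<Dimension>
            {
                new Dimension { Name = "Endpoint", Value = context.Request.Path }
            }
        };

        var request = new PutMetricDataRequest
        {
            Namespace = "Experiment.OrchardCore.Main/Performance",
            MetricData = new List<MetricDatum> { metricData }
        };

        // Send the data point to CloudWatch asynchronously.
        await cloudWatch.PutMetricDataAsync(request);
    }
}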

The HTTP status codes of each of our Orchard Core endpoints are now captured on CloudWatch.

In Grafana, when we want to configure a CloudWatch panel to show the HTTP status codes for each endpoint, the first thing we select is the Namespace, which is Experiment.OrchardCore.Main/Performance in our example. The Namespace tells Grafana which group of metrics to query.

After picking the Namespace, Grafana lists the available Metrics inside that Namespace. We pick the Metrics we want to plot, such as Http2xxCount and Http4xxCount. Finally, since we are tracking metrics by endpoint, we set the Dimension to Endpoint and select the specific endpoint we are interested in, as shown in the following screenshot.

Using EMF to Send Metrics

While using the CloudWatch SDK works well for sending individual metrics, EMF (Embedded Metric Format) offers a more powerful and scalable way to log structured metrics directly from our app logs.

Before we can use EMF, we must first ensure that the Orchard Core application logs from our ECS tasks are correctly sent to CloudWatch Logs. This is done by configuring the LogConfiguration inside the ECS TaskDefinition as we discussed last time.

# Unit 12: ECS Task Definition and Service
ecsTaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ...
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: !Ref OrchardCoreImage
        LogConfiguration:
          LogDriver: awslogs
          Options:
            awslogs-group: !Sub "/ecs/${ServiceName}-log-group"
            awslogs-region: !Ref AWS::Region
            awslogs-stream-prefix: ecs
        ...

Once the ECS task is sending logs to CloudWatch Logs, we can start embedding custom metrics into the logs using EMF.

Instead of pushing metrics directly using the CloudWatch SDK, we write structured JSON messages into the container logs. CloudWatch then automatically detects these EMF messages and converts them into CloudWatch Metrics.

The following shows what a simple EMF log message looks like.

{
  "_aws": {
    "Timestamp": 1745653519000,
    "CloudWatchMetrics": [
      {
        "Namespace": "Experiment.OrchardCore.Main/Performance",
        "Dimensions": [["Endpoint"]],
        "Metrics": [
          { "Name": "ResponseTimeMs", "Unit": "Milliseconds" }
        ]
      }
    ]
  },
  "Endpoint": "/api/v1/packages",
  "ResponseTimeMs": 142
}

When a log message reaches CloudWatch Logs, CloudWatch scans the text and looks for a valid _aws JSON object anywhere in the message. Thus, even if our log line has extra text before or after, as long as the EMF JSON is properly formatted, CloudWatch extracts it and publishes the metrics automatically.
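If we want to emit such a message from C#, a rough sketch could look like the following. It simply writes the document to stdout, which the awslogs driver ships to CloudWatch Logs; the EmfMetricLogger name is illustrative, and the property names mirror the JSON example above.

using System.Text.Json;

public static class EmfMetricLogger
{
    public static void LogResponseTime(string endpoint, double responseTimeMs)
    {
        // Shape the payload exactly like the EMF document shown above.
        var emf = new
        {
            _aws = new
            {
                Timestamp = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds(),
                CloudWatchMetrics = new[]
                {
                    new
                    {
                        Namespace = "Experiment.OrchardCore.Main/Performance",
                        Dimensions = new[] { new[] { "Endpoint" } },
                        Metrics = new[] { new { Name = "ResponseTimeMs", Unit = "Milliseconds" } }
                    }
                }
            },
            Endpoint = endpoint,
            ResponseTimeMs = responseTimeMs
        };

        // Write the EMF document to stdout; the awslogs driver ships it to CloudWatch Logs.
        Console.WriteLine(JsonSerializer.Serialize(emf));
    }
}

In a real middleware, we would measure responseTimeMs with a Stopwatch wrapped around the call to the next delegate before logging it.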

An example of log with EMF JSON in it on CloudWatch.

After CloudWatch extracts the EMF block from our log message, it automatically turns it into a proper CloudWatch Metric. These metrics are then queryable just like any normal CloudWatch metric and thus available inside Grafana too, as shown in the screenshot below.

Metrics extracted from logs containing EMF JSON are automatically turned into metrics that can be visualised in Grafana just like any other metric.

As we can see, using EMF is easier than going the CloudWatch SDK route because we do not need to change or add any extra AWS infrastructure. With EMF, our app simply writes specially formatted JSON logs.

CloudWatch then automatically extracts the metrics from those logs containing EMF JSON. The entire process requires no new service, no special SDK code, and no CloudWatch PutMetricData API calls.

Cost Optimisation with Logs vs Metrics

Logs are more expensive than metrics, especially when we are storing large amounts of data over time. This is even more true when logs are detailed and kept with a longer retention period, which means higher storage costs.

Metrics are cheaper to store because they are aggregated data points that do not require the same level of detail as logs.

CloudWatch treats each unique combination of dimensions as a separate metric, even if the metrics have the same metric name. However, compared to logs, metrics are still usually much cheaper at scale.

By embedding metrics into our log data via EMF, we are piggybacking metrics onto logs and letting CloudWatch extract the metrics without duplicating effort. Thus, when using EMF, we will be paying for both, i.e.

  1. Log ingestion and storage (for the raw logs);
  2. The extracted custom metric (for the metric).

Hence, when we are leveraging EMF, we should consider expiring logs sooner if we only need the extracted metrics in the long term.

Granularity and Sampling

Granularity refers to how frequently the metric data is collected. Fine granularity provides more detailed insights but can lead to increased data volume and costs.

Sampling is a technique to reduce the amount of data collected by capturing only a subset of data points (especially helpful in high-traffic systems). However, the challenge is ensuring that you maintain enough data to make informed decisions while keeping storage and processing costs manageable.

In our Orchard Core app above, the middleware we implemented currently calls PutMetricDataAsync to CloudWatch immediately for every request. This not only slows down our API but also costs more, because we pay for every custom metric we send to CloudWatch. Thus, we usually buffer the metrics first and batch-send them periodically. This can be done with, for example, a HostedService, an ASP.NET Core background service, that flushes the metrics at an interval.

using Amazon.CloudWatch;
using Amazon.CloudWatch.Model;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using System.Collections.Concurrent;

public class MetricsPublisher(
    IAmazonCloudWatch cloudWatch,
    IOptions<MetricsOptions> options,
    ILogger<MetricsPublisher> logger) : BackgroundService
{
    private readonly ConcurrentBag<MetricDatum> _pendingMetrics = new();

    // Called by the middleware for every request; just buffers the data point.
    public void TrackMetric(string metricName, double value, string endpointPath)
    {
        _pendingMetrics.Add(new MetricDatum
        {
            MetricName = metricName,
            Value = value,
            Unit = StandardUnit.Count,
            Dimensions = new List<Dimension>
            {
                new Dimension
                {
                    Name = "Endpoint",
                    Value = endpointPath
                }
            }
        });
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        logger.LogInformation("MetricsPublisher started.");
        while (!stoppingToken.IsCancellationRequested)
        {
            await Task.Delay(TimeSpan.FromSeconds(options.Value.FlushIntervalSeconds), stoppingToken);
            await FlushMetricsAsync();
        }
    }

    private async Task FlushMetricsAsync()
    {
        if (_pendingMetrics.IsEmpty) return;

        // CloudWatch accepts at most 1,000 MetricDatum items per PutMetricData request.
        const int MaxMetricsPerRequest = 1000;

        var metricsToSend = new List<MetricDatum>();
        var metricsCount = 0;
        while (_pendingMetrics.TryTake(out var datum))
        {
            metricsToSend.Add(datum);

            metricsCount += 1;
            if (metricsCount >= MaxMetricsPerRequest) break;
        }

        var request = new PutMetricDataRequest
        {
            Namespace = options.Value.Namespace,
            MetricData = metricsToSend
        };

        int attempt = 0;
        while (attempt < options.Value.MaxRetryAttempts)
        {
            try
            {
                await cloudWatch.PutMetricDataAsync(request);
                logger.LogInformation("Flushed {Count} metrics to CloudWatch.", metricsToSend.Count);
                break;
            }
            catch (Exception ex)
            {
                attempt++;
                logger.LogWarning(ex, "Failed to flush metrics. Attempt {Attempt}/{MaxAttempts}", attempt, options.Value.MaxRetryAttempts);
                if (attempt < options.Value.MaxRetryAttempts)
                    await Task.Delay(TimeSpan.FromSeconds(options.Value.RetryDelaySeconds));
                else
                    logger.LogError("Max retry attempts reached. Dropping {Count} metrics.", metricsToSend.Count);
            }
        }
    }

    public override async Task StopAsync(CancellationToken cancellationToken)
    {
        logger.LogInformation("MetricsPublisher stopping.");
        await FlushMetricsAsync();
        await base.StopAsync(cancellationToken);
    }
}

In our Orchard Core API, each incoming HTTP request may run on a different thread. Hence, we need a thread-safe data structure like ConcurrentBag for storing the pending metrics.

Please take note that ConcurrentBag is designed to be an unordered collection; it does not maintain the order of insertion when items are taken from it. However, since the metrics we are sending are counts of HTTP status codes, the order in which the requests were processed does not matter.

In addition, the limit of MetricData that we can send to CloudWatch per request is 1,000. Thus, we have the constant MaxMetricsPerRequest to help us make sure that we retrieve and remove at most 1,000 metrics from the ConcurrentBag.

Finally, we can inject MetricsPublisher to our middleware EndpointStatisticsMiddleware so that it can auto track every API request.
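A minimal sketch of that wiring is shown below. Registering the same MetricsPublisher instance as both a singleton and a hosted service is an assumption on my part, so that the middleware and the background flusher share one buffer; the Metrics configuration section name is illustrative, and AddAWSService comes from the AWSSDK.Extensions.NETCore.Setup package.

// Program.cs (sketch)
using Amazon.CloudWatch;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddAWSService<IAmazonCloudWatch>();
builder.Services.Configure<MetricsOptions>(builder.Configuration.GetSection("Metrics"));

// One shared instance: the middleware fills the buffer, the hosted service flushes it.
builder.Services.AddSingleton<MetricsPublisher>();
builder.Services.AddHostedService(sp => sp.GetRequiredService<MetricsPublisher>());

var app = builder.Build();
app.UseMiddleware<EndpointStatisticsMiddleware>();
app.Run();

// The middleware now buffers data points instead of calling PutMetricDataAsync directly.
public class EndpointStatisticsMiddleware(RequestDelegate next, MetricsPublisher publisher)
{
    public async Task InvokeAsync(HttpContext context)
    {
        await next(context);
        publisher.TrackMetric($"Http{context.Response.StatusCode / 100}xxCount", 1, context.Request.Path);
    }
}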

Wrap-Up

In this post, we started by setting up Grafana on EC2 and connecting it to CloudWatch to visualise ECS metrics. After that, we explored two ways, i.e. the CloudWatch SDK and EMF logs, to send custom app-level metrics from our Orchard Core app.

Whether we are monitoring system health or reporting on business KPIs, Grafana with CloudWatch offers a powerful observability stack that is both flexible and cost-aware.

References

Pushing Pkl Content from GitHub to AWS S3

In the previous article, we talked about using the S3 Object Lambda to transform the medical records, which are stored in a JSON file, into a presentable web page. However, maintaining medical records in JSON files could be challenging. In this article, we will further investigate how we can generate those JSON files.

We’re going to explore Pkl—pronounced “Pickle”—a configuration-as-code language renowned for its robust validation features and tooling. It was first introduced by Apple as an open-source project in February 2024. Pkl allows us to write configurations as code, validate them, and convert them to existing static formats.

The part highlighted in red will be the focus of this article.

About Pkl

Pkl streamlines the creation of JSON scripts, enhancing maintainability and reducing verbosity through reuse, templating, and abstraction, all supported seamlessly right out of the box.

As we can expect, our medical records JSON files will grow larger over time, and hence become increasingly difficult to maintain. Pkl can help reduce the size and complexity of our JSON files by introducing abstractions for common elements and describing similar elements in terms of their differences.

A .pkl file describes a module. Modules are objects that can be referred to from other modules.

Pkl comes with basic types, such as Numbers, Strings, Durations, etc. Having a notation for basic types, we can thus write typed objects. For example, the following module shows how we will define our medical records structure in Pkl.

module medicalVisitTemplate

class MedicalVisit {
  medicalCentreName: String
  centreType: "clinic"|"specialist"|"hospital"
  visitStartDate: Date
  visitEndDate: Date
  remark: String
  treatments: Listing<Treatment>
}

class Treatment {
  name: String
  type: "medicine"|"operation"|"scanning"
  amount: String
  startDate: Date
  endDate: Date
}

class Date {
  year: Int(isBetween(2000, 2100))
  month: Int(isBetween(1, 12))
  day: Int(isBetween(1, 31))
}

visits: Listing<MedicalVisit>

Listing is a collection in Pkl. It contains exclusively Elements, i.e. object members. In the code above, we define visits to be a collection of MedicalVisit objects. The MedicalVisit class contains information about the visit, for example the type and name of the medical centre the patient visited, the visiting period, a remark, etc. The visiting period is then defined by the Date class, which stores year, month, and day.

In the Date class, since the month can only be an integer from 1 to 12, we can restrict it to an integer range using Int with the isBetween constraint. Later, when Pkl evaluates our configuration, if an invalid value, for example 13, is provided for the month, an error will be shown, as demonstrated below.

Pkl CLI will evaluate our configuration and show detected invalid values.

Generate JSON with Pkl

So now how do we generate JSON with the module above?

Before we can generate a JSON file, we need to understand the Amending concept in Pkl. As a first intuition, think of “amending a module” as “filling out a form.”

So, to generate the chunlin.json file that was shown in the previous blog post, we can amend the medicalVisitTemplate module above with another Pkl file called chunlin.pkl as shown below.

amends "medicalVisitTemplate.pkl"

visits = new Listing<MedicalVisit> {

...
// Omitted for brevity

new {

medicalCentreName = "Tan Tock Seng Hospital"

centreType = "hospital"

visitStartDate {

year = 2024

month = 3

day = 24

}

visitEndDate {

year = 2024

month = 4

day = 19

}

remark = ""

treatments = new Listing<Treatment> {

...
// Omitted for brevity

new {

name = "Betamethasone (Valerate) 0.025% Cream 15g - Dermasone"

type = "medicine"

amount = "Applied after shower"

startDate {

year = 2024

month = 3

day = 26

}

endDate {

year = 2024

month = 4

day = 19

}

}

new {

name = "Betamethasone (Valerate) 0.1% Cream 15g - Uniflex(TM)"

type = "medicine"

amount = "Applied after shower"

startDate {

year = 2024

month = 3

day = 26

}

endDate {

year = 2024

month = 4

day = 19

}

}

}

}

}

Now we can execute the command below with the Pkl CLI to evaluate the given module and render the output as JSON.

$ ./pkl eval -f json -o ./output/chunlin.json ./input/chunlin.pkl

With the command above, we can get the same output as we see in chunlin.json.

Maintain Pkl in GitHub

Static files like Pkl or JSON can be easily maintained in code repositories such as GitHub. Using GitHub for version control allows us to track changes to our Pkl files over time. This makes it easy to revert to previous versions if something goes wrong, compare changes, and understand the evolution of our configuration files. Additionally, we can use GitHub Actions to automate various tasks related to our Pkl files, enhancing efficiency and reliability in our workflow.

GitHub Actions is an automation tool that allows us to create workflows triggered by events within our repository. These workflows can automate tasks like testing, building, and deploying code, or even running scripts. By using GitHub Actions, we can streamline the development and transformation process of our Pkl files, ensure consistency, and improve efficiency.

Thus, our mission now is to configure GitHub Actions so that a JSON file can be produced from the Pkl file and sent to the Amazon S3 bucket that we set up in an earlier article.

Configure GitHub Actions Workflow

Firstly, we need to give permission to GitHub Actions to access our S3 bucket. To do so, we will create a new user in AWS Console with appropriate rights.

We only need two permissions, s3:ListBucket and s3:PutObject, to copy files from local to the S3 bucket.

After attaching the policy, we proceed to generate an access key for this newly created user.

Secondly, we need to navigate to our repository and then click on the Actions tab to create a new simple workflow, as shown below.

Let’s start with the simple workflow.

To begin, let’s add a linter for Pkl files to the workflow. The linter, pkl-linter, is developed by Eduardo Aguilar Yépez, a senior software engineer at Draftea.

name: Evaluate Pkl and store it in S3 as JSON

on:
  push:
    branches: [ "main" ]

  # Allows us to run this workflow manually from the Actions tab
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-go@v5
        with:
          go-version: '>=1.17.0'

      - name: Get Go Version
        run: go version

      - name: Install Linter
        run: go install github.com/Drafteame/pkl-linter@latest

      - name: Run pkl-linter
        run: pkl-linter medical-records

The linter analyses our code and shows detected stylistic errors.

Next, we need to install the Pkl CLI to evaluate Pkl modules and write their output to a file. There are native executables available for us to use. As shown in the workflow above, the GitHub Actions runner is ubuntu-latest, which uses the Ubuntu 22.04 LTS image as of Jun 2024. It uses the amd64 architecture. Hence, we can download the Pkl Linux executable for amd64 architecture.

name: Evaluate Pkl and store it in S3 as JSON

...
# Omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      ...
      # Omitted for brevity

      - name: Install Pkl CLI
        run: curl -L -o pkl https://github.com/apple/pkl/releases/download/0.25.3/pkl-linux-amd64

      - name: Grant execute permission to Pkl CLI
        run: chmod +x pkl

      - name: Get Pkl CLI version
        run: ./pkl --version

      - name: Eval the Pkl files
        run: |
          cd medical-records
          files=$(find . -name "*.pkl")
          count=0
          for file in $files; do
            output_filename="${file%.pkl}.json"
            ../pkl eval -f json -o ../output/$output_filename $file
          done
          cd ..

When my workflow was executed in June 2024, the version of the Pkl CLI was "Pkl 0.25.3 (Linux 5.15.0-1053-aws, native)".

As shown in the last step above, it loops through the Pkl files in the medical-records folder and evaluates them one by one using the Pkl CLI. The generated JSON files are stored in the output folder.

Eventually, what we need to do is upload the files to our AWS S3 bucket. Before that, however, let's make sure the AWS access key and secret access key we generated earlier are stored securely as GitHub Actions secrets, as shown in the screenshot below.

The AWS access key and secret access key should be stored as GitHub Actions secrets.

Now, we can easily set up the AWS CLI with the secrets above and use the s3 cp command to copy the generated JSON files over to our S3 bucket. To do so, we only need to complete our workflow with the following.

name: Evaluate Pkl and store it in S3 as JSON

...
# Omitted for brevity

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      ...
      # Omitted for brevity

      - name: Setup AWS CLI
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ap-southeast-1

      - name: Copy files to S3 bucket
        run: |
          aws s3 cp output s3://lunar.medicalrecords --exclude "*" --include "*.json" --recursive

Please take note that the s3 cp command operates on a single file by default, hence we apply the --recursive flag to indicate that the command should run on all files under the specified directory, i.e. output.

Wrap-Up

In conclusion, utilising Pkl for generating and maintaining JSON files offers significant advantages in terms of reducing complexity and enhancing maintainability. By abstracting common elements and leveraging typed objects, Pkl simplifies the management of large and evolving datasets. The structured approach provided by Pkl not only minimises redundancy but also ensures that configurations remain consistent and error-free through its robust validation features.

Additionally, by using GitHub Actions, we can automate the process of evaluating Pkl files, generating the corresponding JSON files as output, and uploading these JSON files to our S3 bucket. This automation not only enhances efficiency but also ensures that changes are tracked and managed effectively.

In summary, the infrastructure that we have gone through above and in our previous article can be illustrated in the following diagram.

References

Learning Postman Flows and Newman: A Beginner’s Journey

For API developers, Postman is a popular tool that streamlines the process of testing APIs. Postman is like having a Swiss Army knife for API development as it allows developers to interact with APIs efficiently.

In March 2023, Postman Flows was officially announced. Postman Flows is a no-code visual tool within the Postman platform designed to help us create API workflows by dragging and dropping components, making it easier to build complex sequences of API calls without writing extensive code.

Please make sure Flows is enabled in our Postman Workspace.

Recently, my teammate demonstrated how Postman Flows works, and I’d like to share what I’ve learned from him in this article.

Use Case

Assume that there is a list of APIs that we need to call in sequence every time; this is where we can make use of Postman Flows. For demonstration purposes, let's say we have to call the following three AWS APIs for a new user registration.

We will first need to define the environment variables, as shown in the following screenshot.

We keep the region variable empty because IAM is a global service, so its region should be Global.

Next, we will setup the three API calls. For example, to create a new user on AWS, we need to make an HTTP GET request to the IAM base URL with CreateUser as the action, as shown in the following screenshot.

The UserName will be getting its value from the variable userName.

To tag the user, for example assigning the user above to a team in the organisation, we can do so by setting TagUser as the action, as shown in the screenshot below. The team that the user will be assigned to is based on their employee ID, which we will discuss later in this article.

The teamName is a variable with value determined by another variable.

Finally, we will assign the user to an existing user group by setting AddUserToGroup as its action.

The groupName must be a valid existing group name in our AWS account.

Create the Flow

As demonstrated in the previous section, calling the three APIs sequentially is straightforward. However, managing variables carefully to avoid input errors can be challenging. Postman Flows allows us to automate these API calls efficiently. By setting up the Flow, we can execute all the API requests with just a few clicks, reducing the risk of mistakes and saving time.

Firstly, we will create a new Flow called “AWS IAM New User Registration”, as shown below.

Created a new Flow.

By default, it comes with three options to get us started. We will go with “Send a request” since we will be sending an HTTP GET request to create a user in IAM. As shown in the following screenshot, the list of variables that we defined earlier will be available. We only need to make sure that we choose the correct Environment. The values of service, region, accessKey, and secretKey will then be retrieved from the selected Environment.

Choosing the environment for the block.

Since the variable userName will be used in all three API calls, let’s create a variable block and assign it a value called “postman03”.

Created a variable and assigned a string value to it.

Next, we simply need to tell the API calling block to assign the value of userName to the field userName.

Assigning variable to the query string in the API call.

Now if we proceed to click on the “Run” button, the call should respond with HTTP 200 and the relevant info returned from AWS, as demonstrated below.

Yes, the user “postman03” is created successfully on AWS IAM.

With the user created successfully, the next step is to call the user tagging API. In this API, we have two variables, i.e. userName and teamName. Let's assume that the teamName is assigned based on whether the user's employeeId is an even or odd number. We can then design the Flow as shown in the following screenshot.

With two different employeeId, the teamNames are different too.

As shown in the Log, when we assign an even number to the user postman06, the team name assigned to it is “Team A”. However, when we assign an odd number to another user postman07, its team name is “Team B”.

Finally, we can complete the Flow with the third API call as shown below.

The groupName variable is introduced for the third API call.

Now, we can visit AWS Console to verify that the new user postman09 is indeed assigned to the testing-user-group.

The new user is assigned to the desired user group.

The Flow above can only create one new user per execution. By using the Repeat block, which takes a numeric input N and iterates over it from 0 to N - 1, we can create multiple users in a single run, as shown in the following updated Flow.

This flow will create 5 users in one single execution.

Variable Limitation in Flows

Could we save the response in a variable so that we can reuse its value in another Flow? We might guess this is possible with variables in Postman.

Postman supports variables at different scopes, in order from broadest to narrowest, these scopes are: global, collection, environment, data, and local.

The variables with the broadest scope in Postman are the global variables, which are available throughout a workspace. So what we can do is store the response in a global variable.

To do so, we need to update our request in the Collection. One of the snippets provided is actually about setting a global variable, so we can use it. For example, we add it to the post-response script of “Create IAM User” request, as shown in the screenshot below.

Postman provides snippets to help us quickly generate boilerplate code for common tasks, including setting global variables.

Let’s change the boilerplate code to be as follows.

pm.globals.set("my_global", pm.response.code);

Now, if we send a “Create IAM User” request, we will notice that a new global variable called my_global has been created, as demonstrated below.

The response code received is 400 and thus my_global is 400.

However, now if we run our Flow, we will realise that my_global is not being updated even though the response code received is 200, as illustrated in the following screenshot.

Our global variable is not updated by Flow.

Firstly, we need to understand that nothing is wrong with environments or variables. The reason it did not work is that variables work differently in Flows from how they work in the Collection Runner or the Request tab, as explained by Saswat Das, the creator of Postman Flows.

According to Saswat, global variables and environment variables are now treated as read-only values in the Flows, and any updates made to them through script are not respected.

So the answer to our earlier question, whether we can share variables across different Flows, is simply a big no, by design, at least for now.

Do It with CLI: Newman

Is there any alternative way that we can use instead of relying on the GUI of Postman?

Another teammate of mine from the QA team recently also showed how Newman, a command-line Collection Runner for Postman, can help.

A command-line Collection Runner is basically a tool that allows users to execute API requests defined in Postman collections directly from the command line. So, could we use Newman to do what we have done above with Postman Flows as well?

Before we can use Newman, we first need to export our Collection as well as the corresponding Environment. To do so, firstly, we click on the three dots (…) next to the collection name and select Export. Then, we choose the format (usually Collection v2.1 is recommended) and click Export. Secondly, we proceed to click on the three dots (…) next to the environment name and select Export as well.

Once we have the two exported files in the same folder, for example a folder named "experiment.postman.newman", we can run the following command.

$ newman run aws_iam.postman_collection.json -e aws_iam.postman_environment.json --env-var "userName=postman14462" --env-var "teamName=teamA" --env-var "groupName=testing-user-group"

While Newman and Postman Flows can both be used to automate API requests, they are tailored for different use cases: Newman is better suited for automated testing, integration into CI/CD pipelines, and command-line execution. Postman Flows, on the other hand, is ideal for visually designing the workflows and interactions between API calls.

References