In today’s interconnected world, APIs are the backbone of modern apps. Protecting these APIs and ensuring only authorised users access sensitive data is now more crucial than ever. While many authentication and authorisation methods exist, OAuth2 Introspection stands out as a robust and flexible approach. In this post, we will explore what OAuth2 Introspection is, why we should use it, and how to implement it in our .NET apps.
Before we dive into the technical details, let’s remind ourselves why API security is so important. Think about it: APIs often handle the most sensitive stuff. If those APIs are not well protected, we are basically opening the door to some nasty consequences. Data breaches? Yep. Regulatory fines (GDPR, HIPAA, you name it)? Potentially. Not to mention, losing the trust of our users. A secure API shows that we value their data and are committed to keeping it safe. And, of course, it helps prevent the bad guys from exploiting vulnerabilities to steal data or cause all sorts of trouble.
The most common method of securing APIs is using access tokens as proof of authorization. These tokens, typically in the form of JWTs (JSON Web Tokens), are passed by the client to the API with each request. The API then needs a way to validate these tokens to verify that they are legitimate and haven’t been tampered with. This is where OAuth2 Introspection comes in.
OAuth2 Introspection
OAuth2 Introspection is a mechanism for validating bearer tokens in an OAuth2 environment. We can think of it as a secure lookup service for our access tokens. It allows an API to query an auth server, which is also the “issuer” of the token, to determine the validity and attributes of a given token.
The workflow of an OAuth2 Introspection request.
To illustrate the process, the diagram above visualises the flow of an OAuth2 Introspection request. The Client sends the bearer token to the Web API, which then forwards it to the auth server via the introspection endpoint. The auth server validates the token and returns a JSON response, which is then processed by the Web API. Finally, the Web API grants (or denies) access to the requested resource based on the token validity.
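To make the flow more concrete, here is a rough, hand-rolled sketch of the introspection call described in RFC 7662. The endpoint, client ID, and secret are placeholders, and in the .NET implementation later we let the authentication handler perform this call for us.

using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;

async Task<bool> IsTokenActiveAsync(string accessToken)
{
    using var http = new HttpClient();

    // The introspection endpoint is protected, so the API authenticates itself
    // with its own client credentials (HTTP Basic, per RFC 7662).
    var credentials = Convert.ToBase64String(
        Encoding.UTF8.GetBytes("<Client ID>:<Client Secret>"));
    http.DefaultRequestHeaders.Authorization =
        new AuthenticationHeaderValue("Basic", credentials);

    // The token to be checked is sent as form data.
    var response = await http.PostAsync(
        "<Auth server base URL>/connect/introspect",
        new FormUrlEncodedContent(new Dictionary<string, string> { ["token"] = accessToken }));
    response.EnsureSuccessStatusCode();

    // The auth server replies with a JSON document; "active" tells us whether the
    // token is still valid. Other claims (sub, scope, exp, ...) are also returned.
    using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    return json.RootElement.GetProperty("active").GetBoolean();
}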
Introspection vs. Direct JWT Validation
You might be thinking, “Isn’t this just how we normally validate a JWT token?” Well, yes… and no. What is the difference, and why is there a special term “Introspection” for this?
With direct JWT validation, we essentially check the token ourselves, verifying its signature, expiry, and sometimes audience. Introspection takes a different approach because it involves asking the auth server about the token status. This leads to differences in the pros and cons, which we will explore next.
With OAuth2 Introspection, we gain several key advantages. First, it works with various token formats (JWTs, opaque tokens, etc.) and auth server implementations. Furthermore, because the validation logic resides on the auth server, we get consistency and easier management of token revocation and other security policies. Most importantly, OAuth2 Introspection makes token revocation straightforward (e.g., if a user changes their password or a client is compromised). In contrast, revoking a JWT after it has been issued is significantly more complex.
.NET Implementation
Now, let’s see how to implement OAuth2 Introspection in a .NET Web API using the AddOAuth2Introspection authentication handler, which comes from the IdentityModel.AspNetCore.OAuth2Introspection NuGet package.
The core configuration lives in our Program.cs file, where we set up the authentication and authorisation services.
// ... (previous code for building the app)
builder.Services.AddAuthentication("Bearer")
    .AddOAuth2Introspection("Bearer", options =>
    {
        options.IntrospectionEndpoint = "<Auth server base URL>/connect/introspect";
        options.ClientId = "<Client ID>";
        options.ClientSecret = "<Client Secret>";

        options.DiscoveryPolicy = new IdentityModel.Client.DiscoveryPolicy
        {
            RequireHttps = false,
        };
    });
builder.Services.AddAuthorization();
// ... (rest of the Program.cs)
The code above configures the authentication service to use the “Bearer” scheme, the standard scheme name for bearer tokens. AddOAuth2Introspection(…) is where the magic happens: it registers the OAuth2 Introspection authentication handler and points it at IntrospectionEndpoint, the URL our API will call to validate each token.
Usually, RequireHttps needs to be true in production. However, in situations where the API and the auth server are both deployed to the same Elastic Container Service (ECS) cluster and communicate internally within the AWS network, we can set it to false. Because the Application Load Balancer (ALB) handles TLS/SSL termination and the internal communication between services happens over HTTP, we can safely disable RequireHttps in the DiscoveryPolicy for the introspection endpoint within the ECS cluster. This simplifies the setup without compromising security, as traffic from the outside world to our ALB is already secured by HTTPS.
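One more thing worth calling out: the [Authorize] attribute we use in the next step only takes effect if the authentication and authorisation middleware are in the request pipeline, which typically sits in the elided part of Program.cs above.

// After var app = builder.Build();
app.UseAuthentication();
app.UseAuthorization();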
Finally, to secure our API endpoints and require authentication, we can simply use the [Authorize] attribute, as demonstrated below.
[ApiController]
[Route("[controller]")]
[Authorize]
public class MyController : ControllerBase
{
    [HttpGet("GetData")]
    public IActionResult GetData()
    {
        ...
    }
}
Wrap-Up
OAuth2 Introspection is a powerful and flexible approach for securing our APIs, providing a centralised way to validate bearer tokens and manage access. By understanding the process, implementing it correctly, and following best practices, we can significantly improve the security posture of our apps and protect our valuable data.
In the previous article, we discussed how to build a custom monitoring pipeline with Grafana running on Amazon ECS to receive metrics and logs, two of the observability pillars, sent from Orchard Core on Amazon ECS. Today, we will talk about the third pillar of observability: traces.
Source Code
The CloudFormation templates and relevant C# source code discussed in this article are available on GitHub as part of the Orchard Core Basics Companion (OCBC) Project: https://github.com/gcl-team/Experiment.OrchardCore.Main.
Lisa Jung, senior developer advocate at Grafana, talks about the three pillars in observability (Image Credit: Grafana Labs)
We choose Tempo because it is fully compatible with OpenTelemetry, the open standard for collecting distributed traces, which ensures flexibility and vendor neutrality. In addition, Tempo seamlessly integrates with Grafana, allowing us to visualise traces alongside metrics and logs in a single dashboard.
Finally, being a Grafana Labs project means Tempo has strong community backing and continuous development.
About OpenTelemetry
With a solid understanding of why Tempo is our tracing backend of choice, let’s now dive deeper into OpenTelemetry, the open-source framework we use to instrument our Orchard Core app and generate the trace data Tempo collects.
OpenTelemetry is a Cloud Native Computing Foundation (CNCF) project and a vendor-neutral, open standard for collecting traces, metrics, and logs from our apps. This makes it an ideal choice for building a flexible observability pipeline.
OpenTelemetry provides SDKs for instrumenting apps across many programming languages, including C# via the .NET SDK, which we use for Orchard Core.
OpenTelemetry uses the standard OTLP (OpenTelemetry Protocol) to send telemetry data to any compatible backend, such as Tempo, allowing seamless integration and interoperability.
The http_listen_port setting in the Tempo configuration allows us to set the HTTP port (3200) for Tempo’s internal web server. This port is used for health checks and Prometheus metrics.
After that, we configure where Tempo listens for incoming trace data. In the configuration above, we enabled OTLP receivers via both gRPC and HTTP, the two protocols that OpenTelemetry SDKs and agents use to send data to Tempo. Here, the ports 4317 (gRPC) and 4318 (HTTP) are standard for OTLP.
Last but not least, for demonstration purposes, the configuration uses the simplest storage option, local storage, to write trace data to the EC2 instance disk under /tmp/tempo/traces. This is fine for testing or small setups, but for production we will likely want to use a service like Amazon S3.
In addition, since we are using local storage on EC2, we can easily SSH into the EC2 instance and directly inspect whether traces are being written. This is incredibly helpful during debugging. What we need to do is to run the following command to see whether files are being generated when our Orchard Core app emits traces.
ls -R /tmp/tempo/traces
The configuration above is intentionally minimal. As our setup grows, we can explore advanced options like remote storage, multi-tenancy, or even scaling with Tempo components.
Each flushed trace block (folder with UUID) contains a data.parquet file, which holds the actual trace data.
Finally, we create a systemd unit file so that Tempo starts on boot and automatically restarts if it crashes.
cat <<EOF > /etc/systemd/system/tempo.service
[Unit]
Description=Grafana Tempo service
After=network.target
systemctl daemon-reexec
systemctl daemon-reload
systemctl enable --now tempo
This systemd service ensures that Tempo runs in the background and automatically starts up after a reboot or a crash. This setup is crucial for a resilient observability pipeline.
Did You Know: When we SSH into an EC2 instance running Amazon Linux 2023, we will be greeted by a cockatiel in ASCII art! (Image Credit: OMG! Linux)
Understanding OTLP Transport Protocols
In the previous section, we configured Tempo to receive OTLP data over both gRPC and HTTP. Both transport protocols are supported by OTLP, and each comes with its own strengths and trade-offs. Let’s break them down.
Ivy Zhuang from Google gave a presentation on gRPC and Protobuf at gRPConf 2024. (Image Credit: gRPC YouTube)
Tempo has native support for gRPC, and many OpenTelemetry SDKs default to using it. gRPC is a modern, high-performance transport protocol built on top of HTTP/2. It is the preferred option when performance is critical. gRPC also supports streaming, which makes it ideal for high-throughput scenarios where telemetry data is sent continuously.
However, gRPC is not natively supported in browsers, so it is not ideal for frontend or web-based telemetry collection unless a proxy or gateway is used. In such scenarios, we will normally choose HTTP which is browser-friendly. HTTP is a more traditional request/response protocol that works well in restricted environments.
Since we are collecting telemetry from server-side like Orchard Core running on ECS, gRPC is typically the better choice due to its performance benefits and native support in Tempo.
Please take note that gRPC requires HTTP/2, and some environments, for example IoT devices and embedded systems, might not have mature gRPC client support. In such simpler or constrained systems, OTLP over HTTP is often preferred.
gRPC allows multiplexing over a single connection using HTTP/2. Hence, in gRPC, all telemetry signals, i.e. logs, metrics, and traces, can be sent concurrently over one connection. However, with HTTP, each telemetry signal needs a separate POST request to its own endpoint as listed below to enforce clean schema boundaries, simplify implementation, and stay aligned with HTTP semantics.
Logs: /v1/logs
Metrics: /v1/metrics
Traces: /v1/traces
In HTTP, since each signal has its own POST endpoint with its own protobuf schema in the body, there is no need for the receiver to guess what is in the body.
AWS Distro for OpenTelemetry (ADOT)
Now that we have Tempo running on EC2 and understand the OTLP protocols it supports, the next step is to instrument our Orchard Core app to generate and send trace data.
The following code snippet shows what a typical direct integration with Tempo might look like in an Orchard Core app.
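This is only a minimal sketch, assuming the standard OpenTelemetry .NET packages (OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore, and OpenTelemetry.Exporter.OpenTelemetryProtocol); the service name and the Tempo host below are placeholders.

using OpenTelemetry.Exporter;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

// Program.cs fragment: export traces from the app straight to Tempo over OTLP.
// 'builder' is the WebApplicationBuilder from WebApplication.CreateBuilder(args).
builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("orchard-core"))
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation() // trace incoming HTTP requests
        .AddOtlpExporter(otlp =>
        {
            // OTLP over gRPC (the SDK default), matching Tempo's 4317 receiver.
            otlp.Endpoint = new Uri("http://<tempo-host>:4317");
            otlp.Protocol = OtlpExportProtocol.Grpc;

            // OTLP over HTTP instead: port 4318, with traces posted to /v1/traces.
            // otlp.Endpoint = new Uri("http://<tempo-host>:4318/v1/traces");
            // otlp.Protocol = OtlpExportProtocol.HttpProtobuf;
        }));

Notice that switching between gRPC and HTTP is just a matter of exporter options, matching the 4317 and 4318 receivers we configured in Tempo earlier.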
This approach works well for simple use cases during the development stage, but it comes with trade-offs worth considering. Firstly, we couple our app directly to the observability backend, reducing flexibility. Secondly, central management becomes harder as we scale to many services or environments.
ADOT is a secure, AWS-supported distribution of the OpenTelemetry project that simplifies collecting and exporting telemetry data from apps running on AWS services, such as our Orchard Core app on ECS. ADOT decouples our apps from the observability backend, provides centralised configuration, and handles telemetry collection more efficiently.
Sidecar Pattern
We can deploy the ADOT Collector in several ways, such as running it on a dedicated node or as its own ECS service to receive telemetry from multiple apps. We can also take the sidecar approach, which cleanly separates concerns: our Orchard Core app focuses on business logic, while a nearby ADOT sidecar handles telemetry collection and forwarding. This mirrors modern cloud-native patterns and gives us more flexibility down the road.
The following CloudFormation template shows how we deploy ADOT as a sidecar in ECS. The collector config is stored in AWS Systems Manager Parameter Store under /myapp/otel-collector-config and injected via the AOT_CONFIG_CONTENT environment variable. This keeps our infrastructure clean, decoupled, and secure.
Deploy an ADOT sidecar on ECS to collect observability data from Orchard Core.
There are several interesting and important details in the CloudFormation snippet above that are worth calling out. Let’s break them down one by one.
Firstly, we choose awsvpc as the NetworkMode of the ECS task. In awsvpc mode, each ECS task receives its own ENI (Elastic Network Interface), which is shared by the containers in that task, i.e. our Orchard Core container and the ADOT sidecar. This gives us network-level isolation at the task level. Since the two containers share the task’s network namespace, our Orchard Core app can reach the sidecar over the loopback interface, i.e. http://localhost:4317.
Secondly, we include a health check for the ADOT container. ECS will use this health check to restart the container if it becomes unhealthy, improving reliability without manual intervention. In November 2022, Paurush Garg from AWS added the healthcheck component with the new ADOT collector release, so we can simply specify that we will be using this healthcheck component in the configuration that we will discuss next.
Yes, the configuration! Instead of hardcoding the ADOT configuration into the task definition, we inject it securely at runtime using the AOT_CONFIG_CONTENT secret. This environment variable is designed to let us configure the ADOT collector; it overrides the config file used in the ADOT collector entrypoint command.
The SSM Parameter for the environment variable AOT_CONFIG_CONTENT.
Wrap-Up
By now, we have completed the journey of setting up Grafana Tempo on EC2, exploring how traces flow through OTLP protocols like gRPC and HTTP, and understanding why ADOT is often the better choice in production-grade observability pipelines.
With everything connected, our Orchard Core app is now able to send traces into Tempo reliably. This will give us end-to-end visibility with OpenTelemetry and AWS-native tooling.
Recently, while migrating our project from .NET 6 to .NET 8, my teammate Jeremy Chan uncovered an undocumented change in model binding behaviour that appears to have been introduced in .NET 7. This change is not clearly explained in the official .NET documentation, so it is something developers can easily overlook.
To illustrate the issue, let’s begin with a simple Web API project and explore a straightforward controller method that highlights the change.
[ApiController]
[Route("[controller]")]
public class FooController : ControllerBase
{
    [HttpGet]
    public IActionResult Get([FromQuery] string value = "Hello")
    {
        Console.WriteLine($"Value is {value}");

        return new JsonResult(null) { StatusCode = StatusCodes.Status200OK };
    }
}
We also assume that nullable reference types are enabled in both the .NET 6 and .NET 8 projects.
In .NET 6, when we call the endpoint with /foo?value=, we shall receive the following error.
{ "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-5bc66c755994b2bba7c9d2337c1e5bc4-e116fa61d942199b-00", "errors": { "value": [ "The value field is required." ] } }
However, if we change the method as follows, the error goes away.
public IActionResult Get([FromQuery] string? value)
{
    if (value is null)
        Console.WriteLine("Value is null!!!");
    else
        Console.WriteLine($"Value is {value}");

    return new JsonResult(null) { StatusCode = StatusCodes.Status200OK };
}
The log when calling the endpoint with /foo?value= will then be “Value is null!!!”.
Hence, we can see that in .NET 6 a query string parameter without a value is interpreted as null. That is why there is a validation error when value is not nullable.
Thus, to make the endpoint work in .NET 6, we need to change the signature as follows so that value is optional and no longer treated as a required field.
public IActionResult Get([FromQuery] string? value = "Hello")
Now, if we call the endpoint with /foo?value=, we shall see the log “Value is Hello” printed.
Situation in .NET 8 (and .NET 7)
Then how about .NET 8 with the same original setup, as shown below?
public IActionResult Get([FromQuery] string value = "Hello")
In .NET 8, when we call the endpoint with /foo?value=, we shall see the log “Value is Hello” printed.
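If we want the endpoint to behave the same way on both runtimes, one option (just a sketch, not something the official documentation prescribes) is to bind the parameter as nullable and apply the default ourselves.

public IActionResult Get([FromQuery] string? value)
{
    // Apply the default explicitly so .NET 6, 7, and 8 all behave the same
    // for /foo, /foo?value=, and /foo?value=abc.
    value = string.IsNullOrEmpty(value) ? "Hello" : value;

    Console.WriteLine($"Value is {value}");
    return new JsonResult(null) { StatusCode = StatusCodes.Status200OK };
}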
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
In a .NET Web API project, when we have to perform data processing tasks in the background, such as processing queued jobs, updating records, or sending notifications, a common design is to perform database operations concurrently with Entity Framework (EF) Core in a BackgroundService when the project starts, in order to significantly reduce the overall processing time.
In many ASP.NET Core applications, DbContext is registered with the Dependency Injection (DI) container, typically with a scoped lifetime. For example, in Program.cs, we can configure MyDbContext to connect to a MySQL database.
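The exact registration depends on the MySQL provider being used; the snippet below is only a sketch, assuming the Pomelo.EntityFrameworkCore.MySql provider and a connection string named "Default".

// Program.cs: register MyDbContext with the default scoped lifetime.
var connectionString = builder.Configuration.GetConnectionString("Default");

builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseMySql(connectionString, ServerVersion.AutoDetect(connectionString)));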
Next, we have a scoped service defined as follows. It retrieves a set of relevant records from the MyTable table in the database.
public class MyService : IMyService
{
    private readonly MyDbContext _myDbContext;

    public MyService(MyDbContext myDbContext)
    {
        _myDbContext = myDbContext;
    }

    public async Task RunAsync(CancellationToken cToken)
    {
        var result = await _myDbContext.MyTable
            .Where(...)
            .ToListAsync();

        ...
    }
}
Here we will be consuming this MyService in a background task. In Program.cs, we register a background service called MyProcessor with the DI container as a hosted service.
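The registration is a single call on the service collection; a minimal sketch, assuming the usual top-level Program.cs setup:

// Register MyProcessor so it starts together with the web app.
builder.Services.AddHostedService<MyProcessor>();

Inside MyProcessor, the ExecuteAsync method then kicks off the processing work as shown below.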
protected override async Task ExecuteAsync(CancellationToken cToken)
{
    for (var i = 0; i < numberOfProcessors; i++)
    {
        await using var scope = _services.CreateAsyncScope();
        var myService = scope.ServiceProvider.GetRequiredService<IMyService>();

        var workTask = myService.RunAsync(cToken);
        _processorWorkTasks.Add(workTask);
    }

    await Task.WhenAll(_processorWorkTasks);
}
As shown above, we are calling myService.RunAsync, an async method, without awaiting it. Hence, ExecuteAsync continues running without waiting for myService.RunAsync to complete. In other words, we are treating the async method myService.RunAsync as a fire-and-forget operation. This can make it seem like the loop is executing tasks in parallel.
After the loop, we will be using Task.WhenAll to await all those tasks, allowing us to take advantage of concurrency while still waiting for all tasks to complete.
Problem
The code above gives us errors like the ones below.
System.ObjectDisposedException: Cannot access a disposed object. Object name: ‘MySqlConnection’.
System.ObjectDisposedException: Cannot access a disposed context instance. A common cause of this error is disposing a context instance that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling ‘Dispose’ on the context instance, or wrapping it in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
If we are using AddDbContextPool instead of AddDbContext, the following error will also occur.
System.InvalidOperationException: A second operation was started on this context instance before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
The errors are caused by two related issues: the scope (and the DbContext it owns) created with await using inside the loop is disposed at the end of each iteration, before the fire-and-forget RunAsync call has finished, and EF Core does not support multiple parallel operations being run on the same DbContext instance. Hence, we need to solve the problem by giving each concurrent operation its own DbContext.
Solution 1: Scoped Service
This is a solution suggested by my teammate, Yimin. This approach focuses on changing the background service, MyProcessor.
Since DbContext is registered as a scoped service, within the lifecycle of a web request, the DbContext instance is unique to that request. However, in background tasks, there is no “web request” scope, so we need to create our own scope to obtain a fresh DbContext instance.
Since our BackgroundService implementation above already has access to IServiceProvider, which is used to create scopes and resolve services, we can change it as follows to create multiple DbContexts.
public class MyProcessor : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly IList<Task> _processorWorkTasks;

    public MyProcessor(IServiceProvider services)
    {
        _services = services;
        _processorWorkTasks = new List<Task>();
    }

    protected override async Task ExecuteAsync(CancellationToken cToken)
    {
        for (var i = 0; i < numberOfProcessors; i++)
        {
            _processorWorkTasks.Add(
                PerformDatabaseOperationAsync(cToken));
        }

        await Task.WhenAll(_processorWorkTasks);
    }

    private async Task PerformDatabaseOperationAsync(CancellationToken cToken)
    {
        using var scope = _services.CreateScope();
        var myService = scope.ServiceProvider.GetRequiredService<IMyService>();
        await myService.RunAsync(cToken);
    }
}
Another important change is that we now await the myService.RunAsync call. If we do not await it, we risk leaving the task incomplete, because the scope, and the DbContext it owns, is disposed before the operation finishes.
In addition, if we do not await the action, we will also end up with multiple threads trying to use the same DbContext instance concurrently, which could result in exceptions like the one we discussed earlier.
Solution 2: DbContextFactory
I proposed another solution to my teammate, one that can also easily create multiple DbContexts. My approach is to update MyService instead of the background service.
Instead of injecting a DbContext into our services, we can inject an IDbContextFactory and use it to create multiple DbContexts, which allows us to execute queries in parallel.
Hence, the service MyService can be updated to be as follows.
public class MyService : IMyService
{
    private readonly IDbContextFactory<MyDbContext> _contextFactory;

    public MyService(IDbContextFactory<MyDbContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task RunAsync(CancellationToken cToken)
    {
        using (var context = _contextFactory.CreateDbContext())
        {
            var result = await context.MyTable
                .Where(...)
                .ToListAsync(cToken);

            ...
        }
    }
}
This also means that we need to update AddDbContext to AddDbContextFactory in Program.cs to register the factory.
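The registration below is again only a sketch, assuming the same Pomelo MySQL provider and connection string as before.

// Program.cs: register the factory instead of the scoped DbContext.
builder.Services.AddDbContextFactory<MyDbContext>(options =>
    options.UseMySql(connectionString, ServerVersion.AutoDetect(connectionString)));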
Since each DbContext instance created by the factory is independent, we avoid the concurrency issues associated with using a single DbContext instance across multiple threads. The implementation above also reduces the risk of resource leaks and other lifecycle issues.
Wrap-Up
In this article, we have seen two different approaches to handle concurrency effectively in EF Core by ensuring that each database operation uses a separate DbContext instance. This prevents threading issues, such as the InvalidOperationException related to multiple operations being started on the same DbContext.
The first solution, where we create a new scope for each operation, is a bit more involved, but it is appropriate if we prefer to manage multiple scoped services ourselves. However, if we are looking for a simple way to manage isolated DbContext instances, AddDbContextFactory is the better choice.
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.