I'm Chun Lin, a software developer from a small village in Malaysia, currently based in Singapore. I specialise in building cloud-first web applications and APIs.
Recently, while migrating our project from .NET 6 to .NET 8, my teammate Jeremy Chan uncovered an undocumented change in model binding behaviour that seems to have been introduced in .NET 7. This change is not clearly explained in the official .NET documentation, so it is something developers can easily overlook.
To illustrate the issue, let’s begin with a simple Web API project and explore a straightforward controller method that highlights the change.
[ApiController]
[Route("[controller]")]
public class FooController : ControllerBase
{
    [HttpGet]
    public IActionResult Get([FromQuery] string value = "Hello")
    {
        Console.WriteLine($"Value is {value}");

        return new JsonResult(null) { StatusCode = StatusCodes.Status200OK };
    }
}
Then we assume that we have nullable enabled in both .NET 6 and .NET 8 projects.
In .NET 6, when we call the endpoint with /foo?value=, we shall receive the following error.
{ "type": "https://tools.ietf.org/html/rfc7231#section-6.5.1", "title": "One or more validation errors occurred.", "status": 400, "traceId": "00-5bc66c755994b2bba7c9d2337c1e5bc4-e116fa61d942199b-00", "errors": { "value": [ "The value field is required." ] } }
However, if we change the method as follows, the error will no longer occur.
public IActionResult Get([FromQuery] string? value)
{
    if (value is null)
        Console.WriteLine("Value is null!!!");
    else
        Console.WriteLine($"Value is {value}");

    return new JsonResult(null) { StatusCode = StatusCodes.Status200OK };
}
The log when calling the endpoint with /foo?value= will then be “Value is null!!!”.
Hence, we know that a query string parameter without a value is interpreted as null. That is why a validation error occurs when value is not nullable.
Thus, in order to make the endpoint work in .NET 6, we need to change it as follows to make value optional, so that value is no longer treated as a required field.
public IActionResult Get([FromQuery] string? value = "Hello")
Now, if we call the endpoint with /foo?value=, we shall see the log “Value is Hello” printed.
Situation in .NET 8 (and .NET 7)
Then how about in .NET 8 with the same original setup, i.e. as shown below.
public IActionResult Get([FromQuery] string value = "Hello")
In .NET 8, when we call the endpoint with /foo?value=, we shall see the log “Value is Hello” printed.
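For completeness, a minimal Program.cs that hosts the controller above could look like the sketch below. This is only a sketch of the usual minimal-hosting boilerplate; the exact setup of your project may differ.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Register controller support so FooController is discovered.
builder.Services.AddControllers();

var app = builder.Build();

// Map attribute-routed controllers such as FooController.
app.MapControllers();

app.Run();
```

With this in place in both a .NET 6 project and a .NET 8 project, calling GET /foo?value= is enough to observe the different binding behaviour.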
KOSD, or Kopi-O Siew Dai, is a type of Singapore coffee that I enjoy. It is basically a cup of coffee with a little bit of sugar. This series is meant to blog about technical knowledge that I gained while having a small cup of Kopi-O Siew Dai.
Quick note: I received a free license for NDepend to try it out and share my experience. All opinions in this blog post are my own.
From O2DES.Net to Ea
In 2019, I had the honour of working closely with the team behind O2DES.NET during my time at the C4NGP research centre in NUS, where I spent around two and a half years. Since I left the team in 2022, O2DES.NET has not been actively updated on its public GitHub repository, and it still targets .NET Standard 2.1.
While .NET Standard 2.1 is not as old as the .NET Framework, it is considered somewhat outdated compared to the latest .NET versions. As Immo Landwerth explained in his article “The Future of .NET Standard”, .NET Standard has been largely superseded by .NET 5 and later versions, which unify the various platforms into a single runtime. Hence, moving to .NET 8 is a forward-looking decision that aligns with current and future software development trends.
Immo Landwerth, program manager on the .NET Framework team at Microsoft, talked about .NET Standard 2.0 back in 2016. (Image Credit: dotnet – YouTube Channel)
Hence, in this article, I will walk you through the process of migrating O2DES.NET from targeting .NET Standard 2.1 to supporting .NET 8. To prevent any confusion, I’ve renamed the project to ‘Ea’ because I am no longer the active developer of O2DES.NET. Throughout this article, ‘Ea’ will refer to the version of the project updated to .NET 8.
In this migration journey, I will be relying on NDepend, a static code analysis tool for .NET developers.
Show Me the Code!
The complete source code of my project after migrating O2DES.NET to target .NET 8 can be found on GitHub at https://github.com/gcl-team/Ea.
About NDepend: Why Do We Need Static Code Analysis?
Why do we need NDepend, a static code analysis tool?
Static code analysis is a way of automatically checking our code for potential issues without actually running our apps. Think of it like a spell-checker, but for programming, scanning our codebase to find bugs, performance issues, and security vulnerabilities early in the development process.
During the migration of an older library, such as moving O2DES.NET from .NET Standard 2.1 to .NET 8, the challenges can add up. We can expect to run into outdated code patterns, performance bottlenecks, or even compatibility issues.
Some NuGet references of O2DES.NET on GitHub are outdated too.
NDepend is designed to help with this by performing a deep static analysis of the entire codebase. It gives us detailed reports on code quality, shows where our dependencies are, and highlights areas that need attention. We can then focus on modernising the code with confidence, knowing that we are not likely introducing new bugs or performance issues as we are updating the codebase.
NDepend also helps enforce good coding practices by pointing out issues like overly complex methods, dead code, or potential security vulnerabilities. With features like code metrics, dependency maps, and rule enforcement, it acts as a guide to help us write better, more maintainable code.
Bringing Down Debt from 6.22% to 0.35%
One of the standout features of NDepend is its comprehensive dashboard, which I heavily rely on to get an overview of the entire O2DES.NET codebase.
Right after targeting the O2DES.NET library to .NET 8, a lot of issues surfaced.
From code quality metrics to technical debt, the dashboard presents critical insights in a visual and easy-to-understand format. Having all this information in one place is indeed invaluable to us during the migration project.
We can configure the SQALE Debt Ratio and Debt Rating.
In the book “The SQALE Method for Managing Technical Debt” by Jean-Louis Letouzey, SQALE stands for Software Quality Assessment based on Lifecycle Expectations. SQALE is a method used to assess and manage technical debt in software projects. In the context of NDepend, the SQALE method is used to calculate the Debt Ratio and Debt Rating:
Debt Ratio: The percentage of effort needed to fix the technical debt compared to rewriting the code from scratch.
Debt Rating: A letter-based rating (A to E) derived from the Debt Ratio to give a quick overview of the severity of technical debt.
As shown in one of the earlier screenshots, Ea has a Debt Ratio of 6.22% and a B rating. This means that its technical debt is considered moderate and manageable. Nevertheless, it is a signal that we should start addressing the identified issues before they accumulate.
After just two weeks of code cleanup, we successfully reduced Ea’s Debt Ratio from 6.22% to an impressive 0.35%, elevating its rating to an A. This significant improvement not only enhances the overall quality of the codebase but also positions Ea for better maintainability.
The most recent analysis shows that the Debt Ratio of Ea is down to just 0.35%.
Issues and Trends
In Visual Studio, NDepend also provides an interactive UI that indicates the number of critical rules violated and critical issues to solve. Unlike most static code analysis tools, which show an overwhelming number of issues, NDepend has the concept of a baseline.
When we first set up an NDepend project, the very first analysis of our code becomes the “baseline.” This baseline serves as a starting point, capturing the current state of our code. As we continue to work on the project, future analyses will be compared against this baseline. The idea is to track how our code changes over time so that we can focus on knowing whether we are improving or introducing more issues to the codebase while we are changing it.
At some point during the code change, we fixed 31 “High” issues (shown in green) while introducing 42 new “High” issues (shown in red).
As shown in the screenshot above, the new issues added since the baseline should be our priority to fix, to make sure that newly written and refactored code remains clean.
In fact, while fixing the issues, I get to learn from the NDepend rules. When we click on the numbers, we will be shown the corresponding issues. Clicking on each issue then shows us its detailed information. For example, as shown in the screenshot below, when we click on one of the green numbers, it shows us a list of the issues that we have fixed.
As indicated, the issue is one which has been fixed since the baseline.
When we click on the red numbers, as shown in the following screenshot, we will see the new issues that we need to fix. The following example shows how the original O2DES.NET has some methods declared with unnecessarily high visibility.
This is an issue that has been newly added since the baseline.
By default, the dashboard also comes with some helpful trend charts. These charts give us a visual overview of how our codebase is evolving over time.
We have made significant progress in Ea library development over the past half month.
For those new to static code analysis, think of these charts as the “health check” of the project. During the migration, they help us track important metrics, such as code coverage, issues, and technical debt, and show how they change with each analysis.
Code Dependency Graphs
NDepend offers a Dependency Graph. It is used to visually represent the relationships between different components such as namespaces and classes within our codebase. The graph helps us understand how tightly coupled our code is and how different parts of our codebase depend on each other.
When refactoring Ea during the migration, we rely on the Dependency Graph to show us visually how the different parts of the codebase are connected. We use the insight it provides to plan how to split components, which in turn makes the code easier to manage.
A dependency diagram made of all classes in the Ea project.
As shown in the diagram above, we can see a graph of some entangled classes connected with red bi-directional arrows. This is because, in the original O2DES.NET library, some classes have circular dependencies. These make parts of the code heavily reliant on each other, reducing modularity and making the code harder to unit test independently.
To investigate the classes further, we can double-click the edge between the two classes. Doing so generates a graph of the methods and fields involved in the dependency between them, as shown in the screenshot below.
The coupling graph between two classes.
This coupling graph is a powerful tool for us as it offers detailed insights into how the two classes interact. This level of detail allows us to focus on the exact code causing the coupling, making it easier to assess whether the dependency is necessary or can be refactored. For instance, if multiple methods are too intertwined, it might be time to extract common logic into a new class or interface.
In addition, the Dependency Matrix is another way to visualise the dependencies between namespaces, classes, or methods. A number in a cell at the intersection of two elements indicates how many times the element in the row depends on the element in the column. This gives us an overview of the dependencies within our codebase.
The Dependency Matrix.
In the Dependency Matrix above, we should first look for cells with large numbers, because a large number indicates that the two elements are highly dependent on each other. We should review those elements to understand why there is so much interaction and to make sure they are not tightly coupled.
If there is a cycle in the codebase, there will be a red square shown on the Dependency Matrix. We then can refactor by breaking the cycle, possibly by introducing new interfaces or decoupling responsibilities between the methods.
Code Metrics View
In the Code Metrics View, each rectangle represents a code element. The area of a rectangle is proportional to a metric, such as the number of lines of code (LOC) or the cyclomatic complexity (CC), of the corresponding method, field, type, namespace, or assembly.
This treemap shows the number of lines of code (LOC) of the methods in our project.
During the migration, the treemap format enables us to navigate our codebase and prioritise areas that require refactoring by spotting the methods that are too big and too complex. In addition, to help us quickly identify problem areas, NDepend uses colour coding in the treemap. For example, red may indicate high complexity or large size, while green might indicate simpler, more maintainable code.
The treemap is interactive. Right-clicking on a rectangle provides options such as opening the source code declaration of the selected element, allowing us to navigate directly to the method.
Right-clicking on the rectangles will show the available actions to perform.
Integrating with GitHub Actions
NDepend integrates well with several CI/CD pipelines, making it a valuable tool for maintaining code quality throughout the development lifecycle. It can automatically analyse our code after each build. This ensures that every change in our codebase adheres to the defined quality standards before it is merged to the main branch.
NDepend comes with Quality Gates that enforce standards, such as having no unfixed critical issues. If the code fails to meet the required thresholds, the build can be made to fail in the pipelines.
In NDepend, Quality Gates are predefined sets of code quality criteria that our project must meet before it is considered acceptable for deployment. They serve as automated checkpoints to help ensure that our code maintains a certain standard of quality, reducing technical debt and promoting maintainability.
One of our builds failed because there was code violating a critical rule in our codebase.
As shown in the screenshot above, NDepend provides detailed reports on issues and violations after each build. We can also download the detailed report from the CI servers, such as GitHub Actions. These reports help us quickly identify where issues exist in our code.
NDepend report of the build can be found in the Artifacts of the pipeline.
The NDepend report is divided into seven sections, each providing detailed insights into various aspects of the codebase:
Overview: It gives a high-level view of the overall code quality and metrics, similar to what is displayed in the NDepend Dashboard within Visual Studio.
Issues: It lists the issues found and the associated debt, file by file.
Projects: Similar to the Issues section, but it focuses on projects instead of individual files. It displays the total issues and associated debt at the project level.
Rules: This section highlights the violated rules, showing the issues and debt in terms of the rules that have been broken. It’s another way to assess code quality by focusing on adherence to coding standards.
Quality Gates: This section mirrors the Quality Gates you might have seen earlier in the CI/CD pipelines, such as in GitHub Actions.
Trend: The Trend section provides a visualisation of trends over time, similar to the trend charts found in the NDepend Dashboard in Visual Studio.
Logs: This section contains the logs generated during NDepend analysis.
Number of unresolved issues and debt of the files in our project.
As described in the NDepend documentation, it has complete support for Azure DevOps, meaning it can be seamlessly integrated into CI/CD pipelines without much manual setup. We can thus easily configure NDepend to run as part of our Azure Pipelines, generating code quality reports after each build.
For our Ea project, since it is an open-source project hosted on GitHub, we can also integrate NDepend with our GitHub Actions instead.
In conclusion, NDepend has proven to be an invaluable tool in our journey to modernise and maintain the Ea library.
By offering comprehensive static code analysis, insightful metrics, and seamless integration with CI/CD pipelines like GitHub Actions, it empowers us to catch issues early, reduce technical debt, and ensure a high standard of code quality.
NDepend provides the guidance and clarity needed to ensure our code remains clean, efficient, and maintainable. For any .NET individual or development team serious about improving code quality, NDepend is definitely a must-have in the toolkit.
In a .NET Web API project, we sometimes have to perform data processing tasks in the background, such as processing queued jobs, updating records, or sending notifications. To significantly reduce the overall processing time, such a project is often designed to perform database operations concurrently with Entity Framework (EF) Core in a BackgroundService when the application starts.
In many ASP.NET Core applications, DbContext is registered with the Dependency Injection (DI) container, typically with a scoped lifetime. For example, in Program.cs, we can configure MyDbContext to connect to a MySQL database.
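The registration might look like the following sketch. Here I assume the Pomelo.EntityFrameworkCore.MySql provider and a connection string named “Default”; adjust both to your own setup.

```csharp
var builder = WebApplication.CreateBuilder(args);

// Assumption: Pomelo.EntityFrameworkCore.MySql is the MySQL provider in use,
// and the connection string is stored under "ConnectionStrings:Default".
var connectionString = builder.Configuration.GetConnectionString("Default");

// Scoped lifetime is the default for AddDbContext.
builder.Services.AddDbContext<MyDbContext>(options =>
    options.UseMySql(connectionString, ServerVersion.AutoDetect(connectionString)));
```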
Next, we have a scoped service defined as follows. It retrieves a set of relevant records from the MyTable table in the database.
public class MyService : IMyService
{
    private readonly MyDbContext _myDbContext;

    public MyService(MyDbContext myDbContext)
    {
        _myDbContext = myDbContext;
    }

    public async Task RunAsync(CancellationToken cToken)
    {
        var result = await _myDbContext.MyTable
            .Where(...)
            .ToListAsync();

        ...
    }
}
Next, we consume this MyService in a background task. In Program.cs, we register a background service called MyProcessor with the DI container as a hosted service; the ExecuteAsync method of MyProcessor is shown below.
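The registration itself is short; this sketch assumes the typical minimal-hosting Program.cs with a `builder` variable:

```csharp
// Register MyProcessor so the host starts it as a background service.
builder.Services.AddHostedService<MyProcessor>();

// The scoped service it consumes is registered as usual.
builder.Services.AddScoped<IMyService, MyService>();
```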
protected override async Task ExecuteAsync(CancellationToken cToken)
{
    for (var i = 0; i < numberOfProcessors; i++)
    {
        await using var scope = _services.CreateAsyncScope();
        var myService = scope.ServiceProvider.GetRequiredService<IMyService>();

        var workTask = myService.RunAsync(cToken);
        _processorWorkTasks.Add(workTask);
    }

    await Task.WhenAll(_processorWorkTasks);
}
As shown above, we are calling myService.RunAsync, an async method, without awaiting it. Hence, the loop continues running without waiting for myService.RunAsync to complete. In other words, we are treating the async method myService.RunAsync as a fire-and-forget operation inside the loop, which can make it seem like the loop is executing tasks in parallel.
After the loop, we will be using Task.WhenAll to await all those tasks, allowing us to take advantage of concurrency while still waiting for all tasks to complete.
Problem
The code above gives us an error like the one below.
System.ObjectDisposedException: Cannot access a disposed object. Object name: ‘MySqlConnection’.
System.ObjectDisposedException: Cannot access a disposed context instance. A common cause of this error is disposing a context instance that was resolved from dependency injection and then later trying to use the same context instance elsewhere in your application. This may occur if you are calling ‘Dispose’ on the context instance, or wrapping it in a using statement. If you are using dependency injection, you should let the dependency injection container take care of disposing context instances.
If we use AddDbContextPool instead of AddDbContext, the following error will also occur.
System.InvalidOperationException: A second operation was started on this context instance before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
The errors are caused by two related problems: each scope, together with the DbContext it owns, is disposed at the end of the loop iteration while its task may still be running, and EF Core does not support multiple parallel operations on the same DbContext instance. Hence, we need to solve the problem by giving each task its own DbContext.
Solution 1: Scoped Service
This is a solution suggested by my teammate, Yimin. This approach focuses on changing the background service, MyProcessor.
Since DbContext is registered as a scoped service, within the lifecycle of a web request, the DbContext instance is unique to that request. However, in background tasks, there is no “web request” scope, so we need to create our own scope to obtain a fresh DbContext instance.
Since our BackgroundService implementation above already has access to IServiceProvider, which is used to create scopes and resolve services, we can change it as follows to create multiple DbContexts.
public class MyProcessor : BackgroundService
{
    private readonly IServiceProvider _services;
    private readonly IList<Task> _processorWorkTasks;

    public MyProcessor(IServiceProvider services)
    {
        _services = services;
        _processorWorkTasks = new List<Task>();
    }

    protected override async Task ExecuteAsync(CancellationToken cToken)
    {
        for (var i = 0; i < numberOfProcessors; i++)
        {
            _processorWorkTasks.Add(
                PerformDatabaseOperationAsync(cToken));
        }

        await Task.WhenAll(_processorWorkTasks);
    }

    private async Task PerformDatabaseOperationAsync(CancellationToken cToken)
    {
        using var scope = _services.CreateScope();
        var myService = scope.ServiceProvider.GetRequiredService<IMyService>();
        await myService.RunAsync(cToken);
    }
}
Another important change is to await the myService.RunAsync call. If we do not await it, we risk leaving the task incomplete, and the scope, together with the DbContext it owns, may be disposed before the task finishes.
In addition, if we do not await the call, we may also end up with multiple threads trying to use the same DbContext instance concurrently, which could result in exceptions like the ones we discussed earlier.
Solution 2: DbContextFactory
I proposed another solution to my teammate that also makes it easy to create multiple DbContexts. My approach is to update MyService instead of the background service.
Instead of injecting DbContext into our services, we can inject IDbContextFactory<MyDbContext> and then use it to create multiple DbContexts, which allows us to execute queries in parallel.
Hence, the service MyService can be updated to be as follows.
public class MyService : IMyService
{
    private readonly IDbContextFactory<MyDbContext> _contextFactory;

    public MyService(IDbContextFactory<MyDbContext> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public async Task RunAsync(CancellationToken cToken)
    {
        using (var context = _contextFactory.CreateDbContext())
        {
            var result = await context.MyTable
                .Where(...)
                .ToListAsync();

            ...
        }
    }
}
This also means that we need to update AddDbContext to AddDbContextFactory in Program.cs to register the factory.
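The updated registration might look like this sketch (again assuming the Pomelo MySQL provider and a connection string held in a `connectionString` variable, both of which are assumptions about the setup):

```csharp
// Register a factory instead of a scoped DbContext so that each caller
// can create its own short-lived, independent DbContext instance.
builder.Services.AddDbContextFactory<MyDbContext>(options =>
    options.UseMySql(connectionString, ServerVersion.AutoDetect(connectionString)));
```

By default, AddDbContextFactory registers the factory as a singleton, which is safe to inject into singletons and background services alike.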
Since each DbContext instance created by the factory is independent, we avoid the concurrency issues associated with using a single DbContext instance across multiple threads. The implementation above also reduces the risk of resource leaks and other lifecycle issues.
Wrap-Up
In this article, we have seen two different approaches to handle concurrency effectively in EF Core by ensuring that each database operation uses a separate DbContext instance. This prevents threading issues, such as the InvalidOperationException related to multiple operations being started on the same DbContext.
The first solution, where we create a new scope with CreateAsyncScope, is a bit more involved, but it is appropriate if we prefer to manage multiple scoped services. However, if we are looking for a simpler way to manage isolated DbContext instances, AddDbContextFactory is the better choice.
The exam consists of 10 sections to measure different Azure skills, and I have passed 8 of them, which are:
Explore Azure Functions;
Develop Azure Functions;
Implement Azure Key Vault (full score!);
Implement Azure App Configuration;
Monitor App Performance;
Manage Container Images in Azure Container Registry (full score!);
Work with Azure Cosmos DB;
Consume an Azure Cosmos DB for NoSQL change feed using the SDK.
I didn’t pass the section “Implement Azure Container Apps” and scored 0 in the “Run Container Images in Azure Container Instances” section. These areas remind me that there is always room for improvement and growth.
The certificate is signed by the Microsoft CEO!
The Kaizen Journey
Since 2019, I have not been actively using Azure at work; I now work with AWS instead. Even so, I keep learning Azure on my own and growing my cloud computing skills. I share my AWS knowledge with the community, but my heart is still with Azure too. I want to be good at both AWS and Azure!
As a developer working primarily with AWS, taking the Azure certification may seem unconventional, but it is a wise move. Not only does the Azure certification allow me to demonstrate my commitment to continuous learning and adaptability, but expertise in multiple cloud platforms also makes a developer a more attractive candidate in the current job market.
I hope my journey inspires you to pursue your own path of learning and growth. As Riza Marhaban, my senior who is also a Senior Associate Director (IT) at NUS, told me, certifications are not just about achieving a credential, but about the journey of self-improvement and the positive impact it can have on those around us.
Riza shared with me the Kaizen philosophy. The Kaizen philosophy teaches us to embrace challenges, learn from failures, and strive for excellence. Hence, I apply this philosophy to my own journey, embracing each step as an opportunity to learn and grow.
Wrap-Up
Renewing my certification has reminded me of the importance of continuous learning. I hope my story inspires you to stay humble, stay hungry, and always strive for excellence.