.NET Blazor & scoped dependency-injected DB context - C#

Problem:
We have a Blazor Server app with a DevExpress grid component showing data directly from the DB. We want all the operations - filtering, grouping, etc. - to take place on the DB layer, which is why we don't fetch the data through a service layer; instead we hook directly onto the DB context.
Let’s say we have 2 users looking at the same grid, each in their own browser (which implicitly means 2 different SignalR connections). User 1 changes the state, but user 2 isn’t aware of that, even after refreshing the grid. Only when user 2 refreshes the page (F5) are the differences shown.
Explanation:
DB contexts are registered as “scoped” in DI by default. In a classic HTTP request-response architecture, that means that for the duration of a request, one and the same DB context instance is provided by DI to everyone who requests it. In the example above, the data would be refreshed, because each request instantiates a new DB context.
In a Blazor app, things are different. The DB context in our case is not refreshed with each web request. Actually, the term ‘request’ doesn’t even exist in SignalR (WebSocket) and WebAssembly. So, what happens in our example? As long as the SignalR connection is alive, user 2 has the same instance of the DB context. If another user changes state in their own instance of the context, these changes aren’t propagated to other context instances. Roughly, this means that a ‘scoped’ DB context actually becomes a ‘singleton’ (well, almost: a singleton in the scope of a user / session / SignalR connection).
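A minimal sketch of the registration in question, assuming a hypothetical `AppDbContext` and connection string name; in Blazor Server the “scope” lives as long as the circuit, not one request:

```csharp
// Program.cs - AddDbContext registers the context with a scoped lifetime.
// In a classic web app the scope is one HTTP request; in Blazor Server
// the scope is the SignalR circuit, so the same instance lives as long
// as the user's connection does.
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));
```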
Links:
https://learn.microsoft.com/en-us/aspnet/core/blazor/fundamentals/dependency-injection
https://learn.microsoft.com/en-us/aspnet/core/blazor/blazor-server-ef-core
https://www.thinktecture.com/blazor/dependency-injection-scopes-in-blazor/
Thoughts:
Our service layer is stateless, so it isn’t a problem. DB contexts are problematic.
Blazor doesn’t have a concept of a ‘scoped’ service
‘Scoped’ is actually a singleton in the scope of a single connection
‘Singleton’ provides the same service for all the connections
There is an approximation of a scoped service, scoped to the ‘component’ level
Each razor component will use the same instances in its lifetime
But this lifetime can be long lived nonetheless
Another, similar approximation
If truth be told, things are pretty similar to the classic request-response architecture: if 2 requests happened at exactly the same time, there would be 2 DB context instances with different states. This can surely happen, but the probability is low, so it’s not such a problem
Having a ‘transient’ DB context also isn’t OK
we want our API (service layer) methods to be a “unit of work” (1 API - 1 DB transaction)
one API can call multiple BL functions, each in a separate ServiceBL class - those should have the same DB context instances
Solutions:
Scoped is already treated almost the same as a singleton. What if we would register DB contexts as singletons?
Sounds like a bad idea - everybody would use one long-lived instance, it would/could become a bottleneck, and what about thread safety?
"EF Core does not support multiple parallel operations being run on the same context instance"
‘Page refresh‘ in the right places can be a substitute for ‘scopes’
await JSRuntime.InvokeVoidAsync("location.reload");
NavigationManager.NavigateTo(NavigationManager.Uri, forceLoad: true);
When the ‘refresh data grid’ button is clicked, we can create a new instance of the DB context
This is only a solution for this specific case, though. The underlying problem still exists: multiple users have different instances of DB contexts, which will sooner or later blow up in our faces
API methods are our unit-of-work. We could manually create a DI scope, use it for the duration of the API call and then dispose of it. But that would mean we would have to bubble the services (at least the DB context) down to each and every class that needs them :/
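A sketch of that manual-scope idea using `IServiceScopeFactory`: every BL service resolved inside the scope shares the same scoped DbContext instance, without having to pass it by hand (all class and method names here are hypothetical):

```csharp
// One DI scope per API call: everything resolved from scope.ServiceProvider
// shares the same scoped AppDbContext, giving "1 API call = 1 unit of work".
public class OrderApi
{
    private readonly IServiceScopeFactory _scopeFactory;

    public OrderApi(IServiceScopeFactory scopeFactory) => _scopeFactory = scopeFactory;

    public async Task PlaceOrderAsync(OrderDto dto)
    {
        await using var scope = _scopeFactory.CreateAsyncScope();
        var orderBl = scope.ServiceProvider.GetRequiredService<OrderServiceBL>();
        var stockBl = scope.ServiceProvider.GetRequiredService<StockServiceBL>();
        // Both BL services receive the same scoped DbContext instance.
        await orderBl.CreateAsync(dto);
        await stockBl.ReserveAsync(dto);
        // Calling SaveChanges once at the end keeps the API call one DB transaction.
    } // scope disposed here; the DbContext goes with it
}
```

Resolving the BL services from the scope (rather than from constructor injection) is what avoids the “bubble everything down” problem, at the cost of a service-locator-ish call site.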
Any ideas would be much appreciated
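One candidate, and the approach the Blazor Server + EF Core doc linked above recommends: register a context factory and create a short-lived context per operation, so every grid refresh sees current data. A sketch, assuming a hypothetical `AppDbContext` and `Order` entity:

```csharp
// Program.cs - register a factory instead of (or alongside) a scoped context.
builder.Services.AddDbContextFactory<AppDbContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("Default")));

// In a component or service: one fresh context per logical operation,
// so every refresh reads the current state of the database.
public class GridDataService
{
    private readonly IDbContextFactory<AppDbContext> _factory;

    public GridDataService(IDbContextFactory<AppDbContext> factory)
        => _factory = factory;

    public async Task<List<Order>> LoadOrdersAsync()
    {
        await using var context = await _factory.CreateDbContextAsync();
        return await context.Orders.AsNoTracking().ToListAsync();
    }
}
```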

Related

Db Context for Console Application

I have a console application written in C# that runs as a service each hour. The application has a Data Access Layer (DAL) to connect to the database with a Db Context. This context is a property of the DAL and is created each time the DAL is created. I believe this has led to errors when updating various elements.
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
Since a service can run for a long time, it is good practice to open the connection, do the job and then close the connection.
If you have a chain of methods, you can pass your opened DbContext as a parameter.
For instance:
call to A
call to B(DbContext)
call to C(DbContext)
Another good practice is to protect your code with try/catch, because your database could be offline, not reachable, etc.
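A sketch of that shape: hypothetical methods `A`/`B`/`C` share one context per service run, and the whole run is wrapped in try/catch (`AppDbContext` is an assumed name):

```csharp
// One DbContext per service run: the methods share the same instance,
// and the context is disposed when the run finishes.
public void RunOnce()
{
    try
    {
        using var context = new AppDbContext();
        A(context);
        B(context);
        C(context);
        context.SaveChanges(); // commit the run's changes in one go
    }
    catch (Exception ex)
    {
        // The database could be offline or unreachable - log and retry next hour.
        Console.Error.WriteLine(ex);
    }
}
```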
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
You should (re)create your DbContext whenever you suspect the underlying data has changed, because the DbContext assumes that data, once fetched from the data source, never changes and can be returned as the result of a query, even if that query comes minutes, hours or years later. It's caching, with all its advantages and disadvantages.
I would suggest you (re)create your DbContext whenever you start a new loop of your service.
DbContext is really an implementation of the Unit of Work pattern, so it represents a single business transaction, which is typically a request in a web app. So it should be instantiated when a business transaction begins, then some operations on the db should be performed and committed (that's SaveChanges), and the context should be closed.
If running the console app represents one business transaction, so it's kind of like a web request, then of course you can have a singleton instance of DbContext. You cannot use this instance from different threads, so your app should be single-threaded, and you should be aware that DbContext caches some data, so eventually you may have memory issues. If your db is used by many clients and the data changes often, you may have concurrency issues if the time between fetching some data from the db and saving it is too long, which might be the issue here.
If not, try to separate your app into business transactions and resolve your db context per transaction. Such a transaction could be a command entered by the user.

Parallel Transactions in distinct Session in NHibernate / SQL Server

We are building a WinForms desktop application which talks to SQL Server through NHibernate. After extensive research we settled on the Session / Form strategy, using Ninject to inject a new ISession into each Form (or the backing controller, to be precise). So far it is working decently.
Unfortunately the main Form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event that lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to open a transaction per session in parallel as long as they are not nested.
Sadly this doesn't seem to be the case, and we either get an ObjectDisposedException in one of the forms or the service (because the service session used an existing transaction from one of the forms and committed it, which fails the commit in the form, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively is there a better way to update stale data in a long running form?
Thanks in advance!
I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to insert your own underlying database connections (and made an error there), each session will have their own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
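A sketch of the independent-session pattern the answer describes, assuming a shared `ISessionFactory` and a hypothetical `Order` entity; each form or background service opens its own session and its own transaction:

```csharp
// Each consumer opens its own session from the shared factory. Each
// session gets its own connection and its own NHibernate transaction,
// so sessions can run in parallel as long as nothing is shared between them.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    var rows = session.Query<Order>().ToList(); // requires NHibernate.Linq
    // ... work with rows ...
    tx.Commit();
} // disposing here releases this session's connection only
```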

Should the DbContext in EF have a short life span?

I have a few long running tasks on my server. Basically they are like scheduled tasks - they run from time to time.
They all required access to the DB and I use Entity Framework for that. Each task uses a DbContext for access.
Should the DbContext object be recreated on every run or should I reuse it?
I should say "it depends" as there are probably scenarios where both answers are valid, however the most reasonable answer is "the context should be disposed as soon as it is not needed" which in practice means "dispose rather sooner than later".
The risk with such an answer is that newcomers sometimes conclude that the context should be disposed as often as possible, which sometimes leads to code I review where there are consecutive "usings" that create a context, use it for one or two operations, dispose it, and then another context comes up on the next line. This is of course not recommended either.
In the case of web apps, the natural lifecycle is tied to the lifecycle of web requests. In the case of system services / other long-running applications, one lifecycle strategy is "per business process instance" / "per use-case instance", where business processing / use-case implementations define natural borders at which separate context instances make sense.
Yes, DbContext should only live for a very short time. It is effectively your unit of work
You should definitely create it each time you're going to use it. (Well, you should inject it but that's another discussion :-))
Update: OK, I accept that 'create it each time you're going to use it' could be misleading. I'm so used to the context being an instance on a class that is injected, and so lives only for the life of a request, that I struggle to think of it any other way... #wiktor's answer is definitely better, as it more correctly expresses the idea that you should "dispose sooner rather than later"

In Service Oriented Architecture (SOA), should each service own its own data?

Under Service Oriented Architecture (SOA), I am interested in the question of whether a service should own its own data or not.
One of the constraints is that if anything fails at any point, we need to be able to roll the state of the entire system back to a prior state so we can retry or resume an operation.
If each service owns its own data, then does this imply that the system deals with change better from the programmers point of view?
However, if each service owns its own data, are there any mechanisms to roll the entire system back to a prior state so a failed operation can be resumed or retried?
It sounds like the granularity of what you call services might be wrong. A single service can have multiple endpoints (using same or different protocols) and if a message received on one endpoint requires rolling back state that was received on another it is still an internal transaction within the boundary of the service.
If we consider the simplistic example of order and customer services. The order services may have contracts with messages relating to the whole order or to an order line and cancelling the order will undo state that was affected by both. Usually the address change in the customer service would not be rolled back with that.
Sometimes service actions are tied together in a longer business process. To continue the example above, let's also add an invoicing service: when we cancel an order we also want to cancel the invoice. However, it is important to note that business rules within the realm of the invoicing service can behave differently and not simply "roll back"; e.g. cancelling an order late may incur cancellation fees. This sort of long-running interaction is what I call a saga (you can see a draft of that pattern here)
Also note that distributed transactions between services are usually not a good idea, for several reasons (like holding locks for an external party you don't necessarily trust); you can read more about that here
The problem you raised here is (partially) solved by the two-phase commit protocol (see wikipedia article)
To avoid implementing this complex algorithm, you can dedicate one of the services of the architecture to data management. If you need data synchronization between different databases, try to do it at the lowest layer (i.e. the system or DBMS).
An SOA system defines multiple services within one system. This can make the services autonomous, so that each service can be hosted on a different machine.
But that does not mean you cannot provide a unified persistence layer for all (domain) models pointing to one storage => a simple business transaction when the whole system is spread across several computers, or a transaction for one system.
An autonomous domain model is useful, among other things, during refactoring, to avoid situations where a change in one model causes a change in another service => global changes across the whole application.
In short: No. Services don't "own" data.
Data are truths about the world, and implicitly durable and shared. Logical services (API) don't always map to real-world data in a 1-1 way. Physical services (code) are implementations that are very refactorable, which opposes the durable nature of data.
When you partition the data, you lose descriptive power and analytic insight. But where it really kills you is integrity. Data cannot be kept coherent across silos as you scale. For complex data, you need those foreign keys.
Put another way: a platform only has one "logical" DB (per environment), because there is only one universe. There are many valid reasons to break up a DB, such as HW limits, performance, coordination, replication, and compliance. But treat them as necessary evils, used only when needed.
But I think you may be asking a different question: "should a long-running, data-based transaction be managed by a single authoritative service?" And typically, that answer is: Yes. That transaction service can implement the multiple steps to sequence the flow as it sees fit, such as 2-phase commit. All your other services should use that transaction service to execute the transaction.
BUT! That transaction service must interact with the DB as a shared resource using only atomic semantics. That includes all the transaction states (intent, then action, then result) so that recovery and rollbacks are possible. The database must be empowered to maintain integrity in the event of faults. I cannot stress this enough: everything, always must decompose into atomic DB operations if you want fault tolerance.

should ObjectContexts in Entity Framework 5 be singletons?

When using ObjectContext in EF 5, should it be a singleton or is it better to create new instance every time like SqlConnection. If so, why?
Create and dispose the context as soon as possible. Quote from the guidelines on MSDN:
Here are some general guidelines when deciding on the lifetime of the context:
When working with a long-running context, consider the following:
As you load more objects and their references into memory, the memory consumption of the context may increase rapidly. This may cause performance issues.
Remember to dispose of the context when it is no longer required.
If an exception causes the context to be in an unrecoverable state, the whole application may terminate.
The chances of running into concurrency-related issues increase as the gap between the time when the data is queried and updated grows.
When working with Web applications, use a context instance per request.
When working with Windows Presentation Foundation (WPF) or Windows Forms, use a context instance per form. This lets you use the change-tracking functionality that the context provides.
