I have a console application written in C# that runs as a service each hour. The application has a Data Access Layer (DAL) to connect to the database with a Db Context. This context is a property of the DAL and is created each time the DAL is created. I believe this has led to errors when updating various elements.
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
Since a service can be running for a long time, it is good practice to open the connection, do the work, and then close the connection.
If you have a chain of methods, you can pass your opened DbContext down as a parameter.
For instance:
call to A
call to B(DbContext)
call to C(DbContext)
Another good practice is to protect your code with try/catch, because your database could be offline, unreachable, etc.
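A minimal sketch of that shape for an hourly run; ServiceDbContext, UpdateCustomers and UpdateOrders are hypothetical placeholders for your own DAL types, not names from the question:

public void RunOnce()
{
    try
    {
        using (var db = new ServiceDbContext())   // open the context once per run
        {
            UpdateCustomers(db);                  // pass the same instance down the call chain
            UpdateOrders(db);
            db.SaveChanges();                     // commit the whole run as one unit of work
        }                                         // context and its connection are released here
    }
    catch (Exception ex)
    {
        // the database may be offline or unreachable; log and let the next hourly run retry
        Console.Error.WriteLine(ex);
    }
}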
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
You should (re)create your DbContext whenever you suspect the underlying data has changed, because the DbContext assumes that data, once fetched from the data source, never changes and can be returned as the result of a query, even if that query comes minutes, hours or years later. It's caching, with all its advantages and disadvantages.
I would suggest you (re)create your DbContext whenever you start a new loop of your service.
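As a sketch, assuming a simple polling loop (MyDbContext, DoWork and stopRequested are illustrative names; Thread.Sleep stands in for whatever scheduling the service uses):

while (!stopRequested)
{
    using (var db = new MyDbContext())   // fresh context per iteration, so no stale cached entities
    {
        DoWork(db);
        db.SaveChanges();
    }
    Thread.Sleep(TimeSpan.FromHours(1)); // wait for the next scheduled run
}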
DbContext is really an implementation of the Unit of Work pattern, so it represents a single business transaction, which is typically a request in a web app. It should be instantiated when a business transaction begins, then some operations should be performed on the DB and committed (that's SaveChanges), and the context should be closed.
If running the console app represents one business transaction, so it is roughly analogous to a web request, then of course you can have a singleton instance of DbContext. You cannot use this instance from different threads, so your app should be single-threaded, and you should be aware that DbContext caches some data, so eventually you may have memory issues. If your DB is used by many clients and the data changes often, you may get concurrency issues when the time between fetching some data and saving it is too long, which might be the issue here.
If not, try to separate your app into distinct business transactions and resolve your DbContext per transaction. Such a transaction could be, for example, a command entered by the user.
Related
Problem:
We have a Blazor server app with a DevExpress grid component, showing data directly from the DB. We want all the operations - filtering, grouping, etc. - to take place on the DB layer, that’s why we don’t use a Service layer to fetch the data, rather we hook directly onto the DB context.
Let’s say we have 2 users, looking at the same grid, each in his own browser (that implicitly means 2 different SignalR connections). User 1 changes the state, but user 2 isn’t aware of that, even if he refreshes the grid. Only when user 2 refreshes the page (F5) are the differences shown.
Explanation:
DB contexts are “scoped DI” by default. In a classic HTTP request-response architecture, that means that for the duration of a request, one and the same instance of the DB context is provided by the DI to all who request it. In the example above, data would be refreshed, because each request will instantiate a new DB context.
In a Blazor app, things are different. DB context in our case is not refreshed with each WEB request. Actually, the term ‘request’ doesn’t even exist in SignalR (WebSocket) and WebAssembly. So, what happens in our example? As long as the SignalR connection is alive, user 2 has the same instance of the DB context. If another user changes state in his own instance of the context, these changes aren’t propagated to other context instances. Roughly, this means that a ‘scoped’ DB context actually becomes a ‘singleton’ (well, almost, singleton in the scope of a user / session / signalR connection).
Links:
https://learn.microsoft.com/en-us/aspnet/core/blazor/fundamentals/dependency-injection
https://learn.microsoft.com/en-us/aspnet/core/blazor/blazor-server-ef-core
https://www.thinktecture.com/blazor/dependency-injection-scopes-in-blazor/
Thoughts:
Our service layer is stateless, so it isn't a problem; the DB contexts are the problematic part
Blazor doesn’t have a concept of a ‘scoped’ service
‘Scoped’ is actually a singleton in the scope of a single connection
‘Singleton’ provides the same service for all the connections
There is an approximation of a scoped service, scoped to the ‘component’ level
Each razor component will use the same instances in its lifetime
But this lifetime can be long lived nonetheless
Another, similar approximation
Truth be told, things are pretty similar to the classic request-response architecture: if 2 requests happened at exactly the same time, there would be 2 DB context instances with different states. This can certainly happen, but the probability is low, so it's not much of a problem
Having a ‘transient’ DB context also isn’t OK
we want our API (service layer) methods to be a “unit of work” (1 API - 1 DB transaction)
one API can call multiple BL functions, each in a separate ServiceBL class - those should share the same DB context instance
Solutions:
Scoped is already treated almost the same as a singleton. What if we registered DB contexts as singletons?
Sounds like a bad idea - everybody would use one long-lived instance, it would/could present a bottleneck, what about thread safety?
"EF Core does not support multiple parallel operations being run on the same context instance"
‘Page refresh‘ in the right places can be a substitute for ‘scopes’
await JSRuntime.InvokeVoidAsync("location.reload");
NavigationManager.NavigateTo(NavigationManager.Uri, forceLoad: true);
When the ‘refresh data grid’ button is clicked, we can create a new instance of the DB context (see the sketch after this block)
This is only a solution for this specific case, though. The underlying problem still exists: multiple users have different instances of the DB context, which will sooner or later blow up in our faces
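A rough sketch of that refresh idea, assuming the component holds its own context field; AppDbContext, Employee and Employees are illustrative names, not our actual model:

private AppDbContext _db = new AppDbContext();
private List<Employee> gridData = new();

private void RefreshGrid()
{
    _db.Dispose();                     // drop the stale context and everything it has cached
    _db = new AppDbContext();          // a fresh context re-reads current data on the next query
    gridData = _db.Employees.ToList(); // rebind the grid to up-to-date rows
}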
API methods are our unit-of-work. We could manually create a DI scope, use it for the duration of the API and then dispose of it. But that would mean we would have to bubble the services (at least DB context) down to each and every class that would need them :/
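For what it's worth, a minimal sketch of that manual-scope approach with IServiceScopeFactory (from Microsoft.Extensions.DependencyInjection); SaveEmployeeAsync, AppDbContext and Employee are illustrative names:

private readonly IServiceScopeFactory _scopeFactory;   // injected via the constructor

public async Task SaveEmployeeAsync(Employee employee)
{
    using var scope = _scopeFactory.CreateScope();                      // one scope = one unit of work
    var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();  // everything resolved from this scope shares the instance
    db.Update(employee);
    await db.SaveChangesAsync();
}                                                                       // scope and context are disposed here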
Any ideas would be much appreciated
I have developed a Windows service that uses database connections.
I have created the following field:
private MyDBEntities _db;
and in OnStart I have:
_db = new MyDBEntities();
Then the service does its work.
In OnStop method I have:
_db.Dispose();
_db = null;
Is there a disadvantage to this approach? For performance reasons, I need the connection to the database (which is SQL Server) to stay open the whole time the service is running.
Thanks
Jaime
If your service is the only app that accesses this database, you shouldn't see any performance decrease. However, in my opinion it is not the best approach to keep a long-lived connection to the database. Imagine a situation where you don't keep your database on your own server but use a cloud provider (Google, AWS, Azure). With cloud solutions the address of your server may not be fixed and may vary over time. The IP address may even change during the execution of a query (most likely you'll then get a transient SqlException or similar).
If your service is the only app that accesses the database and you run only one instance of it, then this approach might be beneficial in terms of performance, as you don't have to open and close the connection every time. However, remember that with this approach many other issues may appear (you may have to reconnect from a stale connection, connect to another replica instance, or tear down an existing connection for reasons I can't think of right now). Moreover, keep in mind the multithreading issues that will most likely arise with this approach if you don't handle it carefully.
IMHO you should open a connection to the database whenever it is needed and close it just after using it. You'll avoid most of the issues mentioned above.
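In code, that just means a short-lived context per operation instead of a long-lived _db field (MyDBEntities is the context type from the question; the Employees set is illustrative). ADO.NET connection pooling keeps this cheap:

private List<Employee> LoadEmployees()
{
    using (var db = new MyDBEntities())   // grabs a pooled connection on first use
    {
        return db.Employees.ToList();     // materialize results before the context is disposed
    }                                     // the underlying connection goes back to the pool here
}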
Having a Singleton context will cause threads to lock on SaveChanges() (slowing performance).
Also, each event (which I presume runs asynchronously) could end up saving some other event's information, causing unexpected behavior.
As someone already pointed out, you can rely on connection pooling to avoid connection overhead and dispose of the context on each request/event fired.
We are building a WinForms desktop application which talks to an SQL Server through NHibernate. After extensive research we settled on the Session / Form strategy, using Ninject to inject a new ISession into each Form (or the backing controller, to be precise). So far it is working decently.
Unfortunately the main Form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event which lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to open a transaction per session in parallel as long as they are not nested.
Sadly this doesn't seem to be the case, and we either get an ObjectDisposedException in one of the forms or the service (because the service session used an existing transaction from one of the forms and committed it, which fails the commit in the form, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively is there a better way to update stale data in a long running form?
Thanks in advance!
I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to supply your own underlying database connections (and made an error there), each session will have its own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
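For reference, the independent-sessions pattern looks roughly like this; sessionFactory is assumed to be a single shared ISessionFactory, and each form or background poller runs its own copy of this block without interfering with the others:

using (var session = sessionFactory.OpenSession())
using (var transaction = session.BeginTransaction())
{
    // load / save entities on this session only
    transaction.Commit();
}   // disposing without Commit() rolls the transaction back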
I'm trying to develop a Web Forms application using NHibernate and the Session Per Request model. All the examples I've seen have an HttpModule that creates a session and transaction at the beginning of each request and then commits the transaction and closes the session at the end of the request. I've got this working, but I have some concerns.
The main concern is that objects are automatically saved to the database when the web request is finished. I'm not particularly pleased with this and would much prefer some way to take a more active approach to deciding what is actually saved when the request is finished. Is this possible with the Session Per Request approach?
Ideally I'd like for the interaction with the database to go something like this:
Retrieve an object from the database or create a new one
Modify it in some way
Call a save method on the object which validates that it's indeed ready to be committed to the database
Object gets saved to the database
I'm able to accomplish this if I do not use the Session Per Request model and instead wrap the interactions in using session / using transaction blocks. The problem I ran into with that approach is that after the object is loaded from the database the session is closed, and I am not able to utilize lazy loading. Most of the time that's okay, but there are a few objects which have lists of other objects that then cannot be modified because, as stated, the session has been closed. I know I could eagerly load those objects, but they don't always get used and I feel that in doing so I'm failing to utilize NHibernate properly.
Is there some way to use the Session Per Request (or any other model, it seems like that one is the most common) which will allow me to utilize lazy loading AND provide me with a way to manually decide when an object is saved back to the database? Any code, tutorials, or feedback is greatly appreciated.
Yes, this is possible and you should be able to find examples of it. This is how I do it:
Use session-per-request but do not start a transaction at the start of the request.
Set ISession.FlushMode to Commit.
Use individual transactions (occasionally multiple per session) as needed.
At the end of the session, throw an exception if there's an active uncommitted transaction. If the session is dirty, flush it and log a warning.
With this approach, the session is open during the request lifetime so lazy loading works, but the transaction scope is limited as you see fit. In my opinion, using a transaction-per-request is a bad practice. Transactions should be compact and surround the data access code.
Be aware that if you use database assigned identifiers (identity columns in SQL Server), NHibernate may perform inserts outside of your transaction boundaries. And lazy loads can of course occur outside of transactions (you should use transactions for reads also).
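Put together, the module opens the session and the business code decides when to commit; a rough sketch, where Order, Approve and orderId are illustrative names:

// at the start of the request (e.g. in the HttpModule):
var session = sessionFactory.OpenSession();
session.FlushMode = FlushMode.Commit;        // nothing is written unless a transaction commits

// in the business code, only where a save is actually wanted:
using (var tx = session.BeginTransaction())
{
    var order = session.Get<Order>(orderId); // lazy loading still works: the session stays open
    order.Approve();                         // validate / modify as needed
    tx.Commit();                             // the explicit "save" decision
}

// at the end of the request: throw if a transaction is still active, flush and warn if dirty, then dispose the session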
I have a WCF service which has two methods exposed:
Note: the WCF service and SQL Server are deployed on the same machine.
SQL Server has one table called employee which maintains employee information.
Read() - this method retrieves all employees from SQL Server.
Write() - this method writes (adds, updates, deletes) employee info in the employee table in SQL Server.
Now I have developed a desktop-based application through which any client can query, add, update and delete employee information by consuming the web service.
Question:
How can I handle the scenario where multiple clients want to update the employee information at the same time? Does SQL Server itself handle this by using database locks?
Please suggest the best approach!
Generally, in a disconnected environment optimistic concurrency with a rowversion/timestamp is the preferred approach. WCF does support distributed transactions, but that is a great way to introduce lengthy blocking into the system. Most ORM tools will support rowversion/timestamp out-of-the-box.
Of course, at the server you might want to use transactions (either connection-based or TransactionScope) to make individual repository methods "ACID", but I would try to avoid transactions on the wire as far as possible.
Re comments; sorry about that, I honestly didn't see those comments; sometimes stackoverflow doesn't make this easy if you get a lot of comments at once. There are two different concepts here; the waiting is a symptom of blocking, but if you have 100 clients updating the same record it is entirely appropriate to block during each transaction. To keep things simple: unless I can demonstrate a bottleneck (requiring extra work), I would start with a serializable transaction around the update operations (TransactionScope uses this by default). That way yes: you get appropriate blocking (ACID etc) for most scenarios.
However; the second issue is concurrency: if you get 100 updates for the same record, how do you know which to trust? Most systems will let the first update in, and discard the rest as they are operating on stale assumptions about the data. This is where the timestamp/rowversion come in. By enforcing "the timestamp/rowversion must match" on the UPDATE statement, you ensure that people can only update data that hasn't changed since they took their snapshot. For this purpose, it is common to keep the rowversion alongside any interesting data you are updating.
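On the server that check is just part of the UPDATE statement: zero rows affected means somebody else got there first. A sketch with plain SqlClient, where the table and column names are illustrative and connection is an already-open SqlConnection:

using (var cmd = new SqlCommand(
    "UPDATE Employee SET Name = @name " +
    "WHERE Id = @id AND RowVersion = @originalRowVersion", connection))
{
    cmd.Parameters.AddWithValue("@name", newName);
    cmd.Parameters.AddWithValue("@id", id);
    cmd.Parameters.AddWithValue("@originalRowVersion", originalRowVersion);

    if (cmd.ExecuteNonQuery() == 0)   // 0 rows = the record changed after this client read it
        throw new DBConcurrencyException("The employee was modified by another client.");
}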
Another alternative is that you could instantiate the WCF service as a singleton (InstanceContextMode.Single), which means there is only ever one instance of it running. Then you could keep a simple object in memory for the purpose of update locking and lock in your update method on that object. When update calls come in from other sessions, they will have to wait until the lock is released.
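That would look roughly like this; a sketch only (EmployeeService and IEmployeeService are illustrative names), and note that funnelling every update through a single lock obviously limits throughput:

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class EmployeeService : IEmployeeService
{
    private readonly object _updateLock = new object();

    public void Write(Employee employee)
    {
        lock (_updateLock)   // update calls from other sessions wait here until the lock is released
        {
            // perform the add/update/delete against SQL Server
        }
    }
}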
Regards,
Steve