Parallel transactions in distinct sessions in NHibernate / SQL Server - C#

we are building a WinForms desktop application which talks to an SQL Server through NHibernate. After extensive research we settled on the Session / Form strategy using Ninject to inject a new ISession into each Form (or the backing controller to be precise). So far it is working decently.
Unfortunately the main form holds a lot of data (mostly read-only) which gets stale after some time. To prevent this we implemented a background service (really just a separate class) which polls the DB for changes and raises an event which lets the main form selectively update the changed rows.
This background service also gets a separate session to minimize interference with the other forms. Our understanding was that it is possible to open a transaction per session in parallel as long as they are not nested.
Sadly this doesn't seem to be the case: we either get an ObjectDisposedException in one of the forms or in the service (because the service session used an existing transaction from one of the forms and committed it, which fails the commit in the form, or the other way round), or we get an InvalidOperationException stating that "Parallel transactions are not supported by SQL Server".
Is there really no way to open more than one transaction in parallel (across separate sessions)?
And alternatively is there a better way to update stale data in a long running form?
Thanks in advance!

I'm pretty sure you have messed something up, and are sharing either session or connection instances in ways you did not intend.
It can depend a bit on which sort of transactions you use:
If you use only NHibernate transactions (session.BeginTransaction()), each session acts independently. Unless you do something special to supply your own underlying database connections (and made an error there), each session will have its own connection and transaction.
If you use TransactionScope from System.Transactions in addition to the NHibernate transactions, you need to be careful about thread handling and the TransactionScopeOption. Otherwise different parts of your code may unexpectedly share the same transaction if a single thread runs through both parts and you haven't used TransactionScopeOption.RequiresNew.
Perhaps you are not properly disposing your transactions (and sessions)?
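As a minimal sketch of that first case (assuming sessionFactory is your application's configured ISessionFactory), two sessions opened side by side each get their own connection and their own independent transaction:

```csharp
using NHibernate;

static class ParallelSessions
{
    // Sketch: two sessions from the same factory run independent transactions.
    // 'sessionFactory' is assumed to be the application's configured ISessionFactory.
    public static void UpdateInParallel(ISessionFactory sessionFactory)
    {
        using (ISession formSession = sessionFactory.OpenSession())
        using (ISession pollSession = sessionFactory.OpenSession())
        using (ITransaction formTx = formSession.BeginTransaction())
        using (ITransaction pollTx = pollSession.BeginTransaction())
        {
            // Each session owns its own ADO.NET connection, so these two
            // transactions do not interfere with one another.
            // ... work with formSession ...
            // ... work with pollSession ...
            formTx.Commit();
            pollTx.Commit();
        }
    }
}
```

If this pattern throws for you, something is sharing a session or connection between the two scopes.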

Related

Calling Dispose method of a field in OnStop of a Windows service

I have developed a Windows service that uses database connections.
I have created the following field:
private MyDBEntities _db;
and in OnStart I have:
_db = new MyDBEntities();
Then the service does its work.
In OnStop method I have:
_db.Dispose();
_db = null;
Is there a disadvantage with this approach? For performance reasons, I need the connection to the database (which is SQL Server) to stay open the whole time the service is running.
Thanks
Jaime
If your service is the only app that accesses this database, there shouldn't be any performance decrease. However, in my opinion, it is not the best approach to have a long-lived connection to the database. Imagine a situation where you don't keep your database on your own server, but use a cloud provider (Google, AWS, Azure). With cloud solutions, the address of your server may not be fixed and may vary over time. It may even happen that the IP address changes during the execution of one query (most likely you'll get a SqlTransientException or similar, I don't remember exactly).
If your service is the only app that accesses the database and you have only one instance of it, then this approach might be beneficial in terms of performance, as you don't have to open and close the connection every time. However, remember that with this approach many other issues may appear (you may have to reconnect from a stale connection, connect to other replica instances, or destroy the existing connection for reasons I can't think of at the moment). Moreover, remember the multithreading issues that will most likely arise with this approach if you don't handle it correctly.
IMHO - you should open a connection to the database whenever it is needed, and close it just after using it. You'll avoid most of the issues I've mentioned earlier.
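For illustration, here is the open-late/close-early pattern this suggests; thanks to ADO.NET connection pooling, Open()/Close() is cheap because "closing" usually just returns the physical connection to the pool. The connection string and query are placeholders:

```csharp
using System.Data.SqlClient;

static class EmployeeQueries
{
    // Sketch: open the connection only for the duration of each operation.
    public static int CountEmployees(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT COUNT(*) FROM Employee", connection))
        {
            connection.Open();
            return (int)command.ExecuteScalar();
        }   // Dispose closes the connection (i.e. returns it to the pool).
    }
}
```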
Having a Singleton context will cause threads to lock on SaveChanges() (slowing performance).
Also each event (which I presume runs asynchronously) could possibly save some other event's information, causing unexpected behavior.
As someone already pointed out, you can use connection pooling to avoid connection issues and dispose the context on each request/event fired.

Db Context for Console Application

I have a console application written in C# that runs as a service each hour. The application has a Data Access Layer (DAL) to connect to the database with a Db Context. This context is a property of the DAL and is created each time the DAL is created. I believe this has led to errors when updating various elements.
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
Since a service can be running for a long time, it is good practice to open the connection, do the job and then close the connection.
If you have a chain of methods, you can pass your opened DbContext as a parameter.
For instance:
call to A
call to B(DbContext)
call to C(DbContext)
Another good practice is to protect your code with try/catch, because your database could be offline, not reachable, etc.
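Those two points can be sketched together like this, assuming an EF context named MyDbContext and illustrative methods A, B, C:

```csharp
using System;

class HourlyJob
{
    // Sketch: one DbContext per run, passed down the call chain and
    // disposed at the end. 'MyDbContext', 'A', 'B', 'C' are placeholders.
    public void RunOnce()
    {
        try
        {
            using (var db = new MyDbContext())
            {
                A(db);
                db.SaveChanges();
            }   // context disposed here, connection released
        }
        catch (Exception ex)
        {
            // The database could be offline or unreachable; log and retry later.
            Console.Error.WriteLine(ex);
        }
    }

    private void A(MyDbContext db) { B(db); }
    private void B(MyDbContext db) { C(db); }
    private void C(MyDbContext db) { /* queries and updates using 'db' */ }
}
```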
Question: Should the application create a Db Context when it runs and use this throughout the application so that all objects are being worked on with the same context?
You should (re)create your DbContext whenever you suspect the underlying data has changed, because the DbContext assumes that data, once fetched from the data source, never changes and can be returned as the result of a query, even if that query comes minutes, hours or years later. It's caching, with all its advantages and disadvantages.
I would suggest you (re)create your DbContext whenever you start a new loop of your service.
DbContext is really an implementation of the Unit of Work pattern, so it represents a single business transaction, which is typically a request in a web app. It should be instantiated when a business transaction begins, then some operations on the db should be performed and committed (that's SaveChanges), and the context should be closed.
If running the console app represents one business transaction, so it's kind of like a web request, then of course you can have a singleton instance of DbContext. You cannot use this instance from different threads, so your app should be single-threaded, and you should be aware that DbContext caches some data, so eventually you may have memory issues. If your db is used by many clients and the data changes often, you may have concurrency issues if the time between fetching data from the db and saving it is too long, which might be the issue here.
If not, try to separate your app into business transactions and resolve your db context per transaction. Such a transaction could be a command entered by the user.
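A short sketch of that per-command unit of work (MyDbContext and the command handling are illustrative):

```csharp
class CommandProcessor
{
    // Sketch: a fresh context per business transaction (here, one user command).
    public void HandleCommand(string command)
    {
        using (var db = new MyDbContext())   // unit of work begins
        {
            // ... perform the operations this command requires ...
            db.SaveChanges();                // commit the unit of work
        }                                    // context disposed; its cache discarded
    }
}
```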

Create a 'Licensing' feature with SQL-Server

I want to implement the following interface on a 2-Tier application with an MS SQL-Server 2008R2 (i.e. no app server in between)
interface ILicense {
    void Acquire(string license);
    void Release(string license);
}
However, I want to release the license even if the application is killed or bombs out without calling the Release method. I also want to avoid using a timer which refreshes the license every minute or so.
So I thought: use a dedicated SqlConnection together with the sp_getapplock and sp_releaseapplock SPs, because that's what they seem to be made for. Then I found out that the SPs only work from within a transaction, so I would need to keep the transaction open the whole time (i.e. while the application is running). Anyway, it works that way: the application starts, opens the connection, starts the transaction, and locks the license.
When the application terminates, the connection is closed, everything is rolled back and the license is released. Super.
Whenever the running app needs to switch licenses (e.g. for another module), it calls Release on the old license and then Acquire on the new one. Cool.
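In code, the approach looks roughly like this (connection string handling omitted; resource names are placeholders). Note that sp_getapplock defaults to transaction-owned locks, which is exactly what makes the rollback-on-disconnect release work:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

sealed class SqlLicense : IDisposable
{
    private readonly SqlConnection _connection;
    private readonly SqlTransaction _transaction;

    public SqlLicense(string connectionString)
    {
        // Dedicated connection and long-lived transaction for the app's lifetime.
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    public void Acquire(string license)
    {
        using (var cmd = new SqlCommand("sp_getapplock", _connection, _transaction))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", license);
            cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            cmd.ExecuteNonQuery();
        }
    }

    public void Release(string license)
    {
        using (var cmd = new SqlCommand("sp_releaseapplock", _connection, _transaction))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", license);
            cmd.ExecuteNonQuery();
        }
    }

    // If the process dies without calling Dispose, SQL Server rolls the
    // transaction back when the connection drops, which releases the lock.
    public void Dispose()
    {
        _transaction.Dispose();
        _connection.Dispose();
    }
}
```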
Now to my question(s):
Is it acceptable to keep an uncommitted transaction open on a separate connection for a long time?
Are there any better possibilities to implement such a 'lock' mechanism? The problem is that the license shall be released even if the application terminates unexpectedly. I thought of some sort of 'logout' trigger, but that does not exist in SQL Server 2008R2.
I am by no means the SQL or DB guru that some of the members of this site are, but your setup brings up a few concerns or things to consider:
This could really limit the number of concurrent users that your application could have, especially in a 2-tier architecture. In a 3-tier approach the app server would manage and pool these connections/transactions, but then you would lose the ability to use those stored procs to implement your licensing mechanism, I believe.
With the transaction being open for some indeterminate period of time, I would worry about the possibility of tempdb growing too big or exceeding the space allocated to it. I don't know what is going on in the app and whether there is anything else going on in that transaction; my guess is no, but I thought I would mention it.
I hope I am not getting my SQL versions mixed up here, but transaction wraparound could cause the db to shut down.
This limits your app significantly, as the data in the transaction has a lock on it that won't be released until you commit or roll back.
There must be a more elegant way to implement a licensing model that doesn't rely on leaving a transaction open for the life of the app or app module. If you have a two-tier app, then that implies that the client always has some kind of connectivity, so maybe generate some kind of unique id for the client and either add a call-home method or, if you really are set on there being instantaneous verification, then every time the client performs an action that queries the db, have it check whether the client is properly licensed, etc.
Lastly, in all of the SQL teachings I have received from other db guys who actually really know their stuff, this kind of setup (a long-running open transaction) was never recommended unless there was a very specific need that could not be solved otherwise.

Nhibernate; control over when Session Per Request is saved

I'm trying to develop a Web Forms application using NHibernate and the Session Per Request model. All the examples I've seen have an HttpModule that creates a session and transaction at the beginning of each request and then commits the transaction and closes the session at the end of the request. I've got this working, but I have some concerns.
The main concern is that objects are automatically saved to the database when the web request is finished. I'm not particularly pleased with this and would much prefer some way to take a more active approach to deciding what is actually saved when the request is finished. Is this possible with the Session Per Request approach?
Ideally I'd like for the interaction with the database to go something like this:
Retrieve object from the database or create a new one
Modify it in some way
Call a save method on the object which validates that it's indeed ready to be committed to the database
Object gets saved to the database
I'm able to accomplish this if I do not use the Session Per Request model and instead wrap the interactions in using session / using transaction blocks. The problem I ran into with this approach is that after the object is loaded from the database the session is closed and I am not able to utilize lazy loading. Most of the time that's okay, but there are a few objects which have lists of other objects that then cannot be modified because, as stated, the session has been closed. I know I could eagerly load those objects, but they don't always get used and I feel that in doing so I'm failing to utilize NHibernate properly.
Is there some way to use the Session Per Request (or any other model, it seems like that one is the most common) which will allow me to utilize lazy loading AND provide me with a way to manually decide when an object is saved back to the database? Any code, tutorials, or feedback is greatly appreciated.
Yes, this is possible and you should be able to find examples of it. This is how I do it:
Use session-per-request but do not start a transaction at the start of the request.
Set ISession.FlushMode to Commit.
Use individual transactions (occasionally multiple per session) as needed.
At the end of the session, throw an exception if there's an active uncommitted transaction. If the session is dirty, flush it and log a warning.
With this approach, the session is open during the request lifetime so lazy loading works, but the transaction scope is limited as you see fit. In my opinion, using a transaction-per-request is a bad practice. Transactions should be compact and surround the data access code.
Be aware that if you use database assigned identifiers (identity columns in SQL Server), NHibernate may perform inserts outside of your transaction boundaries. And lazy loads can of course occur outside of transactions (you should use transactions for reads also).
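A sketch of steps 2 and 3 above (the session is assumed to come from the per-request module; Order stands in for any mapped entity):

```csharp
using NHibernate;

static class OrderRepository
{
    // Sketch: the request-scoped session stays open (so lazy loading works),
    // but a transaction is opened only around the write.
    public static void SaveOrder(ISession session, Order order)
    {
        session.FlushMode = FlushMode.Commit;   // flush only on transaction commit

        using (ITransaction tx = session.BeginTransaction())
        {
            session.SaveOrUpdate(order);
            tx.Commit();                        // changes hit the database here
        }
        // Outside a transaction nothing is auto-flushed at the end of the request.
    }
}
```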

Simple query regarding WCF service

I have a WCF service which has two methods exposed:
Note: The WCF service and SQL Server are deployed on the same machine.
SQL Server has one table called Employee which maintains employee information.
Read() This method retrieves all employees from sql server.
Write() This method writes (add,update,delete) employee info in employee table into sql server.
Now I have developed a desktop-based application through which any client can query, add, update and delete employee information by consuming the web service.
Question:
How can I handle the scenario where multiple clients want to update the employee information at the same time? Does SQL Server itself handle this by using database locks?
Please suggest the best approach!
Generally, in a disconnected environment optimistic concurrency with a rowversion/timestamp is the preferred approach. WCF does support distributed transactions, but that is a great way to introduce lengthy blocking into the system. Most ORM tools will support rowversion/timestamp out-of-the-box.
Of course, at the server you might want to use transactions (either connection-based or TransactionScope) to make individual repository methods "ACID", but I would try to avoid transactions on the wire as far as possible.
Re comments; sorry about that, I honestly didn't see those comments; sometimes stackoverflow doesn't make this easy if you get a lot of comments at once. There are two different concepts here; the waiting is a symptom of blocking, but if you have 100 clients updating the same record it is entirely appropriate to block during each transaction. To keep things simple: unless I can demonstrate a bottleneck (requiring extra work), I would start with a serializable transaction around the update operations (TransactionScope uses this by default). That way yes: you get appropriate blocking (ACID etc) for most scenarios.
However; the second issue is concurrency: if you get 100 updates for the same record, how do you know which to trust? Most systems will let the first update in, and discard the rest as they are operating on stale assumptions about the data. This is where the timestamp/rowversion come in. By enforcing "the timestamp/rowversion must match" on the UPDATE statement, you ensure that people can only update data that hasn't changed since they took their snapshot. For this purpose, it is common to keep the rowversion alongside any interesting data you are updating.
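A sketch of such an UPDATE guarded by a rowversion column (table and column names are illustrative):

```csharp
using System.Data.SqlClient;

static class EmployeeUpdates
{
    // Sketch: optimistic concurrency via rowversion. The UPDATE succeeds
    // only if the row still carries the version the client originally read.
    public static bool UpdateEmployee(
        string connectionString, int id, string name, byte[] rowVersion)
    {
        const string sql =
            @"UPDATE Employee
              SET Name = @Name
              WHERE Id = @Id AND RowVersion = @RowVersion";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@Name", name);
            command.Parameters.AddWithValue("@Id", id);
            command.Parameters.AddWithValue("@RowVersion", rowVersion);
            connection.Open();
            // 0 rows affected means someone else updated the row first;
            // the caller should re-fetch and retry or report a conflict.
            return command.ExecuteNonQuery() == 1;
        }
    }
}
```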
Another alternative is to instantiate the WCF service as a singleton (InstanceContextMode.Single), which means there is only ever one instance of it running. Then you could keep a simple object in memory for the purpose of update locking, and lock in your update method on that object. When update calls come in from other sessions, they will have to wait until the lock is released.
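A sketch of that singleton setup (the Employee type and the method body are illustrative):

```csharp
using System.ServiceModel;

// Sketch: one service instance for all callers, with a private lock object
// so concurrent Write calls queue up behind one another.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single,
                 ConcurrencyMode = ConcurrencyMode.Multiple)]
public class EmployeeService
{
    private readonly object _updateLock = new object();

    public void Write(Employee employee)
    {
        lock (_updateLock)
        {
            // ... perform the update; other callers wait here ...
        }
    }
}
```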
Regards,
Steve