On my C# project, I have an SQL connection in MARS mode that is being used by multiple threads to perform CRUD operations. Some of these operations need to be performed as a transaction. After I completed the data access module, I started testing and got an InvalidOperationException from one of the selects, stating that since the connection had an active transaction, the select itself needed to be in a transaction. Snooping around MSDN, I found the following remark:
Caution: When your query returns a large amount of data and calls BeginTransaction, a SqlException is thrown because SQL Server does not allow parallel transactions when using MARS. To avoid this problem, always associate a transaction with the command, the connection, or both before any readers are open.
I could easily create a method that aggregates commands into a transaction; that would even allow me to have a timer thread committing transactions at a regular interval. But is this the right way? Should I instead halt commands that don't need a transaction until the active transaction is committed?
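For example, something like this is what I have in mind (a rough sketch; the table and column names are placeholders, and connection is the shared MARS-enabled SqlConnection). The transaction is attached to each command up front, before any reader is opened, as the remark suggests:

    // Requires System.Data.SqlClient.
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        // Attach the transaction to the reader's command before opening it.
        using (var select = new SqlCommand("SELECT Id, Name FROM Customers", connection, transaction))
        using (SqlDataReader reader = select.ExecuteReader())
        {
            while (reader.Read())
            {
                // consume rows...
            }
        }

        // Same transaction on the write command.
        using (var update = new SqlCommand("UPDATE Customers SET Name = @name WHERE Id = @id", connection, transaction))
        {
            update.Parameters.AddWithValue("@name", "New Name");
            update.Parameters.AddWithValue("@id", 1);
            update.ExecuteNonQuery();
        }

        transaction.Commit();
    }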
I would stay away from MARS.
See this part of your question:
"used by multiple threads to perform CRUD operations"
That screams "one connection per thread, each with its own transaction", unless you have a very rare case here. This absolutely does not sound like a valid use case for MARS.
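As a concrete illustration of the one-connection-per-thread pattern (a rough sketch; the connection string, table, and values are placeholders), each unit of work opens its own connection, which pooling makes cheap, and runs its own transaction:

    // Requires System.Data.SqlClient.
    void DoWorkOnOwnConnection(string connectionString)
    {
        // Each thread / unit of work gets its own connection and transaction;
        // the connection pool reuses the underlying physical connections.
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            using (var command = new SqlCommand("UPDATE Orders SET Status = @s WHERE Id = @id", connection, transaction))
            {
                command.Parameters.AddWithValue("@s", "Shipped");
                command.Parameters.AddWithValue("@id", 42);
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }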
Related
I'm maintaining an ASP.NET website on .NET 4.7.1 that displays some fairly extensive information using Entity Framework 6.0. Right now, all of these DB queries are performed serially, so I'm attempting to improve performance by implementing async/await.
The problem I'm having is that running multiple simultaneous queries against the same database seems to be somewhat delicate, and I'm having trouble finding any best practices for this type of scenario.
The site's initial implementation created a context for each of these queries inside an ambient transaction, and disposed the context after use. Upon converting the whole site to use async (and noting TransactionScopeAsyncFlowOption.Enabled), the page load began throwing exceptions claiming Distributed Transaction Coordinator needed to be configured.
System.Transactions.TransactionManagerCommunicationException: Network access for Distributed Transaction Manager (MSDTC) has been disabled.
Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Some searching at that point led me to believe that this could be remedied in code without touching the configuration, so I next redesigned the data layer to manage connections in a way that would let the queries share the same context. However, when testing that approach, new exceptions were thrown claiming that the connection is too busy.
System.Data.SqlClient.SqlException: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The request failed to run because the batch is aborted, this can be caused by abort signal sent from client, or another request is running in the same session, which makes the session busy.
Normally this page's load time is slow (several seconds) but nowhere near the default timeout threshold.
Is async/await best suited only when the queries to be run in parallel connect to different databases? If not, is MSDTC the only way to enable this behavior? Or is it simply unwise to hit a single database with so many simultaneous queries?
I am not able to understand exactly what changes you have made to the application. I am also not sure that the application was correctly written in the first place, or that it was following reasonable practices. But here are a few data points that I hope can help:
Async support in EF is designed to yield threads back to the pool while waiting for I/O, so that the application can process a higher number of requests using fewer threads and resources. It is not meant to enable parallel execution on the same DbContext. Like the majority of types in .NET, the DbContext is not thread safe (in any version of EF), so you cannot safely execute multiple queries (async or not) in parallel on the same context instance.
Using separate DbContext instances that don't share state or connection objects should be fine. However, in ASP.NET it is still recommended to use a single thread at any point in time to process a request (when you make an async call that yields, processing may continue on a different thread, but that is not a concern) rather than trying to parallelize work within the same request.
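For example, a minimal sketch of the sequential-await pattern under those recommendations (MyDbContext, its Customers and Orders sets, and PageViewModel are assumed names, not your actual types):

    // Requires System.Data.Entity and System.Threading.Tasks.
    public async Task<PageViewModel> LoadPageAsync()
    {
        using (var db = new MyDbContext())
        {
            // Each await yields the request thread back to the pool while the
            // database works, but only one query runs on this context at a time.
            var customers = await db.Customers.ToListAsync();
            var orders = await db.Orders.ToListAsync();

            return new PageViewModel { Customers = customers, Orders = orders };
        }
    }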
Also, regarding the exception from System.Transactions, it may very well be that something you changed is now causing multiple connections to auto-enlist in the same System.Transactions.Transaction, which may require escalating it to a distributed transaction.
I won't try to come up with a complete explanation for the timeouts because, as I said, I am not sure I understand what changes you made to the application. But it is perfectly possible that if you create too many threads, some of them will end up starving and timing out. It is also extremely hard to anticipate everything that could go wrong once you start using types that are not thread safe (e.g. database connections, DbContext) from multiple threads.
It is a .NET application that works with an external device. When some entity (corresponding to a row in a table) wants to communicate with the device, the corresponding row in the SQL Server table should be locked until the device returns a result or SQL Server times out.
I need to:
lock a specific row in a table so that it can be read, but not deleted or updated
run the locking mechanism in a separate thread so that the application's main thread works as usual
release the lock once a response is received
release the lock after a while if no response is received
What is the best practice?
Is there any standardize way to accomplish this?
Should I:
run a new thread (task) in my C# code, begin a serializable transaction, select the desired row inside that transaction, and wait until either the time is up or a cancellation token is signalled?
or use some combination of sp_getapplock and ...etc?
You cannot operate on locks across transactions or sessions. That approach is not feasible.
You need to run one transaction and keep it open for the duration that you want the lock to persist.
The kind of parallelism technology you use is immaterial. An async method with async ADO.NET IO would be suitable. So would a separate LongRunning task.
You probably want to pass a CancellationToken to the transaction code that, when signalled, makes the transaction shut down. That way you can implement arbitrary shutdown conditions without cluttering the transaction code.
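A minimal sketch of that approach (the table name, the UPDLOCK/HOLDLOCK hints, and the method signature are illustrative assumptions, not a definitive implementation). The transaction stays open, and therefore the lock is held, until the token is signalled or the maximum hold time elapses:

    // Requires System.Data.SqlClient and System.Threading.Tasks.
    public async Task HoldRowLockAsync(string connectionString, int id,
        TimeSpan maxHold, CancellationToken responseReceived)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            await connection.OpenAsync();
            using (var transaction = connection.BeginTransaction())
            {
                // UPDLOCK + HOLDLOCK: other sessions can still read the row,
                // but updates/deletes block until this transaction ends.
                using (var command = new SqlCommand(
                    "SELECT Id FROM Devices WITH (UPDLOCK, HOLDLOCK) WHERE Id = @id",
                    connection, transaction))
                {
                    command.Parameters.AddWithValue("@id", id);
                    await command.ExecuteScalarAsync();
                }

                try
                {
                    // Hold the lock until a response arrives (token signalled)
                    // or the timeout passes.
                    await Task.Delay(maxHold, responseReceived);
                }
                catch (TaskCanceledException)
                {
                    // Response received; fall through and release the lock.
                }

                transaction.Commit(); // Commit or Rollback both release the lock.
            }
        }
    }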
Here are a few points that you should consider:
sp_getapplock is not row-based, so I would assume it's not something you can use.
"application's main thread works as usual" -- but if you're locking rows, any update / delete operation on them will get stuck, so is that really working as usual?
Once the locking ends, is it OK for all the updates that were blocked to run right after that?
Is your blocker thread going to do updates too?
If both the application and the external device are doing updates, how can you be sure they are handled in the correct order / way?
I would say you need to design your application to work properly in this situation, not just try to add this kind of feature as an add-on.
The title mentions releasing the lock in another transaction, but that's not really explained in the question.
What happens if I call Thread.Abort() (in C#/.NET) on a thread that is currently executing an ODBC Command (specifically against MSSQL and Oracle, but also generally)? Will the command get cancelled? Will the DB server recognize there's nothing at the other end of the connection and kill the process (again, specifically MSSQL and Oracle)? Or do I need to explicitly call Cancel() on the connection first?
My goal is to ensure the safety of the database I'm connecting to if the worst should happen to my application (or the worst that I can catch and respond to, like system shutdowns etc).
I'd like to program defensively and try to issue a Cancel() if at all possible, but I'd like to know the behavior anyway.
If you want to ensure the SQL command is cancelled, why not use the TransactionScope.Dispose() method, or simply not Complete the transaction? It works above Thread, Process, and similar abstractions, and there will be no race between cancelling the thread and the SQL command.
Also, as was stated in the comments, your SQL driver may do its work on another thread and may even be unmanaged code, so cancelling the thread will not affect the SQL command; you really do need to Cancel() your connection or command.
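A sketch combining both suggestions (the stored procedure name, connection string, and surrounding class are placeholder assumptions): the scope rolls the work back unless Complete() is reached, and Cancel() can be called on the command from another thread.

    // Requires System.Data.SqlClient and System.Transactions.
    private volatile SqlCommand _current;

    public void RunWork(string connectionString)
    {
        using (var scope = new TransactionScope())
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("EXEC dbo.LongRunningProc", connection))
        {
            _current = command;
            connection.Open();

            // Typically throws a SqlException if Cancel() stops it mid-execution.
            command.ExecuteNonQuery();

            scope.Complete(); // Never reached on cancel/failure, so Dispose() rolls back.
        }
    }

    public void CancelWork()
    {
        _current?.Cancel(); // Safe to call from another thread.
    }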
I have a unique ObjectContext, on which I perform a SaveChanges(). This operation takes some time (~60 seconds).
This operation is executed in a thread.
My user have a "Cancel" button on the screen.
I'm able to stop the thread, but if SaveChanges() has already started I can't find any way to cancel it.
In fact, I found no way to access the underlying transaction (I also have an isolation level issue: this operation locks almost all tables in the database, so the application cannot be used by other users).
Would it work if I closed the underlying connection? EF won't be able to send a Rollback instruction, but I guess the database would perform it anyway, no?
I've seen that I could use TransactionScope, but it needs access to DTC, and my host is not very accommodating when it comes to editing server/network configuration.
So if an Entity Framework solution exists, I'd prefer that one.
Is your SaveChanges() saving multiple updates? Is it possible to apply and save each update individually?
Then, if you are inside a transaction and the user cancels, you'd have more granularity in your saves.
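For example, something along these lines (a sketch; itemsToUpdate, ApplyChanges, and cancellationToken are hypothetical names), using the ObjectContext's own connection for a local transaction so no DTC is involved:

    // Requires System.Data.Objects (EF 4 ObjectContext API).
    context.Connection.Open();
    using (var transaction = context.Connection.BeginTransaction())
    {
        try
        {
            foreach (var item in itemsToUpdate)
            {
                // A cancel between items abandons the loop and rolls everything back.
                cancellationToken.ThrowIfCancellationRequested();

                ApplyChanges(context, item); // hypothetical helper that modifies entities

                // Don't accept changes yet, so a rollback leaves the context consistent.
                context.SaveChanges(SaveOptions.DetectChangesBeforeSave);
            }

            transaction.Commit();
            context.AcceptAllChanges();
        }
        catch
        {
            transaction.Rollback(); // Also covers the cancellation path.
            throw;
        }
    }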
The Goal
Use an ADO.NET IDbConnection and IDbCommand to execute multiple commands at the same time, against the same database, given that the ADO.NET implementation is specified at runtime.
Investigation
The MSDN documentation for IDbConnection does not specify any threading limitations. The SqlConnection page has the standard disclaimer saying "Any instance members are not guaranteed to be thread safe." The IDbCommand and SqlCommand documentation is equally uninformative.
Assuming that no individual instance member is thread-safe, I can still create multiple commands from a connection (on the same thread) and then execute them concurrently on different threads.
Presumably this would still not achieve the desired effect, because (I assume) only one command can execute at a time on the single underlying connection to the database. So the concurrent IDbCommand executions would get serialized at the connection.
Conclusion
So this means we have to create a separate IDbConnection for each concurrent command, which is OK if you know you're using SqlConnection, because that supports pooling. If your ADO.NET implementation is determined at runtime, these assumptions cannot be made.
Does this mean I need to implement my own connection pooling in order to support performant multi-threaded access to the database?
You will need to manage thread access to your instance members, but most ADO.NET implementations manage their own connection pool. They generally expect that multiple queries will be run simultaneously.
I would feel free to open and close as many connections as necessary, and handle any exceptions that could be thrown if pooling were not available.
Here's an article on ADO connection pooling
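A provider-agnostic sketch of that open-and-close-freely approach (the provider invariant name, connection string, and query are placeholders): each thread or operation gets its own connection, and providers that support pooling will reuse the underlying physical connections.

    // Requires System.Data.Common.
    DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.SqlClient");

    using (DbConnection connection = factory.CreateConnection())
    {
        connection.ConnectionString = connectionString;
        connection.Open();

        using (DbCommand command = connection.CreateCommand())
        {
            command.CommandText = "SELECT COUNT(*) FROM Orders";
            object count = command.ExecuteScalar();
        }
    } // Returned to the provider's pool here, if it has one.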
If you create a connection on one thread, you shouldn't use it on a different thread. The same goes for commands.
However, you can create a connection on each of your threads and use those objects safely on their own thread.
Pooling is for when you create lots of short-lived connection objects. It means the underlying (expensive) database connections are re-used.
Nick