Hi, firstly thank you for your attention to this question.
Is there any way to implement a transaction like this in C#?
using (TransactionScope x = new TransactionScope())
{
    Thread A() => independent TransactionScope A (insert into table X)
    Thread B() => independent TransactionScope B (insert into table Y)
    Thread C() => independent TransactionScope C (insert into table Z)
    Thread.WaitAll(A, B, C)
    commit big transaction x / rollback big transaction x
}
Note that distributed transactions are currently not supported on .NET Core, only on .NET Framework.
In order to use a TransactionScope to span multiple threads, you'll need to use DependentClone to tie the threads into the parent TransactionScope.
The steps are:
Start a TransactionScope on your main / first thread
Just before creating each thread, use DependentClone to create a DependentTransaction, and then pass this DependentTransaction instance to the new thread.
On the child thread, you can use the TransactionScope(DependentTransaction) constructor overload to create a linked TransactionScope, in which the child thread can perform local transactions.
As the work on each child thread completes successfully, call Complete on both the thread's TransactionScope and its DependentTransaction.
On the main thread, wait until all threads have finished, then call Complete on the root TransactionScope; disposing it commits the transaction.
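Here's a minimal sketch of that pattern for one worker thread (the insert itself is elided; repeat the same pattern for threads B and C):

using System;
using System.Threading;
using System.Transactions;

using (var scope = new TransactionScope())
{
    // Clone on the main thread, before starting the worker.
    DependentTransaction dependent =
        Transaction.Current.DependentClone(DependentCloneOption.BlockCommitUntilComplete);

    var worker = new Thread(state =>
    {
        var dt = (DependentTransaction)state;
        try
        {
            using (var childScope = new TransactionScope(dt))
            {
                // ... insert into table X on this thread's own connection ...
                childScope.Complete();
            }
        }
        finally
        {
            dt.Complete(); // tell the parent this participant is done
            dt.Dispose();
        }
    });

    worker.Start(dependent);
    worker.Join();

    scope.Complete(); // commits only if every participant completed
}

DependentCloneOption.BlockCommitUntilComplete makes the root commit wait until every dependent clone has called Complete.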
There are some caveats too:
Using DependentTransaction on multiple threads will immediately require the use of MSDTC.
Using multiple threads under a large DTC transaction isn't going to make insertion into the same table any quicker (use SqlBulkCopy for that). You'll want to measure whether parallel inserts into different tables in the same database under a DTC transaction warrant the locking overhead or return any performance benefit.
If you're using async, then you'll need TransactionScopeAsyncFlowOption.Enabled
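For example, a minimal sketch (SomeDatabaseWorkAsync stands in for your own async database call):

using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Without the Enabled option, Transaction.Current would not flow across the await.
    await SomeDatabaseWorkAsync(); // hypothetical async method
    scope.Complete();
}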
More about TransactionScope here
It is a .NET application which works with an external device. When some entity (corresponding to a row in a table) wants to communicate with the device, the corresponding row in the SQL Server table should be locked until the device returns a result or SQL Server times out.
I need to:
lock a specific row in a table so that the row can be read, but cannot be deleted or updated
the locking mechanism should run in a separate thread so that the application's main thread works as usual
the lock should be released when a response arrives
the lock should be released after a while if no response is received
What is the best practice?
Is there any standardize way to accomplish this?
Should I:
run a new thread (task) in my C# code, begin a serializable transaction, select the desired row transactionally, and wait until either the time is up or a cancellation token is signaled?
or use some combination of sp_getapplock and ...etc?
You cannot operate on locks across transactions or sessions. That approach is not feasible.
You need to run one transaction and keep it open for the duration that you want the lock to persist.
The kind of parallelism technology you use is immaterial. An async method with async ADO.NET IO would be suitable. So would a separate LongRunning task.
You probably need to pass a CancellationToken to the transaction code that, when signaled, makes the transaction shut down. That way you can implement arbitrary shutdown conditions without cluttering the transaction code.
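A minimal sketch of that approach, assuming a hypothetical Devices table (adjust the names and hint strategy to your schema):

using System;
using System.Data.SqlClient;
using System.Threading;
using System.Threading.Tasks;

static async Task HoldRowLockAsync(string connectionString, int rowId, CancellationToken token)
{
    using (var connection = new SqlConnection(connectionString))
    {
        await connection.OpenAsync(token);

        using (var transaction = connection.BeginTransaction())
        {
            // UPDLOCK + HOLDLOCK holds an update lock on the row until the
            // transaction ends: other sessions can still read the row, but
            // cannot update or delete it.
            using (var cmd = new SqlCommand(
                "SELECT Id FROM Devices WITH (UPDLOCK, HOLDLOCK) WHERE Id = @id",
                connection, transaction))
            {
                cmd.Parameters.AddWithValue("@id", rowId);
                await cmd.ExecuteScalarAsync(token);
            }

            try
            {
                // Hold the lock until the caller signals (response arrived
                // or the timeout elapsed).
                await Task.Delay(Timeout.Infinite, token);
            }
            catch (OperationCanceledException)
            {
                // Signaled: fall through and end the transaction.
            }

            transaction.Commit(); // either Commit or Rollback releases the lock
        }
    }
}

Tying the timeout to the same token (e.g. via new CancellationTokenSource(TimeSpan)) lets both release conditions go through one path.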
Here are a few points that you should consider:
sp_getapplock is not row-based, so I would assume it's not something you can use
"application's main thread works as usual" -- but if you're locking rows, any update / delete operation will get stuck, so is that working as usual?
Once the lock is released, is it OK for all the updates that were blocked to run right after?
Is your blocker thread going to do updates too?
If the application and the external device are both doing updates, how can you be sure they are handled in the correct order / way?
I would say you need to design your application to work properly in this situation, not just try to add this kind of feature as an add-on.
The title mentions releasing the lock in another transaction, but that's not really explained in the question.
I have read that when DbContext.SaveChanges() runs, all the operations are automatically wrapped in a transaction for you behind the scenes. That is, if any of the operations during SaveChanges() fails, everything is rolled back, maintaining a consistent state.
However, one term I've come across several times is that the changes can run as part of an ambient transaction. What exactly does that mean?
My specific concern is: I have a multithreaded application, in which I have one context per operation. None of my DbContext objects are shared across different threads. Am I guaranteed that the operations of each DbContext.SaveChanges() will run in separate transactions?
In your case, yes, you are guaranteed that each DbContext.SaveChanges() will run in separate transactions.
The term "ambient" transaction refers to a transaction that was started higher-up in the call stack. So that this is a per-thread concept. See Transaction.Current and TransactionScope. It is a feature that allows you do to something like this:
using (TransactionScope scope123 = new TransactionScope())
{
    using (SqlConnection connection1 = new SqlConnection(connectString1))
    {
        // Do some work
        using (SqlConnection connection2 = new SqlConnection(connectString2))
        {
            // Do some more work
        }
    }
    scope123.Complete(); // without this, disposing the scope rolls back
}
Both of the above connections automatically enlist in the "ambient" transaction scope123. It sounds like Entity Framework now knows how to do this. But the TransactionScope won't cross threads, so you are okay. And it doesn't sound like you are explicitly creating transaction scopes anyway.
Based on this: http://msdn.microsoft.com/en-us/data/dn456843.aspx
By default, SaveChanges will open a new transaction, and dispose of it once complete. In EF 6, functionality was added such that you can override this behavior. So long as you don't go out of your way to reuse transactions - you should be okay.
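For illustration, the EF6 override it describes looks roughly like this (BlogContext and Blog are hypothetical):

using (var context = new BlogContext())
using (var transaction = context.Database.BeginTransaction())
{
    context.Blogs.Add(new Blog { Name = "First" });
    context.SaveChanges();  // runs inside the explicit transaction, not a new one

    context.Blogs.Add(new Blog { Name = "Second" });
    context.SaveChanges();  // same transaction again

    transaction.Commit();   // nothing is committed until here
}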
In my C# project, I have a SQL connection in MARS mode that is being used by multiple threads to perform CRUD operations. Some of these operations are required to be performed as a transaction. After I completed the data access module, I started testing and got an InvalidOperationException from one of the selects, stating that since the connection had an active transaction, the select itself needed to be in a transaction. Snooping around MSDN, I found the following remark:
Caution: When your query returns a large amount of data and calls BeginTransaction, a SqlException is thrown because SQL Server does not allow parallel transactions when using MARS. To avoid this problem, always associate a transaction with the command, the connection, or both before any readers are open.
I could easily create a method that would aggregate commands into a transaction; this would even allow me to have a timer thread committing transactions at a regular interval. But is this the right way? Should I instead hold back commands that don't need a transaction until the active transaction is committed?
I would stay away from MARS.
See:
used by multiple threads to perform CRUD operations
That screams "one connection per thread, each with its own transaction" unless you have a very rare case here. This absolutely does not sound like a valid use case for MARS.
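A minimal sketch of that per-thread pattern (table and column names are hypothetical); connection pooling keeps the repeated opens cheap:

using System.Data.SqlClient;

// One connection and one transaction per unit of work, instead of MARS.
static void InsertRow(string connectionString, string value)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        using (var cmd = new SqlCommand(
            "INSERT INTO SomeTable (SomeColumn) VALUES (@v)", // hypothetical table
            connection, transaction))
        {
            cmd.Parameters.AddWithValue("@v", value);
            cmd.ExecuteNonQuery();
            transaction.Commit();
        }
    }
}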
I currently have a C# process that is saving millions of records to Oracle, currently all in a single thread and within a transaction. I am interested in doing some parallel processing on this where I can split the data across threads. Will an ADO.NET/Oracle transaction work properly across the threads? Do I just create the transaction on the main thread, or do I need to also create a sub-transaction for each thread?
Do you have any experience with this providing performance improvements, or is the bottleneck Oracle itself?
If your code is, essentially:
for each record
add record to database
Then it's unlikely that adding multiple threads is going to be of much help. You might be able to get a performance increase with two threads, where one is gathering and transmitting a record while the other's record is being inserted. But it's unlikely that the overlap would be huge.
You're much better off doing something like:
while not end of records
add 1,000 records to block
call stored proc to insert 1,000 records
That should speed things up quite a bit because you reduce the amount of back-and-forth between client and server.
The way to speed it up beyond that probably isn't to create multiple threads that run the loop, but rather to issue an asynchronous call so that the database can be doing the inserts while you're creating the next block of records. Something like this:
while not end of records
add 1,000 records to block
wait for pending asynchronous call to complete
issue asynchronous call to insert 1,000 records
There are many different ways to issue that asynchronous call. I would recommend using Tasks.
Edit
It occurs to me that you might have a problem trying to keep a transaction alive across asynchronous calls. If so, then do the database inserts on the main thread and have the asynchronous task fill the buffer. It looks like this:
start transaction
buffer = fill_buffer(); // this is synchronous
while buffer.count > 0
{
task = start asynchronous task to fill the next buffer
call database to insert records from buffer
buffer = task.result // waits for task to complete
}
end transaction
This technique ensures that all database calls for the transaction occur on the main thread.
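A rough C# translation of the pseudocode above; FillBuffer, InsertRecords, and the Record type are hypothetical helpers standing in for your own record source and insert logic:

using System.Collections.Generic;
using System.Data;
using System.Threading.Tasks;

static void BulkInsert(IDbConnection connection)
{
    using (IDbTransaction transaction = connection.BeginTransaction())
    {
        List<Record> buffer = FillBuffer();   // synchronous first fill

        while (buffer.Count > 0)
        {
            // Start filling the next buffer in the background...
            Task<List<Record>> fillTask = Task.Run(() => FillBuffer());

            // ...while this thread performs the inserts, so every database
            // call for the transaction stays on the main thread.
            InsertRecords(connection, transaction, buffer);

            buffer = fillTask.Result;         // waits for the task to complete
        }

        transaction.Commit();
    }
}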
My suggestion would be, if you can (i.e., your workplace allows it), to write this as a PL/SQL procedure using bulk inserts instead of relying on a middleware app. The improvement will be huge as long as it is coded well.
If you have to use middleware (.NET), I recommend you use ODP.NET, since Microsoft's ADO.NET provider for Oracle is deprecated (if I am not mistaken). In addition, ODP.NET will give you a boost in performance because it uses Oracle 11g's new features and improvements.
As far as middleware goes, I have never done any parallel threading against Oracle, but I suspect you will run into transaction issues (since you are inserting, and given the way relational databases work). I know it is possible, but for the extra effort, it is just better to move the processing to the database and let Oracle do its magic.
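If it helps, here is a hedged sketch of ODP.NET array binding, which sends many rows per round trip (managed driver namespace; table and column names are hypothetical):

using System.Data;
using Oracle.ManagedDataAccess.Client;

using (var connection = new OracleConnection(connectionString))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    using (var cmd = new OracleCommand(
        "INSERT INTO records (id, payload) VALUES (:id, :payload)", connection))
    {
        int[] ids = { 1, 2, 3 };
        string[] payloads = { "a", "b", "c" };

        cmd.ArrayBindCount = ids.Length; // one execute, many rows
        cmd.Parameters.Add(":id", OracleDbType.Int32, ids, ParameterDirection.Input);
        cmd.Parameters.Add(":payload", OracleDbType.Varchar2, payloads, ParameterDirection.Input);

        cmd.ExecuteNonQuery();
        transaction.Commit();
    }
}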
I have a SQL Server table that is updated by a batch job every 5 minutes using BackgroundWorker multithreaded calls. I am also using a thread lock when inserting into the table via the batch job. The same data can be accessed by users of the application simultaneously. My business logic is in C#. What is the best and most optimized solution for this? Can I use a thread lock in this situation or not?
What's the problem you have (or that you anticipate)?
SQL Server is made and optimized for handling lots of concurrent connections and users updating, inserting, reading data. Just let it handle the work!
When your background worker thread updates the table, it will take exclusive (X) locks on those rows that it updates - but only on those rows (as long as you don't update more than 5000 rows at once).
During that time, any other row in the table can be read: no problem at all, no deadlock in sight.
The problem could occur if you update more than 5,000 rows in a single transaction: then SQL Server performs lock escalation to avoid having to keep track of too many locks, and it locks the entire table with an exclusive (X) lock. Until the end of that update transaction, no reads are possible anymore, but those are normal transactional locks, NOT deadlocks.
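If the job genuinely needs to touch more rows than that per run, one common workaround (sketched here with hypothetical table and column names) is to batch the update so each statement, and thus each autocommit transaction, stays below the escalation threshold:

using System.Data.SqlClient;

int affected;
do
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var cmd = new SqlCommand(
            @"UPDATE TOP (4000) dbo.BatchTable
              SET Processed = 1
              WHERE Processed = 0", connection))
        {
            // Each ExecuteNonQuery runs as its own transaction, so no
            // single transaction holds enough locks to trigger escalation.
            affected = cmd.ExecuteNonQuery();
        }
    }
} while (affected > 0);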
So where is your problem / your issue?