Should transactions be handled in .NET or in SQL Server? - c#

I inherited an application that uses .NET/C# as the front end and SQL Server 2008 as the back end. I found that transactions are ALWAYS handled in the C# code; it seems to be an unwritten rule on this project that we shouldn't use transactions within stored procedures.
I personally feel that transactions should be handled within the stored procedure, as that would give more control over the code. We might have a lot of validation happening within the script, during which we don't need an open transaction. We only need to open a transaction just before we do an insert/update/delete, and we can close it as soon as possible.
I'm looking for answers that would help me understand the best practice for handling transactions, and when exactly we should opt for transactions in a stored procedure versus in C#.

There isn't a hard-and-fast rule, but I see several reasons to control transactions from the business tier:
Communication across data-store boundaries. Transactions don't have to be against an RDBMS; they can be against a variety of entities.
The ability to roll back or commit a transaction based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about the transaction count inside each procedure.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures cumbersome when jumping to a rollback/commit.
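For illustration, a minimal sketch of that using-block style with TransactionScope (the connection string and table names here are made up):

    using System.Data.SqlClient;
    using System.Transactions;

    var connectionString = "..."; // placeholder

    // Everything inside the scope commits or rolls back as a unit.
    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open(); // the connection enlists in the ambient transaction

        using (var insertOrder = new SqlCommand(
            "INSERT INTO dbo.Orders (CustomerId) VALUES (@customerId)", connection))
        {
            insertOrder.Parameters.AddWithValue("@customerId", 42);
            insertOrder.ExecuteNonQuery();
        }

        using (var insertAudit = new SqlCommand(
            "INSERT INTO dbo.OrderAudit (Note) VALUES (@note)", connection))
        {
            insertAudit.Parameters.AddWithValue("@note", "order created");
            insertAudit.ExecuteNonQuery();
        }

        // Reaching this line commits both inserts; an exception thrown earlier
        // causes the scope to dispose without Complete(), rolling back.
        scope.Complete();
    }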
We might have a lot of validation happening within the script, during which we don't need an open transaction. We only need to open a transaction just before we do an insert/update/delete, and we can close it as soon as possible.
This may or may not be a problem, depending on how many transactions are being opened (it's not clear whether this is a single job or a procedure that runs with high concurrency). I would suggest looking at what locks are being placed on objects and how long those locks are held.
Keep in mind that validation may itself need to lock: what if the data changes between the time you validated it and the time the action occurs?
If it is a problem, you could break the offending procedure into two procedures and call one from outside the TransactionScope, as sketched below.
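A rough sketch of that split, assuming hypothetical procedures dbo.ValidateOrder and dbo.InsertOrder:

    using System.Data;
    using System.Data.SqlClient;
    using System.Transactions;

    public static class OrderWorkflow
    {
        public static void PlaceOrder(string connectionString, int customerId)
        {
            // Validation runs on its own connection, outside any transaction,
            // so it holds no long-lived locks.
            using (var connection = new SqlConnection(connectionString))
            using (var validate = new SqlCommand("dbo.ValidateOrder", connection))
            {
                validate.CommandType = CommandType.StoredProcedure;
                validate.Parameters.AddWithValue("@customerId", customerId);
                connection.Open();
                validate.ExecuteNonQuery(); // assume it raises an error when invalid
            }

            // Only the write happens inside the transaction,
            // keeping the lock window as short as possible.
            using (var scope = new TransactionScope())
            using (var connection = new SqlConnection(connectionString))
            using (var write = new SqlCommand("dbo.InsertOrder", connection))
            {
                write.CommandType = CommandType.StoredProcedure;
                write.Parameters.AddWithValue("@customerId", customerId);
                connection.Open(); // enlists in the ambient transaction
                write.ExecuteNonQuery();
                scope.Complete();
            }
        }
    }

The caveat above still applies: anything that must not change between validation and the write has to be re-checked (or locked) inside the transaction.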

Related

How can I add nested transactions in NHibernate?

I have a use case where I am processing multiple configurations within a function. Each configuration is processed in a separate transaction, and that transaction gets committed if everything is fine. Now, if anything goes wrong while processing a later configuration, I want to revert all the committed transactions. Can anyone please help me with a code snippet? My application is on .NET.
To the best of my knowledge, NH doesn't support nested transactions.
You can use a transaction at the root of your use case, or at any point along the way, but it's all or nothing, AFAIK.
It's not a matter of using nested transactions. It's a matter of ensuring that you have a transaction that surrounds all the relevant code, so it should be opened/closed "higher up". Each individual section should then either not care about transactions at all, or it should "piggy-back" on any existing transaction and only open a new one when one does not already exist.
As a guideline, transaction management is a cross-cutting concern that should be handled in wrapper methods and applied as needed by the application, not hidden away in specific low-level support routines.
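A minimal sketch of that piggy-backing using TransactionScope (the Configuration type and the method bodies are placeholders): TransactionScopeOption.Required joins the ambient transaction when one exists and only starts a new one when none does.

    using System.Collections.Generic;
    using System.Transactions;

    public class Configuration { /* placeholder for whatever is processed */ }

    public static class ConfigurationProcessor
    {
        static void ProcessConfiguration(Configuration config)
        {
            // Required joins the caller's transaction when one exists,
            // and only starts a new one when none does.
            using (var scope = new TransactionScope(TransactionScopeOption.Required))
            {
                // ... save the entities for this configuration ...
                scope.Complete();
            }
        }

        public static void ProcessAll(IEnumerable<Configuration> configs)
        {
            // The scope "higher up" owns the real transaction; the inner scopes
            // merely enlist in it, so nothing commits until outer.Complete().
            using (var outer = new TransactionScope(TransactionScopeOption.Required))
            {
                foreach (var config in configs)
                    ProcessConfiguration(config);
                outer.Complete();
            }
        }
    }

With NHibernate specifically, the session has to be configured to enlist in ambient transactions, so treat this as a sketch of the control flow rather than a drop-in solution.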

Is my SQL transaction taking too long?

There is something that worries me about my application. I have a SQL query that does a bunch of inserts into the database across various tables. I timed how long it takes to complete the process; it takes about 1.5 seconds. At this point I'm not even done developing the query, and I still have more inserts to add, so I fully expect this process to take even longer, perhaps up to 3 seconds.
Now, it is important that all of this data be consistent and finish either completely or not at all. So what I'm wondering is: is it OK for a transaction to take that long? Doesn't it lock up the tables, so that selects, inserts, updates, etc. cannot run until the transaction is finished? My concern is that if this query runs frequently, it could lock up the entire application so that certain parts of it become either incredibly slow or unusable. With a low user base I doubt this would be an issue, but if my application gains some traction, this query could potentially run a lot.
Should I be concerned about this, or am I missing something about how the database will actually behave? I'm using a SQL Server 2014 database.
To note, I timed this using the C# Stopwatch class, started immediately before the transaction starts and stopped right after the changes are committed, so it's about as accurate as it can be.
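That measurement looks roughly like this (assuming a TransactionScope-style transaction; the questioner didn't show the actual code):

    using System;
    using System.Diagnostics;
    using System.Transactions;

    var stopwatch = Stopwatch.StartNew();

    using (var scope = new TransactionScope())
    {
        // ... the batch of INSERTs across the various tables ...
        scope.Complete();
    }

    stopwatch.Stop();
    Console.WriteLine("Transaction took {0} ms", stopwatch.ElapsedMilliseconds);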
You're right to be concerned, as a transaction will lock the rows it has written until it commits, which can certainly cause problems such as deadlocks and temporary blocking that slows system response. But there are various factors that determine the potential impact.
For example, you probably don't need to worry much if your users only update and query their own data, and your tables are indexed to support both the read and the write query criteria. That way one user's row locks will largely not affect other users, depending on how you write your code, of course.
If your users share data, and you want to support efficient searching across multiple users' data even with multiple concurrent updates, then you may need to do more.
Some general concepts:
-- Ensure your transactions always write to tables in the same order; this avoids deadlocks between them.
-- Keep your transactions as short as possible by preparing the data to be written before you start the transaction.
-- If this is a new system (and even if it's not), definitely consider enabling Snapshot Isolation (SI) and/or Read Committed Snapshot Isolation (RCSI) on the database. SI (when explicitly set on the session) allows your read queries not to be blocked by concurrent writes; RCSI makes that the default for all read queries. But read this to understand both the benefits and the gotchas of each isolation level: https://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
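As a sketch of the "explicitly set on the session" part: once ALLOW_SNAPSHOT_ISOLATION is enabled on the database, a C# caller can request snapshot isolation like this (connection string, table, and query are placeholders):

    using System;
    using System.Data;
    using System.Data.SqlClient;

    var connectionString = "..."; // placeholder

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // Reads under this transaction see a consistent snapshot and are
        // not blocked by concurrent writers.
        using (var transaction = connection.BeginTransaction(IsolationLevel.Snapshot))
        using (var query = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.Orders", connection, transaction))
        {
            var count = (int)query.ExecuteScalar();
            Console.WriteLine(count);
            transaction.Commit();
        }
    }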
I think it depends on your code: how effectively you use loops, and how your SELECT queries and other statements are written.

Do SQL Server stored procedures perform better in network clusters?

From what I've read, there appear to be only marginal performance benefits to stored procedures versus simply building the commands in C# and executing them from the program's code, at least on machines that host both the application and the database engine (and when the procedures are simple). Most people seem to think it's a 'preference issue' and add a few other minor benefits to justify their case.
However, one thing I couldn't find any information on is the benefit of a stored procedure when the database engine is located on a separate physical machine from the main application.
If I am not mistaken, in a server farm, wouldn't a stored procedure offload processing from the main application server's CPU and have the primary processing done on the database server's CPU instead? Or is that work done on the database engine's CPU anyway, once the C# libraries have 'built' the commands for the engine to process?
Specifically, I have a long-running transaction that I could implement as multiple calls in a C# transaction block, but I suspect a stored procedure would in fact have a large performance benefit by reducing the network calls to the database engine, as well as guaranteeing the processing is not done on the main application server.
Is this true?
Performance gains from a stored procedure (versus something like Dapper or an O/RM like Entity Framework) can vary from nearly identical to a very noticeable improvement. I don't think your question can be answered without seeing the code that would be translated into a stored procedure.
Having said that, in my experience, making a single stored procedure call instead of issuing multiple statements from the application code would likely be faster, yes.
If the SP is just a simple query (i.e., one SELECT statement), the performance gain is that an SP is precompiled; while the query is actually running, you should not see any difference between an ad hoc query and an SP.
I'm not sure of the effect if the SP is more complicated, because that would depend on the query.
The more important benefit of an SP is that all the data stays in the DBMS instead of being sent back and forth to the client. If you are dealing with a large amount of data, the benefit is more evident. The difference grows if your DB is located on a different machine, and even more so if the connection between them is slow.
On the other hand, you must consider that an SP is usually not compiled to machine code, so if the SP implements very complex logic, it could be faster to implement that logic on the client.
You should also consider that moving business logic to the server is not great for code maintenance: you could be adding technical debt by implementing in the DB something that should be in your client code.
So there isn't a solution valid for all seasons, but usually a well-written SP is faster than the same code running on the client.
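For illustration, the single-call shape discussed above, where a hypothetical procedure dbo.ProcessOrder runs the whole multi-statement workflow server-side:

    using System.Data;
    using System.Data.SqlClient;

    var connectionString = "..."; // placeholder

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand("dbo.ProcessOrder", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@orderId", 42);

        connection.Open();
        // One round trip; intermediate data never leaves the server.
        command.ExecuteNonQuery();
    }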
There are a few issues at play here. As others have said, it depends. For a raw SELECT statement the difference will be barely noticeable. If there's a hugely complex query, an SP can save a lot of repetitive parsing. If there's a lot of intermediate data, an SP keeps the data local, reducing network traffic. And if your DB server has a higher spec than the client, it might run faster simply due to CPU horsepower.
The downsides can be things like bogging down the DB server for everyone with processing that could be done on the client; this is generally a risk if you're running an underpowered SQL Server. Another subtle side to this is that licensing costs for a multi-core DB server can be impressive: your dollars per cycle on a SQL Server box can be many times what they are on your client.

NHibernate large transactions, flushes vs. locks

I am facing the challenge of maintaining an incredibly large transaction using NHibernate. Say I am saving a large number of entities. If I do not flush every N entities, say 10,000, then performance is killed by an overcrowded NH session. If I do flush, I take locks at the DB level, which, in combination with the read committed isolation level, does affect the running application. Also note that in reality I import an entity whose business logic is one of the hearts of the system, and importing it touches around 10 tables. That makes a stateless session a bad idea, due to the manual maintenance of cascades.
Moving the BL to a stored procedure is a big challenge for two reasons: there is already complicated OO business logic in the application's domain classes, and duplicated BL would be introduced.
Ideally, I would want to flush the session to some file and, only once the preparation of the data is complete, execute its contents. Is that possible?
Any other suggestions/best practices are more than welcome.
Your scenario is a typical ORM batch problem. In general, no ORM is meant to be used for work like that: if you want high batch-processing performance (without long-held locks and possible deadlocks), you should not use the ORM to insert thousands of records.
Instead, use native batch inserts, which will always be a lot faster (like SqlBulkCopy for MSSQL, sketched below).
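A minimal SqlBulkCopy sketch (the table and column names are made up):

    using System.Data;
    using System.Data.SqlClient;

    var connectionString = "..."; // placeholder

    // Build (or otherwise obtain) the rows to import.
    var table = new DataTable();
    table.Columns.Add("Name", typeof(string));
    for (var i = 0; i < 10000; i++)
        table.Rows.Add("entity-" + i);

    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.Entities";
            bulkCopy.BatchSize = 1000;
            bulkCopy.WriteToServer(table); // streams all rows in bulk
        }
    }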
Anyway, if you do want to use NHibernate for this, try to make use of the batch size setting.
Call Save or Update on all your objects and only call session.Flush once at the end. This will create all your objects in memory first...
Depending on the batch size, NHibernate should try to create insert/update batches of that size, meaning far fewer round trips to the database and therefore fewer locks, or at least locks that aren't held as long...
In general, your operations should only lock the database from the moment your first insert statement executes on the server, if you use normal transactions. It might work differently with TransactionScope. See the sketch below.
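A sketch of that suggestion, assuming NHibernate's adonet.batch_size setting (the entity collection and the rest of the configuration are placeholders):

    using System.Collections.Generic;
    using NHibernate.Cfg;

    var configuration = new Configuration();
    configuration.SetProperty("adonet.batch_size", "100");
    // ... mappings, connection string, and the rest of the configuration ...

    var sessionFactory = configuration.BuildSessionFactory();

    IEnumerable<object> entitiesToImport = new List<object>(); // your prepared entities

    using (var session = sessionFactory.OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        foreach (var entity in entitiesToImport)
            session.Save(entity);

        session.Flush();      // SQL is issued here, grouped into batches of 100
        transaction.Commit(); // locks are held only between the flush and the commit
    }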
Here is some additional reading on how to improve batch processing:
http://fabiomaulo.blogspot.de/2011/03/nhibernate-32-batching-improvement.html
NHibernate performance insert
http://zvolkov.com/clog/2010/07/16?s=Insert+or+Update+records+in+bulk+with+NHibernate+batching

Prevent inserting data into a table at the same time

I'm working on an online sales web site, using C# 4.0 and SQL Server 2008, and I want to prevent users from simultaneously inserting into tables like dbo.orders. How can I do that?
Inserts will not be a problem, but updates can be. The term you need to research is database concurrency. There are four basic concurrency models you can implement, each with its own pros and cons; some are better suited to certain situations, and there are hundreds of articles on the web on the subject.
Are you trying to solve this in C# code or in SQL? Because in SQL it's simple: if you add BEGIN TRAN at the beginning of the stored procedure and COMMIT at the end, this will act like a lock, preventing concurrent executions and effectively serializing the requests. So if there are two inserts, they will be executed one after another. One thing to remember is that it is a blocking operation, i.e. the second insert won't start until the first one has finished (regardless of whether it succeeded or not).
In your Add method you can use locking with the lock keyword; this will allow one thread at a time, as sketched below.
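A sketch of that suggestion (the Order type and repository are placeholders). Note the caveat: lock only serializes threads within a single process, so it will not help across multiple web servers or worker processes, where a database-side mechanism is needed.

    public class Order { /* placeholder */ }

    public class OrderRepository
    {
        // One shared gate for every caller in this process.
        private static readonly object SyncRoot = new object();

        public void Add(Order order)
        {
            lock (SyncRoot)
            {
                // perform the INSERT into dbo.orders here;
                // only one thread at a time gets past this point
            }
        }
    }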
