Trouble with duplicating a transaction's functionality - C#

I'm updating an existing program that is working and in use in a live environment. It saves customers and orders, then exports them to an old database as well. All of the reporting is still done in the old system while the new system's reporting is in development, which is why these all need to be exported.
This program has a built-in C# TransactionManager that is used to group multiple calls from C# to SQL within one transaction. Whenever I try to duplicate this, I get errors and can't get it working.
Here's the code that is in place, working:
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    //
    // Update the customer. If the customer doesn't exist, then create a new one.
    //
    this.SaveCustomer(Order);

    //
    // Save the Order.
    //
    this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);

    //
    // Save the Order notes and the customer notes.
    //
    this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
    this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);

    //
    // Export the Order if it's new.
    //
    this.ExportOrder(Order, lastSavedVersion);

    //
    // Commit the transaction.
    //
    trx.Commit();
}
All of these functions just format the data and pass parameters to stored procedures in the DB that perform the SELECT / INSERT / UPDATE operations.
The SaveCustomer stored procedure saves the customer to the new database.
The SaveCorporateOrder stored procedure reads information that was written by the SaveCustomer stored procedure and uses it to save the order to the new database.
The ExportOrder stored procedure reads information that was written by both of the previous ones and exports the order to the old database.
Each of these stored procedures contains code that starts a new transaction if @@TRANCOUNT = 0, with a COMMIT statement at the end. It appears that none of these are being used because of the transaction opened in C#, but there is no code that passes transaction or connection information to the stored procedures that I can see. This is working and in use on a SQL Server 2005 server.
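For what it's worth, my assumption is that the TransactionManager shares a single connection and transaction across all of these calls, so every stored procedure sees @@TRANCOUNT > 0 and skips its own BEGIN TRANSACTION. A minimal sketch of what I imagine it does (the real implementation isn't visible to me, so everything beyond ITransactionInfo, BeginTransaction, and Commit is invented):

using System;
using System.Data.SqlClient;

public interface ITransactionInfo : IDisposable
{
    void Commit();
}

public class TransactionManager
{
    private readonly string connectionString;

    public TransactionManager(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public ITransactionInfo BeginTransaction()
    {
        var connection = new SqlConnection(this.connectionString);
        connection.Open();

        // One connection, one transaction. Every SqlCommand the services run
        // must be attached to this same connection and transaction, which is
        // why the stored procedures see @@TRANCOUNT > 0 and skip their own
        // BEGIN TRANSACTION / COMMIT.
        return new TransactionInfo(connection, connection.BeginTransaction());
    }

    private sealed class TransactionInfo : ITransactionInfo
    {
        private readonly SqlConnection connection;
        private readonly SqlTransaction transaction;
        private bool committed;

        public TransactionInfo(SqlConnection connection, SqlTransaction transaction)
        {
            this.connection = connection;
            this.transaction = transaction;
        }

        public void Commit()
        {
            this.transaction.Commit();
            this.committed = true;
        }

        public void Dispose()
        {
            if (!this.committed)
            {
                this.transaction.Rollback(); // disposing without Commit rolls back
            }
            this.connection.Dispose();
        }
    }
}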
When I try to build this and use it in my development environment, which uses SQL Server 2008 R2, I get errors like
"Uncommittable transaction is detected at the end of the batch"
and
"The server failed to resume the transaction"
It appears that each one is starting its own transaction and is unable to read the data from the previous, uncommitted transaction instead of seeing that it is in the same transaction. I don't know whether the different SQL Server version could cause this to work differently, but the exact same code works in the live install and not in my dev environment.
Any ideas, or even a direction on where to look next, would be wonderful!
Thanks!
-Jacob

I think the problem is that the transaction fails and is never rolled back. You don't have a rollback call for the situation where any of the SQL queries fails. Have you checked those queries?
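For example, something like this (a sketch; it assumes your ITransactionInfo exposes a Rollback method, which the question doesn't show):

using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    try
    {
        this.SaveCustomer(Order);
        this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);
        this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
        this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);
        this.ExportOrder(Order, lastSavedVersion);

        trx.Commit();
    }
    catch
    {
        trx.Rollback(); // assumption: ITransactionInfo has a Rollback method
        throw;          // rethrow so the caller still sees the failure
    }
}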

Database operations and calling API in single transaction

We have two systems: an Identity Management System (IAM) that handles authentication, and an application (say UserApp, a website) that users access. When a user registers, the user account is created in both the Identity Management System and the UserApp database, and the data should stay in sync between these two systems. So the current code does the following when a user registers:
The data is inserted into the database (using Entity Framework)
An account is created in IAM using an API call
Scenarios:
If the database insert fails, then the API is not called
If the database insert succeeds but the API call fails, we delete the record. The question is what needs to be done if the delete also fails, since then the data is out of sync.
What is the best way to handle this? The application is developed in C# with SQL Server.
You could make use of database transactions. You could create a database connection and open it. The first statement should be BEGIN TRANSACTION. This means any subsequent SQL INSERTs/UPDATEs you execute won't be committed until you run the statement COMMIT TRANSACTION. If you want to roll back the transaction, you would call ROLLBACK TRANSACTION.
So you could:
Step 01: BEGIN TRANSACTION
Step 02: Perform the INSERT statement.
If the SQL statement succeeds, you know the database is up and accessible and this step has succeeded. It's just that the row has not been committed to the database yet.
Step 03: On success of the INSERT statement, call the API
Step 04: If the API SUCCEEDS, then COMMIT TRANSACTION.
Step 05: If the API FAILS or there is an exception, then ROLLBACK TRANSACTION
That way:
If the SQL statement fails in any way (DB down, T-SQL error, etc.), you exit early
If the API call fails in any way, you exit early
You only commit the SQL statement when both the INSERT and the API succeed
If the COMMIT fails:
There is a slim chance the COMMIT itself fails due to power loss or a network outage at that second, etc. In that case you would need to call the API to remove/deactivate the user you just created.
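Since the question mentions Entity Framework, here is a minimal sketch of that flow using EF6's explicit transaction API (UserAppContext, Users, newUser, and identityApi.CreateAccount are placeholder names, not from the question):

using (var context = new UserAppContext())
using (var dbTx = context.Database.BeginTransaction())
{
    try
    {
        // Step 02: INSERT - written to the database but not yet committed
        context.Users.Add(newUser);
        context.SaveChanges();

        // Step 03: call the IAM API while the row is still uncommitted
        identityApi.CreateAccount(newUser);

        // Step 04: both succeeded, so make the insert permanent
        dbTx.Commit();
    }
    catch
    {
        // Step 05: the API failed or an exception occurred - undo the insert
        dbTx.Rollback();
        throw;
    }
}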

C# localdb SHRINKDATABASE command from C# code

I'm trying to shrink a LocalDB with Visual Studio 2017 Community. I have a Win7 Windows Forms client application with a small database (~10 MB of data) that results in a 150 MB database file due to LocalDB free-space allocation.
I found this answer (Executing Shrink on SQL Server database using command from linq-to-sql) that suggests using the following code:
context.Database.ExecuteSqlCommand(
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried issuing COMMIT beforehand, but with no success. Any idea how to effectively shrink the database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command.".
This is what's causing your problem: you cannot call DBCC SHRINKDATABASE inside a transaction, which isn't really surprising given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
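One caveat worth double-checking: DBCC SHRINKDATABASE takes a database name (or id) rather than a file path, so if DatabasePathName really returns a filesystem path, the parameter value may need to be the database name instead. A sketch (the name below is a placeholder):

context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@db)",
    new SqlParameter("@db", "MyLocalDbName") // placeholder: your database's name
);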

I can exec my stored procedure in SQL Server Management Studio but my webservice cannot

I have a webservice method that executes a tabular stored procedure. This sp takes one minute to execute completely. I call that webservice method remotely and get results.
In normal situations everything is OK and I get results successfully.
But when the server is busy, the webservice cannot execute the sp (I traced it with SQL Profiler and nothing reaches the profiler), yet I can execute the same sp manually in SQL Server Management Studio.
When I restart SQL Server, the problem is solved and the webservice can execute the sp again.
Why can the webservice not execute the sp in busy situations while I can do it in SQL Server Management Studio?
How can this situation be explained, and how can I solve it?
Execute sp_who and see what is happening; my guess is that it is being blocked - perhaps your "isolation level" is different between SSMS and the web service.
Equally, it could well be that the connection's SET options are different between SSMS and the web service, which can lead to certain changes in behavior - for example, computed/persisted/indexed values are very susceptible to SET options: if the caller's options aren't compatible with the options that were in effect when the column was created, the server can be forced to table-scan and recalculate them all instead of using the pre-indexed values. This also applies to hoisted xml values.
A final consideration is parameter sniffing: if the plan cache gets generated for yourproc 'abc', which has very different stats than yourproc 'def', it can run very bad query plans. The OPTIMIZE FOR / OPTIMIZE FOR UNKNOWN hints can help with this.
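If you want to run that check from code rather than SSMS, a minimal sketch (assumes a valid connection string; sp_who's blk column holds the blocking spid, "0" when not blocked):

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("EXEC sp_who", conn))
{
    conn.Open();
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // blk is the spid of the blocking session ("0" means not blocked)
            var blockedBy = reader["blk"].ToString().Trim();
            if (blockedBy != "0")
            {
                Console.WriteLine("spid {0} is blocked by spid {1}",
                    reader["spid"], blockedBy);
            }
        }
    }
}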

Inserting different data simultaneously from different clients

I created a Windows Forms application in C# and an MS SQL Server 2008 Express database, and I use LINQ to SQL queries to insert and edit data.
The database is hosted on a server running Windows Server 2008 R2 (Standard edition). Right now I have the application running on five different computers, and users are authenticated through Active Directory.
One complaint reported to me was that sometimes when different data is entered and submitted, that data does not appear in the application's listing. I use a try/catch block to report errors, but no errors appear in the application; the data simply disappears.
The id of the table records is an auto-increment integer. Since I have to tell users the registration number that was entered, I use the following piece of code:
try
{
    ConectionDataContext db = new ConectionDataContext();
    Table_Registers tr = new Table_Registers();
    tr.Name = textbox1.Text;
    tr.sector = textbox2.Text;
    db.Table_Registers.InsertOnSubmit(tr);
    db.SubmitChanges();
    int numberRegister = tr.NumberRegister;
    MessageBox.Show(numberRegister.ToString());
}
catch (Exception e)
{
    // Show the failure instead of silently swallowing it
    MessageBox.Show(e.Message);
}
I wonder if I'm doing something wrong. If you know of any article on the web that explains how to insert data from different clients into MS SQL Server databases, please let me know.
Thanks.
That's what a database server DOES: "insert data simultaneously from different clients".
One thing you can do is to consider "transactions":
http://www.sqlteam.com/article/introduction-to-transactions
Another thing you can (and should!) do is to ensure as much work as possible is done on the server, by using "stored procedures":
http://www.sql-server-performance.com/2003/stored-procedures-basics/
You should also check the SQL Server error logs, especially for potential deadlocks. You can see these in your SSMS GUI, or in the "LOG" directory under your SQL Server installation.
But the FIRST thing you need to do is determine exactly what's going on. Since you've only got MSSQL Express (which is not a good choice for production use!), perhaps the easiest approach is to create a "log" table: insert an entry in your log every time you insert a row in the real table, and see if stuff is "missing" (i.e. you have more entries in the log table than in the data table).
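A sketch of that log-table idea against the same DataContext (Table_RegistersLog and its columns are invented for illustration; any mapped log table will do):

ConectionDataContext db = new ConectionDataContext();

Table_Registers tr = new Table_Registers();
tr.Name = textbox1.Text;
tr.sector = textbox2.Text;
db.Table_Registers.InsertOnSubmit(tr);

// Mirror the insert into the log table so the two can be compared later
Table_RegistersLog logEntry = new Table_RegistersLog();
logEntry.Name = tr.Name;
logEntry.LoggedAt = DateTime.Now;
db.Table_RegistersLog.InsertOnSubmit(logEntry);

// SubmitChanges runs both inserts in a single implicit transaction,
// so the log row and the data row either both appear or neither does
db.SubmitChanges();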

ActiveRecordMediator.SaveAndFlush locking SQL Server table

I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class where we have methods defined for Saving and Updating entities, these methods generally call directly onto the ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site so I assume it is maintaining sessions on a request basis, but I cannot be sure.
I believe I have figured out why this was occurring.
NHibernate, as used in our environment, holds a transaction open for the entire request and only commits it when the session is finally disposed.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the server-side save of the entity in a using block:
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.
