ActiveRecordMediator.SaveAndFlush locking SQL Server table - c#

I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class with methods for saving and updating entities; these methods generally delegate directly to ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site, so I assume it is maintaining sessions on a per-request basis, but I cannot be sure.

I believe I have figured out why this was occurring.
In our environment, NHibernate holds a transaction open for the entire request and only commits it when the session is disposed.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the server-side save of the entity in a transaction scope:
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.
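For robustness it may also be worth voting to roll back when the save fails, so the scope's outcome is never ambiguous. A minimal sketch, assuming Castle ActiveRecord's TransactionScope API, where VoteRollBack is the counterpart to VoteCommit:
using (var ts = new TransactionScope(TransactionMode.New))
{
    try
    {
        ActiveRecordMediator.SaveAndFlush(value);
        ts.VoteCommit(); // the transaction commits when the scope is disposed
    }
    catch
    {
        ts.VoteRollBack(); // explicitly vote to roll back on failure
        throw;
    }
}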

Related

C# localdb SHRINKDATABASE command from C# code

I'm trying to shrink a LocalDb with Visual Studio 2017 Community. I have a Win7 client windows form application with a small database (~10MB of data) that results into 150MB database size due to LocalDb free space allocation.
I found this answer (Executing Shrink on SQL Server database using command from linq-to-sql) that suggests using the following code:
context.Database.ExecuteSqlCommand(
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried COMMIT before, but no success at all. Any idea on how to effectively shrink a database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command.".
This is what's causing your problem, as you cannot call DBCC SHRINKDATABASE in a transaction, which isn't really surprising, given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);

EntityFramework.BulkInsert and MSDTC

I'm currently using EntityFramework.BulkInsert, wrapped in a using block with a transaction scope to batch-save entities, e.g.:
using (var tranScope = new TransactionScope())
{
    using (var context = new EFContext())
    {
        context.BulkInsert(entities, 100); // batch size of 100
        context.Save();
    }
    tranScope.Complete();
}
I need to determine whether there is a dependency between using BulkInsert for bulk inserts and MSDTC. I have done a bit of testing, changing Max Pool Size to a variety of low and high numbers and running load tests with 10-100 concurrent users with the MSDTC service turned off (all on my local box at the moment). So far I cannot get it to throw any 'requires MSDTC to be turned on' type of exception. I am using SQL 2014, EF 6.x, .NET 4.6 and MVC 5.
I understand that SQL 2014 is likely using lightweight transactions in this case. I have used perfmon to confirm that if Max Pool Size is set to X in the connection string, then the perf counter NumberOfPooledConnections reflects the same number X, and when I change it in the connection string to something else this is also reflected in the counter (so at least that is working as expected...).
Other info: I'm using integrated security and have not set anything in the connection string for Enlist=...
The bulk insert package is located here https://efbulkinsert.codeplex.com/ and under the hood it looks to be using SqlBulkCopy. I'm concerned that even though I cannot reproduce the dependency on MSDTC in my testing, and even though I'm not explicitly opening two connections within the same transaction scope, there is still a dependency on MSDTC just by the pure nature of the batching?
Can anyone confirm, yay or nay... thanks.
What you are using is a lightweight transaction, so it does not need MSDTC. All the work will be handled by one SQL Server machine. If one server started the transaction and another server did the rest, then MSDTC would be required. In your case it is not. Here is a quote from MSDN:
A promotable transaction is a special form of a System.Transactions transaction that effectively delegates the work to a simple SQL Server transaction.
If more than one physical computer is needed to perform the transaction, then you need MSDTC.
You should be fine.
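To illustrate the promotion boundary: a single connection opened inside a TransactionScope enlists as a lightweight transaction, and it is a second concurrent connection in the same scope that typically forces promotion to a distributed transaction. A minimal sketch, assuming a valid SQL Server connection string in connectionString (a placeholder name):
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // single connection: enlists as a lightweight transaction

        using (var cmd = new SqlCommand("SELECT 1", conn))
        {
            cmd.ExecuteScalar();
        }

        // Opening a second connection here, while this one is still open in
        // the same scope, is what would typically promote the transaction to
        // a distributed one and require MSDTC.
    }
    scope.Complete();
}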

How to perform an IO operation and a database execution both together in one transaction in C#? [duplicate]

I have a service with a processor running in it, and it does two things:
1- Create a file in a directory.
2- Set its own status to "Processed".
But when the service is stopped exactly in the middle of processing, the file is created in the directory but the process is not finalized, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set its own status to "Processed".
I need a way to make the IO operations transactional together with the database commands. How can I do this?
EDIT - IMPORTANT
The problem is that the created file is picked up by another application, so the file must only really be created if the database commands execute successfully. If the file is created and the other application picks it up, and a database error then occurs, the problem remains.
Note: I'm developing in C#.
You can use Transactional NTFS (TxF). This provides the ability to perform actions that are fully atomic, consistent, isolated, and durable for file operations.
It can be integrated with a large number of other transactional technologies, because TxF uses the new Kernel Transaction Manager (KTM) features, and the KTM can work directly with the Microsoft® Distributed Transaction Coordinator (DTC).
Any technology that can work with DTC as a transaction coordinator can use transacted file operations within a single transaction. This means that you can now enlist transacted file operations within the same transaction as SQL operations, Web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactionProtocol, or even transacted MSMQ operations.
An example of an atomic file-and-database transaction:
// connectionString and commandText are placeholders for your own values.
using (var connection = new SqlConnection(connectionString))
using (var ts = new System.Transactions.TransactionScope())
{
    // Open the connection inside the scope so it enlists in the ambient transaction.
    connection.Open();

    // Perform the file operation and the database command in the same scope.
    File.Copy(sourceFileName, destFileName, overwrite);
    using (var command = new SqlCommand(commandText, connection))
    {
        command.ExecuteNonQuery();
    }

    // If an exception is thrown before Complete(), the scope is disposed
    // without committing and the database work rolls back.
    ts.Complete();
}
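Note that plain File.Copy does not itself enlist in the ambient transaction; in the sketch above only the database command is rolled back if something fails. For the file operation to be genuinely transactional you need the TxF file APIs (for example CopyFileTransacted, called via P/Invoke), which is what the links below cover.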
See the following links for more information:
TxF on Codeplex
Msdn reference
Note: Remember DTC comes with a heavy performance penalty.
You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
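For reference, the usual FILESTREAM access pattern looks roughly like the following sketch. It assumes a table Files with a FILESTREAM column Data and a uniqueidentifier Id, plus placeholder variables connectionString, fileId and payload; all of these names are hypothetical:
using (var ts = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();

    // Ask SQL Server for the file system path of the BLOB and a transaction
    // context token tied to the current transaction.
    string path;
    byte[] txContext;
    using (var cmd = new SqlCommand(
        "SELECT Data.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
        "FROM Files WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", fileId);
        using (var reader = cmd.ExecuteReader())
        {
            reader.Read();
            path = reader.GetString(0);
            txContext = (byte[])reader[1];
        }
    }

    // Stream the BLOB data through the file system under the SQL transaction.
    using (var stream = new SqlFileStream(path, txContext, FileAccess.Write))
    {
        stream.Write(payload, 0, payload.Length);
    }

    ts.Complete();
}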
I'd normally use compensation behaviour instead: store status in a database and, when the service restarts, have it first check for operations which have started but not completed and finish them off. For example:
Operation started on Server x at datetime y
Operation completed on Server x at datetime y
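A minimal sketch of that recovery step, assuming a hypothetical FileJobs table that records those started/completed markers in a Status column:
static void RecoverIncompleteOperations(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();

        // Find operations that were started but never marked completed.
        var pending = new List<Guid>();
        using (var cmd = new SqlCommand(
            "SELECT Id FROM FileJobs WHERE Status = 'Started'", conn))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
                pending.Add(reader.GetGuid(0));
        }

        foreach (var id in pending)
        {
            // Redo the file step; it must be idempotent, i.e. tolerate a
            // file that was already created before the crash.
            CreateFileForJob(id); // hypothetical helper

            using (var cmd = new SqlCommand(
                "UPDATE FileJobs SET Status = 'Completed' WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", id);
                cmd.ExecuteNonQuery();
            }
        }
    }
}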

How to "transaction" a IO operation and a database execution?

I have service that contains a processor running, and it do two things:
1- Create a file in a directory.
2- Set your own status to "Processed".
But, when the service is stopped exactly in the middle of processing, the file is created in the directory but, the process is not finalized, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set your own status to "Processed".
I need a way to transact the IO operations with the database commands, how to do this?
EDIT - IMPORTANT
The problem is that the file created is captured by another application, so the file needs to be really created only if the commands are executed successfully. Because if the file be created and the another application capture him, and after an database error occurs, the problem to be continued.
OBS: I'm using c# to develop.
You can use Transactional NTFS (TxF). This provides the ability to perform actions that are fully atomic, consistent, isolated, and durable for file operations.
It can be intergrated to work with a large number of other transactional technologies. Because TxF uses the new Kernel Transaction Manager (KTM) features, and because the new KTM can work directly with the Microsoft® Distributed Transaction Coordinator (DTC).
Any technology that can work with DTC as a transaction coordinator can use transacted file operations within a single transaction. This means that you can now enlist transacted file operations within the same transaction as SQL operations, Web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactionProtocol, or even transacted MSMQ operations.
An example of file and database atomic transaction:
using (connectionDb)
{
connectionDb.Open();
using (var ts = new System.Transactions.TransactionScope())
{
try
{
File.Copy(sourceFileName, destFileName, overwrite);
connectionDb.ExecuteNonQuery();
ts.Complete();
}
catch (Exception)
{
throw;
}
finally
{ }
}
}
See the following links for more information:
TxF on Codeplex
Msdn reference
Note: Remember DTC comes with a heavy performance penalty.
You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
I'd normally use a compensation behaviour, e.g. storing status in a database and when a service is restarted, get it to first check for operations which have started but not completed and finish them off.
Operation started on Server x at datetime y
Operation completed on Server x at datetime y

Trouble with duplicating a transaction's functionality

I'm updating a current program that is working and in use in a Live environment. It saves Customers and Orders, then exports them to an old database as well. All of the reporting is still done in the old system while the new system's reporting is in development, which is why these all need to be exported.
This program has a built-in C# TransactionManager that is used to group multiple calls from C# to SQL within one transaction. Whenever I try to duplicate this I get errors and can't get it working.
Here's the code that is in place, working:
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    // Update the customer. If the customer doesn't exist, then create a new one.
    this.SaveCustomer(Order);

    // Save the Order.
    this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);

    // Save the Order notes and the customer notes.
    this.NotesService.AppendNotes(NoteObjectTypes.CorporateOrder, Order.Id, Order.OrderNotes);
    this.NotesService.AppendNotes(NoteObjectTypes.Customer, Order.Customer.Id, Order.CustomerNotes);

    // Export the Order if it's new.
    this.ExportOrder(Order, lastSavedVersion);

    // Commit the transaction.
    trx.Commit();
}
All of these functions just format the data and send parameters to Stored Procedures in the DB that perform the Select / Insert / Update operations on the DB.
The SaveCustomer stored procedure saves the customer to the new database.
The SaveCorporateOrder stored procedure gets information that was written by the SaveCustomer stored procedure and uses it to save the Order to the new database.
The ExportOrder stored procedure gets information that was written by both of the previous ones and exports the Order to the old database.
Each of these stored procedures contains code that starts a new transaction if @@TRANCOUNT = 0 and has a commit statement at the end. It appears that none of these are being used because of the transaction in C#, but there is no code that I can see that passes transaction or connection information to the stored procedures. This is working and in use on a SQL 2005 server.
When I try to build this and use it on my development environment that uses SQL 2008R2, I get errors like
"Uncommittable transaction is detected at the end of the batch"
and
"The server failed to resume the transaction"
It appears that each one is starting its own transaction and is unable to read the data from the previous, uncommitted transaction, instead of seeing that it is in the same transaction. I don't know whether the different SQL Server version could be causing this to behave differently, but the exact same code works in the Live install and not in my Dev environment.
Any ideas, or even direction where to look next, would be wonderful!
Thanks!
-Jacob
I think the problem is that the transaction fails and is not rolled back. You don't have a rollback call for the situation where any of the SQL queries fails. Have you checked those queries?
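If ITransactionInfo exposes a rollback method (an assumption here, since the type is custom to this code base), the pattern would be roughly:
using (ITransactionInfo trx = this.TransactionManager.BeginTransaction())
{
    try
    {
        this.SaveCustomer(Order);
        this.Store.SaveCorporateOrder(Order, ServiceContext.UserId);
        // ... remaining calls ...
        trx.Commit();
    }
    catch
    {
        // Assumed API: roll back explicitly so the connection is not left
        // with an open, uncommittable transaction.
        trx.Rollback();
        throw;
    }
}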
