EntityFramework.BulkInsert and MSDTC - c#

I'm currently using EntityFramework.BulkInsert, and this is wrapped within a using block with a transaction scope to produce batch saving of entities, e.g.:
using (var tranScope = new TransactionScope())
{
    using (var context = new EFContext())
    {
        context.BulkInsert(entities, 100); // batching in size of 100
        context.SaveChanges();
    }
    tranScope.Complete();
}
I need to determine whether using BulkInsert to do bulk inserts introduces a dependency on MSDTC. I have done a bit of testing, changing the Max Pool Size to a variety of low and high numbers and running load tests with 10-100 concurrent users with the MSDTC service turned off (all on my local box at the moment). So far I cannot get it to throw any 'MSDTC must be running' type of exception.
I am using SQL Server 2014, EF 6.x, .NET 4.6 and MVC 5. I understand that SQL Server 2014 is likely using lightweight transactions in this case. I have used perfmon to confirm that if Max Pool Size is set to X in the connection string, then the NumberOfPooledConnections performance counter reflects the same number X, and when I change it in the connection string the counter changes accordingly (so at least that is working as expected...).
Other info: I'm using integrated security and have not set anything in the connection string for Enlist=...
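For illustration, the kind of connection string being tested would look something like this (server/database names are placeholders; note that Enlist defaults to true when omitted, so connections still auto-enlist in an ambient transaction):
// Placeholder server/database names. Enlist=true is the default; setting
// Enlist=false would stop connections auto-enlisting in a TransactionScope.
var connectionString =
    "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;" +
    "Max Pool Size=100;";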
The bulk insert package is located here: https://efbulkinsert.codeplex.com/ and under the hood it looks to be using SqlBulkCopy. I'm concerned that even though I cannot reproduce the dependency on MSDTC in my testing, and even though I'm not explicitly opening two connections within the same transaction scope, there is still a dependency on MSDTC purely by the nature of the batching?
Can anyone confirm, yea or nay? Thanks.

What you are using is a lightweight transaction, so it does not need MSDTC. All the work will be handled by one SQL Server machine. If one server started the transaction and then another server did the rest, MSDTC would be required; in your case it is not. Here is a quote from MSDN:
A promotable transaction is a special form of a System.Transactions transaction that effectively delegates the work to a simple SQL Server transaction.
If more than one physical computer is needed to perform the transaction, then you need MSDTC.
You should be fine.
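To make the distinction concrete, here is a minimal sketch (illustrative connection string and table names; assumes System.Data.SqlClient): a single connection inside a TransactionScope stays a lightweight, promotable transaction, while two simultaneously open connections inside the same scope force promotion to a distributed transaction and therefore need MSDTC.
using System.Data.SqlClient;
using System.Transactions;

class PromotionDemo
{
    // Illustrative connection string; adjust for your environment.
    const string cs = "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;";

    static void Lightweight()
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(cs))
        {
            conn.Open(); // enlists in the ambient transaction; stays non-distributed
            using (var cmd = new SqlCommand("SELECT 1", conn))
                cmd.ExecuteNonQuery();
            scope.Complete();
        }
    }

    static void Promoted()
    {
        using (var scope = new TransactionScope())
        using (var conn1 = new SqlConnection(cs))
        using (var conn2 = new SqlConnection(cs))
        {
            conn1.Open();
            conn2.Open(); // second simultaneous connection: promotes to MSDTC here
            scope.Complete();
        }
    }
}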

Related

LocalDB SHRINKDATABASE command from C# code

I'm trying to shrink a LocalDB with Visual Studio 2017 Community. I have a Win7 client Windows Forms application with a small database (~10 MB of data) that results in a 150 MB database file due to LocalDB free-space allocation.
I found this answer (Executing Shrink on SQL Server database using command from linq-to-sql) that suggests using the following code:
context.Database.ExecuteSqlCommand(
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
DatabaseTools.Instance.DatabasePathName returns the filesystem location of my database from a singleton DatabaseTools class instance.
The code runs, but I keep getting this exception:
System.Data.SqlClient.SqlException: 'Cannot perform a shrinkdatabase operation inside a user transaction. Terminate the transaction and reissue the statement.'
I tried issuing a COMMIT first, but with no success. Any idea on how to effectively shrink the database from C# code?
Thanks!
As the docs for ExecuteSqlCommand say, "If there isn't an existing local or ambient transaction a new transaction will be used to execute the command.".
This is what's causing your problem: you cannot call DBCC SHRINKDATABASE inside a transaction, which isn't really surprising, given what it does.
Use the overload that allows you to pass a TransactionalBehavior and specify TransactionalBehavior.DoNotEnsureTransaction:
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(@file)",
    new SqlParameter("@file", DatabaseTools.Instance.DatabasePathName)
);
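One caveat worth adding (an observation, not part of the original answer): DBCC SHRINKDATABASE takes a database name or id, not a file path, so passing DatabaseTools.Instance.DatabasePathName will likely still fail once the transaction issue is fixed. Passing 0 shrinks the current database and sidesteps the naming question entirely:
// 0 tells DBCC SHRINKDATABASE to operate on the current database.
context.Database.ExecuteSqlCommand(
    TransactionalBehavior.DoNotEnsureTransaction,
    "DBCC SHRINKDATABASE(0)");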

How to perform an IO operation and a database execution both together in one transaction in C#? [duplicate]

I have a service with a processor running, and it does two things:
1- Create a file in a directory.
2- Set its own status to "Processed".
But when the service is stopped exactly in the middle of processing, the file is created in the directory but the status is never finalized, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set its own status to "Processed".
I need a way to make the IO operations transactional together with the database commands. How can I do this?
EDIT - IMPORTANT
The problem is that the created file is picked up by another application, so the file must only really exist if the database commands executed successfully. If the file is created and the other application captures it, and a database error occurs afterwards, the problem remains.
Note: I'm developing in C#.
You can use Transactional NTFS (TxF). This provides the ability to perform file operations that are fully atomic, consistent, isolated, and durable.
It can be integrated with a large number of other transactional technologies, because TxF uses the Kernel Transaction Manager (KTM), and KTM can work directly with the Microsoft Distributed Transaction Coordinator (DTC).
Any technology that can work with DTC as a transaction coordinator can use transacted file operations within a single transaction. This means that you can enlist transacted file operations within the same transaction as SQL operations, Web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactions protocol, or even transacted MSMQ operations.
An example of a file operation and a database command in one atomic transaction:
using (connectionDb)
{
    using (var ts = new System.Transactions.TransactionScope())
    {
        // Open inside the scope so the connection enlists in the ambient transaction.
        connectionDb.Open();

        File.Copy(sourceFileName, destFileName, overwrite);

        // SqlConnection has no ExecuteNonQuery; run the SQL through a command.
        using (var command = new SqlCommand(commandText, connectionDb))
        {
            command.ExecuteNonQuery();
        }

        ts.Complete();
    }
}
See the following links for more information:
TxF on Codeplex
MSDN reference
Note: Remember DTC comes with a heavy performance penalty.
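Note also that File.Copy by itself does not enlist in the KTM transaction, and Microsoft has since deprecated TxF. Genuinely transacted file work goes through the transacted Win32 APIs; a minimal P/Invoke sketch of a transacted copy, for illustration only:
using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class TransactedCopy
{
    [DllImport("KtmW32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr CreateTransaction(IntPtr securityAttributes, IntPtr uow,
        int createOptions, int isolationLevel, int isolationFlags, int timeout,
        string description);

    [DllImport("KtmW32.dll", SetLastError = true)]
    static extern bool CommitTransaction(IntPtr transaction);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool CloseHandle(IntPtr handle);

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool CopyFileTransactedW(string existingFile, string newFile,
        IntPtr progressRoutine, IntPtr data, IntPtr cancel, uint copyFlags,
        IntPtr transaction);

    public static void Copy(string source, string dest)
    {
        IntPtr tx = CreateTransaction(IntPtr.Zero, IntPtr.Zero, 0, 0, 0, 0, "copy");
        if (tx == new IntPtr(-1)) // INVALID_HANDLE_VALUE
            throw new Win32Exception();
        try
        {
            if (!CopyFileTransactedW(source, dest, IntPtr.Zero, IntPtr.Zero,
                                     IntPtr.Zero, 0, tx))
                throw new Win32Exception();
            if (!CommitTransaction(tx)) // file becomes visible only after this succeeds
                throw new Win32Exception();
        }
        finally
        {
            CloseHandle(tx);
        }
    }
}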
You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
I'd normally use compensating behaviour, e.g. storing status in a database and, when the service restarts, having it first check for operations which have started but not completed and finish them off (a sketch follows below):
Operation started on Server x at datetime y
Operation completed on Server x at datetime y
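A minimal sketch of that idea (the table, column and path names are invented for illustration); it also defers publishing the file until the database work has committed, which addresses the "other application grabs the file too early" concern:
using System;
using System.Data.SqlClient;
using System.IO;

static class CompensatingProcessor
{
    public static void Process(string connectionString, Guid operationId,
                               string tempPath, string finalPath)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();

            // 1. Record that the operation has started.
            using (var cmd = new SqlCommand(
                "INSERT INTO Operations (Id, Status, StartedAt) " +
                "VALUES (@id, 'Started', SYSUTCDATETIME())", conn))
            {
                cmd.Parameters.AddWithValue("@id", operationId);
                cmd.ExecuteNonQuery();
            }

            // 2. Write the file outside the watched directory.
            File.WriteAllText(tempPath, "payload");

            // 3. Mark the operation processed.
            using (var cmd = new SqlCommand(
                "UPDATE Operations SET Status = 'Processed', " +
                "CompletedAt = SYSUTCDATETIME() WHERE Id = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", operationId);
                cmd.ExecuteNonQuery();
            }

            // 4. Publish: File.Move is an atomic rename on the same NTFS volume.
            File.Move(tempPath, finalPath);
        }
        // On restart: find rows stuck in 'Started', delete orphaned temp
        // files, and rerun or complete those operations.
    }
}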

How to "transaction" a IO operation and a database execution?

I have service that contains a processor running, and it do two things:
1- Create a file in a directory.
2- Set your own status to "Processed".
But, when the service is stopped exactly in the middle of processing, the file is created in the directory but, the process is not finalized, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set your own status to "Processed".
I need a way to transact the IO operations with the database commands, how to do this?
EDIT - IMPORTANT
The problem is that the file created is captured by another application, so the file needs to be really created only if the commands are executed successfully. Because if the file be created and the another application capture him, and after an database error occurs, the problem to be continued.
OBS: I'm using c# to develop.
You can use Transactional NTFS (TxF). This provides the ability to perform actions that are fully atomic, consistent, isolated, and durable for file operations.
It can be intergrated to work with a large number of other transactional technologies. Because TxF uses the new Kernel Transaction Manager (KTM) features, and because the new KTM can work directly with the Microsoft® Distributed Transaction Coordinator (DTC).
Any technology that can work with DTC as a transaction coordinator can use transacted file operations within a single transaction. This means that you can now enlist transacted file operations within the same transaction as SQL operations, Web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactionProtocol, or even transacted MSMQ operations.
An example of file and database atomic transaction:
using (connectionDb)
{
connectionDb.Open();
using (var ts = new System.Transactions.TransactionScope())
{
try
{
File.Copy(sourceFileName, destFileName, overwrite);
connectionDb.ExecuteNonQuery();
ts.Complete();
}
catch (Exception)
{
throw;
}
finally
{ }
}
}
See the following links for more information:
TxF on Codeplex
Msdn reference
Note: Remember DTC comes with a heavy performance penalty.
You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
I'd normally use a compensation behaviour, e.g. storing status in a database and when a service is restarted, get it to first check for operations which have started but not completed and finish them off.
Operation started on Server x at datetime y
Operation completed on Server x at datetime y

ActiveRecordMediator.SaveAndFlush locking SQL Server table

I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class with methods defined for saving and updating entities; these methods generally call directly into ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site so I assume it is maintaining sessions on a request basis, but I cannot be sure.
I believe I have figured out why this was occurring.
NHibernate, as used in our environment, holds a transaction open for the entire request and only commits it when the session is disposed.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the server-side save of the entity in a using block:
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.
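Building on that, here is a sketch (not from the original post; the entity type and procedure name are hypothetical) of running the sproc through the same ActiveRecord session inside the same scope, so it joins the same transaction instead of blocking on the uncommitted save:
// Hypothetical names: MyEntity and dbo.ReadFromMyTable stand in for the real ones.
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);

    // Runs on the same NHibernate session, hence inside the same transaction.
    ActiveRecordMediator.Execute(typeof(MyEntity), (session, instance) =>
        session.CreateSQLQuery("EXEC dbo.ReadFromMyTable").ExecuteUpdate(),
        null);

    ts.VoteCommit();
}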

Understanding MSDTC in Windows

To use the transaction construct (as follows) in SubSonic, MSDTC needs to be running on the Windows machine. Right?
using (TransactionScope ts = new TransactionScope())
{
    using (SharedDbConnectionScope sharedConnectionScope = new SharedDbConnectionScope())
    {
        // update table 1
        // update table 2
        ts.Complete(); // commit the scope
    }
}
Is MSDTC a default service on Windows systems (XP, Vista, Windows 7, the Server editions, etc.)?
If it is not enabled, how can I make sure it gets enabled during the installation process of my application?
MSDTC comes installed with Windows. If it is missing, it can be installed with the following command:
msdtc -install
You can configure the MSDTC service using sc.exe. Set the service to start automatically and start the service:
sc config msdtc start= auto
sc start msdtc
Note you will need administrator privilege to perform the above.
I use:
private bool InitMsdtc()
{
    // Requires a reference to System.ServiceProcess.
    using (var control = new System.ServiceProcess.ServiceController("MSDTC"))
    {
        if (control.Status == System.ServiceProcess.ServiceControllerStatus.Stopped)
            control.Start();
        else if (control.Status == System.ServiceProcess.ServiceControllerStatus.Paused)
            control.Continue();
    }
    return true;
}
This might be helpful:
http://www.thereforesystems.com/turn-on-msdtc-windows-7/
If your DBMS is SQL Server 2000 and you use a TransactionScope, a distributed transaction is created even for a local transaction. However, SQL Server 2005 (and probably SQL Server 2008) is smart enough to figure out that a distributed transaction is not needed. I don't know if that only applies to local DBs, or whether it also holds when your transaction involves a single DB on a remote server. http://davidhayden.com/blog/dave/archive/2005/12/09/2615.aspx
One hint: you can use a batch query to avoid the TransactionScope.
http://subsonicproject.com/docs/BatchQuery
BatchQuery, QueueForTransaction and ExecuteTransaction will not use a TransactionScope (of course that depends on the provider implementation) but choose the transaction mechanism of the underlying data provider (SqlTransaction in this case), which won't require MSDTC. A plain ADO.NET sketch of that mechanism follows below.
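For contrast, this is roughly what that provider-level mechanism looks like (a sketch; the connection string and table names are placeholders):
using System.Data.SqlClient;

// One connection, one SqlTransaction: never promoted to a distributed
// transaction, so MSDTC stays out of the picture.
using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;"))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        using (var cmd = new SqlCommand("UPDATE Table1 SET Col = 1", conn, tx))
            cmd.ExecuteNonQuery();
        using (var cmd = new SqlCommand("UPDATE Table2 SET Col = 2", conn, tx))
            cmd.ExecuteNonQuery();
        tx.Commit(); // both updates commit together
    }
}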
