How to "transaction" a IO operation and a database execution? - c#

I have a service with a processor running that does two things:
1- Create a file in a directory.
2- Set its own status to "Processed".
But when the service is stopped exactly in the middle of processing, the file is created in the directory but the process is never finalized, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set its own status to "Processed".
I need a way to transact the IO operations together with the database commands. How can I do this?
EDIT - IMPORTANT
The problem is that the created file is picked up by another application, so the file must only really be created if the database commands execute successfully. If the file is created and the other application picks it up, and a database error occurs afterwards, the problem remains.
Note: I'm using C# to develop.

You can use Transactional NTFS (TxF). It provides the ability to perform file operations that are fully atomic, consistent, isolated, and durable.
TxF can be integrated with a large number of other transactional technologies, because it uses the Kernel Transaction Manager (KTM), and the KTM can work directly with the Microsoft Distributed Transaction Coordinator (DTC).
Any technology that can use DTC as its transaction coordinator can include transacted file operations within a single transaction. This means you can enlist transacted file operations in the same transaction as SQL operations, web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactions protocol, or even transacted MSMQ operations.
An example of a file operation and a database command in one atomic transaction:
using (var ts = new System.Transactions.TransactionScope())
using (connectionDb)
{
    // Open the connection inside the scope so it enlists in the ambient transaction.
    connectionDb.Open();

    // Note: File.Copy on its own does not enlist in the transaction; the copy has to go
    // through a TxF-aware API (for example the transacted-file wrapper linked below) for
    // the file operation to roll back together with the database command.
    File.Copy(sourceFileName, destFileName, overwrite);

    connectionDb.ExecuteNonQuery();

    // Commit both operations. If an exception is thrown before this point, the scope
    // is disposed without Complete() and everything rolls back.
    ts.Complete();
}
See the following links for more information:
TxF on Codeplex
MSDN reference
Note: remember that DTC comes with a heavy performance penalty.

You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
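As a rough illustration of the FILESTREAM approach, here is a minimal sketch of streaming BLOB data through SqlFileStream inside a SQL transaction. The table name (Documents), its columns (Id, FileData) and the connection string are assumptions for the example, not something from the original question:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using System.IO;

class FilestreamSketch
{
    static void SaveBlob(string connectionString, Guid id, byte[] payload)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            // FILESTREAM streaming access must happen inside an explicit transaction.
            using (SqlTransaction tx = connection.BeginTransaction())
            {
                // Assumes the row already exists with a non-NULL FILESTREAM value,
                // otherwise PathName() returns NULL.
                var cmd = new SqlCommand(
                    "SELECT FileData.PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() " +
                    "FROM Documents WHERE Id = @id", connection, tx);
                cmd.Parameters.AddWithValue("@id", id);

                string path;
                byte[] txContext;
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    reader.Read();
                    path = reader.GetString(0);
                    txContext = (byte[])reader[1];
                }

                // Stream the file data; it commits or rolls back with the SQL transaction.
                using (var stream = new SqlFileStream(path, txContext, FileAccess.Write))
                {
                    stream.Write(payload, 0, payload.Length);
                }

                tx.Commit();
            }
        }
    }
}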
I'd normally use a compensation behaviour instead, e.g. storing status records in a database; when the service is restarted, have it first check for operations that have started but not completed, and finish them off (see the sketch after the list below):
Operation started on Server x at datetime y
Operation completed on Server x at datetime y
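A minimal sketch of that compensation approach on restart, assuming a hypothetical OperationLog table with Id, FileName and Status columns (none of these names come from the original question):
using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.IO;

class CompensationSketch
{
    static void RecoverIncompleteOperations(string connectionString, string outputDirectory)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Find operations that were started but never marked as completed.
            var select = new SqlCommand(
                "SELECT Id, FileName FROM OperationLog WHERE Status = 'Started'", connection);

            var incomplete = new List<Tuple<int, string>>();
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                    incomplete.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));
            }

            foreach (var op in incomplete)
            {
                string path = Path.Combine(outputDirectory, op.Item2);
                if (File.Exists(path))
                {
                    // The file was written before the crash: finish the operation off
                    // by recording the missing "Processed" status.
                    var update = new SqlCommand(
                        "UPDATE OperationLog SET Status = 'Processed' WHERE Id = @id", connection);
                    update.Parameters.AddWithValue("@id", op.Item1);
                    update.ExecuteNonQuery();
                }
                // If the file does not exist, the operation never got that far and the
                // normal processing loop will simply pick it up again.
            }
        }
    }
}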

Related

"The wait operation timed out" after 120secs of CREATE INDEX called from SMO TransferData method

I recently created a C# tool using the SMO classes to automate the refactoring and merging of SQL Server databases for migration into Azure.
The TransferData method successfully adheres to the BulkCopyTimeout for the data copy phase; I proved this by extending it when it timed out.
When the transfer moves on to the CREATE INDEX statements, they appear to hit a timeout after 120 sec / 2 min on a particularly large table.
The ServerConnection object has StatementTimeout and ConnectionTimeout both set to 0 (as initial research suggested doing), to no avail.
Running a Profiler trace, I noticed that the "Application Name" differs from the one originally set (MergeDB v1.8) while the bulk copy and index create phases are running.
The original connection is still present, but it appears that the Transfer class spawns additional connections which, while apparently inheriting BulkCopyTimeout, fail to inherit the application name and (my hypothesis) the StatementTimeout property.
I'm using SMO v150.18131.0 connecting to SQL 2008 R2.
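For reference, a stripped-down sketch of roughly the setup being described (server names, database names and property values are placeholders; whether the extra connections spawned for the transfer honour StatementTimeout is exactly the open question here):
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

class TransferSketch
{
    static void CopyDatabase()
    {
        // Placeholders: source server, database and destination details are examples only.
        var connection = new ServerConnection("SourceServer")
        {
            ApplicationName = "MergeDB v1.8",
            StatementTimeout = 0,   // 0 = no limit; honoured by the original connection...
            ConnectTimeout = 0      // (the connection-timeout property is ConnectTimeout here)
        };
        var server = new Server(connection);
        var database = server.Databases["SourceDb"];

        var transfer = new Transfer(database)
        {
            CopyAllObjects = true,
            CopyData = true,
            BulkCopyTimeout = 0,    // ...and this one is passed on to the bulk copy phase,
                                    // yet the CREATE INDEX phase still times out at ~120 s
            DestinationServer = "DestServer",
            DestinationDatabase = "DestDb",
            DestinationLoginSecure = true
        };

        transfer.TransferData();
    }
}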

EntityFramework.BulkInsert and MSDTC

I'm currently using EntityFramework.BulkInsert, wrapped in a using block with a transaction scope to produce batched saving of entities, e.g.:
using (var tranScope = new TransactionScope())
{
    using (var context = new EFContext())
    {
        context.BulkInsert(entities, 100); // batching in size of 100
        context.Save();
    }
    tranScope.Complete();
}
I need to determine whether there is a dependency between using BulkInsert to do bulk inserts and MSDTC. I have done a bit of testing, changing the Max Pool Size to a variety of low and high numbers and running load tests with 10-100 concurrent users with the MSDTC service turned off (all on my local box at the moment). So far I cannot get it to throw any "MSDTC must be turned on" type of exception. I am using SQL 2014, EF 6.x, .NET 4.6 and MVC 5.
I understand that SQL 2014 is likely using lightweight transactions in this case. I have used perfmon to confirm that if Max Pool Size is set to X in the connection string, then the performance counter NumberOfPooledConnections reflects the same number X, and when I change it in the connection string to something else this is also reflected in the counter (so at least that is working as expected). Other info: I'm using integrated security and have not set anything in the connection string for Enlist=...
The bulk insert package is located here https://efbulkinsert.codeplex.com/ and under the hood it looks to be using SqlBulkCopy. I'm concerned that, even though I cannot reproduce the dependency on MSDTC in my testing, and even though I'm not explicitly opening two connections within the same transaction scope, there is still a dependency on MSDTC just by the pure nature of the batching?
Can anyone confirm a yay or nay? Thanks.
What you are using is a lightweight transaction, so it does not need MSDTC. All the work will be handled by one SQL Server machine. If one server started the transaction and another server did the rest, then MSDTC would be required; in your case it is not. Here is a quote from MSDN:
A promotable transaction is a special form of a System.Transactions transaction that effectively delegates the work to a simple SQL Server transaction.
If more than one physical computer is needed to perform the transaction, then you need MSDTC.
You should be fine.
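If you want to double-check this in your load tests, you can inspect the ambient transaction before completing the scope: as long as it has not been promoted, its distributed identifier stays Guid.Empty. A minimal, self-contained sketch (the connection string is a placeholder; the same check can be dropped into your BulkInsert scope):
using System;
using System.Data.SqlClient;
using System.Transactions;

class PromotionCheck
{
    static void Main()
    {
        // Placeholder connection string.
        const string cs = "Server=.;Database=TestDb;Integrated Security=true";

        using (var scope = new TransactionScope())
        {
            using (var connection = new SqlConnection(cs))
            {
                connection.Open(); // enlists in the ambient transaction

                using (var cmd = new SqlCommand("SELECT 1", connection))
                    cmd.ExecuteScalar();
            }

            // Guid.Empty means the transaction is still a lightweight local one;
            // a non-empty value means it was promoted and MSDTC got involved.
            Guid distributedId =
                Transaction.Current.TransactionInformation.DistributedIdentifier;
            Console.WriteLine("Promoted to MSDTC: {0}", distributedId != Guid.Empty);

            scope.Complete();
        }
    }
}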

How to perform an IO operation and a database execution both together in one transaction in C#? [duplicate]


ActiveRecordMediator.SaveAndFlush locking SQL Server table

I am trying to investigate a problem related to .NET ActiveRecord on SQL Server 2008.
We have a base repository class where we have methods defined for saving and updating entities; these methods generally call directly onto the ActiveRecordMediator.
We have a particular instance where, if we call ActiveRecordMediator.SaveAndFlush on one of our entities and then try to execute a stored proc that reads from the table we just saved to, the sproc will hang.
Looking at SQL Server, the table is locked, which is why it cannot be read. So my questions are:
Why is my SaveAndFlush locking the table?
How can I ensure the locking doesn't occur?
This application is running as an ASP.NET web site, so I assume it is maintaining sessions on a per-request basis, but I cannot be sure.
I believe I have figured out why this was occurring.
When used in our environment, NHibernate holds a transaction open for the entire request and only commits it when the session is finally disposed.
Our sproc was not using the same transaction as NHibernate, which is why the locking occurred.
I have partially fixed the problem by wrapping the saving of the entity server-side in a using block:
using (var ts = new TransactionScope(TransactionMode.New))
{
    ActiveRecordMediator.SaveAndFlush(value);
    ts.VoteCommit();
}
This way the entity will be saved and committed immediately.

How do I run database backup and restore scripts from my WinForms application?

I am developing a small business application which uses a SQL Server 2005 database.
Platform: .NET Framework 3.5;
Application type: Windows application;
Language: C#
Question:
I need to take and restore backups from my application. I have the required script generated from SSMS.
How do I run that particular script (or scripts) from my WinForms application?
You can run these scripts the same way you run a query, except that you don't connect to the database you want to restore; you connect to master instead.
If the machine where your application is running has the SQL Server client tools installed, you can use sqlcmd.
If you want to do it programmatically you can use SMO
Tutorial
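A minimal SMO sketch of taking and restoring a backup programmatically (server name, database name and file path are placeholders you would supply yourself):
using Microsoft.SqlServer.Management.Common;
using Microsoft.SqlServer.Management.Smo;

class BackupRestoreSketch
{
    static void BackupDatabase(string serverName, string databaseName, string backupFile)
    {
        var server = new Server(new ServerConnection(serverName));

        var backup = new Backup
        {
            Action = BackupActionType.Database,
            Database = databaseName
        };
        backup.Devices.AddDevice(backupFile, DeviceType.File);
        backup.SqlBackup(server);   // runs BACKUP DATABASE under the hood
    }

    static void RestoreDatabase(string serverName, string databaseName, string backupFile)
    {
        var server = new Server(new ServerConnection(serverName));

        var restore = new Restore
        {
            Action = RestoreActionType.Database,
            Database = databaseName,
            ReplaceDatabase = true  // overwrite the existing database
        };
        restore.Devices.AddDevice(backupFile, DeviceType.File);
        restore.SqlRestore(server); // runs RESTORE DATABASE under the hood
    }
}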
Just use your connection to the database (ADO.NET, I presume?) and send plain T-SQL instructions to the server through that connection.
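For example, a plain ADO.NET sketch that sends a BACKUP command through a connection to master (the connection string, database name and backup path are placeholders; a script generated by Management Studio with GO separators would need to be split into batches first):
using System.Data.SqlClient;

class AdoBackupSketch
{
    static void RunBackup()
    {
        // Connect to master, not to the database being backed up or restored.
        const string cs = "Server=.;Database=master;Integrated Security=true";

        const string sql =
            "BACKUP DATABASE [MyBusinessDb] " +
            "TO DISK = N'C:\\Backups\\MyBusinessDb.bak' WITH INIT";

        using (var connection = new SqlConnection(cs))
        using (var command = new SqlCommand(sql, connection))
        {
            command.CommandTimeout = 0; // backups can take a while
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}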
For the backup you probably want to use xp_sqlmaint. It has the handy ability to remove old backups, and it creates a nice log file. You can call it via something like:
EXECUTE master.dbo.xp_sqlmaint N'-S "[ServerName]" [ServerLogonDetails] -D [DatabaseName] -Rpt "[BackupArchive]\BackupLog.txt" [RptExpirationSchedule] -CkDB -BkUpDB "[BackupArchive]" -BkUpMedia DISK [BakExpirationSchedule]'
(replace the [square brackets] with suitable values).
Also for the backup you may need to back up the transaction log. Something like:
IF DATABASEPROPERTYEX((SELECT db_name(dbid) FROM master..sysprocesses WHERE spid=@@SPID), 'Recovery') <> 'SIMPLE' EXECUTE master.dbo.xp_sqlmaint N'-S "[ServerName]" [ServerLogonDetails] -D [DatabaseName] -Rpt "[BackupArchive]\BackupLog_TRN.txt" [RptExpirationSchedule] -BkUpLog "[BackupArchive]" -BkExt TRN -BkUpMedia DISK [BakExpirationSchedule]'
I'd recommend storing the actual commands you're using in a database table (one row per command) and using some sort of template-replacement scheme to handle the configurable values. This would allow for easy changes to the commands without needing to deploy new code.
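A rough sketch of that template-replacement idea (the MaintenanceCommands table, its columns and the placeholder names are assumptions for illustration, not an existing schema):
using System.Collections.Generic;
using System.Data.SqlClient;

class CommandTemplateSketch
{
    static void RunStoredCommands(string connectionString,
                                  IDictionary<string, string> values)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();

            // Load the command templates (one row per command).
            var templates = new List<string>();
            using (var select = new SqlCommand(
                "SELECT CommandText FROM MaintenanceCommands ORDER BY RunOrder", connection))
            using (var reader = select.ExecuteReader())
            {
                while (reader.Read())
                    templates.Add(reader.GetString(0));
            }

            foreach (string template in templates)
            {
                // Replace placeholders such as [ServerName] and [BackupArchive].
                string sql = template;
                foreach (var pair in values)
                    sql = sql.Replace("[" + pair.Key + "]", pair.Value);

                using (var command = new SqlCommand(sql, connection))
                {
                    command.CommandTimeout = 0;
                    command.ExecuteNonQuery();
                }
            }
        }
    }
}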
For the restore you will need to kill all connections except for internal SQL Server ones. Basically, take the results of "exec sp_who" and, for rows that match on dbname, have a status that is not "background", and a cmd that is not one of "SIGNAL HANDLER", "LOCK MONITOR", "LAZY WRITER", "LOG WRITER" or "CHECKPOINT SLEEP", do a "kill" on the spid (e.g. ExecuteNonQuery("kill 1283")).
You'll want to trap and ignore any exceptions from the KILL command; there's nothing you can do about them. If the restore cannot proceed because of existing connections, it will raise an error.
One danger with killing connections is ADO.NET's connection pool (more of an issue for ASP.NET apps than Windows apps). ADO.NET assumes a connection fetched from the connection pool is valid, and it does not react well to connections that have been killed: the next operation on that connection will fail. I can't recall the error... you might be able to trap just that specific error and handle it; also, with 3.5 I think you can flush the connection pool (so: trap the error, flush the connection pool, open the connection, try the command again. Ugly, but might be doable).
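A sketch of that "kill everything else first" step, based on the sysprocesses filtering described above (the connection string and database name are placeholders; the catch block deliberately swallows errors from KILL, as suggested):
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

class KillConnectionsSketch
{
    static void KillUserConnections(string masterConnectionString, string databaseName)
    {
        var systemCommands = new HashSet<string>
        {
            "SIGNAL HANDLER", "LOCK MONITOR", "LAZY WRITER", "LOG WRITER", "CHECKPOINT SLEEP"
        };

        using (var connection = new SqlConnection(masterConnectionString))
        {
            connection.Open();

            // Collect the spids of user connections to the target database.
            var spids = new List<short>();
            using (var who = new SqlCommand(
                "SELECT spid, status, cmd FROM master..sysprocesses " +
                "WHERE dbid = DB_ID(@db)", connection))
            {
                who.Parameters.AddWithValue("@db", databaseName);
                using (var reader = who.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string status = reader.GetString(1).Trim();
                        string cmd = reader.GetString(2).Trim();
                        if (!status.Equals("background", StringComparison.OrdinalIgnoreCase)
                            && !systemCommands.Contains(cmd))
                        {
                            spids.Add(reader.GetInt16(0));
                        }
                    }
                }
            }

            foreach (short spid in spids)
            {
                try
                {
                    using (var kill = new SqlCommand("KILL " + spid, connection))
                        kill.ExecuteNonQuery();
                }
                catch (SqlException)
                {
                    // Ignore: the connection may already be gone, or the spid may not be
                    // killable; the restore itself will report any real problem.
                }
            }
        }
    }
}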
