Understanding MSDTC in Windows - C#

To use the transaction construct below in SubSonic, MSDTC needs to be running on the Windows machine. Right?
using (TransactionScope ts = new TransactionScope())
{
    using (SharedDbConnectionScope sharedConnectionScope = new SharedDbConnectionScope())
    {
        // update table 1
        // update table 2
        // ts.Complete() here
    }
}
Is MSDTC a default service on Windows systems (XP, Vista, Windows 7, the Server editions, etc.)?
If it is not enabled, how can I make sure it gets enabled during the installation process of my application?

MSDTC should come installed with Windows. If it's not, it can be installed with the following command:
msdtc -install
You can configure the MSDTC service using sc.exe. Set the service to start automatically and start the service:
sc config msdtc start= auto
sc start msdtc
Note that you will need administrator privileges to perform the above.

I use:
// Requires a reference to System.ServiceProcess.dll and administrator rights.
private bool InitMsdtc()
{
    using (var control = new System.ServiceProcess.ServiceController("MSDTC"))
    {
        // Start the Distributed Transaction Coordinator service if it is stopped or paused.
        if (control.Status == System.ServiceProcess.ServiceControllerStatus.Stopped)
            control.Start();
        else if (control.Status == System.ServiceProcess.ServiceControllerStatus.Paused)
            control.Continue();
    }
    return true;
}

This might be helpful:
http://www.thereforesystems.com/turn-on-msdtc-windows-7/

If your DBMS is SQL Server 2000 and you use a TransactionScope, a distributed transaction is created even for a local transaction. However, SQL Server 2005 (and probably SQL Server 2008) is smart enough to figure out that a distributed transaction is not needed. I don't know whether that only applies to local databases, or whether it also holds when your transaction involves a single database on a remote server. http://davidhayden.com/blog/dave/archive/2005/12/09/2615.aspx
One hint: you can use a batch query to avoid the TransactionScope.
http://subsonicproject.com/docs/BatchQuery
BatchQuery, QueueForTransaction and ExecuteTransaction will not use a TransactionScope (of course that depends on the provider implementation) but use the transaction mechanism of the underlying data provider (SqlTransaction in this case), which won't require MSDTC.
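For reference, this is roughly what such a provider-level transaction looks like in plain ADO.NET; the connection string, table and column names below are placeholders:
using System.Data.SqlClient;

using (var connection = new SqlConnection(connectionString)) // placeholder connection string
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    using (SqlCommand command = connection.CreateCommand())
    {
        command.Transaction = transaction;

        // both updates run on the same connection and the same SqlTransaction
        command.CommandText = "UPDATE Table1 SET Processed = 1"; // placeholder statement
        command.ExecuteNonQuery();

        command.CommandText = "UPDATE Table2 SET Processed = 1"; // placeholder statement
        command.ExecuteNonQuery();

        // a local provider transaction: no ambient System.Transactions transaction, so MSDTC is never involved
        transaction.Commit();
    }
}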

Related

EntityFramework.BulkInsert and MSDTC

I'm currently using EntityFramework.BulkInsert, and this is wrapped within a using block with a transaction scope to produce batch saving of entities, e.g.:
using (var tranScope = new TransactionScope())
{
    using (var context = new EFContext())
    {
        context.BulkInsert(entities, 100); // batching in size of 100
        context.Save();
    }
    tranScope.Complete();
}
I need to determine whether using BulkInsert for bulk inserts creates a dependency on MSDTC. I have done a bit of testing, changing the Max Pool Size to a variety of low and high numbers and running load tests with 10-100 concurrent users with the MSDTC service turned off (all on the local box at the moment). So far I cannot get it to throw any 'MSDTC must be turned on' type of exceptions. I am using SQL Server 2014, EF 6.x, .NET 4.6 and MVC 5.
I understand that SQL Server 2014 is likely using lightweight transactions in this case. I have used perfmon to confirm that if Max Pool Size is set to X in the connection string, then the perf counter NumberOfPooledConnections reflects the same number X, and when I change it in the connection string to something else this is also reflected in the counter (so at least that is working as expected...). Other info: I'm using integrated security and have not set anything in the connection string for Enlist=...
The bulk insert package is located here https://efbulkinsert.codeplex.com/ and under the hood it looks to be using SqlBulkCopy. I'm concerned that, even though I cannot reproduce the dependency on MSDTC in my testing, and even though I'm not explicitly opening 2 connections within the same transaction scope, there is still a dependency on MSDTC just by the pure nature of the batching?
Can anyone confirm a yay or nay..., thanks.
What you are using is a lightweight transaction, so it does not need MSDTC. All the work will be handled by one SQL Server machine. If one server started the transaction and another server did the rest, then MSDTC would be required; in your case it is not. Here is a quote from MSDN:
A promotable transaction is a special form of a System.Transactions transaction that effectively delegates the work to a simple SQL Server transaction.
If more than one physical computer is needed to perform the transaction, then you need MSDTC.
You should be fine.
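If you want to verify this at runtime, one hedged sketch (reusing EFContext and entities from the question) is to check whether the ambient transaction was ever promoted: System.Transactions exposes a DistributedIdentifier that stays Guid.Empty as long as the transaction remains a local, lightweight one.
using System;
using System.Transactions;

using (var tranScope = new TransactionScope())
{
    using (var context = new EFContext()) // context type from the question
    {
        context.BulkInsert(entities, 100);
        context.Save();
    }

    // Guid.Empty means the transaction was never promoted, so MSDTC was not involved
    var info = Transaction.Current.TransactionInformation;
    Console.WriteLine("Promoted to DTC: {0}", info.DistributedIdentifier != Guid.Empty);

    tranScope.Complete();
}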

Accessing MongoDB from a SQL Server 2008 CLR trigger

I'm using a SQL CLR trigger to push updates from the relational DB to a MongoDB instance. Both databases are running on the same Windows 2012 machine.
My SQL CLR project is built on .NET 3.5 and is using the mongocsharpdriver 1.10.0.
The C# code within my trigger is as follows:
SqlPipe pipe = SqlContext.Pipe;
pipe.Send("Begin ReportUpdateTrigger.VTProperty");
try
{
    // connect to the local MongoDB instance using the legacy 1.x C# driver
    var settings = new MongoClientSettings();
    settings.Server = new MongoServerAddress("127.0.0.1", 27017);
    var client = new MongoClient(settings);
    var server = client.GetServer();
    var db = server.GetDatabase("VTProperty");
    var coll = db.GetCollection<object>("Property");

    // insert a throwaway test document
    var item = new { name = "test", datecreated = DateTime.Now };
    coll.Insert(item);
}
catch (Exception ex)
{
    pipe.Send("Error sending update to Reporting database: " + ex.Message);
}
pipe.Send("Done ReportUpdateTrigger.VTProperty");
(this is test code just to verify that the MongoDB operation will work).
I run the exact same code from a separate console app, and the data is posted to Mongo with no problems.
When running from the trigger, I see the following error:
Begin ReportUpdateTrigger.VTProperty Error sending update to Reporting
database: Unable to connect to server 127.0.0.1:27017: The type
initializer for 'MongoDB.Bson.Serialization.BsonSerializer' threw an
exception.. Done ReportUpdateTrigger.VTProperty
I have my DLL (and all supporting DLLs, including the MongoDB drivers) referenced as assemblies within the DB server. CLR is enabled. I know that the trigger is executing because I am getting the custom status and error messages in the SQL output window.
My hunch is that whatever user/process is executing the trigger code does not have access to the Mongo instance. Hence the "Unable to connect to server 127.0.0.1:27017" error. Not sure what my next step should be.
By default, SQLCLR external access has a security context of the service account (i.e. "Logon As" account) of the MSSQLSERVER service (or MSSQL$InstanceName, or something like that). And that service account, by default, is the Local System account. You can do one of two things here:
If your MSSQLSERVER service is using "Local System" as the account, then create a real local or domain / AD account and make that the service account. Then just make sure that the new real account has access to MongoDB.
Regardless of anything else, it is often best to have services such as SQL Server use their own service accounts. That makes it easier to control and confine the permissions.
If you are using Windows logins, then you have the option of enabling Impersonation in the .NET code. When using Impersonation, the security context of external calls is set to the Windows Login that is executing the SQLCLR object.
For this, you would need to add something like the following to your code:
using System.Security.Principal;

WindowsImpersonationContext _ImpersonationIdentity = null;
try
{
    // switch the security context of external calls to the Windows Login
    // that is executing the SQLCLR object
    _ImpersonationIdentity = SqlContext.WindowsIdentity.Impersonate();

    // ... the external call (the MongoDB insert) goes here ...
}
finally
{
    // always revert to the original security context
    if (_ImpersonationIdentity != null)
    {
        _ImpersonationIdentity.Undo();
    }
}

How to "transaction" a IO operation and a database execution?

I have a service with a processor running, and it does two things:
1- Create a file in a directory.
2- Set its own status to "Processed".
But when the service is stopped exactly in the middle of processing, the file gets created in the directory while the status is never updated, like this:
1- Create a file in a directory.
-----SERVICE STOPPED-----
2- Set its own status to "Processed".
I need a way to transact the IO operations together with the database commands. How can I do this?
EDIT - IMPORTANT
The problem is that the created file is picked up by another application, so the file must only really be created if the database commands execute successfully. If the file is created, the other application picks it up, and a database error occurs afterwards, the problem persists.
Note: I'm developing in C#.
You can use Transactional NTFS (TxF). This provides the ability to perform actions that are fully atomic, consistent, isolated, and durable for file operations.
It can be integrated with a large number of other transactional technologies, because TxF uses the new Kernel Transaction Manager (KTM) features, and the KTM can work directly with the Microsoft Distributed Transaction Coordinator (DTC).
Any technology that can work with DTC as a transaction coordinator can use transacted file operations within a single transaction. This means that you can now enlist transacted file operations within the same transaction as SQL operations, Web service calls via WS-AtomicTransaction, Windows Communication Foundation services via the OleTransactionProtocol, or even transacted MSMQ operations.
An example of an atomic file and database transaction:
// Note: for the file operation to actually enlist in the transaction it has to go through
// a transacted file API (e.g. the TxF wrapper); File.Copy is shown here as a placeholder.
using (var connectionDb = new SqlConnection(connectionString))
using (var ts = new System.Transactions.TransactionScope())
{
    // open the connection inside the scope so it enlists in the ambient transaction
    connectionDb.Open();

    File.Copy(sourceFileName, destFileName, overwrite);

    using (var command = new SqlCommand(commandText, connectionDb)) // commandText = your SQL statement
    {
        command.ExecuteNonQuery();
    }

    // if Complete() is never reached, both the file operation and the SQL roll back
    ts.Complete();
}
See the following links for more information:
TxF on Codeplex
Msdn reference
Note: Remember DTC comes with a heavy performance penalty.
You didn't specify the database server, but Microsoft SQL Server 2008 R2 supports streaming file data as part of a transaction.
See: https://technet.microsoft.com/en-us/library/bb933993%28v=sql.105%29.aspx
Transactional Durability
With FILESTREAM, upon transaction commit, the Database Engine ensures transaction durability for FILESTREAM BLOB data that is modified from the file system streaming access.
For very large files, I wouldn't recommend it, because you often want the transaction to be as quick as possible when you have a lot of simultaneous transactions.
I'd normally use compensation behaviour, e.g. storing status in a database and, when the service is restarted, getting it to first check for operations which have started but not completed and finish them off (a sketch follows the example status records below).
Operation started on Server x at datetime y
Operation completed on Server x at datetime y
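A minimal sketch of that compensation pattern, assuming a hypothetical ProcessQueue status table and a hypothetical WriteFile helper that creates the output file (it must be safe to run again on recovery):
using System.Collections.Generic;
using System.Data.SqlClient;

// Normal processing: record "started", create the file, then record "completed".
static void Process(SqlConnection conn, int itemId)
{
    Exec(conn, "UPDATE ProcessQueue SET Status = 'Started', StartedAt = GETDATE() WHERE Id = @id", itemId);
    WriteFile(itemId); // hypothetical helper that creates the output file
    Exec(conn, "UPDATE ProcessQueue SET Status = 'Completed', CompletedAt = GETDATE() WHERE Id = @id", itemId);
}

// On service start-up: re-run anything that was started but never completed.
static void Recover(SqlConnection conn)
{
    var pending = new List<int>();
    using (var cmd = new SqlCommand("SELECT Id FROM ProcessQueue WHERE Status = 'Started'", conn))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            pending.Add(reader.GetInt32(0));
    }
    foreach (var id in pending)
        Process(conn, id);
}

static void Exec(SqlConnection conn, string sql, int itemId)
{
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@id", itemId);
        cmd.ExecuteNonQuery();
    }
}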

The Underlying Provider Failed to open

Hi,
I am using VS2010 and working with Microsoft Entity Framework 4.0.
I am working on a Windows application. I have bound several combo boxes in my application; it's working fine.
void BindNatureOfIndustryCombo()
{
    using (var obj = new EASDBEntitiesCon())
    {
        var natureOfIndustryColl = from c in obj.IndustryTypes select c;
        var natureOfIndustryList = natureOfIndustryColl.ToList();
        cmbNatureOfIndustry.DataSource = natureOfIndustryList;
        cmbNatureOfIndustry.DisplayMember = "IndustryType";
        cmbNatureOfIndustry.ValueMember = "IndustryTypeID";
    }
}
ConnectionString is
<add name="EntrpriseApplicationSuit.Properties.Settings.EASDBConnectionString" connectionString="Data Source=192.168.0.150\GSERVER;Initial Catalog=EASDB;Persist Security Info=True;User ID=sa;Password=$1234;MultipleActiveResultSets=True" providerName="System.Data.SqlClient" />
But when I start my application on another system, it gives the error:
The Underlying Provider Failed to open
Why does this error occur and what is the solution?
That suggests that the connection string is invalid from the other computer. Perhaps it's using Windows authentication and the other user or computer doesn't have permission to access it - or perhaps it's on a different network and can't reach the server? We can't really diagnose that without knowing a bit about what it's trying to connect to.
Please give as much context as you can around the connection, and any differences between the computer it is working on and the computer it's not working on.
You should double-check your connection string!
Ensure that the database server/instance is correct, the database name is correct, and the user id and (or) password you use are valid.
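One quick way to see the real cause (a hedged sketch reusing the connection string from the question) is to open a plain SqlConnection with the same connection string on the failing machine; that surfaces the underlying SqlException which EF wraps in this error:
using System;
using System.Data.SqlClient;

var connectionString = @"Data Source=192.168.0.150\GSERVER;Initial Catalog=EASDB;User ID=sa;Password=$1234";
try
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        Console.WriteLine("Connected to " + connection.DataSource);
    }
}
catch (SqlException ex)
{
    // this is the error that EF reports as "The underlying provider failed to open"
    Console.WriteLine(ex.Message);
}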
Distributed Transaction Coordinator (DTC) might be disabled. Try enabling DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
Open DTC by going to Component Services / Computers / My Computer / Distributed Transaction Coordinator / Right click Local DTC / Properties / Security Tab / Check Enable Network DTC Access / Allow Remote Clients.

How do I run that database backup and restore scripts from my winform application?

I am developing a small business application which uses a SQL Server 2005 database.
Platform: .NET Framework 3.5;
Application type: Windows application;
Language: C#
Question:
I need to take and restore backups from my application. I have the required scripts generated from SSMS.
How do I run that particular script (or scripts) from my winform application?
You can run these scripts the same way you run a query, only you don't connect to the database you want to restore, you connect to master instead.
If the machine where your application is running has the SQL Server client tools installed, you can use sqlcmd.
If you want to do it programmatically you can use SMO
Tutorial
Just use your connection to the database (ADO I presume?) and send your plain TSQL instructions to the server through this connection.
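For example, a minimal sketch of running a backup statement this way (the connection string, database name and backup path are placeholders):
using System.Data.SqlClient;

// connect to master, not to the database being backed up
var connectionString = @"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True";
var backupSql = @"BACKUP DATABASE [MyDatabase] TO DISK = N'C:\Backups\MyDatabase.bak' WITH INIT";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(backupSql, connection))
{
    connection.Open();
    command.CommandTimeout = 0;   // backups can run longer than the 30 second default
    command.ExecuteNonQuery();
}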
For the backup you probably want to use xp_sqlmaint. It has the handy ability to remove old backups, and it creates a nice log file. You can call it via something like:
EXECUTE master.dbo.xp_sqlmaint N'-S "[ServerName]" [ServerLogonDetails] -D [DatabaseName] -Rpt "[BackupArchive]\BackupLog.txt" [RptExpirationSchedule] -CkDB -BkUpDB "[BackupArchive]" -BkUpMedia DISK [BakExpirationSchedule]'
(replace the [square brackets] with suitable values).
Also for the backup you may need to backup the transaction log. Something like:
IF DATABASEPROPERTYEX((SELECT db_name(dbid) FROM master..sysprocesses WHERE spid=@@SPID), 'Recovery') <> 'SIMPLE' EXECUTE master.dbo.xp_sqlmaint N' -S "[ServerName]" [ServerLogonDetails] -D [DatabaseName] -Rpt "[BackupArchive]\BackupLog_TRN.txt" [RptExpirationSchedule] -BkUpLog "[BackupArchive]" -BkExt TRN -BkUpMedia DISK [BakExpirationSchedule]'
I'd recommend storing the actual commands you're using in a database table (1 row per command) and use some sort of template replacement scheme to handle the configurable values. This would allow for easy changes to the commands, without needing to deploy new code.
For the restore you will need to kill all connections except for internal sql server ones. Basically take the results of "exec sp_who" and for rows that match on dbname, and have a status that is not "background", and a cmd that is not one of "SIGNAL HANDLER", "LOCK MONITOR", "LAZY WRITER", "LOG WRITER", "CHECKPOINT SLEEP" do a "kill" on the spid (eg: ExecuteNonQuery("kill 1283")).
You'll want to trap and ignore any exceptions from the KILL command. There's nothing you can do about them. If the restore cannot proceed because of existing connections it will raise an error.
One danger with killing connections is ADO's connection pool (more of an issue for ASP.NET apps than Windows apps). ADO assumes a connection fetched from the connection pool is valid, and it does not react well to connections that have been killed. The next operation that occurs on that connection will fail. I can't recall the error... you might be able to trap just that specific error and handle it... also with 3.5 I think you can flush the connection pool (so... trap the error, flush the connection pool, open the connection, try the command again... ugly but might be doable).
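A hedged sketch of that kill-then-restore sequence (the database name, backup path and connection string are placeholders, and exceptions from KILL are swallowed as described above):
using System.Collections.Generic;
using System.Data.SqlClient;

// connect to master so we hold no connection to the database being restored
using (var connection = new SqlConnection(@"Data Source=.\SQLEXPRESS;Initial Catalog=master;Integrated Security=True"))
{
    connection.Open();

    // find other user connections to the target database
    var spids = new List<short>();
    using (var cmd = new SqlCommand(
        "SELECT spid FROM master..sysprocesses WHERE dbid = DB_ID('MyDatabase') AND status <> 'background' AND spid <> @@SPID",
        connection))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
            spids.Add(reader.GetInt16(0));
    }

    foreach (var spid in spids)
    {
        try
        {
            using (var kill = new SqlCommand("KILL " + spid, connection))
                kill.ExecuteNonQuery();
        }
        catch (SqlException)
        {
            // nothing to do if a KILL fails; the RESTORE will report any leftover connections
        }
    }

    using (var restore = new SqlCommand(
        @"RESTORE DATABASE [MyDatabase] FROM DISK = N'C:\Backups\MyDatabase.bak' WITH REPLACE",
        connection))
    {
        restore.CommandTimeout = 0;   // restores can run well past the default timeout
        restore.ExecuteNonQuery();
    }
}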

Categories

Resources