Using multiple transactions in a single connection - C#

Basically, it's a patching mechanism.
Here is what I'm doing:
1. Open a SQL connection.
2. Begin the transaction.
3. Update a record in the database with the version of the software.
4. Execute some more queries on the same database, using the same connection.
5. Download a 15 to 20 MB file.
6. Execute a select query, using the same connection.
7. Commit the transaction.
8. Close the connection.
This sequence causes a SQL connection timeout, because downloading the file takes time.
The problem is that I can commit the transaction only after downloading the file, not before.
The code is written in C#; the database is SQL Server CE (SQLCE).
Here is some part of the code:
SqlCeConnection conn = new SqlCeConnection("ConnectionString");
conn.Open();
SqlCeTransaction ts = conn.BeginTransaction();

// Every query is executed through a method with these parameters:
void ExecuteQuery(string sqlQuery, SqlCeConnection conn, SqlCeTransaction ts)
{
    SqlCeCommand cmd = new SqlCeCommand();
    cmd.Connection = conn;
    cmd.Transaction = ts;
    cmd.CommandText = sqlQuery;
    cmd.ExecuteNonQuery();
}

// A method call downloads the file of 15 to 20 MB.
// A method then executes a select query that returns the version of the
// software, using the same SQL connection.
// That query is the one that fails with the SQL connection timeout.
ts.Commit();
conn.Close();
Can anyone help me solve this problem?

Set your command timeout manually:
cmd.CommandTimeout = 180;
This example sets the timeout to 180 seconds (3 minutes).

This is what I did.
I passed the same connection and transaction objects to the method that downloads the file.
In that method, I executed a simple select query inside the loop where the file was being downloaded. This kept the connection and transaction active.
Even a slow internet connection doesn't hurt here, because the SQL query fires on every loop iteration and keeps the connection alive.
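For illustration, a minimal sketch of that keep-alive pattern, assuming a hypothetical DownloadChunk helper and an illustrative VersionInfo table for the cheap query; any inexpensive query against an existing table works:

using System.IO;
using System.Data.SqlServerCe;

// Sketch only: DownloadChunk is a hypothetical helper that returns null when
// the download is finished, and VersionInfo is an illustrative table name.
static void DownloadWithKeepAlive(Stream destination, SqlCeConnection conn, SqlCeTransaction ts)
{
    byte[] chunk;
    while ((chunk = DownloadChunk()) != null)
    {
        destination.Write(chunk, 0, chunk.Length);

        // Touch the connection on every iteration so it is never idle
        // long enough to time out while the download runs.
        using (SqlCeCommand keepAlive = new SqlCeCommand("SELECT COUNT(*) FROM VersionInfo", conn))
        {
            keepAlive.Transaction = ts;
            keepAlive.ExecuteScalar();
        }
    }
}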

Related

SqlCommand Times Out, CommandTimeout and ConnectionTimeout both = 0

I'm running a large number of extracts from a SQL database via a C# Script Task within a SSIS package. The connection to the source is acquired from a connection manager within the package:
object rawConnection = Dts.Connections[sqlSpecItems["ConnectionManager"]].AcquireConnection(Dts.Transaction);
SqlConnection connectionFromCM = (SqlConnection)rawConnection;
(sqlSpecItems is a Dictionary object that supplies the name of the connection manager to use)
The ConnectionTimeout property of the connection manager is set to 0. The connection string generated for the CM is:
Data Source=MyDatabase;User ID=MyUserName;Initial Catalog=MyDatabaseName;Persist Security Info=True;Asynchronous Processing=True;Connect Timeout=0;Application Name=MyPackageApplicationName;
The connection is used to return a SqlDataReader object as follows:
private SqlDataReader GetDataReaderFromQuery(string sqlQueryToExecute)
{
    // connect to server
    SqlConnection sqlReaderSource = GetSourceSQLConnection();
    // create command
    SqlCommand sqlReaderCmd = new SqlCommand(sqlQueryToExecute, sqlReaderSource)
    {
        CommandType = CommandType.Text,
        CommandTimeout = 0
    };
    // execute query to return data to reader
    SqlDataReader sqlReader = sqlReaderCmd.ExecuteReader();
    return sqlReader;
}
The operation fails with the error message:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Logged start and end times for the operation are typically around 50s apart, but can be up to 120s.
The source database is a cloud-hosted Business Central SQL database. The failures occur on a small number of specific extracts, namely the larger ones (although by no means large in absolute terms, around 20k rows). When querying these via SSMS, there is typically a delay while the data is fetched from the source into memory, which I suspect is the reason for the timeout (notwithstanding the timeout = 0 setting on both the connection and the command). Note that once the failure has happened, the data is in memory, so it doesn't recur if I restart the job.
I've looked at various answers on this site, as well as across other sites. Nothing I have seen so far has given me any idea of what I might try to resolve this problem. Any help would be appreciated.
So, I found the problem. Just in case it helps anyone else: the timeout wasn't thrown by the reader, but by the SqlBulkCopy operation downstream of it, which I pass the reader into. Adding:
bulkCopy.BulkCopyTimeout = 0;
has cleared the problem.
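For context, a minimal sketch of that pattern; the destination connection string and table name are placeholders, and the key line is BulkCopyTimeout, which defaults to 30 seconds:

using System.Data.SqlClient;

// destConnectionString and "dbo.TargetTable" are illustrative placeholders.
using (SqlDataReader reader = GetDataReaderFromQuery("SELECT * FROM SourceTable"))
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destConnectionString))
{
    bulkCopy.DestinationTableName = "dbo.TargetTable";
    bulkCopy.BulkCopyTimeout = 0;   // 0 = no limit; the default is 30 seconds
    bulkCopy.WriteToServer(reader); // this is where the timeout was actually expiring
}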

Getting SQL Server timeout exception even with command timeout set to 0

I have set the command timeout to 0, as per the SQL Server documentation. I'm indexing a large table, and I still get the exception "Execution timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." The server is responding, though, as I can see in SQL Server Monitor.
Here is the pertinent code:
private void ExecuteQuery(string qStr)
{
    using (SqlConnection cnx = new SqlConnection(_ConnectionString))
    {
        cnx.Open();
        using (SqlCommand cmd = new SqlCommand(qStr, cnx))
        {
            cmd.CommandTimeout = 0;
            cmd.ExecuteNonQuery();
        }
    }
}
This is the connection string
Data Source='tcp:aplace.database.windows.net,1433';Initial Catalog='SQL-Dev';User Id='user#place';Password='password';Connection Timeout=360
Why am I getting an execution timeout? I have the connection timeout set to 7200 seconds as well. I am indexing a 31-million-row table on one column.
First, connection timeout and command timeout are not the same thing. Make sure you understand the difference and are using them correctly.
Second, if this is from a web page, you also need to consider timeout values relating to the web server, etc.
Third, to verify that it is in fact a timeout issue, execute your index statement in SSMS and find out how long it takes. Since the actual indexing takes place on the SQL server no matter where it is called from, the indexing time should be roughly equal whether running from SSMS or your application.
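To make that distinction concrete, here is a sketch (the connection string and index statement are placeholders): Connect Timeout in the connection string only limits how long Open() may block, while CommandTimeout limits each command's execution.

using System.Data.SqlClient;

// Placeholder connection string and query; adjust for your environment.
using (SqlConnection conn = new SqlConnection(
    "Data Source=server;Initial Catalog=db;Integrated Security=True;Connect Timeout=30"))
{
    conn.Open(); // "Connect Timeout" applies here, and only here

    using (SqlCommand cmd = new SqlCommand("CREATE INDEX IX_Col ON BigTable (Col)", conn))
    {
        cmd.CommandTimeout = 0; // applies to execution; 0 means wait indefinitely
        cmd.ExecuteNonQuery();
    }
}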

SQL Server: "CREATE DATABASE" query timeout

I have a .Net application that creates a new database. The current code works fine in the development environment and in many production environments, so I am confident the code is fine.
However, in one specific instance the user gets a timeout while the application runs the following SQL command:
CREATE DATABASE NameOfDatabase
The code is pretty simple and, as you can see, uses the default timeout period for SQL commands, which is 30 seconds:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    string query = "CREATE DATABASE " + databaseName;
    SqlCommand command = new SqlCommand(query, connection);
    connection.Open();
    command.ExecuteNonQuery();
}
Note: our log file shows the error occurs on ExecuteNonQuery which suggests that this is NOT a timeout while opening the connection, and rather during query execution.
The specific .Net error is:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
With a line from the stack trace to show my reasoning on it being a command timeout (not a connection timeout):
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
Note: this error information was obtained using a try/catch around the code; the try/catch was omitted from the question for simplicity, as it is not relevant to the problem.
Questions
In any reasonable (or even uncommon) situation, should a CREATE DATABASE query take this long to run and still succeed?
If it shouldn't take that long, what are the common causes for a timeout? (where should I be looking to debug the problem?)
I do not have enough rep points to comment. To get the specific error, wrap command.ExecuteNonQuery() in a try/catch; at least then you can see the specific error that is occurring. Also, put the SqlCommand in a using block so the object is disposed, and check the connection state (ConnectionState) before executing:
using (SqlConnection connection = new SqlConnection(connectionString))
{
    string query = "CREATE DATABASE " + databaseName;
    using (SqlCommand command = new SqlCommand(query, connection))
    {
        try
        {
            connection.Open();
            if (connection.State == ConnectionState.Open)
                command.ExecuteNonQuery();
        }
        catch (Exception ex)
        {
            System.Diagnostics.Debug.WriteLine(ex.Message);
        }
    }
}
Maybe this TechNet article can help? Troubleshooting: Timeout Expired
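If the CREATE DATABASE legitimately needs more than the default 30 seconds on that machine (slow disks while initializing the database files, for instance), raising the command timeout is the direct mitigation. A sketch, using an arbitrary 300-second value:

using System.Data.SqlClient;

using (SqlConnection connection = new SqlConnection(connectionString))
{
    // databaseName is assumed to come from a trusted source, as in the question.
    string query = "CREATE DATABASE " + databaseName;
    using (SqlCommand command = new SqlCommand(query, connection))
    {
        command.CommandTimeout = 300; // arbitrary example; 0 would mean no limit
        connection.Open();
        command.ExecuteNonQuery();
    }
}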

Oracle Connection Expires and doesn't open

This is a bit strange. I have two Oracle databases: one is local, the other is hosted on Amazon. What happens is very weird. I'm using C# for this.
If I open a connection and run some query, it works.
If I open a connection after more than an hour, it denies it the first time, then allows it.
More than an hour after the last query against the Amazon database, it throws a new OracleException, and this only happens with the Amazon database. My local database lets me open a connection at any time, but the Amazon one denies it the first time and allows it the second. Wrapping the call in try {} catch() {} doesn't help either. My current code:
private OracleConnection _oracleConnection =
    new OracleConnection(ConfigurationManager.ConnectionStrings["Oracle"].ConnectionString);

private void OpenConnection()
{
    if (_oracleConnection.State != ConnectionState.Open)
        _oracleConnection.Open();
}

public void MyAction(int employer)
{
    OpenConnection();
    using (OracleCommand command = _oracleConnection.CreateCommand())
    {
        command.CommandType = CommandType.StoredProcedure;
        command.CommandText = "proc_0001";
        command.Parameters.Clear();
        command.Parameters.Add("pi_employer_id", OracleDbType.Int32).Value = employer;
        command.ExecuteNonQuery();
    }
}
The weird part is that if I execute it again within a short time, it goes OK, but if I stay more than an hour without touching Oracle, it throws an exception in OpenConnection(). Since I call this method from a button-click event, when I click again after the exception, I get my result. I'm starting to think Oracle is just joking with me. I looked at the Oracle documentation and the C# documentation, and at Connection Lifetime / Connection Timeout in Oracle's connection string, but those are about KEEPING a connection, and I get the error when OPENING one; as I said, it only happens after a long idle period and works fine on the second attempt. I don't want to use a workaround. Do you know what causes this?
Thanks in advance.
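For what it's worth, the symptom described (the first Open() after a long idle period fails, the retry succeeds) is typical of a pooled connection that a firewall or the server silently dropped while idle. A hedged sketch of one common mitigation, assuming the ODP.NET driver, whose connection string supports a Validate Connection attribute:

using System.Configuration;
using Oracle.DataAccess.Client;

// Assumption: ODP.NET is the driver in use, so "Validate Connection" is
// available. A dead pooled connection is then detected and replaced inside
// Open() instead of being handed to the caller; the cost is one extra
// round-trip per Open().
string connStr = ConfigurationManager.ConnectionStrings["Oracle"].ConnectionString
                 + ";Validate Connection=true";

using (OracleConnection conn = new OracleConnection(connStr))
{
    conn.Open(); // stale pooled connections are weeded out here
    // ... create and execute commands as usual ...
}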

How to roll back multiple queries on different database servers in case of any error

I am using several SQL procedures in an application.
The first procedure inserts some rows, then there is some processing in C# code, then a second procedure does some updates, then more code processing, and then a third procedure deletes some records and inserts new ones. When all of this is done on Server 1, data is fetched from that server and sent to Server 2, where records are deleted and new records are inserted.
If there is an error at any stage, in any procedure, on either server, I want to roll back all the records.
I cannot use BEGIN TRAN because the processing takes time, and I cannot block the tables since other users are working with the same tables in parallel. So kindly tell me how I can achieve this without blocking the tables for other users.
Thanks in advance.
Edited (Added code example):
I tried TransactionScope, but I am getting an exception while opening the connection. I configured MS DTC, but maybe not properly:
"Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool."
using (TransactionScope ts = new TransactionScope(TransactionScopeOption.Required))
{
    try
    {
        dl.SetBookReadyToLive(13570, false);
        dl.AddTestSubmiitedTitleID(23402);
        dl.AddBookAuthorAtLIve(13570, 1);
        ts.Complete();
    }
    catch (Exception ex)
    {
        Response.Write(ex.Message);
    }
}
public void SetBookReadyToLive(long BookID, bool status)
{
    try
    {
        if (dbConMeta.State != ConnectionState.Open)
            dbConMeta.Open();
        SqlCommand cmd = new SqlCommand("spSetBookReadyToLive", dbConMeta);
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Clear();
        cmd.Parameters.AddWithValue("@BookID", BookID);
        cmd.Parameters.AddWithValue("@status", status);
        cmd.ExecuteNonQuery();
        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
    catch
    {
        if (dbConMeta.State == ConnectionState.Open)
            dbConMeta.Close();
    }
}
I get the exception when opening the connection inside that method.
I am using SQL Server 2000. I have configured MS DTC on the machine where SQL Server is installed and also on my PC, from where I run the code, but I still get the same exception.
Kindly help me configure it.
You can use the TransactionScope class. It generally works well, but in the case of distributed SQL servers, as in yours, it requires MS DTC to be enabled on both servers and configured properly (security has to be granted for execution of network transactions, distributed ones, and so on...).
Here is a copy-paste from an example on MSDN; you could "almost" use it like this... :)
// Create the TransactionScope to execute the commands, guaranteeing
// that both commands can commit or roll back as a single unit of work.
// (returnValue and writer are declared earlier in the full MSDN sample.)
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection connection1 = new SqlConnection(connectString1))
    {
        // Opening the connection automatically enlists it in the
        // TransactionScope as a lightweight transaction.
        connection1.Open();

        // Create the SqlCommand object and execute the first command.
        SqlCommand command1 = new SqlCommand(commandText1, connection1);
        returnValue = command1.ExecuteNonQuery();
        writer.WriteLine("Rows to be affected by command1: {0}", returnValue);

        // If you get here, this means that command1 succeeded. By nesting
        // the using block for connection2 inside that of connection1, you
        // conserve server and network resources as connection2 is opened
        // only when there is a chance that the transaction can commit.
        using (SqlConnection connection2 = new SqlConnection(connectString2))
        {
            // The transaction is escalated to a full distributed
            // transaction when connection2 is opened.
            connection2.Open();

            // Execute the second command in the second database.
            returnValue = 0;
            SqlCommand command2 = new SqlCommand(commandText2, connection2);
            returnValue = command2.ExecuteNonQuery();
            writer.WriteLine("Rows to be affected by command2: {0}", returnValue);
        }
    }

    // The Complete method commits the transaction. If an exception has been thrown,
    // Complete is not called and the transaction is rolled back.
    scope.Complete();
}
source: TransactionScope Class
To minimize locks you could specify the IsolationLevel via the constructor overload that takes a TransactionOptions; the default is Serializable, and if you are fine with it you could set it to ReadCommitted.
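A short sketch of that overload, reusing the scope pattern from the MSDN example above:

using System.Transactions;

// ReadCommitted instead of the default Serializable, so fewer locks are
// held for the duration of the transaction.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (TransactionScope scope =
       new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... open connections and execute commands as in the example above ...
    scope.Complete();
}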
Note: Personally, I would not use this unless absolutely needed, because it's a bit of a pain to keep the DTC configured at all times, and distributed transactions are in general slower than local ones, but it really depends on your BL / DAL logic.
Short answer: the same way you would do it in MS SQL Management Studio.
1. Open a connection to a server.
2. Open a transaction for that specific server.
3. Run your queries related to this server.
4. Make sure to keep your connection alive while you... [go back to 1 for the next server]
5. If all your queries worked, commit all your changes.
6. Otherwise, roll back all your queries.
Warning: the first table will most likely stay locked until you're done with all your servers/queries. What you could do to help with this: if you have a lot of data, transfer it to temporary tables on every server before doing step 2. Once that is done, open the transactions, do your fast operations, then commit/rollback as soon as possible.
Note: I know you asked how to achieve this without locking the tables, hence the idea in the "warning" part. A sketch of the overall pattern follows.
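A minimal sketch of that manual pattern, with placeholder connection strings and stored procedure names. It is not truly atomic across servers: a crash between the two Commit() calls can still leave them inconsistent, which is exactly the gap MS DTC closes.

using System.Data;
using System.Data.SqlClient;

// connectionString1/2 and the procedure names are illustrative placeholders.
using (var conn1 = new SqlConnection(connectionString1))
using (var conn2 = new SqlConnection(connectionString2))
{
    conn1.Open();
    conn2.Open();
    SqlTransaction tx1 = conn1.BeginTransaction();
    SqlTransaction tx2 = conn2.BeginTransaction();
    try
    {
        new SqlCommand("spDoWorkOnServer1", conn1, tx1) { CommandType = CommandType.StoredProcedure }.ExecuteNonQuery();
        new SqlCommand("spDoWorkOnServer2", conn2, tx2) { CommandType = CommandType.StoredProcedure }.ExecuteNonQuery();

        // Commit only after every server has succeeded.
        tx1.Commit();
        tx2.Commit();
    }
    catch
    {
        // A transaction that has already committed cannot be rolled back,
        // hence the guarded attempts.
        try { tx1.Rollback(); } catch { }
        try { tx2.Rollback(); } catch { }
        throw;
    }
}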
