Introduction
I'm writing a web application (C#/ASP.NET MVC 3, .NET Framework 4, MS SQL Server 2008, System.Data.Odbc for database connections) and I'm running into serious issues with database creation and deletion.
One requirement is that the application must be able to create and delete databases.
Problem
The application fails stress testing of that function. More specifically, if a client rapidly creates, deletes, and re-creates a database with the same name, then eventually (around the 5th request) the server code throws an OdbcException: 'Connection has been disabled.' This behavior is observed on every machine the test has been run on; the exact failing request may not be the 5th, but it is somewhere around that value.
Research
Googling the exception turned up very little: it seems to be a very generic one, and I found no analogous issues. One suggestion I did find was that my Windows 7 development machine might not be able to handle numerous simultaneous connections because it's not a server OS. I tried installing our app on Windows Server 2008; there was almost no change in behavior, just a few more requests processed before the exception occurs.
Code and additional comments on implementation
Databases are created using a stored procedure like this:
CREATE PROCEDURE [dbo].[sp_DBCreate]
...
@databasename nvarchar(124) -- 124 is max length of database file names
AS
DECLARE @sql nvarchar(150);
BEGIN
...
-- Create a new database
SET @sql = N'CREATE DATABASE ' + quotename(@databasename, '[');
EXEC(@sql);
IF @@ERROR <> 0
RETURN -2;
...
RETURN 0;
END
Databases are deleted using the following SP:
CREATE PROCEDURE [dbo].[sp_DomainDelete]
...
@databasename nvarchar(124) -- 124 is max length of database file names
AS
DECLARE @sql nvarchar(200);
BEGIN
...
-- check if database exists
IF EXISTS(SELECT * FROM [sys].[databases] WHERE [name] = @databasename)
BEGIN
-- drop all active connections
SET @sql = N'ALTER DATABASE ' + quotename(@databasename, '[') + ' SET SINGLE_USER WITH ROLLBACK IMMEDIATE';
EXEC(@sql);
-- Delete database
SET @sql = N'DROP DATABASE ' + quotename(@databasename, '[');
EXEC(@sql);
IF @@ERROR <> 0
RETURN -1; -- error deleting database
END
-- ELSE database does not exist; consider it deleted.
RETURN 0;
END
In both SPs I've skipped less relevant parts like sanity checks.
I'm not using any ORMs; all SPs are called from code using OdbcCommand instances. A new OdbcConnection is created for each function call.
I sincerely hope someone can give me a clue to the problem.
UPD: The exact same problem occurs if we just rapidly create a bunch of databases. Thanks to everyone for the suggestions on the database delete code, but I'd prefer a solution, or at least a hint, for the more general problem: the one that occurs even without deleting any DBs at all.
UPD2: The following code is used for SP calls:
public static int ExecuteNonQuery(string sql, params object[] parameters)
{
try
{
var command = new OdbcCommand();
Prepare(command, new OdbcConnection( GetConnectionString() /*irrelevant*/), null, CommandType.Text, sql,
parameters == null ?
new List<OdbcParameter>().ToArray() :
parameters.Select(p => p is OdbcParameter ? (OdbcParameter)p : new OdbcParameter(string.Empty, p)).ToArray());
return command.ExecuteNonQuery();
}
catch (OdbcException ex)
{
// Logging here
throw;
}
}
public static void Prepare(
OdbcCommand command,
OdbcConnection connection,
OdbcTransaction transaction,
CommandType commandType,
string commandText,
params OdbcParameter[] commandParameters)
{
if (connection.State != ConnectionState.Open)
{
connection.Open();
}
command.Connection = connection;
command.CommandText = commandText;
if (transaction != null)
{
command.Transaction = transaction;
}
command.CommandType = commandType;
if (commandParameters != null)
{
command.Parameters.AddRange(
commandParameters.Select(p => p.Value==null &&
p.Direction == ParameterDirection.Input ?
new OdbcParameter(p.ParameterName, DBNull.Value) : p).ToArray());
}
}
Sample connection string:
Driver={SQL Server}; Server=LOCALHOST;Uid=sa;Pwd=<password here>;
Okay. There may be issues of scope for the OdbcConnection, but you also don't appear to be closing connections after you've finished with them. That means you're relying on the pool manager to close unused connections and return them to the pool as they time out. A using block will automatically close and dispose of the connection when finished, allowing it to be returned to the connection pool.
Try this code:
public static int ExecuteNonQuery(string sql, params object[] parameters)
{
int result = 0;
try
{
var command = new OdbcCommand();
using (OdbcConnection connection = new OdbcConnection(GetConnectionString() /*irrelevant*/))
{
connection.Open();
Prepare(command, connection, null, CommandType.Text, sql,
parameters == null ?
new List<OdbcParameter>().ToArray() :
parameters.Select(p => p is OdbcParameter ? (OdbcParameter)p : new OdbcParameter(string.Empty, p)).ToArray());
result = command.ExecuteNonQuery();
}
}
catch (OdbcException ex)
{
// Logging here
throw;
}
return result;
}
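As an aside, the OdbcCommand is also IDisposable. A slightly tighter sketch of the same method (still assuming the question's GetConnectionString and Prepare helpers) disposes both the command and the connection:
public static int ExecuteNonQuery(string sql, params object[] parameters)
{
    // Both the connection and the command are released deterministically here.
    using (var connection = new OdbcConnection(GetConnectionString()))
    using (var command = new OdbcCommand())
    {
        connection.Open();
        Prepare(command, connection, null, CommandType.Text, sql,
            parameters == null
                ? new OdbcParameter[0]
                : parameters.Select(p => p as OdbcParameter ?? new OdbcParameter(string.Empty, p)).ToArray());
        return command.ExecuteNonQuery();
    }
}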
Related
I'm building a Windows Forms Application with a connection to an SQL Database.
On start-up of my app, it will send some queries to the database to compare values:
Here is the code that generates the query:
private void CreateInsertQuery(DirectoryInfo source, string Printer)
{
foreach (FileInfo file in source.GetFiles())
{
queries.Add("EXECUTE sqlp_UpdateInsertFiles '"+ file.Name +"', '" + Printer + "'");
}
foreach (DirectoryInfo folder in source.GetDirectories())
{
queries.Add("EXECUTE sqlp_UpdateInsertFiles '" + folder.Name + "', '" + Printer + "'");
CreateInsertQuery(folder, Printer);
}
}
queries is a public List<string>.
This is the code that sends the query to the db:
public bool InsertQueries()
{
con.Open();
using(OleDbTransaction trans = con.BeginTransaction())
{
try
{
OleDbCommand cmd;
foreach (string query in queries)
{
try
{
cmd = new OleDbCommand(query, con, trans);
cmd.ExecuteNonQuery();
}
catch (Exception ex)
{
if (ex.HResult != -2147217873)
{
MessageBox.Show(ex.Message);
}
}
}
trans.Commit();
con.Close();
return true;
}
catch (Exception ex)
{
trans.Rollback();
con.Close();
return false;
}
}
}
In my SQL database, I've created a stored procedure that gets called when the database receives the query:
AS
BEGIN
BEGIN TRANSACTION;
SET NOCOUNT ON;
BEGIN TRY
IF EXISTS
(SELECT TOP 1 fName, Printer
FROM dbo.FileTranslation
WHERE fName = @fName AND Printer = @Printer)
BEGIN
UPDATE dbo.FileTranslation
SET fName = @fName, Printer = @Printer
END
ELSE
BEGIN
INSERT INTO dbo.FileTranslation(fName, Printer) VALUES (@fName, @Printer);
END;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
BEGIN
ROLLBACK TRANSACTION;
END
END CATCH
END;
GO
When I run my application on an empty database, the values get added without any problem (screenshot omitted).
I also do not get any errors. It's only when I start my application a second time that the first two queries are not caught by the IF EXISTS check: the data is still inserted into my database, 5x to be exact (screenshot omitted). This is odd, since only two of the queries contain that data, yet the insert executes every time.
I assume the id column is a SQL identity column, right?
Because the first seven consecutive entries are all the same, I think your app is started on multiple threads which at the beginning execute neck and neck, but whose execution later diverges, maybe because of the extra time spent in the exception-handling block. That's why only the first records are multiplied.
The problem is that your stored procedure isn't thread-safe. The IF EXISTS(SELECT ... places no locks on the dbo.FileTranslation table, so in parallel execution multiple instances of the stored procedure may find the required record missing and all continue into the INSERT branch.
Applying the answers from the https://dba.stackexchange.com/questions/187405/sql-server-concurrent-inserts-and-deletes thread, this may work for you:
...
IF EXISTS
(SELECT TOP 1 fName, Printer
FROM dbo.FileTranslation WITH (UPDLOCK, SERIALIZABLE)
WHERE fName = @fName AND Printer = @Printer)
...
PS: Not related to your question, but take note of @Lamu's comment on SQL injection, and use a try...finally or using pattern for your connection handling!
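On that SQL injection point, a rough sketch of what a parameterized version of the queued call might look like (procedure, variable, and column names taken from the question; OleDb binds parameters by position, so the names given here are only labels):
// Instead of concatenating file.Name / Printer into the SQL text:
using (var cmd = new OleDbCommand("sqlp_UpdateInsertFiles", con, trans))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@fName", file.Name);   // bound by position
    cmd.Parameters.AddWithValue("@Printer", Printer);   // bound by position
    cmd.ExecuteNonQuery();
}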
I have been stuck all day on this issue and cannot seem to find anything online pointing me to what might be causing it.
I have the logging method below in a Logger class, and below it the code calling the logger. When no exception occurs, all the log statements work perfectly; however, when an exception occurs, the log statements do not run at all (they do still run from the web service call).
Logger Log Method:
public static Guid WriteToSLXLog(string ascendId, string masterDataId, string masterDataType, int? status,
string request, string requestRecieved, Exception ex, bool isError)
{
var connection = ConfigurationManager.ConnectionStrings["AscendConnectionString"];
string connectionString = "context connection=true";
// define INSERT query with parameters
var query =
"INSERT INTO " + AscendTable.SmartLogixLogDataTableName +
" (LogID, LogDate, AscendId, MasterDataId, MasterDataType, Status, Details, Request, RequestRecieved, StackTrace, IsError) " +
"VALUES (@LogID, @LogDate, @AscendId, @MasterDataId, @MasterDataType, @Status, @Details, @Request, @RequestRecieved, @StackTrace, @IsError)";
var logId = Guid.NewGuid();
using (var cn = new SqlConnection(connectionString))
{
if (!cn.State.Equals(ConnectionState.Open))
{
cn.Open();
}
// create command
using (var cmd = new SqlCommand(query, cn))
{
try
{
// define parameters and their values
cmd.Parameters.Add("#LogID", SqlDbType.UniqueIdentifier).Value = logId;
cmd.Parameters.Add("#LogDate", SqlDbType.DateTime).Value = DateTime.Now;
if (ascendId != null)
{
cmd.Parameters.Add("#AscendId", SqlDbType.VarChar, 24).Value = ascendId;
}
else
{
cmd.Parameters.Add("#AscendId", SqlDbType.VarChar, 24).Value = DBNull.Value;
}
cmd.Parameters.Add("#MasterDataId", SqlDbType.VarChar, 50).Value = masterDataId;
cmd.Parameters.Add("#MasterDataType", SqlDbType.VarChar, 50).Value = masterDataType;
if (ex == null)
{
cmd.Parameters.Add("#Status", SqlDbType.VarChar, 50).Value = status.ToString();
}
else
{
cmd.Parameters.Add("#Status", SqlDbType.VarChar, 50).Value = "2";
}
if (ex != null)
{
cmd.Parameters.Add("#Details", SqlDbType.VarChar, -1).Value = ex.Message;
if (ex.StackTrace != null)
{
cmd.Parameters.Add("#StackTrace", SqlDbType.VarChar, -1).Value =
ex.StackTrace;
}
else
{
cmd.Parameters.Add("#StackTrace", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
}
else
{
cmd.Parameters.Add("#Details", SqlDbType.VarChar, -1).Value = "Success";
cmd.Parameters.Add("#StackTrace", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (!string.IsNullOrEmpty(request))
{
cmd.Parameters.Add("#Request", SqlDbType.VarChar, -1).Value = request;
}
else
{
cmd.Parameters.Add("#Request", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (!string.IsNullOrEmpty(requestRecieved))
{
cmd.Parameters.Add("#RequestRecieved", SqlDbType.VarChar, -1).Value = requestRecieved;
}
else
{
cmd.Parameters.Add("#RequestRecieved", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (isError)
{
cmd.Parameters.Add("#IsError", SqlDbType.Bit).Value = 1;
}
else
{
cmd.Parameters.Add("#IsError", SqlDbType.Bit).Value = 0;
}
// execute INSERT (the connection was already opened above)
cmd.ExecuteNonQuery();
}
catch (Exception e)
{
// Do not want to throw an error if something goes wrong logging
}
}
}
return logId;
}
My Method where the logging issues occur:
public static void CallInsertTruckService(string id, string code, string vinNumber, string licPlateNo)
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "1", "", null, false);
try
{
var truckList = new TruckList();
var truck = new Truck();
truck.TruckId = code;
if (!string.IsNullOrEmpty(vinNumber))
{
truck.VIN = vinNumber;
}
else
{
truck.VIN = "";
}
if (!string.IsNullOrEmpty(licPlateNo))
{
truck.Tag = licPlateNo;
}
else
{
truck.Tag = "";
}
if (!string.IsNullOrEmpty(code))
{
truck.BackOfficeTruckId = code;
}
truckList.Add(truck);
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "2", "", null, false);
if (truckList.Any())
{
// Call SLX web service
using (var client = new WebClient())
{
var uri = SmartLogixConstants.LocalSmartLogixIntUrl;
uri += "SmartLogixApi/PushTruck";
client.Headers.Clear();
client.Headers.Add("content-type", "application/json");
client.Headers.Add("FirestreamSecretToken", SmartLogixConstants.FirestreamSecretToken);
var serialisedData = JsonConvert.SerializeObject(truckList, new JsonSerializerSettings
{
ReferenceLoopHandling = ReferenceLoopHandling.Serialize
});
// HTTP POST
var response = client.UploadString(uri, serialisedData);
var result = JsonConvert.DeserializeObject<SmartLogixResponse>(response);
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "3", "", null, false);
if (result == null || result.ResponseStatus != 1)
{
// Something went wrong
throw new ApplicationException("Error in SLX");
}
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, result.ResponseStatus, serialisedData,
null, null, false);
}
}
}
catch (Exception ex)
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "4", "", null, false);
throw;
}
finally
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "5", "", null, false);
}
}
As you can see, I have added several log statements throughout the method. All of these log statements except the one in the catch block succeed if no exception is thrown. If an exception is thrown, none of them succeed. For most of them the values are exactly the same whether or not there is an exception, so I know it's not an issue with the values being passed. I am thinking something weird is happening that causes a rollback, but I am not using a transaction here. One last thing: this DLL is run through SQL CLR, which is why I am using "context connection=true" for my connection string.
Thanks in advance.
Edit:
I tried adding the following as my connection string, but now I get an exception when trying to Open the connection that says "Transaction context in use by another session". I am thinking this has to do with my calling this SQL CLR procedure through a trigger. The connection string I tried is:
connectionString = "Trusted_Connection=true;Data Source=(local)\\AARONSQLSERVER;Initial Catalog=Demo409;Integrated Security=True;";
Also here is the trigger:
CREATE TRIGGER [dbo].[PushToSLXOnVehicleInsert]
ON [dbo].[Vehicle] AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @returnValue int
DECLARE @newLastModifiedDate datetime = null
DECLARE @currentId bigint = null
DECLARE @counter int = 0;
DECLARE @maxCounter int
DECLARE @currentCode varchar(24) = null
DECLARE @currentVinNumber varchar(24)
DECLARE @currentLicPlateNo varchar(30)
declare @tmp table
(
id int not null
primary key(id)
)
insert @tmp
select VehicleID from INSERTED
SELECT @maxCounter = Count(*) FROM INSERTED GROUP BY VehicleID
BEGIN TRY
WHILE (@counter < @maxCounter)
BEGIN
select top 1 @currentId = id from @tmp
SELECT @currentCode = Code, @currentVinNumber = VINNumber, @currentLicPlateNo = LicPlateNo FROM INSERTED WHERE INSERTED.VehicleID = @currentId
if (@currentId is not null)
BEGIN
EXEC dbo.SLX_CallInsertTruckService
@id = @currentId,
@code = @currentCode,
@vinNumber = @currentVinNumber,
@licPlateNo = @currentLicPlateNo
END
delete from @tmp where id = @currentId
set @counter = @counter + 1;
END
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
IF (@ErrorMessage like '%Error in SLX%')
BEGIN
SET @ErrorMessage = 'Error in SLX. Please contact SLX for more information.'
END
RAISERROR (@ErrorMessage, -- Message text.
@ErrorSeverity, -- Severity.
@ErrorState -- State.
);
END CATCH;
END
GO
The main issue here is that the SQLCLR Stored Procedure is being called from within a Trigger. A Trigger always runs within the context of a Transaction (to bind it to the DML operation that initiated the Trigger). A Trigger also implicitly sets XACT_ABORT to ON which cancels the Transaction if any error occurs. This is why none of the logging statements persist when an exception is thrown: the Transaction is auto-rolled-back, taking with it any changes made in the same Session, including the logging statements (because the Context Connection is the same Session), as well as the original DML statement.
You have three fairly simple options, though they leave you with an overall architectural problem, or a not-so-difficult-but-a-little-more-work option that solves the immediate issue as well as the larger architectural problem. First, the three simple options:
You can execute SET XACT_ABORT OFF; at the beginning of the Trigger. This will allow the TRY ... CATCH construct to work as you are expecting it to. HOWEVER, this also shifts the responsibility onto you to issue a ROLLBACK (usually in the CATCH block), unless you want the original DML statement to succeed no matter what, even if the Web Service calls and logging fail. Of course, if you issue a ROLLBACK, then none of the logging statements will persist, even if the Web Service still registers all of the calls that were successful, if any were.
You can leave SET XACT_ABORT alone and use a regular / external connection to SQL Server. A regular connection will be an entirely separate Connection and Session, hence it can operate independently with regard to the Transaction. Unlike the SET XACT_ABORT OFF; option, this would allow the Trigger to operate "normally" (i.e. any error would roll back any changes made natively in the Trigger as well as the original DML statement) while still allowing the logging INSERT statements to persist (since they were made outside of the local Transaction).
You are already calling a Web Service so the Assembly already has the necessary permissions to do this without making any additional changes. You just need to use a proper connection string (there are a few errors in your syntax), probably something along the lines of:
connectionString = #"Trusted_Connection=True; Server=(local)\AARONSQLSERVER; Database=Demo409; Enlist=False;";
The "Enlist=False;" part (scroll to the far right) is very important: without it you will continue to get the "Transaction context in use by another session" error.
If you want to stick with the Context Connection (it is a little faster) and allow for any errors outside of the Web Service to roll-back the original DML statement and all logging statements, while ignoring errors from the Web Service, or even from the logging INSERT statements, then you can simply not re-throw the exception in the catch block of CallInsertTruckService. You could instead set a variable to indicate a return code. Since this is a Stored Procedure, it can return SqlInt32 instead of void. Then you can get that value by declaring an INT variable and including it in the EXEC call as follows:
EXEC @ReturnCode = dbo.SLX_CallInsertTruckService ...;
Just declare a variable at the top of CallInsertTruckService and initialize it to 0. Then set it to some other value in the catch block. And at the end of the method, include a return _ReturnCode;.
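A minimal sketch of that return-code pattern, reusing the question's names (the 0/1 codes are arbitrary placeholders; the method body is elided):
[Microsoft.SqlServer.Server.SqlProcedure]
public static SqlInt32 CallInsertTruckService(SqlString id, SqlString code, SqlString vinNumber, SqlString licPlateNo)
{
    int returnCode = 0; // 0 = success; anything else signals a failure code
    try
    {
        // ... build the truck list and call the web service, as in the question ...
    }
    catch (Exception ex)
    {
        Logger.WriteToSLXLog(id.Value, code.Value, MasterDataType.TruckType, 4, "4", "", ex, true);
        returnCode = 1; // report the failure to the caller instead of re-throwing
    }
    return returnCode;
}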
That being said, no matter which of those choices you pick, you are still left with two fairly large problems:
The DML statement and its system-initiated Transaction are impeded by the Web Service calls. The Transaction will be left open for much longer than it should be, and this could at the very least increase blocking related to the Vehicle Table. While I am certainly an advocate of doing Web Service calls via SQLCLR, I would strongly recommend against doing so within a Trigger.
If each VehicleID that is inserted should be passed over to the Web Service, then if there is an error in one Web Service call, the remaining VehicleIDs will be skipped, and even if they aren't (option #3 above would continue processing the rows in @tmp), at the very least the one that just had the error won't ever be retried later.
Hence the ideal approach, which solves these two rather important issues as well as the initial logging issue, is to move to a disconnected, asynchronous model. You can set up a queue table to hold the Vehicle info to process based on each INSERT. The Trigger would do a simple:
INSERT INTO dbo.PushToSLXQueue (VehicleID, Code, VINNumber, LicPlateNo)
SELECT VehicleID, Code, VINNumber, LicPlateNo
FROM INSERTED;
Then create a Stored Procedure that reads an item from the queue table, calls the Web Service, and if successful, then deletes that entry from the queue table. Schedule this Stored Procedure from a SQL Server Agent job to run every 10 minutes or something like that.
If there are records that will never process, then you can add a RetryCount column to the queue table, default it to 0, and upon the Web Service getting an error, increment RetryCount instead of removing the row. Then you can update the "get entry to process" SELECT query to include WHERE RetryCount < 5 or whatever limit you want to set.
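A rough C# sketch of that processing step (the queue table and columns come from the INSERT above, with the suggested RetryCount column; the web-service call and the retry update are stubbed out as comments):
private static void ProcessPushQueue(string connectionString)
{
    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();
        long? vehicleId = null;
        // Take one queued row; RetryCount < 5 skips entries that keep failing.
        using (var select = new SqlCommand(
            "SELECT TOP (1) VehicleID, Code, VINNumber, LicPlateNo " +
            "FROM dbo.PushToSLXQueue WHERE RetryCount < 5;", cn))
        using (var reader = select.ExecuteReader())
        {
            if (reader.Read())
            {
                vehicleId = reader.GetInt64(0);
                // ... read Code / VINNumber / LicPlateNo and call the web service here ...
            }
        }
        if (vehicleId == null)
        {
            return; // queue empty
        }
        // On success, delete the row; on a web-service failure you would instead run:
        // UPDATE dbo.PushToSLXQueue SET RetryCount = RetryCount + 1 WHERE VehicleID = @id
        using (var done = new SqlCommand(
            "DELETE FROM dbo.PushToSLXQueue WHERE VehicleID = @id;", cn))
        {
            done.Parameters.AddWithValue("@id", vehicleId.Value);
            done.ExecuteNonQuery();
        }
    }
}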
There are a few issues here, with various levels of impact:
Why is id a BIGINT in the T-SQL code yet a string in the C# code?
Just FYI, the WHILE (@counter < @maxCounter) loop is inefficient and error-prone compared to using an actual CURSOR. I would get rid of the @tmp Table Variable and @maxCounter.
At the very least change SELECT @maxCounter = Count(*) FROM INSERTED GROUP BY VehicleID to be just SET @maxCounter = @@ROWCOUNT; ;-). But swapping out for a real CURSOR would be best.
If the CallInsertTruckService(string id, string code, string vinNumber, string licPlateNo) signature is the actual method decorated with [SqlProcedure()], then you really should be using SqlString instead of string. Get the native string value from each parameter using the .Value property of the SqlString parameter. You can then set the proper size using the [SqlFacet()] attribute as follows:
[SqlFacet(MaxSize=24)] SqlString vinNumber
For more info on working with SQLCLR in general, please see the series that I am writing on this topic over at SQL Server Central: Stairway to SQLCLR (free registration is required to read content on that site).
I have a C# method to execute a SQL job. It executes the SQL job successfully, and the code works fine. I'm using the standard SQL stored procedure msdb.dbo.sp_start_job for this.
Here is my code:
public int ExcecuteNonquery()
{
var result = 0;
using (var execJob =new SqlCommand())
{
execJob.CommandType = CommandType.StoredProcedure;
execJob.CommandText = "msdb.dbo.sp_start_job";
execJob.Parameters.AddWithValue("#job_name", "myjobname");
using (_sqlConnection)
{
if (_sqlConnection.State == ConnectionState.Closed)
_sqlConnection.Open();
sqlCommand.Connection = _sqlConnection;
result = sqlCommand.ExecuteNonQuery();
if (_sqlConnection.State == ConnectionState.Open)
_sqlConnection.Close();
}
}
return result;
}
Here is the SP which executes inside the job:
ALTER PROCEDURE [Area1].[Transformation]
AS
BEGIN
SET NOCOUNT ON;
SELECT NEXT VALUE FOR SQ_COMMON
-- Transform Master Data
exec [dbo].[sp_Transform_Address];
exec [dbo].[sp_Transform_Location];
exec [dbo].[sp_Transform_Product];
exec [dbo].[sp_Transform_Supplier];
exec [dbo].[sp_Transform_SupplierLocation];
-- Generate Hierarchies and Product References
exec [dbo].[sp_Generate_HierarchyObject] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_HierarchyObject] 'Area1',RMDemand,2;
exec [dbo].[sp_Generate_Hierarchy] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_Hierarchy] 'Area1',RMDemand,2;
exec [dbo].[sp_Generate_ProductReference] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_ProductReference] 'Area1',RMDemand,2;
-- Transform Demand Allocation BOM
exec [Area1].[sp_Transform_FGDemand];
exec [Area1].[sp_Transform_FGAllocation];
exec [Area1].[sp_Transform_RMDemand];
exec [Area1].[sp_Transform_RMAllocation];
exec [Area1].[sp_Transform_BOM];
exec [Area1].[sp_Transform_RMDemand_FK];
-- Transform Purchasing Document Data
exec [dbo].[sp_Transform_PurchasingDoc];
exec [dbo].[sp_Transform_PurchasingItem];
exec [dbo].[sp_Transform_ScheduleLine];
exec [dbo].[sp_CalculateRequirement] 'Area1'
exec [dbo].[sp_Create_TransformationSummary] 'Area1'
-- Truncate Integration Tables
exec [dbo].[sp_TruncateIntegrationTables] 'Area1'
END
The problem is that whether or not the job executes successfully, it always returns -1. How can I identify whether the job was successfully executed?
After running msdb.dbo.sp_start_job the return code is mapped to an output parameter. You have the opportunity to control the parameter's name prior to execution:
public int StartMyJob( string connectionString )
{
using (var sqlConnection = new SqlConnection( connectionString ) )
{
sqlConnection.Open( );
using (var execJob = sqlConnection.CreateCommand( ) )
{
execJob.CommandType = CommandType.StoredProcedure;
execJob.CommandText = "msdb.dbo.sp_start_job";
execJob.Parameters.AddWithValue("#job_name", "myjobname");
execJob.Parameters.Add( "#results", SqlDbType.Int ).Direction = ParameterDirection.ReturnValue;
execJob.ExecuteNonQuery();
return ( int ) sqlCommand.Parameters["results"].Value;
}
}
}
You need to know the datatype of the return code to do this; for sp_start_job, it's SqlDbType.Int.
However, this is only the result of starting the job, which is worth knowing, but it isn't the result of running your job. To get the result of running your job, you can periodically execute:
msdb.dbo.sp_help_job @jobName
One of the columns returned by the procedure is last_run_outcome and probably contains what you're really interested in. It will be 5 (unknown) while it's still running.
A job is usually a number of steps, where each step may or may not be executed according to the outcome of previous steps. Another procedure, sp_help_jobhistory, supports a lot of filters to specify which specific invocation(s) and/or steps of the job you're interested in.
SQL Server likes to think of jobs as scheduled work, but there's nothing to keep you from just starting a job ad hoc, although it doesn't really provide you with much support to correlate your ad-hoc job with an instance in the job history. Dates are about as good as it gets (unless somebody knows a trick I don't).
I've seen setups where the job is created ad hoc just prior to running it, so the current ad-hoc execution is the only execution returned. But you end up with a lot of duplicate or near-duplicate jobs lying around that are never going to be executed again; something you'll have to plan on cleaning up afterwards if you go that route.
A note on your use of the _sqlConnection variable: you don't want to do that. Your code disposes of it, but it was apparently created elsewhere before this method gets called. That's bad juju. You're better off creating the connection and disposing of it in the same method; rely on SQL connection pooling to make the connection fast, which is probably already turned on.
Also, in the code you posted it looks like you started with execJob but switched to sqlCommand, and kinda messed up the edit. I assumed you meant execJob all the way through, and that's reflected in the example.
From MSDN about SqlCommand.ExecuteNonQuery Method:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. When a trigger exists on a table being inserted or updated, the return value includes the number of rows affected by both the insert or update operation and the number of rows affected by the trigger or triggers. For all other types of statements, the return value is -1. If a rollback occurs, the return value is also -1.
In this line:
result = sqlCommand.ExecuteNonQuery();
You want to save the number of rows affected by the command to an int variable, but since the statement is not an UPDATE, INSERT, or DELETE, the return value is -1. If you test it with INSERT, DELETE, or UPDATE statements you will get the correct result.
By the way, if you want to get the number of rows matched by a SELECT and save it to an int variable, you can try something like this:
select count(*) from jobs where myjobname = @myjobname
And then use ExecuteScalar to get the correct result:
result = (int)execJob.ExecuteScalar();
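For example, a hedged sketch that counts matching jobs in msdb.dbo.sysjobs (one concrete stand-in for the jobs table above):
// Count the jobs with the given name and read the scalar result back.
execJob.CommandType = CommandType.Text;
execJob.CommandText = "SELECT COUNT(*) FROM msdb.dbo.sysjobs WHERE name = @job_name;";
execJob.Parameters.AddWithValue("@job_name", "myjobname");
int result = (int)execJob.ExecuteScalar(); // COUNT(*) arrives as an int scalar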
You need to run the stored procedure msdb.dbo.sp_help_job:
private int CheckAgentJob(string connectionString, string jobName) {
SqlConnection dbConnection = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand();
command.CommandType = System.Data.CommandType.StoredProcedure;
command.CommandText = "msdb.dbo.sp_help_job";
command.Parameters.AddWithValue("#job_name", jobName);
command.Connection = dbConnection;
using (dbConnection)
{
dbConnection.Open();
using (command){
SqlDataReader reader = command.ExecuteReader();
reader.Read();
int status = reader.GetInt32(21); // Column 19 = date, column 20 = time, column 21 = last_run_outcome
reader.Close();
return status;
}
}
}
enum JobState { Failed = 0, Succeeded = 1, Retry = 2, Cancelled = 3, Unknown = 5};
Keep polling on Unknown until you get an answer. Let's hope it is Succeeded :-)
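A rough polling sketch on top of CheckAgentJob (the interval and timeout here are arbitrary):
private static JobState WaitForJob(string connectionString, string jobName, TimeSpan timeout)
{
    DateTime deadline = DateTime.UtcNow + timeout;
    while (DateTime.UtcNow < deadline)
    {
        var state = (JobState)CheckAgentJob(connectionString, jobName);
        if (state != JobState.Unknown)
        {
            return state; // the job finished with a definite outcome
        }
        System.Threading.Thread.Sleep(TimeSpan.FromSeconds(5)); // poll every 5 seconds
    }
    return JobState.Unknown; // still running (or unreachable) when we gave up
}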
I'm having an issue getting my code to execute a MySQL routine. It keeps popping the error:
Procedure or function 'ShortenedURLS' cannot be found in database 'Get'.
Routine
DELIMITER $$
USE `o7thurlshortner`$$
DROP PROCEDURE IF EXISTS `Get.ShortenedURLS`$$
CREATE DEFINER=`root`@`localhost` PROCEDURE `Get.ShortenedURLS`(IN `ID` BIGINT)
NO SQL
SELECT `ShortID`, `ShortCode`, `URL`, `ClickThroughs`
FROM `Shortener`
WHERE `AccountID` = ID$$
DELIMITER ;
Code - Accessing and running the routine
internal DbDataReader GetResults()
{
try
{
// check for parameters
if (AreParams())
{
PrepareParams(_Cmd);
}
// set our connection
_Cmd.Connection = _Conn;
// set the type of query to run
_Cmd.CommandType = _QT;
// set the actual query to run
_Cmd.CommandText = _Qry;
// open the connection
_Cmd.Connection.Open();
// prepare the command with any parameters that may have gotten added
_Cmd.Prepare();
// Execute the SqlDataReader, and set the connection to close once returned
_Rdr = _Cmd.ExecuteReader(CommandBehavior.CloseConnection);
// clear out any parameters
_Cmd.Parameters.Clear();
// return our reader object
return (!_Rdr.HasRows) ? null : _Rdr;
}
catch (DbException SqlEx)
{
_Msg += "Acccess.GetResults SqlException: " + SqlEx.Message;
ErrorReporting.WriteEm.WriteItem(SqlEx, "o7th.Class.Library.Data.MySql.Access.GetResults", _Msg);
return null;
}
catch (Exception ex)
{
_Msg += "Acccess.GetResults Exception: " + ex.Message;
ErrorReporting.WriteEm.WriteItem(ex, "o7th.Class.Library.Data.MySql.Access.GetResults", _Msg);
return null;
}
}
Code - to fire it off
IList<Typing> _T = Wrapper.GetResults<Typing>("Get.ShortenedURLS",
System.Data.CommandType.StoredProcedure,
new string[] { "?ID" },
new object[] { 1 },
new MySqlDbType[] { MySqlDbType.Int32 },
false);
Update
Verified this does work properly once I fire off a routine without a . in it.
How can I get this to work if my routine names do have .'s in them? I cannot simply rewrite existing procedures in a production database tied to a high-traffic website...
In order to call your stored procedure, you do have to wrap its name in backticks, since it contains the special character .
However, there is a bug in the MySQL connector code that causes it to escape the name again. When you specify
cmd.CommandType = CommandType.StoredProcedure;
The code branches during execute reader as you can see below...
if (statement == null || !statement.IsPrepared)
{
if (CommandType == CommandType.StoredProcedure)
statement = new StoredProcedure(this, sql);
else
statement = new PreparableStatement(this, sql);
}
// stored procs are the only statement type that need do anything during resolve
statement.Resolve(false); // the problem occurs here
Part of what Resolve does is fix up the procedure name:
//StoredProcedure.cs line 104-112
private string FixProcedureName(string name)
{
string[] parts = name.Split('.');
for (int i = 0; i < parts.Length; i++)
if (!parts[i].StartsWith("`", StringComparison.Ordinal))
parts[i] = String.Format("`{0}`", parts[i]);
if (parts.Length == 1) return parts[0];
return String.Format("{0}.{1}", parts[0], parts[1]);
}
As you can see here, any time the stored procedure name has a . in it, the name is broken up into parts and each part is escaped individually, and this is what prevents your code from calling your stored procedure.
So your only options to fix this are:
1) Open a bug with Oracle to fix the provider (assuming there is not one already open)
2) Change the name of your stored procedure to not use a .
3) Download the code for the provider, fix it, recompile
4) Find another connector
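A possible fifth option, untested: skip CommandType.StoredProcedure (and its name fixing) entirely and issue the CALL yourself as a text command, with the whole name backtick-quoted. Names and the ?ID placeholder are taken from the question:
// Untested sketch: plain-text CALL, so the connector never re-escapes the name.
cmd.CommandType = CommandType.Text;
cmd.CommandText = "CALL `Get.ShortenedURLS`(?ID)";
cmd.Parameters.AddWithValue("?ID", 1);
using (var reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // consume ShortID, ShortCode, URL, ClickThroughs ...
    }
}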
I'm working with electronic equipment that digitizes waveforms in real time (each device generates around 1,000 512-byte arrays per second; we have 12 devices). I've written a client for these devices in C# that for the most part works fine and has no performance issues.
However, one of the requirements for the application is archival, and Microsoft SQL Server 2010 was mandated as the storage mechanism (outside of my control). The database layout is very simple: there is one table per device per day ("Archive_Dev02_20131015" etc.). Each table has an Id column, a Timestamp column, a Data column (varbinary), and 20 more integer columns with some metadata. There's a clustered primary key on Id and Timestamp, and a separate index on Timestamp. My naive approach was to queue all data in the client application and then insert everything into the database at 5-second intervals using SqlCommand.
The basic mechanism looks like this:
using (SqlTransaction transaction = connection.BeginTransaction())
{
    // Beginning of the insert sql statement...
    string sql = "USE [DatabaseName]\r\n" +
                 "INSERT INTO [dbo].[Archive_Dev02_20131015]\r\n" +
                 "(\r\n" +
                 "    [Timestamp], \r\n" +
                 "    [Data], \r\n" +
                 "    [IntField1], \r\n" +
                 "    [...], \r\n" +
                 ") \r\n" +
                 "VALUES \r\n" +
                 "(\r\n" +
                 "    @timestamp, \r\n" +
                 "    @data, \r\n" +
                 "    @int1, \r\n" +
                 "    @..., \r\n" +
                 ")";
    using (SqlCommand cmd = new SqlCommand(sql))
    {
        cmd.Connection = connection;
        cmd.Transaction = transaction;
        cmd.Parameters.Add("@timestamp", System.Data.SqlDbType.DateTime);
        cmd.Parameters.Add("@data", System.Data.SqlDbType.Binary);
        cmd.Parameters.Add("@int1", System.Data.SqlDbType.Int);
        foreach (var sample in samples)
        {
            cmd.Parameters[0].Value = sample.ReceiveDate;
            cmd.Parameters[1].Value = sample.Data; // Data is a byte array
            cmd.Parameters[1].Size = sample.Data.Length;
            cmd.Parameters[2].Value = sample.IntValue1;
            ...
            int affected = cmd.ExecuteNonQuery();
            if (affected != 1)
            {
                throw new Exception("Could not insert sample into the database!");
            }
        }
    }
    transaction.Commit();
}
To summarize: a batch of 1 transaction with a loop that generates insert statements and executes them.
This method turned out to be very, very slow. On my machine (i5-2400 @ 3.1GHz, 8GB RAM, using .NET 4.0 and SQL Server 2008, 2 internal HDs in mirror, everything running locally), it takes about 2.5 seconds to save the data from 2 devices, so saving 12 devices every 5 seconds is impossible.
To compare, I've written a small SQL script that does the same directly on the server (still running on my own machine); it's actually the code the C# client runs, extracted with SQL Server Profiler:
set statistics io on
go
begin transaction
go
declare @i int = 0;
while @i < 24500 begin
SET @i = @i + 1
exec sp_executesql N'USE [DatabaseName]
INSERT INTO [dbo].[Archive_Dev02_20131015]
(
[Timestamp],
[Data],
[int1],
...
[int20]
)
VALUES
(
@timestamp,
@data,
@compressed,
@int1,
...
@int20,
)',N'@timestamp datetime,@data binary(118),@int1 int,...,@int20 int,',
@timestamp='2013-10-14 14:31:12.023',
@data=0xECBD07601C499625262F6DCA7B7F4AF54AD7E074A10880601324D8904010ECC188CDE692EC1D69472329AB2A81CA6556655D661640CCED9DBCF7DE7BEFBDF7DE7BEFBDF7BA3B9D4E27F7DFFF3F5C6664016CF6CE4ADAC99E2180AAC81F3F7E7C1F3F22FEEF5FE347FFFDBFF5BF1FC6F3FF040000FFFF,
@int1=0,
...
@int20=0
end
commit transaction
This does (imo, but I'm probably wrong ;) ) the same thing, only this time I'm using 24500 iterations, to simulate the 12 devices at once. The query takes about 2 seconds. If I use the same number of iterations as the C# version, the query runs in less than a second.
So my first question is: why does it run so much faster on SQL Server than from C#? Does this have anything to do with the connection (local TCP)?
To make matters more confusing (to me), this code runs twice as slowly on the production server (IBM BladeCenter, 32GB RAM, fiber connection to SAN; filesystem operations are really fast). I've tried looking at the SQL activity monitor and write performance never goes above 2MB/sec, but this might as well be normal. I'm a complete newbie to SQL Server (about the polar opposite of a competent DBA, in fact).
Any ideas on how I can make the C# code more performant?
By far the best approach for loading this sort of data is to use a table-valued parameter, and a stored procedure that takes the data. A really simple example of a table type and procedure that uses it would be:
CREATE TYPE [dbo].[StringTable]
AS TABLE ([Value] [nvarchar] (MAX) NOT NULL)
GO
CREATE PROCEDURE [dbo].[InsertStrings]
@Paths [dbo].[StringTable] READONLY
AS
INSERT INTO [dbo].[MyTable] ([Value])
SELECT [Value] FROM @Paths
GO
Then the C# code would be something along the lines of (please bear in mind that I've typed this into the S/O editor so there might be typos):
private static IEnumerable<SqlDataRecord> TransformStringList(ICollection<string> source)
{
if (source == null || source.Count == 0)
{
return null;
}
return GetRecords(source,
() => new SqlDataRecord(new SqlMetaData("Value", SqlDbType.NVarChar, -1)),
(record, value) => record.SetString(0, value));
}
private static IEnumerable<SqlDataRecord> GetRecords<T>(IEnumerable<T> source, Func<SqlDataRecord> factory, Action<SqlDataRecord, T> hydrator)
{
SqlDataRecord dataRecord = factory();
foreach (var value in source)
{
hydrator(dataRecord, value);
yield return dataRecord;
}
}
private static void InsertStrings(ICollection<string> strings, SqlConnection connection)
{
using (var transaction = connection.BeginTransaction())
{
using (var cmd = new SqlCommand("dbo.InsertStrings"))
{
cmd.Connection = connection;
cmd.Transaction = transaction;
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add(new SqlParameter("@Paths", SqlDbType.Structured) { Value = TransformStringList(strings) });
cmd.ExecuteNonQuery();
transaction.Commit();
}
}
}
This approach has speed that rivals SqlBulkCopy, but it also gives you better control, since everything you're inserting runs through a procedure, and it makes concurrency a lot easier to deal with.
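For instance, calling it could look something like this (connection string assumed):
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // The string array is streamed to the server as a single table-valued parameter.
    InsertStrings(new[] { "alpha", "beta" }, connection);
}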
Edit -> Just for completeness, this approach works on SQL Server 2008 and up. Seeing as there isn't such a thing as SQL Server 2010 I thought I'd better mention that.
In SQL Server:
CREATE TYPE [dbo].[ArchiveData]
AS TABLE (
[Timestamp] [DateTime] NOT NULL,
[Data] [VarBinary](MAX) NOT NULL,
[IntField1] [Int] NOT NULL,
[...] [Int] NOT NULL,
[IntField20] [Int] NOT NULL)
GO
Then your code should be something like the code below. It uses a table-valued parameter to insert all pending data at once, in a single transaction.
Note the omission of the slow and unnecessary USE DATABASE, and the use of verbatim strings (@"") to make the code more readable.
// The insert sql statement.
string sql =
@"INSERT INTO [dbo].[Archive_Dev02_20131015] (
[Timestamp],
[Data],
[IntField1],
[...],
[IntField20])
SELECT * FROM @data;";
using (SqlCommand cmd = new SqlCommand(sql))
{
using (SqlTransaction transaction = connection.BeginTransaction())
{
cmd.Connection = connection;
cmd.Transaction = transaction;
cmd.Parameters.Add(new SqlParameter("@data", SqlDbType.Structured)
{
Value = TransformSamples(samples)
});
int affected = cmd.ExecuteNonQuery();
transaction.Commit();
}
}
...
private static IEnumerable<SqlDataRecord> TransformSamples(
{YourSampleType} samples)
{
var schema = new[]
{
new SqlMetaData("Timestamp", SqlDbType.DateTime),
new SqlMetaData("Data", SqlDbType.VarBinary, -1),
new SqlMetaData("IntField1", SqlDbType.Int),
new SqlMetaData("...", SqlDbType.Int),
new SqlMetaData("IntField20", SqlDbType.Int)
};
foreach (var sample in samples)
{
var row = new SqlDataRecord(schema);
row.SetDateTime(0, sample.ReceiveDate);
row.SetSqlBinary(1, sample.Data);
row.SetInt32(2, sample.Data.Length);
row.SetInt32(..., ...);
row.SetInt32(24, sample.IntValue19);
yield return row;
}
}
I've managed to solve my issue by using SqlBulkCopy, as suggested by juharr in one of the comments above.
I mainly based my solution on this post to convert my data to a DataTable that can be bulk inserted into the database:
Convert generic List/Enumerable to DataTable?
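For reference, a minimal sketch of that approach (table name from the question; the DataTable columns are assumed to already match the target table's layout):
private static void BulkInsert(DataTable table, SqlConnection connection)
{
    using (var bulkCopy = new SqlBulkCopy(connection))
    {
        bulkCopy.DestinationTableName = "[dbo].[Archive_Dev02_20131015]";
        bulkCopy.BatchSize = 5000;     // tune for your workload
        bulkCopy.WriteToServer(table); // streams all rows in one bulk operation
    }
}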
Thanks for all your answers!