Say I have the following SQL statements that I'm executing using ExecuteNonQuery(DbCommand) from C# in a Web Application
DECLARE @InsertedProductID INT -- this is passed as a parameter
DECLARE @GroupID INT -- this is passed as a parameter
DECLARE @total INT
SET @total = (SELECT COUNT(*) FROM Products WHERE GroupID = @GroupID)
UPDATE Products SET ProdName = 'Prod_' + CAST(@total AS varchar(15))
WHERE ProductID = @InsertedProductID
My problem is that I want to ensure that the whole block executes at once. My goal is to always have the ProdName unique per group. If I leave everything the way it is, there is a good chance that I will get duplicate product names if an insert takes place between getting the @total and performing the UPDATE. Is there a way to make sure that the whole SQL block executes atomically, with no interruption? Will exec or sp_executesql achieve this? My last resort would be to put a lock around the ExecuteNonQuery(DbCommand) call, but I don't like that since it would create a bottleneck. I don't think a SQL transaction is helpful here, because I'm not worried about the integrity of the commands; I'm worried about the parallelism of the commands.
Generally, any DML statement (UPDATE/INSERT/DELETE) places a lock (row-level or table-level) on the affected table. But if you want to explicitly guarantee that your operation won't interfere with other executing statements, you should consider placing that entire SQL block inside a transaction, saying:
BEGIN TRANSACTION
BEGIN TRY
DECLARE @InsertedProductID INT -- this is passed as a parameter
DECLARE @GroupID INT -- this is passed as a parameter
DECLARE @total INT
SET @total = (SELECT COUNT(*) FROM Products WHERE GroupID = @GroupID)
UPDATE Products SET ProdName = 'Prod_' + CAST(@total AS varchar(15)) WHERE ProductID = @InsertedProductID
COMMIT; -- commits the transaction
END TRY
BEGIN CATCH
ROLLBACK; -- rolls back the transaction
END CATCH
You should also consider setting the transaction isolation level to READ COMMITTED to avoid dirty reads. Also, you should obviously wrap this entire logic in a stored procedure rather than executing it as ad hoc SQL.
If you have control over the creation of your SqlConnection objects, consider relying on database locks using transactions and an appropriate IsolationLevel. Using Snapshot, for example, will cause the second transaction to fail at commit if a separate transaction touched the same data before the commit occurred.
Something like:
var c = new SqlConnection(...);
var tran1 = c.BeginTransaction(IsolationLevel.Snapshot);
var tran2 = c.BeginTransaction(IsolationLevel.Snapshot);
DoStuff(c, tran1);//Touch some database data
tran1.Commit();
DoStuff(c, tran2);//Change the same data
tran2.Commit();//Error!
I'm not so sure you couldn't just do this:
UPDATE Products
SET ProdName = 'Prod_' + CAST((SELECT COUNT(*)
                               FROM Products
                               WHERE GroupID = @GroupID) AS varchar(15))
WHERE ProductID = @InsertedProductID
But to me that is an odd update.
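One caveat with the single-statement form: under the default isolation level the count read and the name write can still interleave with a concurrent insert. If you stay with this approach, a hedged sketch using SQL Server locking hints (whether hints are acceptable here is an assumption on my part) would be:

```sql
-- Sketch only: UPDLOCK + HOLDLOCK make the COUNT(*) read take and hold
-- a range lock on the group's rows until the statement's implicit
-- transaction completes, blocking concurrent inserts for that group.
UPDATE p
SET ProdName = 'Prod_' + CAST(
        (SELECT COUNT(*)
         FROM Products WITH (UPDLOCK, HOLDLOCK)
         WHERE GroupID = @GroupID) AS varchar(15))
FROM Products p
WHERE p.ProductID = @InsertedProductID;
```

The trade-off is that the range lock serializes inserts for that group for the duration of the statement, which is exactly the bottleneck you are trying to confine to as small a window as possible.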
Using a transaction is the right way to go. Along with the other answers, you can also use TransactionScope. The TransactionScope implicitly enlists the connection and SQL command(s) in a transaction. A rollback happens automatically if there is an issue, since the TransactionScope is in a using block.
Example:
try
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection("your connection string"))
        {
            conn.Open();
            using (var cmd = new SqlCommand("your SQL here", conn))
            {
                cmd.ExecuteNonQuery();
            }
        }
        scope.Complete();
    }
}
catch (TransactionAbortedException ex)
{
    // the transaction was rolled back; log ex here
}
catch (ApplicationException ex)
{
    // log / rethrow as appropriate
}
My code makes numerous SQL calls and I am running into deadlocks. I pasted the code into PasteBin here: https://pastebin.com/p1YDkKsB. Can someone help me out? It happens most often in the CheckClanActivity task here:
using(SqlCommand cmd = new SqlCommand(string.Format("select * from ClanMembers where MembershipId={0} and IsActive = 1", entry.Player.DestinyUserInfo.MembershipId), conn))
, but it also happens all over the place.
Edit:
Okay, the involved SQL statements are as follows:
if not exists(select * from ClanMembers where MembershipId={0}) begin insert into ClanMembers(ID, MembershipId, BattleNetId, ClanId, DateLastPlayed, IsActive, LastUpdated) select ISNULL(MAX(ID) + 1, 0),{0},'{1}',{2},'{3}', 1, GETDATE() from ClanMembers end else begin update ClanMembers set DateLastPlayed='{3}', LastUpdated=GETDATE() where MembershipId={0} end
if not exists(select * from ClanMemberCharacters where CharacterId={1}) begin insert into ClanMemberCharacters(ID,MembershipId,CharacterId) select ISNULL(MAX(ID)+1,0),{0},{1} from ClanMemberCharacters end
select c.* from ClanMemberCharacters c join ClanMembers m on m.MembershipId = c.MembershipId where m.IsActive = 1 ORDER BY m.MembershipId desc
select * from ActivityHistory where InstanceId = {0}
if not exists(select * from ActivityHistory where InstanceId = {0}) begin insert into ActivityHistory(InstanceId,MembershipId,CharacterId,GameMode,ActivityDate,ReferenceId,DirectorActivityHash,IsPrivate,ClanActivity,ClanActivityCount) values( {0},{1},{2},'{3}','{4}',{5},{6},{7},{8},{9} ) END
select * from ClanMembers where IsActive = 1
Select * from ClanMembers where IsActive = 0
update ClanMembers set IsActive = 1 where MembershipId = {0}
select * from ClanMembers where MembershipId={0} and IsActive = 1
I suspect the problem is the nesting of SqlConnections with the same connection string, one within the other. Suggestions:
Move the connection using as close to the scope of the command as possible. For example, avoid calls to your other methods from within it.
Avoid the if (conn.State == ConnectionState.Closed) { conn.Open(); } and simply move the conn.Open(); to be the first thing inside the using of the SqlConnection.
Remove the conn.Close() which will be done automatically when leaving the using block.
SqlDataAdapter is also disposable so should be in a using block.
The way you catch and swallow exceptions means the caller has no idea that something has gone wrong. Consider rethrowing the exception with throw; or not catching the exception here, and catching it higher up the call stack.
The way you are logging the errors will not help you diagnose the problem. Consider logging ex.ToString() rather than just ex.Message.
Separate your concerns. You are cramming everything together. Refactor your code so that the data layer is separated from the business logic.
Avoid using string.Format to construct your queries; doing so leaves them vulnerable to SQL injection attacks. Use parameterized queries instead.
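As a hypothetical rewrite of the CheckClanActivity query quoted above (the SqlDbType for MembershipId is an assumption; match it to your actual schema), a parameterized version might look like:

```csharp
using (SqlCommand cmd = new SqlCommand(
    "select * from ClanMembers where MembershipId = @membershipId and IsActive = 1",
    conn))
{
    // Assumes MembershipId is a bigint column; adjust the SqlDbType otherwise.
    cmd.Parameters.Add("@membershipId", SqlDbType.BigInt).Value =
        entry.Player.DestinyUserInfo.MembershipId;

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        // ... read rows as before ...
    }
}
```

Parameterizing also helps the deadlock investigation indirectly: the server caches and reuses one plan instead of compiling a new ad hoc statement per call.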
I have been stuck all day on this issue and cannot seem to find anything online pointing me to what might be causing it.
I have the below logging method in a Logger class and the below code calling the logger. When no exception occurs, all the log statements work perfectly; however, when an exception occurs, the log statements do not run at all (though they do run from the web service call).
Logger Log Method:
public static Guid WriteToSLXLog(string ascendId, string masterDataId, string masterDataType, int? status,
string request, string requestRecieved, Exception ex, bool isError)
{
var connection = ConfigurationManager.ConnectionStrings["AscendConnectionString"];
string connectionString = "context connection=true";
// define INSERT query with parameters
var query =
"INSERT INTO " + AscendTable.SmartLogixLogDataTableName +
" (LogID, LogDate, AscendId, MasterDataId, MasterDataType, Status, Details, Request, RequestRecieved, StackTrace, IsError) " +
"VALUES (@LogID, @LogDate, @AscendId, @MasterDataId, @MasterDataType, @Status, @Details, @Request, @RequestRecieved, @StackTrace, @IsError)";
var logId = Guid.NewGuid();
using (var cn = new SqlConnection(connectionString))
{
if (!cn.State.Equals(ConnectionState.Open))
{
cn.Open();
}
// create command
using (var cmd = new SqlCommand(query, cn))
{
try
{
// define parameters and their values
cmd.Parameters.Add("@LogID", SqlDbType.UniqueIdentifier).Value = logId;
cmd.Parameters.Add("@LogDate", SqlDbType.DateTime).Value = DateTime.Now;
if (ascendId != null)
{
cmd.Parameters.Add("@AscendId", SqlDbType.VarChar, 24).Value = ascendId;
}
else
{
cmd.Parameters.Add("@AscendId", SqlDbType.VarChar, 24).Value = DBNull.Value;
}
cmd.Parameters.Add("@MasterDataId", SqlDbType.VarChar, 50).Value = masterDataId;
cmd.Parameters.Add("@MasterDataType", SqlDbType.VarChar, 50).Value = masterDataType;
if (ex == null)
{
cmd.Parameters.Add("@Status", SqlDbType.VarChar, 50).Value = status.ToString();
}
else
{
cmd.Parameters.Add("@Status", SqlDbType.VarChar, 50).Value = "2";
}
if (ex != null)
{
cmd.Parameters.Add("@Details", SqlDbType.VarChar, -1).Value = ex.Message;
if (ex.StackTrace != null)
{
cmd.Parameters.Add("@StackTrace", SqlDbType.VarChar, -1).Value =
ex.StackTrace;
}
else
{
cmd.Parameters.Add("@StackTrace", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
}
else
{
cmd.Parameters.Add("@Details", SqlDbType.VarChar, -1).Value = "Success";
cmd.Parameters.Add("@StackTrace", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (!string.IsNullOrEmpty(request))
{
cmd.Parameters.Add("@Request", SqlDbType.VarChar, -1).Value = request;
}
else
{
cmd.Parameters.Add("@Request", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (!string.IsNullOrEmpty(requestRecieved))
{
cmd.Parameters.Add("@RequestRecieved", SqlDbType.VarChar, -1).Value = requestRecieved;
}
else
{
cmd.Parameters.Add("@RequestRecieved", SqlDbType.VarChar, -1).Value = DBNull.Value;
}
if (isError)
{
cmd.Parameters.Add("@IsError", SqlDbType.Bit).Value = 1;
}
else
{
cmd.Parameters.Add("@IsError", SqlDbType.Bit).Value = 0;
}
}
// open connection, execute INSERT, close connection
cmd.ExecuteNonQuery();
}
catch (Exception e)
{
// Do not want to throw an error if something goes wrong logging
}
}
}
return logId;
}
My Method where the logging issues occur:
public static void CallInsertTruckService(string id, string code, string vinNumber, string licPlateNo)
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "1", "", null, false);
try
{
var truckList = new TruckList();
var truck = new Truck();
truck.TruckId = code;
if (!string.IsNullOrEmpty(vinNumber))
{
truck.VIN = vinNumber;
}
else
{
truck.VIN = "";
}
if (!string.IsNullOrEmpty(licPlateNo))
{
truck.Tag = licPlateNo;
}
else
{
truck.Tag = "";
}
if (!string.IsNullOrEmpty(code))
{
truck.BackOfficeTruckId = code;
}
truckList.Add(truck);
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "2", "", null, false);
if (truckList.Any())
{
// Call SLX web service
using (var client = new WebClient())
{
var uri = SmartLogixConstants.LocalSmartLogixIntUrl;
uri += "SmartLogixApi/PushTruck";
client.Headers.Clear();
client.Headers.Add("content-type", "application/json");
client.Headers.Add("FirestreamSecretToken", SmartLogixConstants.FirestreamSecretToken);
var serialisedData = JsonConvert.SerializeObject(truckList, new JsonSerializerSettings
{
ReferenceLoopHandling = ReferenceLoopHandling.Serialize
});
// HTTP POST
var response = client.UploadString(uri, serialisedData);
var result = JsonConvert.DeserializeObject<SmartLogixResponse>(response);
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "3", "", null, false);
if (result == null || result.ResponseStatus != 1)
{
// Something went wrong
throw new ApplicationException("Error in SLX");
}
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, result.ResponseStatus, serialisedData,
null, null, false);
}
}
}
catch (Exception ex)
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "4", "", null, false);
throw;
}
finally
{
Logger.WriteToSLXLog(id, code, MasterDataType.TruckType, 4, "5", "", null, false);
}
}
As you can see, I have added several log statements throughout the method. All of these log statements except the one in the catch block succeed if no exception is thrown. If an exception is thrown, none of them succeed. For most of them the values are exactly the same whether or not there is an exception, so I know it's not an issue with the values being passed. I am thinking something weird is happening that causes a rollback, but I am not using a transaction or anything here. One last thing: this DLL is run through the SQL CLR, which is why I am using "context connection=true" for my connection string.
Thanks in advance.
Edit:
I tried adding the following as my connection string, but now I get an exception when trying to .Open the connection that says "Transaction context in use by another session". I am thinking this has to do with calling this SQL CLR procedure through a trigger. The connection string I tried is:
connectionString = "Trusted_Connection=true;Data Source=(local)\\AARONSQLSERVER;Initial Catalog=Demo409;Integrated Security=True;";
Also here is the trigger:
CREATE TRIGGER [dbo].[PushToSLXOnVehicleInsert]
ON [dbo].[Vehicle] AFTER INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
DECLARE @returnValue int
DECLARE @newLastModifiedDate datetime = null
DECLARE @currentId bigint = null
DECLARE @counter int = 0;
DECLARE @maxCounter int
DECLARE @currentCode varchar(24) = null
DECLARE @currentVinNumber varchar(24)
DECLARE @currentLicPlateNo varchar(30)
declare @tmp table
(
id int not null
primary key(id)
)
insert @tmp
select VehicleID from INSERTED
SELECT @maxCounter = Count(*) FROM INSERTED GROUP BY VehicleID
BEGIN TRY
WHILE (@counter < @maxCounter)
BEGIN
select top 1 @currentId = id from @tmp
SELECT @currentCode = Code, @currentVinNumber = VINNumber, @currentLicPlateNo = LicPlateNo FROM INSERTED WHERE INSERTED.VehicleID = @currentId
if (@currentId is not null)
BEGIN
EXEC dbo.SLX_CallInsertTruckService
@id = @currentId,
@code = @currentCode,
@vinNumber = @currentVinNumber,
@licPlateNo = @currentLicPlateNo
END
delete from @tmp where id = @currentId
set @counter = @counter + 1;
END
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT
@ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
IF (@ErrorMessage like '%Error in SLX%')
BEGIN
SET @ErrorMessage = 'Error in SLX. Please contact SLX for more information.'
END
RAISERROR (@ErrorMessage, -- Message text.
@ErrorSeverity, -- Severity.
@ErrorState -- State.
);
END CATCH;
END
GO
The main issue here is that the SQLCLR Stored Procedure is being called from within a Trigger. A Trigger always runs within the context of a Transaction (to bind it to the DML operation that initiated the Trigger). A Trigger also implicitly sets XACT_ABORT to ON which cancels the Transaction if any error occurs. This is why none of the logging statements persist when an exception is thrown: the Transaction is auto-rolled-back, taking with it any changes made in the same Session, including the logging statements (because the Context Connection is the same Session), as well as the original DML statement.
You have three fairly simple options, though they leave you with an overall architectural problem, or a not-so-difficult-but-a-little-more-work option that solves the immediate issue as well as the larger architectural problem. First, the three simple options:
You can execute SET XACT_ABORT OFF; at the beginning of the Trigger. This will allow the TRY ... CATCH construct to work as you are expecting it to. HOWEVER, this also shifts the responsibility to you to issue a ROLLBACK (usually in the CATCH block), unless you want the original DML statement to succeed no matter what, even if the Web Service calls and logging fail. Of course, if you issue a ROLLBACK, then none of the logging statements will persist, even if the Web Service still registers all of the calls that were successful, if any were.
You can leave SET XACT_ABORT alone and use a regular / external connection to SQL Server. A regular connection will be an entirely separate Connection and Session, hence it can operate independently with regard to the Transaction. Unlike the SET XACT_ABORT OFF; option, this would allow the Trigger to operate "normally" (i.e. any error would roll back any changes made natively in the Trigger as well as the original DML statement) while still allowing the logging INSERT statements to persist (since they were made outside of the local Transaction).
You are already calling a Web Service so the Assembly already has the necessary permissions to do this without making any additional changes. You just need to use a proper connection string (there are a few errors in your syntax), probably something along the lines of:
connectionString = @"Trusted_Connection=True; Server=(local)\AARONSQLSERVER; Database=Demo409; Enlist=False;";
The "Enlist=False;" part (scroll to the far right) is very important: without it you will continue to get the "Transaction context in use by another session" error.
If you want to stick with the Context Connection (it is a little faster) and allow for any errors outside of the Web Service to roll-back the original DML statement and all logging statements, while ignoring errors from the Web Service, or even from the logging INSERT statements, then you can simply not re-throw the exception in the catch block of CallInsertTruckService. You could instead set a variable to indicate a return code. Since this is a Stored Procedure, it can return SqlInt32 instead of void. Then you can get that value by declaring an INT variable and including it in the EXEC call as follows:
EXEC @ReturnCode = dbo.SLX_CallInsertTruckService ...;
Just declare a variable at the top of CallInsertTruckService and initialize it to 0. Then set it to some other value in the catch block. And at the end of the method, include a return _ReturnCode;.
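A minimal sketch of that shape, reusing the parameter names from the question (the body is illustrative only, not the actual implementation):

```csharp
[SqlProcedure()]
public static SqlInt32 SLX_CallInsertTruckService(
    SqlString id, SqlString code, SqlString vinNumber, SqlString licPlateNo)
{
    int returnCode = 0; // 0 = success
    try
    {
        // ... existing web service call and logging go here ...
    }
    catch (Exception)
    {
        // Swallow the error so the Trigger's Transaction is not doomed,
        // but report the failure to the caller via the return code.
        returnCode = 1;
    }
    return returnCode;
}
```

The Trigger can then inspect @ReturnCode after the EXEC and decide whether to continue, log, or raise its own error.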
That being said, no matter which of those choices you pick, you are still left with two fairly large problems:
The DML statement and its system-initiated Transaction are impeded by the Web Service calls. The Transaction will be left open for much longer than it should be, and this could at the very least increase blocking related to the Vehicle Table. While I am certainly an advocate of doing Web Service calls via SQLCLR, I would strongly recommend against doing so within a Trigger.
If each VehicleID that is inserted should be passed over to the Web Service, then if there is an error in one Web Service call, the remaining VehicleIDs will be skipped, and even if they aren't (option #3 above would continue processing the rows in @tmp) then at the very least the one that just had the error won't ever be retried later.
Hence the ideal approach, which solves these two rather important issues as well as the initial logging issue, is to move to a disconnected, asynchronous model. You can set up a queue table to hold the Vehicle info to process based on each INSERT. The Trigger would do a simple:
INSERT INTO dbo.PushToSLXQueue (VehicleID, Code, VINNumber, LicPlateNo)
SELECT VehicleID, Code, VINNumber, LicPlateNo
FROM INSERTED;
Then create a Stored Procedure that reads an item from the queue table, calls the Web Service, and if successful, then deletes that entry from the queue table. Schedule this Stored Procedure from a SQL Server Agent job to run every 10 minutes or something like that.
If there are records that will never process, then you can add a RetryCount column to the queue table, default it to 0, and upon the Web Service getting an error, increment RetryCount instead of removing the row. Then you can update the "get entry to process" SELECT query to include WHERE RetryCount < 5 or whatever limit you want to set.
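The "get entry to process" query in that scheduled Stored Procedure might then look something like this (table and column names follow the sketch above and are assumptions, as is the choice of ordering column):

```sql
SELECT TOP (1) VehicleID, Code, VINNumber, LicPlateNo
FROM dbo.PushToSLXQueue
WHERE RetryCount < 5      -- skip entries that keep failing
ORDER BY VehicleID;       -- or order by a queued-date column, oldest first
```

On success the job deletes the row; on failure it runs UPDATE ... SET RetryCount = RetryCount + 1 for that VehicleID and moves on.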
There are a few issues here, with various levels of impact:
Why is id a BIGINT in the T-SQL code yet a string in the C# code?
Just FYI, the WHILE (@counter < @maxCounter) loop is inefficient and error-prone compared to using an actual CURSOR. I would get rid of the @tmp table variable and @maxCounter.
At the very least, change SELECT @maxCounter = Count(*) FROM INSERTED GROUP BY VehicleID to be just SET @maxCounter = @@ROWCOUNT; ;-). But swapping out for a real CURSOR would be best.
If the CallInsertTruckService(string id, string code, string vinNumber, string licPlateNo) signature is the actual method decorated with [SqlProcedure()], then you really should be using SqlString instead of string. Get the native string value from each parameter using the .Value property of the SqlString parameter. You can then set the proper size using the [SqlFacet()] attribute as follows:
[SqlFacet(MaxSize=24)] SqlString vinNumber
For more info on working with SQLCLR in general, please see the series that I am writing on this topic over at SQL Server Central: Stairway to SQLCLR (free registration is required to read content on that site).
I have a C# method that executes a SQL job. It executes the job successfully, and the code works perfectly.
And I'm using standard SQL stored procedure msdb.dbo.sp_start_job for this.
Here is my code..
public int ExcecuteNonquery()
{
var result = 0;
using (var execJob =new SqlCommand())
{
execJob.CommandType = CommandType.StoredProcedure;
execJob.CommandText = "msdb.dbo.sp_start_job";
execJob.Parameters.AddWithValue("@job_name", "myjobname");
using (_sqlConnection)
{
if (_sqlConnection.State == ConnectionState.Closed)
_sqlConnection.Open();
sqlCommand.Connection = _sqlConnection;
result = sqlCommand.ExecuteNonQuery();
if (_sqlConnection.State == ConnectionState.Open)
_sqlConnection.Close();
}
}
return result;
}
Here is the sp which is executed inside the job:
ALTER PROCEDURE [Area1].[Transformation]
AS
BEGIN
SET NOCOUNT ON;
SELECT NEXT VALUE FOR SQ_COMMON
-- Transform Master Data
exec [dbo].[sp_Transform_Address];
exec [dbo].[sp_Transform_Location];
exec [dbo].[sp_Transform_Product];
exec [dbo].[sp_Transform_Supplier];
exec [dbo].[sp_Transform_SupplierLocation];
-- Generate Hierarchies and Product References
exec [dbo].[sp_Generate_HierarchyObject] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_HierarchyObject] 'Area1',RMDemand,2;
exec [dbo].[sp_Generate_Hierarchy] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_Hierarchy] 'Area1',RMDemand,2;
exec [dbo].[sp_Generate_ProductReference] 'Area1',FGDemand,1;
exec [dbo].[sp_Generate_ProductReference] 'Area1',RMDemand,2;
-- Transform Demand Allocation BOM
exec [Area1].[sp_Transform_FGDemand];
exec [Area1].[sp_Transform_FGAllocation];
exec [Area1].[sp_Transform_RMDemand];
exec [Area1].[sp_Transform_RMAllocation];
exec [Area1].[sp_Transform_BOM];
exec [Area1].[sp_Transform_RMDemand_FK];
-- Transform Purchasing Document Data
exec [dbo].[sp_Transform_PurchasingDoc];
exec [dbo].[sp_Transform_PurchasingItem];
exec [dbo].[sp_Transform_ScheduleLine];
exec [dbo].[sp_CalculateRequirement] 'Area1'
exec [dbo].[sp_Create_TransformationSummary] 'Area1'
-- Trauncate Integration Tables
exec [dbo].[sp_TruncateIntegrationTables] 'Area1'
END
The problem is that whether the job executes successfully or not, it always returns -1. How can I identify whether the job executed successfully?
After running msdb.dbo.sp_start_job the return code is mapped to an output parameter. You have the opportunity to control the parameter's name prior to execution:
public int StartMyJob( string connectionString )
{
using (var sqlConnection = new SqlConnection( connectionString ) )
{
sqlConnection.Open( );
using (var execJob = sqlConnection.CreateCommand( ) )
{
execJob.CommandType = CommandType.StoredProcedure;
execJob.CommandText = "msdb.dbo.sp_start_job";
execJob.Parameters.AddWithValue("@job_name", "myjobname");
execJob.Parameters.Add( "@results", SqlDbType.Int ).Direction = ParameterDirection.ReturnValue;
execJob.ExecuteNonQuery();
return ( int ) execJob.Parameters["@results"].Value;
}
}
}
You need to know the datatype of the return code to do this - and for sp_start_job, it's SqlDbType.Int.
However, this is only the result of starting the job, which is worth knowing, but isn't the result of running your job. To get the result of running your job, you can periodically execute:
msdb.dbo.sp_help_job @job_name
One of the columns returned by the procedure is last_run_outcome and probably contains what you're really interested in. It will be 5 (unknown) while it's still running.
A job is usually a number of steps, where each step may or may not be executed according to the outcome of previous steps. Another procedure, sp_help_jobhistory, supports a lot of filters to specify which specific invocation(s) and/or steps of the job you're interested in.
SQL Server likes to think about jobs as scheduled work - but there's nothing to keep you from just starting a job ad hoc - although it doesn't really provide much support for correlating your ad-hoc execution with an instance in the job history. Dates are about as good as it gets (unless somebody knows a trick I don't).
I've seen setups where the job is created ad hoc just prior to running it, so the current ad-hoc execution is the only execution returned. But you end up with a lot of duplicate or near-duplicate jobs lying around that are never going to be executed again - something you'll have to plan on cleaning up afterwards, if you go that route.
A note on your use of the _sqlConnection variable: you don't want to do that. Your code disposes of it, but it was apparently created elsewhere before this method gets called. That's bad juju. You're better off creating the connection and disposing of it in the same method. Rely on SQL connection pooling to make the connection fast - it's probably already turned on.
Also - in the code you posted - it looks like you started with execJob but switched to sqlCommand - and kinda messed up the edit. I assumed you meant execJob all the way through - and that's reflected in the example.
From MSDN about SqlCommand.ExecuteNonQuery Method:
For UPDATE, INSERT, and DELETE statements, the return value is the number of rows affected by the command. When a trigger exists on a table being inserted or updated, the return value includes the number of rows affected by both the insert or update operation and the number of rows affected by the trigger or triggers. For all other types of statements, the return value is -1. If a rollback occurs, the return value is also -1.
In this line:
result = sqlCommand.ExecuteNonQuery();
You want to get the number of rows affected by the command and save it to an int variable, but since the statement is not an INSERT, UPDATE, or DELETE, it returns -1. If you test it with INSERT, DELETE, or UPDATE statements you will get the correct result.
By the way if you want to get the number of rows affected by the SELECT command and save it to an int variable you can try something like this:
select count(*) from jobs where myjobname = @myjobname
And then use ExecuteScalar to get the correct result:
result = (int)execJob.ExecuteScalar();
You need to run the stored procedure msdb.dbo.sp_help_job:
private int CheckAgentJob(string connectionString, string jobName) {
SqlConnection dbConnection = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand();
command.CommandType = System.Data.CommandType.StoredProcedure;
command.CommandText = "msdb.dbo.sp_help_job";
command.Parameters.AddWithValue("@job_name", jobName);
command.Connection = dbConnection;
using (dbConnection)
{
dbConnection.Open();
using (command){
SqlDataReader reader = command.ExecuteReader();
reader.Read();
int status = reader.GetInt32(21); // Column 21 = last_run_outcome (19 = last run date, 20 = last run time)
reader.Close();
return status;
}
}
}
enum JobState { Failed = 0, Succeeded = 1, Retry = 2, Cancelled = 3, Unknown = 5};
Keep polling while the outcome is Unknown, until you get an answer. Let's hope it is Succeeded :-)
I'm using Rob Conery's Massive for database access. I want to wrap a transaction around a couple of inserts but the second insert uses the identity returned from the first insert. It's not obvious to me how to do this in a transaction. Some assistance would be appreciated.
var commandList = new List<DbCommand>
{
contactTbl.CreateInsertCommand(new
{
newContact.Name,
newContact.Contact,
newContact.Phone,
newContact.ForceChargeThreshold,
newContact.MeterReadingMethodId,
LastModifiedBy = userId,
LastModifiedDate = modifiedDate,
}),
branchContactTbl.CreateInsertCommand(new
{
newContact.BranchId,
ContactId = ????, <-- how to set Id as identity from previous command
}),
};
Make a query between those two inserts; this method from Massive may be useful:
public object Scalar(string sql, params object[] args) {
object result = null;
using (var conn = OpenConnection()) {
result = CreateCommand(sql, conn, args).ExecuteScalar();
}
return result;
}
Your sql will be = "select scope_identity()"
UPDATE 2013/02/26
Looking again at the Massive code, there is no reliable way to retrieve the last inserted ID.
The code above will work only when the connection that makes "select scope_identity()" is pooled. (It must be the same connection that made the insert.)
Massive's table.Insert(..) method returns a dynamic that contains an ID field, which is filled with "SELECT @@IDENTITY". It gets the last inserted ID from global scope, which is an obvious bug (apparent in multithreading scenarios).
Can you just do it in a stored proc? Then you can use SCOPE_IDENTITY(), or better yet the OUTPUT clause, to get the value(s) you need. And all the inserts to all the tables are in one transaction, which can be rolled back if any of them fail.
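For example, a stored procedure could capture the new identity with the OUTPUT clause and use it for the second insert in the same transaction (table and column names here are illustrative, not from Massive):

```sql
BEGIN TRANSACTION;

DECLARE @ids TABLE (ContactId int);

INSERT INTO Contact (Name, Phone)
OUTPUT INSERTED.ContactId INTO @ids   -- capture the identity of the new row
VALUES (@Name, @Phone);

INSERT INTO BranchContact (BranchId, ContactId)
SELECT @BranchId, ContactId FROM @ids;

COMMIT TRANSACTION;
```

Unlike @@IDENTITY, the OUTPUT clause is scoped to the statement that produced the row, so it is safe under concurrency.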
TransactionScope TransactionABC = new TransactionScope();
try
{
context.Connection.Open();
{
context.ExecuteCommand("insert into test (test) values (1)");
context.SubmitChanges();
context.ExecuteCommand("savepoint test");
context.ExecuteCommand("insert into test (test) values (2)");
context.SubmitChanges();
context.ExecuteCommand("rollback to test");
}
TransactionABC.Complete();
TransactionABC.Dispose();
}
catch (Exception ec)
{
MessageBox.Show(" ", ec.Message);
}
finally
{
context.Connection.Close();
}
It works, but only with ExecuteCommand. I want to use a function, because I can't see what happens in the savepoint!
I would advise simply not to. It isn't necessarily what you want to hear, but especially when mixing with TransactionScope, save-points aren't a great idea. TransactionScopes can be nested, but the first rollback dooms everything, and the commit only happens at the outermost transaction.
In most scenarios I can think of, it is better to sanitise the data first. You can (and should) also use constraints as a safety net, but if you hit that safety net, assume big problems and roll back everything.
Example of nested transactions:
public void DebitCreditAccount(int accountId, decimal amount, string reference)
{
using(var tran = new TransactionScope())
{
// confirm account exists, and update estimated balance
var acc = db.Accounts.Single(a => a.Id == accountId);
acc.BalanceEstimate += amount;
// add a transaction (this defines the **real** balance)
db.AccountTransactions.InsertOnSubmit(
new AccountTransaction {
AccountId = accountId, Amount = amount,
Code = amount >= 0 ? "C" : "D",
Reference = reference });
db.SubmitChanges();
tran.Complete();
}
}
public void Transfer(int fromAccountId, int toAccountId,
decimal amount, string reference)
{
using(var tran = new TransactionScope())
{
DebitCreditAccount(fromAccountId, -amount, reference);
DebitCreditAccount(toAccountId, amount, reference);
tran.Complete();
}
}
In the above, DebitCreditAccount is atomic - we'll either add the account-transaction and update the estimated balance, or neither. If this is the only transaction, then it is committed at the end of this method.
However, in the Transfer method, we create another outer transaction; we'll either perform both DebitCreditAccount, or neither. Here, the inner tran.Complete() (in DebitCreditAccount) doesn't commit the db-transaction, as there is an outer transaction. It simply says "I'm happy". Conversely, though, if either of the inner transactions is aborted (Dispose() called without Complete()), then the outer transaction is rolled back immediately, and that transaction will refuse any additional work. The outer transaction is committed only if no inner transaction was aborted, and Complete() is called on the outer transaction.
How about ExecuteQuery?
With DataContext.ExecuteQuery, you send text into the database, just like ExecuteCommand - but you can get query results back from that text.
IEnumerable<int> results = ExecuteQuery<int>(@"
DECLARE @Table TABLE(Id int)
INSERT INTO @Table SELECT {0}
INSERT INTO @Table SELECT {1}
SELECT Id FROM @Table", 101, -101);
IEnumerable<Customer> results = ExecuteQuery<Customer>( @"
Rollback transaction
SELECT *
FROM Customer
WHERE ID = {0}", myId);