We have a backend job written in C# that uses threading and spawns a new thread every second (we cannot increase this interval).
Each thread reads data from the database via a stored procedure and sends a request to an interfacing system.
Currently we are facing an issue where the same data is pulled by multiple processes, and we have found deadlocks in our log table. Please suggest how we can implement locking so that a given set of data is processed by only a single process, while other processes pick up different data.
DB: SQL Server
SP Code: Given below
ALTER PROCEDURE [Migration]
AS
BEGIN
declare @ConversationID varchar(200) = '', @Group varchar(100) = ''
-- Select records with Flag 0
select top 1
    @ConversationID = ConversationID,
    @Group = [Group]
from [Migration]
where NewCode = (select top 1 NewCode
                 from [Migration]
                 where Flag = 0
                 group by NewCode, [Group], InsertDate)
  and Flag = 0;
select * from [Migration] where ConversationID = @ConversationID and [Group] = @Group;
BEGIN TRANSACTION
BEGIN TRY
update [Migration] set Flag = 1
where ConversationID = @ConversationID and [Group] = @Group and Flag = 0;
COMMIT TRANSACTION
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION
insert into Logs(ErrorType,Description) values('MigrationError',ERROR_MESSAGE());
END CATCH
END
The typical deadlock message is something like
Transaction (Process ID 100) was deadlocked on lock resources with
another process and has been chosen as the deadlock victim. Rerun the
transaction.
However, it's hard to see where your query would get deadlocked the traditional way - I can see it being blocked, but not deadlocked, as you typically need more than one update to be occurring within the same transaction. Indeed, your transaction above is only a single command - the transaction around it is fairly meaningless as each command is run in its own implicit transaction anyway.
Instead, I'm guessing that your deadlock message is, instead, something like
Transaction (Process ID 100) was deadlocked on lock | communication
buffer resources with another process and has been chosen as the
deadlock victim. Rerun the transaction.
Note the difference in what the deadlock was on - in this case, the lock/communication buffer resources.
These deadlocks are related to issues with parallelism; see (for example) What does "lock | communication buffer resources" mean? for more information.
You will tend to get this if the SP does parallel processing and you're running this SP many times in a row. Setting MAXDOP 1 and/or improving your indexing often fixes such issues - but obviously you will need to do your research on the best approach in your own specific circumstances.
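For illustration, a hedged sketch of what the hint looks like at statement level, based on the update in the question (adapt to your own procedure):

```sql
-- MAXDOP 1 forces this statement to run single-threaded, which removes
-- the intra-query parallelism behind the
-- "lock | communication buffer resources" style of deadlock.
UPDATE [Migration]
SET Flag = 1
WHERE ConversationID = @ConversationID
  AND [Group] = @Group
  AND Flag = 0
OPTION (MAXDOP 1);
```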
Regarding the question itself - how to make it so that only one thing can deal with a given row at a time?
There are numerous methods. My usual methods involve statements with OUTPUT clauses. The advantage of those is that you can do a data insert/update as well as record what you have inserted or updated within the same command.
Here is an example based on your current procedure - it sets the flag to -1 for the rows it is dealing with.
CREATE TABLE #ActiveMigrations (ConversationID varchar(200), CGroup varchar(100));
-- Select records with Flag 0
WITH TopGroup AS (
select top 1 ConversationID, CGroup, Flag
from [Migration]
where NewCode = (select top 1 NewCode
from [Migration]
where Flag = 0 group by NewCode, CGroup, InsertDate
)
and Flag = 0
)
UPDATE TopGroup
SET Flag = -1
OUTPUT inserted.ConversationID, inserted.CGroup
INTO #ActiveMigrations (ConversationID, CGroup);
In the above, you have the ConversationID and Group in the temporary table (rather than in variables) for further processing, and the flags set to -1 so the rows are not picked up by other processing.
A similar version can be used to track (in a separate table) which are being operated on. In this case, you create a scratch table that includes active conversations/groups e.g.,
-- Before you switch to the new process, create the table as below
CREATE TABLE ActiveMigrations (ConversationID varchar(200), CGroup varchar(100));
You can code this with OUTPUT clauses as above, and (say) include this table in your initial SELECT as a LEFT OUTER JOIN to exclude any active rows.
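A sketch of that LEFT OUTER JOIN, using the ActiveMigrations table created above (column names follow the earlier example):

```sql
-- Rows with a match in ActiveMigrations are being worked on elsewhere,
-- so keep only the rows where the join found nothing.
SELECT TOP 1 m.ConversationID, m.CGroup
FROM [Migration] AS m
LEFT OUTER JOIN ActiveMigrations AS a
    ON  a.ConversationID = m.ConversationID
    AND a.CGroup = m.CGroup
WHERE m.Flag = 0
  AND a.ConversationID IS NULL;   -- i.e. not currently active
```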
A simpler version is to use your SELECT as above to get a @ConversationID and @CGroup, then try to insert them into the table
INSERT INTO ActiveMigrations (ConversationID, CGroup)
SELECT @ConversationID, @CGroup
WHERE NOT EXISTS
    (SELECT 1 FROM ActiveMigrations WHERE ConversationID = @ConversationID AND CGroup = @CGroup);
IF @@ROWCOUNT = 1
BEGIN
...
The key thing to think about with these is: if you had the command running twice simultaneously, how could the two runs interact badly?
To be honest, your code itself
update [Migration]
set Flag = 1
where ConversationID = @ConversationID
and [Group] = @Group
and Flag = 0;
protects itself because a) it's one command only, and b) it has the Flag=0 protection on the end.
It does waste resources, though: the second concurrent process does all the work, then gets to the end and finds it has nothing to do because the first process has already updated the row. For something that runs regularly and is likely to hit concurrency issues, you should probably code it more defensively.
Related
What is the best way to test if rows are being locked while they are being updated?
I have a query which selects the top x records of a table and updates them. Multiple worker threads will be calling the same query, so I want to ensure the rows are locked, and if they are locked, I want an error of some sort thrown so that I can handle it accordingly. However, I don't seem to be able to raise an exception that I can catch in .NET (or in SQL, for that matter).
The query looks like:
BEGIN TRANSACTION
UPDATE MyTable WITH (ROWLOCK)
SET x = @X,
    y = @Y
WHERE ID IN (SELECT TOP 50 ID
             FROM MyTable
             WHERE Z IS NULL)
SELECT TOP 50 x
FROM MyTable
WHERE x = @W
COMMIT TRANSACTION
I've tried stepping through the debugger in SQL to run just the BEGIN TRANSACTION and the UPDATE, and then calling the same query from my .NET application, expecting an error - but it just ran fine in .NET.
So my question is how can I generate an error so that I know the records are currently being updated? I want to take a specific action when this occurs i.e. retry in x milliseconds for example.
Thanks.
Based on your recent comments (please add this information to your question body), you just want to make sure that each thread only “gets” rows that are not made available to the other threads. Like picking tasks from a table of pending tasks, and making sure no task is picked up by two threads.
You are overthinking the locking. Your problem is not something that requires fiddling with the SQL locking mechanism. You might fiddle with locking if you need to tune for performance reasons but you’re far from being able to establish if that is needed. I wouldn’t even bother with lock hints at this stage.
What you want to do is have a field in the row that indicates whether the row has been taken, and by whom. Your made-up sample T-SQL doesn’t use anything consistently, but the closest thing is the Z column.
You need to select rows that have a value of NULL (not taken yet) in Z; you are clearly on that track already. This field could be a BIT Yes/No and it can be made to work (look up the OUTPUT clause of UPDATE to see how you can pick up which rows were selected), but I suspect you will find it far more useful, for tracing/debugging purposes, to be able to identify rows taken together by looking at the database alone. I would use a column that can hold a unique value that cannot be used by any other thread at the same time.
There are a number of approaches. You could use a (client process) Thread ID or similar, but that will repeat over time which might be undesirable. You could create a SQL SEQUENCE and use those values, which has the nice feature of being incremental, but that makes the SQL a little harder to understand. I’ll go with a GUID for illustration purposes.
DECLARE @BatchId AS UNIQUEIDENTIFIER
SET @BatchId = NEWID()
UPDATE MyTable
SET x = @X,
    y = @Y,
    Z = @BatchId
WHERE ID IN (SELECT TOP 50 ID
             FROM MyTable
             WHERE Z IS NULL)
Now only your one thread can use those rows (assuming, of course, that no thread cheats and violates the code pattern). Notice that I didn't even open an explicit transaction: in SQL Server, UPDATEs are transactionally atomic by default (they run inside an implicit transaction).

No thread can pick those rows again, because no thread (by default) is allowed to update or even see those rows until the UPDATE has been committed in its entirety (for all rows). Any thread that tries to run the UPDATE at the same time either gets different unassigned rows (depending on your locking hints and how cleverly you select the rows to pick - that's part of the "advanced" performance tuning), or is paused for a few milliseconds waiting for the first UPDATE to complete.
That’s all you need. Really. The rows are yours and yours only:
SELECT @BatchId AS BatchId, x
FROM MyTable
WHERE Z = @BatchId
Now your code gets the data from its dedicated rows, plus the unique ID which it can use later to reference the same rows, if you need to (take out the BatchId from the return set if you truly don’t need it).
If my reading of what you are trying to do is correct, you will probably want to flag the rows when your code is done by setting another field to a flag value or a timestamp or something. That way you will be able to tell if rows were “orphaned” because they were taken for processing but the process died for any reason (in which case they probably need to be made available again by setting Z to NULL again).
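A hedged sketch of that cleanup; the TakenAt column and the 30-minute threshold are assumptions for illustration, not part of the original schema:

```sql
-- Assumes each claimed row also records when it was taken (TakenAt).
-- Batches claimed long ago but never completed are presumed orphaned
-- and made available again.
UPDATE MyTable
SET Z = NULL,
    TakenAt = NULL
WHERE Z IS NOT NULL
  AND TakenAt < DATEADD(MINUTE, -30, GETDATE());
```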
I'm going to start off by saying I am pretty sure this is not possible. Googling has not turned up anyone asking this question, so I am pessimistically hopeful.
I am using a SqlTransaction to connect to a database, and allowing several consumers to use this transaction before I close it. Is it possible to determine whether or not the transaction was only used to read data or to read/write data, even when stored procedures may have been used? Does some property or SQL method (other than performing some unholy diff) exist that can check this?
sys.dm_exec_sessions has a column writes. You can query it like this:
SELECT writes FROM sys.dm_exec_sessions WHERE session_id = @@SPID
That will impose some overhead, though. I believe that DMV is O(N) in the number of sessions running, so the system slows down as the number of sessions grows; eventually it becomes a case of negative scaling.
Note that this is a per-session value. If you have multiple transactions per session (per opening of a SqlConnection object), the writes are tracked cumulatively. You'd need to account for that, or use the idea in the other answer.
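One way to account for the cumulative value is to snapshot the counter before the transaction and compare afterwards; a minimal sketch:

```sql
-- Snapshot the session's cumulative write count before the work.
DECLARE @WritesBefore bigint;
SELECT @WritesBefore = writes
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;

-- ... run the transaction here ...

-- If the counter has grown, this unit of work performed writes.
SELECT CASE WHEN writes > @WritesBefore THEN 1 ELSE 0 END AS DidWrite
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```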
You can use the sys.dm_ views to check Transaction status.
When a transaction is opened, but no work is done (select only), it doesn't register, but if you do any work, the transaction log size is increased.
Try this first... note your SPID.
USE tempdb
begin tran
create table dave (id int)
insert into dave (id) select 1
-- rollback tran
Leave the transaction open, then look at it with this (replace your spid):
select database_transaction_log_bytes_used, *
from sys.dm_tran_database_transactions dt
inner join sys.dm_tran_active_transactions at on (at.transaction_id = dt.transaction_id)
inner join sys.dm_tran_session_transactions st on (st.transaction_id = dt.transaction_id)
where (dt.database_id = DB_ID('tempdb')) and (st.is_user_transaction=1)
and session_id=83
Then rollback the original.
Then try this:
USE tempdb
begin tran
select getdate()
-- rollback tran
And run the above on the sys.dm_ views...
The view is blank until there is something logged in the transaction. Selects do not get logged.
Don't forget to roll it all back.
So you could write a proc/view that you query before closing your transaction, assuming you know the SPID (easy enough to get with SELECT @@SPID), that returns the log size of the active session.
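A minimal sketch of such a helper (the procedure name is made up for illustration):

```sql
CREATE PROCEDURE dbo.usp_CurrentTranLogBytes  -- hypothetical name
AS
BEGIN
    -- Returns the log bytes used by the caller's own open transaction;
    -- zero rows means nothing has been logged (reads only).
    SELECT dt.database_transaction_log_bytes_used
    FROM sys.dm_tran_database_transactions AS dt
    INNER JOIN sys.dm_tran_session_transactions AS st
        ON st.transaction_id = dt.transaction_id
    WHERE st.session_id = @@SPID;
END
```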
Hmm, this is only an idea. I am thinking of something like creating a table in your database called CountRead, and another table CountWrite, and then modifying your queries to write to the read table if the query is a SELECT and to the write table if it is an INSERT or similar. Then, when you want to check whether the transaction only read or also wrote, you only need to read these tables (if they already contain data, you can either clear them when your application starts, or record the counts at the beginning).
Then, with a simple SELECT against these tables, you will be able to check whether only read queries were used, or only write queries, or both, and how many times.
As I said, it's only an idea. ;) I hope this helps, or at least gives you some idea of how to do it.
I am writing a web application in Visual Studio 2010 using C#. The web app executes complex SQL Server 2008 statements that sometimes result in a deadlock if the same .aspx page is requested more than once at the same time. The proposed solution is to prevent these deadlocks by SQL Server means, but my issue is that I don't really understand SQL Server locking well - unlike C#, which I know much better.
So I'm wondering, what is the downside of me using locks in the ASP.NET page (or locks in C#, or Windows named mutex) instead of doing the locking through SQL server to prevent these deadlocks?
PS. The SQL Server database in question is used only by this web application.
EDIT: The following is C# code that executes SQL statements:
int iNumRows = 0;
using (SqlConnection cn = new SqlConnection(strConnection))
{
cn.Open();
using (SqlCommand cmd = new SqlCommand(strSQL, cn))
{
//Use C# lock here
iNumRows = Convert.ToInt32(cmd.ExecuteScalar());
//Release C# lock here
}
}
And here's a sample of SQL (that in reality is dynamically composed by C# script):
SET XACT_ABORT ON;
BEGIN TRANSACTION;
DELETE FROM [dbo].[t_Log_2]
WHERE [idtm]<'2011-03-12 08:41:57';
WITH ctx AS(
SELECT MIN([idtm]) AS mdIn,
MAX([odtm]) AS mdOut
FROM [dbo].[t_Log_2]
WHERE [type] = 0
AND [state] = 0
AND [huid] = N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4'
AND [odtm] >= '2013-03-11 06:33:32'
AND [idtm] <= '2013-03-11 06:43:12'
)
INSERT INTO [dbo].[t_Log_2]
([oid],[idtm],[odtm],[type],[state],[huid],
[cnm],[cmdl],[batt],[dvtp0],[dvtp1])
SELECT
2,
CASE WHEN mdIn IS NOT NULL
AND mdIn < '2013-03-11 06:33:32'
THEN mdIn
ELSE '2013-03-11 06:33:32'
END,
CASE WHEN mdOut IS NOT NULL
AND mdOut > '2013-03-11 06:43:12'
THEN mdOut
ELSE '2013-03-11 06:43:12'
END,
0,
0,
N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4',
null,
null,
0,
1,
null
FROM ctx
SELECT ROWCOUNT_BIG()
DELETE FROM [dbo].[t_Log_2]
WHERE [type] = 0
AND [state] = 0
AND [huid] = N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4'
AND [odtm] >= '2013-03-11 06:33:32'
AND [idtm] <= '2013-03-11 06:43:12'
AND [id] <> SCOPE_IDENTITY()
DELETE FROM [dbo].[t_Log_2]
WHERE [type] = 0
AND [huid] = N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4'
AND [idtm] >= (SELECT [idtm] FROM [dbo].[t_Log_2]
WHERE [id] = SCOPE_IDENTITY())
AND [odtm] <= (SELECT [odtm] FROM [dbo].[t_Log_2]
WHERE [id] = SCOPE_IDENTITY())
AND [id] <> SCOPE_IDENTITY()
;WITH ctx1 AS(
SELECT [idtm] AS dI
FROM [dbo].[t_Log_2]
WHERE [id] = SCOPE_IDENTITY()
)
UPDATE [dbo].[t_Log_2]
SET [odtm] = ctx1.dI
FROM ctx1
WHERE [id] <> SCOPE_IDENTITY()
AND [type] = 0
AND [huid] = N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4'
AND [idtm] < ctx1.dI
AND [odtm] > ctx1.dI
;WITH ctx2 AS(
SELECT [odtm] AS dO
FROM [dbo].[t_Log_2]
WHERE [id] = SCOPE_IDENTITY()
)
UPDATE [dbo].[t_Log_2]
SET [idtm] = ctx2.dO
FROM ctx2
WHERE [id] <> SCOPE_IDENTITY()
AND [type] = 0
AND [huid] = N'18ef4d56-6ef3-906a-a711-88d1bd6ab2d4'
AND [idtm] < ctx2.dO
AND [odtm] > ctx2.dO
COMMIT TRANSACTION;
SET XACT_ABORT OFF
what is the downside of me using locks in the ASP.NET page (or locks in C#, or Windows named mutex) instead of doing the locking through SQL server to prevent these deadlocks?
Instead of causing deadlocks you will cause livelocks.
A deadlock occurs when the wait graph contains a cycle (A waits on B, B waits on A). SQL Server periodically inspects all the wait graphs and looks for cycles. When such a cycle is detected, it is broken by choosing a victim and aborting its transaction.
If you move some of these locks outside of the SQL Server controlled realm, i.e. into process mutexes, critical sections, C# events or whatever, the wait graph cycles will still occur, but now the cycle completes through the app, so it is undetectable by SQL Server (A waits for B in SQL, but B waits for A in the app). Since the deadlock monitor will not see a cycle, it will not run the deadlock-resolution algorithm (choose a victim, abort its transaction), and the deadlock will stay on forever. Congratulations: now your application simply hangs instead of raising a deadlock exception!
You don't have to take my word for it; others more experienced have already been burned by this issue and learned the hard way, but fortunately wrote about it so you can learn the easy way. This very site you're reading is an example.
Solving deadlocks in SQL Server is fairly easy once you understand the issue. If you capture and attach the deadlock graph (the XML, not the picture!), along with the exact definition of your tables, perhaps we can help. Alas, you have already ignored such a request, so I guess the only question left is: would you like more rope?
Without enough detail I could not find where this really deadlocks, but I can guess it is probably a key-range deadlock on an index of table t_Log_2: since you have both deletes and updates, they are definitely not hitting the same row, but they can hit the same key range - one process can hold range A and request range B while another process holds range B and requests range A. You can use SQL Profiler to trace the deadlock and see exactly where it happened. Or, simply, if it doesn't hurt your performance too much, you can set the transaction isolation level to REPEATABLE READ or even SERIALIZABLE.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
....
BEGIN TRANSACTION
....
I have an insert statement that was deadlocking when issued through LINQ, so I placed it in a stored procedure in case the surrounding statements were affecting it.
Now the stored procedure is deadlocked. Something about the insert statement is locking itself: according to the Server Profiler, two of those insert statements were waiting for the PK index to be freed.
With the code in the stored procedure, the profiler now states that this stored procedure has deadlocked with another instance of the same stored procedure.
Here is the code. The select statement is similar to that used by linq when it did its own query. I simply want to see if the item exists and if not then insert it. I can find the system by either the PK or by some lookup values.
SET NOCOUNT ON;
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION SPFindContractMachine
DECLARE @id int;
set @id = (select [m].pkID from Machines as [m]
    WHERE ([m].[fkContract] = @fkContract) AND ((
        (CASE
            WHEN @bByID = 1 THEN
                (CASE
                    WHEN [m].[pkID] = @nMachineID THEN 1
                    WHEN NOT ([m].[pkID] = @nMachineID) THEN 0
                    ELSE NULL
                END)
            ELSE
                (CASE
                    WHEN ([m].[iA_Metric] = @lA) AND ([m].[iB_Metric] = @lB) AND ([m].[iC_Metric] = @lC) THEN 1
                    WHEN NOT (([m].[iA_Metric] = @lA) AND ([m].[iB_Metric] = @lB) AND ([m].[iC_Metric] = @lC)) THEN 0
                    ELSE NULL
                END)
        END)) = 1));
if (@id IS NULL)
begin
    Insert into Machines(fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
    values (@fkContract, @lA, @lB, @lC, GETDATE());
    set @id = SCOPE_IDENTITY();
end
COMMIT TRANSACTION SPFindContractMachine
return @id;
END TRY
BEGIN CATCH
    if @@TRANCOUNT > 0
        ROLLBACK TRANSACTION SPFindContractMachine
END CATCH
Any procedure that follows the pattern:
BEGIN TRAN
check if row exists with SELECT
if row doesn't exist INSERT
COMMIT
is going to run into trouble in production, because there is nothing to prevent two threads doing the check simultaneously and both reaching the conclusion that they should insert. In particular, under the SERIALIZABLE isolation level (as in your case), this pattern is guaranteed to deadlock.
A much better pattern is to use database unique constraints and always INSERT, catching duplicate key violation errors. This is also significantly more performant.
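A sketch of that insert-and-catch pattern, assuming a unique constraint already exists on the natural key (contract + metrics); error 2627 is the unique key violation:

```sql
BEGIN TRY
    INSERT INTO Machines (fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
    VALUES (@fkContract, @lA, @lB, @lC, GETDATE());
    SET @id = SCOPE_IDENTITY();
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 2627  -- duplicate key: the row already exists
        SELECT @id = pkID
        FROM Machines
        WHERE fkContract = @fkContract
          AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC;
    ELSE
        THROW;  -- SQL Server 2012+; use RAISERROR on older versions
END CATCH
```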
Another alternative is to use the MERGE statement:
create procedure usp_getOrCreateByMachineID
    @nMachineId int output,
    @fkContract int,
    @lA int,
    @lB int,
    @lC int,
    @id int output
as
begin
    declare @idTable table (id int not null);
    merge Machines as target
    using (values (@nMachineID, @fkContract, @lA, @lB, @lC, GETDATE()))
        as source (MachineID, ContractID, lA, lB, lC, dteFirstAdded)
    on (source.MachineID = target.MachineID)
    when matched then
        update set @id = target.MachineID
    when not matched then
        insert (ContractID, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
        values (source.ContractID, source.lA, source.lB, source.lC, source.dteFirstAdded)
    output inserted.MachineID into @idTable;
    select @id = id from @idTable;
end
go
create procedure usp_getOrCreateByMetrics
    @nMachineId int output,
    @fkContract int,
    @lA int,
    @lB int,
    @lC int,
    @id int output
as
begin
    declare @idTable table (id int not null);
    merge Machines as target
    using (values (@nMachineID, @fkContract, @lA, @lB, @lC, GETDATE()))
        as source (MachineID, ContractID, lA, lB, lC, dteFirstAdded)
    on (target.iA_Metric = source.lA
        and target.iB_Metric = source.lB
        and target.iC_Metric = source.lC)
    when matched then
        update set @id = target.MachineID
    when not matched then
        insert (ContractID, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
        values (source.ContractID, source.lA, source.lB, source.lC, source.dteFirstAdded)
    output inserted.MachineID into @idTable;
    select @id = id from @idTable;
end
go
This example separates the two cases, since T-SQL queries should never attempt to resolve two different problems in one single query (the result is never optimizable). Since the two tasks at hand (get by machine id and get by metrics) are completely separate, they should be separate procedures, and the caller should call the appropriate one rather than passing a flag. This example shows how to achieve the (probably) desired result using MERGE, but of course a correct and optimal solution depends on the actual schema (table definition, indexes and constraints in place) and on the actual requirements (it is not clear what the procedure is expected to do if the criteria are already matched - should it not output an @id?).
By eliminating the SERIALIZABLE isolation, this is no longer guaranteed to deadlock, but it may still deadlock. Solving the deadlock is, of course, completely dependent on the schema, which was not specified, so a solution to the deadlock cannot actually be provided in this context. There is the sledgehammer of locking all candidate rows (forcing UPDLOCK or even TABLOCKX), but such a solution would kill throughput under heavy use, so I cannot recommend it without knowing the use case.
Get rid of the transaction. It's not really helping you, instead it is hurting you. That should clear up your problem.
How about this SQL? It moves the check for existing data and the insert into a single statement. This way, when two threads are running, they are not deadlocked waiting for each other. At worst, thread two is blocked waiting for thread one, but as soon as thread one finishes, thread two can run.
BEGIN TRY
BEGIN TRAN SPFindContractMachine
INSERT INTO Machines (fkContract, iA_Metric, iB_Metric, iC_Metric, dteFirstAdded)
SELECT @fkContract, @lA, @lB, @lC, GETDATE()
WHERE NOT EXISTS (
    SELECT * FROM Machines
    WHERE fkContract = @fkContract
      AND ((@bByID = 1 AND pkID = @nMachineID)
           OR
           (@bByID <> 1 AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC)))
DECLARE @id INT
SET @id = (
    SELECT pkID FROM Machines
    WHERE fkContract = @fkContract
      AND ((@bByID = 1 AND pkID = @nMachineID)
           OR
           (@bByID <> 1 AND iA_Metric = @lA AND iB_Metric = @lB AND iC_Metric = @lC)))
COMMIT TRAN SPFindContractMachine
RETURN @id
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
    ROLLBACK TRAN SPFindContractMachine
END CATCH
I also changed those CASE statements to ORed clauses, just because they were easier for me to read. If I recall my SQL theory, the ORing might make this query a little slower.
I wonder if adding an UPDLOCK hint to the earlier SELECT(s) would fix this; it should avoid some deadlock scenarios by preventing another SPID getting a read lock on the data you are about to mutate.
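Concretely, that would mean something like this sketch (showing only the metrics branch of the original predicate):

```sql
-- UPDLOCK makes the existence check take an update lock rather than a
-- shared lock, so two sessions cannot both pass the check and then
-- deadlock converting shared locks to exclusive ones on the insert.
SET @id = (SELECT m.pkID
           FROM Machines AS m WITH (UPDLOCK)
           WHERE m.fkContract = @fkContract
             AND m.iA_Metric = @lA
             AND m.iB_Metric = @lB
             AND m.iC_Metric = @lC);
```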
I want to lock one record and then no one may make changes to that record. When I release the lock, then people may change the record.
In the meantime that a record is locked, I want to show the user a warning that the record is locked and that changes are not allowed.
How can I do this?
I've tried all the IsolationLevel levels, but none of them has the behavior I want. Some of the isolation levels wait until the lock is released and then make the change. I don't want this, because updating must not be allowed at all while a record is locked.
What can I do to lock a record and deny all changes?
I use SQL Server 2008
On the assumption that this is MS SQL Server, you probably want UPDLOCK, possibly combined with ROWLOCK (see Table Hints). I'm having trouble finding a decent article that describes the theory, but here is a quick example:
SELECT id From mytable WITH (ROWLOCK, UPDLOCK) WHERE id = 1
This statement will place an update lock on the row for the duration of the transaction (so it is important to be aware of when the transaction will end). As update locks are incompatible with exclusive locks (required to update records), this will prevent anyone from updating this record until the transaction has ended.
Note that other processes attempting to modify this record will be blocked until the transaction completes; however, they will continue with whatever write operation they requested once the transaction has ended (unless they time out or are killed off as a deadlock victim). If you wish to prevent this, your other processes need to use additional hints in order to either abort if an incompatible lock is detected, or skip the record if it has changed.
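For the "skip the record" option, a hedged sketch using the READPAST hint (rows currently locked by another session are simply not returned):

```sql
-- Any row locked by a concurrent transaction is skipped, not waited on,
-- so the caller sees only rows it can safely work with right now.
SELECT id
FROM mytable WITH (READPAST)
WHERE id = 1;
```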
Also, You should not use this method to lock records while waiting for user input. If this is your intention then you should add some sort of "being modified" column to your table instead.
The SQL server locking mechanisms are really only suited for use to preserve data integrity / preventing deadlocks - transactions should generally be kept as short as possible and should certainly not be maintained while waiting for user input.
Sql Server has locking hints, but these are limited to the scope of a query.
If the decision to lock the record is taken in an application, you can use the same mechanisms as optimistic locking and deny any changes to the record from the application.
Use a timestamp or guid as a lock on the record and deny access or changes to the record if the wrong locking key is given. Be careful to unlock records again or you will get orphans
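A sketch of that timestamp/GUID scheme; the table and column names here are assumptions for illustration:

```sql
-- Claim the record: succeeds only if nobody else holds the lock.
UPDATE Records                      -- hypothetical table
SET LockKey = @MyKey                -- e.g. a NEWID() held by this client
WHERE id = @id AND LockKey IS NULL;
-- @@ROWCOUNT = 0 means someone else has it; warn the user.

-- Subsequent changes must present the same key:
UPDATE Records
SET SomeColumn = @NewValue
WHERE id = @id AND LockKey = @MyKey;

-- Release the lock when done (or the record is orphaned):
UPDATE Records
SET LockKey = NULL
WHERE id = @id AND LockKey = @MyKey;
```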
See this duplicate question on SO.
Basically it's:
begin tran
select * from [table] with (holdlock, rowlock) where id = @id
--Here goes your stuff
commit tran
Something like this maybe?
update t
set t.IsLocked = 1
from [table] t
where t.id = @id
Somewhere in the update trigger:
if exists (
select top 1 1
from deleted d
join inserted i on i.id = d.id
where d.IsLocked = 1 and i.RowVersion <> d.RowVersion)
begin
print 'Row is locked'
rollback tran
end
You don't want to wait for the lock to be released; you want to show the message as soon as you encounter a lock. If that is the case, did you try NOWAIT? See Table Hints (Transact-SQL) and SQL Server 2008 Table Hints for more details. To get the benefit of NOWAIT, you need to take locks on edits; search for more details.
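For illustration, a hedged sketch of NOWAIT: instead of blocking, the statement fails immediately with error 1222 ("Lock request time out period exceeded"), which you can catch:

```sql
BEGIN TRY
    SELECT * FROM [table] WITH (NOWAIT) WHERE id = @id;
END TRY
BEGIN CATCH
    IF ERROR_NUMBER() = 1222  -- lock request timed out immediately
        PRINT 'Record is locked by another session';
END CATCH
```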