I want to lock one record so that no one can make changes to it. When I release the lock, people may change the record again.
While a record is locked, I want to show the user a warning that the record is locked and that changes are not allowed.
How can I do this?
I've tried all the IsolationLevel levels, but none of them has the behavior I want. Some of the isolation levels wait until the lock is released and then make the change. I don't want this, because updates should not be allowed at all while a record is locked.
What can I do to lock a record and deny all changes?
I use SQL Server 2008
With the assumption that this is MS SQL Server, you probably want UPDLOCK, possibly combined with ROWLOCK (table hints). I'm having trouble finding a decent article which describes the theory, but here is a quick example:
SELECT id FROM mytable WITH (ROWLOCK, UPDLOCK) WHERE id = 1
This statement will place an update lock on the row for the duration of the transaction (so it is important to be aware of when the transaction will end). As update locks are incompatible with exclusive locks (required to update records), this will prevent anyone from updating this record until the transaction has ended.
Note that other processes attempting to modify this record will be blocked until the transaction completes, but they will continue with whatever write operation they requested once the transaction has ended (unless they time out or are killed off as deadlock victims). If you wish to prevent this, then your other processes need to use additional hints in order to either abort if an incompatible lock is detected, or skip the record if it has changed.
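For example, here is a rough sketch of how another process could abort immediately instead of blocking, using the NOWAIT hint against the same example table:
BEGIN TRANSACTION
    -- NOWAIT raises error 1222 immediately if the row is already locked,
    -- instead of waiting for the other transaction to finish
    SELECT id
    FROM mytable WITH (ROWLOCK, UPDLOCK, NOWAIT)
    WHERE id = 1
    -- ... perform the change here if the lock was acquired ...
COMMIT TRANSACTION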
Also, you should not use this method to lock records while waiting for user input. If that is your intention, you should add some sort of "being modified" column to your table instead.
The SQL Server locking mechanisms are really only suited to preserving data integrity / preventing deadlocks - transactions should generally be kept as short as possible and should certainly not be held open while waiting for user input.
SQL Server has locking hints, but these are limited to the scope of a single query.
If the decision to lock the record is taken in an application, you can use the same mechanisms as optimistic locking and deny any changes to the record from the application.
Use a timestamp or GUID as a lock on the record and deny access or changes to the record if the wrong locking key is given. Be careful to unlock records again, or you will end up with orphaned locks.
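A rough sketch of that idea (the LockKey column and the parameter names are illustrative assumptions, not part of your schema):
-- try to take the application-level lock; succeeds only if no one holds it
UPDATE mytable
SET LockKey = @LockKey
WHERE id = @Id AND LockKey IS NULL

IF @@ROWCOUNT = 0
    PRINT 'Record is locked by someone else - show the warning instead'

-- changes are only accepted from the holder of the key
UPDATE mytable
SET SomeColumn = @NewValue
WHERE id = @Id AND LockKey = @LockKey

-- release the lock when done (or via a cleanup job for orphaned locks)
UPDATE mytable
SET LockKey = NULL
WHERE id = @Id AND LockKey = @LockKey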
See this duplicate question on SO.
Basically it's:
begin tran
select * from [table] with (holdlock, rowlock) where id = @id
-- Here goes your stuff
commit tran
Something like this maybe?
update t
set t.IsLocked = 1
from [table] t
where t.id = @id
Somewhere in the update trigger:
if exists (
    select top 1 1
    from deleted d
    join inserted i on i.id = d.id
    where d.IsLocked = 1 and i.RowVersion <> d.RowVersion)
begin
    print 'Row is locked'
    rollback tran
end
If you don't want to wait for the lock to be released and instead want to show the message as soon as you encounter a lock, then have you tried NOWAIT? See Table Hints (Transact-SQL) and SQL Server 2008 Table Hints for more details. To get the benefit of NOWAIT you need to lock records on edits; search for more details.
Related
What is the best way to test if rows are being locked while they are being updated?
I have a query which selects the top x records of a table and updates them, but multiple worker threads will be calling the same query. I want to ensure these rows are locked and, if they are locked, that an error of some sort is thrown so I can handle it accordingly, but I don't seem to be able to throw an exception that I can catch in .NET (or in SQL, for that matter).
The query looks like:
BEGIN TRANSACTION
UPDATE MyTable WITH (ROWLOCK)
SET x = @X,
    y = @Y
WHERE ID IN (SELECT TOP 50 ID
             FROM MyTable
             WHERE Z IS NULL)
SELECT TOP 50 x
FROM MyTable
WHERE x = @W
COMMIT TRANSACTION
I've tried stepping through the debugger in SQL, executing just the BEGIN TRANSACTION, and then calling the same query from my .NET application, expecting an error, but it just ran fine in .NET.
So my question is: how can I generate an error so that I know the records are currently being updated? I want to take a specific action when this occurs, e.g. retry after x milliseconds.
Thanks.
Based on your recent comments (please add this information to your question body), you just want to make sure that each thread only “gets” rows that are not made available to the other threads. Like picking tasks from a table of pending tasks, and making sure no task is picked up by two threads.
You are overthinking the locking. Your problem is not something that requires fiddling with the SQL locking mechanism. You might fiddle with locking if you need to tune for performance reasons but you’re far from being able to establish if that is needed. I wouldn’t even bother with lock hints at this stage.
What you want to do is have a field in the row that indicates whether the row has been taken, and by whom. Your made-up sample T-SQL doesn’t use anything consistently, but the closest thing is the Z column.
You need to select rows that have a value of NULL (not taken yet) in Z. You are clearly on that track already. This field could be a BIT Yes/No and it can be made to work (look up the OUTPUT clause of UPDATE to see how you can pick up which rows were selected); but I suspect you will find far more useful for tracing/debugging purposes to be able to identify rows taken together by looking at the database alone. I would use a column that can hold a unique value that cannot be used by any other thread at the same time.
There are a number of approaches. You could use a (client process) Thread ID or similar, but that will repeat over time which might be undesirable. You could create a SQL SEQUENCE and use those values, which has the nice feature of being incremental, but that makes the SQL a little harder to understand. I’ll go with a GUID for illustration purposes.
DECLARE @BatchId AS UNIQUEIDENTIFIER
SET @BatchId = NEWID()

UPDATE MyTable
SET x = @X,
    y = @Y,
    Z = @BatchId
WHERE ID IN (SELECT TOP 50 ID
             FROM MyTable
             WHERE Z IS NULL)
Now only your one thread can use those rows (assuming of course that no thread cheats and violates the code pattern). Notice that I didn't even open an explicit transaction. In SQL Server, an UPDATE is transactionally atomic by default (it runs as its own autocommit transaction). No thread can pick those rows again because no thread (by default) is allowed to update or even see those rows until the UPDATE has been committed in its entirety (for all rows). Any thread that tries to run the UPDATE at the same time either gets different unassigned rows (depending on your locking hints and how cleverly you select the rows to pick - that's part of the "advanced" performance tuning), or is paused for a few milliseconds waiting for the first UPDATE to complete.
That’s all you need. Really. The rows are yours and yours only:
SELECT @BatchId AS BatchId, x
FROM MyTable
WHERE Z = @BatchId
Now your code gets the data from its dedicated rows, plus the unique ID which it can use later to reference the same rows, if you need to (take out the BatchId from the return set if you truly don’t need it).
If my reading of what you are trying to do is correct, you will probably want to flag the rows when your code is done by setting another field to a flag value or a timestamp or something. That way you will be able to tell if rows were “orphaned” because they were taken for processing but the process died for any reason (in which case they probably need to be made available again by setting Z to NULL again).
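As an aside, the OUTPUT clause mentioned earlier lets you claim the rows and read them back in a single statement. A sketch, using the same made-up column names:
DECLARE @BatchId AS UNIQUEIDENTIFIER
SET @BatchId = NEWID()

-- claim up to 50 unassigned rows and return them in the same statement
UPDATE TOP (50) MyTable
SET x = @X,
    y = @Y,
    Z = @BatchId
OUTPUT inserted.ID, inserted.x
WHERE Z IS NULL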
I am trying to insert the data into the Wallets table of a SQL Server database.
There can be many requests at the same time, so because of the sensitive data involved I have to use transactions.
The workflow is the following:
read the amount of the user's wallet
insert the new record based on the previously received data
I tried different isolation levels but in all the cases the transaction blocks the whole table, not just the record I am working with. Even ReadUncommitted or RepeatableRead block the whole table.
Is there a way to block only the records I am working with?
Let me detail:
I don't use any indexes in the table
The workflow (translating C# into SQL) is the following:
1) Select * from Balance
2) Insert ... INTO Balance
UPDLOCK is used when you want to lock a row or rows during a SELECT statement for a future UPDATE statement.
Transaction 1:
BEGIN TRANSACTION
SELECT * FROM dbo.Test WITH (UPDLOCK) /*read the amount of the user's wallet*/
/* update the record on same transaction that were selected in previous select statement */
COMMIT TRANSACTION
Transaction 2:
BEGIN TRANSACTION
/* insert a new row in table is allowed as we have taken UPDLOCK, that only prevents updating the same record in other transaction */
COMMIT TRANSACTION
It isn't possible to control the lock escalation process (row - page - table - database); unfortunately, it happens automatically. But you can get some positive effects if you:
reduce the amount of data used in queries
optimize queries with hints, indexes etc.
For INSERT INTO a table, the WITH (ROWLOCK) hint can improve performance.
Also, SELECT statements use shared (S/IS) lock types, which don't allow any update of the data but don't block reading.
You should use optimistic locking. That will only lock the current row, not the whole table.
You can read the links below for more reference:
optimistic locking
Optimistic Concurrency
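For illustration only, a minimal T-SQL sketch of the optimistic pattern, assuming the Balance table has a ROWVERSION column (the column and parameter names here are assumptions):
DECLARE @OriginalVersion BINARY(8)

-- read the current amount together with its version stamp
SELECT @OriginalVersion = RowVer
FROM Balance
WHERE WalletId = @WalletId

-- the write only succeeds if the row has not changed since the read
UPDATE Balance
SET Amount = Amount + @Delta
WHERE WalletId = @WalletId
  AND RowVer = @OriginalVersion

IF @@ROWCOUNT = 0
    RAISERROR('The wallet row was changed by another transaction; retry.', 16, 1)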
I'm going to start off by saying I am pretty sure this is not possible. Googling has not turned up anyone asking this question, so I am pessimistically hopeful.
I am using a SqlTransaction to connect to a database, and allowing several consumers to use this transaction before I close it. Is it possible to determine whether or not the transaction was only used to read data or to read/write data, even when stored procedures may have been used? Does some property or SQL method (other than performing some unholy diff) exist that can check this?
sys.dm_exec_sessions has a column writes. You can query it like this:
SELECT writes FROM sys.dm_exec_sessions WHERE session_id = @@SPID
That will impose some overhead, though. I think that DMV is O(N) in the number of sessions running. The system will slow down the more sessions there are. Eventually it becomes a case of negative scaling.
Note, that this is a per-session value. If you have multiple transactions per session (per opening of a SqlConnection object) the writes will be tracked cumulatively. You'd need to account for that, or use the idea in the other answer.
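A rough sketch of that idea, comparing the session's cumulative write count before and after the work (this assumes nothing else runs on the session in between):
DECLARE @WritesBefore BIGINT, @WritesAfter BIGINT

SELECT @WritesBefore = writes
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID

-- ... run the transaction in question here ...

SELECT @WritesAfter = writes
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID

IF @WritesAfter > @WritesBefore
    PRINT 'This transaction performed writes'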
You can use the sys.dm_ views to check Transaction status.
When a transaction is opened, but no work is done (select only), it doesn't register, but if you do any work, the transaction log size is increased.
Try this first... note your SPID.
USE tempdb
begin tran
create table dave (id int)
insert into dave (id) select 1
-- rollback tran
Leave the transaction open, then look at it with this (replace your spid):
select database_transaction_log_bytes_used, *
from sys.dm_tran_database_transactions dt
inner join sys.dm_tran_active_transactions at on (at.transaction_id = dt.transaction_id)
inner join sys.dm_tran_session_transactions st on (st.transaction_id = dt.transaction_id)
where (dt.database_id = DB_ID('tempdb')) and (st.is_user_transaction=1)
and session_id=83
Then rollback the original.
Then try this:
USE tempdb
begin tran
select getdate()
-- rollback tran
And run the above on the sys.dm_ views...
The view is blank until there is something logged in the transaction. Selects do not get logged.
Don't forget to roll it all back.
So you could write a proc/view that you could query before closing your transaction, assuming you know the SPID (easy enough to query with SELECT @@SPID), that would return the log size of the active session.
Hmm, this is only an idea. I am thinking about something like creating a table in your database called CountRead, and another table called CountWrite, and then modifying your queries to write to the read table if the query is a SELECT and to the write table if it is an INSERT or similar. Then, when you want to check whether it was read-only or not, you only need to read these tables (if they already have data, you can either remove all the data when your application starts, or check the counts at the beginning).
Then with a simple SELECT against these tables, you will be able to check whether only read queries have been used, or only write queries, or both, and how many times.
As I said, it's only an idea. ;) I hope this helps, or at least gives you some idea about how to do it.
I need to be able to create a new User entity only if the provided email is unique.
I've always handled this before by performing a simple if (!UserSet.Any(...)) before my AddToUserSet(...). However, this is not a concurrent solution and will break under heavy load.
I've been looking into transactions, but AFAIK I would need to set an UPDLOCK on the SELECT too, and EF4 does not support this.
How does everyone else handle this?
You can force locking by including the SELECT in a transaction (sketched below; the context and entity names are assumptions based on your question):
using (var scope = new TransactionScope())      // defaults to Serializable
using (var context = new MyEntities())          // context/entity names are assumptions
{
    // Check non existing email
    if (!context.UserSet.Any(u => u.Email == email))
    {
        // Insert user and save changes
        context.AddToUserSet(new User { Email = email });
        context.SaveChanges();
    }
    scope.Complete();
}
This will use a serializable transaction, which is what you need if you want a concurrency-safe solution for inserts - UPDLOCK is not enough to ensure that a new record is not added during your transaction.
This can be a pretty bad bottleneck, so I agree with @paolo: simply place a unique constraint on the email column in the database and catch the exception during the insert if the email is not unique.
Serializable transaction from Books online:
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
In addition to your check, you could add a unique constraint on the email field directly in the DB.
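For example (table and column names are assumptions based on the question); a duplicate insert then fails with a constraint-violation error (2627), which the application can catch:
ALTER TABLE Users
ADD CONSTRAINT UQ_Users_Email UNIQUE (Email)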
I have a sql server table of licence keys/serial numbers.
Table structure is something like:
[
RecordId int,
LicenceKey string,
Status int (available, locked, used, expired etc.)
AssignedTo int (customerId)
....
]
Through my ASP.NET application, when the user decides to buy a licence by clicking the accept button, I need to reserve a licence key for that user.
My approach is like this:
Select top 1 licenceKey from KeysTable Where Status = available
Update KeysTable Set status = locked
then return the key back to the application.
My concern is that two ASP.NET threads could access the same record and return the same licence key.
What do you think is the best practice for doing such assignments? Is there a well-known approach or a pattern for this kind of problem?
Where should I use lock() statements, if I need any?
I'm using SQL Server 2005, stored procedures for data access, a DataLayer, a BusinessLayer and an ASP.NET GUI.
Thanks
There's probably no need to use explicit locks or transactions in this case.
In your stored procedure you can update the table and retrieve the license key in a single, atomic operation by using an OUTPUT clause in your UPDATE statement.
Something like this:
UPDATE TOP (1) KeysTable
SET Status = 'locked'
OUTPUT INSERTED.LicenseKey
-- if you want more than one column...
-- OUTPUT INSERTED.RecordID, INSERTED.LicenseKey
-- if you want all columns...
-- OUTPUT INSERTED.*
WHERE Status = 'available'
To achieve what you're talking about, you'll want to use a serializable transaction. To do this, follow this pattern:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
--Execute select
--Execute update
COMMIT TRANSACTION
However, why do you have a table with every possible license key? Why not have a key generation algorithm, then create a new key when a user purchases it?
You could also try using locks (in SQL) in addition to transactions, to verify that only one thread has access at a time.
I believe that an application lock may be of help here.
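A rough sketch of that approach with sp_getapplock (the resource name is just an illustration):
BEGIN TRANSACTION

-- serialise all key assignments behind one named application lock
EXEC sp_getapplock @Resource = 'AssignLicenceKey',
                   @LockMode = 'Exclusive',
                   @LockOwner = 'Transaction',
                   @LockTimeout = 5000

-- select an available key and mark it as locked here

COMMIT TRANSACTION  -- the application lock is released with the transaction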
I think that you should actually mark the key as unavailable in the same stored proc that you query for it in, because otherwise there will always be some sort of race condition. Manually locking tables is not good practice IMHO.
If you have a two staged process (e.g. like booking airline tickets), you could introduce a concept of reserving a key for a specified period of time (e.g. 30 mins), so that when you query for a new key, you reserve it at the same time.
EDIT: Locking in business logic probably would work if you can guarantee that only one process is going to change the database, but it is much better to do it at the database level, preferably in a single stored proc. To do it correctly you have to set the transaction isolation level and use transactions in the database, just as @Adam Robinson suggested in his answer.