Concurrency in EF4 - How to conditionally create an entity - c#

I need to be able to create a new User entity only if the provided email is unique.
I've always handled this before by performing a simple if (!UserSet.Any(...)) before my AddToUserSet(...). However, this check-then-insert is not concurrency-safe and will break under heavy load.
I've been looking into transactions, but AFAIK I would need to put an UPDLOCK on the SELECT, and EF4 does not support that.
How does everyone else handle this?

You can force locking by including the SELECT in a transaction:
using (var scope = new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.Serializable }))
using (var context = new MyEntities()) // your ObjectContext type; the scope needs a reference to System.Transactions
{
    // Check that no user with this email exists yet
    if (!context.UserSet.Any(u => u.Email == email))
    {
        // Insert the user and save changes
        context.AddToUserSet(new User { Email = email });
        context.SaveChanges();
    }
    scope.Complete();
}
This will use a serializable transaction, which is what you need if you want a concurrency-safe solution for inserts - UPDLOCK is not enough to ensure that a new record is not added during your transaction.
This can be a pretty bad bottleneck, so I agree with @paolo: simply place a unique constraint on the email column in the database and catch the exception during insert if the email is not unique.
Serializable transaction from Books online:
Specifies the following:
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Range locks are placed in the range of key values that match the search conditions of each statement executed in a transaction. This blocks other transactions from updating or inserting any rows that would qualify for any of the statements executed by the current transaction. This means that if any of the statements in a transaction are executed a second time, they will read the same set of rows. The range locks are held until the transaction completes. This is the most restrictive of the isolation levels because it locks entire ranges of keys and holds the locks until the transaction completes. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.

In addition to your check, you could add a unique constraint on the email field directly in the DB.
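If you go that route, EF4's SaveChanges surfaces the violation as a System.Data.UpdateException whose InnerException is the provider's SqlException. Here is a minimal sketch of catching it, reusing the question's UserSet/AddToUserSet names; the MyEntities context name, the Users table and the constraint name are assumptions:

using System;
using System.Data;           // UpdateException (EF4)
using System.Data.SqlClient; // SqlException

class CreateUserExample
{
    // DB side, run once (names assumed):
    //   ALTER TABLE Users ADD CONSTRAINT UQ_Users_Email UNIQUE (Email);

    // Returns false when the email already exists.
    static bool TryCreateUser(string email)
    {
        using (var context = new MyEntities()) // your ObjectContext type
        {
            context.AddToUserSet(new User { Email = email });
            try
            {
                context.SaveChanges();
                return true;
            }
            catch (UpdateException ex)
            {
                var sqlEx = ex.InnerException as SqlException;
                // 2627 = unique constraint violation, 2601 = duplicate key in a unique index
                if (sqlEx != null && (sqlEx.Number == 2627 || sqlEx.Number == 2601))
                    return false;
                throw;
            }
        }
    }
}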

Related

DB Transaction: can we block just a record, not the whole table?

I am trying to insert the data into the Wallets table of a SQL Server database.
There can be many requests at the same time, so because of the sensitive info I have to use transactions.
The workflow is the following:
read the amount of the user's wallet
insert the new record based on the previously received data
I tried different isolation levels, but in all cases the transaction blocks the whole table, not just the record I am working with. Even ReadUncommitted or RepeatableRead block the whole table.
Is there a way to block only the records I am working with?
Let me detail:
I don't use any indexes in the table
The workflow (translating C# into SQL) is the following:
1) Select * from Balance
2) Insert ... INTO Balance
UPDLOCK is used when you want to lock a row or rows during a SELECT statement for a future UPDATE statement.
Transaction 1:
BEGIN TRANSACTION
SELECT * FROM dbo.Test WITH (UPDLOCK) /* read the amount of the user's wallet */
/* update, in the same transaction, the record that was selected in the previous select statement */
COMMIT TRANSACTION
Transaction 2:
BEGIN TRANSACTION
/* inserting a new row into the table is allowed even though we have taken UPDLOCK, which only prevents other transactions from updating the selected record */
COMMIT TRANSACTION
It isn't possible to control the lock escalation process (row - page - table - database); unfortunately, it happens automatically. But you can get some positive effects if you:
reduce the amount of data used in queries
optimize queries with hints, indexes, etc.
For INSERT INTO a table, the WITH (ROWLOCK) hint can improve performance (see the sketch below).
Also, a SELECT statement uses shared (S/IS) lock types, which don't allow any update of the data but don't block reading.
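A minimal sketch of that ROWLOCK hint on the insert, run from ADO.NET; only the Balance table name comes from the question, the UserId/Amount columns are assumptions:

using System;
using System.Data.SqlClient;

class RowLockInsert
{
    static void InsertBalance(string connectionString, int userId, decimal amount)
    {
        // ROWLOCK asks SQL Server to start with row-level locks for this insert.
        const string sql =
            "INSERT INTO dbo.Balance WITH (ROWLOCK) (UserId, Amount) VALUES (@UserId, @Amount);";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@UserId", userId);
            command.Parameters.AddWithValue("@Amount", amount);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}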
You should use optimistic locking. That will only lock the current row, not the whole table.
You can read the links below for more reference:
optimistic locking
Optimistic Concurrency
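The linked articles describe the usual optimistic pattern: read the row together with a rowversion value, compute the new amount in code, and make the write conditional on that rowversion still matching. A minimal sketch, assuming a hypothetical Wallets row with WalletId, Amount and a rowversion column named RowVersion (none of these names come from the question):

using System;
using System.Data.SqlClient;

class OptimisticWalletUpdate
{
    // Returns false if another session changed the row since it was read;
    // the caller should then re-read the wallet and retry.
    static bool TryUpdateAmount(string connectionString, int walletId,
                                decimal newAmount, byte[] originalRowVersion)
    {
        const string sql = @"
            UPDATE dbo.Wallets
            SET    Amount = @NewAmount
            WHERE  WalletId = @WalletId
              AND  RowVersion = @OriginalRowVersion;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@NewAmount", newAmount);
            command.Parameters.AddWithValue("@WalletId", walletId);
            command.Parameters.AddWithValue("@OriginalRowVersion", originalRowVersion);
            connection.Open();
            // 0 rows affected means the rowversion no longer matches: a concurrent write happened.
            return command.ExecuteNonQuery() == 1;
        }
    }
}

No lock is held between the read and the write; the conflict simply shows up as zero rows affected.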

Lock timeout when dropping table after having dropped column

I'll spare you the code details. Basically I run several SQL queries on a SQL CE database within an ado.net transaction, i.e.
ALTER TABLE [FIRST] DROP CONSTRAINT [FK_FIRST_SECOND]
ALTER TABLE [FIRST] DROP COLUMN [FK2SECOND]
DROP TABLE [SECOND]
In words:
1) I remove the foreign key constraint related to [FK2SECOND]
2) I remove [FK2SECOND]
3) I remove table [SECOND], i.e. the one which had been referenced.
My transaction will fail at (3) with a lock timeout, saying:
SQL Server Compact timed out waiting for a lock. The default lock time is 2000ms for devices and 5000ms for desktops. The default lock timeout can be increased in the connection string using the ssce:default lock timeout property.
Separating the DROP TABLE from the rest by taking this query out of the transaction works.
IMO that lock is a generic problem which can't (and shouldn't) be fixed by increasing the default lock timeout. What would instead be best practice to cope with this (table) locking?

SQL Server UPDATE/SELECT key order

We have a table with a key field, and another table which contains the current value of that key sequence, ie, to insert a new record you need to:
UPDATE seq SET key = key + 1
SELECT key FROM seq
INSERT INTO table (id...) VALUES (@key...)
Today I have been investigating collisions, and have found that without using transactions the above code run in parallel induces collisions; however, swapping the UPDATE and SELECT lines does not induce collisions, i.e.:
SELECT key + 1 FROM seq
UPDATE seq SET key = key + 1
INSERT INTO table (id...) VALUES (@key...)
Can anyone explain why? (I am not interested in better ways to do this, I am going to use transactions, and I cannot change the database design, I am just interested in why we observed what we did.)
I am running the two lines of SQL as a single string using C#'s SqlConnection, SqlCommand and SqlDataAdapter.
First off, your queries do not entirely make sense. Here's what I presume you are actually doing:
UPDATE seq SET key = key + 1
SELECT @key = key FROM seq
INSERT INTO table (id...) VALUES (@key...)
and
SELECT @key = key + 1 FROM seq
UPDATE seq SET key = @key
INSERT INTO table (id...) VALUES (@key...)
You're experiencing concurrency issues tied to the Transaction Isolation Level.
Transaction Isolation Levels represent a compromise between the need for concurrency (i.e. performance) and the need for data quality (i.e. accuracy).
By default, SQL Server uses the Read Committed isolation level, which means you can't get "dirty" reads (reads of data that has been modified by another transaction but not yet committed to the table). It does not, however, mean that you are immune from other types of concurrency issues.
In your case, the issue you are having is called a non-repeatable read.
In your first example, the first line is reading the key value, then updating it. (In order for the UPDATE to set the column to key+1 it must first read the value of key). Then the second line's SELECT is reading the key value again. In a Read Committed or Read Uncommitted isolation level, it is possible that another transaction meanwhile completes an update to the key field, meaning that line 2 will read it as key+2 instead of the expected key+1.
Now, with your second example, once the key value has been read and modified and placed in the #key variable, it is not being read again. This prevents the non-repeatable read issue, but you're still not totally immune from concurrency problems. What can happen in this scenario is a lost update, in which two or more transactions end up trying to update key to the same value, and subsequently inserting duplicate keys to the table.
To be absolutely certain of having no concurrency problems with this structure as designed, you will need to use locking hints to ensure that all reads and updates to key are serializable (i.e. not concurrent). This will have horrendous performance, but "WITH UPDLOCK,HOLDLOCK" will get you there.
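For illustration, here is roughly what that might look like when the whole batch is sent from C# with SqlConnection/SqlCommand, as the question does; the batching into one command and the bracketed names are my additions:

using System;
using System.Data.SqlClient;

class SerializedKeyIncrement
{
    static void InsertWithNextKey(string connectionString)
    {
        // UPDLOCK + HOLDLOCK on the read of seq keeps the key row locked until COMMIT,
        // so no other transaction can read-and-increment it in between.
        // seq, [table], key and id come from the question; everything else is assumed.
        const string sql = @"
            DECLARE @key int;
            SELECT @key = [key] + 1 FROM seq WITH (UPDLOCK, HOLDLOCK);
            UPDATE seq SET [key] = @key;
            INSERT INTO [table] (id) VALUES (@key);";

        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            using (var command = new SqlCommand(sql, connection, transaction))
            {
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}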
Your best solution, if you cannot change the database design, is to find someone who can. As Brian Hoover indicated, an auto-incrementing IDENTITY column is the way to do this with superb performance. The way you're doing it now reduces SQL's V-8 engine to one that is only allowed to fire on one cylinder.

How to perform a row lock?

I want to lock one record so that no one can make changes to it. When I release the lock, people may change the record again.
While a record is locked, I want to show the user a warning that the record is locked and that changes are not allowed.
How can I do this?
I've tried all the isolation levels, but none of them has the behavior I want. Some of the isolation levels wait until the lock is released and then make the change. I don't want this, because updating is not allowed while the record is locked.
What can I do to lock a record and deny all changes?
I use SQL Server 2008
With the assumption that this is MS SQL Server, you probably want UPDLOCK, possibly combined with ROWLOCK (Table hints). I'm having trouble finding a decent article which describes the theory, but here is a quick example:
SELECT id From mytable WITH (ROWLOCK, UPDLOCK) WHERE id = 1
This statement will place an update lock on the row for the duration of the transaction (so it is important to be aware of when the transaction will end). As update locks are incompatible with exclusive locks (required to update records), this will prevent anyone from updating this record until the transaction has ended.
Note that other processes attempting to modify this record will be blocked until the transaction completes, however will continue with whatever write operation they requested once the transaction has ended (unless they are timed out or killed off as a deadlocked process). If you wish to prevent this then your other processes need to use additional hints in order to either abort if an incompatible lock is detected, or skip the record if it has changed.
Also, you should not use this method to lock records while waiting for user input. If this is your intention then you should add some sort of "being modified" column to your table instead.
The SQL server locking mechanisms are really only suited for use to preserve data integrity / preventing deadlocks - transactions should generally be kept as short as possible and should certainly not be maintained while waiting for user input.
SQL Server has locking hints, but these are limited to the scope of a query.
If the decision to lock the record is taken in an application, you can use the same mechanisms as optimistic locking and deny any changes to the record from the application.
Use a timestamp or GUID as a lock key on the record and deny access or changes to the record if the wrong locking key is given. Be careful to unlock records again or you will get orphans.
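A minimal sketch of that idea, reusing the mytable/id names from the example above; the LockKey uniqueidentifier column is an assumption:

using System;
using System.Data.SqlClient;

class RecordLockKeyExample
{
    // Tries to claim the record by stamping it with a new GUID lock key.
    // Returns the key on success, or null if someone else already holds the lock.
    static Guid? TryLockRecord(string connectionString, int recordId)
    {
        const string sql = @"
            UPDATE dbo.mytable
            SET    LockKey = @LockKey
            WHERE  id = @Id AND LockKey IS NULL;";

        var lockKey = Guid.NewGuid();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@LockKey", lockKey);
            command.Parameters.AddWithValue("@Id", recordId);
            connection.Open();
            // 0 rows updated means the record is already locked: show the warning instead.
            return command.ExecuteNonQuery() == 1 ? lockKey : (Guid?)null;
        }
    }
}

All subsequent writes should include WHERE LockKey = @LockKey, and the column must be set back to NULL when the user is done, or the row stays orphaned.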
See this duplicate question on SO.
Basically it's:
begin tran
select * from [table] with(holdlock,rowlock) where id = @id
--Here goes your stuff
commit tran
Something like this maybe?
update t
set t.IsLocked = 1
from [table] t
where t.id = @id
Somewhere in the update trigger:
if exists (
    select top 1 1
    from deleted d
    join inserted i on i.id = d.id
    where d.IsLocked = 1 and i.RowVersion <> d.RowVersion)
begin
    print 'Row is locked'
    rollback tran
end
If you don't want to wait for the lock to be released and want to show the message as soon as you encounter a lock, did you try NOWAIT? See Table Hints (Transact-SQL) and SQL Server 2008 Table Hints for more details. To get the benefit of NOWAIT you need to lock records on edits; google for more details.
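A rough sketch of that idea, reusing the mytable/id names from the earlier example; the method is meant to run inside the same transaction as the edit, and error 1222 ("Lock request time out period exceeded") is what NOWAIT raises when the row is already locked:

using System;
using System.Data.SqlClient;

class NoWaitCheckExample
{
    // Returns true if the row could be read (and update-locked) right away,
    // false if another session already holds an incompatible lock on it.
    static bool TryReadWithoutWaiting(SqlConnection connection, SqlTransaction transaction, int id)
    {
        const string sql =
            "SELECT id FROM dbo.mytable WITH (UPDLOCK, NOWAIT) WHERE id = @id;";

        using (var command = new SqlCommand(sql, connection, transaction))
        {
            command.Parameters.AddWithValue("@id", id);
            try
            {
                command.ExecuteScalar();
                return true;
            }
            catch (SqlException ex)
            {
                // 1222: the row is locked, so show the "record is locked" warning instead of waiting.
                if (ex.Number == 1222)
                    return false;
                throw;
            }
        }
    }
}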

assigning a serial number to a client from a pool of serial numbers

I have a sql server table of licence keys/serial numbers.
The table structure is something like:
[
RecordId int,
LicenceKey string,
Status int (available, locked, used, expired etc.)
AssignedTo int (customerId)
....
]
Through my ASP.NET application, when the user decides to buy a licence by clicking the accept button, I need to reserve a licence key for the user.
My approach is like:
Select top 1 licenceKey from KeysTable Where Status = available
Update KeysTable Set status = locked
then return the key back to the application.
My concern is that two ASP.NET threads could access the same record and return the same licence key.
What do you think is the best practice for doing such assignments? Is there a well-known approach or a pattern for this kind of problem?
Where should I use lock() statements, if I need any?
I'm using SQL Server 2005, stored procedures for data access, a DataLayer, a BusinessLayer and an ASP.NET GUI.
Thanks
There's probably no need to use explicit locks or transactions in this case.
In your stored procedure you can update the table and retrieve the license key in a single, atomic operation by using an OUTPUT clause in your UPDATE statement.
Something like this:
UPDATE TOP (1) KeysTable
SET Status = 'locked'
OUTPUT INSERTED.LicenseKey
-- if you want more than one column...
-- OUTPUT INSERTED.RecordID, INSERTED.LicenseKey
-- if you want all columns...
-- OUTPUT INSERTED.*
WHERE Status = 'available'
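For reference, here is roughly how the data layer might run that statement and read the reserved key back; the ExecuteScalar wrapper and connection handling here are a sketch, and the same statement can of course sit inside your stored procedure:

using System;
using System.Data.SqlClient;

class LicenceKeyData
{
    // Runs the UPDATE ... OUTPUT statement above and returns the reserved key,
    // or null when no key with Status = 'available' is left.
    static string ReserveKey(string connectionString)
    {
        const string sql = @"
            UPDATE TOP (1) KeysTable
            SET    Status = 'locked'
            OUTPUT INSERTED.LicenseKey
            WHERE  Status = 'available';";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            // ExecuteScalar returns the first column of the first OUTPUT row.
            return command.ExecuteScalar() as string;
        }
    }
}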
To achieve what you're talking about, you'll want to use a serializable transaction. To do this, follow this pattern:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
--Execute select
--Execute update
COMMIT TRANSACTION
However, why do you have a table with every possible license key? Why not have a key generation algorithm, then create a new key when a user purchases it?
You could also try using locks (in SQL) in addition to transactions, to verify that only one thread has access at a time.
I believe that an application lock may be of help here.
I think that you should actually mark the key as unavailable in the same stored proc that queries for it, because otherwise there will always be some sort of race condition. Manually locking tables is not good practice IMHO.
If you have a two-stage process (e.g. like booking airline tickets), you could introduce the concept of reserving a key for a specified period of time (e.g. 30 mins), so that when you query for a new key, you reserve it at the same time.
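One way that reservation might look as a single statement, so that querying and reserving cannot race; the ReservedUntil column, the reclaiming of expired reservations and the string status values are assumptions on top of the question's KeysTable:

using System;
using System.Data.SqlClient;

class KeyReservationExample
{
    // Reserves one available (or expired) key for this customer for 30 minutes.
    static string ReserveKeyFor30Minutes(string connectionString, int customerId)
    {
        const string sql = @"
            UPDATE TOP (1) KeysTable
            SET    Status = 'locked',
                   AssignedTo = @CustomerId,
                   ReservedUntil = DATEADD(MINUTE, 30, GETUTCDATE())
            OUTPUT INSERTED.LicenceKey
            WHERE  Status = 'available'
               OR (Status = 'locked' AND ReservedUntil < GETUTCDATE());";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue("@CustomerId", customerId);
            connection.Open();
            return command.ExecuteScalar() as string; // null if nothing could be reserved
        }
    }
}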
EDIT: Locking in business logic would probably work if you can guarantee that only one process is going to change the database, but it is much better to do it at the database level, preferably in a single stored proc. To do it correctly you have to set the transaction isolation level and use transactions in the database, just as @Adam Robinson suggested in his answer.
