Determine if SqlTransaction wrote data or was read-only - C#

I'm going to start off by saying I am pretty sure this is not possible. Googling has not turned up anyone asking this question, so I am pessimistically hopeful.
I am using a SqlTransaction to connect to a database, and allowing several consumers to use this transaction before I close it. Is it possible to determine whether or not the transaction was only used to read data or to read/write data, even when stored procedures may have been used? Does some property or SQL method (other than performing some unholy diff) exist that can check this?

sys.dm_exec_sessions has a column writes. You can query it like this:
SELECT writes FROM sys.dm_exec_sessions WHERE session_id = @@SPID
That will impose some overhead, though. I believe that DMV is O(N) in the number of running sessions, so the system slows down as more sessions run; eventually it becomes a case of negative scaling.
Note that this is a per-session value. If you have multiple transactions per session (per opening of a SqlConnection object), the writes are tracked cumulatively. You'd need to account for that, or use the idea in the other answer.
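As a rough C# sketch of that idea (the helper name is mine, and it assumes the login has VIEW SERVER STATE permission): sample the cumulative counter on the same connection before the transaction starts and again just before you commit, and treat any increase as evidence of a write.
using System;
using System.Data.SqlClient;

static long GetSessionWrites(SqlConnection conn, SqlTransaction tx)
{
    // sys.dm_exec_sessions.writes is cumulative for the whole session,
    // so compare a sample taken before the transaction with one taken
    // just before commit.
    using (var cmd = new SqlCommand(
        "SELECT writes FROM sys.dm_exec_sessions WHERE session_id = @@SPID",
        conn, tx))
    {
        return Convert.ToInt64(cmd.ExecuteScalar());
    }
}

// Usage:
// long before = GetSessionWrites(conn, tx);
// ... let the consumers use the transaction ...
// bool wroteData = GetSessionWrites(conn, tx) > before;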

You can use the sys.dm_ views to check Transaction status.
When a transaction is open but no work has been done (select only), it doesn't register; as soon as you do any work, the transaction log usage increases.
Try this first... note your SPID.
USE tempdb
begin tran
create table dave (id int)
insert into dave (id) select 1
-- rollback tran
Leave the transaction open, then look at it with this (replace your spid):
select database_transaction_log_bytes_used, *
from sys.dm_tran_database_transactions dt
inner join sys.dm_tran_active_transactions at on (at.transaction_id = dt.transaction_id)
inner join sys.dm_tran_session_transactions st on (st.transaction_id = dt.transaction_id)
where (dt.database_id = DB_ID('tempdb')) and (st.is_user_transaction=1)
and session_id=83
Then rollback the original.
Then try this:
USE tempdb
begin tran
select getdate()
-- rollback tran
And run the above on the sys.dm_ views...
The view is blank until there is something logged in the transaction. Selects do not get logged.
Don't forget to roll it all back.
So you could write a proc/view that you could query before closing your transaction, assuming you know the spid (easy enough to query with SELECT @@SPID), that would return the log size for the active session.
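If you are calling this from C#, a minimal sketch of such a check might look like the following; the helper name is invented, and it assumes an open SqlTransaction and permission to read the sys.dm_tran_ views.
using System;
using System.Data.SqlClient;

static bool TransactionWroteData(SqlConnection conn, SqlTransaction tx)
{
    // Non-zero log usage by the session's open user transaction
    // means something was written; selects alone log nothing.
    const string sql = @"
        SELECT COALESCE(SUM(dt.database_transaction_log_bytes_used), 0)
        FROM sys.dm_tran_database_transactions dt
        INNER JOIN sys.dm_tran_session_transactions st
            ON st.transaction_id = dt.transaction_id
        WHERE st.session_id = @@SPID
          AND st.is_user_transaction = 1;";

    using (var cmd = new SqlCommand(sql, conn, tx))
    {
        return Convert.ToInt64(cmd.ExecuteScalar()) > 0;
    }
}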

Mmm, this is only an idea. I am thinking of something like creating a table in your database called CountRead, and another table called CountWrite, and then modifying your queries to write to the read table if the query is a SELECT and to the write table if the query is an INSERT or similar. Then, when you want to check whether it was read-only or read/write, you only need to read these tables (if they already have data, you can either clear them when your application starts, or record the counts at the beginning).
Then with a simple SELECT against these tables, you will be able to check whether only read queries have been used, only write queries, or both, and how many times.
As I said, it's only an idea. ;) I hope this helps, or at least gives you some idea of how to do it.
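If you wanted to try this, a minimal sketch of the counting idea might be a tiny wrapper that bumps a counter row on each query; the single-row tables CountRead(Total int) and CountWrite(Total int) and the method name are invented for illustration.
using System.Data.SqlClient;

static void BumpCounter(SqlConnection conn, bool isWrite)
{
    // Record one read or one write in the matching counter table.
    string table = isWrite ? "CountWrite" : "CountRead";
    using (var cmd = new SqlCommand(
        "UPDATE " + table + " SET Total = Total + 1;", conn))
    {
        cmd.ExecuteNonQuery();
    }
}
One wrinkle: recording a read is itself a write, so keep the counter tables on a separate connection or out of any log-based "did it write" check.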

Related

DB Transaction: can we block just a record, not the whole table?

I am trying to insert data into the Wallets table of a SQL Server database.
There can be many requests at the same time, so due to the sensitive data I have to use transactions.
The workflow is the following:
read the amount of the user's wallet
insert the new record based on the previously received data
I tried different isolation levels, but in all cases the transaction blocks the whole table, not just the record I am working with. Even ReadUncommitted and RepeatableRead block the whole table.
Is there a way to block only the records I am working with?
Let me detail:
I don't use any indexes in the table
The workflow (translating C# into SQL) is the following:
1) Select * from Balance
2) Insert ... INTO Balance
UPDLOCK is used when you want to lock a row or rows during a SELECT statement for a future UPDATE statement.
Transaction-1:
BEGIN TRANSACTION
SELECT * FROM dbo.Test WITH (UPDLOCK) /*read the amount of the user's wallet*/
/* update the record on same transaction that were selected in previous select statement */
COMMIT TRANSACTION
Transaction-2:
BEGIN TRANSACTION
/* insert a new row in table is allowed as we have taken UPDLOCK, that only prevents updating the same record in other transaction */
COMMIT TRANSACTION
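Translated into ADO.NET, a minimal sketch of that UPDLOCK pattern might look like this; dbo.Balance and its columns are placeholders based on the question's workflow, not confirmed schema:
using System;
using System.Data.SqlClient;

static void AddToWallet(SqlConnection conn, int userId, decimal delta)
{
    using (var tx = conn.BeginTransaction())
    {
        // UPDLOCK holds update locks on the rows we read until commit,
        // so two concurrent transactions cannot both read the old balance.
        // (If a user has no rows yet there is nothing to lock; add HOLDLOCK
        // to cover that case.)
        decimal current;
        using (var read = new SqlCommand(
            "SELECT COALESCE(SUM(Amount), 0) FROM dbo.Balance WITH (UPDLOCK) " +
            "WHERE UserId = @userId;", conn, tx))
        {
            read.Parameters.AddWithValue("@userId", userId);
            current = Convert.ToDecimal(read.ExecuteScalar());
        }

        using (var insert = new SqlCommand(
            "INSERT INTO dbo.Balance (UserId, Amount) VALUES (@userId, @amount);",
            conn, tx))
        {
            insert.Parameters.AddWithValue("@userId", userId);
            insert.Parameters.AddWithValue("@amount", current + delta);
            insert.ExecuteNonQuery();
        }

        tx.Commit();
    }
}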
You can't control the lock escalation process (row - page - table - database); unfortunately, it happens automatically. But you can get some positive effects if you:
reduce the amount of data used in queries
optimize queries with hints, indexes, etc.
For INSERT INTO a table, the WITH (ROWLOCK) hint can improve performance.
Also, SELECT statements take shared (S/IS) locks, which prevent updates of the data but don't block other readers.
You should use optimistic locking. That will lock only the current row, not the whole table.
You can read the links below for more background:
optimistic locking
Optimistic Concurrency
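As a hedged sketch of the optimistic approach, assuming the Wallets table has a rowversion column (here called RowVer; the names are placeholders): read the row and its version, then make the update conditional on the version being unchanged.
using System.Data.SqlClient;

// Update succeeds only if nobody changed the row since we read it;
// zero rows affected means a concurrent change, so the caller retries.
static bool TryUpdateAmount(SqlConnection conn, int walletId,
                            decimal newAmount, byte[] originalRowVer)
{
    using (var cmd = new SqlCommand(
        "UPDATE dbo.Wallets SET Amount = @amount " +
        "WHERE WalletId = @id AND RowVer = @rowVer;", conn))
    {
        cmd.Parameters.AddWithValue("@amount", newAmount);
        cmd.Parameters.AddWithValue("@id", walletId);
        cmd.Parameters.AddWithValue("@rowVer", originalRowVer);
        return cmd.ExecuteNonQuery() == 1;
    }
}
A return value of false means another transaction changed the row first, so the caller re-reads and retries.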

How to execute a stored procedure and then rollback the changes at a later time?

I am executing a stored procedure against a database through a C# application. I would like to do computations after the stored procedure is executed, and once the computations are done, I'd like to roll the database back to its state prior to the stored procedure. Most of the examples I've seen on Stack Overflow only use a rollback in the catch block of a try/catch in the event of an error, but that's different from what I'm doing.
I'm not sure if I should be saving the state of the database at some point, and then do a transaction roll back with that state, or should be attaching a transaction parameter to the SqlCommand instance of the stored procedure, or something else.
You can do this via transactions. A sample is here: https://msdn.microsoft.com/en-us/library/86773566(v=vs.110).aspx Yes, it also uses a catch block, but you don't have to.
Alternatively you can use database snapshots if your edition of SQL Server supports them, but they will roll back ALL CHANGES SINCE THE MOMENT THE SNAPSHOT WAS MADE - yours and any other user's. Most likely this is not what you want.
One option is to use a WAITFOR DELAY in combination with a modified transaction isolation level. What this does is execute your code, make the query wait for a set amount of time, then roll back the transaction. During the time you designate, you can have another session query the table that received the modification (as long as you set the transaction isolation level to read uncommitted) and you will see the new value. Here is a sample:
In one window of SSMS:
CREATE TABLE ##testtable (id INT);
BEGIN TRAN;
INSERT INTO ##testtable (id)
VALUES (1), (2), (3);
WAITFOR DELAY '00:01:00';
ROLLBACK TRAN
In a second SSMS window, run this query during the 1 minute timeframe:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT *
FROM ##testtable
During the 1 minute that the other window is executing, you will see the values in the temp table. After one minute, the table will be blank. This works for somewhat simple tasks, but if you are testing data that already has test changes applied to it, just take a snapshot or do a database restore.
Standard disclaimer: That might not be a good thing to do. (Okay, that's done.)
You could do it most easily in your stored procedure using a table variable. You can
begin a transaction
modify your data
query the modified data
insert it into the table variable
roll back the transaction
select what's in the table variable
declare @myData table (someColumn int, someOtherColumn varchar(10))

begin transaction
begin try
    -- [make your changes]

    insert into @myData
    select something, something something

    rollback transaction

    select * from @myData
end try
begin catch
    rollback transaction
end catch
Rolling back your transaction won't affect what's in the table variable, and I'd trust this more than counting on my application to roll back the transaction. Not that that wouldn't work; I'd just trust this much more.
That being said, you could just create a SqlTransaction (docs), execute your SqlCommand using that transaction, query your data, and then roll back the transaction.
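A minimal sketch of that approach (the stored procedure name dbo.MyProc is a placeholder):
using System.Data;
using System.Data.SqlClient;

static void RunProcThenRollBack(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            // Execute the stored procedure inside the transaction.
            using (var cmd = new SqlCommand("dbo.MyProc", conn, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.ExecuteNonQuery();
            }

            // ... query the modified data and do your computations here,
            // using commands enlisted in the same transaction ...

            // Undo everything the procedure changed.
            tx.Rollback();
        }
    }
}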
On an Oracle database, if you want to wait ("sleep") in your PL/SQL program, you can use the SLEEP procedure from the DBMS_LOCK package.
CREATE OR REPLACE PROCEDURE execution(xVal NUMBER) IS
BEGIN
    INSERT INTO TABLE_1 VALUES (xVal);
    DBMS_LOCK.SLEEP(60);
    ROLLBACK;
END execution;

How to perform a row lock?

I want to lock one record and then no one may make changes to that record. When I release the lock, then people may change the record.
In the meantime that a record is locked, I want to show the user a warning that the record is locked and that changes are not allowed.
How can I do this?
I've tried all the IsolationLevel values, but none of them has the behavior I want. Some of the isolation levels wait until the lock is released and then make the change; I don't want this, because updating is not allowed at all while a record is locked.
What can I do to lock a record and deny all changes?
I use SQL Server 2008
With the assumption that this is MS SQL Server, you probably want UPDLOCK, possibly combined with ROWLOCK (Table Hints). I'm having trouble finding a decent article that describes the theory, but here is a quick example:
SELECT id From mytable WITH (ROWLOCK, UPDLOCK) WHERE id = 1
This statement will place an update lock on the row for the duration of the transaction (so it is important to be aware of when the transaction will end). As update locks are incompatible with exclusive locks (required to update records), this will prevent anyone from updating this record until the transaction has ended.
Note that other processes attempting to modify this record will be blocked until the transaction completes; however, they will continue with whatever write operation they requested once the transaction has ended (unless they time out or are killed off as a deadlock victim). If you wish to prevent that, your other processes need to use additional hints in order to either abort if an incompatible lock is detected, or skip the record if it has changed.
Also, you should not use this method to lock records while waiting for user input. If this is your intention, you should add some sort of "being modified" column to your table instead (there is a rough sketch of that idea below).
The SQL Server locking mechanisms are really only suited to preserving data integrity / preventing deadlocks - transactions should generally be kept as short as possible and should certainly not be held open while waiting for user input.
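As a rough sketch of that "being modified" column idea (the table, column, and method names are invented for illustration):
using System.Data.SqlClient;

// Try to claim the record for editing. A return of false means someone
// else holds it, so the UI can show a "record is locked" warning.
static bool TryClaimRecord(SqlConnection conn, int id, string user)
{
    using (var cmd = new SqlCommand(
        "UPDATE dbo.MyTable SET LockedBy = @user " +
        "WHERE Id = @id AND LockedBy IS NULL;", conn))
    {
        cmd.Parameters.AddWithValue("@user", user);
        cmd.Parameters.AddWithValue("@id", id);
        return cmd.ExecuteNonQuery() == 1;
    }
}
Remember to clear LockedBy when the user finishes, and consider adding a timestamp so abandoned locks can expire.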
SQL Server has locking hints, but these are limited to the scope of a query.
If the decision to lock the record is taken in an application, you can use the same mechanisms as optimistic locking and deny any changes to the record from the application.
Use a timestamp or GUID as a locking key on the record and deny access or changes to the record if the wrong locking key is given. Be careful to unlock records again, or you will get orphans.
See this duplicate question on SO.
Basically it's:
begin tran
select * from [table] with (holdlock, rowlock) where id = @id
--Here goes your stuff
commit tran
Something like this maybe?
update t
set t.IsLocked = 1
from [table] t
where t.id = @id
Somewhere in the update trigger:
if exists (
    select top 1 1
    from deleted d
    join inserted i on i.id = d.id
    where d.IsLocked = 1 and i.RowVersion <> d.RowVersion)
begin
    print 'Row is locked'
    rollback tran
end
If you don't want to wait for the lock to be released and instead want to show the message as soon as you encounter a lock, try NOWAIT. See Table Hints (Transact-SQL) and SQL Server 2008 Table Hints for more details. To get the benefit of NOWAIT you need to lock records on edits; google for more details.
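A sketch of how NOWAIT might be used from C#: when the row is locked, the query fails immediately with SQL Server error 1222 ("Lock request time out period exceeded"), which you can catch to show the warning. Table and column names are placeholders.
using System.Data.SqlClient;

static bool IsRecordLocked(SqlConnection conn, int id)
{
    try
    {
        using (var cmd = new SqlCommand(
            "SELECT Id FROM dbo.MyTable WITH (NOWAIT) WHERE Id = @id;", conn))
        {
            cmd.Parameters.AddWithValue("@id", id);
            using (cmd.ExecuteReader()) { }
        }
        return false;
    }
    catch (SqlException ex) when (ex.Number == 1222)
    {
        // Lock request time out period exceeded: the row is locked.
        return true;
    }
}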

assigning a serial number to a client from a pool of serial numbers

I have a SQL Server table of licence keys/serial numbers.
The table structure is something like:
[
RecordId int,
LicenceKey string,
Status int (available, locked, used, expired etc.)
AssignedTo int (customerId)
....
]
Through my ASP.NET application, when the user decides to buy a licence by clicking the accept button, I need to reserve a licence key for that user.
My approach is:
Select top 1 licenceKey from KeysTable Where Status = available
Update KeysTable Set status = locked
then return the key back to the application.
My concern is that two ASP.NET threads might access the same record and return the same licence key.
What do you think is the best practice for doing such assignments? Is there a well-known approach or pattern for this kind of problem?
Where should I use lock() statements, if I need any?
I'm using SQL Server 2005, stored procedures for data access, a DataLayer, a BusinessLayer and an ASP.NET GUI.
Thanks
There's probably no need to use explicit locks or transactions in this case.
In your stored procedure you can update the table and retrieve the license key in a single, atomic operation by using an OUTPUT clause in your UPDATE statement.
Something like this:
UPDATE TOP (1) KeysTable
SET Status = 'locked'
OUTPUT INSERTED.LicenseKey
-- if you want more than one column...
-- OUTPUT INSERTED.RecordID, INSERTED.LicenseKey
-- if you want all columns...
-- OUTPUT INSERTED.*
WHERE Status = 'available'
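From C# you could then execute that statement with ExecuteScalar, which returns the claimed key, or null when no keys are left (a sketch reusing the statement above):
using System.Data.SqlClient;

static string TryReserveLicenceKey(SqlConnection conn)
{
    // Atomically claims one available key; ExecuteScalar returns the
    // key emitted by the OUTPUT clause, or null if none are available.
    const string sql = @"
        UPDATE TOP (1) KeysTable
        SET Status = 'locked'
        OUTPUT INSERTED.LicenseKey
        WHERE Status = 'available';";

    using (var cmd = new SqlCommand(sql, conn))
    {
        return (string)cmd.ExecuteScalar();
    }
}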
To achieve what you're talking about, you'll want to use a serializable transaction. To do this, follow this pattern:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
GO
BEGIN TRANSACTION
--Execute select
--Execute update
COMMIT TRANSACTION
However, why do you have a table with every possible license key? Why not have a key generation algorithm, then create a new key when a user purchases it?
You could also try using locks (in SQL) in addition to transactions, to verify that only one thread has access at a time.
I believe that an application lock may be of help here.
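For reference, a hedged sketch of taking an application lock (sp_getapplock) around the key assignment; the lock name and timeout are arbitrary choices, not from the question:
using System;
using System.Data;
using System.Data.SqlClient;

static void AssignKeyUnderAppLock(SqlConnection conn)
{
    using (var tx = conn.BeginTransaction())
    {
        // Serialise all key assignments behind one named application lock;
        // a transaction-owned lock is released when the transaction ends.
        using (var cmd = new SqlCommand("sp_getapplock", conn, tx))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Resource", "KeyAssignment");
            cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
            cmd.Parameters.AddWithValue("@LockOwner", "Transaction");
            cmd.Parameters.AddWithValue("@LockTimeout", 5000);
            var ret = cmd.Parameters.Add("@ReturnValue", SqlDbType.Int);
            ret.Direction = ParameterDirection.ReturnValue;
            cmd.ExecuteNonQuery();
            // sp_getapplock returns a negative code on timeout or failure.
            if ((int)ret.Value < 0)
                throw new TimeoutException("Could not acquire the application lock.");
        }

        // ... select an available key and mark it locked here ...

        tx.Commit();
    }
}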
I think that you should actually mark the key as unavailable in the same stored proc that you are querying it from, because otherwise there will always be some sort of race condition. Manually locking tables is not good practice IMHO.
If you have a two staged process (e.g. like booking airline tickets), you could introduce a concept of reserving a key for a specified period of time (e.g. 30 mins), so that when you query for a new key, you reserve it at the same time.
EDIT: Locking in the business logic would probably work if you can guarantee that only one process is going to change the database, but it is much better to do it at the database level, preferably in a single stored proc. To do it correctly you have to set the transaction isolation level and use transactions in the database, just as @Adam Robinson suggested in his answer.

How to avoid a database race condition when manually incrementing PK of new row

I have a legacy data table in SQL Server 2005 that has a PK with no identity/autoincrement and no power to implement one.
As a result, I am forced to create new records in ASP.NET manually via the ol' "SELECT MAX(id) + 1 FROM table"-before-insert technique.
Obviously this creates a race condition on the ID in the event of simultaneous inserts.
What's the best way to gracefully resolve the event of a race collision? I'm looking for VB.NET or C# code ideas along the lines of detecting a collision and then re-attempting the failed insert by getting yet another max(id) + 1. Can this be done?
Thoughts? Comments? Wisdom?
Thank you!
NOTE: What if I cannot change the database in any way?
Create an auxiliary table with an identity column. In a transaction, insert into the aux table, retrieve the value, and use it to insert into your legacy table. At this point you can even delete the row inserted into the aux table; the point is just to use it as a source of incremented values.
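A minimal sketch of that aux-table trick, assuming a table created as CREATE TABLE AuxIds (id int IDENTITY(1,1)); the names are invented for illustration:
using System.Data.SqlClient;

static int NextLegacyId(SqlConnection conn, SqlTransaction tx)
{
    // Each insert into the identity table hands out the next value;
    // the rows themselves are disposable.
    using (var cmd = new SqlCommand(
        "INSERT INTO AuxIds DEFAULT VALUES; " +
        "SELECT CAST(SCOPE_IDENTITY() AS int);", conn, tx))
    {
        return (int)cmd.ExecuteScalar();
    }
}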
Not being able to change database schema is harsh.
If you insert an existing PK into the table you will get a SqlException with a message indicating a PK constraint violation. Catch this exception and retry the insert a few times until you succeed. If you find that the collision rate is too high, you may try max(id) + <small random int> instead of max(id) + 1. Note that with this approach your ids will have gaps and the id space will be exhausted sooner.
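A sketch of that retry loop (error 2627 is SQL Server's primary key / unique constraint violation; the table and column names are placeholders):
using System.Data.SqlClient;

static void InsertWithRetry(SqlConnection conn, string value, int maxAttempts = 5)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO legacy (id, col) " +
                "SELECT MAX(id) + 1, @col FROM legacy;", conn))
            {
                cmd.Parameters.AddWithValue("@col", value);
                cmd.ExecuteNonQuery();
                return; // success
            }
        }
        catch (SqlException ex) when (ex.Number == 2627 && attempt < maxAttempts)
        {
            // PK collision: another insert won the race; try again.
        }
    }
}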
Another possible approach is to emulate autoincrementing id outside of database. For instance, create a static integer, Interlocked.Increment it every time you need next id and use returned value. The tricky part is to initialize this static counter to good value. I would do it with Interlocked.CompareExchange:
class Autoincrement {
    static int id = -1;
    public static int NextId() {
        if (id == -1) {
            // not initialized - initialize from the current max in the db
            int lastId = <select max(id) from db>;
            Interlocked.CompareExchange(ref id, lastId, -1);
        }
        // get next id atomically
        return Interlocked.Increment(ref id);
    }
}
Obviously, the latter works only if all inserted ids are obtained via Autoincrement.NextId within a single process.
The key is to do it in one statement or one transaction.
Can you do this?
INSERT INTO table (PKcol, col2, col3, ...)
SELECT (SELECT MAX(id) + 1 FROM table WITH (HOLDLOCK, UPDLOCK)), @val2, @val3, ...
Without testing, this will probably work too:
INSERT INTO table (PKcol, col2, col3, ...)
VALUES ((SELECT MAX(id) + 1 FROM table WITH (HOLDLOCK, UPDLOCK)), @val2, @val3, ...)
If you can't, another way is to do it in a trigger.
The trigger is part of the INSERT transaction
Use HOLDLOCK, UPDLOCK for the MAX. This holds the row lock until commit
The row being updated is locked for the duration
A second insert will wait until the first completes.
The downside is that you are changing the primary key.
An auxiliary table needs to be part of a transaction.
Or change the schema as suggested...
Note: All you need is a source of ever-increasing integers. It doesn't have to come from the same database, or even from a database at all.
Personally, I would use SQL Express because it is free and easy.
If you have a single web server:
Create a SQL Express database on the web server with a single table [ids] with a single autoincrementing field [new_id]. Insert a record into this [ids] table, get the [new_id], and pass that onto your database layer as the PK of the table in question.
If you have multiple web servers:
It's a pain to setup, but you can use the same trick by setting appropriate seed/increment (i.e. increment = 3, and seed = 1/2/3 for three web servers).
What about running the whole batch (the select for the id and the insert) in a serializable transaction?
That should get you around needing to make changes in the database.
Is the main concern concurrent access? I mean, will multiple instances of your app (or, God forbid, other apps outside your control) be performing inserts concurrently?
If not, you can probably manage the inserts through a central, synchronized module in your app, and avoid race conditions entirely.
If so, well... like Joel said, change the database. I know you can't, but the problem is as old as the hills, and it's been solved well -- at the database level. If you want to fix it yourself, you're just going to have to loop (insert, check for collisions, delete) over and over and over again. The fundamental problem is that you can't perform a transaction (I don't mean that in the SQL "TRANSACTION" sense, but in the larger data-theory sense) if you don't have support from the database.
The only further thought I have is that if you at least have control over who has access to the database (e.g., only "authorized" apps, either written or approved by you), you could implement a side-band mutex of sorts, where a "talking stick" is shared by all the apps and ownership of the mutex is required to do an insert. That would be its own hairy ball of wax, though, as you'd have to figure out policy for dead clients, where it's hosted, configuration issues, etc. And of course a "rogue" client could do inserts without the talking stick and hose the whole setup.
The best solution is to change the database. You may not be able to change the column to be an identity column, but you should be able to make sure there's a unique constraint on the column and add a new identity column seeded with your existing PK's. Then either use the new column instead or use a trigger to make the old column mirror the new, or both.
