Correct approach to update counter value in database concurrently? - c#

Problem:
I have a multi-threaded application (or multiple client applications) that needs to access a "counter" stored in the database. Each caller needs to read the value, add 1 to the counter, and store it again. Between the read and the write there is an additional layer of logic where the counter value is compared with an input obtained externally, and the result of that logic dictates whether to add 1 to the counter or not.
Pseudocode:
var counter = Database.GetCurrentValueOfCounter();
counter = max(counter, input);
Database.Save(counter + 1);
return counter;
For example, in a multi-client scenario, all clients get an external input, and the inputs all equal the same value (since they are obtained at the same time).
When the first client enters this function, the other clients should wait until the counter is updated before entering. Thus for multiple clients in the above scenario, each would obtain a sequential counter value.
What I've tried
I am using c# EF Core to implement the code and database logic. I have tried to use a serialisable transaction via Database.BeginTransaction(IsolationLevel.Serializable).
Then SELECT counter from Table -> logic in c# -> UPDATE Table SET counter ...
However, this approach gives me a transaction deadlock. From researching deadlocks, I believe the deadlock occurs on the second thread's SELECT statement, which makes sense, since the first thread would already be holding a lock inside its transaction.
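For reference, a minimal sketch of that attempt (the AppDbContext, Counters DbSet, and column names are made up); this is the deadlock-prone version, not a fix:

using System;
using System.Data;
using System.Linq;
using Microsoft.EntityFrameworkCore;

// Hypothetical AppDbContext with a Counters DbSet; reproduces the SELECT -> logic -> UPDATE
// pattern. Two concurrent callers both take read locks on the SELECT, then both try to
// upgrade them for the UPDATE, which is the classic lock-conversion deadlock.
public static int IncrementCounter(AppDbContext context, int counterId, int input)
{
    using (var tx = context.Database.BeginTransaction(IsolationLevel.Serializable))
    {
        var row = context.Counters.Single(c => c.Id == counterId); // SELECT inside the transaction
        var counter = Math.Max(row.Value, input);                  // compare with the external input
        row.Value = counter + 1;                                   // UPDATE issued by SaveChanges
        context.SaveChanges();
        tx.Commit();
        return counter;
    }
}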
Question
What is the correct way to implement this locking/queuing mechanism on the database level?
Currently, I have resorted to entering a critical section in the code via lock(object); however, this WILL FAIL in a multi-server scenario. I could look into SQL distributed locks, but it doesn't feel right to be doing the locking at the application level.
Can anyone point me to how I can achieve this sequential locking at the database level?

Related

ASP.NET Web API C# Concurrent Requests Causing Duplicates in Database

I have a Web API async controller method that calls another async method that first does a null check to see if a record exists, and if it doesn't, adds it to the database. The problem is that if, say, 3 requests come in at the same time, all the null checks happen at once in various threads (I'm assuming) and I end up with duplicate entries. For example:
public async Task DoSomething()
{
    var record = {query that returns record or null};
    if (record == null)
    {
        AddNewRecordToDatabase();
    }
}
... This seems like a very common thing and maybe I'm missing something, but how do I prevent this from happening? I have to try quite deliberately to make it create duplicates, of course, but it is a requirement that it never does.
Thanks in advance,
Lee
I would solve this by putting unique constraints in the data layer. Assuming your data source is SQL, you can put a unique constraint across the columns you are querying by with "query that returns record or null", and it will prevent these duplicates. The problem with using a lock or a mutex is that it doesn't scale across multiple instances of the service. You should be able to deploy many instances of your service (to different machines), have any of those instances handle requests, and still have consistent behavior. A mutex or lock isn't going to protect you from this concurrency issue in that situation.
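As a hedged sketch of that suggestion, assuming EF Core and made-up entity/column names (Record, ExternalId, Source), the constraint plus catching the violation could look like this:

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

// Made-up entity and columns; the database enforces uniqueness, so the losing
// concurrent insert fails instead of creating a duplicate.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Record>()
        .HasIndex(r => new { r.ExternalId, r.Source })
        .IsUnique();
}

public async Task AddIfMissingAsync(AppDbContext context, Record record)
{
    context.Records.Add(record);
    try
    {
        await context.SaveChangesAsync();
    }
    catch (DbUpdateException)
    {
        // unique index violation: another request inserted the row first, treat as "already exists"
    }
}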
I prevent this from happening with async calls by calling a stored procedure instead.
The stored procedure then makes the check, via an "on duplicate key" detection or a similar query for an MSSQL database.
That way, it's merely the order of the async calls that determines which one is a create and which is not.
There are several answers to this, depending on the details and what your team is comfortable with.
The best and most performant answer is to modify your C# code so that instead of calling a CRUD database operation it calls a stored procedure that you write. The stored procedure would check for existence and insert or update only as needed. The specifics are completely under your control, since you write the code.
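For illustration, a hedged sketch of that idea as an inline batch (the same statements could live in a stored procedure instead). The Orders table, ExternalId column, and the UPDLOCK/HOLDLOCK hints are assumptions, added so two concurrent existence checks cannot both see "missing" before either inserts:

using System.Data.SqlClient;

// Sketch only: one batch that checks and inserts on the database side.
public static void InsertIfMissing(string connectionString, string externalId)
{
    const string sql = @"
        BEGIN TRANSACTION;
        IF NOT EXISTS (SELECT 1 FROM Orders WITH (UPDLOCK, HOLDLOCK) WHERE ExternalId = @id)
            INSERT INTO Orders (ExternalId) VALUES (@id);
        COMMIT;";

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@id", externalId);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}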
If you want to stick with ordinary CRUD operations, you can force the database to serialize the requests one after the other by wrapping them in a transaction and using a strict transaction isolation level. On SQL Server you'd want to use serializable. This will prevent any transaction from altering the state of the table in the short time between the part where you check for existence and when you insert the record. See this article for a list of transaction isolation levels and how to apply them in c# code. If you do this there is a risk of deadlock, so you'll need to catch and swallow those specific errors.
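A sketch of the catch-and-retry part, assuming SQL Server, where the transaction chosen as the deadlock victim surfaces as a SqlException with error number 1205:

using System;
using System.Data.SqlClient;

// The victim transaction is rolled back by the server, so simply run the whole unit of work again.
public static void RunWithDeadlockRetry(Action transactionalWork, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            transactionalWork(); // open the serializable transaction, check, insert, commit
            return;
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
        {
            // picked as the deadlock victim; the other request won, so retry
        }
    }
}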
If your only need is to ensure uniqueness, and the new record has a natural (not surrogate) key, you can add a uniqueness constraint on the key, which will prevent the second insert from succeeding. This solution doesn't work so well with surrogate keys; it doesn't really solve the problem, it just relocates it to the surrogate key generation process. But if you have a decent natural key, this is very easy to implement.

C# acquire lock from mysql database for critical section of code

I'm using Asp.NET with a MySql database.
Application flow:
Order created in Woocommerce and sent to my app
My app translates the Woo order object into an object to add to an external ERP system
Order created in external ERP system and we update a local database with that order info to know that the creation was successful
I have a critical section of code that creates an order on an external ERP resource. Multiple requests for the same order can be running at the same time because they are created from an external application (woocommerce) that I can't control. So the critical section of code must only allow one of the requests to enter at a time otherwise duplicate orders can be created.
Important note: the application is hosted on Elastic Beanstalk which has a load balancer so the application can scale across multiple servers, which makes a standard C# lock object not work.
I would like to create a lock that can be shared across multiple servers/application instances so that only one server can acquire the lock and enter the critical section of code at a time. I can't find how to do this using MySql and C# so if anyone has an example that would be great.
Below is how I'm doing my single instance thread safe locking. How can I convert this to be safe across multiple instances:
SalesOrder newOrder = new SalesOrder();        //the external order object
var databaseOrder = new SalesOrderEntity();    //local MySql database object

/*
 * Make this section thread safe so multiple threads can't try to create
 * orders at the same time
 */
lock (orderLock)
{
    //check if the order is already locked or created.
    //wooOrder comes from external order creation application (WooCommerce)
    databaseOrder = GetSalesOrderMySqlDatabase(wooOrder.id.ToString(), originStore);
    if (databaseOrder.OrderNbr != null)
    {
        //the order is already created externally because it has an order number
        return 1;
    }
    if (databaseOrder.Locked)
    {
        //the order is currently locked and being created
        return 2;
    }
    //the order is not locked so lock it before we attempt to create externally
    databaseOrder.Locked = true;
    UpdateSalesOrderDatabase(databaseOrder);
    //Create a sales order in external system with the specified values
    newOrder = (SalesOrder) client.Put(orderToBeCreated);
    //Update the order in our own database so we know it's created in external ERP system
    UpdateExternalSalesOrderToDatabase(newOrder);
}
Let me know if further detail is required.
You can use MySQL's named advisory lock function GET_LOCK(name, timeout) for this.
This works outside of transaction scope, so you can commit or roll back database changes before you release your lock. Read more about it here: https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock
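A rough sketch of using GET_LOCK from C# (assuming the MySql.Data connector; the lock name, 10-second timeout, and helper shape are made up):

using System;
using MySql.Data.MySqlClient;

// One advisory lock per external order id; GET_LOCK returns 1 on success, 0 on timeout,
// and the lock is held by this connection until RELEASE_LOCK (or the connection closes).
public static bool RunExclusively(MySqlConnection conn, string wooOrderId, Action criticalSection)
{
    var lockName = "order-" + wooOrderId;
    using (var getLock = new MySqlCommand("SELECT GET_LOCK(@name, 10)", conn))
    {
        getLock.Parameters.AddWithValue("@name", lockName);
        if (Convert.ToInt32(getLock.ExecuteScalar()) != 1)
            return false; // another server/instance holds the lock right now
    }
    try
    {
        criticalSection(); // check the local table, create the ERP order, update the database
        return true;
    }
    finally
    {
        using (var releaseLock = new MySqlCommand("SELECT RELEASE_LOCK(@name)", conn))
        {
            releaseLock.Parameters.AddWithValue("@name", lockName);
            releaseLock.ExecuteScalar();
        }
    }
}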
You could also use some other dedicated kind of lock service. You can do this with a shared message queue service, for example. See https://softwareengineering.stackexchange.com/questions/127065/looking-for-a-distributed-locking-pattern
You need to use a MySQL DBMS transaction lock for this.
You don't show your DBMS queries directly, so I can only guess at them. Still, you need this sort of sequence of queries:
START TRANSACTION;
SELECT col, col, col FROM wooTable WHERE id = <<<wooOrderId>>> FOR UPDATE;
/* do whatever you need to do */
COMMIT;
If the same <<<wooOrderID>>> row gets hit with the same sequence of queries from another instance of your web server running on another ELB server, that one's SELECT ... FOR UPDATE query will wait until the first one does the commit.
Notice that intra-server multithreading and critical section locking is neither necessary nor sufficient to solve your problem. Why?
It's unnecessary because database connections are not thread safe in the first place.
It's insufficient because you want a database-level transaction for this, not a process-level lock.
You should use a transaction, which is a unit of work in the database. It makes your code not only atomic but also thread-safe. Here is a sample adapted from the MySQL official website.
The code you need:
START TRANSACTION;
-- ... your queries here ...
COMMIT;   -- if your transaction worked
ROLLBACK; -- in case of failure
I also highly recommend reading about transaction isolation levels:
Mysql Transaction Isolation Levels
If you use the transaction as written above, you have a lock on your table, which prevents other queries (e.g. SELECT queries) from executing; they will wait for the transaction to end. This is called "server blocking"; to avoid it, read the link above carefully.
I don't think there's any nice solution for this using a database, unless everything can be done neatly in a stored procedure like another answer suggested. For anything else I would look at a message queueing solution with multiple writers and a single reader.

Is my SQL transaction taking too long?

There is something that worries me about my application. I have a SQL query that does a bunch of inserts into the database across various tables. I timed how long it takes to complete the process: about 1.5 seconds. At this point I'm not even done developing the query; I still have more inserts to program into it, so I fully expect this process to take even longer, perhaps up to 3 seconds.
Now, it is important that all of this data be consistent and finish either completely or not at all. So what I'm wondering is: is it OK for a transaction to take that long? Doesn't it lock up the table, so that selects, inserts, updates, etc. cannot be run until the transaction is finished? My concern is that if this query is run frequently it could lock up the entire application, so that certain parts of it become either incredibly slow or unusable. With a low user base I doubt this would be an issue, but if my application should gain some traction, this query could potentially be run a lot.
Should I be concerned about this or am I missing something where the database won't act how I am thinking. I'm using a SQL Server 2014 database.
To note, I timed this by using the StopWatch C# object immediately before the transaction starts, and stop it right after the changes are committed. So it's about as accurate as can be.
You're right to be concerned about this, as a transaction will lock the rows it has written until the transaction commits, which can certainly cause problems such as deadlocks and temporary blocking that slows the system's response. But there are various factors that determine the potential impact.
For example, you probably largely don't need to worry if your users are only updating and querying their own data, and your tables have indexing to support both read and write query criteria. That way each user's row locking will largely not affect the other users--depending on how you write your code of course.
If your users share data, and you want to be able to support efficient searching across multiple user's data even with multiple concurrent updates for example, then you may need to do more.
Some general concepts:
-- Ensure your transactions write to tables in the same order
-- Keep your transactions as short as possible by preparing the data to be written as much as possible before starting the transaction.
-- If this is a new system (and even if not new), definitely consider enabling Snapshot Isolation and/or Read Committed Snapshot Isolation on the database. SI will (when explicitly set on the session) allow your read queries not to be blocked by concurrent writes. RCSI will allow all your read queries by default not to be blocked by concurrent writes. But read this to understand both the benefits and gotchas of both isolation levels: https://www.brentozar.com/archive/2013/01/implementing-snapshot-or-read-committed-snapshot-isolation-in-sql-server-a-guide/
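As a small illustration of the snapshot-isolation point above (connection string, table name, and query are placeholders), once ALLOW_SNAPSHOT_ISOLATION is enabled a reader can opt in per transaction:

using System.Data;
using System.Data.SqlClient;

// In a snapshot transaction the reader sees the last committed row versions
// instead of blocking on concurrent writers.
public static int CountOrders(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction(IsolationLevel.Snapshot))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn, tx))
        {
            var count = (int)cmd.ExecuteScalar();
            tx.Commit();
            return count;
        }
    }
}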
I think it depends on your code: how effectively you use loops, your SELECT queries, and your other statements.

Is there some way to open a new connection that does not take part in the current TransactionScope?

My application needs to write to a table that acts as a system-wide counter, and this counter needs to be unique throughout the whole system and all its applications.
This counter is currently implemented as a table in our Oracle database.
My problem is: I need to write to this counter table (which uses some keys to guarantee uniqueness of the counters for each business process in the system) without it getting locked in the current transaction, as multiple other processes may read from or write to this table as well. Gaps in the counter do not matter, but I cannot create the counter values in advance.
Simple SEQUENCEs do not help me in this case, as the sequence number is a formatted number.
For now, I have some other non-database alternatives for achieving this, but I want to avoid changing the system code as much as I can for a while.
Actually, it would be simplest if I could open a new connection that won't take part in the current transaction, write to the table, and then close this new connection, but my tired mind can't find a way to do it.
The answer must be obvious, but the light just doesn't shine on me right now.
Execute your command inside a TransactionScope block created with the option "Suppress". This way your command won't participate in the current transaction.
using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
{
    // Execute your command here
}
For more information, see http://msdn.microsoft.com/en-us/library/system.transactions.transactionscopeoption.aspx
Sure. Just make a component that is configured to open a NEW transaction (inner transaction) that is not coupled to the outer transaction.
Read http://msdn.microsoft.com/en-us/library/ms172152%28v=vs.90%29.aspx#Y1642 for a description how TransactionScopes progress.
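A minimal sketch of that idea with TransactionScopeOption.RequiresNew (the counter-writing helper is hypothetical):

using System.Transactions;

// RequiresNew suspends the ambient (outer) transaction and starts an independent one,
// so the counter write commits on its own even if the outer work later rolls back.
using (var inner = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    WriteCounterRow(); // hypothetical helper that updates the counter table
    inner.Complete();
}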

Prevent calling a web service too many times

I provide a web service for my clients which allows them to add a record to the production database.
I had an incident lately in which my client's programmer called the service in a loop, calling my service thousands of times.
My question is: what would be the best way to prevent such a thing?
I thought of some ways:
1. At the entrance to the service, I can update counters for each client that calls the service, but that looks too clumsy.
2. Check the IP of the client who called this service, raise a flag each time he/she calls the service, and then reset the flag every hour.
I'm positive that there are better ways and would appreciate any suggestions.
Thanks, David
First you need to have a look at the legal aspects of your situation: Does the contract with your client allow you to restrict the client's access?
This question is out of the scope of SO, but you must find a way to answer it, because if you are legally bound to process all requests, then there is no way around it. Also, the legal analysis of your situation may already include some limitations on how you may restrict access. That in turn will have an impact on your solution.
All those issues aside, and just focusing on the technical aspects, do you use some sort of user authentication? (If not, why not?) If you do, you can implement whatever scheme you decide to use on a per-user basis, which I think would be the cleanest solution (you don't need to rely on IP addresses, which is a somewhat ugly workaround).
Once you have your way of identifying a single user, you can implement several restrictions. The first ones that come to mind are these:
Synchronous processing
Only start processing a request after all previous requests have been processed. This may even be implemented with nothing more than a lock statement in your main processing method. If you go for this kind of approach, see the explanation of the lock statement in the edit below.
Time delay between processing requests
This requires that a specific amount of time passes after one processing call before the next call is allowed. The easiest solution is to store a LastProcessed timestamp in the user's session. If you go for this approach, you need to start thinking about how to respond when a new request comes in before it is allowed to be processed - do you send an error message to the caller? I think you should...
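A minimal sketch of that idea, assuming classic ASP.NET session state and an arbitrary one-minute window:

using System;
using System.Web;

// Reject the call if this session processed a request less than a minute ago.
public static bool TryBeginProcessing()
{
    var session = HttpContext.Current.Session;
    var last = session["LastProcessed"] as DateTime?;
    if (last.HasValue && DateTime.UtcNow - last.Value < TimeSpan.FromMinutes(1))
        return false; // too soon: respond to the caller with an error

    session["LastProcessed"] = DateTime.UtcNow;
    return true;
}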
EDIT
The lock statement, briefly explained:
It is intended to be used for thread-safe operations. The syntax is as follows:
lock (lockObject)
{
    // do stuff
}
The lockObject needs to be an object, usually a private member of the current class. The effect is that if you have 2 threads that both want to execute this code, the first to arrive at the lock statement locks the lockObject. While it does its work, the second thread cannot acquire a lock, since the object is already locked. It just sits there and waits until the first thread releases the lock when it exits the block at the }. Only then can the second thread lock the lockObject and do its work, blocking the lockObject for any third thread coming along, until it has exited the block as well.
Careful, the whole issue of thread safety is far from trivial. (One could say that the only thing trivial about it are the many trivial errors a programmer can make ;-)
See here for an introduction into threading in C#
One way is to store a counter in the session and use it to limit the number of calls per unit of time.
But if your user may try to get around that by sending a different cookie each time*, then you need to make a custom table that acts like the session but connects the user with the IP, not with the cookie.
One more thing: if you block based on the IP alone, you may block an entire company coming out of a proxy. So the most correct, but more complicated, way is to connect both the IP and the cookie with the user, and to know whether the browser allows cookies or not. If not, then you block by IP. The difficult part is knowing about the cookie: on every call you can force the client to send a valid cookie that is connected with an existing session; if it cannot, then the browser does not have cookies.
[ * ] The cookies are connected with the session.
[ * ] By making a new table to keep the counters, disconnected from the session, you can also avoid the session lock.
In the past I have used code written for DoS attacks, but none of it works well when you have many pools and a complicated application, so I now use a custom table as described. These are the two pieces of code that I have tested and used:
Dos attacks in your web app
Block Dos attacks easily on asp.net
How to find the clicks per second saved in a table: here is the part of my SQL that calculates the clicks per second. One of the tricks is that I keep adding clicks and only calculate the average once 6 or more seconds have passed since the last check. This is a code snippet from the calculation, as an idea:
SET @cDos_TotalCalls = @cDos_TotalCalls + @NewCallsCounter
SET @cMilSecDif = ABS(DATEDIFF(millisecond, @FirstDate, @UtpNow))

-- leave a 6-second window before making the calculation
IF @cMilSecDif > 6000
    SET @cClickPerSeconds = (@cDos_TotalCalls * 1000 / @cMilSecDif)
ELSE
    SET @cClickPerSeconds = 0

IF @cMilSecDif > 30000
    UPDATE ATMP_LiveUserInfo SET cDos_TotalCalls = @NewCallsCounter, cDos_TotalCallsChecksOn = @UtpNow WHERE cLiveUsersID = @cLiveUsersID
ELSE IF @cMilSecDif > 16000
    UPDATE ATMP_LiveUserInfo SET cDos_TotalCalls = (cDos_TotalCalls / 2),
        cDos_TotalCallsChecksOn = DATEADD(millisecond, @cMilSecDif / 2, cDos_TotalCallsChecksOn)
    WHERE cLiveUsersID = @cLiveUsersID
Get the user's IP and insert it into the cache for an hour after they use the web service; this is cached on the server:
var userIp = HttpContext.Current.Request.UserHostAddress; // key the cache entry by the caller's IP
HttpContext.Current.Cache.Insert(userIp, true, null, DateTime.Now.AddHours(1), System.Web.Caching.Cache.NoSlidingExpiration);
When you need to check whether that user called within the last hour:
if (HttpContext.Current.Cache[userIp] != null)
{
    // means the user called within the last hour
}
