C# acquire lock from MySQL database for critical section of code

I'm using Asp.NET with a MySql database.
Application flow:
Order created in WooCommerce and sent to my app
My app translates the WooCommerce order object into an object to add to an external ERP system
Order created in the external ERP system, and we update a local database with that order info so we know the creation was successful
I have a critical section of code that creates an order on an external ERP resource. Multiple requests for the same order can be running at the same time because they are created by an external application (WooCommerce) that I can't control. So the critical section of code must only allow one of the requests to enter at a time; otherwise duplicate orders can be created.
Important note: the application is hosted on Elastic Beanstalk behind a load balancer, so it can scale across multiple servers, which means a standard C# lock object won't work.
I would like to create a lock that can be shared across multiple servers/application instances so that only one server can acquire the lock and enter the critical section of code at a time. I can't find how to do this using MySQL and C#, so if anyone has an example that would be great.
Below is how I'm doing my single-instance thread-safe locking. How can I convert this to be safe across multiple instances?
SalesOrder newOrder = new SalesOrder();     //the external order object
var databaseOrder = new SalesOrderEntity(); //local MySql database object
/*
 * Make this section thread safe so multiple threads can't try to create
 * orders at the same time
 */
lock (orderLock)
{
    //check if the order is already locked or created.
    //wooOrder comes from external order creation application (WooCommerce)
    databaseOrder = GetSalesOrderMySqlDatabase(wooOrder.id.ToString(), originStore);
    if (databaseOrder.OrderNbr != null)
    {
        //the order is already created externally because it has an order number
        return 1;
    }
    if (databaseOrder.Locked)
    {
        //the order is currently locked and being created
        return 2;
    }
    //the order is not locked so lock it before we attempt to create externally
    databaseOrder.Locked = true;
    UpdateSalesOrderDatabase(databaseOrder);

    //Create a sales order in external system with the specified values
    newOrder = (SalesOrder) client.Put(orderToBeCreated);

    //Update the order in our own database so we know it's created in external ERP system
    UpdateExternalSalesOrderToDatabase(newOrder);
}
Let me know if further detail is required.

You can use MySQL's named advisory lock function GET_LOCK(name, timeout) for this.
This works outside of transaction scope, so you can commit or roll back database changes before you release your lock. Read more about it here: https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock
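For illustration, here is a minimal C# sketch of that approach, assuming the MySql.Data / MySqlConnector ADO.NET provider; the per-order lock name ("woo-order-" + id) is just a hypothetical convention:
using System;
using MySql.Data.MySqlClient;

public static class MySqlAdvisoryLock
{
    //GET_LOCK returns 1 when the lock is acquired, 0 on timeout, NULL on error.
    public static bool TryAcquire(MySqlConnection conn, string wooOrderId, int timeoutSeconds)
    {
        using (var cmd = new MySqlCommand("SELECT GET_LOCK(@name, @timeout)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "woo-order-" + wooOrderId);
            cmd.Parameters.AddWithValue("@timeout", timeoutSeconds);
            object result = cmd.ExecuteScalar();
            return result != null && result != DBNull.Value && Convert.ToInt64(result) == 1;
        }
    }

    //Release the lock on the same connection that acquired it.
    public static void Release(MySqlConnection conn, string wooOrderId)
    {
        using (var cmd = new MySqlCommand("SELECT RELEASE_LOCK(@name)", conn))
        {
            cmd.Parameters.AddWithValue("@name", "woo-order-" + wooOrderId);
            cmd.ExecuteScalar();
        }
    }
}
The lock is owned by the MySQL session, so acquire and release must happen on the same open connection; if that connection drops, MySQL releases the lock automatically.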
You could also use some other dedicated kind of lock service. You can do this with a shared message queue service, for example. See https://softwareengineering.stackexchange.com/questions/127065/looking-for-a-distributed-locking-pattern

You need to use a MySQL DBMS transaction lock for this.
You don't show your DBMS queries directly, so I can't guess them exactly. Still, you need this sort of sequence of queries:
START TRANSACTION;
SELECT col, col, col FROM wooTable WHERE id = <<<wooOrderId>>> FOR UPDATE;
/* do whatever you need to do */
COMMIT;
If the same <<<wooOrderId>>> row gets hit with the same sequence of queries from another instance of your web server running on another ELB server, that one's SELECT ... FOR UPDATE query will wait until the first one does the commit.
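As a rough C# sketch of that pattern (the SalesOrder table and column names are hypothetical, loosely based on the entity in the question):
using System;
using MySql.Data.MySqlClient;

//Lock the order row, do the ERP work, then commit to release the row lock.
static void CreateOrderWithRowLock(string connectionString, string wooOrderId)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            var select = new MySqlCommand(
                "SELECT OrderNbr FROM SalesOrder WHERE WooOrderId = @id FOR UPDATE",
                conn, tx);
            select.Parameters.AddWithValue("@id", wooOrderId);

            object orderNbr = select.ExecuteScalar(); //blocks here if another instance holds the row lock
            if (orderNbr != null && orderNbr != DBNull.Value)
            {
                tx.Commit(); //order already created externally, nothing to do
                return;
            }

            //...create the order in the external ERP and update the row here...

            tx.Commit(); //releases the row lock
        }
    }
}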
Notice that intra-server multithreading and critical-section locking are neither necessary nor sufficient to solve your problem. Why?
It's unnecessary because database connections are not thread safe in the first place.
It's insufficient because you want a database-level transaction for this, not a process-level lock.

You should use a transaction, which is a unit of work in the database. It makes your code not only atomic but also thread-safe. Here is a sample adapted from the MySQL official website.
The code you need:
START TRANSACTION;
COMMIT;   -- if your transaction worked
ROLLBACK; -- in case of failure
I also highly recommend reading about transaction isolation levels:
MySQL Transaction Isolation Levels
If you use a transaction as described above, you take a lock on your table, which prevents other queries (e.g. SELECT queries) from executing; they will wait for the transaction to end. This is called "server blocking"; to keep it under control, read the link above carefully.
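For completeness, a minimal C# sketch of the transaction skeleton above with an explicit isolation level, assuming the MySql.Data ADO.NET provider (the RunInTransaction helper is a hypothetical wrapper, not part of the question's code):
using System;
using System.Data;
using MySql.Data.MySqlClient;

static void RunInTransaction(string connectionString, Action<MySqlCommand> work)
{
    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        //MySQL's default isolation level is REPEATABLE READ; choose the level you need explicitly.
        using (var tx = conn.BeginTransaction(IsolationLevel.Serializable))
        {
            var cmd = conn.CreateCommand();
            cmd.Transaction = tx;
            try
            {
                work(cmd);      //run your SELECT/INSERT/UPDATE statements here
                tx.Commit();    //if your transaction worked
            }
            catch
            {
                tx.Rollback();  //in case of failure
                throw;
            }
        }
    }
}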

I don't think there's any nice solution for this using a database, unless everything can be done neatly in a stored procedure as another answer suggested. For anything else, I would look at a message-queueing solution with multiple writers and a single reader.

Related

Correct approach to update counter value in database concurrently?

Problem:
I have a multi-threaded application (or multiple client applications) needing to access a "counter" stored in the database. They need to read the value, add 1 to the counter, and store it again. Between the read and the write there is an additional layer of logic where the counter value is compared with an input obtained externally, and the result of that logic dictates whether to add 1 to the counter or not.
Pseudocode:
var counter = Database.GetCurrentValueOfCounter();
counter = max(counter, input);
Database.Save(counter + 1);
return counter;
For example, in a multiple-client scenario, all clients obtain an external input, and these inputs all equal the same value (since they are obtained at the same time).
When the first client enters this function, the other clients should wait until the counter is updated before entering. Thus, for multiple clients in the above scenario, each would obtain a sequential counter value.
What I've tried
I am using C# EF Core to implement the code and database logic. I have tried to use a serializable transaction via Database.BeginTransaction(IsolationLevel.Serializable).
Then SELECT counter FROM Table -> logic in C# -> UPDATE Table SET counter ...
However, this approach gives me a transaction deadlock. Researching deadlocks, I believe the deadlock occurs on the second thread's SELECT statement, which makes sense, since the first thread would be locking it in the transaction.
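For reference, the attempt described above looks roughly like this in EF Core; the Counters DbSet and its properties are hypothetical names, and dbContext, counterId and input are assumed to be in scope:
using System;
using System.Data;
using System.Linq;
using Microsoft.EntityFrameworkCore;

using (var tx = dbContext.Database.BeginTransaction(IsolationLevel.Serializable))
{
    //read the current counter row
    var row = dbContext.Counters.Single(c => c.Id == counterId);

    //application-level logic between the read and the write
    var counter = Math.Max(row.Value, input);
    row.Value = counter + 1;

    dbContext.SaveChanges(); //UPDATE Table SET counter ...
    tx.Commit();
    //return counter to the caller
}
Under SERIALIZABLE, two concurrent transactions can both take shared locks with their SELECTs and then each block waiting for the exclusive lock needed by the UPDATE, which produces exactly the deadlock described above.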
Question
What is the correct way to implement this locking/queuing mechanism on the database level?
Currently, I have resorted to entering a critical section in the code via lock(object); however, this WILL FAIL in a multi-server scenario. I could look into using SQL distributed locks, but it doesn't feel right to be doing the locking at the application level.
Can anyone point me to how I can achieve this sequential locking at the database level?

Concurrency issue with CosmosDb

We are struggling with duplicate documents getting created due to a race condition. We process events, and we either create or update a document. We noticed that we create duplicate documents if we get two events within a few milliseconds. The first event should result in a new document and the second one should be an update.
Here is the logic that we have in the stored proc.
Look for an existing document with the specific Id and status
Create a new document, or update an existing document if it exists.
On create, we do a select one more time to check that we have only one document with the combination of id and status; if there is more than one, we roll back. For updates, we rely on the ETag.
We are good with the update, but create is giving us a hard time. Let me know if there is a way we can fix it.
The deduplication key is the combination of external id and status. We have an existing database and we want to avoid any change that requires creating a new database.
Thanks,
Rohit
Define a unique key. CosmosDB will prevent the insertion of duplicate keys that are designated unique. You can then catch the exception and perform your update logic.
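A rough sketch of that with the .NET SDK (Microsoft.Azure.Cosmos); the container name, paths and the database/orderDocument variables are hypothetical, and note that a unique key policy can only be set when a container is created, not added to an existing one:
using System.Net;
using Microsoft.Azure.Cosmos;

//Container created with a unique key on the deduplication fields (external id + status).
var properties = new ContainerProperties("orders", "/partitionKey")
{
    UniqueKeyPolicy = new UniqueKeyPolicy
    {
        UniqueKeys = { new UniqueKey { Paths = { "/externalId", "/status" } } }
    }
};
Container container = await database.CreateContainerIfNotExistsAsync(properties);

try
{
    await container.CreateItemAsync(orderDocument, new PartitionKey(orderDocument.PartitionKey));
}
catch (CosmosException ex) when (ex.StatusCode == HttpStatusCode.Conflict)
{
    //a concurrent event already created the document; fall back to the ETag-based update path
}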
Edit based on feedback
I'm assuming you're in an environment where more than one thread or process is executing this logic. You're going to need a critical section (a lock) when you try to process each document. When it comes time to interact with CosmosDB, you'll need to acquire a lock on the id of the document you're going to insert/update. You can then check to see if the document exists, and do your insert or update based on the result. Then you'll exit the critical section by releasing the lock.
What technologies you're using will dictate what is available to you. If it's a single instance of an Azure Function, you can use something like a static thread-safe dictionary (e.g. ConcurrentDictionary) for locking. If it's multiple Azure Functions or Web Apps, you'll need a distributed lock. There are several ways to do this, such as Azure Blob leases.
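For the multi-instance case, a minimal sketch of the blob-lease idea with Azure.Storage.Blobs (the per-document blob name is a hypothetical convention):
using System;
using System.IO;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

//Only one instance can hold the lease on the per-document blob at a time.
static BlobLeaseClient AcquireDocumentLease(BlobContainerClient lockContainer, string documentId)
{
    BlobClient blob = lockContainer.GetBlobClient("locks/" + documentId);
    if (!blob.Exists().Value)
    {
        blob.Upload(Stream.Null); //the blob must exist before a lease can be taken on it
    }
    BlobLeaseClient lease = blob.GetBlobLeaseClient();
    lease.Acquire(TimeSpan.FromSeconds(30)); //throws if another instance currently holds the lease
    return lease;                            //call lease.Release() once the insert/update is done
}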
I am unaware of any type of synchronization functionality available OOTB in CosmosDB.

How can I lock all DB updates for all users but one (admin)?

We have a process that needs to run every so often against a DB used by a web app, and we need to prevent all other updates during this process execution. Is there any global way to do this, maybe through NHibernate, .NET, or directly in Oracle?
The original idea was to have a one-record DB table to indicate whether the process is running or not, but with this we would need to go back to every single save/update method and make changes to check whether this record exists prior to the save/update call.
My reaction to that kind of requirement is to review the design, as it is highly unusual outside of application upgrades. Other than that, there are a couple of options:
Shut down the DB, open it in exclusive mode, make the changes, and then open it up for everyone.
Attempt to lock all the required tables with LOCK TABLE. That might generate deadlock exceptions depending on the order in which the locks are taken (a sketch follows below).
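If you go the LOCK TABLE route (the second option above), a rough C# sketch using the Oracle.ManagedDataAccess provider and a hypothetical ORDERS table; the lock is held until the transaction commits or rolls back:
using Oracle.ManagedDataAccess.Client;

static void RunWithExclusiveTableLock(string connectionString)
{
    using (var conn = new OracleConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            //Blocks other sessions' updates to ORDERS (reads are still allowed) until this transaction ends.
            var lockCmd = new OracleCommand("LOCK TABLE orders IN EXCLUSIVE MODE", conn);
            lockCmd.Transaction = tx;
            lockCmd.ExecuteNonQuery();

            //...run the maintenance process here...

            tx.Commit(); //releases the table lock
        }
    }
}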

How to prevent simultaneous access of two applications to the one database

Imagine that you have an application that has access to SQL Server 2012, so it reads data from one table, processes it, and writes the result to another table.
If you launch two such applications simultaneously on different computers, the resulting data will be doubled.
The question is:
How to prevent this situation?
Please provide examples with Transact-SQL and C#.
You set some state in the DB that informs applications that a processing task is being performed; a sketch of this follows below. (I assume it's OK for both applications to run one after the other with no side effects, or for the same app to run twice.)
The application will then check this state and refuse to run if it's set.
Alternatively, you can lock an entire table via the isolation level so the second instance cannot read (or write) the data.
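As a rough sketch of the state-flag idea above with SQL Server (the ProcessingState table is hypothetical; the single-row UPDATE is what makes the claim atomic):
using Microsoft.Data.SqlClient;

//Returns true if this instance claimed the processing task; false if another instance already has it.
static bool TryClaimProcessing(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var cmd = new SqlCommand(
            "UPDATE ProcessingState SET InProgress = 1 WHERE Id = 1 AND InProgress = 0",
            conn);
        return cmd.ExecuteNonQuery() == 1; //only one caller can flip the flag from 0 to 1
    }
}
Remember to set the flag back to 0 when the run finishes, or stamp a timestamp so a crashed instance's claim can expire.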
What you want is to lock the corresponding tables while one application is doing its job.
More info here: http://www.sqlteam.com/article/introduction-to-locking-in-sql-server

Is there some way to open a new connection that does not take part in the current TransactionScope?

My application needs to write to a table that acts as a system-wide counter, and this counter needs to be unique throughout the whole system and all its applications.
This counter is currently implemented as a table in our Oracle database.
My problem is: I need to write to this counter table (which uses some keys to guarantee uniqueness of the counters for each business process in the system) without it getting locked in the current transaction, as multiple other processes may read or write this table as well. Gaps in the counter do not matter, but I cannot create the counter values in advance.
Simple SEQUENCEs do not help me in this case, as the sequence number is a formatted number.
For now, I have some other non-DB alternatives for achieving this, but I want to avoid changing the system code as much as I can for a while.
Actually, it would be simplest if I could open a new connection that won't take part in the current transaction, write to the table, and then close this new connection, but my tired mind can't find a way to do it.
The answer must be obvious, but the light just won't shine on me right now.
Execute your command inside a TransactionScope block created with the option "Suppress". This way your command won't participate in the current transaction.
using (var scope = new TransactionScope(TransactionScopeOption.Suppress))
{
    // Execute your command here: a connection opened inside this block does not
    // enlist in the ambient transaction, so the counter write commits on its own.
}
For more information, see http://msdn.microsoft.com/en-us/library/system.transactions.transactionscopeoption.aspx
Sure. Just make a component that is configured to open a NEW transaction (an inner transaction) that is not coupled to the outer transaction.
Read http://msdn.microsoft.com/en-us/library/ms172152%28v=vs.90%29.aspx#Y1642 for a description of how TransactionScope nesting and propagation work.
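A minimal sketch of that second option: TransactionScopeOption.RequiresNew starts an independent inner transaction, so the counter write commits even if the outer transaction later rolls back:
using System.Transactions;

//The inner scope runs in a NEW transaction, decoupled from any ambient/outer one.
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    //open a connection and write the counter row here; it enlists in the inner transaction only
    scope.Complete(); //commits the inner transaction immediately
}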
