How to prevent simultaneous access by two applications to the same database - c#

Imagine that you have an application that has access to SQL Server 2012: it reads data from one table, processes it, and writes the result to another table.
If you launch two such applications simultaneously on different computers, the resulting data will be doubled.
The question is:
How to prevent this situation?
Please provide your examples in Transact-SQL and C#.

You set some state in the DB that informs applications that a processing task is being performed. (I assume it's OK for both applications to run one after the other with no side effects, or for the same app to run twice.)
The application will then check this state and refuse to run if it's set.
Alternatively, you can lock an entire table so the second instance cannot read (or write) data, using the isolation level.
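A minimal sketch of the state-flag approach, assuming SQL Server and a one-row table ProcessingState(InProgress BIT); the table, column, and method names are illustrative, not from the original answer:

using System.Data.SqlClient;

static bool TryBeginProcessing(SqlConnection conn)
{
    // The UPDATE is atomic, so only one instance can flip the flag from 0 to 1.
    const string sql = "UPDATE ProcessingState SET InProgress = 1 WHERE InProgress = 0;";
    using var cmd = new SqlCommand(sql, conn);
    return cmd.ExecuteNonQuery() == 1; // false => the other instance is already running
}

static void EndProcessing(SqlConnection conn)
{
    using var cmd = new SqlCommand("UPDATE ProcessingState SET InProgress = 0;", conn);
    cmd.ExecuteNonQuery();
}

An instance that gets false simply refuses to run; clear the flag in a finally block so a crash doesn't leave it stuck.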

What you want is to lock the corresponding tables while one application is doing its job.
More info here: http://www.sqlteam.com/article/introduction-to-locking-in-sql-server
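For instance, a minimal sketch of taking an exclusive table lock on SQL Server; the table name is illustrative. TABLOCKX takes an exclusive table lock that is held until the transaction ends, so a second instance blocks on its own first read:

using System.Data.SqlClient;

static void RunExclusively(string connectionString)
{
    using var conn = new SqlConnection(connectionString);
    conn.Open();
    using var tx = conn.BeginTransaction();

    using (var cmd = new SqlCommand(
        "SELECT COUNT(*) FROM SourceTable WITH (TABLOCKX, HOLDLOCK);", conn, tx))
    {
        cmd.ExecuteScalar(); // acquires the exclusive table lock
    }

    // ... read, process, and write the results here ...

    tx.Commit(); // releases the lock
}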

Related

How can I lock all DB updates for all users but one (admin)?

We have a process that needs to run every so often against a DB used by a web app, and we need to prevent all other updates during this process execution. Is there any global way to do this, perhaps through NHibernate, .NET, or directly in Oracle?
The original idea was to have a one-record DB table to indicate whether the process is running, but then we would need to go back to every single save/update method and add a check for this record before each save/update call.
My reaction to that kind of requirement is to review the design, as it is highly unusual outside of application upgrades. Other than that, there are a couple of options:
Shut down the DB, open it in exclusive mode, make the changes, and then open it up for everyone.
Attempt to lock all the required tables with LOCK TABLE. That might generate deadlock exceptions depending on the order in which the locks are taken (see the sketch below).
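A minimal sketch of the LOCK TABLE option, assuming Oracle's managed ADO.NET provider (Oracle.ManagedDataAccess) and illustrative table names. Taking the locks in one fixed order everywhere reduces the deadlock risk mentioned above:

using Oracle.ManagedDataAccess.Client;

static void RunWithTableLocks(string connectionString)
{
    using var conn = new OracleConnection(connectionString);
    conn.Open();
    using var tx = conn.BeginTransaction(); // commands enlist in this transaction automatically

    foreach (var table in new[] { "INVOICES", "ORDERS", "PAYMENTS" }) // one fixed, agreed order
    {
        using var cmd = new OracleCommand($"LOCK TABLE {table} IN EXCLUSIVE MODE", conn);
        cmd.ExecuteNonQuery(); // blocks other writers to this table until commit/rollback
    }

    // ... run the exclusive process here ...

    tx.Commit(); // releases the table locks
}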

C# acquire lock from mysql database for critical section of code

I'm using ASP.NET with a MySQL database.
Application flow:
Order created in WooCommerce and sent to my app
My app translates the Woo order object into an object to add to an external ERP system
Order created in the external ERP system, and we update a local database with that order info to record that the creation was successful
I have a critical section of code that creates an order on an external ERP resource. Multiple requests for the same order can be running at the same time because they are created from an external application (WooCommerce) that I can't control. So the critical section must only allow one request to enter at a time, otherwise duplicate orders can be created.
Important note: the application is hosted on Elastic Beanstalk, which has a load balancer, so the application can scale across multiple servers; this makes a standard C# lock object not work.
I would like to create a lock that can be shared across multiple servers/application instances so that only one server can acquire the lock and enter the critical section at a time. I can't find how to do this using MySQL and C#, so if anyone has an example that would be great.
Below is how I'm doing my single-instance, thread-safe locking. How can I convert this to be safe across multiple instances?
SalesOrder newOrder = new SalesOrder();     // the external order object
var databaseOrder = new SalesOrderEntity(); // local MySQL database object
/*
 * Make this section thread safe so multiple threads can't try to create
 * orders at the same time
 */
lock (orderLock)
{
    // Check if the order is already locked or created.
    // wooOrder comes from the external order creation application (WooCommerce).
    databaseOrder = GetSalesOrderMySqlDatabase(wooOrder.id.ToString(), originStore);
    if (databaseOrder.OrderNbr != null)
    {
        // The order is already created externally because it has an order number.
        return 1;
    }
    if (databaseOrder.Locked)
    {
        // The order is currently locked and being created.
        return 2;
    }
    // The order is not locked, so lock it before we attempt to create it externally.
    databaseOrder.Locked = true;
    UpdateSalesOrderDatabase(databaseOrder);
    // Create a sales order in the external system with the specified values.
    newOrder = (SalesOrder)client.Put(orderToBeCreated);
    // Update the order in our own database so we know it's created in the external ERP system.
    UpdateExternalSalesOrderToDatabase(newOrder);
}
Let me know if further detail is required.
You can use MySQL's named advisory lock function GET_LOCK(name, timeout) for this.
It works outside of transaction scope, so you can commit or roll back database changes before you release your lock. Read more about it here: https://dev.mysql.com/doc/refman/5.7/en/miscellaneous-functions.html#function_get-lock
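For example, a minimal sketch of the GET_LOCK approach, assuming the MySql.Data ADO.NET provider; the lock name, timeout, and helper shape are illustrative, not from the original post:

using System;
using MySql.Data.MySqlClient;

static bool RunWithOrderLock(string connectionString, string wooOrderId, Action criticalSection)
{
    using var conn = new MySqlConnection(connectionString);
    conn.Open(); // the lock belongs to this session, so keep the connection open throughout

    using (var acquire = new MySqlCommand("SELECT GET_LOCK(@name, 10);", conn))
    {
        acquire.Parameters.AddWithValue("@name", "order-" + wooOrderId);
        var result = acquire.ExecuteScalar(); // 1 = acquired, 0 = timeout, NULL = error
        if (result == null || result == DBNull.Value || Convert.ToInt32(result) != 1)
            return false;
    }
    try
    {
        criticalSection(); // only one server/instance gets here per order
        return true;
    }
    finally
    {
        using var release = new MySqlCommand("SELECT RELEASE_LOCK(@name);", conn);
        release.Parameters.AddWithValue("@name", "order-" + wooOrderId);
        release.ExecuteScalar();
    }
}

Because the lock is tied to the session, the release must go through the same connection that acquired it.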
You could also use some other kind of dedicated lock service. You can do this with a shared message-queue service, for example. See https://softwareengineering.stackexchange.com/questions/127065/looking-for-a-distributed-locking-pattern
You need to use a MySQL DBMS transaction lock for this.
You don't show your DBMS queries directly, so I can't guess them. Still, you need something like this series of queries:
START TRANSACTION;
SELECT col, col, col FROM wooTable WHERE id = <<<wooOrderId>>> FOR UPDATE;
/* do whatever you need to do */
COMMIT;
If the same <<<wooOrderId>>> row gets hit with the same sequence of queries from another instance of your web server running on another ELB server, that instance's SELECT ... FOR UPDATE query will wait until the first one commits.
Notice that intra-server multithreading and critical-section locking are neither necessary nor sufficient to solve your problem. Why?
It's unnecessary because database connections are not thread safe in the first place.
It's insufficient because you want a database-level transaction for this, not a process-level lock.
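A minimal C# rendering of the SELECT ... FOR UPDATE approach above, assuming the MySql.Data provider; the wooTable name and id column follow the hypothetical queries above:

using MySql.Data.MySqlClient;

static void CreateOrderOnce(string connectionString, string wooOrderId)
{
    using var conn = new MySqlConnection(connectionString);
    conn.Open();
    using var tx = conn.BeginTransaction(); // START TRANSACTION

    // A second instance running the same query for the same id blocks here
    // until this transaction commits or rolls back.
    using (var select = new MySqlCommand(
        "SELECT id FROM wooTable WHERE id = @id FOR UPDATE;", conn, tx))
    {
        select.Parameters.AddWithValue("@id", wooOrderId);
        select.ExecuteScalar();
    }

    // ... do whatever you need to do: check for an ERP order number, create the order ...

    tx.Commit(); // COMMIT releases the row lock
}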
You should use a transaction, which is a unit of work in a database. It makes your code not only atomic but also thread-safe. Here is a sample adapted from the MySQL official website.
The code you need:
START TRANSACTION;
COMMIT;   -- if your transaction worked
ROLLBACK; -- in case of failure
Also, I highly recommend reading about transaction isolation levels:
MySQL Transaction Isolation Levels
If you use a transaction as described above, you take a lock on your table that prevents other queries (e.g. SELECT queries) from executing; they will wait for the transaction to end. This is called "server blocking"; to understand and avoid it, read the link above carefully.
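A minimal C# version of the START TRANSACTION / COMMIT / ROLLBACK pattern above, assuming the MySql.Data provider; the sales_orders table and INSERT statement are illustrative:

using MySql.Data.MySqlClient;

static void SaveOrder(string connectionString, string wooOrderId)
{
    using var conn = new MySqlConnection(connectionString);
    conn.Open();
    using var tx = conn.BeginTransaction(); // START TRANSACTION
    try
    {
        using var cmd = new MySqlCommand(
            "INSERT INTO sales_orders (woo_id) VALUES (@id);", conn, tx);
        cmd.Parameters.AddWithValue("@id", wooOrderId);
        cmd.ExecuteNonQuery();
        tx.Commit();   // COMMIT: the transaction worked
    }
    catch
    {
        tx.Rollback(); // ROLLBACK: undo everything on failure
        throw;
    }
}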
I don't think there's any nice solution for this using a database, unless everything can be done neatly in a stored procedure, as another answer suggested. For anything else, I would look at a message-queueing solution with multiple writers and a single reader.

Multiple update of the same sql table in c#

I have a web service that can be accessed from a WinForm.
The web service accesses the database (MS SQL) in order to perform update/delete/create actions on the tables' rows, according to the user's choice in the WinForm.
What will happen if several users run the WinForm and perform an update on the same table row?
Will it be locked by the database?
That depends entirely on things like the isolation level of both connections. However, done naively, the final outcome is rather unpredictable. In reality, changes happen quickly, so it is a race condition and will be hard to reproduce reliably (for testing etc.). It may be worthwhile using something like rowversion checking for concurrency/consistency - at least then you can predict the results.
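A minimal sketch of rowversion-based optimistic concurrency on SQL Server; the Products table, RowVer column, and method shape are illustrative, not from the original answer:

using System.Data;
using System.Data.SqlClient;

static void UpdateWithConcurrencyCheck(SqlConnection conn, int id, decimal newPrice, byte[] originalRowVer)
{
    // The WHERE clause only matches if nobody changed the row since we read it;
    // SQL Server bumps the ROWVERSION column on every update.
    const string sql = @"
        UPDATE Products
        SET    Price = @price
        WHERE  Id = @id AND RowVer = @rowVer;";

    using var cmd = new SqlCommand(sql, conn);
    cmd.Parameters.AddWithValue("@price", newPrice);
    cmd.Parameters.AddWithValue("@id", id);
    cmd.Parameters.AddWithValue("@rowVer", originalRowVer);

    if (cmd.ExecuteNonQuery() == 0) // 0 rows => another user changed or deleted the row
        throw new DBConcurrencyException("Row was changed or deleted by another user.");
}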

How to get real time update of data to main warehouse

All,
Need some info.
We have stores at multiple locations and use a client-server app installed in each store for sales activity.
Sales data is stored in a database set up in every store.
At end of day, a batch pulls data from all of the store locations and updates the main warehouse database.
We want a real-time implementation, so that whenever there is a transaction at any store, the data is updated immediately in the main warehouse repository.
Any clue as to how we can achieve real-time updates of data to the main warehouse?
Thanks in advance...
One approach to this is called replication. There are several ways to do it in SQL Server. You're probably looking for transactional replication or merge replication.
Here's a place to start in the SQL Server 2012 documentation.
And here's a fairly recent overview that might be helpful.
You should make sure you understand what "real time" means, and how real-time you really need to be. If you are not pre-aggregating data before storing it in the warehouse, then you should be able to set up replication between the database servers (if they can talk to each other). If you are loading an aggregate, then it gets tricky, because you have to merge the measures (facts) into the warehouse's existing measures, which is tough. If you don't need true real time, just a slow trickle, then consider simply running your current process on a schedule in SQL Agent.
First off: why not run the batch multiple times a day? It would not really be "real time" but might yield good enough real-world results.
One option would be to implement master-master replication as provided by the SQL engine in use. This probably means taking some steps to guard against duplicate IDs, auto-increment mismatches, etc. For example, we have a master-master system set up so that one server produces entries with odd IDs and the other with even IDs.
Another approach could be that all reads are performed against local databases, and all writes are performed against a single remote master, with data replicated in a master-slave setup. This would provide the best data consistency, but a slow network would make any writes slow. We have this kind of setup implemented on top of the master-master replication, as most interactions are reads.
One real-world use case I have come across for a similar stores/warehouse setup was based on Firebird SQL. Every single table had triggers that recorded every action on the local databases in so-called log tables, and a replication application ran at all times, regularly checking these log tables, pushing the data to a remote database, and pulling in new data from the remote (which had its own log tables). As a downside, it was a horror to maintain: triggers needed to be updated whenever the database setup changed, and the replication application would fail or hang at times. Data consistency was maintained well, with conflicts avoided by using negative IDs for the local database and positive IDs for the master/remote. But in the end it did not really provide true "real time".
In the end, there is no one-size-fits-all answer, and books could probably be written on the topic. Research and Google are your friends.

SQLite and Multiple Users

I have a C# WinForms application. Part of the application pulls the contents of a SQLite table and displays it on screen in a DataGridView. I seem to have a problem when multiple users/computers are using the application.
When the program loads, it opens a single connection to the SQLite DB engine, which remains open until the user exits the program. On load, it refreshes the table in question and continues to do so at regular intervals. The table correctly updates when one user is using it or if that one user has more than one instance of the program open. If, however, more than one person uses it, the table doesn't seem to reflect changes made by other users until the program is closed and reopened.
An example: the first user (User A) logs in. The table has 5 entries. They add one, so there are now 6 entries. User B now logs in and sees 6 entries. User A enters another record, for a total of 7. User B still sees 6 even after the automatic refresh, and won't see 7 until they close and reopen the program. User A sees 7 without any issue.
Any idea what could be causing this problem? It has to be something related to the DB engine for SQLite, as I'm 100% sure my auto-refresh is working properly. I suspect it has something to do with the write-ahead logging feature or connection pooling (which I have enabled). I disabled both to see what would happen, and the same issue occurs.
It could be a file-lock issue: the first application may be taking an exclusive write lock, blocking out the other application instances. If this is true, then SQLite may simply be waiting until the lock is released before updating the data file. That is not ideal behaviour, but then again, using SQLite for multi-user applications is also not ideal.
I have found hints that SHARED locks can (or, in the most recent version, should) be used. This may be a simple solution, but documentation on it is not easy to find (probably because it bends the specification of SQLite too far?).
Despite this, it may be better to serialize file access yourself. How best to approach that depends on your precise system architecture.
Your system architecture is not clear from your description. You speak of "multiple users/computers" accessing the SQLite file.
If the multiple-computers requirement is implemented by putting the SQLite file on a network share, then this is indeed going to be a growing problem. A better architecture or another RDBMS would be advisable.
If multiple computers access the data through a server process (or multiple server processes on the same machine?), then a simple Monitor lock (the lock keyword) or a ReaderWriterLock will help; in the case of multiple server processes, an OS mutex would be required, as in the sketch below.
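A minimal sketch of the OS-mutex option for multiple server processes on one machine; the mutex name and helper shape are illustrative:

using System;
using System.Threading;

static void WriteWithCrossProcessLock(Action writeAction)
{
    // A named mutex is visible to every process on the machine.
    using var mutex = new Mutex(initiallyOwned: false, name: @"Global\MyAppSqliteWriteLock");
    bool acquired = false;
    try
    {
        acquired = mutex.WaitOne(TimeSpan.FromSeconds(10));
        if (!acquired)
            throw new TimeoutException("Could not acquire the cross-process write lock.");

        writeAction(); // perform the SQLite write here
    }
    finally
    {
        if (acquired) mutex.ReleaseMutex();
    }
}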
Update
Given your original question, the above still holds true. However, given your situation, and looking at your root problem - no access to your business's RDBMS - I have some other suggestions:
MariaDB / MySQL / PostgreSQL on your own PC - of course this would require your PC to be kept on.
Some sort of database and/or service layer hosted in a data centre (there are many options here, such as a VPS, Azure DB, a shared-hosting DB, etc., all incurring a cost; perhaps there are some small free ones out there).
SQLite across network file systems is not a great solution. You'll find that the FAQ and Appropriate Uses pages gently steer you away from using SQLite as a concurrently accessed database across an NFS.
While in theory it could work, the implementation and latency of network file systems dramatically increase the chance of locking conflicts occurring during write actions.
I should point out that reading the database takes a shared (read-only) lock, which is fine for concurrent access.
