ASP.NET MVC multiple threads access database simultaneously - c#

I am building an ASP.NET MVC 4 app using Entity Framework where multiple threads can access a table at the same time (adding rows, deleting rows, etc.). Right now I am doing using (UserDBContext db = new UserDBContext()) within each controller, so a new DbContext is created for each request (the MVC framework handles each request on a separate thread). From what I read, this is safe; however, I am curious about the following:
What happens when two threads access the same table, but not the same row? Are both threads allowed to proceed simultaneously?
What happens when two threads modify the same row, say, one tries to read while the other tries to delete? Is one thread blocked (put to sleep) and then woken up automatically when the other is done?
Thanks!

1: Locking in the database. Guaranteeing correctness in multi-user scenarios is one of the top priorities of a database. Learn the basics; there are good books.
2: Locking, again. One will have to wait.
This is extremely fundamental, so I would suggest you take two steps back, get something like "SQL for Dummies", and learn about the ACID properties that any decent database guarantees. Nothing in here has to do with EF, by the way. This is all handled at the database's internal level.
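To make the blocking concrete, here is a sketch (table and column names are made up for illustration) of what happens in SQL Server under the default READ COMMITTED isolation level when two sessions touch the same table:

```sql
-- Session 1: deletes a row inside a transaction but has not committed yet.
BEGIN TRAN;
DELETE FROM Users WHERE Id = 42;   -- takes an exclusive (X) lock on the row

-- Session 2 (concurrently): tries to read the same row.
SELECT * FROM Users WHERE Id = 42; -- blocks on the X lock until session 1
                                   -- commits or rolls back

-- Session 2: reads a *different* row of the same table.
SELECT * FROM Users WHERE Id = 7;  -- proceeds immediately; row-level locks
                                   -- only conflict on the same row

-- Session 1:
COMMIT;                            -- session 2's blocked read now resumes
```

This answers both questions above: different rows generally proceed in parallel, and a conflicting reader simply waits until the writer's transaction ends.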

Related

Locking rows with EF6

Using SQL Server 2012, ASP.NET 4.6.2, and EF6. I have a table of URLs. Each URL has to go through a number of third-party processes via API calls, with the state reflected in that table. I'm planning to use scheduled background processes of some sort to kick those processes off. I've come up with a structure like:
Id int (PK)
SourceUrl varchar(200)
Process1Status int
Process2Status int
When rows go into the table, the status flags will be 0 for AwaitingProcessing; 1 will mean InProgress, and 2 Complete.
To ensure the overall processing is quicker, I want to run these two processes in parallel. In addition, there may be multiple instances of each of these background processors picking up URLs from the queue.
I'm new to multi-threaded processing, though, so I'm a bit concerned that there will be some conflicting processing going on.
What I want to be able to do is ensure that no Process1Runner selects the same row as another Process1Runner, by having each Process1Runner take only one item and flag it as currently in progress. I'd also like to ensure that when the separate third-party services call back to the notification URLs, no status update is lost if two processes attempt to update Process1Status and Process2Status at the same time.
I've seen two possible relevant answers: How can I lock a table on read, using Entity Framework?
and: Get "next" row from SQL Server database and flag it in single transaction
But I'm not much clearer about which route I should take for my needs. Could someone point me in the right direction? Am I on the right track?
If, by design, multiple actors need access to the same row of data, I would split the data to avoid this situation.
My first thought is to suggest building a UrlProcessStatus table with URLId, ProcessId, and Status columns. This way the workers can read/write their data independently.
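As for the "take only one item and flag it" part, a common way to make the claim atomic in SQL Server is a single UPDATE with locking hints and an OUTPUT clause, so competing workers skip rows another worker has already locked. A sketch against the structure above (the table name dbo.Urls is an assumption, as the question doesn't name it; status values are the ones described, 0 = awaiting, 1 = in progress):

```sql
-- Claim one awaiting row for Process1 and return it in a single statement.
-- READPAST skips rows currently locked by other workers; UPDLOCK + ROWLOCK
-- keep the claim atomic, so no two Process1 runners pick up the same row.
UPDATE TOP (1) dbo.Urls WITH (UPDLOCK, READPAST, ROWLOCK)
SET Process1Status = 1            -- InProgress
OUTPUT inserted.Id, inserted.SourceUrl
WHERE Process1Status = 0;         -- AwaitingProcessing
```

From EF6 this could be run via db.Database.SqlQuery. Note also that the lost-update worry largely disappears if each callback's UPDATE sets only its own column (Process1Status or Process2Status), since the two statements then don't overwrite each other's data.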

Implementing database caching in ASP.NET

I'm considering implementing sql database caching using the following scheme:
In the ASP.NET web application I want a continuously running thread that checks a table, say dbStatus, to see if the field dbDirty has been set to true. If so, the local in-memory cache is updated by querying a view in which all the needed tables are present.
When any of the tables in the view is updated, a trigger on that table fires and sets dbStatus.dbDirty to true. This means I would have to add insert, update, and delete triggers on those tables.
One of the reasons I want to implement such a caching scheme is that the same database is used by a WinForms version of this application.
My question: is this a viable approach?
Many thanks in advance for helping me with this one, Paul
This is a viable approach.
The main problem you need to be aware of is that ASP.NET worker processes can exit at any time for many reasons (deployment, recycle, reboot, bluescreen, bug, ...). This means that your code must tolerate being aborted (in fact just disappearing) at any time.
Also, consider that your app can run two times at the same time during worker recycling and if you run multiple servers for HA.
Also, cross-request state in a web app requires you to correctly synchronize your actions. This sounds like you might need to solve some race conditions.
Besides that, this approach works.
Consider incrementing a version number instead of using a boolean. That makes it easier to avoid synchronization issues such as lost updates, because nothing ever has to reset the flag; only one side writes, which is easier to reason about than multiple writers.
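A sketch of the version-number variant (table, trigger, and column names are made up): each tracked table gets a trigger that bumps a single counter row, and the polling thread just compares the number it last saw.

```sql
-- One-row status table holding a monotonically increasing version.
CREATE TABLE dbo.dbStatus (Id int PRIMARY KEY, Version bigint NOT NULL);
INSERT INTO dbo.dbStatus VALUES (1, 0);

-- One trigger per tracked table; fires on insert, update, and delete.
CREATE TRIGGER trg_Customers_Dirty ON dbo.Customers
AFTER INSERT, UPDATE, DELETE
AS
    UPDATE dbo.dbStatus SET Version = Version + 1 WHERE Id = 1;
```

The polling thread remembers the last Version it loaded and refreshes the cache whenever SELECT Version FROM dbo.dbStatus returns a larger value. Since no one resets anything, a worker process that dies or recycles mid-cycle cannot leave the flag in a stale state.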

Restrict multiple edits on any objects, c# 2012

I have many objects like Pg, Layout, SitePart, etc.
Multiple users can edit these objects and save them back to the database.
However, at any given time only one user can save changes to the DB; if two users are updating the same objects, the others must wait until that user completes their job.
I have the functionality for building the objects and saving them to the DB.
However, how do I implement this locking, and how will the other users know when the object is released?
Please share some thoughts on how to proceed.
Thank you in advance.
The type of behaviour you are describing is called pessimistic concurrency. You haven't said whether you need the lock to be held within a single web request or across multiple requests. Rather than reinventing the wheel, you should use standard concurrency techniques and read up on how to implement them in .NET.
Typically, web applications use optimistic concurrency; if you need pessimistic concurrency, it gets very hard very quickly. ASP.NET does not offer out-of-the-box support for pessimistic concurrency.
You haven't said how you access your database (e.g. whether you use ADO.NET or EF), but EF has concurrency control built in. Ultimately it comes down to using transaction objects such as SqlTransaction to coordinate updates across tables, being able to check whether another user beat you to the update, and deciding what to do if they did.
With pessimistic concurrency you have a whole lot more to worry about: where to put your global lock (e.g. in the code), what happens if that goes wrong (e.g. recycling of application pools in IIS can mean that two users don't lock the same object if your lock lives in a code-based singleton), and how to deal with timeouts if you record locks in your database. For another SO question on pessimistic concurrency, see: How do I implement "pessimistic locking" in an asp.net application?
Edit: I should also have mentioned that if you are already building logic for constructing objects and saving them to the DB, you should be aware of the Repository and Unit of Work patterns. If not, read about those as well. You are solving a standard problem that has standard patterns to implement the solutions in most languages.
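To illustrate the optimistic route in EF, a sketch (the Layout entity and property names are assumptions, not the asker's real model): add a rowversion column and handle the exception EF throws when another user saved first.

```csharp
public class Layout
{
    public int Id { get; set; }
    public string Name { get; set; }

    [Timestamp]                    // maps to a SQL Server rowversion column;
    public byte[] RowVersion { get; set; }  // EF checks it on every UPDATE
}

// On save, EF issues: UPDATE ... WHERE Id = @id AND RowVersion = @original.
// If another user saved in the meantime, zero rows match and EF throws.
try
{
    db.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
    // Another user updated this object after we read it. Decide here:
    // reload and retry, merge the changes, or report the conflict.
}
```

This lets everyone edit freely and only deals with the (usually rare) conflict at save time, which is why web applications generally prefer it over holding locks.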

Prevent insert data into table at the same time

I'm working on an online sales web site. I'm using C# 4.0 and SQL Server 2008, and I want to control and prevent users from simultaneously inserting into tables like dbo.orders. How can I do that?
Inserts will not be a problem, but updates can be. The term you need to research is database concurrency. There are four basic models you can implement, each with its own pros and cons. Some are better suited to certain situations, and there are hundreds of articles on the web about this subject.
Are you trying to solve this in C# code or in SQL? In SQL it's simple, with one caveat: BEGIN TRAN ... COMMIT around the body of the stored procedure only serializes callers if the statements inside acquire conflicting locks, so take an explicit exclusive lock up front (for example with sp_getapplock). That effectively serializes the requests: if there are two inserts, they will be executed one after another. One thing to remember is that this is a blocking operation, i.e. the second insert won't start until the first one has finished (whether successfully or not).
In your Add method you can use locking with the lock keyword; this allows only one thread at a time into that code. Note that lock only serializes threads within a single process, so it won't protect you across multiple worker processes or web servers.
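As a minimal, self-contained illustration of the lock keyword (the OrderService class is made up for the example; again, this only serializes threads inside one process):

```csharp
using System;
using System.Threading.Tasks;

class OrderService
{
    private readonly object _gate = new object();
    public int OrdersSaved { get; private set; }

    public void Add()
    {
        lock (_gate)              // only one thread at a time enters here
        {
            OrdersSaved++;        // stands in for the real insert logic
        }
    }
}

class Program
{
    static void Main()
    {
        var service = new OrderService();

        // 100 concurrent "requests" all calling Add at the same time.
        Parallel.For(0, 100, _ => service.Add());

        Console.WriteLine(service.OrdersSaved);  // 100 - no lost increments
    }
}
```

Without the lock, the unsynchronized increment could lose updates under contention; with it, all 100 calls are applied.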

Using transactions with ADO.NET Data Adapters

Scenario: I want to let multiple (2 to 20, probably) server applications use a single database using ADO.NET. I want individual applications to be able to take ownership of sets of records in the database, hold them in memory (for speed) in DataSets, respond to client requests on the data, perform updates, and prevent other applications from updating those records until ownership has been relinquished.
I'm new to ADO.NET, but it seems like this should be possible using transactions with Data Adapters (ADO.NET disconnected layer).
Question part 1: Is that the right way to try and do this?
Question part 2: If that is the right way, can anyone point me at any tutorials or examples of this kind of approach (in C#)?
Question part 3: If I want to be able to take ownership of individual records and release them independently, am I going to need a separate transaction for each record, and by extension a separate DataAdapter and DataSet to hold each record, or is there a better way to do that? Each application will likely hold ownership of thousands of records simultaneously.
How long were you thinking of keeping the transaction open for?
How many concurrent users are you going to support?
These are two of the questions you need to ask yourself. If the answer for the former is a "long time" and the answer to the latter is "many" then the approach will probably run into problems.
So, my answer to question one is: no, it's probably not the right approach.
If you take the transactional lock approach then you are going to limit your scalability and response times. You could also run into database errors. e.g. SQL Server (assuming you are using SQL Server) can be very greedy with locks and could lock more resources than you request/expect. The application could request some row level locks to lock the records that it "owns" however SQL Server could escalate those row locks to a table lock. This would block and could result in timeouts or perhaps deadlocks.
I think the best way to meet the requirements as you've stated them is to write a lock manager/record checkout system. Martin Fowler calls this a Pessimistic Offline Lock.
UPDATE
If you are using SQL Server 2008 you can set the lock escalation behavior on a table level:
ALTER TABLE T1 SET (LOCK_ESCALATION = DISABLE);
This will disable lock escalation in "most" situations and may help you.
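A sketch of the lock-manager table such a checkout system is usually built on (all names are made up): ownership is claimed by inserting a row, and the primary key makes a double checkout impossible.

```sql
CREATE TABLE dbo.RecordLock (
    RecordType  varchar(50) NOT NULL,
    RecordId    int         NOT NULL,
    OwnerId     varchar(50) NOT NULL,
    LockedAtUtc datetime2   NOT NULL DEFAULT SYSUTCDATETIME(),
    PRIMARY KEY (RecordType, RecordId)  -- a second INSERT for the same record fails
);

-- Claim: succeeds for exactly one application, fails fast for the rest.
INSERT INTO dbo.RecordLock (RecordType, RecordId, OwnerId)
VALUES ('Order', 42, 'AppServer7');

-- Release when ownership is relinquished.
DELETE FROM dbo.RecordLock
WHERE RecordType = 'Order' AND RecordId = 42 AND OwnerId = 'AppServer7';
```

Because the claim is an ordinary committed row rather than an open transaction, it can be held for a long time without pinning database locks, and the LockedAtUtc column lets a cleanup job expire locks left behind by an application that crashed.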
You actually need concurrency control, along with transaction support.
Transactions only come into the picture when you perform multiple operations on the database as a single unit; as soon as the connection is released, the transaction no longer applies.
Concurrency control lets you work with multiple updates to the same data. If two or more clients hold the same set of data and one needs to read/write it after another client has updated it, concurrency control lets you decide which set of updates to keep and which to ignore. A full discussion of concurrency is beyond the scope of this answer; check out this article for more information.
