Restrict multiple edits on any objects (C# 2012)

I have many objects like Pg, Layout, SitePart etc.
Multiple users can edit these objects and save them back to the database.
However, only one user at a time may save changes to the database; if two users are updating the same objects, the others must wait until that user completes his job.
I already have the functionality for building the objects and saving them to the database.
But how do I implement this locking, and how will the other users know when the object is released?
Please share some thoughts on how to proceed.
Thank you in advance.

The type of behaviour you are describing is called pessimistic concurrency. You haven't said whether you need this lock to be held within a single web request or across multiple requests. Rather than reinventing the wheel, you should use standard concurrency techniques, and read up on how to implement those in .NET.
Typically web applications use optimistic concurrency; if you need pessimistic concurrency it gets very hard very quickly. ASP.NET does not offer out of the box support for pessimistic concurrency.
You haven't said how you access your database (e.g. whether you are using ADO.NET or EF), but EF has concurrency control built in. Ultimately it comes down to using transaction objects such as SqlTransaction to coordinate the updates across tables, being able to check whether another user beat you to the update, and, if they did, deciding what to do.
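As a minimal sketch, assuming SQL Server and illustrative table/column names based on the objects in the question (Layout, SitePart, a RowVersion column; none of this is the poster's actual schema), the check-then-update inside a SqlTransaction could look like this:

using System.Data.SqlClient;

public static bool TryUpdateLayout(
    string connectionString, int layoutId, string newTitle, byte[] originalVersion)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            // Only update if the row still carries the RowVersion we read earlier.
            var cmd = new SqlCommand(
                @"UPDATE Layout SET Title = @title
                   WHERE Id = @id AND RowVersion = @originalVersion",
                conn, tx);
            cmd.Parameters.AddWithValue("@title", newTitle);
            cmd.Parameters.AddWithValue("@id", layoutId);
            cmd.Parameters.AddWithValue("@originalVersion", originalVersion);

            if (cmd.ExecuteNonQuery() == 0)
            {
                tx.Rollback();   // another user beat us to the update
                return false;    // caller should reload and retry or merge
            }

            // ...related updates (SitePart etc.) go here, inside the
            // same transaction, so that they commit or fail together...
            tx.Commit();
            return true;
        }
    }
}

Note that this detects the competing writer rather than blocking them, i.e. it is optimistic; a pessimistic scheme needs the extra machinery described below.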
With pessimistic concurrency you have a whole lot more to worry about: where to put your global lock (e.g. in the code), what happens if that goes wrong (e.g. recycling of application pools in IIS can mean that two users don't lock the same object if your lock is in a code-based singleton), and how to deal with timeouts if you record locks in your database. If you want to see another SO question related to pessimistic concurrency, see: How do I implement "pessimistic locking" in an asp.net application?
Edit: I should also have mentioned that if you are already building logic for constructing objects and saving them to the db, you should be aware of the Repository and Unit of Work patterns. If not, then you should read about those as well. You are solving a standard problem that has standard patterns to implement the solutions in most languages.


Concurrent changes to a table from several different events (RabbitMQ) in an ASP.NET Core service

The problem is this: when a service receives messages from several other services and wants to apply those changes to a table, can the simultaneous changes cause a problem?
To be more precise: when a service receives two different messages from two different queues and wants to apply the received changes to the database, this overlap will probably cause a problem.
Suppose one message contains updated user information, while a message from another queue relates to another case whose changes or updates must also be applied to Mongo (assuming these changes occur at the same time or close together). If the database is making changes to the author information, the information in the term collection must be updated at the same time or a few moments later.
Dealing with concurrency conflicts usually comes in two flavors:
Pessimistic concurrency control
Pessimistic, or negative, concurrency control is when a record is locked at the time the user begins his or her edit process. In this concurrency mode, the record remains locked for the duration of the edit. The primary advantage is that no other user is able to get a lock on the record for updating, effectively informing any requesting user that they cannot update the record because it is in use.
There are several drawbacks to pessimistic concurrency control. If the user goes for a coffee break, the record remains locked, denying anyone else the ability to update the record, even if it has been untouched by the initial requestor. Also, in order to maintain record locks, a persistent connection to the database server is required. Since web applications can have hundreds or thousands of simultaneous users, a persistent connection to the database cannot be maintained without having tremendous resources on the database server. Moreover, some database tools are licensed based on the number of concurrent connections. As such, applications that use pessimistic concurrency would require additional licenses for use.
Optimistic concurrency control
Optimistic concurrency means we allow concurrency conflicts to happen, while hoping they will not; if one happens anyway, we react to it in some manner. It is supported in Entity Framework: you get concurrency exceptions to handle, you can add a column of rowversion (or timestamp) type to the database table, and so on.
Frameworks such as Entity Framework have optimistic concurrency control built in (although it may be turned off). It’s instructive to quickly see how it works. Basically there are three steps:
Get an entity from the DB and disconnect.
Edit it in memory.
Update the DB with the changes using a special update clause, something like: "Update this row WHERE the current values are the same as the original values".
There are some useful articles to help you with optimistic concurrency control:
OPTIMISTIC CONCURRENCY IN MONGODB USING .NET AND C#
Document-Level Optimistic Concurrency in MongoDB
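Those three steps map directly onto MongoDB. Here is a minimal sketch with the .NET driver, assuming a hypothetical document type whose Version field serves as the concurrency token:

using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public class UserDoc
{
    public ObjectId Id { get; set; }
    public string Name { get; set; }
    public int Version { get; set; }   // assumed concurrency token
}

public static async Task<bool> TrySaveUserAsync(
    IMongoDatabase database, ObjectId id, string newName)
{
    var users = database.GetCollection<UserDoc>("users");

    // 1. Get the document from the DB (and disconnect).
    var user = await users.Find(u => u.Id == id).FirstAsync();

    // 2. Edit in memory, remembering the version we read.
    int originalVersion = user.Version;
    user.Name = newName;
    user.Version = originalVersion + 1;

    // 3. Update WHERE the current version is still the original one.
    var result = await users.ReplaceOneAsync(
        u => u.Id == id && u.Version == originalVersion,
        user);

    // false = a concurrent writer got there first: reload and retry,
    // or surface the conflict to the caller.
    return result.MatchedCount == 1;
}

If the filter no longer matches, MatchedCount is 0 and you are in the conflict branch; what to do there (retry, merge, report) is a business decision.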
I use transactions for concurrent updates, and query by ID before the update operation.

"Pessimistic offline lock" with third party concurrent writers

We have an application that reads from and writes to a third-party data storage.
The code of that data storage is closed source; we do not know its internals and cannot change it.
There is only a slim API that allows reading and writing.
A pessimistic offline lock helps to span transactions and lets concurrent instances of our application work with the storage. That will work fine, I believe.
But now we have the problem that other software will also read from and write to that storage, and our application must update itself when changes in that data storage happen. The data storage itself does not provide any notification, and the third-party software will not change some global state that indicates that something has changed.
Is there any kind of pattern or best practice to "observe" that data storage and publish events to update all clients (of our software)? I really do not want to periodically read, compare, and publish events unless it is absolutely the last resort. Perhaps someone has a better idea?
A pessimistic offline lock that is not implemented by the system itself requires cooperation/participation/enforcement among all possible modifiers of the data. That is generally not achievable, and it is one of the two reasons this approach is rarely taken in modern software; to do anything remotely like it (i.e., with multiple heterogeneous writers) in a useful way requires some kind of help/assistance from the system facilities themselves. (The second reason is the problem of detecting and resolving abandoned locks, which is very troublesome.)
As for possible solutions, from a purely design viewpoint you can either use optimistic offline locks, which still need some system help, but much less, or avoid the issue altogether through more detailed state progression/control in your data model.
My approach, however, would be to set aside the design question (initially), recognize that this is primarily an issue of the data store's capabilities, and start there, looking to use system-provided lock/transaction control (which both 1: usually works and 2: is how it is usually done).
AFAIK, issues of synchronizing multi-writer access always have to start with "What tools/controls/facilities are available to constrain, divert and/or track the out-of-application writers?" What you can accomplish is practically limited by those facilities.
For instance, if you can force all access through a service of your own, then you can do almost anything. But if all you have is the OS's file-locking and file-modification-dates, then you are a lot more constrained. And if you don't have even that, then there's not much you can do.
In fact I do not have direct access to the data store; it is hosted on some server, and I have no control over the other applications that read and write to it. Right now, the best I can think of is having a service as a proxy which periodically queries the store, compares it to an older state, and fires update events to my clients if some other application has altered it (and fires some other event if my application alters it, to notify my own clients, leaving the other applications with their own problems). It sounds not very good to me, but it probably does the job.
Yep, that's about all you can do, and that only supports Optimistic Concurrency (partially), not Pessimistic. You might get improvements by adding some kind of checksum/hash to your stored data, but that's only an optimization.
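A minimal sketch of that polling proxy, with the checksum/hash optimization: the slim read API is stubbed out as a hypothetical Func<byte[]> snapshot, and only a hash of the last state is kept:

using System;
using System.Security.Cryptography;
using System.Threading;

public class StoreWatcher : IDisposable
{
    private readonly Func<byte[]> readSnapshot; // wraps the slim read API
    private readonly Timer timer;
    private string lastHash;

    public event Action StoreChanged;

    public StoreWatcher(Func<byte[]> readSnapshot, TimeSpan interval)
    {
        this.readSnapshot = readSnapshot;
        timer = new Timer(_ => Poll(), null, TimeSpan.Zero, interval);
    }

    private void Poll()
    {
        using (var sha = SHA256.Create())
        {
            // Hashing means we never keep a full copy of the old state.
            string hash = Convert.ToBase64String(sha.ComputeHash(readSnapshot()));
            if (lastHash != null && hash != lastHash)
                StoreChanged?.Invoke();
            lastHash = hash;
        }
    }

    public void Dispose() => timer.Dispose();
}

This only tells you that something changed, not what; if the API allows reading subsets, hashing per record (or per page) narrows the events you publish.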

Locking the record and unlocking

I am new to web application development and I have a task to do. It would probably be some kind of service (probably WCF, at least that is my idea) which will be responsible for locking and unlocking records in the db. I'm searching for some kind of best practices and/or tools which will do that; by tools I mean open-source solutions or something like that. The case is: what do we do when, for example, a user closes the browser mid-edit, or one user is editing the record while another one also edits it? I hope it is understandable what I want to accomplish. From what I know, the problem with locks is that they are stateless, so this is some kind of an issue, but I don't know what kind :) Thank you in advance for your help and time :)
ps. I've tried to google this on Stack... but all I get is the lock keyword in C#, and on Google there are solutions but not quite what I am looking for. Maybe I'm typing in the wrong keywords... I don't know
I'm searching for some kind of best practices
Don't do this. Do not write applications that explicitly lock and unlock data in the database. There are absolutely 0 (zero) valid scenarios for this.
I recommend you read about optimistic concurrency control.
Also read Entity Framework Optimistic Concurrency Patterns and Anti-Pattern #3: Mishandled Concurrency.
On the whole, locking records in a database is a really dangerous thing to do - especially through a service that isn't related to the actual data manipulation process. If other programs encounter that locked record and want to write to it, they tend to have to deal with exotic synchronisation issues - do they wait? Do they discard the changes they wanted to write?
In most database engines, the process that's been locked just waits - before you know it, you can have dozens or hundreds of suspended database tasks, all waiting for the lock to be released.
As Remus Rusanu writes, you should read up on optimistic concurrency control - this is the best practice for transactional web applications. It's supported by the MS Entity Framework (assuming your app is built using .NET); code example here.
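To make the EF pattern concrete, here is a minimal sketch assuming EF6 and illustrative entity/context names; the [Timestamp] attribute marks a rowversion column as the concurrency token, and a conflicting save surfaces as DbUpdateConcurrencyException:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;                 // EF6
using System.Data.Entity.Infrastructure;  // DbUpdateConcurrencyException

public class Record
{
    public int Id { get; set; }
    public string Value { get; set; }

    [Timestamp]                 // rowversion column, checked on UPDATE
    public byte[] RowVersion { get; set; }
}

public class RecordsContext : DbContext
{
    public DbSet<Record> Records { get; set; }
}

public static bool TrySaveRecord(int recordId, string newValue)
{
    using (var db = new RecordsContext())
    {
        var record = db.Records.Find(recordId);
        record.Value = newValue;
        try
        {
            db.SaveChanges();   // UPDATE ... WHERE RowVersion = original
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            // Someone else saved the row since we read it: reload,
            // then merge, overwrite, or show the conflict to the user.
            return false;
        }
    }
}

Nothing is ever locked; the conflict is detected at save time, which is exactly what makes this approach safe for web applications.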

Using transactions with ADO.NET Data Adapters

Scenario: I want to let multiple (2 to 20, probably) server applications use a single database using ADO.NET. I want individual applications to be able to take ownership of sets of records in the database, hold them in memory (for speed) in DataSets, respond to client requests on the data, perform updates, and prevent other applications from updating those records until ownership has been relinquished.
I'm new to ADO.NET, but it seems like this should be possible using transactions with Data Adapters (ADO.NET disconnected layer).
Question part 1: Is that the right way to try and do this?
Question part 2: If that is the right way, can anyone point me at any tutorials or examples of this kind of approach (in C#)?
Question part 3: If I want to be able to take ownership of individual records and release them independently, am I going to need a separate transaction for each record, and by extension a separate DataAdapter and DataSet to hold each record, or is there a better way to do that? Each application will likely hold ownership of thousands of records simultaneously.
How long were you thinking of keeping the transaction open for?
How many concurrent users are you going to support?
These are two of the questions you need to ask yourself. If the answer to the former is "a long time" and the answer to the latter is "many", then the approach will probably run into problems.
So, my answer to question one is: no, it's probably not the right approach.
If you take the transactional lock approach then you are going to limit your scalability and response times. You could also run into database errors. e.g. SQL Server (assuming you are using SQL Server) can be very greedy with locks and could lock more resources than you request/expect. The application could request some row level locks to lock the records that it "owns" however SQL Server could escalate those row locks to a table lock. This would block and could result in timeouts or perhaps deadlocks.
I think the best way to meet the requirements as you've stated them is to write a lock manager/record checkout system. Martin Fowler calls this a Pessimistic Offline Lock.
UPDATE
If you are using SQL Server 2008 you can set the lock escalation behavior on a table level:
ALTER TABLE T1 SET (LOCK_ESCALATION = DISABLE);
This will disable lock escalation in "most" situations and may help you.
You actually need concurrency control, along with transaction support.
Transactions only come into the picture when you perform multiple operations on the database; as soon as the connection is released, the transaction no longer applies.
Concurrency control lets you work with multiple updates on the same data. If two or more clients hold the same set of data and one needs to read/write the data after another client updates it, concurrency control lets you decide which set of updates to keep and which to ignore. A full treatment of concurrency is beyond the scope of this answer; check out this article for more information.
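For the disconnected DataSet scenario in the question, ADO.NET can generate an optimistic-concurrency check for you: SqlCommandBuilder emits an UPDATE whose WHERE clause compares the original values, and DataAdapter.Update throws DBConcurrencyException when another writer got there first. A minimal sketch, with hypothetical table/column names:

using System.Data;
using System.Data.SqlClient;

public static bool TryUpdatePrice(string connectionString, decimal newPrice)
{
    var adapter = new SqlDataAdapter(
        "SELECT Id, Name, Price FROM Product", connectionString);
    var builder = new SqlCommandBuilder(adapter); // generates the guarded UPDATE

    var table = new DataTable();
    adapter.Fill(table);               // read and disconnect

    table.Rows[0]["Price"] = newPrice; // edit in memory

    try
    {
        adapter.Update(table);         // UPDATE ... WHERE original values match
        return true;
    }
    catch (DBConcurrencyException)
    {
        // Another application changed the row since Fill():
        // re-read the row and decide whether to merge or discard.
        return false;
    }
}

This gives conflict detection without holding transactions or locks open between requests.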

Preferred database/webapp concurrency design when multiple users can edit the same data

I have an ASP.NET C# business webapp that is used internally. One issue we have run into as we've grown is that the original design did not account for concurrency checking - so now multiple users are accessing the same data and overwriting other users' changes. So my question is: for webapps, do people usually use a pessimistic or optimistic concurrency system? What drives the preference for one over the other, and what are some of the design considerations to take into account?
I'm currently leaning towards an optimistic concurrency check since it seems more forgiving, but I'm concerned about the potential for multiple changes being made that would be in contradiction to each other.
Thanks!
Optimistic locking.
Pessimistic locking is harder to implement and will give problems in a web environment. What action will release the lock - closing the browser? Letting the session time out? What if they then do save their changes?
You don't specify which database you are using. MS SQL Server has a timestamp datatype. It has nothing to do with time, though; it is merely a number that gets changed each time the row is updated. You don't have to do anything to make sure it gets changed, you just need to check it. You can achieve something similar by using a last-modified date/time as @KM suggests, but this means you have to remember to change it each time you update the row. If you use datetime, you need a data type with sufficient precision to ensure that the value cannot stay unchanged when it should change - for example, someone saves a row, then someone reads it, then another save happens but leaves the modified date/time unchanged. I would use timestamp unless there was a requirement to track the last-modified date on records.
To check it you can do as @KM suggests and include it in the update statement's where clause. Or you can begin a transaction, check the timestamp, do the update if all is well, and commit the transaction; if not, return a failure code or error.
Holding transactions open (as suggested by @le dorfier) is similar to pessimistic locking, but the amount of data locked may be more than a row - most RDBMSs lock at the page level by default. You will also run into the same issues as with pessimistic locking.
You mention in your question that you are worried about conflicting updates. That is exactly what the locking will prevent: both optimistic and pessimistic locking, when properly implemented, prevent exactly that.
I agree with the first answer above, we try to use optimistic locking when the chance of collisions is fairly low. This can be easily implemented with a LastModifiedDate column or incrementing a Version column. If you are unsure about frequency of collisions, log occurrences somewhere so you can keep an eye on them. If your records are always in "edit" mode, having separate "view" and "edit" modes could help reduce collisions (assuming you reload data when entering edit mode).
If collisions are still high, pessimistic locking is more difficult to implement in web apps, but definitely possible. We have had good success with "leasing" records (locking with a timeout)... similar to that 2 minute warning you get when you buy tickets on TicketMaster. When a user goes into edit mode, we put a record into the "lock" table with a timeout of N minutes. Other users will see a message if they try to edit a record with an active lock. You could also implement a keep-alive for long forms by renewing the lease on any postback of the page, or even with an ajax timer. There is also no reason why you couldn't back this up with a standard optimistic lock mentioned above.
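A minimal sketch of that leasing scheme, assuming SQL Server and a hypothetical RecordLock table (RecordId primary key, LockedBy, ExpiresAt); calling it again from a postback or AJAX timer renews the lease:

using System;
using System.Data.SqlClient;

public static bool TryAcquireLease(
    string connStr, int recordId, string user, int minutes = 2)
{
    const string sql = @"
        MERGE RecordLock AS t
        USING (SELECT @id AS RecordId) AS s ON t.RecordId = s.RecordId
        WHEN MATCHED AND (t.ExpiresAt < SYSUTCDATETIME() OR t.LockedBy = @user)
            THEN UPDATE SET LockedBy = @user,
                            ExpiresAt = DATEADD(minute, @mins, SYSUTCDATETIME())
        WHEN NOT MATCHED
            THEN INSERT (RecordId, LockedBy, ExpiresAt)
                 VALUES (@id, @user, DATEADD(minute, @mins, SYSUTCDATETIME()));";

    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(sql, conn))
    {
        cmd.Parameters.AddWithValue("@id", recordId);
        cmd.Parameters.AddWithValue("@user", user);
        cmd.Parameters.AddWithValue("@mins", minutes);
        conn.Open();
        // One affected row: lease acquired or renewed.
        // Zero affected rows: a live lock is held by someone else.
        return cmd.ExecuteNonQuery() == 1;
    }
}

Expired leases are simply taken over by the next acquirer, which sidesteps the abandoned-lock problem without any cleanup job.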
Many apps will need a combination of both approaches.
Here's a simple solution for many people working on the same records:
When you load the data, get the last-changed date; we use LastChgDate on our tables.
When you save (update) the data, add "AND LastChgDate=previouslyLoadedLastChgDate" to the where clause. If the row count = 0 on the update, raise a "someone else has already saved this data" error and roll back everything; otherwise the data is saved.
I generally do the above logic on header tables only and not on the detail tables, since they are all in one transaction.
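A minimal sketch of that where-clause check in ADO.NET (table and column names other than LastChgDate are illustrative):

using System;
using System.Data;
using System.Data.SqlClient;

public static void SaveHeader(SqlConnection conn, SqlTransaction tx,
    int orderId, string newStatus, DateTime previouslyLoadedLastChgDate)
{
    var cmd = new SqlCommand(@"
        UPDATE OrderHeader
           SET Status = @status, LastChgDate = SYSDATETIME()
         WHERE Id = @id
           AND LastChgDate = @loadedLastChgDate", conn, tx);
    cmd.Parameters.AddWithValue("@status", newStatus);
    cmd.Parameters.AddWithValue("@id", orderId);
    // DateTime2 keeps full precision, so the equality check is exact.
    cmd.Parameters.Add("@loadedLastChgDate", SqlDbType.DateTime2).Value =
        previouslyLoadedLastChgDate;

    if (cmd.ExecuteNonQuery() == 0)
    {
        tx.Rollback();
        throw new InvalidOperationException(
            "Someone else has already saved this data.");
    }
    // Detail-table updates follow in the same transaction; the caller commits.
}

Using datetime2 with SYSDATETIME() keeps the precision high enough that two saves in quick succession still get distinct LastChgDate values.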
I assume you're experiencing the 'lost update' problem.
To counter this, as a rule of thumb I use pessimistic locking when the chances of a collision are high (or transactions are short-lived), and optimistic locking when the chances of a collision are low (or transactions are long-lived, or your business rules encompass multiple transactions).
You really need to see what applies to your situation and make a judgment call.
