Short question
How can I lock my entity so that only one operation by only one user can be performed on it at a time in an MVC project?
Long question
I have an MVC project where I want my action methods to be [SessionState(SessionStateBehavior.ReadOnly)]. But when I do this, users can execute other action methods even before one long-running action method has completed. As I have a lot of calculations and the action methods have to be executed in a predefined order, executing another action method before one ends creates lots of problems. To give an example, I have a main entity called Report, and I have to somehow ensure that one report undergoes only one operation by only one user at a time. So I have to lock my Report. Even if I do not use [SessionState(SessionStateBehavior.ReadOnly)] I have to lock the report so that multiple users do not edit the same report at a time, and for other specific reasons. Currently I am writing this information to the database, roughly something like:
ReportId
LockedUserId
IsInProcess
I have to set IsInProcess to true every time before an operation begins and reset it to false after the operation has completed. As I have lots of action methods, I created an action filter, something like below:
public class ManageReportLockAttribute
    : FilterAttribute, IActionFilter
{
    public ManageReportLockAttribute()
    {
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
            RedirectToInformationView();

        lockInfo.IsInProcess = true;
        SaveToDatabase(lockInfo);
        ...
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        lockInfo.IsInProcess = false;
        SaveToDatabase(lockInfo);
        ...
    }
}
It works, for the most part, but it has some strange problems (see this question for more info).
My question is: how can I achieve the same functionality (locking the report) in a different, more acceptable way?
I feel like it is something similar to locking when using multithreading, but it is not exactly the same, IMO.
Sorry for the long, broad and awkward question, but I want a direction to follow. Thanks in advance.
One reason why OnActionExecuted is not called even though OnActionExecuting runs as expected is that an unhandled exception occurs in OnActionExecuting. Especially when dealing with the database, there are various reasons that could lead to an exception, e.g.:
User1 starts the process and locks the entity.
User2 also wants to start the process before User1 has saved the change. So the check of IsInProcess does not lead to the redirection and User2 also wants to save the lock. In this case, a concurrency violation should occur because User1 has saved the entity in the meantime.
To illustrate the process over time (C is the check whether IsInProcess is set, S is SaveChanges), first a good case:
User1: C S
User2:      C S   (the check is done after User1's save, no problem)
Now a bad case:
User1: C   S
User2:   C     S   (User2's check takes place after User1's check, but before User1's SaveChanges becomes effective ==> concurrency violation)
As the example shows, it is critical to make sure that only one user can place the lock. There are several ways to handle this. In all cases make sure that there are as few reasons for exceptions in OnActionExecuting as possible. Handle and log the exceptions.
Please note that all synchronisation methods will have a negative impact on the performance of your application. So if you haven't already thought about whether you could avoid having to lock the report by restructuring your actions or the data model, this would be the first thing to do.
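To make the concurrency-violation idea concrete, here is a minimal sketch of how the lock could be acquired with an optimistic concurrency token, assuming Entity Framework is used for the lock table (the MyDbContext, ReportLocks and TryAcquireLock names are made up for illustration). A rowversion column lets the database reject the second of two competing saves, so only one request ends up holding the lock:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure; // EF6; use Microsoft.EntityFrameworkCore for EF Core

public class ReportLockInfo
{
    [Key]
    public int ReportId { get; set; }
    public int LockedUserId { get; set; }
    public bool IsInProcess { get; set; }

    [Timestamp] // rowversion column; EF compares it on every update
    public byte[] RowVersion { get; set; }
}

public bool TryAcquireLock(MyDbContext context, int reportId, int userId)
{
    ReportLockInfo lockInfo = context.ReportLocks.Find(reportId);
    if (lockInfo == null || lockInfo.IsInProcess)
        return false;

    lockInfo.IsInProcess = true;
    lockInfo.LockedUserId = userId;
    try
    {
        context.SaveChanges();
        return true; // this request won the race and now holds the lock
    }
    catch (DbUpdateConcurrencyException)
    {
        // another request changed the row between our read and our save
        return false;
    }
}

OnActionExecuting would then redirect to the information view whenever TryAcquireLock returns false.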
Easy approach: thread synchronisation
An easy approach is to use thread synchronisation. This approach will only work if the application runs in a single process and not in a web farm/the cloud. You need to decide whether you will be able to change the application if it will be installed in a farm at a later point in time. This sample shows an easy approach (that uses a static object for locking):
public class ManageReportLockAttribute
    : FilterAttribute, IActionFilter
{
    private static readonly object lockObj = new object();

    // ...

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        ...
        ReportLockInfo lockInfo = GetFromDatabase(reportId);
        if (lockInfo.IsInProcess)
            RedirectToInformationView();

        lock (lockObj)
        {
            // read anew just in case the lock was set in the meantime;
            // a new context should be used
            lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess)
                RedirectToInformationView();

            lockInfo.IsInProcess = true;
            SaveToDatabase(lockInfo);
            ...
        }
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        ...
        lock (lockObj)
        {
            ReportLockInfo lockInfo = GetFromDatabase(reportId);
            if (lockInfo.IsInProcess) // check whether the lock was released in the meantime
            {
                lockInfo.IsInProcess = false;
                SaveToDatabase(lockInfo);
            }
            ...
        }
    }
}
For details on using lock see this link. If you need more control, have a look at the overview of thread synchronization with C#. A named mutex is an alternative that provides locking in a more fine-grained manner.
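For illustration, here is a rough sketch of the named-mutex variant; the mutex name, the timeout and the RunWithReportMutex helper are assumptions, not part of the question. Because the mutex is named, it is shared by all processes on the same machine (for example several IIS worker processes), but it still does not help across servers:

using System;
using System.Threading;

public void RunWithReportMutex(int reportId, Action longRunningOperation)
{
    // one named mutex per report; the "Global\" prefix makes it machine-wide
    using (var mutex = new Mutex(initiallyOwned: false, name: @"Global\ReportLock_" + reportId))
    {
        if (!mutex.WaitOne(TimeSpan.FromSeconds(5)))
        {
            // could not get the lock in time, e.g. redirect to an information view
            return;
        }

        try
        {
            longRunningOperation();
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}

Note that a mutex is owned by the thread that acquired it, so acquire and release must happen on the same thread (which is why the whole operation is wrapped in one method here).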
If you want to lock on reportId instead of a static object, you need to use a lock object that is the same for the same reportId. A dictionary can store the lock objects:
private static readonly IDictionary<int, object> lockObjectsByReportId = new Dictionary<int, object>();

private static object GetLockObjectByReportId(int reportId)
{
    object lockObjByReportId;
    if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
        return lockObjByReportId;

    lock (lockObj) // use the global lock object for this short operation
    {
        if (lockObjectsByReportId.TryGetValue(reportId, out lockObjByReportId))
            return lockObjByReportId;

        lockObjByReportId = new object();
        lockObjectsByReportId.Add(reportId, lockObjByReportId);
        return lockObjByReportId;
    }
}
Instead of using lockObj in OnActionExecuting and OnActionExecuted, you'd use the function:
// ...
lock (GetLockObjectByReportId(reportId))
{
    // ...
}
Database approach: Transactions and isolation levels
Another way to handle this is to use database transactions and isolation levels. This approach will also work in a multi-server environment. In this case, you would not use Entity Framework for database access but move the code to a stored procedure that runs on the database server. By running the stored procedure in a transaction and picking the right isolation level, you can prevent one user from reading the data while another one is changing it.
This link shows an overview of isolation levels for SQL Server.
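As a hedged sketch of what the calling side could look like with plain ADO.NET: the dbo.TryLockReport procedure, its parameters and its return convention are assumptions, and the actual check-and-set logic for the lock would live inside the procedure itself.

using System.Data;
using System.Data.SqlClient;

public bool TryLockReport(string connectionString, int reportId, int userId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        // Serializable keeps the read and the update of the lock row atomic
        using (var transaction = connection.BeginTransaction(IsolationLevel.Serializable))
        using (var command = new SqlCommand("dbo.TryLockReport", connection, transaction))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@ReportId", reportId);
            command.Parameters.AddWithValue("@UserId", userId);

            // assumed convention: the procedure returns 1 if the lock was acquired, 0 otherwise
            bool acquired = (int)command.ExecuteScalar() == 1;
            transaction.Commit();
            return acquired;
        }
    }
}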
Related
I am currently developing an application in ASP.NET Core 2.0.
The following is the action inside my controller that gets executed when the user clicks the submit button.
The following is the function that gets called by the action.
As a measure to prevent duplicates inside the database I have the function IsSignedInJob(). The function works.
My Problem:
Sometimes, when the internet connection is slow or the server is not responding right away, it is possible to click the submit button more than once. When the connection is reestablished, the browser (in my case Chrome) sends multiple HttpPost requests to the server. In that case the functions (the same function from different instances) are executed so close together in time that, before the change in the database is made, the other instances are making the same change without being aware of each other.
Is there a way to solve this problem on the server side without it being too "hacky"?
Thank you
As suggested in the comments - and this is my preferred approach - you can simply disable the button once it is clicked the first time.
Another solution would be to add something to a dictionary indicating that the job has already been registered, but this will probably have to use a lock, as you need to make sure that only one thread can read/write at a time. A concurrent collection won't do the trick, as the problem is not whether this single operation is thread-safe or not. The IsSignedInJob method you have could do this behind the scenes, but I wouldn't check the database for this, as the latency could be too high. Adding/removing a key from a dictionary should be a lot faster.
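A minimal sketch of that idea, assuming the job is identified by an int id (the type and the names are made up for illustration; a HashSet guarded by a lock is used here instead of a dictionary purely for brevity):

using System.Collections.Generic;

public static class JobRegistrationGuard
{
    private static readonly object SyncRoot = new object();
    private static readonly HashSet<int> JobsInProgress = new HashSet<int>();

    // returns false if the same job is already being registered by another request
    public static bool TryBegin(int jobId)
    {
        lock (SyncRoot)
        {
            return JobsInProgress.Add(jobId);
        }
    }

    public static void End(int jobId)
    {
        lock (SyncRoot)
        {
            JobsInProgress.Remove(jobId);
        }
    }
}

The action would call TryBegin before doing the work (returning early on false) and End in a finally block afterwards.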
Icarus's answer is great for the user experience and should be implemented. If you also need to make sure the request is only handled once on the server side, you have a few options. Here is one using the ReaderWriterLockSlim class.
// requires using System.Threading;
// static, so the lock is shared across requests (a new controller instance is created per request)
private static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
private static readonly TimeSpan timeout = TimeSpan.FromMilliseconds(500); // illustrative value

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
This will prevent overlapping executions of the DoWork code. It does not prevent DoWork from finishing completely and then another post executing DoWork again.
If you want to prevent the post from happening twice, implement the AntiForgeryToken, then store the token in session. Something like this (haven't used session in forever) may not compile, but you should get the idea.
private const string SomeMethodTokenName = "SomeMethodToken";

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            // requires using Microsoft.AspNetCore.Http; for the Session.GetString/SetString extensions
            var token = Request.Form["__RequestVerificationToken"].ToString();
            var previousToken = HttpContext.Session.GetString(SomeMethodTokenName);
            if (token == previousToken) return; // same form posted twice

            HttpContext.Session.SetString(SomeMethodTokenName, token);
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
Not exactly perfect: two different requests could still happen over and over, so you could store in session the list of all used tokens for this session. There is no perfect way, because even then, someone could technically cause an OutOfMemoryException if they wanted to (too many tokens stored in session), but you get the idea.
Try not to use asynchronous processing. Remove Task, await and async.
I have a question; I hope you can help me. Thank you in advance.
I am working on a project, a web application hosted in IIS. The requirement is that the login must allow only one user to log in at a time, so if two users try to access the site at the same time, only one should get in while the other one waits until the first has logged in. I thought of using threads, with a lock statement in the sign-in validation, but I don't know whether it is good practice to use threads in this scenario, since multiple users may try to log in at the same time and only one must get access at a time. Also, I need a log of the users in the order they accessed the site, to verify that two users did not access it at the same time.
Is multithreading a good practice or recommendation for doing this?
Any suggestions? Thank you so much.
First off, when using threads it's good practice to avoid anything that will block a thread, if at all possible.
You could use a lock, which would cause incoming threads to block until the first thread has completed the login process, although I can't see how this would help in understanding multithreading. This will only help in learning how to block threads, which you should try to avoid at all costs; threads are expensive.
IMHO you should never have more threads than CPU cores; use the thread pool, and understand the difference between compute-bound and I/O-bound work. I say again: threads are expensive, in both time and memory.
Well, this solution is not so much about multithreading, but I would do something like this:
// requires using System; and using System.Threading;
public class SingleUserLock : IDisposable
{
    private SingleUserSemaphore _parent;

    public SingleUserLock(SingleUserSemaphore parent)
    {
        _parent = parent;
    }

    public bool IsLoggedIn => _parent?.CurrentUser == this;

    public void Unlock()
    {
        _parent?.Unlock();
        _parent = null;
    }

    public void Dispose()
    {
        Unlock();
    }
}

public class SingleUserSemaphore
{
    private readonly object _lockObject = new object();

    public SingleUserLock CurrentUser { get; private set; }

    public bool TryLogin()
    {
        if (Monitor.TryEnter(_lockObject))
        {
            CurrentUser = new SingleUserLock(this);
            return true;
        }
        return false;
    }

    public void Unlock()
    {
        try
        {
            Monitor.Exit(_lockObject);
            CurrentUser = null;
        }
        catch (Exception)
        {
            // Monitor.Exit throws if the lock is not held by the calling thread; ignored here
        }
    }
}
Register an instance of SingleUserSemaphore as a singleton in your dependency injection framework for the web application. Every time a user logs in, you get the singleton SingleUserSemaphore instance and call TryLogin. If it returns true, you can then store the SingleUserLock in the session if possible.
For every request, check the session for IsLoggedIn == true.
When the user logs out, you call Unlock() on the SingleUserLock returned from the session, or call SingleUserSemaphore.Unlock() directly.
Now the challenge will be if the user never logs out: your web application will be locked forever. To avoid this you could add an Update method on SingleUserSemaphore that records a timestamp for every request made by the logged-in user. So when a user logs in, you also check for last activity...
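To make the wiring concrete, here is a hedged sketch of how this could look in ASP.NET Core; the controller, action and view names are assumptions, and the answer itself does not prescribe a specific framework:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;

// In Startup.ConfigureServices
public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<SingleUserSemaphore>(); // one instance for the whole application
    services.AddSession();
    services.AddMvc();
}

// In the login controller
public class AccountController : Controller
{
    private readonly SingleUserSemaphore _semaphore;

    public AccountController(SingleUserSemaphore semaphore)
    {
        _semaphore = semaphore;
    }

    [HttpPost]
    public IActionResult Login(string userName, string password)
    {
        if (!_semaphore.TryLogin())
            return View("AnotherUserIsActive"); // hypothetical "please wait" view

        // ... validate credentials, remember the SingleUserLock for this session ...
        return RedirectToAction("Index", "Home");
    }
}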
Good luck with your homework.
I am creating entities with multiple threads at the same time.
When I do this sequentially (with one thread) everything is fine, but when I introduce concurrency there is pretty much always a new exception.
I call this method asynchronously:
public void SaveNewData()
{
    // ....DO SOME HARD WORK....

    var data = new Data
    {
        LastKnownName = workResult.LastKnownName,
        MappedProperty = new MappedProperty
        {
            PropertyName = "SomePropertyName"
        }
    };

    m_repository.Save(data);
}
I already got this exception:
a different object with the same identifier value was already associated with the session: 3, of entity: TestConcurrency.MappedProperty
and also this one:
Flushing during cascade is dangerous
and of course my favourite one:
Session is closed!Object name: 'ISession'.
What I think is going on: every thread gets the same session (NHibernate session) and then it goes wrong because everything tries to send queries through the same session.
For NHibernate configuration I use NHibernateIntegration with Castle Windsor.
m_repository.Save(data) looks like:
public virtual void Save(object instance)
{
    using (ISession session = m_sessionManager.OpenSession())
    {
        Save(instance, session);
    }
}
where m_sessionManager is injected into the constructor by Castle and is an ISessionManager. Is there any way to force this ISessionManager to give me a session per thread, or any other concurrent session handling?
So I researched and it seems that the NHibernateIntegration facility doesn't support this transaction management out of the box.
I solved it when I changed to the new Castle.NHibernate.Facility, which supersedes Castle.NHibernateIntegration - please note that this is only a beta version currently.
Castle.NHibernate.Facility supports session-per-transaction management, so it solved my problem completely.
The thing is that SQL Server sometimes chooses a session as its deadlock victim when two processes lock each other out. One process does an update and the other just a read. During the read, SQL Server creates so-called 'shared locks', which do not block other readers but do block updaters. So far the only way to solve this is to reprocess the victimized request.
Now this is happening in a web application and I would like to have a mechanism that can do the reprocessing (let's say with a maximum of 5 times) when needed.
I've looked at the IHttpModule which has a BeginRequest() and EndRequest() event being called (amongst other events) but that does not give me the ability to reprocess the request.
In fact what I need is something that forces itself between the http handler and the process being called.
I could write something like this:
int maxtries = 5;
while (maxtries > 0)
{
    try
    {
        using (var scope = Session.OpenTransaction())
        {
            // process
            scope.Complete(); // commit
            return result;
        }
    }
    catch (DeadlockException dlex)
    {
        maxtries--;
    }
    catch (Exception ex)
    {
        throw;
    }
}
but I would have to write that for all requests, which is tedious and error-prone. It would be nice if I could just configure a kind of reprocessing handler via the Web.config that is automatically called and does the deadlock reprocessing for me.
If you're getting deadlocks, you've got something wrong in your DB layer. You're missing indices or something similar, or you are doing out-of-sequence updates within transactions that are locking dependent entities.
Regardless, using HTTP as a mechanism to handle this error is not the way to go.
If you truly need to retry a deadlock, then you should wrap the attempt in your own function and retry almost exactly as you describe above.
BUT I would strongly suggest that you identify the cause of the deadlock and resolve it.
Hope that does not sound too dismissive of your problem, but fix the cause of the problem not the symptoms.
Since you're using MVC, and assuming it is safe to rerun your entire action on a DB failure, you can simply write a common base controller class from which all of your controllers inherit (if you don't already have one), and in it override OnActionExecuting to trap specific exception(s) and retry. This way you'll have the code in only one place, but, again, this assumes it is safe to rerun the entire action in such a case.
Example:
public abstract class MyBaseController : Controller
{
    protected override void OnActionExecuting(
        ActionExecutingContext filterContext
    )
    {
        int maxtries = 5;
        while (maxtries > 0)
        {
            try
            {
                base.OnActionExecuting(filterContext);
                return;
            }
            catch (DeadlockException dlex)
            {
                maxtries--;
            }
            catch (Exception ex)
            {
                throw;
            }
        }

        throw new Exception("Persistent DB locking - max retries reached.");
    }
}
... and then simply update every relevant controller to inherit from this controller (again, if you don't already have a common controller).
EDIT: BTW, Bigtoe's answer is correct - the deadlock is the cause and should be dealt with accordingly. The above solution is really a workaround if the DB layer cannot be reliably fixed. The first attempt should be to review and (re-)structure the queries so as to avoid deadlocks in the first place. Only if that is not practical should the above workaround be employed.
I read many posts saying multithreaded applications must use a separate session per thread. Perhaps I don't understand how the locking works, but if I put a lock on the session in all repository methods, would that not make a single static session thread safe?
like:
public void SaveOrUpdate(T instance)
{
    if (instance == null) return;

    lock (_session)
    {
        using (ITransaction transaction = _session.BeginTransaction())
        {
            lock (instance)
            {
                _session.SaveOrUpdate(instance);
                transaction.Commit();
            }
        }
    }
}
EDIT:
Please consider the context/type of applications I'm writing:
Not multi-user, no typical user interaction, but a self-running robot reacting to remote events like financial data and order updates, performing tasks and saves based on that. Intermittently this can create bursts of up to 10 saves per second. Typically it's the same object graph that needs to be saved every time. Also, on startup, the program loads the full database into an entity object graph. So it basically just reads once, then performs SaveOrUpdates as it runs.
Given that the application is typically editing the same object graph, perhaps it would make more sense to have a single thread dedicated to applying these edits to the object graph and then saving them to the database, or perhaps a pool of threads servicing a common queue of edits, where each thread has its own (dedicated) session that it does not need to lock. Look up producer/consumer queues (to start, look here).
Something like this:
[Producer Threads]                        [Database Servicer Thread]
Edit Event -\
Edit Event ---> Queue ------------------> Dequeue and Apply to Session -> Database
Edit Event -/
I'd imagine that a BlockingCollection<Action<ISession>> would be a good starting point for such an implementation.
Here's a rough example (note this is obviously untested):
// Assuming you have a work queue defined as
public static BlockingCollection<Action<ISession>> myWorkQueue = new BlockingCollection<Action<ISession>>();

// and your event args look something like this
public class MyObjectUpdatedEventArgs : EventArgs
{
    public MyObject MyObject { get; set; }
}

// And one of your event handlers
public void MyObjectWasChangedEventHandler(object sender, MyObjectUpdatedEventArgs e)
{
    myWorkQueue.Add(s => SaveOrUpdate(e.MyObject));
}

// Then a thread in a constant loop processing these items could work:
public void ProcessWorkQueue()
{
    var mySession = mySessionFactory.OpenSession();
    while (true)
    {
        var nextWork = myWorkQueue.Take();
        nextWork(mySession);
    }
}

// And to run the above:
var dbUpdateThread = new Thread(ProcessWorkQueue);
dbUpdateThread.IsBackground = true;
dbUpdateThread.Start();
At least two disadvantages are:
You are reducing performance significantly. Having this on a busy web server is like having a crowd outside a cinema but letting people in through a single-person-wide entrance.
A session has its internal identity map (first-level cache). A single session for the whole application means that memory consumption grows as users access different data from the database. Ultimately you could even end up with the whole database in memory, which of course would just not work. This then requires calling a method to drop the first-level cache from time to time. However, there is no good moment to drop the cache: you can't just drop it at the beginning of a request because other concurrent requests could suffer from this (a small sketch of this workaround follows below).
I am sure people will add other disadvantages.
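For reference, the "drop the first-level cache" workaround from the second point boils down to something like the following; NHibernate's ISession exposes Clear() (and Evict() for single entities), and the lock is only there because the single session is shared:

// only safe if nothing else is using _session at the same time
lock (_session)
{
    _session.Flush(); // push any pending changes first
    _session.Clear(); // evict everything from the first-level cache
}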