I am currently developing an application in ASP.NET Core 2.0.
The following is the action inside my controller that gets executed when the user clicks the submit button.
The following is the function that gets called by the action.
As a measure to prevent duplicates inside the database I have the function IsSignedInJob(). The function works.
My Problem:
Sometimes, when the internet connection is slow or the server is not responding right away, it is possible to click the submit button more than once. When the connection is re-established, the browser (in my case Chrome) sends multiple HTTP POST requests to the server. In that case the requests (the same function invoked from different instances) are executed so close together in time that, before one of them has committed its change to the database, the other instances are making the same change without being aware of each other.
Is there a way to solve this problem on the server side without being too "hacky"?
Thank you
As suggested in the comments (and this is my preferred approach), you can simply disable the button once it is clicked the first time.
Another solution would be to add an entry to a dictionary indicating that the job has already been registered, but this will probably have to use a lock, as you need to make sure that only one thread can read and write at a time. A concurrent collection won't do the trick on its own, because the problem is not whether a single operation is thread-safe. The IsSignedInJob method you have could do this behind the scenes, but I wouldn't check the database for this, as the latency could be too high; adding/removing a key in an in-memory dictionary should be a lot faster.
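A minimal sketch of that in-memory registration idea (the TryRegisterJob helper and the job key are made up for illustration; IsSignedInJob could wrap something like this internally):
private static readonly object RegistrationLock = new object();
private static readonly HashSet<string> RegisteredJobs = new HashSet<string>();

// Returns true only for the first request that registers a given job key;
// later duplicates get false and can be ignored without touching the database.
private static bool TryRegisterJob(string jobKey)
{
    lock (RegistrationLock)
    {
        return RegisteredJobs.Add(jobKey);
    }
}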
Icarus's answer is great for the user experience and should be implemented. If you also need to make sure the request is only handled once on the server side, you have a few options. Here is one using the ReaderWriterLockSlim class.
// static so every request shares the same lock (a new controller instance is created per request)
private static ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();
private static readonly TimeSpan timeout = TimeSpan.FromSeconds(1); // how long to wait for the lock

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
This will prevent overlapping DoWork code. It does not prevent DoWork from finishing completely and then another post executing that runs DoWork again.
If you want to prevent the post from happening twice, implement the anti-forgery token and then store the token in session. Something like this (I haven't used session in forever) may not compile, but you should get the idea.
private const string SomeMethodTokenName = "SomeMethodToken";

[HttpPost]
public async Task SomeMethod()
{
    if (cacheLock.TryEnterWriteLock(timeout))
    {
        try
        {
            var token = Request.Form["__RequestVerificationToken"].ToString();
            var previousToken = HttpContext.Session.GetString(SomeMethodTokenName);
            if (token == previousToken) return; // same form posted twice, ignore it

            HttpContext.Session.SetString(SomeMethodTokenName, token);
            // DoWork that should be very fast
        }
        finally
        {
            cacheLock.ExitWriteLock();
        }
    }
}
Not exactly perfect: two different requests could still alternate over and over. You could store in session the list of all tokens used so far in that session (sketched below). There is no perfect way, because even then someone could technically cause an OutOfMemoryException if they wanted to (too many tokens stored in session), but you get the idea.
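If you go that route, a rough sketch of keeping the used tokens in ASP.NET Core session (UsedTokensKey is a made-up constant, session must be enabled in Startup, and JsonConvert is Newtonsoft.Json):
var token = Request.Form["__RequestVerificationToken"].ToString();
var usedJson = HttpContext.Session.GetString(UsedTokensKey) ?? "[]";
var usedTokens = JsonConvert.DeserializeObject<List<string>>(usedJson);
if (usedTokens.Contains(token))
    return; // duplicate post from this session, ignore it

usedTokens.Add(token);
HttpContext.Session.SetString(UsedTokensKey, JsonConvert.SerializeObject(usedTokens));
// DoWork that should be very fast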
Try not to use asynchronous processing. Remove Task, await and async.
Related
I am reading that the recommendation is to use OnParametersSetAsync() for any async calls (such as database calls) inside a Blazor Server component. The problem, though, is that OnParametersSetAsync() will fire twice in a default Blazor Server application, and I obviously don't want to access the database twice.
To avoid this problem, some people recommend replacing the "ServerPrerendered" render-mode in _Host.cshtml with "Server", which stops OnParametersSetAsync() from firing twice. However, I don't like this solution, because the application will take longer to appear for the user if we remove the initial static HTML phase.
So, my solution so far has been to put my database access calls inside OnAfterRenderAsync() and to call StateHasChanged() once I am done. It looks something like this:
public partial class ExampleComponent
{
    [Inject]
    public IUserDataAccess UserAccess { get; set; }

    [Parameter]
    public string UserEmail { get; set; }

    public IUser User { get; set; }

    private bool _isLoading = true;

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender == false)
            return;

        User = await UserAccess.GetOneAsync(UserEmail);
        _isLoading = false;
        StateHasChanged();
    }
}
The database call gets made only once, because of the "if (firstRender == false)" condition.
This approach has been working well for me so far, but I suspect there is something wrong with this way of proceeding, because the examples given online of a valid use of OnAfterRenderAsync() usually only mention JavaScript calls; I don't know why that is, though. Is there anything wrong with the example code I am giving above? And if this approach is not recommended for some reason, how can we avoid the double call to the database when using OnParametersSetAsync() (excluding the render-mode change discussed above)?
Thanks.
Everyone gets hung up on the initial double load and makes huge design compromises that they don't need to. Sorry, and please don't take this the wrong way, but this is probably a good example.
If you're using Blazor Server then yes, the SPA "start page" gets rendered twice: once on the static first load and once when the SPA starts up in the page.
But that's it. Everything after that only gets rendered once. If you are so hung up on that initial load, don't do "too much" in the landing page; keep it light.
To restate: this only happens once. Every other routed component load in your SPA only happens once, unless you force a reload or hit F5.
On putting lots of stuff in OnAfterRender{Async}: in general, DON'T. It's designed to handle after-render activity such as JSInterop calls. Doing things there that then need a call to StateHasChanged is a bit self-defeating!
All your once-only DB stuff should be in OnInitialized{Async} - normally OnInitializedAsync, as it will be async code. You have little control over when OnParametersSet{Async} gets run: code in there gets called whenever the component is re-rendered.
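For comparison, here is roughly what that looks like for the component in the question, reusing its IUserDataAccess and UserEmail; this is a sketch of the suggestion, not the original poster's code:
public partial class ExampleComponent
{
    [Inject]
    public IUserDataAccess UserAccess { get; set; }

    [Parameter]
    public string UserEmail { get; set; }

    public IUser User { get; set; }

    private bool _isLoading = true;

    protected override async Task OnInitializedAsync()
    {
        // Runs once per component instance. With "ServerPrerendered" the component is
        // instantiated twice (prerender + interactive circuit), so this still executes
        // twice across those two instances; keep the call cheap and idempotent, or
        // cache the result, if that matters for your database.
        User = await UserAccess.GetOneAsync(UserEmail);
        _isLoading = false;
    }
}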
I have a controller where the required functionality is to do two things in one call: first we take the input, make a call to an external application, respond to the caller with "OK, we are working on it" and release them; then, when the external application responds, we take the response and save it to the db. I am using a Task.Delay as follows.
Part 1
[HttpPost]
public async Task<IActionResult> ProcessTransaction(Transactions transactions)
{
    // do some processing
    TransactionResults results = new TransactionResults();
    Notify(transactions, results);
    return Ok("We are working on it, you will get a notification");
}
The delayed task
private void Notify(Transactions transactions, TransactionResults results)
{
    Task.Delay(10000).ContinueWith(t => SendNotification(transactions, results));
}
In SendNotification I am attempting to save the results:
private void SendNotification(Transactions transactions, TransactionResults results)
{
    // some processing
    _context.Add(results); // this fails: the injected context has already been disposed
    _context.SaveChanges();
}
Is there a better way to do this, or a way to re-instantiate the context?
I managed to work around the problem I was facing: I created an endpoint that I call once the notification results come back, so the data is saved in that callback rather than in the original request. Once the controller has responded with an Ok, the controller is disposed and it's difficult to re-instantiate it. The callback workaround works for now; I will update if I find another way to do it.
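For reference, the usual way to get a usable context inside fire-and-forget work like Notify is to resolve a fresh one from a new DI scope instead of capturing the controller's _context. A rough sketch, assuming an IServiceScopeFactory is injected into the controller and the context type is a hypothetical AppDbContext registered in DI (GetRequiredService comes from Microsoft.Extensions.DependencyInjection):
private readonly IServiceScopeFactory _scopeFactory;

private void Notify(Transactions transactions, TransactionResults results)
{
    Task.Delay(10000).ContinueWith(t =>
    {
        // This scope is owned by the background work, so its DbContext is not
        // tied to the (already completed and disposed) request scope.
        using (var scope = _scopeFactory.CreateScope())
        {
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            db.Add(results);
            db.SaveChanges();
        }
    });
}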
My problem:
I have a simple post request where I just change some Voodoo items in the db on edit:
[HttpPost]
[ValidateAntiForgeryToken]
[SessionStateActionFilter]
public ActionResult Edit(MonkeyJar voodooMonkey)
{
    if (!this.service.EditMonkey(voodooMonkey))
        return RedirectToAction("Edit", voodooMonkey);

    return RedirectToAction("Index");
}
Let's say that EditMonkey takes 1.5 seconds to respond. While those 1.5 seconds are not over, the user can spam post requests to the same Edit method, so only the latest edit will be saved. I want to prevent that.
I have read a lot about this problem. I can of course just disable the submit button on submit via jQuery, but isn't that a bit of a hacky way of solving the problem? Aren't there any other solutions that, without disabling the button, simply skip post requests number 2...x and only take the first one into account?
The double-click issue is typically solved in the UI, yes. If you are creating and have control over the UI, then it's definitely not 'hacky' to prevent double-clicks there using JavaScript.
To me it sounds like you want to prevent a situation where someone has somehow gone around your UI code to perform additional click operations, whether that's by disabling/editing the JavaScript, poor browser JavaScript support, or something else.
You could do something with session state. If you only permit one edit at a time, you could do something like this (pseudocode):
[HttpPost]
[ValidateAntiForgeryToken]
[SessionStateActionFilter]
public ActionResult Edit(MonkeyJar voodooMonkey)
{
    // Prevent double-submit
    if (Session.IsEditActive)
    {
        // TODO: determine if you want to show an error, or just ignore the request
    }

    Session.IsEditActive = true;
    try
    {
        if (!this.service.EditMonkey(voodooMonkey))
        {
            // I'm thinking you need this line here in case the Redirect does the Thread.Abort()
            // before the finally gets called. (Is that possible? Too lazy to test. :)
            // Probably a race condition - I'd keep it for a guarantee.)
            Session.IsEditActive = false;
            return RedirectToAction("Edit", voodooMonkey);
        }
    }
    finally
    {
        // Ensure that we re-enable edits, even on errors.
        Session.IsEditActive = false;
    }

    return RedirectToAction("Index");
}
HTTP is stateless. That said, a double post with the same data is completely valid (as far as the transport is concerned).
So, this leaves you with 2 options:
Client side: disable the button and throttle the request. This is not hacky at all; it is by far the most common solution.
Server side: have some kind of monitor lock per session, so that the same session cannot re-enter a block that is already executing (a sketch follows below).
Comparing both, I would prefer the client-side option. It is not 100% secure, but 99% can be accepted.
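A rough sketch of the server-side option for the Edit action above, using one lock object per session; the SessionLocks dictionary and the 409 response are made up for illustration, and entries are never evicted in this sketch:
private static readonly ConcurrentDictionary<string, object> SessionLocks =
    new ConcurrentDictionary<string, object>();

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Edit(MonkeyJar voodooMonkey)
{
    var gate = SessionLocks.GetOrAdd(Session.SessionID, _ => new object());
    if (!Monitor.TryEnter(gate))
    {
        // A request from this session is already being processed; reject this one.
        return new HttpStatusCodeResult(409, "Edit already in progress");
    }
    try
    {
        if (!this.service.EditMonkey(voodooMonkey))
            return RedirectToAction("Edit", voodooMonkey);
        return RedirectToAction("Index");
    }
    finally
    {
        Monitor.Exit(gate);
    }
}
Note that with the default read-write session state, ASP.NET already serializes concurrent requests from the same session, so in practice a per-session lock mostly buys you a fast rejection instead of queued requests.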
This link http://msdn.microsoft.com/en-us/library/aa772153(VS.85).aspx says:
You can register up to five notification requests on a single LDAP connection. You must have a dedicated thread that waits for the notifications and processes them quickly. When you call the ldap_search_ext function to register a notification request, the function returns a message identifier that identifies that request. You then use the ldap_result function to wait for change notifications. When a change occurs, the server sends you an LDAP message that contains the message identifier for the notification request that generated the notification. This causes the ldap_result function to return with search results that identify the object that changed.
I cannot find a similar behavior looking through the .NET documentation. If anyone knows how to do this in C# I'd be very grateful to know. I'm looking to see when attributes change on all the users in the system so I can perform custom actions depending on what changed.
I've looked through Stack Overflow and other sources with no luck.
Thanks.
I'm not sure it does what you need, but have a look at http://dunnry.com/blog/ImplementingChangeNotificationsInNET.aspx
Edit: Added text and code from the article:
There are three ways of figuring out things that have changed in Active Directory (or ADAM). These have been documented for some time over at MSDN in the aptly titled "Overview of Change Tracking Techniques". In summary:

1. Polling for changes using uSNChanged. This technique checks the 'highestCommittedUSN' value to start and then performs searches for 'uSNChanged' values that are higher subsequently. The 'uSNChanged' attribute is not replicated between domain controllers, so you must go back to the same domain controller each time for consistency. Essentially, you perform a search looking for the highest 'uSNChanged' value + 1 and then read in the results, tracking them in any way you wish.
Benefits:
- This is the most compatible way. All languages and all versions of .NET support it, since it is a simple search.
Disadvantages:
- There is a lot here for the developer to take care of. You get the entire object back, and you must determine what has changed on the object (and whether you care about that change).
- Dealing with deleted objects is a pain.
- This is a polling technique, so it is only as real-time as how often you query. This can be a good thing depending on the application. Note that intermediate values are not tracked here either.

2. Polling for changes using the DirSync control. This technique uses the ADS_SEARCHPREF_DIRSYNC option in ADSI and the LDAP_SERVER_DIRSYNC_OID control under the covers. Simply make an initial search, store the cookie, and then later search again and send the cookie. It will return only the objects that have changed.
Benefits:
- This is an easy model to follow. Both System.DirectoryServices and System.DirectoryServices.Protocols support this option.
- Filtering can reduce what you need to bother with. As an example, if my initial search is for all users "(objectClass=user)", I can subsequently filter on polling with "(sn=dunn)" and only get back the combination of both filters, instead of having to deal with everything from the initial filter.
- The Windows 2003+ option removes the administrative limitation for using this option (object security).
- The Windows 2003+ option will also give you the ability to return only the incremental values that have changed in large multi-valued attributes. This is a really nice feature.
- Deals well with deleted objects.
Disadvantages:
- This is a .NET 2.0 or later only option. Users of .NET 1.1 will need to use uSNChanged tracking. Scripting languages cannot use this method.
- You can only scope the search to a partition. If you want to track only a particular OU or object, you must sort out those results yourself later.
- Using this with non-Windows 2003 mode domains comes with the restriction that you must have "replication get changes" permissions (by default, admin only) to use it.
- This is a polling technique. It does not track intermediate values either, so if an object you want to track changes multiple times between the searches, you will only get the last change. This can be an advantage depending on the application.

3. Change notifications in Active Directory. This technique registers a search on a separate thread that will receive notifications when any object changes that matches the filter. You can register up to 5 notifications per async connection.
Benefits:
- Instant notification. The other techniques require polling.
- Because this is a notification, you will get all changes, even the intermediate ones that would have been lost in the other two techniques.
Disadvantages:
- Relatively resource intensive. You don't want to do a whole ton of these, as it could cause scalability issues with your domain controller.
- This only tells you that the object has changed, but it does not tell you what the change was. You need to figure out whether the attribute you care about has changed or not. That being said, it is pretty easy to tell if the object has been deleted (easier than uSNChanged polling at least).
- You can only do this in unmanaged code or with System.DirectoryServices.Protocols.

For the most part, I have found that DirSync has fit the bill for me in virtually every situation. I never bothered to try any of the other techniques. However, a reader asked if there was a way to do the change notifications in .NET. I figured it was possible using SDS.P, but had never tried it. Turns out, it is possible and actually not too hard to do.
My first thought on writing this was to use the sample code found on MSDN (and referenced from option #3) and simply convert it to System.DirectoryServices.Protocols. This turned out to be a dead end. The way you do it in SDS.P and the way the sample code works are different enough that it is of no help. Here is the solution I came up with:
public class ChangeNotifier : IDisposable
{
    LdapConnection _connection;
    HashSet<IAsyncResult> _results = new HashSet<IAsyncResult>();

    public ChangeNotifier(LdapConnection connection)
    {
        _connection = connection;
        _connection.AutoBind = true;
    }

    public void Register(string dn, SearchScope scope)
    {
        SearchRequest request = new SearchRequest(
            dn, //root the search here
            "(objectClass=*)", //very inclusive
            scope, //any scope works
            null //we are interested in all attributes
            );

        //register our search
        request.Controls.Add(new DirectoryNotificationControl());

        //we will send this async and register our callback
        //note how we would like to have partial results
        IAsyncResult result = _connection.BeginSendRequest(
            request,
            TimeSpan.FromDays(1), //set timeout to a day...
            PartialResultProcessing.ReturnPartialResultsAndNotifyCallback,
            Notify,
            request);

        //store the hash for disposal later
        _results.Add(result);
    }

    private void Notify(IAsyncResult result)
    {
        //since our search is long running, we don't want to use EndSendRequest
        PartialResultsCollection prc = _connection.GetPartialResults(result);

        foreach (SearchResultEntry entry in prc)
        {
            OnObjectChanged(new ObjectChangedEventArgs(entry));
        }
    }

    private void OnObjectChanged(ObjectChangedEventArgs args)
    {
        if (ObjectChanged != null)
        {
            ObjectChanged(this, args);
        }
    }

    public event EventHandler<ObjectChangedEventArgs> ObjectChanged;

    #region IDisposable Members

    public void Dispose()
    {
        foreach (var result in _results)
        {
            //end each async search
            _connection.Abort(result);
        }
    }

    #endregion
}

public class ObjectChangedEventArgs : EventArgs
{
    public ObjectChangedEventArgs(SearchResultEntry entry)
    {
        Result = entry;
    }

    public SearchResultEntry Result { get; set; }
}
It is a relatively simple class that you can use to register searches. The trick is using the GetPartialResults method in the callback method to get only the change that has just occurred. I have also included the very simplified EventArgs class I am using to pass results back. Note, I am not doing anything about threading here and I don't have any error handling (this is just a sample). You can consume this class like so:
static void Main(string[] args)
{
    using (LdapConnection connect = CreateConnection("localhost"))
    {
        using (ChangeNotifier notifier = new ChangeNotifier(connect))
        {
            //register some objects for notifications (limit 5)
            notifier.Register("dc=dunnry,dc=net", SearchScope.OneLevel);
            notifier.Register("cn=testuser1,ou=users,dc=dunnry,dc=net", SearchScope.Base);

            notifier.ObjectChanged += new EventHandler<ObjectChangedEventArgs>(notifier_ObjectChanged);

            Console.WriteLine("Waiting for changes...");
            Console.WriteLine();
            Console.ReadLine();
        }
    }
}

static void notifier_ObjectChanged(object sender, ObjectChangedEventArgs e)
{
    Console.WriteLine(e.Result.DistinguishedName);

    foreach (string attrib in e.Result.Attributes.AttributeNames)
    {
        foreach (var item in e.Result.Attributes[attrib].GetValues(typeof(string)))
        {
            Console.WriteLine("\t{0}: {1}", attrib, item);
        }
    }

    Console.WriteLine();
    Console.WriteLine("====================");
    Console.WriteLine();
}
Simplest explanation I can produce:
In my .NET 1.1 web app I create a file on disk in the Render method, and add an item to the Cache set to expire within, say, a minute. I also have a callback method, to be called when the cache item expires, which deletes the file created by Render. In the Page_Init method I try to access the file which the Render method wrote to disk. Both of these methods have a lock statement, locking a private static object.
Intention:
To create a page which essentially writes a copy of itself to disk, which gets deleted before it gets too old (or out of date, content-wise), while serving the file from disk if it exists.
Problem observed:
This is really two issues, I think. Requesting the page does what I expect: it renders the page to disk and serves it immediately, while adding the expiry item to the cache. For testing, the expiry time is 1 minute.
I then expect that the callback method will get called after 60 seconds and delete the file. It doesn't.
After another minute (for the sake of argument) I refresh the page in the browser. Then I can see the callback method get called and place a lock on the lock object. Page_Init also gets called and places a lock on the same object. However, both methods appear to enter their lock blocks and proceed with execution.
This results in: Page_Init checks the file is there, the callback method deletes the file, and the page then tries to serve the now-deleted file.
Horribly simplified code extract:
public class MyPage : Page
{
    private static Object lockObject = new Object();

    protected void Page_Init(...)
    {
        if (File.Exists(...))
        {
            lock (lockObject)
            {
                if (File.Exists(...))
                {
                    Server.Transfer(...);
                }
            }
        }
    }

    protected override void Render(...)
    {
        if (!File.Exists(...))
        {
            // write file out and serve initial copy from memory
            Cache.Add(..., new CacheItemRemovedCallback(DoCacheItemRemovedCallback));
        }
    }

    private static void DoCacheItemRemovedCallback(...)
    {
        lock (lockObject)
        {
            if (File.Exists(...))
                File.Delete(...);
        }
    }
}
Can anyone explain this, please? I understand that the callback is, essentially, lazy and therefore only fires once I make a request, but surely the threading in .NET 1.1 is good enough not to let two lock() blocks on the same object be entered simultaneously?
Thanks,
Matt.
Not sure why your solution doesn't work, but that might be a good thing, considering the consequences...
I would suggest a completely different route. Separate the process of managing the file from the process of requesting the file.
Requests should just go to the cache, get the full path of the file, and send it to the client.
Another process (not bound to requests) is responsible for creating and updating the file. It simply creates the file on first use/access and stores the full path in the cache (set to never expire). At regular/appropriate intervals, it re-creates the file with a different, random name, sets this new path in the cache, and then deletes the old file (being careful that it isn't locked by another request).
You can spawn this file-managing process on application startup using a thread or the ThreadPool. Tying your file management to requests will always cause you problems, because requests run concurrently, requiring thread synchronization, which is always best to avoid.
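A rough sketch of that separation, assuming classic ASP.NET and System.Web caching; the class, method names and cache key are made up for illustration:
public class CachedFileManager
{
    private const string CachePathKey = "RenderedFilePath"; // made-up cache key

    // Call once, e.g. from Application_Start.
    public static void Start()
    {
        Thread worker = new Thread(new ThreadStart(Run));
        worker.IsBackground = true;
        worker.Start();
    }

    private static void Run()
    {
        while (true)
        {
            // Write the rendered content to a new, randomly named file.
            string newPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString("N") + ".html");
            using (StreamWriter writer = new StreamWriter(newPath))
            {
                writer.Write(BuildPageContent());
            }

            // Publish the new path, then remove the old file.
            string oldPath = (string)HttpRuntime.Cache[CachePathKey];
            HttpRuntime.Cache.Insert(CachePathKey, newPath);
            if (oldPath != null && File.Exists(oldPath))
            {
                File.Delete(oldPath); // real code should retry if the file is still being read
            }

            Thread.Sleep(60000); // regenerate roughly once a minute
        }
    }

    private static string BuildPageContent()
    {
        // Stand-in for whatever the Render override currently produces.
        return "<html>...</html>";
    }
}
Page_Init (or an HttpHandler) would then just read the current path from HttpRuntime.Cache[CachePathKey] and serve that file.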
The first thing I would do is open the Threads window and observe which thread Page_Init is running on and which thread the callback is running on. The only way I know of that two methods can hold a lock on the same object at the same time is if they are running on the same thread.
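As a quick check without the debugger, you could log a thread identifier at the top of both Page_Init and DoCacheItemRemovedCallback; a tiny sketch (where the output goes depends on your trace configuration):
// At the start of Page_Init and of DoCacheItemRemovedCallback:
System.Diagnostics.Trace.WriteLine("Entered on thread " + AppDomain.GetCurrentThreadId());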
Edit
The real issue here is how Server.Transfer actually works. Server.Transfer simply configures some ASP.NET internal details indicating that the request is about to be transferred to a different URL on the server. It then calls Response.End, which in turn throws a ThreadAbortException. No actual data has been read or sent to the client at that point.
Now, when the exception occurs, code execution leaves the block of code protected by the lock. At this point the callback function can acquire the lock and delete the file.
Somewhere deep inside ASP.NET the ThreadAbortException is then handled and the request for the new URL is processed. At that point it finds the file has gone missing.