Server function to be run once for all users - C#

Good evening,
In my SignalR application I have a JavaScript timer that runs for all users "simultaneously". At the end of this timer a server function is called, and this is where the problem starts.
Because the function is called at the end of the timer, every connected user calls it at the same time, which is unnecessary: it returns the same output for all connected users. Since it is a logically complex function, having the server run it once per connected user adds up to a great waste of resources.
How can I make it run only once (perhaps the first time it is called, until the next timer expires)?
Thank you in advance

You could make use of GlobalHost.ConnectionManager.GetHubContext. This will allow you to get any hub context and then trigger Clients.All.YourFunction on that context. That will send a message to all connected clients subscribed to that hub.
You will need a background process that runs at the time your JavaScript timer fires (by the way, relying on all your clients to call a JavaScript function simultaneously is really not a good idea; different client locations and different machine performance mean they're not likely to be simultaneous).
The following assumes that you're just running this on a single server. If you're going to deploy this to a web farm, then you'll need to use a database value to ensure you don't repeat the same work, or set up a particular server instance as responsible for making the calls (otherwise you'll end up with one call per server).
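For the web-farm case, here is a minimal sketch of coordinating through a database row so only one server does the work per interval (the ScheduledWork table, its columns, and the 10-second interval are assumptions, and it uses System.Data.SqlClient):
// Atomically claim this interval's work; only the server whose UPDATE
// affects a row proceeds, every other server sees 0 rows and skips.
private bool TryClaimWork(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"UPDATE ScheduledWork
          SET LastRunUtc = GETUTCDATE()
          WHERE WorkName = 'BroadcastResult'
            AND LastRunUtc < DATEADD(SECOND, -10, GETUTCDATE())", conn))
    {
        conn.Open();
        return cmd.ExecuteNonQuery() == 1;
    }
}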
Create a process that runs in the background (I'm sticking with a simple thread here; I actually use HangFire for this, but a thread will suffice for the example), e.g. in App_Start:
Thread thread = new Thread(new ThreadStart(YourFunction));
thread.Start();
Then create YourFunction which will be responsible for your client calls:
private bool Cancel = false;

private void YourFunction()
{
    do
    {
        string foo = "Foo";
        IHubContext context = GlobalHost.ConnectionManager.GetHubContext<YourHub>();
        context.Clients.All.SendYourMessage(foo);
        Thread.Sleep(10000);
    } while (!Cancel);
}
And then on the client, just handle the message from the hub:
yourHub.client.sendYourMessage = function (message) {
    // message == "Foo"
};

Related

.Net Session (StateServer mode) not synchronizing if manipulated after request end

I'm getting a bit frustrated with this problem:
I have a web site that manages files for download. Because these files are very big, and must be organized into folders and then compressed, I built an Ajax structure that does this job in the background; when the files are ready to be downloaded, this job changes the status of an object in the user session (bool isReady = true, simple as that).
To achieve this, when the user clicks "download", a jQuery POST is sent to an API, and this API starts the "organizer" job and finishes its own code (the main, request-scoped thread), leaving a background thread doing the magic (it's so beautiful, haha).
This "organizer" job is a background thread that receives the HttpSessionState (HttpContext.Current.Session) as a parameter. It organizes and zips the files, creates a download link and, at the end, changes an object in the session using the HttpSessionState it received.
This works great when I'm using the session's "InProc" mode (I was very happy to deploy this piece of art to production after the tests).
But my nightmares started when I deployed the project to the production environment, because we use "StateServer" mode there.
In that environment, the changes are not applied.
What I have noticed so far is that with StateServer, every change I make in the background thread is not "committed" to the session when it occurs AFTER the user request ends (the request that started the thread). That fits how the out-of-process modes work: the session is deserialized when a request starts and serialized back to the state server when that request ends, so anything written afterwards is never sent back.
If I write a thread.Join() to wait for the thread to finish, the changes made inside the thread are applied.
I'm thinking about using the DB to store these values, but I would lose some performance :(
[HttpPost]
[Route("startDownloadNow")]
public void StartDownloadNow(DownloadStatusProxy input)
{
    //some pieces of code...
    ...

    //add the download request to the user session
    Downloads.Add(data);

    //pass the session as a parameter to the thread,
    //because the thread itself doesn't know the current HttpContext session
    HttpSessionState session = HttpContext.Current.Session;
    Thread thread = new Thread(() => ProccessDownload(data, session));
    thread.Start();

    //here, if I put a thread.Join(), the changes made inside the thread are
    //applied correctly, but I can't do that, otherwise it ceases to be Ajax
}
private void ProccessDownload(DownloadStatus currentDownload, HttpSessionState session)
{
    List<DownloadStatus> listDownload = ((List<DownloadStatus>)session["Downloads"]);
    try
    {
        //just make the magic...
        string downloadUrl = CartClient.CartDownloadNow(currentDownload.idRegion, currentDownload.idUser, currentDownload.idLanguage, currentDownload.listCartAsset.ToArray(), currentDownload.listCartAssetThumb.ToArray());

        listDownload.Find(d => d.hashId == currentDownload.hashId).downloadUrl = downloadUrl;
        listDownload.Find(d => d.hashId == currentDownload.hashId).isReady = true;

        //at this point, if I inspect the current session, the values are applied,
        //but on the next user request they are back in the previous state... sad...
    }
    catch (Exception e)
    {
        listDownload.Find(d => d.hashId == currentDownload.hashId).msgError = Utils.GetAllErrors(e);
        LogService.Log(e);
    }

    //this was a desperate attempt: I retrieve the object, manipulate it and put it
    //back into the session, but it doesn't work either...
    session["Downloads"] = listDownload;
}

Monitor.TryEnter and Threading.Timer race condition

I have a Windows service that checks for work every 5 seconds. It uses a System.Threading.Timer for handling the check and processing, and Monitor.TryEnter to make sure only one thread is checking for work.
Just assume it has to be this way: the following code is part of 8 other workers that are created by the service, and each worker has its own specific type of work it needs to check for.
readonly object _workCheckLocker = new object();

public Timer PollingTimer { get; private set; }

void InitializeTimer()
{
    if (PollingTimer == null)
        PollingTimer = new Timer(PollingTimerCallback, null, 0, 5000);
    else
        PollingTimer.Change(0, 5000);
    Details.TimerIsRunning = true;
}

void PollingTimerCallback(object state)
{
    if (!Details.StillGettingWork)
    {
        if (Monitor.TryEnter(_workCheckLocker, 500))
        {
            try
            {
                CheckForWork();
            }
            catch (Exception ex)
            {
                Log.Error(EnvironmentName + " -- CheckForWork failed. " + ex);
            }
            finally
            {
                Monitor.Exit(_workCheckLocker);
                Details.StillGettingWork = false;
            }
        }
    }
    else
    {
        Log.Standard("Continuing to get work.");
    }
}

void CheckForWork()
{
    Details.StillGettingWork = true;
    //Hit web server to grab work.
    //Log Processing
    //Process Work
}
Now here's the problem:
The code above is allowing 2 Timer threads to get into the CheckForWork() method. I honestly don't understand how this is possible, but I have experienced this with multiple clients where this software is running.
The logs I got today when I pushed some work showed that it checked for work twice and I had 2 threads independently trying to process which kept causing the work to fail.
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
Processing 0-3978DF84-EB3E-47F4-8E78-E41E3BD0880E.xml for Update Request. - at 09/14 10:15:501255801
Unloaded AppDomain - at 09/14 10:15:501255801
Stopping environments for Update request - at 09/14 10:15:501255801
AppDomain is already unloaded - at 09/14 10:15:501255801
=== Starting Update Process === - at 09/14 10:15:513756009
Downloading File X - at 09/14 10:15:525631183
Downloading File Y - at 09/14 10:15:525631183
=== Starting Update Process === - at 09/14 10:15:525787359
Downloading File X - at 09/14 10:15:525787359
Downloading File Y - at 09/14 10:15:525787359
The logs are written asynchronously and are queued, so don't dig too deep into the fact that the times match exactly; I just wanted to point out that I had 2 threads hit a section of code that I believe should never have been allowed. (The log and times are real, though, just sanitized messages.)
Eventually what happens is that the 2 threads start downloading a big enough file that one ends up getting access denied on the file, which causes the whole update to fail.
How can the above code actually allow this? I experienced this problem last year when I had a lock instead of Monitor, and I assumed the Timer eventually got offset enough (because the lock was blocking) that timer threads started stacking, i.e. one blocked for 5 seconds and got through right as the Timer triggered another callback, and they both somehow made it in. That's why I switched to Monitor.TryEnter, so I wouldn't just keep stacking timer threads.
Any clue? In all cases where I have tried to solve this issue before, System.Threading.Timer has been the one constant, and I think it's the root cause, but I don't understand why.
I can see in the log you've provided that you got an AppDomain restart. Is that correct? If so, are you sure you have one and only one object for your service across the AppDomain restart? I suspect that during the restart not all the threads are stopped at exactly the same time, and some of them can still poll the work queue, so two different threads in different AppDomains got the same Id for the work.
You could probably fix this by marking your _workCheckLocker with the static keyword and initializing it in a static constructor (with inline initialization you can run into more complicated problems), like this:
static object _workCheckLocker;
static YourWorker() { _workCheckLocker = new object(); } // hypothetical class name
But I'm not sure this is enough in your case: during an AppDomain restart the static class will be reloaded too, so as I understand it, this is not an option for you.
Maybe you could introduce a static dictionary instead of an object for your workers, so you can check the Id of the documents being processed.
Another approach is to handle the Stopping event of your service, which will probably be raised during the AppDomain restart; there you would introduce a CancellationToken and use it to stop all the work under such circumstances.
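A rough sketch of that cancellation idea, assuming the worker lives in a ServiceBase-derived service (the names are illustrative):
readonly CancellationTokenSource _cts = new CancellationTokenSource();

protected override void OnStop()
{
    // Signal every worker to stop polling before the AppDomain goes away.
    _cts.Cancel();
}

void PollingTimerCallback(object state)
{
    if (_cts.Token.IsCancellationRequested)
        return; // shutting down: don't pick up new work

    // ...existing Monitor.TryEnter / CheckForWork logic...
}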
Also, as #fernando.reyes said, you could introduce a heavyweight lock structure called a mutex for synchronization, but this will degrade your performance.
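For what it's worth, a minimal sketch of the mutex idea applied to the timer callback (the mutex name is arbitrary):
// A named mutex is an OS-level object, so it is shared across AppDomains
// (and processes), unlike a static lock object, which is per-AppDomain.
using (var mutex = new Mutex(false, @"Global\MyServiceWorkCheck"))
{
    if (mutex.WaitOne(TimeSpan.FromMilliseconds(500)))
    {
        try
        {
            CheckForWork();
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}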
TL;DR
The production stored procedure had not been updated in years. Workers were getting work they should never have gotten, so multiple workers were processing update requests.
I was finally able to find the time to properly set myself up locally to act as a production client through Visual Studio. Although I wasn't able to reproduce it the way I had experienced it, I did accidentally stumble upon the issue.
Those who assumed that multiple workers were picking up the work were indeed correct, and that's something that should never have been able to happen, as each worker is unique in the work it does and requests.
It turns out that in our production environment, the stored procedure that retrieves work based on the work type had not been updated in years (yes, years!) of deploys. Anything that checked for work automatically got update requests, which meant that when the Update worker and worker Foo checked at the same time, they both ended up with the same work.
Thankfully, the fix is database-side and not a client update.

C# Web-Api - processing inbound requests 'in the order that they are received'

I've written a web-api project to act as a bridge/gateway between two sub-systems.
I need to ensure that inbound requests are processed 'in the order that they are received'. I'm not overly familiar with how Web API works, and my concern is this:
An inbound request comes in, and an operation is kicked off that lasts 30 seconds.
Within 5 seconds of the first request being processed, a second request is received and is immediately processed as well.
The reason for the concern is that a user may submit an update to a record which will propagate to the other sub-system. However, that user may for whatever reason submit a second request. I need to ensure that the first request is completed before the subsequent request is actioned. So when hundreds of requests are flooding in, it's just a case of processing on a first come, first served basis.
Does anyone know if Web API sort of works like this already, or what I'd need to do in order to get this behaviour?
You can do this by using a global static flag.
Declare one variable like this:
static bool bInProcess = false;
Now, when you receive a request, set this variable to true and do your processing. Once you're done, set the variable back to false. During your processing, if another request comes in, check this variable: while it is true, put the current thread to sleep for 1 second and check again. Or you can return an error saying that another process is running.
//Sample loop to queue a 2nd request
while (bInProcess)
{
    Thread.Sleep(1000);
}
You have to be very careful with this code. In the "WebApiConfig" class, add a message handler class and put this code in that handler, registering it with the "config.MessageHandlers.Add" method. I did this a long time ago, so I know it works.
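For illustration, here is a minimal sketch of that message-handler approach, swapping the sleeping flag for a SemaphoreSlim so queued requests wait asynchronously and are served one at a time (the handler name is hypothetical):
public class SerializeRequestsHandler : DelegatingHandler
{
    // One permit: at most one request is processed at a time;
    // the rest wait asynchronously instead of sleeping a thread.
    private static readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        await _gate.WaitAsync(cancellationToken);
        try
        {
            return await base.SendAsync(request, cancellationToken);
        }
        finally
        {
            _gate.Release();
        }
    }
}

// Registered in WebApiConfig.Register:
// config.MessageHandlers.Add(new SerializeRequestsHandler());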

Connecting to an Access database more than once at the same time?

Long version:
I want to make a connection to my database, and that connection is made asynchronously, because otherwise it delays the Form.
Now this is working just fine, but I'm calling the OleDb code to do its job in a scrollbar_valueChanged event.
This is where the problem is caused: when the user scrolls the scrollbar very fast, the OleDb code in the background is still doing its work.
I thought to fix this by just calling 'classname.db.cmd.connection.Close();', and this closes the connection from the background OleDb code, but it doesn't prevent the code from trying to connect when a connection is already being made.
Short version:
I'm running my 'slow' database-reading code asynchronously, but it's possible to trigger that same database connection again very quickly.
And because of how async works, of course, the code runs again alongside the first run and tries to connect again while the other database connection is still open.
The actual question:
So, is there a way to use multiple connections at the same time, reading from the same Access database with OleDb?
First, I am guessing the issue here is that the scroll bar event fires frequently and is using up all of the connections available in the connection pool. There are lots of ways around this issue. The first is to use "Monitor.Enter" on a shared variable around your use of the connection. The problem with this is that it will freeze your UI until the database I/O completes (which is relatively slow); in other words, it isn't going to be a satisfactory solution.
Maybe a better way to approach this is as follows (pseudocode):
ScrollChange event fires
Call an "invalidate screen" or "run database I/O" routine
That routine will run only 1 at a time and will load the pending data set(s), whatever those might be.
That DatabaseIO routine could look something like this (pseudocode again):
// shared flags; guarded by the monitor below
private bool ioActive = false;
private bool pendingEvents = false;

public void ScrollBarChange(EventArgs e) {
    // to call the routine:
    Thread myThread = new Thread(new ThreadStart(DatabaseIO));
    myThread.Start();
    // any other code you need to run immediately
}

public void DatabaseIO() {
    try {
        Monitor.Enter(this);
        if (ioActive) { pendingEvents = true; return; }
        ioActive = true;
    } finally {
        Monitor.Exit(this);
    }

    // run the database I/O normally here...

    // mark the routine as finished so a queued re-run can get through
    lock (this) { ioActive = false; }

    // check pending events and call "DatabaseIO" again to make sure
    // everything is processed
    if (pendingEvents) {
        pendingEvents = false;
        DatabaseIO();
    }
}
Remember that since this will run in a thread, you won't be able to access UI controls, which means you need to save those values into variables before you start the thread and make sure they don't change during the life of the thread. Otherwise, this is a generically good async pattern for responding to rapidly fired screen events. Hopefully it helps; best of luck!

Problems getting newly created thread to send outputs to asp:panel in ASP.NET C#

I'm creating a file processor for use in an intranet.
I described it in another question - ERR_EMPTY_RESPONSE when processing a large number of files in ASP.Net using C#
Now, as suggested in the above question's answer, I'm trying to use threads to execute the file-processing task.
But there is a problem. I need the newly created thread to write feedback to a component on the page (asp:Panel, or div, or whatever). That feedback would be the results of several database operations.
The application reads those .txt files, interprets each line, and inserts the data into the database. Each line inserted into the database must return feedback, like "registry 'regname' inserted successfully" or "I got problems inserting registry 'regname' in file 'filename', skipping to next registry".
I did a test with something very simple:
protected void DoImport()
{
    try
    {
        MainBody.Style.Add(HtmlTextWriterStyle.Cursor, "wait");
        int x = 0;
        while (x < 10000)
        {
            ReturnMessage(String.Format("Number {0}<hr />", x), ref pnlConfirms);
            x++;
        }
    }
    catch (Exception ex)
    {
        ReturnMessage(String.Format("<font style='color:red;'><b>FATAL ERROR DURING DATA IMPORT</b></font><br /><br /><font style='color:black;'><b>Message:</b></font><font style='color:orange;'> {0}</font><br />{1}", ex.Message, ex.StackTrace), ref pnlErrors);
    }
    finally
    {
        MainBody.Style.Add(HtmlTextWriterStyle.Cursor, "default");
    }
}
This function is called from Page_Load, and fills an asp:panel called "pnlConfirms" with a row of numbers, but all at once, on load.
I changed it to:
protected void DoImport()
{
    try
    {
        MainBody.Style.Add(HtmlTextWriterStyle.Cursor, "wait");
        ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork));
    }
    catch (Exception ex)
    {
        ReturnMessage(String.Format("<font style='color:red;'><b>FATAL ERROR DURING DATA IMPORT</b></font><br /><br /><font style='color:black;'><b>Message:</b></font><font style='color:orange;'> {0}</font><br />{1}", ex.Message, ex.StackTrace), ref pnlErrors);
    }
    finally
    {
        MainBody.Style.Add(HtmlTextWriterStyle.Cursor, "default");
    }
}

private void DoWork(Object stateInfo)
{
    int x = 0;
    while (x < 10000)
    {
        ReturnMessage(String.Format("Number {0}<hr />", x), ref pnlConfirms);
        x++;
    }
}
And both use this function:
public void ReturnMessage(string message, ref Panel panel, bool reset = false)
{
    if (reset)
    {
        panel.Controls.Clear();
    }
    Label msg = new Label();
    msg.Attributes.Add("width", "100%");
    msg.Text = message;
    panel.Controls.Add(msg);
}
I need ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork)); to fill those asp:Panels with feedback - like insertion errors and warnings.
My code already produces that feedback inside try...catch statements, but it's not being output to any asp:Panel from the thread pool (it works when invoked directly from the DoImport() function, as in the first example I posted).
I'm doing something very wrong, but I can't find out what (I've been researching this for almost 2 weeks). Please help!
In ASP.NET, when a browser requests a page, that page is rendered and sent to the browser as soon as its processing finishes, so the browser will show the page as it's finally rendered.
According to your code, you're trying to render a page, show a wait cursor, have the page shown in the browser, and then have the cursor change back to the default. As I explained, regardless of whether you use additional threads, the page won't be sent to the browser until it's completely rendered, so you'll never see the wait cursor on the client side.
The easiest way to get what you're trying to do is to use web services (traditional .asmx or WCF) and AJAX (jQuery or ASP.NET AJAX).
1) create a web service that does the processing
2) create a page which is sent to the browser and, using JavaScript (jQuery or ASP.NET AJAX), make a call to the web service, and show something to let the user know that the request is being processed (a wait cursor, or even better, an animated gif)
3) when the process finishes, your JavaScript will get the response from the web service, and you can update the page to let the user know the process has finished.
If you don't have experience with JavaScript, you can do most of this task using:
ScriptManager, which can be used to create a JavaScript web service proxy for your client side (other interesting article) and is required for the rest of the controls
some JavaScript (or jQuery), which can be used to update the "process running / process finished" hints on the client side, i.e. when the call to the web service ends, you can use JavaScript to update the page using the DOM, or load a new page, or the same page with a special parameter to show the result of the process
In this way you can do what you want:
1) show a page in a state that shows the process is running
2) show the same page, or another one, in a state that shows the end of the process
The trick is communicating between the browser and the server, and this can only be done using some of the available AJAX techniques.
Another typical technique is using jQuery.ajax, as explained on encosia.com
According to the OP's message, processing all the files would be so slow that it would time out the web service call. If this is the case, you can use this solution:
1) Create a web service that processes one (or a batch) of the pending files, and returns at least the number of pending files when it finishes processing the current file (or batch).
2) from the client side (JavaScript), call the web service. When it finishes, update the page to show the number of pending files and, if this number is greater than zero, call the web service again.
3) when the call to the web service returns 0 pending files, update the page to show that the work is finished, and don't call it any more.
If you process all the files at once, there will be no feedback on the client side, and there will also be a timeout. Besides, IIS can decide to stop the worker thread that is doing the work; IIS does this for several reasons.
A more reliable solution, but harder to implement, is:
1) implement a Windows Service, that does the file processing
2) implement a web service that returns the number of pending files (the Windows Service and the web app can communicate indirectly through the file system, a database table or something like that)
3) use a timer (AJAX timer, or JavaScript setInterval) on your web page to poll the server every N seconds using the web service, until the number of pending files is 0.
An even harder way to do this is hosting a WCF service in your Windows Service, instead of the indirect communication between your web app and the Windows Service. This case is much more complicated, because you need threads both to do the work and to serve the calls to the WCF service. If you can use indirect communication, it's much easier to implement. The database table is a simple and effective solution: your worker process updates a row in a table whenever it processes a file, and the web service reads the progress state from that table.
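As a rough illustration of that database-table idea (the FileQueue table, its columns, and the connection string are made up), the web service side could be as small as:
// Hypothetical .asmx method: the Windows Service keeps the FileQueue
// table up to date; the page polls this method until it returns 0.
[WebMethod]
public int GetPendingFileCount()
{
    using (var conn = new SqlConnection(_connectionString))
    using (var cmd = new SqlCommand(
        "SELECT COUNT(*) FROM FileQueue WHERE IsProcessed = 0", conn))
    {
        conn.Open();
        return (int)cmd.ExecuteScalar();
    }
}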
There are many different solutions for a not-so-simple problem.
You are starting a new thread (or, more precisely, running your code on one of the free threads in the thread pool) and not waiting for the result on the main thread. Something like Thread.Join (if you were creating threads manually) or another synchronization mechanism, such as events, needs to be used if you want to go this route.
The question you've linked to suggests using asynchronous pages, which you are not doing. You would start processing the request, kick off the task and release the thread; when the task is finished, you complete the request.
Side note: consider simply doing all the conversion on the main thread that handles the request. Unless you expect slow I/O to complete the task, moving CPU work from one thread to another may not produce significant gains. Please measure the performance of your current solution and confirm that it does not meet the performance goals you have set for your application (this does not apply if you're doing it for fun/educational purposes).
