Is it possible to send an XHR request to a .NET server while it is busy with another task?
Basically, I am working on an e-commerce application that generates invoices for purchases in batches, which may be weekly, monthly, etc. While generating the invoices the server does a lot of calculations and database reads and writes, but the user can only wait for the whole process to finish without knowing what progress the server has made. As per the project's requirements, the application may be generating thousands of invoices at a time, so I guess that will take a lot of time.
So I was wondering: is it possible to write code in ASP.NET, C# and jQuery that sends an XHR to the server while it is busy generating invoices, so the client can learn how much progress the server has made?
The process may be like:
The user selects the criteria for invoice generation on the screen and clicks the Generate Invoice button.
The server gets the request, performs an initial read operation on the database to determine the number of records or invoices to be generated, and simultaneously starts generating invoices.
The output of that Read() is sent to the client, and on the client's side a modal pop-up shows a progress bar with the number of records to be processed and the number already completed.
Since a server cannot send a response by itself without the client initiating a request, I guess the client would send an XHR every 10-20 seconds to ask how much progress the server has made on the invoice generation process.
But here comes the actual problem: the server may not respond to the same application domain with the progress made until it has completed the earlier invoice generation request, or worse, the new request may break the earlier process.
Can it be done using multiple threads? Or maybe some other .NET mechanism?
My application is in ASP.NET with C#, and answers with code examples or references will be appreciated.
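To make it concrete, this is roughly the shape I have in mind; the page-method names, the progress dictionary and the helper methods are just placeholders, and I am aware that an in-process static plus a background thread may not survive an app pool recycle:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Services;
using System.Web.UI;

public partial class Invoices : Page
{
    // Progress keyed by a job id so several users can generate invoices at once.
    // An in-process static only lives as long as the app domain; a database or
    // cache entry would be more robust.
    private static readonly ConcurrentDictionary<Guid, Tuple<int, int>> Progress =
        new ConcurrentDictionary<Guid, Tuple<int, int>>();

    [WebMethod]
    public static Guid StartGeneration(string criteria)
    {
        Guid jobId = Guid.NewGuid();
        int total = CountInvoicesFor(criteria);        // the initial Read()
        Progress[jobId] = Tuple.Create(0, total);

        // Run the heavy work on a background thread so this request can return.
        Task.Run(() =>
        {
            for (int i = 1; i <= total; i++)
            {
                GenerateSingleInvoice(criteria, i);    // placeholder for the real work
                Progress[jobId] = Tuple.Create(i, total);
            }
        });

        return jobId;                                  // the client polls with this id
    }

    [WebMethod]
    public static object GetProgress(Guid jobId)
    {
        Tuple<int, int> p;
        Progress.TryGetValue(jobId, out p);
        return p == null ? null : new { Done = p.Item1, Total = p.Item2 };
    }

    private static int CountInvoicesFor(string criteria) { /* DB read */ return 0; }
    private static void GenerateSingleInvoice(string criteria, int n) { /* DB work */ }
}

The jQuery side would call StartGeneration once and then poll GetProgress every 10-20 seconds to update the progress bar.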
Related
I have a console application which is calling a third-party API; basically it takes records from a database and pushes them to the third party via their WCF API.
For various reasons (mainly the third-party API being very slow, around 7 seconds to respond) we want to post multiple records in parallel, so we have started doing this. However, we are now seeing some strange behaviour from the third-party API where it is duplicating records.
It has been suggested to us by the developers of the API that this is because we are sending the requests over the same connection (which makes sense, as .NET will reuse connections) and they don't/can't/won't support that; they will only support one request over one connection, and then the connection must be closed.
My question is: how do I do this in .NET Core (2.2)? We are currently using an HttpClient, which I'd expect to reuse connections where possible; how can I guarantee that we use a new connection for each request?
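For what it's worth, one way I believe you can ask HttpClient not to keep the connection alive is to send a Connection: close header on each request (I haven't verified exactly how this interacts with the handler's connection pool in 2.2):

using System.Net.Http;
using System.Threading.Tasks;

public static class ApiClient
{
    private static readonly HttpClient Client = new HttpClient();

    public static async Task<string> PostRecordAsync(string url, HttpContent body)
    {
        using (var request = new HttpRequestMessage(HttpMethod.Post, url) { Content = body })
        {
            // Ask for the connection to be closed after this response rather
            // than being kept alive and reused for the next request.
            request.Headers.ConnectionClose = true;

            using (HttpResponseMessage response = await Client.SendAsync(request))
            {
                response.EnsureSuccessStatusCode();
                return await response.Content.ReadAsStringAsync();
            }
        }
    }
}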
After some digging I have worked out what the problem is, and there is no way for us to fix it.
The process is:
We Post into the API
The API creates a record in a staging table in the database with a flag showing a status of "to be processed" and then starts polling the table waiting for change.
The API then invokes an executable on the server which looks at the staging table for any records with a status of "to be processed"
The executable does its processing and then changes the status on the record to "complete"
The API which has been polling the table sees the changed status, reads the record and returns the result to the client.
All fine if you only ever post one record at a time. But as I'm executing in parallel, what is happening is:
We call the API 10 times within a few ms of each other
The API creates 10 records in the staging table all with the "to be processed" status.
The API starts polling the staging table for changes and at the same time invokes the executable TEN TIMES
All 10 instances of the executable read all 10 records and process them - each instance unaware that there are another 9 instances all doing the same thing.
All 10 instances of the executable finish processing and change the status on the staging table to "complete"
The API sees the status change and returns all of the changed records to me in the response - So each of the 10 requests I sent gets 10 records returned to it.
Needless to say, we have entered into discussions with the provider of the API. It might be EOL and pretty much out of support, but we're paying for licensing of this thing, and this is a really stupid process that they need to provide a fix or a workaround for.
So in the end it had nothing to do with reusing connections; I don't know why we were told it was.
I searched threads here and couldn't really find what I wanted. I know ASP.NET Web Forms is an old technology, but I need to work on it for now. Let's say I have a method which does some heavy processing. For example, there is a function which creates 300 PDF invoices, zips them up and downloads the archive to the user's computer.
Sample Code:
for (int i = 1; i <= 300; i++)
{
    PrintPDF(i);
}
Now let's say PrintPDF takes about 30 seconds to print one record, so it will take around 150 minutes to print all 300 PDFs. From the user's point of view, I may choose to quit partway through if I want. If the user closes the browser, then:
Does the request to print the PDFs get aborted instantly after the user closes the session?
If it doesn't, what can we do to ensure that the request is immediately aborted as soon as the user closes the browser?
HTTP is stateless. That means you can never rely on getting a notification when the user closes the browser. However, you can always implement a dead man's switch, i.e. some JavaScript that sends a ping to your server every ten seconds or so, and treat any user that hasn't sent a ping for more than twenty seconds as logged off.
As for heavy processing on the server side, that's really an unfortunate way to go. For instance, ASP.NET has a maximum time it can spend serving a request - check the executionTimeout attribute of the httpRuntime element in web.config (110 seconds by default). You can increase this value, of course, but the application pool can be recycled anyway, and if there are a lot of "heavy processing" requests you can run out of available worker threads. If the site is accessible over the internet, it is also a great target for a DDoS attack.
A better way is to create a queue (in the database or in the cloud) and a Windows service that processes this queue asynchronously. You can still implement the "force kill request" mechanism by storing a "close" flag on the queue item, which the service checks periodically, stopping its processing if the flag is set.
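A very rough sketch of that kind of worker loop; the queue item shape and the data-access helpers are invented for illustration:

using System;
using System.Threading;

// Hypothetical queue item; in practice this would be a row in a database table
// with columns such as Id, Payload, Status and CancelRequested.
public class WorkItem
{
    public int Id { get; set; }
    public string Payload { get; set; }
}

public class QueueWorker
{
    public void Run(CancellationToken serviceStopping)
    {
        while (!serviceStopping.IsCancellationRequested)
        {
            WorkItem item = Dequeue();               // e.g. SELECT ... WHERE Status = 'Pending'
            if (item == null)
            {
                Thread.Sleep(TimeSpan.FromSeconds(5));
                continue;
            }

            foreach (var step in GetStepsFor(item))
            {
                // Re-read the "close" flag periodically so a user-initiated
                // cancel stops the work at the next safe point.
                if (IsCancelRequested(item.Id)) break;
                Process(step);
            }

            MarkComplete(item.Id);
        }
    }

    // The methods below stand in for the real data access layer.
    private WorkItem Dequeue() { return null; }
    private string[] GetStepsFor(WorkItem item) { return new string[0]; }
    private bool IsCancelRequested(int id) { return false; }
    private void Process(string step) { }
    private void MarkComplete(int id) { }
}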
Another workaround is to use WebSockets (SignalR).
I have a website where I need to take a bit of data from the user, make an AJAX call to a .NET web service, and then the web service does some work for about 5-10 minutes.
I naturally don't want the user to have to sit there that whole time, so I have made it an asynchronous AJAX call to the web service, and after the call has been sent, I redirect the user to a "you are done!" page.
What I want to happen is for the web service to keep running to completion, and not abort, after it receives the information from the user.
From my testing, this is more or less what happens, but now I'm finding that this might be limited by time? I.e. if the web service runs past a certain amount of time, it will abort if the user isn't still connected.
I might be off here in this assessment, but this is what I THINK is going on from my testing.
So my question is whether this is indeed what happens with .NET web services. Does it get aborted after some time if the user isn't still on the other end? Is there any way to disable this abort?
Thanks in advance!
When you invoke a web service, it will always finish its work, even if the user leaves the page that invoked it.
Of course web services have their own configuration, and one of the settings is a timeout.
If you're creating a WCF service (SOAP service) you can set it on the contract (by changing the binding properties); if you're creating a service with Web API or MVC (REST/HTTP service) then you can either add it to the config file or set it programmatically in the controller, as follows:
HttpContext.Server.ScriptTimeout = 3600; //Number of seconds
That can be one reason for the web service to interrupt its work, but it is not related to what happens on the client side.
Have a nice day,
Alberto
Whilst I agree that the answer here is technically correct, I just wanted to post a more robust alternative approach that avoids some of the pitfalls possible with your current approach, such as:
Web Server being bounced during the long-running processing of request
Web Server App pool being recycled during processing
Web server running out of threads due to too many long-running requests and not being able to process any more requests
I would recommend you take a thoroughly asynchronous approach and use Message Queues (MSMQ for example) with a trigger on the queue that will execute the work.
The process would be:
Your page makes Ajax call to the Webservice
Webservice writes a message into the Queue and returns right away. The message contains details of what work needs to be carried out.
User continues on your site as usual, or goes home, etc.
A trigger on the Queue is watching for messages, and when a message arrives in the queue it activates a process which:
Reads the message
Performs the necessary work
Updates any back-end storage, etc, with the results of the work
This is much more robust because it totally decouples the web service from any long-running work, and it means that if the user makes a request and the web server goes down a moment later (for whatever reason), the work will still be queued up when the server comes back online.
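On the web service side, the enqueue step could look roughly like this; the queue path and message shape are placeholders, and System.Messaging is the classic MSMQ API (add a reference to that assembly):

using System.Messaging;

public class WorkRequest
{
    public int RecordId { get; set; }
    public string RequestedBy { get; set; }
}

public static class WorkQueue
{
    private const string QueuePath = @".\private$\longRunningWork"; // placeholder path

    public static void Enqueue(WorkRequest request)
    {
        using (var queue = new MessageQueue(QueuePath))
        {
            // XmlMessageFormatter is the default formatter; stated here for clarity.
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(WorkRequest) });
            queue.Send(request, "work-request");   // returns immediately
        }
    }
}

The triggered process on the other side reads the message, does the long-running work and writes the results to your back-end storage.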
You can read more about it here (MSMQ is the MS Message Queue tech; there are many others!)
Just my 2c
I am working on a project in which a user will upload a file to the server that will be parsed.
I would like the user to receive a status message when the upload is completed and then for them to be able to poll the server for updates regarding the status of the parsing.
I was thinking of using an AJAX file upload where, when the client receives an upload success message from the server, it begins polling every 2 seconds for the status. What I do not know is how to return data to the user while still having the server continue executing the parser, and how to track the status of that execution.
What is the best way to go about continuing script execution after a view is returned from a controller?
EDIT:
I suspect I may need to spawn another process, but I have no idea how to do this.
I think that in this particular case it would make sense to decouple your file processing from the web request. The ThreadPool.QueueUserWorkItem approach suggested by C.M. is one option, but you might also want to consider using a real queuing mechanism (like MSMQ or RabbitMQ) and process your uploads in a separate application. This way, your web tier is decoupled from your business processes and you can scale each piece independently if you need to.
You should take a look at SignalR (https://github.com/SignalR/SignalR); it's a library for building web apps with very easy communication between client and server.
If you want to know how to do background processing, you can use threading to spin up a thread that will keep running even after the web page has been returned to the user. There are plenty of examples of this on Stack Overflow and the web.
A simple way I've seen this done is using ThreadPool.QueueUserWorkItem along with a static list that keeps track of the status of the background threads.
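A minimal sketch of that pattern, assuming a made-up parser and an in-memory status store (which of course disappears if the app pool recycles):

using System;
using System.Collections.Concurrent;
using System.Threading;

public static class ParseJobs
{
    // Status per upload, keyed by a job id the client polls with.
    private static readonly ConcurrentDictionary<Guid, string> Status =
        new ConcurrentDictionary<Guid, string>();

    public static Guid Start(string uploadedFilePath)
    {
        Guid jobId = Guid.NewGuid();
        Status[jobId] = "Queued";

        ThreadPool.QueueUserWorkItem(_ =>
        {
            try
            {
                Status[jobId] = "Parsing";
                ParseFile(uploadedFilePath);      // placeholder for the real parser
                Status[jobId] = "Done";
            }
            catch (Exception ex)
            {
                Status[jobId] = "Failed: " + ex.Message;
            }
        });

        return jobId;  // return this from the upload action; the poll action reads Status
    }

    public static string GetStatus(Guid jobId)
    {
        string status;
        return Status.TryGetValue(jobId, out status) ? status : "Unknown";
    }

    private static void ParseFile(string path) { /* real work goes here */ }
}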
How would one use SignalR to implement notifications in an .NET 4.0 system that consists of an ASP.NET MVC 3 application (which uses forms authentication), SQL Server 2008 database and an MSMQ WCF service (hosted in WAS) to process data? The runtime environment consists of IIS 7.5 running on Windows Server 2008 R2 Standard Edition.
I have only played with the samples and do not have extensive knowledge of SignalR.
Here is some background
The web application accepts data from the user and adds it to a table. It then calls a one-way operation (with the database key) of the WCF service to process the data (a task). The web application returns to a page telling the user the data was submitted and they will be notified when processing is done. The user can look at an "index" page and see which tasks are completed, failed or in progress. They can continue to submit more tasks (which are independent of previous data). They can close their browser and come back later.
The MSMQ-based WCF service reads the record from the database and processes the data. This may take anything from milliseconds to several minutes. When it's done processing the data, the record is updated with the corresponding status (error or fail) and the results.
Most of the time the WCF service is not performing any processing, but when it is, users generally want to know as soon as possible when it's done. The user will still use other parts of the web application even if they don't have data being processed by the WCF service.
This is what I have done
In the primary navigation bar I have an indicator (similar to Facebook or Google+) that notifies the user when the status of their tasks has changed. When they click on it, they get a summary of what was done and can then view the results if they wish to.
Using jQuery, I poll the server for changes. The controller action checks whether any processes were modified (completed or failed) and returns them; otherwise it waits a couple of seconds and checks again without returning to the client. To avoid a timeout on the client, it returns after 30 seconds even if there were no changes. The jQuery script then waits a while and tries again.
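To illustrate, the polling action is currently along these lines (simplified; GetChangedTasksSince stands in for the real query):

using System;
using System.Collections.Generic;
using System.Threading;
using System.Web.Mvc;

public class NotificationsController : Controller
{
    // Simplified long-poll: keep checking for up to ~30 seconds, then give up
    // so the client's XHR does not time out.
    [HttpPost]
    public ActionResult Poll(DateTime lastChecked)
    {
        DateTime deadline = DateTime.UtcNow.AddSeconds(30);

        while (DateTime.UtcNow < deadline)
        {
            List<TaskStatusInfo> changed = GetChangedTasksSince(lastChecked);
            if (changed.Count > 0)
                return Json(changed);

            Thread.Sleep(TimeSpan.FromSeconds(2)); // blocks a worker thread per client
        }

        return Json(new List<TaskStatusInfo>()); // nothing changed; client retries
    }

    private List<TaskStatusInfo> GetChangedTasksSince(DateTime since)
    {
        return new List<TaskStatusInfo>(); // real implementation hits the database
    }
}

public class TaskStatusInfo
{
    public int Id { get; set; }
    public string State { get; set; }
}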
The problems
Performance degrades with every user that has a page open; there is no need for them to do anything in particular. We've also noticed that memory usage of Firefox 7+ and Safari increases over time.
Using SignalR
I'm hoping that switching to SignalR can reduce the polling and thus reduce resource requirements, especially when nothing task-wise has changed in the database. I'm having trouble getting the WCF service to notify clients that it is done processing a task, given that the application uses forms-based authentication.
By asking this question, I hope someone will give me better insight into how they would redesign my notification scheme using SignalR, if at all.
If I understand correctly, you need a way of associating a task with a given user/client so that you can tell the client when their task has completed.
SignalR API documentation tells me you can call JS methods for specific clients based on the client id (https://github.com/SignalR/SignalR/wiki/SignalR-Client). In theory you could do something like:
Store the client id used by SignalR as part of the task metadata:
Queue the task as normal.
When the task is processed and de-queued:
Update your database with the status.
Using the client id stored as part of that task, use SignalR to send that client a notification:
You should be able to retrieve the connection that your client is using and send them a message:
string clientId = processedMessage.ClientId; // Stored when you originally queued it.
IConnection connection = Connection.GetConnection<ProcessNotificationsConnection>();
connection.Send(clientId, "Your data was processed");
This assumes you mapped this connection and the client used that connection to start the data processing request in the first place. Your "primary navigation bar" has the JS that started the connection to the ProcessNotificationsConnection endpoint you mapped earlier.
EDIT: From https://github.com/SignalR/SignalR/wiki/Hubs
public class MyHub : Hub
{
    public void Send(string data)
    {
        // Invoke a method on the calling client
        Caller.addMessage(data);

        // Similar to above, the more verbose way
        Clients[Context.ClientId].addMessage(data);

        // Invoke addMessage on all clients in group foo
        Clients["foo"].addMessage(data);
    }
}