How to prevent browser timeouts in ASP.NET? - c#

The scenario:
A user uploads a small file of about 100 kB to the ASP.NET 4.0 application; it contains perhaps 1000 units of work.
The server then gets to work on the units of work, one at a time.
Each unit takes a few seconds to complete due to requesting information from an external service.
The results are only saved to the database if all units were completed successfully, using a transaction.
Once completed, the user may get a list of what was done.
The problem is that the user never gets the confirmation; I believe his browser gives up because of a timeout.
Future files are expected to be a few hundred times larger, which will make the problem worse.
I want to prevent this timeout.
Here are some ideas I had:
Optimize the code to run faster. This is done and is no longer the problem.
Run the requests to the external service in parallel.
Increase the server timeouts a little.
Let the user upload the file and then send him an email with the results later when the file has been processed.
Somehow make the page refresh and show some progress information to the user while waiting, e.g., 5% complete - done in 10 minutes.
How could I implement this last step, showing progress information and preventing browser timeout?
Other suggestions are welcome.
Thanks,

You need to decouple processing the file from the browser response. This is achieved by:
Creating a persistent item (e.g. in a database or file) in a queue, so that it is fault tolerant
Returning a success result to the browser
Creating a queue worker to asynchronously process your queue

You put the uploaded data into a queue in your database. Then you have an asynchronous process (for example, a Windows Service) pull items from your queue and process them. You can update your DB with the progress of each operation and, when fully completed, remove the item from the queue and update your other tables.
For progress, the user can then query the queue table for the status of his upload.
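A rough sketch of that split, assuming a simple WorkQueue table with Id, Payload, Status and Progress columns (all names here are made up for illustration): the web request only inserts a row and returns, and the Windows Service later picks the row up.
using System;
using System.Data.SqlClient;

// Hypothetical data access for the queue table; all names are illustrative.
public static class WorkQueue
{
    static readonly string ConnectionString = "...";   // your connection string

    // Called from the upload page: persist the work and return a ticket immediately.
    public static Guid Enqueue(byte[] uploadedFile)
    {
        var jobId = Guid.NewGuid();
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "INSERT INTO WorkQueue (Id, Payload, Status, Progress) " +
            "VALUES (@id, @payload, 'Pending', 0)", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            cmd.Parameters.AddWithValue("@payload", uploadedFile);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
        return jobId;   // hand this to the user so the progress page can look it up
    }

    // Called from the progress page / AJAX endpoint.
    public static int GetProgress(Guid jobId)
    {
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT Progress FROM WorkQueue WHERE Id = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", jobId);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}
The service polls WorkQueue for 'Pending' rows, processes the units of work one by one, updates Progress as it goes, and only commits the results and marks the row 'Done' in a single transaction once every unit has succeeded; the progress page simply calls GetProgress with the job id.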

An easy trick is to write a piece of javascript that repeatedly requests something from the current page. If you use a HEAD request, the server will only respond with a minimal amount of information each time.
Something like:
<script src="/js/jquery-1.3.2.min.js" type="text/javascript"></script>
<script type="text/javascript">
    $(document).ready(function () {
        setTimeout(callserver, 6000);
    });

    function callserver() {
        var remoteURL = '/yourpage.aspx';
        $.get(remoteURL, function (data) { setTimeout(callserver, 6000); });
    }
</script>

Related

Receive multipart response and process each part as soon as it is received

Current situation: an existing SQL Server stored procedure I have no control upon returns 10 large strings in separate resultsets in about 30 seconds (~3 seconds per dataset). The existing ASP.NET Web API controller method that collects these strings only returns a response once all strings are obtained from the stored procedure. When the client receives the response, it takes another 30 seconds to process the strings and display the results, for a total of 1 minute from request initiation to operation completion.
Contemplated improvement: somehow transmit the strings to the client as soon as each is obtained from the SqlDataReader, so the client can work on interpreting each string while receiving the subsequent ones. The total time from request initiation to completion would thus roughly be halved.
I have considered the WebClient events at my disposal, such as DownloadStringCompleted and DownloadProgressChanged, but feel none of them is viable and generally think I am on the wrong track, hence this question. I have all kinds of ideas, such as saving the strings to temporary files on the server and sending each file name to the client through a parallel SignalR channel for the client to request in parallel, etc., but feel I would both waste my time and lose your opportunity to enlighten me.
I would not resort to inverting the standard client / server relationship using a "server push" approach. All you need is some kind of intermediary dataset. It could be a singleton object (or multiple objects, one per client) on your server, or another table in an actual database (perhaps NoSql).
The point is that the client will not directly access the slow data flow you're dealing with. Instead the client will only access the intermediary dataset. On the first request, you will start off the process of migrating data from the slow dataset to the intermediary database and the client will have to wait until the first batch is ready.
The client will then make additional requests as he processes each result on his end. If more intermediary results are already available he will get them immediately, otherwise he will have to wait like he did on the first request.
Meanwhile, the server keeps waiting on the slow data set and adding more data to the intermediary data set. You will have to have a way of marking the intermediary data as already sent to the client or not. You will probably want to spawn a separate thread for the code that moves data from the slow data source to the intermediary one.
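As a rough sketch of that intermediary dataset (the controller shape, the buffer, and ReadStringsFromStoredProcedure are invented for illustration; routing configuration and per-client keying are omitted), the first request starts a background task that drains the SqlDataReader into a buffer, and subsequent requests return whatever has arrived so far:
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading.Tasks;
using System.Web.Http;

public class ReportController : ApiController
{
    // One shared buffer for simplicity; a real implementation would key this per client run.
    static readonly ConcurrentQueue<string> Buffer = new ConcurrentQueue<string>();
    static Task _producer;
    static readonly object Sync = new object();

    // POST: begin draining the slow stored procedure into the buffer.
    [HttpPost]
    public void Start()
    {
        lock (Sync)
        {
            if (_producer == null || _producer.IsCompleted)
                _producer = Task.Run(() =>
                {
                    // ReadStringsFromStoredProcedure() stands in for your existing
                    // SqlDataReader loop, yielding each large string as it arrives.
                    foreach (var s in ReadStringsFromStoredProcedure())
                        Buffer.Enqueue(s);
                });
        }
    }

    // GET: return whatever strings have arrived since the last call.
    [HttpGet]
    public List<string> Next()
    {
        var batch = new List<string>();
        string s;
        while (Buffer.TryDequeue(out s))
            batch.Add(s);
        return batch;   // an empty list means "nothing new yet, ask again shortly"
    }

    static IEnumerable<string> ReadStringsFromStoredProcedure()
    {
        yield break;    // placeholder for the existing data-access code
    }
}
The client keeps calling Next() and interprets each batch while the producer task continues filling the buffer; dequeuing doubles as the "already sent to the client" marker. The same idea works with a database table instead of the in-memory queue if the intermediary data needs to survive restarts.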

MongoDB connection problems on Azure

We have an ASP.NET MVC application deployed to an Azure Website that connects to MongoDB and does both read and write operations. The application does this iteratively. A few thousand times per minute.
We initialize the C# driver using Autofac and we set the MaxConnectionIdleTime to 45 seconds as suggested in https://groups.google.com/forum/#!topic/mongodb-user/_Z8YepNHnbI and a few other places.
We are still getting a large number of the below error:
System.IO.IOException: Unable to read data from the transport connection: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
We get this error while connecting to both a MongoDB instance deployed on a VM in the same datacenter/region on Azure and also while connecting to an external PaaS MongoDB provider.
I run the same code on my local computer and connect to the same DB, and I don't receive these errors. It only happens when I deploy the code to an Azure Website.
Any suggestions?
A few thousand requests per minute is a big load, and the only way to do it right is by controlling and limiting the maximum number of threads that can be running at any one time.
As there's not much information posted as to how you've implemented this, I'm going to cover a few possible circumstances.
Time to experiment...
The constants:
Items to process:
50 per second, or in other words...
3,000 per minute, and one more way to look at it...
180,000 per hour
The variables:
Data transfer rates:
How much data you can transfer per second is going to play a role no matter what we do, and this will vary throughout the day.
The only thing we can do is fire off more requests from different CPUs to distribute the weight of the traffic we're sending back and forth.
Processing power:
I'm assuming you have this in a WebJob as opposed to having it coded inside the MVC site itself, which is highly inefficient and not fit for the purpose you're trying to achieve. By using a WebJob we can queue work items to be processed by other WebJobs. The queue in question is Azure Queue Storage.
Azure Queue storage is a service for storing large numbers of messages
that can be accessed from anywhere in the world via authenticated
calls using HTTP or HTTPS. A single queue message can be up to 64 KB
in size, and a queue can contain millions of messages, up to the total
capacity limit of a storage account. A storage account can contain up
to 200 TB of blob, queue, and table data. See Azure Storage
Scalability and Performance Targets for details about storage account
capacity.
Common uses of Queue storage include:
Creating a backlog of work to process asynchronously
Passing messages from an Azure Web role to an Azure Worker role
The issues:
We're attempting to complete 50 transactions per second, so each transaction should be done in under 1 second if we were utilising 50 threads. Our 45-second timeout serves no purpose at this point.
We're expecting 50 threads to run concurrently, all completing in under a second, every second, on a single CPU. (I'm exaggerating to make a point, but imagine downloading 50 text files every single second, processing them, then trying to shoot them back over to a colleague in the hope they're even ready to catch them.)
We need to have retry logic in place: if after 3 attempts an item isn't processed, it needs to be placed back into the queue. Ideally we should give the server more time to respond than just one second with each failure; let's say we give it a 2-second break on the first failure, then 4 seconds, then 10. This will greatly increase the odds of persisting / retrieving the data we need.
We're assuming that our MongoDB can handle this number of requests per second. If you haven't already, start looking at ways to scale it out. The issue isn't that it's MongoDB (the data layer could have been anything); it's the fact that we're making this number of requests from a single source that is the most likely cause of your issues.
The solution:
Set up a WebJob and name it EnqueueJob. This WebJob will have one sole purpose: to queue items of work to be processed into Queue Storage.
Create a Queue Storage container named WorkItemQueue; this queue will act as a trigger for the next step and kick off our scaling-out operations.
Create another WebJob named DequeueJob. This WebJob will also have one sole purpose: to dequeue the work items from the WorkItemQueue and fire off the requests to your data store.
Configure the DequeueJob to spin up once an item has been placed inside the WorkItemQueue, start 5 separate threads on each instance and, while the queue is not empty, dequeue work items for each thread and attempt to execute the dequeued job.
Attempt 1, if fail, wait & retry.
Attempt 2, if fail, wait & retry.
Attempt 3, if fail, enqueue item back to WorkItemQueue
Configure your website to autoscale out to x number of CPUs (note that your website and WebJobs share the same resources); a rough sketch of the two WebJobs follows below.
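A minimal sketch of the two WebJobs under those assumptions (queue name, message shape and helper names are illustrative; the WebJobs SDK's QueueTrigger binding supplies the dequeue-and-retry behaviour, moving a message to a poison queue after it has failed maxDequeueCount times):
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class Program
{
    // WebJob host; the SDK discovers the triggered functions below.
    static void Main()
    {
        new JobHost().RunAndBlock();
    }
}

public class Functions
{
    // "EnqueueJob" side: push a work item onto the queue instead of hitting MongoDB directly.
    public static void Enqueue(string workItemJson)
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        var queue = account.CreateCloudQueueClient().GetQueueReference("workitemqueue");
        queue.CreateIfNotExists();
        queue.AddMessage(new CloudQueueMessage(workItemJson));
    }

    // "DequeueJob" side: the SDK invokes this for each message on the queue and
    // retries it automatically if it throws, so transient network errors don't lose work.
    public static void ProcessWorkItem(
        [QueueTrigger("workitemqueue")] string workItemJson,
        TextWriter log)
    {
        // The MongoDB read/write for a single item goes here.
        log.WriteLine("Processing: " + workItemJson);
    }
}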
Here's a short 10 minute video that gives an overview on how to utilise queue storages and web jobs.
Edit:
You may also be getting those errors because of two other factors, again caused by the code living in an MVC app...
If you're compiling the application with the DEBUG attribute applied but pushing the RELEASE version instead, you could be running into issues due to the settings in your web.config. Without the DEBUG attribute, an ASP.NET web application will run a request for a maximum of 90 seconds; if the request takes longer than this, it will dispose of the request.
To increase the timeout beyond 90 seconds you will need to change the httpRuntime property in your web.config...
<!-- Increase timeout to five minutes -->
<httpRuntime executionTimeout="300" />
The other thing you need to be aware of is the request timeout between your browser and your web app. I'd say that if you insist on keeping the code in MVC, as opposed to extracting it and putting it into a WebJob, then you can use the following code to fire a request off to your web app and offset the timeout of the request.
string html = string.Empty;
string uri = "http://google.com";   // replace with your web app's URL

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Timeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;   // Timeout is expressed in milliseconds

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
using (Stream stream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(stream))
{
    html = reader.ReadToEnd();
}
Are you using MongoDB in a VM? It seems to be a network problem. These kinds of transient faults are expected to occur, so the best you can do is implement a retry pattern or use a lib such as Polly to do that:
Policy
    .Handle<IOException>()
    .Retry(3, (exception, retryCount) =>
    {
        // do something
    });
https://github.com/michael-wolfenden/Polly
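If you also want the growing back-off described above (2 seconds, then 4, then 10) rather than immediate retries, Polly can express that with WaitAndRetry; a small sketch, where RunMongoOperation stands in for your own data-access call:
Policy
    .Handle<IOException>()
    .WaitAndRetry(new[]
    {
        // give the server progressively longer to recover before each retry
        TimeSpan.FromSeconds(2),
        TimeSpan.FromSeconds(4),
        TimeSpan.FromSeconds(10)
    })
    .Execute(() => RunMongoOperation());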

Web Page Hangs after Procedure execution completes

I have a web page that accepts an Excel file, reads data from it and stores it in a buffer table. Then a procedure is called that reads from this buffer table, and each record passes through a number of validations. After the procedure completes, I delete the buffer table contents from the code-behind. I have executed this code with about 100 records and it runs in a few seconds. However, when the record count is increased (say to about 2000), the procedure takes over 5 minutes to execute and the web page hangs. I have checked the database: the record insertion and buffer table deletion take about 6-7 minutes, but the web page does not return a result even after 30 minutes. I have tried to optimize the procedure, but for large numbers of records the web page still hangs.
Please give me some direction on how to avoid this page-hanging situation. Any help would be great. Thanks in advance.
I think that the first thing you should do is wrap your inserts into a transaction.
If there are too many records for a single transaction, you could perform a commit every n records (say 500).
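A rough sketch of that batching (table, column and variable names are placeholders for your own schema and parsing code):
// Sketch: batch the buffer-table inserts, committing every 500 rows.
// RowData and the column names stand in for your own types.
void InsertInBatches(string connectionString, IEnumerable<RowData> rowsFromExcel)
{
    const int BatchSize = 500;

    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        var tx = conn.BeginTransaction();
        int count = 0;

        foreach (var row in rowsFromExcel)
        {
            using (var cmd = new SqlCommand(
                "INSERT INTO BufferTable (Col1, Col2) VALUES (@c1, @c2)", conn, tx))
            {
                cmd.Parameters.AddWithValue("@c1", row.Col1);
                cmd.Parameters.AddWithValue("@c2", row.Col2);
                cmd.ExecuteNonQuery();
            }

            // commit every 500 rows so one huge transaction doesn't hold locks
            // and bloat the log for the whole run
            if (++count % BatchSize == 0)
            {
                tx.Commit();
                tx = conn.BeginTransaction();
            }
        }

        tx.Commit();   // commit the final partial batch
    }
}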
As far as the web page returning: you could be reaching a timeout of some sort where IIS or the client abandons the request, or, if you update the page with data, you could have invalid data that is causing errors in the page.
For this, you should check the Windows event log to see if IIS or ASP.NET is reporting any exceptions. You can also run Fiddler to see what is happening to the request.
Finally, I would strongly suggest a redesign that does not require the user to wait with a submitted form on the screen until processing is complete.
The standard pattern that we use for this type of functionality is to record incoming requests in the database with a GUID, kick off a background worker to perform the task, and return the GUID to the client.
When the background worker has finished (or encounters an error), it updates the request table in the database with the new status (i.e. success or fail) and the error message if any.
The client can use the GUID to issue AJAX requests to the web server on a regular basis (using window.setTimeout so as not to block the user and to allow animations to be displayed) to determine whether or not the process is complete. Once the process is complete, the UI can be updated as needed.
Update
To record incoming requests in the database with a GUID, create a table that contains a GUID column as the primary key, a status column (3 values: in progress, success, failure), and an error message column.
When the request is received, create a new GUID in your code, then write a record to this new table with this GUID and a status of in progress prior to launching the background worker.
You will pass the GUID to the background worker so that it can update the table on completion (it just updates that status to complete or error and records the error message, if any).
You will also pass the GUID back to the client through JavaScript so that the client can periodically ask the web server to query the table using the GUID and determine when the request is no longer in progress.
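A hedged sketch of the whole pattern (the controller, RequestStore and ProcessUpload are invented names; the tracking table is just a GUID primary key plus Status and ErrorMessage columns):
using System;
using System.Threading.Tasks;
using System.Web.Mvc;

public class UploadController : Controller
{
    // 1. Record the request with a new GUID, kick off the work, return the GUID.
    [HttpPost]
    public ActionResult Start()
    {
        var requestId = Guid.NewGuid();
        RequestStore.Insert(requestId, "InProgress", null);

        // For a real deployment prefer a worker that survives app-pool recycles
        // (e.g. HostingEnvironment.QueueBackgroundWorkItem or a separate service).
        Task.Run(() =>
        {
            try
            {
                ProcessUpload(requestId);                        // existing buffer-table + validation logic
                RequestStore.Update(requestId, "Success", null);
            }
            catch (Exception ex)
            {
                RequestStore.Update(requestId, "Failure", ex.Message);
            }
        });

        return Json(new { id = requestId });
    }

    // 2. Polled by the client (window.setTimeout) until status is no longer "InProgress".
    [HttpGet]
    public ActionResult Status(Guid id)
    {
        var row = RequestStore.Get(id);
        return Json(new { status = row.Status, error = row.ErrorMessage },
                    JsonRequestBehavior.AllowGet);
    }

    void ProcessUpload(Guid requestId) { /* existing import + validation code */ }
}

// Placeholder for the tracking table: GUID primary key, Status, ErrorMessage.
public class RequestRow { public string Status; public string ErrorMessage; }

public static class RequestStore
{
    public static void Insert(Guid id, string status, string error) { /* INSERT ... */ }
    public static void Update(Guid id, string status, string error) { /* UPDATE ... */ }
    public static RequestRow Get(Guid id) { return new RequestRow { Status = "InProgress" }; }
}
The client stores the id returned by Start() and polls Status(id) every few seconds; as soon as the status flips to Success or Failure it stops polling and updates the UI.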

handling multiple stale request at server

In my app I have a map, and when the user does any operation on the map a request is sent to the server asking for the map for the new bounding box. The problem is that if a user zooms in fast or pans the map continuously, we end up sending many requests to the server, and the server ends up sending results back to the client for all of them.
Now I want to handle this more gracefully at both the server end and the client end. I have thought of ways to handle it at the client end, but I need a way to do the same gracefully at the server end. What I mean is that I don't want to end up processing stale requests that my client doesn't expect a response from anyway. Is there a way I can achieve this?
I am using MVC architecture in .NET Framework.
Thanks in advance.
P.S. All these queries are obviously AJAX queries.
There are multiple ways to do this:
First way:
On the server side where you receive the request from the client for the new bounding window, have the server operation wait for a small fraction of time (the duration can be fine-tuned later) before it starts processing. If a new request arrives from the same client (for the same zoom operation) within this wait time, let the old request be discarded. If no new request arrives from the client before the wait elapses, the server treats the current request as the final one and processes it; see the sketch below. To minimise the delay seen on the client side, the server can use the wait to prepare any resources needed to process the request that do not depend on the exact zoom parameters.
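A rough sketch of this first approach in an MVC controller (the action name, parameters, delay value and per-client bookkeeping are invented for illustration; the client simply sends an incrementing sequence number with each request):
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Web.Mvc;

public class MapController : Controller
{
    // Last request number seen per client; names and storage are illustrative only.
    static readonly ConcurrentDictionary<string, long> LatestRequest =
        new ConcurrentDictionary<string, long>();

    [HttpGet]
    public async Task<ActionResult> Tiles(string clientId, long seq,
                                          double minLat, double minLon,
                                          double maxLat, double maxLon)
    {
        LatestRequest[clientId] = seq;

        // Small settle delay (value to be tuned): if another request from the
        // same client arrives while we wait, this one has become stale.
        await Task.Delay(300);

        if (LatestRequest[clientId] != seq)
            return new HttpStatusCodeResult(204);   // discarded; the client ignores it anyway

        var map = RenderMap(minLat, minLon, maxLat, maxLon);  // the expensive part
        return Json(map, JsonRequestBehavior.AllowGet);
    }

    object RenderMap(double minLat, double minLon, double maxLat, double maxLon)
    {
        return null;   // placeholder for the real map generation
    }
}
The client can also ignore any response whose sequence number is older than its own latest one, which covers the case where a stale request slipped through.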
Second way (can be used along with the first approach):
If the server is capable of it, use multiple threads for processing client requests. This way you can safely discard stale results and still avoid any zooming delay appearing on the client.

Is timer good solution for this?

I have an application that uses an MSSQL database.
The application has a module that is used for sending messages between application users.
When one user sends a message to another, I insert the message into the database and set its status to 1 (after the user reads the message, I update the database and set the status to 0).
Now, I am using System.Timers.Timer to check the message status, and if the status is 1 the user gets an alert that he has a message in his inbox.
The problem is that this application can be used by many users, and if the timer runs every 5 minutes this slows down the application and the database.
Is there any other solution for this, without running a timer?
Thanks!
I don't think a polling solution using a timer is that bad, and 50 users is relatively few.
Does each user run a client app which directly connects to the database? Or is this an ASP.NET app? Or a service which connects to the DB and notifies client apps?
If you have client apps connecting directly to the DB, I'd stay with the timer and probably reduce the interval (the number of queries seems to be extremely low in your case).
Other options
Use SqlDependency / query notifications (MSDN); see the sketch after this list
Only if your message processing logic gets more complex should you take a look at Service Broker, especially if you need queuing behavior. But as it seems, this would be far too complex here.
I wouldn't use a trigger.
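If you do try the SqlDependency route, a minimal sketch looks like this (table and column names are taken loosely from the question and are illustrative; query notifications must be enabled on the database, and the query has to follow the notification rules, e.g. two-part table names and no SELECT *):
using System.Data.SqlClient;

public class MessageWatcher
{
    readonly string _connectionString;

    public MessageWatcher(string connectionString)
    {
        _connectionString = connectionString;
        SqlDependency.Start(_connectionString);   // call once per application lifetime
    }

    public void Watch(int userId)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand(
            "SELECT MessageId FROM dbo.Messages WHERE RecipientId = @user AND Status = 1", conn))
        {
            cmd.Parameters.AddWithValue("@user", userId);

            var dependency = new SqlDependency(cmd);
            dependency.OnChange += (s, e) =>
            {
                // Fires when the result of the query changes: alert the user, then
                // call Watch(userId) again to re-subscribe (notifications are one-shot).
            };

            conn.Open();
            using (cmd.ExecuteReader()) { }   // executing the command registers the notification
        }
    }
}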
Maybe you should look into having a "monitor" service, which is the only component watching for changes in the database; it then sends a message (a delegate) to the other applications that data has been updated, and they fetch their own data only when they get that message.
If you are always checking against the message table, you can instead add a column named HasNewMessage to your user table, which is updated by a trigger on the message table.
To illustrate it:
User1 gets a new message
A trigger on the message table sets HasNewMessage to 1 for user1
You then check every 5 minutes whether user1 HasNewMessage (this should be faster thanks to the indexed user table)
When user1 looks into his mailbox, you set HasNewMessage back to 0
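With that flag in place, the periodic check your timer runs becomes a single cheap lookup against the user table instead of a scan of the message table; a small sketch with illustrative table and column names:
using System.Data.SqlClient;

// Called from the existing System.Timers.Timer; returns true if the user has unread mail.
bool HasNewMessage(string connectionString, int userId)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT HasNewMessage FROM Users WHERE UserId = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", userId);
        conn.Open();
        return (bool)cmd.ExecuteScalar();   // bit column kept up to date by the trigger
    }
}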
Hope this helps
