I've developed a program in Delphi that, among other features, reads a lot of float values from a database and runs many calculations on them. At the end of these calculations, it shows a screen with the results. These calculations take a while to finish: today, something like 5 to 10 minutes before the results screen finally shows up.
Now, my customers are requesting a .Net version of this program, as almost all of my other programs have already moved to .Net. But I'm afraid that this time-consuming calculation procedure wouldn't fit the web scenario and would run the user into some kind of timeout error.
So, I'd like some tips or advice on how to handle this kind of procedure. Initially I thought about calling a local executable (which could even be my original Delphi program, run as a console application) and showing the results screen in a web page some time later. But, again, I'm afraid this wouldn't be the best approach.
An external process is a reasonable way to go about it. You could also fire off a thread inside the ASP.NET process (i.e. just with new Thread()), but there are issues around process recycling and pooling that might make this a little harder. Simply firing off an external process and then using some Ajax polling to check on its status in the browser seems like a good solution to me.
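A minimal sketch of that idea, just to show the shape of it - the executable name, the handler names, and the in-memory job dictionary are all assumptions (a real app would persist job status somewhere durable):

using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Web;

// Starts the long-running calculation as an external process and lets the
// browser poll a second handler for completion.
public class StartCalculationHandler : IHttpHandler
{
    // Job id -> finished flag. For illustration only; persist this in a real app.
    public static readonly ConcurrentDictionary<string, bool> Jobs =
        new ConcurrentDictionary<string, bool>();

    public void ProcessRequest(HttpContext context)
    {
        string jobId = Guid.NewGuid().ToString("N");
        Jobs[jobId] = false;

        var psi = new ProcessStartInfo("CalcRunner.exe", jobId)   // placeholder exe
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };

        Process process = Process.Start(psi);
        process.Exited += (s, e) => Jobs[jobId] = true;   // mark done when the exe finishes
        process.EnableRaisingEvents = true;

        context.Response.ContentType = "text/plain";
        context.Response.Write(jobId);   // the page polls CheckCalculationHandler with this id
    }

    public bool IsReusable { get { return true; } }
}

public class CheckCalculationHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string jobId = context.Request["jobId"];
        bool done;
        StartCalculationHandler.Jobs.TryGetValue(jobId, out done);

        context.Response.ContentType = "text/plain";
        context.Response.Write(done ? "done" : "running");
    }

    public bool IsReusable { get { return true; } }
}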
FWIW, another pattern that some existing online services use (for instance, ones that do file conversions that may take a few minutes) is having the person enter an email address and simply sending the results via email once the job is done - that way, if they accidentally kill their browser or it takes a little longer than expected, it's no big deal.
Another approach I've taken in the past is basically what Dean suggested - kick it off and have a status page that auto-refreshes; once it's complete, the status includes a link to the results.
How about:
Create a Web Service that does the fetching/calculation.
Set the timeout so it won't expire.
YourService.HeavyDutyCalculator svc = new YourService.HeavyDutyCalculator();
svc.Timeout = 10 * 60 * 1000; // 10 minutes: 10 min x 60 s x 1000 ms
Service.CalculateResult result = svc.Calculate();
Note that you can set it to -1 if you want it to never time out.
MSDN:
Setting the Timeout property to Timeout.Infinite indicates that the request does not time out. Even though an XML Web service client can set the Timeout property to not time out, the Web server can still cause the request to time out on the server side.
Call that web method inside your web page
Place a waiting/in-progress image
Register for the web method's completed event, and show the results once it completes.
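Roughly, the async call plus completion handler could look like this, assuming a classic ASMX proxy that generates CalculateAsync and a CalculateCompleted event (the service name and the Show*/Hide* helpers are made up):

// Kick off the long-running web method asynchronously so the page isn't blocked,
// and keep a "working" indicator visible until the completed event fires.
var svc = new YourService.HeavyDutyCalculator();
svc.CalculateCompleted += (sender, e) =>
{
    HideInProgressImage();            // hypothetical helper
    if (e.Error == null)
        ShowResults(e.Result);        // hypothetical helper
    else
        ShowError(e.Error.Message);   // hypothetical helper
};

ShowInProgressImage();                // hypothetical helper
svc.CalculateAsync();                 // returns immediately; the event fires on completion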
You can also update the timeout in your web.config:
<httpRuntime useFullyQualifiedRedirectUrl="true|false"
maxRequestLength="size in kbytes"
executionTimeout="seconds"
minFreeThreads="number of threads"
minFreeLocalRequestFreeThreads="number of threads"
appRequestQueueLimit="number of requests"
versionHeader="version string"/>
Regardless of what else you do, you need a progress bar or some other status indication for the user. Users are used to web pages that load in seconds; they simply won't realise (even if you tell them in advance) that they have to wait a full 10 minutes for their results.
The essence of the problem is this: there is a controller with a method that generates an Excel file. When requested, it needs to generate and return the file. Generating the file takes a long time, 1-2 hours, and while it runs I need to show a "please wait" notification. After it finishes, the notification must be removed.
I could not find a solution that does this.
Sorry for my bad English.
public ActionResult DownloadFile()
{
    return new FileStreamResult(_exporter.Export(), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
}
You only get one bite at the response apple. If you return a file result, that is all you can return. The only way to handle this while giving the user updates about the status is to do the file creation out-of-stream and then long-poll or use Web Sockets to update the user periodically. The request to this action would merely queue up the file creation and then return a regular view result.
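A rough sketch of that split, using a background task plus a polling action. The exporter interface, the static job dictionary, and the action names are placeholders; in production you'd want a real job store and a dedicated worker, since an in-process task dies if the app pool recycles:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
using System.Web.Mvc;

public interface IExporter { Stream Export(); }   // stand-in for the question's exporter

public class ReportController : Controller
{
    // Job id -> finished file path (null while still running). For illustration only.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    private readonly IExporter _exporter;   // assigned via your constructor/DI in the real app

    // 1. Queue the export and return immediately with a job id.
    public ActionResult StartExport()
    {
        var jobId = Guid.NewGuid();
        Jobs[jobId] = null;

        Task.Run(() =>
        {
            string path = Path.Combine(Path.GetTempPath(), jobId + ".xlsx");
            using (var output = System.IO.File.Create(path))
            using (var export = _exporter.Export())
            {
                export.CopyTo(output);
            }
            Jobs[jobId] = path;   // only now is the job marked complete
        });

        return Json(new { jobId }, JsonRequestBehavior.AllowGet);
    }

    // 2. The page polls this to know when to hide the "please wait" notification
    //    and show the download link.
    public ActionResult ExportStatus(Guid jobId)
    {
        string path;
        Jobs.TryGetValue(jobId, out path);
        return Json(new { done = path != null }, JsonRequestBehavior.AllowGet);
    }

    // 3. Only this final request returns the file itself.
    public ActionResult DownloadFile(Guid jobId)
    {
        string path = Jobs[jobId];
        return File(path, "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
    }
}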
It's unwise to have particularly long-running actions take place within the request-response cycle anyway. A web server has a thread pool, often referred to as its "max requests", because each request needs a thread. This is usually set by default to something around 1,000, and assumes that threads are freed as soon as possible. If 1,001 people tried to request this action at the same time, the 1,001st would be queued until one of the other 1,000 threads freed up, meaning they could be waiting for almost 4 hours. Even if your site never sees that kind of load, it's still an excellent vector for a denial-of-service attack: just send a few thousand requests to this URL and your server stops responding.
Also, I have no idea what you're doing, but 1-2 hours to generate an Excel file is absolutely insane. Either you're dealing with way too much data at once, and sending back multi-gigabyte files that will likely fail to even open properly in Excel, or the process by which you're doing it is severely unoptimized.
I've got an API written in C# (Web Forms) that accepts JSON POST data, with a SQL Server 2008 database, running on an AWS EC2 VM. My problem is that the "first" use of this API is rather slow to respond.
What I mean by "first" is that if I were to wait for an hour or so, then post some data, that would be the first. Subsequent posts would process rather quickly in comparison, and I would need to wait another hour or so before experiencing the slow "first" transaction again.
Since only the initial post is slow, it makes me wonder if something is "spinning down" after being idle for some time, and then spinning up again upon first use, adding the extra time.
Things I have tried -
Run the program through a performance profiler. This didn't really help; as far as I can see, the program itself doesn't have any obvious parts that run slowly or inefficiently.
Change configuration to persist at least 1 connection to the database at all times. Again, no real change. I did this by adding "Min Pool Size=1;Max Pool Size=100" to my connection string.
Change configuration to use named pipes instead of TCP. Once again, no real change. I did this by adding "np:" before the server specified in my connection string, e.g. server=np:MyServer;database=MyDatabase;
Is there anything else I can do to diagnose the problem? What else should I be looking for in this scenario?
Chances are your app pool is shutting down after a designated period of non-use. The first call after the shutdown forces everything to get loaded back into memory which explains the lag.
You could play with these settings: http://technet.microsoft.com/en-us/library/cc771956%28v=ws.10%29.aspx to see if you get the desired effect, or set up a Task Scheduler job that makes at least one call every 10 minutes or so by doing a simulated post - a simple PowerShell script could handle that for you and will keep everything 'primed' for the next use.
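The answer above mentions a PowerShell script; an equivalent keep-alive in C#, run from Task Scheduler every few minutes, could be as small as this sketch (the URL and the JSON body are placeholders):

using System;
using System.Net;
using System.Text;

// Tiny "keep warm" client: posts a harmless request to the API so the app pool
// never sits idle long enough to be shut down.
class KeepAlive
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("https://example.com/api/ping"); // placeholder URL
        request.Method = "POST";
        request.ContentType = "application/json";

        byte[] body = Encoding.UTF8.GetBytes("{\"ping\":true}");   // placeholder payload
        request.ContentLength = body.Length;
        using (var stream = request.GetRequestStream())
        {
            stream.Write(body, 0, body.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("Keep-alive returned {0}", (int)response.StatusCode);
        }
    }
}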
I am creating a Windows application (Windows Forms) which calls a web service to fetch data. I have to fetch information for 200+ clients, and for each client I have to fetch all of its users' information; a client can have 50 to 100 users. So, after getting the full client list, I call the web service in a loop, once per client, to fetch its users. This is a long process, and I want to reduce its execution time, which is currently 40-50 minutes for a single data fetch. Please suggest an approach that would help reduce it - multithreading or anything else that is best suited to my application.
Thanks in advance.
If you are in control of the web service, add a method that returns all the clients at once instead of one by one, to avoid round trips, as Michael suggested.
If not, make as many requests at the same time (not in sequence) as possible to avoid as much latency as you can. Each request costs at least one round trip (so at least your ping's worth of delay); if you make 150 requests in sequence, you'll wait your ping to the server x 150 of "just waiting on the network". If you split those requests into 4 batches and run each batch in parallel, you'll only wait about 150/4 x ping time. So the more requests you make concurrently, the less you wait.
I suggest you avoid calling the service in a loop for every client to get the details; instead, do that loop on the server and return all the data in one shot. Otherwise you will suffer a lot of needless latency caused by all those calls, not just the server time or data-transfer time.
This is also a pattern, called Remote Facade, described by Martin Fowler (a distributed, coarse-grained variant of the Gang of Four's Facade pattern):
any object that's intended to be used as a remote object needs a coarse-grained interface that minimizes the number of calls needed to get something done [...] Rather than ask for an order and its order lines individually, you need to access and update the order and order lines in a single call.
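In this case that would mean something like the sketch below on the service side; the type and method names are illustrative, not your actual contracts:

using System.Linq;
using System.Web.Services;

// Illustrative DTOs only.
public class User   { public int Id; public string Name; }
public class Client { public int ClientId; public string Name; }
public class ClientWithUsers { public int ClientId; public string Name; public User[] Users; }

public class ClientService : WebService
{
    // Chatty version (what the caller does today): one call per client, 200+ round trips.
    [WebMethod]
    public User[] GetUsersForClient(int clientId)
    {
        return LoadUsers(clientId);
    }

    // Remote Facade version: the per-client loop runs here on the server,
    // so the Windows app pays for a single round trip.
    [WebMethod]
    public ClientWithUsers[] GetAllClientsWithUsers()
    {
        return LoadClients()
            .Select(c => new ClientWithUsers
            {
                ClientId = c.ClientId,
                Name = c.Name,
                Users = LoadUsers(c.ClientId)
            })
            .ToArray();
    }

    // Stand-ins for the real data access.
    private Client[] LoadClients() { return new Client[0]; }
    private User[] LoadUsers(int clientId) { return new User[0]; }
}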
In case you're not in control of the web service, you could try using a Parallel.ForEach loop instead of a plain foreach loop to query the web service.
MSDN has a tutorial on how to use it: http://msdn.microsoft.com/en-us/library/dd460720(v=vs.110).aspx
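Something along these lines, assuming the generated proxy is called UserService with a GetUsersForClient method (both names are made up) and that clientIds has already been fetched; capping MaxDegreeOfParallelism keeps you from flooding the service:

using System.Collections.Concurrent;
using System.Threading.Tasks;

// Fetch each client's users concurrently instead of one after another.
var results = new ConcurrentDictionary<int, User[]>();

Parallel.ForEach(
    clientIds,                                              // the 200+ client ids already fetched
    new ParallelOptions { MaxDegreeOfParallelism = 8 },     // be polite to the server
    clientId =>
    {
        using (var svc = new UserService())                 // each iteration gets its own proxy
        {
            results[clientId] = svc.GetUsersForClient(clientId);
        }
    });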
This is a pretty vague question and getting it answered seems like a long shot, but I don't know what else to do.
Ever since I made my website live, every now and then it will just freeze. You click on a link and the browser just sits there, looking like it's trying to connect. The freezing can last up to 2 minutes or so, then everything is fine again. Then, a little while later, it will do the same thing.
I track all the exceptions that occur on my website in a log file.
I get these quite a bit ..
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
And the stack trace shows it leading to some method that's connecting to the database.
I'm assuming the freezing has to do with this timeout problem. My website is hosted on a shared server, and my database is on some other server with about a billion other databases as well.
But even being on a shared server, this freezing problem happens all the time. It's extremely annoying, and I can see it being a pretty catastrophic problem considering my site is ecommerce-based and people are doing transactions on it. The last thing I want is the site freezing when a user hits the 'Submit payment' button, resulting in them hitting the button over and over because the site froze, and their credit card getting charged about 10 extra times.
Does anyone have any suggestions on the best way to handle this?
I am guessing that it has to do with the database connections. Check that they are getting released properly; if not, you will use them all up.
Also check to see if your database has connection pooling configured.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding
That's a SQL command timeout exception - it can be fairly common if your database is under load. Make sure you're disposing of your SqlConnections and SqlCommands - though failing to do so would usually result in a pool timeout exception (unable to retrieve a connection from the connection pool).
Odds are, someone is running queries that are badly tuned or otherwise sucking resources. It may be your site, but since you're on a shared db server, it could just as easily be someone else's. It could also be blocking, or open transactions - since those would be on your database, that'd be a coding issue. You'll probably need to get your hosting provider involved to track it down or move to a dedicated db server.
You can decrease the CommandTimeout of your SqlCommands - I know that sounds somewhat counter-intuitive, but I often find it's better to fail early than to keep trying for 60 seconds while throwing additional load on the server. If your 0.5-second query isn't done in 5 seconds, odds are it won't be done in 60 either.
Alternatively, if you're the patient type, you can increase the CommandTimeout - but there's also an IIS timeout of 90 seconds that you'll need to modify if you bump it up too much.
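In practice that means wrapping everything in using blocks and setting the timeout explicitly on the command - something like this sketch (the connection string, query, and method name are placeholders):

using System.Data.SqlClient;

// Dispose connections/commands deterministically and fail fast with a short
// CommandTimeout instead of hanging the page for the default 30 seconds.
public static decimal GetOrderTotal(int orderId)
{
    using (var connection = new SqlConnection("Server=MyDbServer;Database=MyShop;Integrated Security=true"))
    using (var command = new SqlCommand("SELECT Total FROM Orders WHERE OrderId = @id", connection))
    {
        command.CommandTimeout = 5;   // seconds; the default is 30
        command.Parameters.AddWithValue("@id", orderId);

        connection.Open();
        object result = command.ExecuteScalar();
        return (decimal)result;
    }   // the connection goes back to the pool here, even if an exception is thrown
}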
The timeout errors are definitely the source of the freezing pages. When it happens, the page will wait something like a minute for the database connection before it returns the error message. As the web server only handles one page at a time from each user, the entire site will seem frozen to that user until the timeout error comes. Even if it only happens to a few users once in a while, it will seem quite severe to them, as they can't access the site at all for a minute or so.
How severe the problem really is depends on how many errors you get. From your description it sounds like you get a bit too many to be normal.
Make sure that all your data readers, command objects and connection objects get disposed properly, so that you don't leave connections open.
Also look for deadlock errors in the log, as they can cause timeouts. If you have queries that lock each other, you may be able to improve them by changing the order in which they use the tables.
Check the SQL Server logs, especially for deadlocks.
If you have multiple connections open, one might be waiting on a row that is locked by the other.
We have a very strange problem: one of our applications continually queries a server using .NET remoting, and every 100 seconds the application stops querying for a short duration and then resumes. The problem is on the client and not on the server, because the application actually queries several servers at the same time and stops receiving data from all of them at the same time.
100 seconds is a giveaway number, as it's the default timeout for a WebRequest in .NET.
I've seen in the past that the PSI (Project Server Interface within Microsoft Project) didn't override the timeout, so the default of 100 seconds was applied and would terminate anything talking to it for longer than that.
Do you have access to all of the code and are you sure you have set timeouts where applicable so that any defaults are not being applied unbeknownst to you?
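If plain WebRequest/HttpWebRequest is involved anywhere on the client side, the override is a one-liner; a small sketch (the URL is a placeholder):

using System.Net;

// HttpWebRequest.Timeout defaults to 100000 ms (100 seconds); raise it explicitly
// for calls that are expected to run longer.
var request = (HttpWebRequest)WebRequest.Create("http://yourserver/longrunning");   // placeholder URL
request.Timeout = 5 * 60 * 1000;            // 5 minutes instead of the 100-second default
request.ReadWriteTimeout = 5 * 60 * 1000;   // also covers slow reads/writes on the stream

using (var response = (HttpWebResponse)request.GetResponse())
{
    // ... consume the response ...
}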
I've never seen that behavior before and unfortunately it's a vague enough scenario I think you're going to have a hard time finding someone on this board who's encountered the problem. It's likely specific to your application.
I think there are a few investigations you can do to help you narrow down the problem.
Determine whether it's the client or the server that is actually stalling. If you have trouble determining this, try installing a packet capture tool and monitor the traffic to see who sent the last data. You likely won't be able to read the binary data, but at least you will get a sense of who is lagging behind.
Once you figure out whether it's the client or server causing the lag, attempt to debug into the application and get a breakpoint where the hang occurs. This should give you enough details to help track down the problem. Or at least ask a more defined question on SO.
How is the application coded to implement the continuous querying? Is it a tight loop, a loop with a Thread.Sleep, or is it on a timer?
It would first be useful to determine whether your system is executing this "trigger" in your code when you expect it to, or whether it is and the remoting server is simply not responding... so:
If you cannot reproduce this issue in a development environment where you can debug it, then, if you can, I suggest you add code to this loop to write to a log file (or some other persistence mechanism) each time it examines whatever conditions it uses to decide whether to query the remoting server, and then review those logs when the problem recurs...
If you can do the same in your remoting server, recording when the server receives each remoting request, that would help as well...
... and oh yes, just a thought (I don't know how you have coded this), but if you are using a separate thread in the client to issue the remoting request, and the channel is being registered and unregistered on that separate thread, make sure you are deconflicting the requests, because you can't register the same port twice on the same machine at the same time
(although this should probably have raised an exception in your client if this was the issue)
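A sketch of the kind of logging suggested above, assuming the client polls in a loop; the method names, polling interval, and log path are invented:

using System;
using System.IO;
using System.Threading;

// Minimal trace of each polling attempt, so the log shows whether the client
// stopped asking or the server stopped answering during the 100-second gaps.
static void PollLoop()
{
    while (true)
    {
        Log("About to query remoting server");
        try
        {
            var result = QueryServer();          // placeholder for the real remoting call
            Log("Got response: " + result);
        }
        catch (Exception ex)
        {
            Log("Query failed: " + ex.Message);
        }

        Thread.Sleep(TimeSpan.FromSeconds(1));   // assumed polling interval
    }
}

static void Log(string message)
{
    File.AppendAllText(@"C:\logs\remoting-client.log",
        string.Format("{0:O} {1}{2}", DateTime.UtcNow, message, Environment.NewLine));
}

static string QueryServer()
{
    // placeholder: call your remoting proxy here
    return "ok";
}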