The essence of the problem is this: there is a controller with a method that generates an Excel file. On request, it needs to generate the file and return it. Generation takes a long time, 1-2 hours, and during that time a "please wait" notification must be shown to the user. After it finishes, the notification must be removed.
I could not find a solution that does what I want.
Sorry for my bad English.
public ActionResult DownloadFile()
{
    return new FileStreamResult(_exporter.Export(), "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
}
You only get one bite at the response apple. If you return a file result, that is all you can return. The only way to handle this while giving the user updates about the status is to do the file creation out-of-stream and then long-poll or use Web Sockets to update the user periodically. The request to this action would merely queue up the file creation and then return a regular view result.
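For illustration, here is a minimal sketch of that pattern with an MVC controller. Everything in it (the ExportController name, the IExporter stand-in, the in-memory job table, the status and download actions) is an assumption for the example, not the asker's actual code, and a bare Task.Run inside the web process still suffers from the app-pool recycling caveats mentioned elsewhere on this page:

using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;
using System.Web.Mvc;

public interface IExporter { Stream Export(); }   // stand-in for the asker's exporter

public class ExportController : Controller
{
    private readonly IExporter _exporter;          // assume this is injected as before

    // In-memory job table for illustration only; a real app would persist job state.
    private static readonly ConcurrentDictionary<Guid, string> Jobs =
        new ConcurrentDictionary<Guid, string>();

    public ActionResult StartExport()
    {
        var jobId = Guid.NewGuid();
        Jobs[jobId] = "pending";
        Task.Run(() =>
        {
            var path = Path.Combine(Path.GetTempPath(), jobId + ".xlsx");
            using (var output = System.IO.File.Create(path))
            using (var data = _exporter.Export())
            {
                data.CopyTo(output);
            }
            Jobs[jobId] = path;                    // done: remember where the file landed
        });
        // The client shows the "please wait" notification and starts polling ExportStatus.
        return Json(new { jobId }, JsonRequestBehavior.AllowGet);
    }

    public ActionResult ExportStatus(Guid jobId)
    {
        string state;
        var done = Jobs.TryGetValue(jobId, out state) && state != "pending";
        return Json(new { done }, JsonRequestBehavior.AllowGet);
    }

    public ActionResult DownloadExport(Guid jobId)
    {
        return File(Jobs[jobId],
            "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
            "export.xlsx");
    }
}

When ExportStatus reports done, the page removes the notification and points the browser at DownloadExport.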
It's unwise to have particularly long-running actions take place within the request-response cycle anyway. A web server has a thread pool, often referred to as its "max requests", because each request needs a thread. Usually this defaults to something around 1000, on the assumption that threads are freed up as soon as possible. If 1001 people all tried to request this action at the same time, the 1001st would be queued until one of the other 1000 threads freed up, and with a 1-2 hour generation time could end up waiting close to 4 hours for a result. Even if you never see your site getting this kind of load, it's still an excellent vector for a DDoS attack: just send a few thousand requests to this URL and the server's thread pool is exhausted, so it stops responding to anyone.
Also, I have no idea what you're doing, but 1-2 hours to generate an Excel file is absolutely insane. Either you're dealing with way too much data at once, and sending back multi-gigabyte files that will likely fail to even open properly in Excel, or the process by which you're doing it is severely unoptimized.
Related
I have a process that uses a very large amount of memory; it involves manipulating large images. The process is called via a GET request route, and I currently have a lock around the image creation method. Without the lock, if I send more than 10 requests at once, the application's memory immediately spikes and it throws an exception.
[HttpGet]
[Route("example")]
public HttpResponseMessage GetImage([FromUri] ImageParams imageParams)
{
    lock (myLock)
    {
        return CreateImage(imageParams);
    }
}
Someone mentioned increasing the applicationPool in another question, but I can't figure out how to do it. I think it would be a better alternative to locking, because I could still use a couple of threads to create images while capping the count so I don't run out of memory. I am under the impression that .NET uses an integrated thread pool to serve each GET request. I am sure from testing that these requests somehow run in parallel, and it would be more helpful to reduce the potential number of threads than to lock everything down to one.
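For illustration, here is a rough sketch of how the same action could cap concurrency at a small number instead of serializing everything behind a single lock; the limit of 3 is an arbitrary assumption, SemaphoreSlim lives in System.Threading, and CreateImage/ImageParams are the asker's existing code:

// At most 3 image generations run at once (assumed limit); others queue here.
private static readonly SemaphoreSlim Throttle = new SemaphoreSlim(3, 3);

[HttpGet]
[Route("example")]
public HttpResponseMessage GetImage([FromUri] ImageParams imageParams)
{
    Throttle.Wait();             // blocks extra callers instead of letting them all run
    try
    {
        return CreateImage(imageParams);
    }
    finally
    {
        Throttle.Release();      // always free the slot, even if CreateImage throws
    }
}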
Looking at this resource,
https://msdn.microsoft.com/en-us/library/dd560842(v=vs.110).aspx
I've tried adding this element to the System.Web section but it says it is not a valid child element (even though I am running IIS version 10)
I was able to edit the aspnet.config file and change the applicationPool value from 0 (the default, no limit) to 1 and then 3, but this did not yield any different results at all.
Any input would be appreciated, thanks for reading
Edit:
This is a follow-up to my question at the link below, which shows the code the errors point to and some of my attempts to analyze them:
Diagnosing OutOfMemory issues with Image Processing API in ASP.NET
There is a server that publishes some XML data every 5 seconds for GET fetch. The URL is simple and does not change, like www.XXX.com/fetch-data. The data is published in a loop every 5 seconds precisely, and IS NOT guaranteed to be unique every time (but does change quite often anyway). Apart from that, I can also fetch XML at www.XXX.com/fetch-time, where server time is stored, in unix time format. So, the fetch-time resolution is unfortunately just in seconds.
What I need is a way to synchronize my client code so that it fetches the data AS SOON AS POSSIBLE after it is published. If I just naively fetch in a loop every 5 seconds, then if I get really unlucky my loop might start right before the server's loop ends, and I will basically always end up with 5-second-old data. I need a mechanism to get the server and client loops in step. I also need to compensate for lag (ping), so that the fetch request is actually sent a little before the server publishes the new data.
The server code is proprietary and can't be changed, so all the hard stuff must be done by client. Also, there are many other questions about high-precision time measurements and sleep functions, so you can abstract from those and take them as granted. Any help with the algorithm would be much appreciated.
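For illustration, a rough sketch of one possible client-side approach: probe quickly until the payload changes to estimate the publish instant and the network delay, then schedule each fetch just after the next expected publish. All of this (probe interval, safety margin, exact-period assumption) is an assumption layered on top of the question, not a known-good algorithm:

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class FetchSync
{
    static async Task Main()
    {
        var http = new HttpClient();
        var dataUrl = "http://www.XXX.com/fetch-data";
        var period = TimeSpan.FromSeconds(5);

        // Phase 1: poll quickly until the payload changes, to locate the publish instant.
        string last = await http.GetStringAsync(dataUrl);
        var sw = Stopwatch.StartNew();
        TimeSpan publishPhase = TimeSpan.Zero;   // local (stopwatch) time of the last observed publish
        TimeSpan oneWay = TimeSpan.Zero;         // estimated one-way network delay
        while (true)
        {
            await Task.Delay(200);               // 200 ms probe interval (assumed)
            var requestStart = sw.Elapsed;
            string current = await http.GetStringAsync(dataUrl);
            var roundTrip = sw.Elapsed - requestStart;
            if (current != last)
            {
                oneWay = TimeSpan.FromTicks(roundTrip.Ticks / 2);
                publishPhase = requestStart + oneWay;   // rough estimate of when the server published
                last = current;
                break;
            }
            last = current;
        }

        // Phase 2: fetch just after each expected publish, sending a little early to offset lag.
        while (true)
        {
            var nextPublish = publishPhase + period;
            var sendAt = nextPublish - oneWay + TimeSpan.FromMilliseconds(50);   // 50 ms safety margin
            var wait = sendAt - sw.Elapsed;
            if (wait > TimeSpan.Zero) await Task.Delay(wait);
            last = await http.GetStringAsync(dataUrl);
            publishPhase = nextPublish;          // assumes the server period is exactly 5 s
            Console.WriteLine($"fetched at {DateTime.Now:HH:mm:ss.fff}: {last.Length} chars");
        }
    }
}

Because the data is not guaranteed to change every cycle, phase 1 may need several cycles to see a change, and the phase estimate will drift over time, so the detection step would have to be re-run occasionally.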
In a .NET web application, I had to convert HTML to PDF on the fly. I played around with some open source projects, and finally settled on wkhtmltopdf. On the server side, my app invokes a wkhtmltopdf process, passes it the arguments, and presents the user with the PDF file.
How bad is this approach from a security standpoint? Is it more vulnerable to bots?
Suppose the spawned program has some buffer overflow error when given untrustworthy input, that causes arbitrary code to run. On the good side: hey, the arbitrary code is now running in another process, not the server process. On the bad side: the arbitrary code now has all the rights that the process has.
Isolating subsystems to their own process is a good practice but don't stop there. Use defense in depth.
Start the new process with the least amount of privilege it needs to operate correctly. That way if there is a successful attack on it, the damage is limited.
Sanitize the inputs to the process, particularly if they come from an untrustworthy source. Make sure the files are a reasonable size and contain reasonable data.
You want a successful attack to have to jump through a dozen impossible hoops, not just one.
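For illustration, a minimal sketch of what that can look like when spawning wkhtmltopdf from C#. The install path, size cap, and timeout are assumed values, and this is not a complete hardening recipe, just the "validate the input, avoid the shell, limit what the process can do" idea in code:

using System;
using System.Diagnostics;
using System.IO;

static class PdfConverter
{
    public static void Convert(string htmlPath, string pdfPath)
    {
        // Sanity-check the input before handing it to the external tool.
        var info = new FileInfo(htmlPath);
        if (!info.Exists || info.Length > 5 * 1024 * 1024)       // 5 MB cap (assumed limit)
            throw new InvalidOperationException("Rejected input file.");
        if (!Path.GetExtension(htmlPath).Equals(".html", StringComparison.OrdinalIgnoreCase))
            throw new InvalidOperationException("Rejected input type.");

        var psi = new ProcessStartInfo
        {
            FileName = @"C:\tools\wkhtmltopdf\wkhtmltopdf.exe",  // assumed install path
            Arguments = $"\"{htmlPath}\" \"{pdfPath}\"",         // quoted args, no shell involved
            UseShellExecute = false,                             // never route through cmd.exe
            CreateNoWindow = true
        };
        // Run the worker under the least-privileged account available (app pool identity
        // or a dedicated user), so a compromised wkhtmltopdf cannot reach anything else.

        using (var proc = Process.Start(psi))
        {
            if (!proc.WaitForExit(60 * 1000))                    // 60 s timeout (assumed)
            {
                proc.Kill();
                throw new TimeoutException("wkhtmltopdf took too long.");
            }
        }
    }
}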
Joe's point about denial of service is also a good one to think about.
It's vulnerable to people swamping your server and DOSsing it. You could place requests in a message queue, and then have a service processing items off the queue. This means you can guarantee that you have at most N processes running. And the worst case, you have a long queue, which you can cancel.
If you use a message queue, you can move the queue consumer onto another server (or servers). This helps spread server load if you have a lot of demand for your service. Running on another server also means limited access to data, which is good for security: the executable can't access files and memory it doesn't need.
The downside is that this is asynchronous, and you need to notify that the file is ready for download. You also need to store it somewhere whilst it is waiting to be downloaded.
An upside to this is that the user isn't tying up an HTTP connection whilst waiting, and if the process takes a long time to run, the user's connection won't time out.
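For illustration, an in-process version of the queue idea, with a small pool of workers draining a queue so that at most N conversions run at once. The queue type, worker count, and the convertOne callback are assumptions; a real deployment might use an external queue (MSMQ, RabbitMQ, etc.) so the consumers can live on another server, as described above:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConversionQueue
{
    private readonly BlockingCollection<string> _pending = new BlockingCollection<string>();

    public ConversionQueue(int maxWorkers, Action<string> convertOne)
    {
        // At most maxWorkers conversions run at any time; everything else waits in the queue.
        for (int i = 0; i < maxWorkers; i++)
        {
            Task.Run(() =>
            {
                foreach (var job in _pending.GetConsumingEnumerable())
                    convertOne(job);             // e.g. spawn wkhtmltopdf for this job
            });
        }
    }

    // Called from the web request: enqueue the job and return immediately.
    public void Enqueue(string jobId) => _pending.Add(jobId);
}

The request handler only enqueues and returns, which is exactly why the notification and temporary storage steps described above become necessary.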
Running a process on the server is not a security flaw in itself. In cases like yours, running the process is the result of some other action or operation requested by someone, so any security flaw would lie in the methods/architecture leading up to the action that runs the executable. If you feel secure enough at that layer, I would not worry much about invoking a separate process, especially since it brings more value to the service you offer.
I have two different classes that I am testing for sending files to the browser.
The first one, at http://pastebin.org/1187259, uses Range-specific headers in order to support resuming.
The second one, at http://pastebin.org/1187454, uses chunked reading to send large files.
Both work fine, with one difference: the first one is much slower than the second in terms of download speed. With the first one I cannot get past 80 KB/s; with the second one I can download as fast as my connection allows.
I have done a few tests and the result was the same. Is this an illusion, or is there something in the first one that slows the download speed?
I also noticed that the first one seems to block other requests. For example, if I request a file from the server with the first one, the server will not respond to my other requests until the download finishes, even if I request a different page. It doesn't do that if I open separate sessions from different browsers.
Thanks.
At last! I managed to fix the issue by adding EnableSessionState="ReadOnly" to the download page.
See http://www.guidanceshare.com/wiki/ASP.NET_2.0_Performance_Guidelines_-_Session_State
"Use the ReadOnly Attribute When You Can
For pages that only need read access to session data, consider setting EnableSessionState to ReadOnly.
Why
Page requests that use session state internally use a ReaderWriterLock object to manage session data. This allows multiple reads to occur at the same time when no lock is held. When the writer acquires the lock to update session state, all read requests are blocked. Normally two calls are made to the database for each request. The first call connects to the database, marks the session as locked, and executes the page. The second call writes any changes and unlocks the session. By setting EnableSessionState to ReadOnly, you avoid blocking, and you send fewer calls to the database thus improving the performance.
"
I've developed a program using Delphi that, among other features, does a lot of database reading of float values and many calculations on those values. At the end of these calculations, it shows a screen with some results. These calculations take a while to finish: today, something like 5 to 10 minutes before the results screen finally shows up.
Now my customers are requesting a .NET version of this program, as almost all of my other programs have already moved to .NET. But I'm afraid this time-consuming calculation procedure won't fit the web scenario and will run the user into some kind of timeout error.
So I'd like some tips or advice on how to handle this kind of procedure. Initially I thought about calling a local executable (which could even be my original Delphi program, running as a console application) and, after some time, showing the results screen in a web page. But, again, I'm afraid this wouldn't be the best approach.
An external process is a reasonable way to go about it. You could fire off a thread inside the ASP.NET process (i.e. just with new Thread()), which could also work, but there are issues around process recycling and pooling that might make this a little harder. Simply firing off an external process and then maybe using some Ajax polling in the browser to check on its status seems like a good solution to me.
FWIW, another pattern that some existing online services use (for instance, ones that do file conversions that may take a few minutes) is having the person put in an email address and just emailing the results once it's done; that way, if they accidentally kill their browser or it takes a little longer than expected, it's no big deal.
Another approach I've taken in the past is basically what Dean suggested - kick it off and have a status page that auto-refreshes, and once it's complete, the status includes a link to results.
How about:
Create a Web Service that does the fetching/calculation.
Set the timeout so it won't expire.
YourService.HeavyDutyCalculator svc = new YourService.HeavyDutyCalculator();
svc.Timeout = 10 * 60 * 1000; // 10 minutes: 10 min x 60 s x 1000 ms
YourService.CalculateResult result = svc.Calculate();
Note that you can set it to -1 (Timeout.Infinite) if you want it to never time out.
MSDN:
Setting the Timeout property to Timeout.Infinite indicates that the request does not time out. Even though an XML Web service client can set the Timeout property to not time out, the Web server can still cause the request to time out on the server side.
Call that web method inside your web page.
Place a waiting/in-progress image.
Register for the web method's completed event and show the results once it fires.
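For illustration, the client side of those steps can look roughly like this with an old-style ASMX proxy; the CalculateAsync method and CalculateCompleted event are the names such a generated proxy would typically expose, assumed here rather than taken from the actual service:

var svc = new YourService.HeavyDutyCalculator();
svc.Timeout = 10 * 60 * 1000;                  // 10 minutes, as above

svc.CalculateCompleted += (sender, e) =>
{
    // Runs when the server finishes: hide the waiting image and render e.Result here.
    if (e.Error == null)
        RenderResult(e.Result);                // RenderResult is a hypothetical helper
};

ShowWaitingImage();                            // hypothetical helper that shows the in-progress image
svc.CalculateAsync();                          // returns immediately instead of blocking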
You can also update the timeout in your web.config:
<httpRuntime useFullyQualifiedRedirectUrl="true|false"
maxRequestLength="size in kbytes"
executionTimeout="seconds"
minFreeThreads="number of threads"
minFreeLocalRequestFreeThreads="number of threads"
appRequestQueueLimit="number of requests"
versionHeader="version string"/>
Regardless of what else you do you need a progress bar or other status indication to the user. Users are used to web pages that load in seconds, they simply won't realise (even if you tell them in advance) that they have to wait a full 10 minutes for their results.