I have a problem with an ASP.NET MVC project hosted on IIS. I'm sending the same request hundreds of times in a loop:
function Test(count) {
    for (var i = 0; i < count; i++) {
        $.ajax({
            url: "http://example.com?someparam=sth&test=" + i,
            context: document.body
        }).done(function () {
            console.log("done");
        });
    }
}
Test(500)
Here are the times taken by individual requests, in milliseconds (this is just a portion of the requests sent):
221
215
225
429
217
228
227
209
236
355
213
224
257
249
223
211
227
1227
168
181
257
3241
201
244
130
198
283
1714
146
136
177
3304
294
868
772
2750
138
1283
221
775
136
235
792
278
641
1707
880
1711
As you can see, there are peaks for some of the requests, and the time taken can be more than 10 times the average of the other requests.
I thought it could be a garbage collector issue, but I don't think it is: I forced a collection (GC.Collect()) on each request and got the same result, with the delays still there in the log.
This happens not only in my MVC project but also in an empty one.
I created a new MVC project and sent lots of requests to Home/About. The result was the same.
I tried with an action that returns an EmptyResult... same result.
If anybody knows why this happens and has a solution for the problem, or just has a suggestion, please share it; I will be really grateful.
I'm also using .NET Memory Profiler, but I can't figure out how to track each request and catch exactly the ones with delays. Can I do that with .NET Memory Profiler? If not, please suggest another profiler that will work for me.
Thank you!
EDIT: I also tried with an empty WebForms project. There were delays only for the first 5 requests... but that is surely just IIS warming up. There were no delays for the next 1495 requests.
Your testing methodology has no way to identify where the bottleneck is occurring, only that something is causing a delay.
Also, there is no mention of whether this is an isolated server. If you are hitting a production website, you'll be affected whenever pages are requested by other visitors to the site.
At the very least, you'll need to add a control to this. I would start by loading a plain text file from the same web server. Another point to note is that most web browsers limit the number of concurrent requests to the same host: historically two simultaneous requests, typically six in modern browsers. Your delay could simply be a backlog of ajax requests queued in your client.
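If it is the browser queue, a quick way to rule that out is to run the same flood from a console app instead; a minimal sketch (the URL and request count are placeholders):

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class LoadTest
{
    static async Task Main()
    {
        // One shared client; requests go out sequentially, so each timing
        // reflects the server alone, with no client-side connection queue.
        using (var client = new HttpClient())
        {
            for (int i = 0; i < 500; i++)
            {
                var sw = Stopwatch.StartNew();
                await client.GetAsync("http://example.com?someparam=sth&test=" + i);
                Console.WriteLine(sw.ElapsedMilliseconds);
            }
        }
    }
}

If the peaks disappear when the requests are issued this way, the delays you measured were client-side queueing rather than server time.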
We are doing HTTP calls with Windows Authentication between ASP.NET apps (specifically a .NET Core app and a standard .NET Framework 4.5.1 app) using System.Net.Http.HttpClient, like this:
var client = new HttpClient(new HttpClientHandler { Credentials = CredentialCache.DefaultCredentials });
var response = await client.GetAsync(url);
...
This works fine, except that the first request takes 10 seconds. The requests then go fast for about 40 seconds, and then one request takes 10 seconds again. This cycle goes on forever.
Looking at the IIS logs on the receiving end, we can see that every request is denied (401) and then a follow-up request goes through, and every so often the delay between the two is about 10 seconds. This is all invisible to the client code; it is worked out by the underlying framework.
Example:
2017-03-17 14:19:40 10.241.108.23 GET /person/search/john - 80 - 10.211.37.246 - 401 2 5 31
2017-03-17 14:19:40 10.241.108.23 GET /person/search/john - 80 utv\frank 10.211.37.246 - 200 0 0 93
2017-03-17 14:19:41 10.241.108.23 GET /person/search/johnn - 80 - 10.211.37.246 - 401 2 5 46
2017-03-17 14:19:51 10.241.108.23 GET /person/search/johnn - 80 utv\frank 10.211.37.246 - 200 0 0 281
It seems as if the credentials are somehow cached and have to be refreshed every 40 seconds or so.
It is worth noting that this problem doesn't occur when both applications are run locally, only when they are run in the actual hosting environment.
What's going on?
Is it expected behaviour that the consumer has to make two calls for every request? And why do some of the requests take 10 seconds to authenticate?
Any help would be appreciated.
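For what it's worth, the next thing we plan to try (a hedged sketch, not a confirmed fix) is making sure we reuse a single HttpClient and setting PreAuthenticate, so the handler can reuse authenticated connections instead of renegotiating on every call:

using System.Net;
using System.Net.Http;

static class WinHttp
{
    // One client for the lifetime of the app, so connections (and their
    // completed authentication handshakes) are reused across requests.
    public static readonly HttpClient Client = new HttpClient(
        new HttpClientHandler
        {
            Credentials = CredentialCache.DefaultCredentials,
            // Hedged: asks the handler to send credentials proactively
            // after the first challenge; the effect depends on whether
            // NTLM or Kerberos is negotiated in the hosting environment.
            PreAuthenticate = true
        });
}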
In which scenario can I get an error like this?
The Elasticsearch service is on the same computer as the client calling it, so there is no network issue. The server has free memory, free disk, and there is always at least 15-20% CPU free.
I'm inserting a lot of data into Elasticsearch, and it has never timed out before; today, though, there are hundreds of similar errors in our logs.
Could it be because there are a lot of requests in parallel? The insertion code is heavily multi-threaded.
InternalServerError - Invalid NEST response built from a unsuccessful low level call on POST: /albums/albummetadata/f3c20bb7-8f60-5d80-fe87-449bdf3d828a/_update
# Audit trail of this API call:
- [1] BadResponse: Node: http://localhost:9200/ Took: 00:01:00.3240283
- [2] MaxTimeoutReached: Took: -736395.18:15:40.4144464
# OriginalException: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData) in C:\code\elasticsearch-net\src\Elasticsearch.Net\Connection\HttpConnection.cs:line 145
# Request:
# Response:
+The operation has timed out
Are you doing bulk inserts? Are your documents large?
It sounds reasonable that under heavy load inserts may time out. You could try to detect increased error rates and scale back/slow down your insertions. You could also increase the timeout threshold, but that will likely only go so far: you'll still end up with an ever-growing backlog of requests that will eventually start failing again.
Another option is to scale up your ES cluster, either by increasing the specs of your current nodes or by adding more nodes.
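A minimal sketch of both client-side suggestions, assuming NEST (the timeout value and concurrency cap are illustrative, not recommendations):

using System;
using System.Threading;
using Nest;

static class EsClientFactory
{
    // Cap the number of in-flight requests from the multi-threaded
    // insertion code; tune the limit to what the node can sustain.
    public static readonly SemaphoreSlim Throttle = new SemaphoreSlim(8);

    public static ElasticClient Create()
    {
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .RequestTimeout(TimeSpan.FromMinutes(2)); // default is 1 minute
        return new ElasticClient(settings);
    }
}

Each insert would then be wrapped in Throttle.WaitAsync()/Throttle.Release(), so a burst of parallel updates cannot pile up until it exceeds the timeout.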
When a page takes a long time to process in IIS, all other page requests coming in are delayed until the first one either times out or responds.
This was brought to light by a 3rd party API having high response times. However, I can duplicate the issue by putting a Sleep in any page.
We are using DNN version 7.0.6
For example:
The page http://www.website.com/foo.aspx has the code System.Threading.Thread.Sleep(10000); in the Page_Load.
While this page is sleeping, http://www.website.com/bar.aspx is requested. Bar.aspx (a page that usually responds right away) will not respond until foo.aspx has completed its request.
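For reference, the repro is just this in the foo.aspx code-behind (class name is illustrative):

using System;

public partial class Foo : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Block the request thread for 10 seconds; while this runs,
        // bar.aspx requests from the same browser hang as described.
        System.Threading.Thread.Sleep(10000);
    }
}

(Not confirmed in this thread, but one well-known cause of exactly this serialization is ASP.NET's per-session lock: concurrent requests that share a session are processed one at a time whenever the pages use read-write session state.)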
From IIS Logs, you can see this process:
#Fields: date time cs-uri-stem sc-status sc-substatus sc-win32-status time-taken
2016-08-24 19:44:20 /bar.aspx 200 0 0 69
2016-08-24 19:44:24 /foo.aspx 200 0 0 10053
2016-08-24 19:44:24 /bar.aspx 200 0 0 9204
2016-08-24 19:44:26 /bar.aspx 200 0 0 91
I have tried adding additional worker processes, and the problem still exists.
I feel like I'm missing something simple. Am I just overlooking some fundamental way IIS or DNN works? Can anything be done to prevent this from happening?
If the default is 110 seconds, why do I see requests going beyond that (up to 177 seconds)?
I'd expect, and hope, that once that time is reached the request is cancelled and its resources reallocated.
I'm seeing these response times in my APM tool (Dynatrace), which instruments the code, so it is unlikely to be taking the times from the server logs.
(Referring to: In our IIS logs, why do requests last 5 min and longer when executionTimeout is 110 seconds?)
Thank you
Have you considered that the requests may be being queued on the server? If you look at the perfmon counter Requests Queued, you might see some queuing going on.
Also look at Request Wait Time to get an indication of how long the last request waited. A sketch for reading both counters from code follows.
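This reads the two counters programmatically (the classic ASP.NET category/counter names; verify them in perfmon on your machine, as they can vary by .NET version):

using System;
using System.Diagnostics;

class QueueCheck
{
    static void Main()
    {
        // Non-instance counters from the "ASP.NET" performance category.
        using (var queued = new PerformanceCounter("ASP.NET", "Requests Queued"))
        using (var wait = new PerformanceCounter("ASP.NET", "Request Wait Time"))
        {
            Console.WriteLine($"Requests Queued:   {queued.NextValue()}");
            Console.WriteLine($"Request Wait Time: {wait.NextValue()} ms");
        }
    }
}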
Can you send a screenshot of the PurePath showing the Exec Time and also the Elapsed Time column in the tree? Maybe the PurePath itself actually gets aborted by IIS after 110 s, but some asynchronous activity in your ASP.NET app is still working and was not interrupted by the IIS timeout. The PurePath tree should show that, as it shows asynchronous subpaths.
andi
I am submitting HTTP POST requests via HttpWebRequest which contain a large amount of content. I would like to gzip the message content. Is this possible?
Does IIS 7 have to be configured to handle the compressed content? It has already been configured to serve compressed responses.
I've tried adding a Content-Encoding: gzip header and writing to the request stream wrapped in a GZipStream, but the server returns a 504 (GatewayTimeout), which seems odd.
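Roughly, the failing attempt looks like this (sketch; the URL and payload are placeholders):

using System;
using System.IO;
using System.IO.Compression;
using System.Net;
using System.Text;

class GzipPost
{
    static void Main()
    {
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
        request.Method = "POST";
        request.ContentType = "application/octet-stream";
        // Tell the server the body is gzip-compressed.
        request.Headers["Content-Encoding"] = "gzip";

        byte[] payload = Encoding.UTF8.GetBytes("large message content ...");
        using (var gzip = new GZipStream(request.GetRequestStream(), CompressionMode.Compress))
        {
            gzip.Write(payload, 0, payload.Length);
        }

        // This is where the 504 (GatewayTimeout) currently comes back,
        // surfaced as a WebException.
        var response = (HttpWebResponse)request.GetResponse();
        Console.WriteLine((int)response.StatusCode);
    }
}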
I don't believe IIS7 supports gzip-compressed requests out of the box. Here's why: on my IIS7 machine, gzip.dll does not export any decompression methods.
c:\Windows\System32\inetsrv>c:\vc9\bin\dumpbin.exe -exports gzip.dll
Microsoft (R) COFF/PE Dumper Version 9.00.30729.01
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file gzip.dll
File Type: DLL
Section contains the following exports for gzip.dll
00000000 characteristics
47919400 time date stamp Sat Jan 19 01:09:04 2008
0.00 version
1 ordinal base
6 number of functions
6 number of names
ordinal hint RVA name
1 0 0000242D Compress
2 1 00002E13 CreateCompression
3 2 000065AE DeInitCompression
4 3 000012EE DestroyCompression
5 4 0000658D InitCompression
6 5 000065B6 ResetCompression
Summary
1000 .data
1000 .reloc
1000 .rsrc
6000 .text
I think this represents a change in gzip.dll. I believe in prior versions of gzip.dll, there were 12 exported methods, including 6 that did Decompression.
The vast majority of web servers do not support compressed request bodies. mod_deflate can be configured to support it on Apache but seldom actually is (as a zip-bomb is an easy potential DoS attack). I'm not aware of an IIS solution.
If you are talking back to your own server, there is of course nothing stopping you from doing the compression at the application level; a sketch follows. If you have to pass a standard form type for the backend to read, you should pick multipart/form-data, as URL-encoding would bloat the binary data of the compressed content parameter.
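A hedged sketch of the server side as an ASP.NET handler (hypothetical name, registered in web.config as usual) that inflates a gzipped request body itself, since IIS won't do it:

using System.IO;
using System.IO.Compression;
using System.Web;

public class GzipUploadHandler : IHttpHandler
{
    public bool IsReusable => true;

    public void ProcessRequest(HttpContext context)
    {
        Stream body = context.Request.InputStream;
        // Only decompress if the client declared a gzipped body.
        if (context.Request.Headers["Content-Encoding"] == "gzip")
            body = new GZipStream(body, CompressionMode.Decompress);

        using (var reader = new StreamReader(body))
        {
            string content = reader.ReadToEnd();
            // ... process the decompressed content ...
            context.Response.Write("received " + content.Length + " chars");
        }
    }
}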
I got the same error.
Solved it by adding executionTimeout to web.config:
<httpRuntime maxRequestLength="1048576" executionTimeout="300" />
executionTimeout is specified in seconds. Note that it is honoured only when debug="false" is set in the <compilation> element; with debug="true" the timeout is effectively ignored.