If the default is 110 seconds, why do I see requests going beyond that (up to 177 seconds)?
I'd expect and hope that once time is reached the request is cancelled and resources reallocated.
I'm seeing these response times in my APM tool (Dynatrace), which instruments the code and likely doesn't get the times from the server logs.
(Referring to: In our IIS logs, why do requests last 5 min and longer when executionTimeout is 110 seconds?)
Thank you
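For reference, the 110-second default is the executionTimeout attribute on the httpRuntime element in web.config. Note that this timeout is only enforced when the compilation element's debug attribute is false; a minimal fragment:

```xml
<system.web>
  <!-- executionTimeout is in seconds; it applies only when debug="false" -->
  <compilation debug="false" />
  <httpRuntime executionTimeout="110" />
</system.web>
```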
Have you considered that the requests may be being queued on the server? If you look at the perfmon counter Requests Queued, you might see some queuing going on.
Also look at Request Wait Time to get an indication of how long the last request waited.
Can you send a screenshot of the PurePath showing the Exec Time but also the Elapsed Time column in the tree? Maybe the PurePath itself actually gets aborted by IIS after 110 s, but some asynchronous activity in your ASP.NET app is still working and was not interrupted by the IIS timeout. The PurePath tree should show that, as it shows asynchronous subpaths.
andi
Related
Could you please explain how a load or stress test runs inside Visual Studio when the test is driven by the number of simultaneous users?
Example:
Step user count: 0
Initial user count: 2.
Run duration and step duration: these don't really matter, because the user count is always the same (2); let both be 30 seconds. It is a slightly odd configuration for a test, but it helps to show the main idea of my question.
The web service is able to respond after a short period of time (0.1-0.5 sec).
There are 2 users in the load test, and they both start a request to the web service; one user receives its result and the other does not yet. When will the first user start another request: 1) immediately after receiving its response, or 2) only once the second user has received its response too? Am I right that users' requests run completely independently, so that one user can receive 30/0.1 = 300 responses and the other 30/0.5 = 60 responses over the test? Both users send the same requests to the same service.
In these 2 scenarios, a different number of errors could be received during this small test. For me it is important to understand after what period of time (or event) the next request begins for the same user.
Thank you.
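Assuming each virtual user fires its next request immediately after receiving its own response (zero think time), the arithmetic in the question can be checked with a small simulation. The response times (0.1 s and 0.5 s) and the 30 s run duration come from the question; everything else here is a sketch, not Visual Studio's actual scheduler:

```python
# Sketch: two independent virtual users, each looping for the run duration.
# Each user starts its next request immediately after its own response
# arrives, regardless of what the other user is doing.

def simulate_user(response_ms: int, run_ms: int) -> int:
    """Count how many responses one user receives within run_ms milliseconds."""
    elapsed = 0
    responses = 0
    while elapsed + response_ms <= run_ms:
        elapsed += response_ms  # request + response, no think time in between
        responses += 1
    return responses

fast = simulate_user(100, 30_000)  # 0.1 s responses over 30 s -> 300
slow = simulate_user(500, 30_000)  # 0.5 s responses over 30 s -> 60
print(fast, slow)
```

This matches the 30/0.1 = 300 and 30/0.5 = 60 figures in the question, under the zero-think-time assumption.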
In which scenario can I get an error like this?
The Elasticsearch service is on the same computer as the client calling it, so there is no network issue. The server has free memory, free disk space, and there is always at least 15-20% CPU free.
I'm inserting a lot of data into Elasticsearch, and it has never timed out before; but today there are hundreds of similar errors in our logs.
Could it be because there are a lot of requests in parallel? The insertion code is heavily multi-threaded.
InternalServerError - Invalid NEST response built from a unsuccessful low level call on POST: /albums/albummetadata/f3c20bb7-8f60-5d80-fe87-449bdf3d828a/_update
# Audit trail of this API call:
- [1] BadResponse: Node: http://localhost:9200/ Took: 00:01:00.3240283
- [2] MaxTimeoutReached: Took: -736395.18:15:40.4144464
# OriginalException: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at Elasticsearch.Net.HttpConnection.Request[TReturn](RequestData requestData) in C:\code\elasticsearch-net\src\Elasticsearch.Net\Connection\HttpConnection.cs:line 145
# Request:
# Response:
+The operation has timed out
Are you doing bulk inserts? Are your documents large?
It sounds reasonable that under heavy load, inserts may time out. You could try to detect increased error rates and scale back/slow down your insertions. You could also increase the timeout threshold, but that will likely only go so far: you'll still end up with an ever-growing backlog of requests that will eventually start failing again.
Another option is to scale up your ES cluster, either by increasing the specs of your current nodes or by adding more nodes.
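The "detect errors and slow down" idea can be sketched generically. This is not NEST/Elasticsearch client code; `do_insert`, the exception type, and the delay values are placeholders for whatever your insertion path and timeout failures actually look like:

```python
import time

def insert_with_backoff(do_insert, docs, max_retries=5, initial_delay=1.0):
    """Insert documents one at a time; on a timeout-style failure, back off
    exponentially before retrying instead of piling on more load."""
    for doc in docs:
        delay = initial_delay
        for _attempt in range(max_retries):
            try:
                do_insert(doc)
                break                     # success: move on to the next doc
            except TimeoutError:
                time.sleep(delay)         # give the server room to drain its queue
                delay = min(delay * 2, 30.0)
        else:
            raise RuntimeError("insert kept timing out; consider scaling up")
```

The key point is that each timeout reduces the offered load, rather than immediately retrying into an already-saturated server.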
When a page is taking a long time to process in IIS, all other page requests coming in are delayed until the first one either times out, or responds.
This was brought to light by a 3rd party API having high response times. However, I can duplicate the issue by putting a Sleep in any page.
We are using DNN version 7.0.6
For Example:
The page http://www.website.com/foo.aspx has the code System.Threading.Thread.Sleep(10000); in the Page_Load.
While this page is sleeping, http://www.website.com/bar.aspx is requested. Bar.aspx (a page that usually responds right away) will not respond until foo.aspx has completed its request.
From IIS Logs, you can see this process:
#Fields: date time cs-uri-stem sc-status sc-substatus sc-win32-status time-taken
2016-08-24 19:44:20 /bar.aspx 200 0 0 69
2016-08-24 19:44:24 /foo.aspx 200 0 0 10053
2016-08-24 19:44:24 /bar.aspx 200 0 0 9204
2016-08-24 19:44:26 /bar.aspx 200 0 0 91
I have tried adding additional worker processes, and the problem still exists.
I feel like I'm missing something simple. Am I just overlooking some fundamental way IIS or DNN works? Can anything be done to prevent this from happening?
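The log alone doesn't say what the serializing resource is, but the timing pattern (bar.aspx taking ~9.2 s while foo.aspx sleeps for 10 s) is exactly what you'd see when two requests contend for a single shared lock. A minimal sketch of that effect using plain Python threads (not IIS; the lock here stands in for whatever is serializing the requests):

```python
import threading
import time

page_lock = threading.Lock()   # stands in for whatever serializes the requests
timings = {}

def handle(page, work_seconds):
    """Simulate a page handler that must hold a shared lock while it works."""
    start = time.monotonic()
    with page_lock:            # both "pages" contend for the same lock
        time.sleep(work_seconds)
    timings[page] = time.monotonic() - start

slow = threading.Thread(target=handle, args=("foo.aspx", 1.0))
fast = threading.Thread(target=handle, args=("bar.aspx", 0.05))
slow.start()
time.sleep(0.1)                # bar.aspx arrives while foo.aspx is "sleeping"
fast.start()
slow.join()
fast.join()
# bar.aspx's total time is dominated by waiting for foo.aspx to release the lock
```

If your real requests show this shape, the question becomes: what shared resource are they both holding?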
The requirement was to create an application for stress testing of a web service.
The idea was to bombard the web service with, say, 1000K HTTPWebRequests of 4 KB each.
To achieve this, we created an app structured roughly like this: a few threads add the data to be sent to a queue, and a thread pool sends those requests asynchronously:
Task responseTask = Task.Factory.FromAsync(testRequest.BeginGetResponse, testRequest.EndGetResponse, null);
But now what is happening is that after some time the number of requests/sec decreases significantly (maybe because the response time of the service has increased; but then, if we are sending the requests asynchronously, should the response time matter?). In addition, after some time the tool crashes with the message "application has stopped working", and the exception shown is an OutOfMemoryException.
One thing I have observed is that just before the app crashes, the response time of the web service increases significantly. Is that an indirect cause of the crash?
What is the remedy for it?
Maybe you are queueing up tasks without throttling, so the number of pending requests grows without bound. You need to throttle, even when using async behavior.
I'd like to implement a web service containing a method whose reply will be delayed by anywhere from under 1 second to about an hour (depending on whether the data is already cached or needs to be fetched).
Basically my question is what would be the best way to implement this if you are only able to connect from the client to the WebService (no notification possible)?
AFAIK this will only be possible by using some kind of polling. But polling is bad, so I'd rather avoid it. The other extreme would be to just leave the connection open until the method is done, but I guess this could end up slowing down the web server and the network. I considered combining these two techniques: the client would call the method, and the server would return after at least 10 seconds with either a message that the client needs to poll again or the actual result.
What are your thoughts?
You probably want to have a look at Comet.
I would suggest a sort of intelligent polling, if possible:
On the first request, return a token to represent the request. This is what gets presented in future requests, so it's easy to check whether or not that request has really completed.
On future requests, hold the connection open for a certain amount of time (e.g. a minute, possibly specified by the client) and return either the result or a response of "still no results; please try again at X", where X is your best guess about when the response will be completed.
Advantages:
You allow the client to use the "hold a connection open" model, which is relatively expensive (in terms of connections) but allows the response to be served as soon as it's ready. Make sure you don't hold onto a thread per connection, though! (And have some kind of time limit...)
By telling the client when to come back, you can implement a backoff policy: even if you don't know when the result will be ready, you could have a "back off for 1, 2, 4, 8, 16, 30, 30, 30, 30..." minutes policy. (You should potentially check that the client isn't ignoring this.) You don't end up with masses of wasted polls for long misses, but you still get quick results quickly.
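On the client side, the token-plus-backoff scheme might look like this sketch (the `check(token)` call and its `(done, payload)` return shape are assumptions, not a real API; the delays use the answer's 1, 2, 4, 8, 16, 30 schedule, shown here in seconds rather than minutes for brevity):

```python
import time

def poll_with_backoff(check, token, delays=(1, 2, 4, 8, 16, 30)):
    """Poll a request token, doubling the wait between polls up to a cap,
    until the server reports the result is ready.

    `check(token)` is assumed to return (done, payload), where payload
    holds the result once done is True."""
    i = 0
    while True:
        done, payload = check(token)
        if done:
            return payload
        time.sleep(delays[i])
        i = min(i + 1, len(delays) - 1)   # stay at the cap after reaching it
```

A well-behaved client like this polls often at first (for quick results) and rarely later (for the hour-long ones), which is exactly the trade-off the answer describes.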
I think that for something which could take an hour to respond a web service is not the best mechanism to use.
Why is polling bad? Surely if you adjust the frequency of the polling it won't be so bad. Perhaps double the time between polls, with a maximum of about five minutes.
Some web services I've worked with return a "please try again in ..." XML message when they can't respond immediately. I realise this is just a refinement of the polling technique, but if your server can determine at request time what the likely delay will be, it could tell the client and then forget about the request, leaving the client to ask again once the polling interval has expired.
There are timeouts on IIS and on the client side that will prevent you from leaving the connection open.
This is also impractical, because resources/connections are blocked on the server.
Why do you want the user to wait for such a long-running task? Let them look up the status of the operation somewhere.