Loading pages of a website in different threads - c#

For example, I can load the website 10 times sequentially with different pages (stackoverflow.com/questions/a, stackoverflow.com/questions/b, ...). The question is: will it be faster if I load the pages in 10 threads?

Most of the time taken to load a webpage is spent waiting for the HTTP response to come back from the server, and a large part of that is spent setting up the TCP connection.
HTTP has supported the concept of pipelining since version 1.1. This allows multiple requests to be sent along the same TCP connection, and also allows them to be sent before the replies have come back from the previous requests.
So yes, using ten threads could speed up loading ten different pages, but equally one thread could do the same by using asynchronous calls and firing off ten requests before the replies come back.
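As a minimal sketch of the single-threaded asynchronous approach (the question URLs here are placeholders), all ten requests can be in flight at once on one thread:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // A single HttpClient reuses TCP connections across requests,
        // so the connection-setup cost is paid far fewer times.
        using var client = new HttpClient();

        // Placeholder page list; substitute the pages you actually need.
        var urls = Enumerable.Range(0, 10)
            .Select(i => $"https://stackoverflow.com/questions/{i}");

        // Fire off all requests before any reply has come back,
        // then await them together: one thread, ten in-flight requests.
        var downloads = urls.Select(u => client.GetStringAsync(u));
        string[] pages = await Task.WhenAll(downloads);

        Console.WriteLine($"Fetched {pages.Length} pages");
    }
}
```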

Related

Log number of queued requests OWIN self-hosted

I host my OWIN application as a Windows Service, and I am getting a lot of timeout issues from different clients. I have some metrics in place around request/response time, but the numbers are very different: for example, I can see the client taking around one minute to perform a request that, on the server, appears to take 3-4 seconds. I am therefore assuming that the number of requests that can be accepted has reached its limit, and subsequent requests get queued up. Am I right? If that's the case, is there any way I can monitor the number of incoming requests at a given time and how big the queue is (as in the number of requests pending to be served)?
I am playing around with https://msdn.microsoft.com/en-us/library/microsoft.owin.host.httplistener.owinhttplistener.setrequestprocessinglimits(v=vs.113).aspx but it doesn't seem to have any effect.
Any feedback is much appreciated.
Thanks!
HttpListener is built on top of Http.Sys, so you need to use its performance counters and ETW traces to get this level of information.
https://msdn.microsoft.com/en-us/library/windows/desktop/cc307239%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
http://blogs.msdn.com/b/wndp/archive/2007/01/18/event-tracing-in-http-sys-part-1-capturing-a-trace.aspx
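As a sketch, the Http.Sys queue counters can also be read from code with System.Diagnostics.PerformanceCounter. The category and counter names below come from the Http.Sys performance counter set; the instance name is a hypothetical request-queue name you would replace with your own:

```csharp
using System;
using System.Diagnostics;

class QueueMonitor
{
    static void Main()
    {
        // Http.Sys exposes per-request-queue counters; "MyOwinService"
        // is a placeholder for your listener's queue instance name.
        var queueSize = new PerformanceCounter(
            "Http Service Request Queues", "CurrentQueueSize",
            "MyOwinService", true /* read-only */);
        var rejected = new PerformanceCounter(
            "Http Service Request Queues", "RejectedRequests",
            "MyOwinService", true /* read-only */);

        Console.WriteLine($"Queued:   {queueSize.NextValue()}");
        Console.WriteLine($"Rejected: {rejected.NextValue()}");
    }
}
```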

Large Volume Async Calls Being Blocked on Server

I have an application that must send hundreds to thousands of HTTP requests at once. It's a .NET Windows service that uses async calls. When my main server sends out small batches (around 1000 or fewer at a time) everything works fine; I get a response from the HTTP calls and all is good.
When it starts hitting 1500 or more at a time, though, all of a sudden I get very few to no responses from my HTTP requests. When I run these large batch tests on my local machine, though, I have no issues. Has anyone had similar experience and might know what the culprit could be that is holding back my .NET app?
Async calls use the ThreadPool behind the scenes, and creating a new thread for the pool can be time-consuming. Try checking ThreadPool.GetMaxThreads() to see how many threads can be created.
Another possibility is simply that your local machine is faster than the server.
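A quick sketch of inspecting (and, if it turns out to be the bottleneck, raising) the pool limits; the minimum of 200 below is an arbitrary illustration, not a recommendation:

```csharp
using System;
using System.Threading;

class ThreadPoolCheck
{
    static void Main()
    {
        // Inspect the limits the runtime chose for this machine;
        // the numbers will differ between your workstation and the server.
        ThreadPool.GetMaxThreads(out int maxWorkers, out int maxIo);
        ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
        Console.WriteLine($"Max: {maxWorkers} worker / {maxIo} IOCP");
        Console.WriteLine($"Min: {minWorkers} worker / {minIo} IOCP");

        // If bursts stall while the pool ramps up one thread at a time,
        // raising the minimum can help (tune to your own workload).
        ThreadPool.SetMinThreads(200, minIo);
    }
}
```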

How expensive is it to call a web service?

I've had a fairly good search on google and nothing has popped up to answer my question. As I know very little about web services (only started using them, not building them in the last couple of months) I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
To give you an example, my app is designed to make job updates, and for certain types of updates it will call the web service. It seems my options are these: I could create a datatable in my app of the updates that require the web service, pass the whole datatable to the web service, and write a method in the web service to process the datatable's updates. Alternatively, I could iterate through my entire table of updates (which includes updates other than those requiring the web service) and call the web service as and when an update requires it.
At the moment it seems like it would be simpler for me to pass each update rather than a datatable to the web service.
In terms of data being passed to the web service each update would contain a small amount of data (3 strings, max 120 characters in length). In terms of numbers of updates there would probably be no more than 200.
I was wondering whether I should be ok to call a particular web service as frequently as I wish (within reason), or should I build up requests to do in one go.
Web services or not, any calls routed over the network would benefit from building up multiple requests, so that they could be processed in a single round-trip. In your case, building an object representing all the updates is going to be a clear winner, especially in setups with slower connections.
When you make a call over the network, these things need to happen when a client communicates to a server (again, web services or not):
The data associated with your call gets serialized on the client
Serialized data is sent to the server
Server deserializes the data
Server processes the data, producing a response
Server serializes the response
Server sends serialized response back to the client
The response is deserialized on the client
Steps 2 and 6 usually cause a delay due to network latency. For simple operations, latency often dominates the timing of the call.
The latency on the fastest networks, used for high-frequency trading, is measured in microseconds; on regular ones it is in milliseconds. If you are sending 100 packets one by one on a network with 1 ms lag (2 ms per round-trip), you are wasting 200 ms on network latency alone! That is one fifth of a second, a lot of time by the standards of today's CPUs. If you can eliminate it simply by restructuring your requests, that's a great reason to do it.
You should usually favor coarse-grained remote interfaces over fine-grained ones.
Consider adding a 10ms network latency to each call - what would be the delay for 100 updates?
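(At 10 ms per call, 100 one-by-one updates spend at least a full second just waiting on the network.) A sketch of the two designs, using a hypothetical service contract whose names are illustrative only:

```csharp
using System.Collections.Generic;

// Hypothetical service contract -- the names are placeholders.
public record JobUpdate(string JobId, string Field, string Value);

public interface IJobService
{
    // Fine-grained: one network round-trip per update.
    void ApplyUpdate(JobUpdate update);

    // Coarse-grained: one round-trip for the whole batch.
    void ApplyUpdates(IReadOnlyList<JobUpdate> updates);
}

public static class Updater
{
    // 100 updates x ~10 ms latency per call ~= 1 second of pure waiting.
    public static void SendOneByOne(IJobService svc, List<JobUpdate> updates)
    {
        foreach (var u in updates) svc.ApplyUpdate(u);
    }

    // One serialized batch, one round-trip: latency cost ~10 ms total.
    public static void SendBatched(IJobService svc, List<JobUpdate> updates)
    {
        svc.ApplyUpdates(updates);
    }
}
```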

Want to know the exact value of requests waiting in HttpBeginRequest

We run our application on .Net Framework 2.0 and IIS 7.5
While checking in New Relic, we found that we take a lot of time in System.Web.HttpApplication.BeginRequest().
We are working on that, i.e. trying to disable session state at the page level on all those pages where it is not required.
But currently, we want to know how many requests in total are waiting in System.Web.HttpApplication.BeginRequest().
We saw in the IIS Request Monitor that there are a number of requests in BeginRequest at any given time.
But is there a performance counter, or some way through code, by which I can know the exact number of such requests?
There is an ASP.NET\Requests Queued performance counter that reports the number of requests that are queued and waiting to be serviced.
More info at: http://msdn.microsoft.com/en-us/library/fxk122b4%28v=vs.100%29.aspx
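As a sketch, that counter can also be read from code; the "ASP.NET" category is machine-wide, so no instance name is needed:

```csharp
using System;
using System.Diagnostics;

class AspNetQueueMonitor
{
    static void Main()
    {
        // "ASP.NET\Requests Queued": requests waiting to be serviced,
        // aggregated across the machine (read-only access).
        var queued = new PerformanceCounter(
            "ASP.NET", "Requests Queued", true);

        Console.WriteLine($"Requests queued: {queued.NextValue()}");
    }
}
```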

Comet and simultaneous Ajax request

I am trying to use a COMET solution with ASP.NET.
The trouble is I want to implement the sending and notification parts in the same page.
On IE7, whenever I try to send a request, it just gets queued up.
After reading internet and Stack Overflow pages, I found that I can only make 2 simultaneous async Ajax requests per page.
So until I close my comet Ajax request, my second request doesn't get completed; it doesn't even go out from the browser. And when I checked with Firefox, I see just one Ajax comet request running all the time... so doesn't that leave me one more Ajax request?
Also, the solution uses IRequiresSessionState for the asynchronous HTTP handler, which I had removed. But it still creates problems with multiple instances of IE7.
I had one workaround, which is stated here: http://support.microsoft.com/kb/282402
It means we can increase the request limit in the registry (the default is 2).
By changing the "MaxConnectionsPer1_0Server" key
in the hive "HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings"
we can increase the number of requests.
Basically I want to broadcast information to multiple clients connected to a server using Comet and the clients can also send messages to the Server.
Broadcasting works but the send request back to server doesn't work.
I'm using IIS 6 and ASP.NET.
Are there any more workarounds or ways to send more requests?
References :
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
AJAX, PHP Sessions and simultaneous requests
jquery .ajax request blocked by long running .ajax request
jQuery: Making simultaneous ajax requests, is it possible?
You are limited to 2 connections, but typically that's all you need - 1 to send, 1 to receive, even in IE.
That said, you can totally do this; we do it all the time in WebSync. The solution lies in subdomains.
The thing to note is that IE (and other browsers, although they typically limit to 6 requests, not 2) limits requests per host name, and that limit applies to each subdomain separately. So, for example, you can have 2 requests open to "www.stackoverflow.com" and 2 more requests open to "static.stackoverflow.com", all at the same time.
Now, you've got to be somewhat careful with this approach, because a request from the www subdomain to the static subdomain is considered a cross-domain request, so you can't use direct XHR calls; but at that point you have nevertheless bypassed the 2-connection limit, and JSONP, HTML5, etc., are all your friends for getting around the cross-domain restrictions.
Edit
Dealing with more than one instance of IE comes back to the same problem: the limitation applies across all instances. So if you have two browsers open, and they're both using comet, you're stuck with 2 long-polling connections open. If you've maximized your options, you're going to be connecting those long-polling requests to something like "comet.mysite.com", and your non-long-polling requests will go to "mysite.com". That's the best you'll get without going into wildcard DNS.
Check out some of our WebSync Demos; they work in 2 instances of IE without a problem. If you check out the source, you'll see that the DNS for the streaming connection is different from the main page; we use JSONP to bypass the cross-domain limitation.
The main idea in COMET is to keep one client-to-server request open, until a response is necessary.
If you design your code properly, then you don't need more than 2 requests to be open simultaneously. Here's how it works:
client uses a central message send-receive loop to send out a request to the server
server receives the request and keeps it open.
at some point, the server responds to the client.
the client (browser) receives the response, handles it in its central message loop.
immediately the client sends out another request.
repeat
The key is to centralize all communications in the client and make them asynchronous. That way you will never need more than 2 open requests.
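In the browser this loop would be JavaScript; purely as a sketch of the same send-receive pattern, here it is with HttpClient against a hypothetical long-poll endpoint:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class CometLoop
{
    static async Task Main()
    {
        using var client = new HttpClient
        {
            // Long-poll requests are held open until the server has news,
            // so allow well more than the default 100-second timeout.
            Timeout = TimeSpan.FromMinutes(2)
        };

        while (true)
        {
            // 1. Send a request the server will keep open.
            // 2-3. The server responds only when it has a message.
            string message = await client.GetStringAsync(
                "http://example.com/comet/poll"); // hypothetical endpoint

            // 4. Handle the response in this central loop...
            Console.WriteLine($"Received: {message}");

            // 5-6. ...then the loop immediately re-issues the request.
        }
    }
}
```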
But to answer your question directly, no, there are no additional workarounds.
Raise the connection limit or reduce the number of connections you use.
