Slow HTTP GET Requests With C#

I currently have a Python server running that handles many different kinds of requests, and I'm trying to do the same using C#. My code is as follows:
try
{
    ServicePointManager.DefaultConnectionLimit = 10;
    System.Net.ServicePointManager.Expect100Continue = false;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Proxy = null;
    request.Method = "GET";
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        response.Close();
    }
}
catch (WebException e)
{
    Console.WriteLine(e);
}
My first GET request is almost instant, but after that each request takes anywhere from 30 seconds to a minute to go through. I have researched everywhere online and tried changing settings to make it run faster, but nothing seems to work. Are there any more settings I could change to speed it up?

By using my psychic debugging skills, I guess your server only accepts one connection at a time. Your connection is being kept alive, so the server will only accept a new connection once the current one is closed, either explicitly or when the keep-alive timeout expires (the client default is 100,000 ms, or whatever the server's timeout is; in your case I'd guess 30 to 60 seconds). You can start by setting the KeepAlive property to false.
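A minimal sketch of that suggestion, assuming the same url variable as in the question; disabling KeepAlive makes the client close the connection after each response instead of holding it open:

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Proxy = null;
    request.Method = "GET";
    request.KeepAlive = false; // sends "Connection: close" so the server can accept the next connection immediately
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine(response.StatusCode);
    }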

Related

Outbound HttpWebRequest in ASP.NET still timing out after destination server comes back up

We have an ASP.NET web application (MVC3) that makes calls out to another web server using HttpWebRequest.
Anyhow, when the remote server goes down, we start getting timeout errors, which we would expect.
However, then when the remote server comes back up, we continue to get timeout errors. Refreshing the application pool solves the issue, but we really don't want to have to restart the app pool every time.
Is there connection pooling going on that could be holding on to the bad, timed-out connections? My expectation would be that if a connection throws an error it would be disposed, and we'd get a new connection the next time, but that doesn't seem to be the case.
Here's what our code looks like:
var request = (HttpWebRequest)HttpWebRequest.Create(url);
int timeout = GetTimeout();
request.ReadWriteTimeout = timeout;
WebRequest.DefaultWebProxy = null;
var asyncResult = request.BeginGetResponse(null, null);
bool complete = asyncResult.AsyncWaitHandle.WaitOne(timeout);
if (!complete)
{
    ThrowTimeoutError(url, timeout);
}
using (var webResponse = request.EndGetResponse(asyncResult))
{
    using (var responseStream = webResponse.GetResponseStream())
    {
        using (var responseStreamReader = new StreamReader(responseStream))
        {
            responseXml = responseStreamReader.ReadToEnd();
        }
    }
}
Have you seen/tried the CloseConnectionGroup method of the ServicePoint class?
From the MSDN documentation:
Connection groups associate a set of requests with a particular
connection or set of connections. This method removes and closes all
connections that belong to the specified connection group.
Calling this method may reset your connections without having to restart the pool.
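A sketch of how that might look, assuming the url variable from the question and a hypothetical group name; requests have to opt in to the group via ConnectionGroupName for the close call to affect them:

    const string connectionGroup = "MyGroup"; // hypothetical group name
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.ConnectionGroupName = connectionGroup; // associate this request's connection with the group
    try
    {
        using (var response = request.GetResponse())
        {
            // read the response as usual...
        }
    }
    catch (WebException)
    {
        // on failure, drop every pooled connection in the group so the next
        // request opens a fresh connection instead of reusing a dead one
        request.ServicePoint.CloseConnectionGroup(connectionGroup);
        throw;
    }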

Timeout in web request doesn't seem to be honored

I want my HTTP GET request to fail if it takes more than 10 seconds, by timing out.
I have this:
var request = (HttpWebRequest)WebRequest.Create(myUrl);
request.Method = "GET";
request.Timeout = 1000 * 10; // 10 seconds
HttpStatusCode httpStatusCode = HttpStatusCode.ServiceUnavailable;
using (var webResponse = (HttpWebResponse)request.GetResponse())
{
    httpStatusCode = webResponse.StatusCode;
}
It doesn't seem to time out when I put a bad URL in the request; it just keeps going and going for a long time (it seems like minutes).
Why is this?
If you are doing it in a web project, make sure the debug attribute of the system.web/compilation tag in the Web.Config file is set to "false".
If it is a console application or such, compile it in "Release" mode.
A lot of timeouts are ignored in "Debug" mode.
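For reference, a sketch of the Web.config fragment that answer describes (only the relevant element shown; other attributes omitted):

    <system.web>
        <compilation debug="false" />
    </system.web>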
Your code is probably performing a DNS lookup on the bad URL, which can take up to 15 seconds.
According to the documentation for HttpWebRequest.Timeout
A Domain Name System (DNS) query may take up to 15 seconds to return
or time out. If your request contains a host name that requires
resolution and you set Timeout to a value less than 15 seconds, it may
take 15 seconds or more before a WebException is thrown to indicate a
timeout on your request.
You can perform a DNS Lookup using Dns.GetHostEntry but it looks like it will take 5 seconds by default.
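If you want to fail faster than that, one option is to pre-resolve the host yourself with your own timeout before issuing the request; a sketch, assuming the myUrl variable from the question:

    string host = new Uri(myUrl).Host;
    // start the lookup asynchronously so we can enforce our own timeout
    IAsyncResult dnsResult = Dns.BeginGetHostEntry(host, null, null);
    if (!dnsResult.AsyncWaitHandle.WaitOne(2000)) // wait at most 2 seconds
    {
        throw new TimeoutException("DNS lookup for " + host + " timed out");
    }
    IPHostEntry entry = Dns.EndGetHostEntry(dnsResult); // throws if the host does not resolve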

How do I increase the performance of HttpWebResponse?

I am trying to build an application that sends and receives responses from a website.
None of the solutions I've read on Stack Overflow have solved my problem, so I think that my code could use optimization.
I have the following thread:
void DomainThreadNamecheapStart()
{
    while (stop == false)
    {
        foreach (string FromDomainList in DomainList.Lines)
        {
            if (FromDomainList.Length > 1)
            {
                // I removed my api parameters from the string
                string namecheapapi = "https://api.namecheap.com/foo" + FromDomainList + "bar";
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(namecheapapi);
                request.Proxy = null;
                request.ServicePoint.Expect100Continue = false;
                HttpWebResponse response = (HttpWebResponse)request.GetResponse();
                StreamReader sr = new StreamReader(response.GetResponseStream());
                status.Text = FromDomainList + "\n" + sr.ReadToEnd();
                sr.Close();
            }
        }
    }
}
This thread is called when a button is clicked:
private void button2_Click(object sender, EventArgs e)
{
    stop = false;
    Thread DomainThread = new Thread(new ThreadStart(DomainThreadNamecheapStart));
    DomainThread.Start();
}
I only receive around 12 responses in 10 seconds using the above code. When I make the same request from JavaScript or a simple IFrame, it's more than twice as fast, and the browser doesn't use multiple threads for the connection; it waits until one request is finished and then starts the next one.
I tried setting request.Proxy = null;, but it had negligible impact.
I have noticed that HTTPS is 2-3 times slower than HTTP. Unfortunately, I have to use HTTPS. Is there anything I can do to make it faster?
My bet would be on the aspect you pointed out: the HTTPS protocol.
The interaction between client (browser) and server for plain HTTP is quite straightforward: ask for the info, get the info. With HTTP/1.0 the connection is closed afterwards; with HTTP/1.1 it may stay alive for reuse. (See image 1 for details.)
But when you make an HTTPS request, the initial protocol overhead is considerable (image 2). Once the initial negotiation is done, symmetric encryption takes over and no further certificate negotiation is necessary, which speeds up data transfer.
What I think the problem is: if you destroy the HttpWebRequest object and create a new one, the full HTTPS handshake takes place all over again, slowing your iteration. (HTTPS + HTTP/1.1 keep-alive should do fine, though.)
So, two suggestions: switch to plain HTTP, or reuse the connections, as in the sketch after the images below.
I hope it works for you. =)
(1) HTTP protocol handshake and response
(2) Initial HTTPS protocol handshake
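A minimal sketch of the second suggestion, reusing the pooled keep-alive connection so the TLS handshake happens only once; the domains list stands in for the question's DomainList.Lines:

    foreach (string domain in domains) // hypothetical list of query values
    {
        var request = (HttpWebRequest)WebRequest.Create("https://api.namecheap.com/foo" + domain);
        request.Proxy = null;
        request.KeepAlive = true; // the default, but stated explicitly: reuse the pooled TLS connection
        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            string body = reader.ReadToEnd(); // disposing returns the connection to the pool for reuse
        }
    }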
Try to modify the System.Net.ServicePointManager.DefaultConnectionLimit value (the default value is 2).
There is another reference that covers this in its Performance Issues section.
Try these; they helped me improve performance:
ServicePointManager.Expect100Continue = false;   // skip the Expect: 100-continue round trip
ServicePointManager.DefaultConnectionLimit = 200; // allow many concurrent connections per host
ServicePointManager.MaxServicePointIdleTime = 2000; // release idle connections after 2 seconds
ServicePointManager.SetTcpKeepAlive(false, 0, 0); // disable TCP keep-alive probes

HttpWebRequest keep alive - how is the connection reused by .net?

Hi,
I am using HttpWebRequest in 10 concurrent threads to download a list of images. I sorted the images by host name, so each of these threads gets its images from the same host.
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
myReq.ServicePoint.ConnectionLimit = 10;
myReq.Timeout = 2000;
myReq.KeepAlive = true;
HttpWebResponse myResp = (HttpWebResponse)myReq.GetResponse();
After the program has been running for a while, I keep getting timeout exceptions.
My thought is that I get the exception because the host server has some limit on concurrent connections from the same user.
So how is a connection reused in .NET?
Does each thread in my program create a new connection to a host, or is it reusing the existing one because of the KeepAlive property?
It seems that the problem was some servers that were using HTTP/1.0.
HttpWebRequest uses the Expect: 100-Continue behaviour by default, but those servers don't support it.
So I changed the property System.Net.ServicePointManager.Expect100Continue to false and then everything worked fine.
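A minimal sketch of that fix; the static setting is global, so it should run before the first request is created:

    // disable the Expect: 100-continue header for every subsequent request
    System.Net.ServicePointManager.Expect100Continue = false;

    // or per host, via the ServicePoint of an existing request:
    HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
    myReq.ServicePoint.Expect100Continue = false;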

C#: quit System.Net.WebClient download (of whatever) when webpage is not responding

If the website isn't responding after one second or so, it's probably safe to assume that it's a bad link and that I should move on to my next link in a set of "possible links."
How do I tell the WebClient to stop attempting to download after some predetermined amount of time?
I suppose I could use a thread, but if it's taking longer than one second to download, that's ok. I just want to make sure that it's connecting with the site.
Perhaps I should modify the WebClient headers, but I don't see what I should modify.
Perhaps I should use some other class in the System.Net namespace?
If you use the System.Net.WebRequest class you can set the Timeout property to be short and handle timeout exceptions.
try
{
    var request = WebRequest.Create("http://www.contoso.com");
    request.Timeout = 5000; // set the timeout to 5 seconds
    request.Method = "GET";
    using (var response = request.GetResponse())
    {
        // read the response here
    }
}
catch (WebException webEx)
{
    // there was an error, likely a timeout; try another link here
}
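If you would rather stay with WebClient, one common approach is to subclass it and set the timeout on the request it builds internally; a sketch (the class name and timeout value are arbitrary):

    // hypothetical subclass: WebClient has no Timeout property of its own,
    // but the WebRequest it creates internally does
    class TimeoutWebClient : WebClient
    {
        protected override WebRequest GetWebRequest(Uri address)
        {
            WebRequest request = base.GetWebRequest(address);
            request.Timeout = 1000; // give up after 1 second
            return request;
        }
    }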
