I want my HTTP GET request to fail by timing out if it takes more than 10 seconds.
I have this:
var request = (HttpWebRequest)WebRequest.Create(myUrl);
request.Method = "GET";
request.Timeout = 1000 * 10; // 10 seconds
HttpStatusCode httpStatusCode = HttpStatusCode.ServiceUnavailable;
using (var webResponse = (HttpWebResponse)request.GetResponse())
{
    httpStatusCode = webResponse.StatusCode;
}
It doesn't seem to time out when I put a bad URL in the request; it just keeps going and going for a long time (it seems like minutes).
Why is this?
If you are doing it in a web project, make sure the debug attribute of the system.web/compilation tag in the Web.Config file is set to "false".
If it is a console application or such, compile it in "Release" mode.
A lot of timeouts are ignored in "Debug" mode.
Your code is probably performing a DNS lookup on the bad URL, which can take 15 seconds or more before the request fails.
According to the documentation for HttpWebRequest.Timeout
A Domain Name System (DNS) query may take up to 15 seconds to return
or time out. If your request contains a host name that requires
resolution and you set Timeout to a value less than 15 seconds, it may
take 15 seconds or more before a WebException is thrown to indicate a
timeout on your request.
You can perform the DNS lookup yourself using Dns.GetHostEntry, but it looks like that will take up to 5 seconds by default.
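If you want the whole call to give up sooner than the resolver allows, one option is to pre-resolve the host with your own, shorter budget and only make the HTTP request if the lookup finishes in time. This is only a sketch, not part of the answer above: the GetStatus method name and the 2-second DNS budget are my own choices, and it assumes .NET 4.5+ (for Dns.GetHostEntryAsync) with the System and System.Net namespaces available.

// Sketch: resolve the host first with a short, self-imposed budget so a dead
// host name fails fast, then do the HTTP call with the normal Timeout.
static HttpStatusCode GetStatus(string myUrl)
{
    var uri = new Uri(myUrl);

    try
    {
        // Kick off the DNS lookup and wait at most 2 seconds for it (arbitrary budget).
        var dnsTask = Dns.GetHostEntryAsync(uri.Host);
        if (!dnsTask.Wait(TimeSpan.FromSeconds(2)))
        {
            return HttpStatusCode.ServiceUnavailable; // lookup too slow: treat the URL as bad
        }
    }
    catch (AggregateException)
    {
        return HttpStatusCode.ServiceUnavailable;     // lookup failed: host not found
    }

    var request = (HttpWebRequest)WebRequest.Create(myUrl);
    request.Method = "GET";
    request.Timeout = 1000 * 10; // 10 seconds, now applied to the HTTP part only

    using (var webResponse = (HttpWebResponse)request.GetResponse())
    {
        return webResponse.StatusCode;
    }
}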
I am currently running a script that hits an API roughly 3000 times over an extended period of time. I am using Parallel.ForEach() so that I can send a few requests at a time to speed up the process. I am creating two requests in each foreach iteration, so I should have no more than 10 requests in flight. My question is that I keep receiving 429 errors from the server, and I spoke to someone who manages the server and they said they are seeing requests in bursts of 40+. With my current understanding of my code, I don't believe this is even possible. Can someone let me know if I am missing something here?
public static List<Request> GetData(List<Request> requests)
{
    ParallelOptions para = new ParallelOptions();
    para.MaxDegreeOfParallelism = 5;

    Parallel.ForEach(requests, para, request =>
    {
        WeatherAPI.GetResponseForDay(request);
    });

    return requests;
}
public static Request GetResponseForDay(Request request)
{
    // first request
    var webRequest = WebRequest.Create(request);
    webRequest.Timeout = 3600000; // 1 hour
    HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse();
    StreamReader myStreamReader = new StreamReader(response.GetResponseStream());
    string responseData = myStreamReader.ReadToEnd();
    response.Close();

    // second request
    var request2 = WebRequest.Create(requestthesecond);
    HttpWebResponse response2 = (HttpWebResponse)request2.GetResponse();
    StreamReader myStreamReader2 = new StreamReader(response2.GetResponseStream());
    string responseData2 = myStreamReader2.ReadToEnd();
    response2.Close();

    DoStuffWithData(responseData, responseData2);

    return request;
}
As smartobelix pointed out, your degree of parallelism of 5 doesn't prevent you from sending more requests than the server-side policy allows. All it does is prevent you from exhausting the number of threads needed to make those requests on your side. So what you need to do is talk to the server's owner, get familiar with their limits, and change your code so it never hits them.
The variables involved are:
number of concurrent requests you will send (parallelism)
average time one request takes
maximum requests per unit of time allowed by the server
So, for example, if your average request time is 200 ms and your max degree of parallelism is 5, you can expect to send about 25 requests per second on average. If each request takes 500 ms, you'll send only 10.
Mix the server's allowed numbers into that and you'll get an idea of how to fine-tune your own.
Both answers above are essentially correct. The problem I had was that the server manager set a rate limit of 100 requests/min where there previously wasn't one, and didn't inform me. As soon as I hit that 100 requests/min limit, the 429 responses came back nearly instantly, so my loop started firing off requests at that speed, and those requests also received 429 errors.
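For what it's worth, here is a minimal sketch of one way to stay under a per-minute limit like that. The WaitForSlot name and the 600 ms spacing (60,000 ms / 100 requests) are my own illustration, not part of the original code, and it assumes System and System.Threading are available.

// Sketch: enforce a minimum gap between request starts so we stay under
// roughly 100 requests per minute.
private static readonly object _throttleLock = new object();
private static DateTime _nextAllowedStart = DateTime.MinValue;
private static readonly TimeSpan _minGap = TimeSpan.FromMilliseconds(600);

public static void WaitForSlot()
{
    TimeSpan wait;
    lock (_throttleLock)
    {
        var now = DateTime.UtcNow;
        if (_nextAllowedStart < now)
            _nextAllowedStart = now;

        wait = _nextAllowedStart - now;   // how long this caller has to wait
        _nextAllowedStart += _minGap;     // reserve the next slot
    }

    if (wait > TimeSpan.Zero)
        Thread.Sleep(wait);               // each caller sleeps outside the lock
}

Each worker inside Parallel.ForEach would call WaitForSlot() before issuing its WebRequest, so even with a higher degree of parallelism the request starts stay spaced out.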
The first request or a request after idling roughly 100 seconds is very slow and takes 15-30 seconds. Any request without idling takes less than a second. I am fine with the first request taking time, just not the small idle time causing the slowdown.
The slowdown is not unique to one client: if I keep making requests on one client, the other stays quick as well. Only when all clients are idle for 100 seconds does it slow down.
Here are some changes that I have tried:
Setting HttpClient to a singleton and not disposing it using a using() block
Setting ServicePointManager.MaxServicePointIdleTime to a higher value since by default it is 100 seconds. Since the time period is the same as mine I thought this was the issue but it did not solve it.
Setting a higher ServicePointManager.DefaultConnectionLimit
Default proxy settings set via web.config
using await instead of httpClient.SendAsync(request).Result
It is not related to IIS application pool recycling, since the default there is 20 minutes and the rest of the application remains quick.
The requests are to a web service which communicates with AWS S3 to get files. I am at a loss for ideas at this point and all my research has led me to the above points that I already tried. Any ideas would be appreciated!
Here is the method:
//get httpclient singleton or create
var httpClient = HttpClientProvider.FileServiceHttpClient;
var queryString = string.Format("?key={0}", key);
var request = new HttpRequestMessage(HttpMethod.Get, queryString);
var response = httpClient.SendAsync(request).Result;

if (response.IsSuccessStatusCode)
{
    var metadata = new Dictionary<string, string>();
    foreach (var header in response.Headers)
    {
        //grab tf headers
        if (header.Key.StartsWith(_metadataHeaderPrefix))
        {
            metadata.Add(header.Key.Substring(_metadataHeaderPrefix.Length), header.Value.First());
        }
    }

    var virtualFile = new VirtualFile
    {
        QualifiedPath = key,
        FileStream = response.Content.ReadAsStreamAsync().Result,
        Metadata = metadata
    };
    return virtualFile;
}

return null;
The default idle timeout is about 1-2 minutes. After that, the client has to re-handshake with the server, so you will find that it is slow after roughly 100 seconds.
You could use SocketsHttpHandler to extend the idle timeout:
var socketsHandler = new SocketsHttpHandler
{
    // In practice only about 5 minutes of idle time is honored at most. Note that
    // timeouts longer than the TCP timeout may be ignored if no keep-alive TCP
    // message is set at the transport level.
    PooledConnectionIdleTimeout = TimeSpan.FromHours(27),
    MaxConnectionsPerServer = 10
};
client = new HttpClient(socketsHandler);
As you can see, although I set the idle timeout to 27 hours, in practice the connection only stays alive for about 5 minutes.
So in the end I just call the target endpoint with the same HttpClient every minute. That way there is always an established connection; you can use netstat to verify that. It works fine.
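A sketch of that workaround follows. The "health" path, the one-minute interval, and the assumption that `client` has a BaseAddress pointing at the file service are all placeholders, not anything from the original answer.

// Sketch: keep the pooled connection warm by hitting the service every minute
// with the same HttpClient. Keep a reference to the timer so it isn't collected.
var keepAlive = new System.Threading.Timer(async _ =>
{
    try
    {
        // A cheap request against any endpoint on the same host is enough to
        // keep the underlying connection established. "health" is a placeholder
        // path relative to the client's BaseAddress.
        using (await client.GetAsync("health")) { }
    }
    catch
    {
        // ignore failures; the next tick will try again
    }
}, null, dueTime: TimeSpan.FromMinutes(1), period: TimeSpan.FromMinutes(1));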
I currently have a python server running that handles many different kinds of requests and I'm trying to do this using C#. My code is as follows:
try
{
    ServicePointManager.DefaultConnectionLimit = 10;
    System.Net.ServicePointManager.Expect100Continue = false;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Proxy = null;
    request.Method = "GET";

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        response.Close();
    }
}
catch (WebException e)
{
    Console.WriteLine(e);
}
My first GET request is almost instant, but after that the time it takes for one request to go through is almost 30 seconds to 1 minute. I have researched everywhere online and tried changing the settings to make it run faster, but it doesn't seem to work. Are there any more settings that I could change to make it faster?
By using my psychic debugging skills, I guess your server only accepts one connection at a time. You do not close your request, so your connection is kept alive (the Keep-Alive attribute). The server will accept a new connection when your current one is closed, which by default is after 100000 ms or the server's timeout; in your case I guess 30 to 60 seconds. You can start by setting the KeepAlive attribute to false.
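For completeness, here is a sketch of what that suggestion looks like applied to the request in the question; everything else stays the same.

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Proxy = null;
request.Method = "GET";
request.KeepAlive = false; // send "Connection: close" so the server frees the connection

using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // disposing the response releases the connection on the client side as well
}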
If the website isn't responding after one second or so, it's probably safe to assume that it's a bad link and that I should move on to my next link in a set of "possible links."
How do I tell the WebClient to stop attempting to download after some predetermined amount of time?
I suppose I could use a thread, but if it's taking longer than one second to download, that's ok. I just want to make sure that it's connecting with the site.
Perhaps I should modify the WebClient headers, but I don't see what I should modify.
Perhaps I should use some other class in the System.Net namespace?
If you use the System.Net.WebRequest class you can set the Timeout property to be short and handle timeout exceptions.
try
{
    var request = WebRequest.Create("http://www.contoso.com");
    request.Timeout = 5000; // set the timeout to 5 seconds
    request.Method = "GET";
    var response = request.GetResponse();
}
catch (WebException webEx)
{
    // there was an error, likely a timeout; try another link here
}
We will need to call out to a 3rd party to retrieve a value using REST, however if we do not receive a response within 10ms, I want to use a default value and continue processing.
I'm leaning towards using an asynchronous WebRequest to do this, but I was wondering if there was a trick to doing it using a synchronous request.
Any advice?
If you are doing a request and waiting on it to return I'd say stay synchronous - there's no reason to do an async request if you're not going to do anything or stay responsive while waiting.
For a sync call:
WebRequest request = WebRequest.Create("http://something.somewhere/url");
WebResponse response = null;
request.Timeout = 10000; // 10 second timeout

try
{
    response = request.GetResponse();
}
catch (WebException e)
{
    if (e.Status == WebExceptionStatus.Timeout)
    {
        // something
    }
}
If doing async:
You will have to call Abort() on the request object - you'll need to check the timeout yourself, there's no built-in way to enforce a hard timeout.
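A minimal sketch of that, assuming .NET 4.5+ (for GetResponseAsync) and that this runs inside an async method; the 10-second delay is illustrative. The idea is to race the response against a delay and Abort() the request if the delay wins:

// Sketch: enforce our own timeout on an async request by racing it against a
// delay and aborting the request if the delay finishes first.
var request = WebRequest.Create("http://something.somewhere/url");
var responseTask = request.GetResponseAsync();

if (await Task.WhenAny(responseTask, Task.Delay(TimeSpan.FromSeconds(10))) != responseTask)
{
    request.Abort(); // makes the pending GetResponseAsync fail with a WebException
    // fall back to the default value here
}
else
{
    using (var response = await responseTask)
    {
        // use the response
    }
}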
You could encapsulate your call to the 3rd party in a WebService. You could then call this WebService synchronously from your application - the web service reference has a simple timeout property that you can set to 10 seconds or whatever.
Your call to get the 3rd party data from your WebService will throw a WebException after the timeout period has elapsed. You catch it and use a default value instead.
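A sketch of that idea, assuming an old-style ASMX web reference; ThirdPartyServiceProxy and GetValue are hypothetical names for the generated proxy and the wrapper method, and the 10 ms value follows the question's requirement (the inherited Timeout property is in milliseconds):

// Sketch: the generated proxy inherits a Timeout property (in milliseconds).
var proxy = new ThirdPartyServiceProxy();
proxy.Timeout = 10; // give up after 10 ms

string value;
try
{
    value = proxy.GetValue();   // hypothetical wrapper method calling the 3rd party
}
catch (WebException)
{
    value = "default";          // timed out (or failed): use the default and continue
}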
EDIT: Philip's response above is better. RIF.