System.Net.WebException: The operation has timed out - c#

I have a big problem: I need to send 200 objects at once and avoid timeouts.
while (true)
{
    NameValueCollection data = new NameValueCollection();
    data.Add("mode", nat);

    using (var client = new WebClient())
    {
        byte[] response = client.UploadValues(serverA, data);
        responseData = Encoding.ASCII.GetString(response);
        string[] split = responseData.Split(new[] { '!' }, StringSplitOptions.RemoveEmptyEntries);

        string command = split[0];
        string server = split[1];
        string requestCountStr = split[2];

        switch (command)
        {
            case "check":
                int requestCount = Convert.ToInt32(requestCountStr);
                for (int i = 0; i < requestCount; i++)
                {
                    Uri myUri = new Uri(server);
                    WebRequest request = WebRequest.Create(myUri);
                    request.Timeout = 200000;
                    WebResponse myWebResponse = request.GetResponse();
                }
                break;
        }
    }
}
This produces the error:
Unhandled Exception: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at vir_fu.Program.Main(String[] args)
The requestCount loop works fine on its own, but when I add it to my project I get this error. I have tried setting request.Timeout = 200; but it didn't help.

It means what it says. The operation took too long to complete.
By the way, look at WebRequest.Timeout and you'll see that you've set your timeout to 1/5 of a second.
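The property is measured in milliseconds, so a 200 second timeout looks like this:
request.Timeout = 200000; // milliseconds: 200000 ms = 200 s, whereas 200 = 0.2 s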

Close/dispose your WebResponse object.

I'm not sure about your first code sample where you use WebClient.UploadValues; it's not really enough to go on, so could you paste more of your surrounding code? Regarding your WebRequest code, there are two things at play here:
1. You're only requesting the headers of the response**; you never read the body of the response by opening and reading (to its end) the ResponseStream. Because of this, the WebRequest client helpfully leaves the connection open, expecting you to request the body at any moment. Until you either read the response body to completion (which will automatically close the stream for you), clean up and close the stream (or the WebRequest instance), or wait for the GC to do its thing, your connection will remain open.
2. You have a default maximum of 2 active connections to the same host. This means you use up your first two connections and never dispose of them, so your client isn't given the chance to complete the next request before it reaches its timeout (which is in milliseconds, by the way, so you've set it to 0.2 seconds - the default should be fine).
If you don't want the body of the response (or you've just uploaded or POSTed something and aren't expecting a response), simply close the stream, or the client, which will close the stream for you.
The easiest way to fix this is to make sure you use using blocks on disposable objects:
for (int i = 0; i < requestCount; i++)
{
    Uri myUri = new Uri(server);
    WebRequest myWebRequest = WebRequest.Create(myUri);
    //myWebRequest.Timeout = 200;

    using (WebResponse myWebResponse = myWebRequest.GetResponse())
    {
        // Do what you want with myWebResponse.Headers.
    } // Your response will be disposed of here
}
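If you do want the body as well, read the response stream to its end inside the same using block - a minimal sketch:
using (WebResponse myWebResponse = myWebRequest.GetResponse())
using (var reader = new StreamReader(myWebResponse.GetResponseStream()))
{
    // Reading to the end consumes the body, which closes the stream
    // and releases the connection back to the pool.
    string body = reader.ReadToEnd();
}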
Another solution is to allow 200 concurrent connections to the same host. However, unless you're planning to multi-thread this operation so that you'd actually need multiple concurrent connections, this won't really help you:
ServicePointManager.DefaultConnectionLimit = 200;
When you're getting timeouts within code, the best thing to do is try to recreate that timeout outside of your code. If you can't, the problem probably lies with your code. I usually use cURL for that, or just a web browser if it's a simple GET request.
** In reality, you're actually requesting the first chunk of data from the response, which contains the HTTP headers, and also the start of the body. This is why it's possible to read HTTP header info (such as Content-Encoding, Set-Cookie etc) before reading from the output stream. As you read the stream, further data is retrieved from the server. WebRequest's connection to the server is kept open until you reach the end of this stream (effectively closing it as it's not seekable), manually close it yourself or it is disposed of. There's more about this here.

A proxy issue can also cause this. In your web.config, put this in:
<defaultProxy useDefaultCredentials="true" enabled="true">
    <proxy usesystemdefault="True" />
</defaultProxy>

I remember I had the same problem a while back using WCF, due to the quantity of data I was passing. I remember I changed timeouts everywhere but the problem persisted. What I finally did was open the connection as a stream request; I needed to change both the client and the server side, but it worked that way. Since it was a stream connection, the server kept reading until the stream ended.
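For reference, switching a basicHttpBinding to streamed transfer looks roughly like this (a sketch only - the IUploadService contract and the endpoint address are made up, not from my original project):
var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,    // stream the request/response bodies instead of buffering them
    MaxReceivedMessageSize = long.MaxValue,  // allow large payloads
    SendTimeout = TimeSpan.FromMinutes(10)
};
var factory = new ChannelFactory<IUploadService>(binding, new EndpointAddress("http://example.com/upload"));
IUploadService client = factory.CreateChannel();
The service contract then takes or returns a Stream, so the server keeps reading until the stream ends.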

I encountered the same error; adding
await Task.Delay(2000);
to each request solved the problem.

Related

How to get the HTTP response when the request stream was closed during transfer

TL;DR version
When a transfer error occurs while writing to the request stream, I can't access the response, even though the server sends it.
Full version
I have a .NET application that uploads files to a Tomcat server, using HttpWebRequest. In some cases, the server closes the request stream prematurely (because it refuses the file for one reason or another, e.g. an invalid filename), and sends a 400 response with a custom header to indicate the cause of the error.
The problem is that if the uploaded file is large, the request stream is closed before I finish writing the request body, and I get an IOException:
Message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
InnerException: SocketException: An existing connection was forcibly closed by the remote host
I can catch this exception, but then, when I call GetResponse, I get a WebException with the previous IOException as its inner exception, and a null Response property. So I can never get the response, even though the server sends it (checked with WireShark).
Since I can't get the response, I don't know what the actual problem is. From my application point of view, it looks like the connection was interrupted, so I treat it as a network-related error and retry the upload... which, of course, fails again.
How can I work around this issue and retrieve the actual response from the server? Is it even possible? To me, the current behavior looks like a bug in HttpWebRequest, or at least a severe design issue...
Here's the code I used to reproduce the problem:
var request = HttpWebRequest.CreateHttp(uri);
request.Method = "POST";
string filename = "foo\u00A0bar.dat"; // Invalid characters in filename, the server will refuse it
request.Headers["Content-Disposition"] = string.Format("attachment; filename*=utf-8''{0}", Uri.EscapeDataString(filename));
request.AllowWriteStreamBuffering = false;
request.ContentType = "application/octet-stream";
request.ContentLength = 100 * 1024 * 1024;

// Upload the "file" (just random data in this case)
try
{
    using (var stream = request.GetRequestStream())
    {
        byte[] buffer = new byte[1024 * 1024];
        new Random().NextBytes(buffer);
        for (int i = 0; i < 100; i++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
catch (Exception ex)
{
    // here I get an IOException; InnerException is a SocketException
    Console.WriteLine("Error writing to stream: {0}", ex);
}

// Now try to read the response
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
    }
}
catch (Exception ex)
{
    // here I get a WebException; InnerException is the IOException from the previous catch
    Console.WriteLine("Error getting the response: {0}", ex);

    var webEx = ex as WebException;
    if (webEx != null)
    {
        Console.WriteLine(webEx.Status); // SendFailure

        var response = (HttpWebResponse)webEx.Response;
        if (response != null)
        {
            Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
        }
        else
        {
            Console.WriteLine("No response");
        }
    }
}
Additional notes:
If I correctly understand the role of the 100 Continue status, the server shouldn't send it to me if it's going to refuse the file. However, it seems that this status is controlled directly by Tomcat, and can't be controlled by the application. Ideally, I'd like the server not to send me 100 Continue in this case, but according to my colleagues in charge of the back-end, there is no easy way to do it. So I'm looking for a client-side solution for now; but if you happen to know how to solve the problem on the server side, it would also be appreciated.
The app in which I encounter the issue targets .NET 4.0, but I also reproduced it with 4.5.
I'm not timing out. The exception is thrown long before the timeout.
I tried an async request. It doesn't change anything.
I tried setting the request protocol version to HTTP 1.0, with the same result.
Someone else has already filed a bug on Connect for this issue: https://connect.microsoft.com/VisualStudio/feedback/details/779622/unable-to-get-servers-error-response-when-uploading-file-with-httpwebrequest
I am out of ideas as to what a client-side solution to your problem could be, but I still think the server-side solution of using a custom Tomcat valve can help here. I currently don't have a Tomcat setup where I can test this, but I think a server-side solution would be along the following lines.
RFC 2616, section 8.2.3, clearly states:
Requirements for HTTP/1.1 origin servers:
- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
So, assuming Tomcat conforms to the RFC, in the custom valve you would have received the HTTP request headers, but the request body would not have been sent yet, since control has not yet reached the servlet that reads the body.
So you can probably implement a custom valve, something similar to:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class CustomUploadHandlerValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String fileName = httpRequest.getHeader("Filename"); // get the filename or whatever other parameters are required, as per your code
        boolean validationSuccess = validate(fileName); // perform the filename check or any other validation here
        if (!validationSuccess) {
            response = createResponse(); // create your custom 400 response here
            request.setResponse(response);
            // return the response here
        } else {
            getNext().invoke(request, response); // pass to the next valve/servlet in the chain
        }
    }
    ...
}
DISCLAIMER: Again, I haven't tried this successfully; I'd need some time and a Tomcat setup to try it out ;).
Thought it might be a starting point for you.
I had the same problem: the server sends a response before the client finishes transmitting the request body when I try to do an async request. After a series of experiments, I found a workaround.
After the request stream has been obtained, I use reflection to check the private field _CoreResponse of the HttpWebRequest. If it is an object of class CoreResponseData, I read its private fields (again using reflection): m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength. They contain information about the server's response!
In most cases, this hack works!
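A rough sketch of that reflection hack, assuming request is the HttpWebRequest from the question (the field names are internal framework details, so this may break on other .NET versions):
// Requires: using System.Reflection;
// WARNING: relies on private framework internals and may break between .NET versions.
FieldInfo coreResponseField = typeof(HttpWebRequest).GetField("_CoreResponse",
    BindingFlags.Instance | BindingFlags.NonPublic);
object coreResponse = coreResponseField != null ? coreResponseField.GetValue(request) : null;

if (coreResponse != null && coreResponse.GetType().Name == "CoreResponseData")
{
    Type coreType = coreResponse.GetType();
    object statusCode = coreType.GetField("m_StatusCode",
        BindingFlags.Instance | BindingFlags.NonPublic).GetValue(coreResponse);
    object statusDescription = coreType.GetField("m_StatusDescription",
        BindingFlags.Instance | BindingFlags.NonPublic).GetValue(coreResponse);
    Console.WriteLine("{0} - {1}", statusCode, statusDescription);
}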
What are you getting in the Status and Response properties of the second exception (not its inner exception)?
If a WebException is thrown, use the Response and Status properties of the exception to determine the response from the server.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.getresponse(v=vs.110).aspx
You are not saying exactly which version of Tomcat 7 you are using...
checked with WireShark
What do you actually see with WireShark?
Do you see the status line of response?
Do you see the complete status line, up to CR-LF characters at its end?
Is Tomcat asking for authentication credentials (401), or is it refusing the file upload for some other reason (first acknowledging it with 100, but then aborting it mid-flight)?
The problem is that if the uploaded file is large, the request stream
is closed before I finish writing the request body, and I get an IOException:
If you do not want the connection to be closed, but rather all the data transferred over the wire and swallowed on the server side, then on Tomcat 7.0.55 and later it is possible to configure the maxSwallowSize attribute on the HTTP connector, e.g. maxSwallowSize="-1".
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
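For example, in server.xml the attribute goes on the connector element (the other attributes shown here are just the stock defaults):
<Connector port="8080" protocol="HTTP/1.1"
           connectionTimeout="20000"
           redirectPort="8443"
           maxSwallowSize="-1" />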
If you want to discuss the Tomcat side of connection handling, you had better ask on the Tomcat users' mailing list:
http://tomcat.apache.org/lists.html#tomcat-users
On the .NET side:
Is it possible to perform stream.Write() and request.GetResponse() simultaneously, from different threads?
Is it possible to perform some checks on the client side before actually uploading the file?
Hmmm... I don't get it - that is EXACTLY why, in many real-life scenarios, large files are uploaded in chunks (and not as a single large file).
By the way, many internet servers have size limitations; for instance, in Tomcat that is represented by maxPostSize (as seen in this link: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html).
So tweaking the server configuration seems like the easy way out, but I do think that the right way is to split the file into several requests.
EDIT: replace Uri.EscapeDataString with HttpServerUtility.UrlEncode
Uri.EscapeDataString(filename) // a problematic .net implementation
HttpServerUtility.UrlEncode(filename) // the proper way to do it
I am experiencing a pretty similar problem currently, also with Tomcat and a Java client. The Tomcat REST service sends an HTTP return code with a response body before reading the whole request body. The client, however, fails with an IOException. I inserted an HTTP proxy on the client to sniff the protocol, and the HTTP response is actually sent to the client eventually. Most likely Tomcat closed the request input stream before sending the response.
One solution is to use a different HTTP server, like Jetty, which does not have this problem. The other solution is to add an Apache HTTP server with AJP in front of Tomcat. Apache HTTP server handles streams differently, and with that the problem goes away.

How to reuse connection/request to avoid Handshake

I would like to know how reusing HttpWebRequests works, to avoid the SSL handshake process every time.
I use the keep-alive header in the request and the first handshake is successful, but I would like to reuse the request in order to avoid future handshakes against the same certificate.
The thing is, I don't know whether I have to reuse the HttpWebRequest object instance, or whether, even if I create a new request object, it will use the same connection since the keep-alive is already in place and working.
Should I store the existing request object, let's say at class level, and reuse it? Or can I safely dispose of the object, and the next time I create a request it will be under the effect of the keep-alive connection?
I am asking this because I need to lower the timings in an application, and the worst part is always the SSL handshake, which can take over 3 seconds on a phone with a medium carrier signal.
I am using C# to develop.
I have tried to look for this kind of information, but everything I can find on the internet is about how to set up the SSL server and enable certain settings, not how to make the client work with these features.
EDIT: FINDINGS RESULTS
I created a sample program in .NET C# with the following code:
Stopwatch sw = new Stopwatch();
sw.Start();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(@"https://www.gmail.com"));
request.KeepAlive = true;
request.Method = "GET";
request.ContentType = "application/json";
request.ContentLength = 0;
request.ConnectionGroupName = "test";
//request.UnsafeAuthenticatedConnectionSharing = true;
//request.PreAuthenticate = true;
var response = request.GetResponse();
//response.Close();
request.Abort();
sw.Stop();
listBox1.Items.Add("Connection in : " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
HttpWebRequest request2 = (HttpWebRequest)WebRequest.Create(new Uri(@"https://www.gmail.com"));
request2.KeepAlive = true;
request2.Method = "GET";
//request2.UnsafeAuthenticatedConnectionSharing = true;
//request2.PreAuthenticate = true;
request2.ContentType = "application/json";
request2.ContentLength = 0;
request2.ConnectionGroupName = "test";
var response2 = request2.GetResponse();
//response2.Close();
request2.Abort();
sw.Stop();
listBox1.Items.Add("Connection 2 in : " + sw.Elapsed.ToString());
The result was that the first connection triggered the CertificateValidationCallback 3 times (once for each certificate in the chain) and the second connection only once; but when I CLOSED THE RESPONSE before performing the next request, no callback was triggered at all.
I suppose that keeping a response open keeps the socket open, and that's why the partial handshake takes place (not the full certificate chain).
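In other words, the pattern that avoided the extra handshakes for me was to dispose of each response before issuing the next request - roughly (a sketch, not my exact code):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri(@"https://www.gmail.com"));
request.KeepAlive = true;
request.ConnectionGroupName = "test";
using (var response = request.GetResponse())
{
    // read or ignore the body as needed
} // disposing here returns the connection to the pool, so the next request reuses it without a new handshake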
Sorry if I sound like kind of a noob in this matter; the SSL and timing code was written by a workmate and was not easy to follow. But I think I have the answer. Thanks Poupou for your tremendous help.
This is already built into the SSL/TLS stack shipped with Xamarin.iOS (i.e. at a lower level than HttpWebRequest). There's nothing to set up to enable this; in fact, you would need extra code if you wanted to disable it.
If the server supports it, then subsequent handshakes will already be faster because a session ID cache will be used (see the TLS 1.0 RFC, page 30).
However, the server does not have to honor the session ID given to it. In such a case a full handshake will need to be done again. In other words, you cannot force this from the client (only offer it).
You can verify this using a network analyzer, e.g. Wireshark, by looking at the exchanges (and comparing them to the RFC).

HttpWebRequest, BeginGetResponse not called until endOfStream

I am working on a client that uses a web service to get some events pushed its way - the web service is designed so that, upon the client POSTing a subscribe command, it will send back some events of interest and keep doing so as long as the client stays connected.
When POSTing the command, the service responds (immediately) with an initial answer with these headers
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Transfer-Encoding: chunked
and then keeps the connection open until it times out (after 30s, if the client does not send some keep-alive data)
Since it is a mix of POST + having to read the response + keeping the connection open until endOFStream, it appears I have to use HttpWebRequest with BeginGetRequestStream (to POST) and BeginGetResponse to read and act on the response.
My problem is that the BeginGetResponse callback is not called until the input stream is actually closed by the server/service (after 30s), despite AllowReadStreamBuffering being set to false.
The doc have this to say on AllowReadStreamBuffering:
The AllowReadStreamBuffering property affects when the callback from BeginGetResponse method is called. When the AllowReadStreamBuffering property is true, the callback is raised once the entire stream has been downloaded into memory. When the AllowReadStreamBuffering property is false, the callback is raised as soon as the stream is available for reading which may be before all data has arrived.
I've seen a few suggestions that, no matter what AllowReadStreamBuffering is set to, HttpWebRequest will not call BeginGetResponse until its buffer is filled up - but I have not been able to find anything on that in the docs.
Does any one have an idea on how to control this buffering behaviour or maybe suggestion to another approach I should try when dealing with this kind of webservice?
The relevant snippets of the code I currently use, look like this:
public void open()
{
    string url = "http://funplaceontheinternet/webservice";

    HttpWebRequest request = WebRequest.CreateHttp(url);
    request.Method = "POST";
    request.Credentials = new NetworkCredential("username", "password");
    request.CookieContainer = new CookieContainer();
    request.AllowReadStreamBuffering = false;
    request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), request);
}

void GetRequestStreamCallback(IAsyncResult result)
{
    Debug.WriteLine("open.GetRequestStreamCallback");

    HttpWebRequest webRequest = (HttpWebRequest)result.AsyncState;

    // End the stream request operation
    Stream postStream = webRequest.EndGetRequestStream(result);

    // Create the post data
    byte[] byteArray = Encoding.UTF8.GetBytes(_xmlEncodedSubscribeCommand);

    // Add the post data to the web request
    postStream.Write(byteArray, 0, byteArray.Length);
    postStream.Close();

    // Start the web request
    webRequest.BeginGetResponse(new AsyncCallback(BeginGetResponseCallback), webRequest);
}

void BeginGetResponseCallback(IAsyncResult result)
{
    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
    HttpWebResponse response = null;

    if (request != null)
        response = (HttpWebResponse)request.EndGetResponse(result);
    else
        Debug.WriteLine("request==null :-(");

    if (response != null)
    {
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            while (!reader.EndOfStream)
            {
                string line = reader.ReadLine();
                Debug.WriteLine("BeginGetResponseCallback - received: " + line);
            }
            Debug.WriteLine("BeginGetResponseCallback - reader.EndOfStream");
        }
    }
    else
        Debug.WriteLine("response==null :-(");
}
You've mentioned that the service is a web service, but not which platform.
If this is a "normal" web service, then I assume that XML is the transport format.
If so, I suspect the problem may be that this style of communication does not really lend itself to streaming. The web service infrastructure at the server end might not be creating the SOAP envelope and payload until all the data is available. If you wanted to stream like this, you might be better off using some custom service at the server end, rather than a web service.
Do you know for sure that the server is really streaming the response? (e.g. confirmed with something like Wireshark?)
If you really want to use a web service, then I would suggest you complete the request when the first event(s) are available, and don't wait for the timeout. This will still achieve the latency reduction that I assume you are trying to get.

HttpWebRequest.AllowAutoRedirect=false can cause timeout?

I need to test around 300 URLs to verify if they lead to actual pages or redirect to some other page. I wrote a simple application in .NET 2.0 to check it, using HttpWebRequest. Here's the code snippet:
System.Net.HttpWebRequest wr = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create( url );
System.Net.HttpWebResponse resp = (System.Net.HttpWebResponse)wr.GetResponse();
code = resp.StatusDescription;
The code ran fast and wrote to a file that all my URLs returned status 200 OK. Then I realized that by default GetResponse() follows redirects. Silly me! So I added one line to make it work properly:
System.Net.HttpWebRequest wr = (System.Net.HttpWebRequest)System.Net.HttpWebRequest.Create( url );
wr.AllowAutoRedirect = false;
System.Net.HttpWebResponse resp = (System.Net.HttpWebResponse)wr.GetResponse();
code = resp.StatusDescription;
I ran the program again and waited... waited... waited... It turned out that for each URL I was getting a System.Net.WebException: "The operation has timed out". Surprised, I checked the URL manually - it worked fine... I commented out the AllowAutoRedirect = false line - and it worked fine again. Uncommented the line - timeout. Any ideas what might cause this problem and how to work around it?
Often timeouts are due to web responses not being disposed. You should have a using statement for your HttpWebResponse:
using (HttpWebResponse resp = (HttpWebResponse)wr.GetResponse())
{
code = resp.StatusDescription;
// ...
}
We'd need to do more analysis to predict whether that's definitely the problem... or you could just try it :)
The reason is that .NET has a connection pool, and if you don't close the response, the connection isn't returned to the pool (at least until the GC finalizes the response). That leads to a hang while the request is waiting for a connection.
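Putting it together, a status-only check with redirects disabled might look roughly like this (a sketch; it just records the status and any Location header, and disposing the response returns the connection to the pool):
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);
wr.AllowAutoRedirect = false;
using (HttpWebResponse resp = (HttpWebResponse)wr.GetResponse())
{
    code = resp.StatusDescription;
    string location = resp.Headers["Location"]; // present on 3xx responses
}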

Error using HttpWebRequest to upload files with PUT

We've got a .NET 2.0 WinForms app that needs to upload files to an IIS6 server via WebDAV. From time to time we get complaints from a remote office that they get one of the following error messages:
The underlying connection was closed: an unexpected error occurred on send.
The underlying connection was closed: an unexpected error occurred on receive.
This only seems to occur with large files (~20 MB plus). I've tested it with a 40 MB file from my home computer and tried putting Sleeps in the loop to simulate a slow connection, so I suspect that it's down to network issues at their end... but:
The IT at the remote office are no help.
I'd like to rule out the possibility that my code is at fault.
So - can anybody spot any mistakes or suggest any workarounds that might 'bulletproof' the code against this problem? Thanks for any help. A chopped-down version of the code follows:
public bool UploadFile(string localFile, string uploadUrl)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uploadUrl);
    try
    {
        req.Method = "PUT";
        req.AllowWriteStreamBuffering = true;
        req.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
        req.Credentials = Program.WebService.Credentials;
        req.SendChunked = false;
        req.KeepAlive = true;

        Stream reqStream = req.GetRequestStream();
        FileStream rdr = new FileStream(localFile, FileMode.Open, FileAccess.Read);

        byte[] inData = new byte[4096];
        int bytesRead = rdr.Read(inData, 0, inData.Length);
        while (bytesRead > 0)
        {
            reqStream.Write(inData, 0, bytesRead);
            bytesRead = rdr.Read(inData, 0, inData.Length);
        }

        reqStream.Close();
        rdr.Close();

        System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
        if (response.StatusCode != HttpStatusCode.OK && response.StatusCode != HttpStatusCode.Created)
        {
            MessageBox.Show("Couldn't upload file");
            return false;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.ToString());
        return false;
    }
    return true;
}
Try setting KeepAlive to false:
req.KeepAlive = false;
This will allow the connection to be closed and opened again; it will not use a persistent connection. I found a lot of references on the web suggesting this to solve errors similar to yours. This is a relevant link.
Anyway, it is not a good idea to use HTTP PUT (or HTTP POST) to upload large files. It would be better to use FTP or a download/upload manager, which handle retries, connection problems and timeouts automatically for you. The upload will be faster too, and you can resume a stopped upload. If you decide to stay with HTTP, you should at least try to add a retry mechanism. If an upload takes too long, there is a high probability that it will fail because of a proxy, a server timeout, a firewall or whatever other reason that has nothing to do with your code.
To remove the risk of a bug in your code, try using WebClient:
using (WebClient client = new WebClient())
{
client.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
client.Credentials = Program.WebService.Credentials;
client.UploadFile(uploadUrl, "PUT", localFile);
}
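If you stay with HTTP, a rough sketch of the retry mechanism mentioned above, wrapped around the WebClient upload inside your UploadFile method (the attempt count and delay are arbitrary):
const int maxAttempts = 3; // arbitrary; tune for your environment
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        using (WebClient client = new WebClient())
        {
            client.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
            client.Credentials = Program.WebService.Credentials;
            client.UploadFile(uploadUrl, "PUT", localFile);
        }
        return true; // upload succeeded
    }
    catch (WebException)
    {
        if (attempt == maxAttempts)
            throw; // give up after the last attempt
        System.Threading.Thread.Sleep(5000 * attempt); // back off before retrying
    }
}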
Maybe try using POST, but the real culprit is probably the content type.
Try setting
req.ContentType = "application/octet-stream";
req.ContentLength = inData.Length;
or look at the code in the accepted answer here: Upload files with HTTPWebrequest (multipart/form-data)
Both my example and the link I provided involve modifying the ContentType. My example is simpler but might not work, as most applications receiving files expect multipart.
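If the receiving end does expect multipart/form-data, here is a rough sketch of building such a body by hand (the field name "file" and the boundary format are assumptions):
string boundary = "---------------------------" + DateTime.Now.Ticks.ToString("x");
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uploadUrl);
req.Method = "POST";
req.ContentType = "multipart/form-data; boundary=" + boundary;

byte[] header = Encoding.ASCII.GetBytes(
    "--" + boundary + "\r\n" +
    "Content-Disposition: form-data; name=\"file\"; filename=\"" + Path.GetFileName(localFile) + "\"\r\n" +
    "Content-Type: application/octet-stream\r\n\r\n");
byte[] footer = Encoding.ASCII.GetBytes("\r\n--" + boundary + "--\r\n");

using (Stream reqStream = req.GetRequestStream())
using (FileStream fs = File.OpenRead(localFile))
{
    reqStream.Write(header, 0, header.Length);
    byte[] buffer = new byte[4096];
    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        reqStream.Write(buffer, 0, bytesRead);
    }
    reqStream.Write(footer, 0, footer.Length);
}

using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
{
    // check resp.StatusCode here
}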
Please check whether [Enable HTTP Keep-Alives] is set to [on] on the [Web Site] tab in IIS Manager.
The size of the uploads might be limited.
See here for one discussion:
http://www.codeproject.com/KB/aspnet/uploadlargefilesaspnet.aspx
Start by checking some basic configuration. The default values of either of the following may cause problems with file uploads, including termination of the connection. I believe IIS 6 would never allow a file upload > 2 GB (even if it could complete, regardless of config). MSDN describes these nicely.
<httpRuntime executionTimeout = "30" maxRequestLength="200"/>
EDIT: This is ASP.NET config, of course, which assumes you are running your own webdav server or a 3rd party server within ASP.NET. If it's a different webdav server, you'll want to look for the equivalent.
