I am using HttpClient in C# (.NET) to connect to a URL.
I need to make a GET request, and the connection is persistent.
The problem is that I need to perform actions according to the notifications that the server sends me.
When executing the HttpResponseMessage, it hangs (since the connection is persistent, it keeps working and receiving notifications constantly, but it never gives me back control so that I can act on the server's notifications).
When I cancel the execution, I can see all the notifications in the console.
Is there any way to control HttpResponseMessage so that I can work with every response the server sends me?
Should I use another type of technology for this?
This is what I get back on the console from "await response.Content.ReadAsStringAsync();" when I cancel the operation:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<message xmlns="http://asdasd.com/asd/08/DS/Sync" xmlns:ns2="http://asdadasdasdasd">
    <event>KEEPALIVE</event>
</message>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<message xmlns="http://asdasd.com/asd/08/DS/Sync" xmlns:ns2="http://asdadasdasdasd">
    <event>KEEPALIVE</event>
</message>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<message xmlns="http://asdasd.com/asd/08/DS/Sync" xmlns:ns2="http://asdadasdasdasd">
    <event>KEEPALIVE</event>
</message>
Each <message> arrives approximately every two minutes. I would like to perform an action each time one arrives.
Here is the code I use. Thank you.
public async static Task<int> GetRequest(string url)
{
    HttpClient client = new HttpClient();
    HttpResponseMessage response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);

    Console.WriteLine("STATUS OF CONNECTION");
    Console.WriteLine(response.StatusCode);
    response.EnsureSuccessStatusCode();

    var body = await response.Content.ReadAsStringAsync().ConfigureAwait(continueOnCapturedContext: false);
    Console.WriteLine("ANSWER");
    Console.WriteLine(body);
    Console.WriteLine(" \n");
    Console.WriteLine("\n ATTENTION!! Disconnected from the Persistence line \n");
    Console.WriteLine("Connections done");

    int result1 = 1;
    return result1;
}
You are looking to parse partial HTTP responses as and when each part is received. The HTTP protocol does not know or care that you are sending/receiving multiple discrete messages, separated by a newline, within a single HTTP response.
As far as I know, HttpClient won't be much help to you here, because it is designed to receive one whole HTTP response. Rick Strahl has a blog post on exactly this subject: Using .NET HttpClient to capture partial Responses.
You should be able to use HttpWebRequest to manually read bytes from the network stream into a buffer. After appending to the buffer, check whether it contains a complete message (<message> ... </message>). If so, convert the message to a string and announce it as required: e.g. raise an event, start a task, call a method, add to a queue. Then remove that message from the buffer and repeat.
TcpClient is probably not the best approach in this scenario, because you would then also need to implement TLS yourself (if your endpoint uses HTTPS).
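Here is a minimal sketch of that buffered-read approach. The URL and the onMessage callback are placeholders, and the parsing assumes each notification is a complete <message> ... </message> element, as in the console output shown in the question; treat it as a starting point rather than a drop-in implementation.
// Requires: System, System.Net, System.Text, System.Threading.Tasks.
// Note: GetString is called per chunk, which is fine for ASCII payloads like the
// KEEPALIVE messages above; use a Decoder if multi-byte characters may be split across reads.
public static async Task ListenAsync(string url, Action<string> onMessage)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "GET";

    using (var response = (HttpWebResponse)await request.GetResponseAsync())
    using (var stream = response.GetResponseStream())
    {
        var buffer = new byte[8192];
        var pending = new StringBuilder();
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            pending.Append(Encoding.UTF8.GetString(buffer, 0, read));

            // Announce every complete <message> element currently in the buffer.
            int end;
            while ((end = pending.ToString().IndexOf("</message>", StringComparison.OrdinalIgnoreCase)) >= 0)
            {
                end += "</message>".Length;
                onMessage(pending.ToString(0, end)); // raise an event, start a task, enqueue, etc.
                pending.Remove(0, end);              // keep any partial data that follows
            }
        }
    }
}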
According to MSDN, you don't have to create a new HttpClient for each request:
HttpClient is intended to be instantiated once and re-used throughout the life of an application. Instantiating an HttpClient class for every request will exhaust the number of sockets available under heavy loads.
Next, you can specify a default timeout for the HttpClient, or set one per request.
Then, you are using the ResponseHeadersRead option, which means:
The operation should complete as soon as a response is available and headers are read. The content is not read yet.
Try switching to ResponseContentRead instead.
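A minimal sketch of those three points (one shared client, an explicit timeout, and ResponseContentRead), assuming a simple request/response exchange; the URL and the timeout value are placeholders.
private static readonly HttpClient SharedClient = new HttpClient
{
    Timeout = TimeSpan.FromMinutes(5) // the default is 100 seconds
};

public static async Task<string> GetOnceAsync(string url)
{
    // ResponseContentRead (the default) completes only after the whole body has been buffered.
    using (var response = await SharedClient.GetAsync(url, HttpCompletionOption.ResponseContentRead))
    {
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}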
Related
I have created an HttpListener, and I need to read the data when a client sends it to me. The problem is that I don't know how the client should send the data.
HttpListener listener = new HttpListener();
listener.Prefixes.Add("http://192.168.1.26:8282/");
listener.Prefixes.Add("http://localhost:8282/");
listener.Prefixes.Add("http://127.0.0.1:8282/");
listener.Start();

new Thread(() =>
{
    Thread.CurrentThread.IsBackground = true;
    for (;;)
    {
        Console.WriteLine("Listening...");
        // Note: The GetContext method blocks while waiting for a request.
        HttpListenerContext context = listener.GetContext();
        HttpListenerRequest request = context.Request;

        string text;
        using (var reader = new StreamReader(request.InputStream, request.ContentEncoding))
        {
            text = reader.ReadToEnd();
            MessageBox.Show(text);
        }

        // Obtain a response object.
        HttpListenerResponse response = context.Response;
        // Construct a response.
        string responseString = "HelloWorld";
        byte[] buffer = System.Text.Encoding.UTF8.GetBytes(responseString);
        // Get a response stream and write the response to it.
        response.ContentLength64 = buffer.Length;
        System.IO.Stream output = response.OutputStream;
        output.Write(buffer, 0, buffer.Length);
        // You must close the output stream.
        output.Close();
    }
}).Start();
So from the client I send this command:
GET / 192.168.1.26:8282 HTTP/1.0
But I'm getting this message:
Recv 34 bytes
SEND OK
+IPD,1,518:HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Wed, 13 Jun 2018 13:16:03 GMT
Connection: close
Content-Length: 339
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN""http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Header</h2>
<hr><p>HTTP Error 400. The request has an invalid header name.</p>
</BODY></HTML>
1,CLOSED
I can't understand what is wrong. Also, in my code I set it up to show a message box every time a request happens, but it never runs.
This is what Mozilla sends.
You are not attempting to invoke the service correctly. Here is your client request:
GET / 192.168.1.26:8282 HTTP/1.0
What you should be doing is first establishing a socket connection to host 192.168.1.26 over port 8282. Then you must issue an HTTP request in a valid format:
GET / HTTP/1.0
Don't forget to add the newlines that terminate the request (i.e. \r\n\r\n). Then your web server should respond to the HTTP request.
Quick example in Telnet:
telnet 192.168.1.26 8282
GET / HTTP/1.0
Quick example with netcat:
nc 192.168.1.26 8282
GET / HTTP/1.0
Note that these quick examples are provided just to help you ensure that your web service is accessible and functioning correctly. Ideally, you would use a more robust HTTP client customized for your particular needs. The process is still the same:
Establish a connection to your host IP address over the listening port.
Issue an HTTP request in a valid format: (HTTP_VERB PATH HTTP_VERSION). (Maybe also check out the developer tools in your browser of choice (F12 -> Network) to see how HTTP headers are sent.)
Parse the response in some meaningful way.
"Also in my code i set to get a message box every time a request will happen." - You should try putting in a manual message to the message box, instead of reading from the input stream. This is a good debugging technique. In a HTTP GET request you generally are not sending data except in the form of optional query string parameters. I have a feeling that you are not getting the results you are expecting because you are reading from input that isn't there. Before reading from the stream input, first make sure that the connection is successful.
I'm writing two small pieces of C# code. The first is for a client-side Portable Class Library. All it does is send messages to an Azure Service Bus topic via the Azure Service Bus REST API, using HttpClient.
I populate the BrokerProperties header on the REST call with valid JSON, and I expect that on the server side, when I receive the message through a subscription, that I'll get my instance of BrokeredMessage.Properties populated with the values I sent from the client.
The one problem I've had on this side is that the documentation says to set Content-Type to application/atom+xml;type=entry;charset=utf-8, but even when I do I get application/json; charset=utf-8, so I'm just using application/json.
With that aside, as far as I can tell, this does what it's supposed to do. It creates the client and the request message, sets the headers, and sends the message. I get a 201 Created every time. Here's all of it:
private async static void SendServiceBusMessage(Command command)
{
    // Create the HttpClient and HttpRequestMessage objects
    HttpClient client = new HttpClient();
    HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, topicUri);

    // Add the authorization header (CreateAuthToken does the SHA256 stuff)
    request.Headers.Add("Authorization", CreateAuthToken(topicUri, authSasKeyName, authSasKey));

    // Add the content (command is a normal POCO)
    // I've tried application/atom+xml;type=entry;charset=utf-8, always see application/json in the request
    request.Content = new StringContent(JsonConvert.SerializeObject(command), Encoding.UTF8, "application/json");

    // Add the command name and SessionId as BrokeredMessage properties
    var brokeredMessageProperties = new Dictionary<string, string>();
    brokeredMessageProperties.Add("CommandName", command.GetType().Name);
    brokeredMessageProperties.Add("SessionId", Guid.NewGuid().ToString());

    // Add the BrokerProperties header to the request
    request.Content.Headers.Add("BrokerProperties", JsonConvert.SerializeObject(brokeredMessageProperties));
    // I've also tried adding it directly to the request, nothing seems different
    // request.Headers.Add("BrokerProperties", JsonConvert.SerializeObject(brokeredMessageProperties));

    // Send it
    var response = await client.SendAsync(request);
    if (!response.IsSuccessStatusCode)
    {
        // Do some error-handling
    }
}
and here's an example of the HTTP request it sends. Compare it to the example at the bottom of the Send Message documentation... aside from the Content-Type, it looks (functionally) identical to me.
POST https://myawesomesbnamespace.servicebus.windows.net/commands/messages HTTP/1.1
Authorization: SharedAccessSignature sr=https%3A%2F%2Fmyawesomesbnamespace.servicebus.windows.net%2Fcommands%2Fmessages&sig=SomeValidAuthStuffHere
Content-Type: application/json; charset=utf-8
BrokerProperties: {"CommandName":"CreateJob_V1","SessionId":"94932660-54e9-4867-a020-883a9bb79fa1"}
Host: myawesomesbnamespace.servicebus.windows.net
Content-Length: 133
Expect: 100-continue
Connection: Keep-Alive
{"JobId":"6b76e7e6-9499-4809-b762-54c03856d5a3","Name":"Awesome New Job Name","CorrelationId":"47fc77d9-9470-4d65-aa7d-690b65a7dc4f"}
However, when I receive the message on the server, the .Properties are empty. This is annoying.
The server code looks like this. It just gets a batch of messages and does a foreach loop.
private async Task ProcessCommandMessages()
{
    List<BrokeredMessage> commandMessages =
        (await commandsSubscriptionClient.ReceiveBatchAsync(serviceBusMessageBatchSize, TimeSpan.FromMilliseconds(waitTime_ms))).ToList();

    foreach (BrokeredMessage commandMessage in commandMessages)
    {
        // commandMessage.Properties should have CommandName and SessionId,
        // like I sent from the client, but it's empty.
        // That's not good.
        if (commandMessage.Properties.ContainsKey("CommandName"))
        {
            string commandName = commandMessage.Properties["CommandName"] as string;
            // Do some stuff
        }
        else
        {
            // This is bad, log an error
        }
    }
}
So, I'm a bit stuck. Can anyone spot something I'm doing wrong here? Maybe it's the Content-Type problem and there's a way around it?
Thanks!
Scott
Seattle, WA, USA
OK, finally getting back to this. What I misunderstood (and I'd argue the documentation isn't clear about) is that arbitrary properties cannot be passed through the BrokerProperties header. Only the named properties of the BrokeredMessage class (like SessionId, Label, etc.) will come through Service Bus to the server.
For custom properties to show up in BrokeredMessage.Properties, they have to be passed as custom headers on the request. So, in my case,
request.Headers.Add("CommandName", command.GetType().Name);
gets the CommandName property to show up on the server after the message passes through Service Bus.
To pass the SessionId value, I still pass it through the BrokerProperties header.
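For completeness, a sketch of the corrected send method based on the points above (SessionId in BrokerProperties because it is a system property, CommandName as a plain custom header); topicUri, CreateAuthToken, authSasKeyName and authSasKey are the same members used in the question's code.
private async static Task SendServiceBusMessage(Command command)
{
    HttpClient client = new HttpClient(); // ideally a single shared instance
    HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Post, topicUri);
    request.Headers.Add("Authorization", CreateAuthToken(topicUri, authSasKeyName, authSasKey));
    request.Content = new StringContent(JsonConvert.SerializeObject(command), Encoding.UTF8, "application/json");

    // System properties of BrokeredMessage go into the BrokerProperties header.
    var brokerProperties = new Dictionary<string, string> { { "SessionId", Guid.NewGuid().ToString() } };
    request.Headers.Add("BrokerProperties", JsonConvert.SerializeObject(brokerProperties));

    // Custom properties go as individual headers and surface in BrokeredMessage.Properties.
    request.Headers.Add("CommandName", command.GetType().Name);

    var response = await client.SendAsync(request);
    response.EnsureSuccessStatusCode();
}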
TL;DR version
When a transfer error occurs while writing to the request stream, I can't access the response, even though the server sends it.
Full version
I have a .NET application that uploads files to a Tomcat server, using HttpWebRequest. In some cases, the server closes the request stream prematurely (because it refuses the file for one reason or another, e.g. an invalid filename), and sends a 400 response with a custom header to indicate the cause of the error.
The problem is that if the uploaded file is large, the request stream is closed before I finish writing the request body, and I get an IOException:
Message: Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.
InnerException: SocketException: An existing connection was forcibly closed by the remote host
I can catch this exception, but then, when I call GetResponse, I get a WebException with the previous IOException as its inner exception, and a null Response property. So I can never get the response, even though the server sends it (checked with WireShark).
Since I can't get the response, I don't know what the actual problem is. From my application point of view, it looks like the connection was interrupted, so I treat it as a network-related error and retry the upload... which, of course, fails again.
How can I work around this issue and retrieve the actual response from the server? Is it even possible? To me, the current behavior looks like a bug in HttpWebRequest, or at least a severe design issue...
Here's the code I used to reproduce the problem:
var request = HttpWebRequest.CreateHttp(uri);
request.Method = "POST";
string filename = "foo\u00A0bar.dat"; // Invalid characters in filename, the server will refuse it
request.Headers["Content-Disposition"] = string.Format("attachment; filename*=utf-8''{0}", Uri.EscapeDataString(filename));
request.AllowWriteStreamBuffering = false;
request.ContentType = "application/octet-stream";
request.ContentLength = 100 * 1024 * 1024;

// Upload the "file" (just random data in this case)
try
{
    using (var stream = request.GetRequestStream())
    {
        byte[] buffer = new byte[1024 * 1024];
        new Random().NextBytes(buffer);
        for (int i = 0; i < 100; i++)
        {
            stream.Write(buffer, 0, buffer.Length);
        }
    }
}
catch (Exception ex)
{
    // here I get an IOException; InnerException is a SocketException
    Console.WriteLine("Error writing to stream: {0}", ex);
}

// Now try to read the response
try
{
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
    }
}
catch (Exception ex)
{
    // here I get a WebException; InnerException is the IOException from the previous catch
    Console.WriteLine("Error getting the response: {0}", ex);

    var webEx = ex as WebException;
    if (webEx != null)
    {
        Console.WriteLine(webEx.Status); // SendFailure
        var response = (HttpWebResponse)webEx.Response;
        if (response != null)
        {
            Console.WriteLine("{0} - {1}", (int)response.StatusCode, response.StatusDescription);
        }
        else
        {
            Console.WriteLine("No response");
        }
    }
}
Additional notes:
If I correctly understand the role of the 100 Continue status, the server shouldn't send it to me if it's going to refuse the file. However, it seems that this status is controlled directly by Tomcat, and can't be controlled by the application. Ideally, I'd like the server not to send me 100 Continue in this case, but according to my colleagues in charge of the back-end, there is no easy way to do it. So I'm looking for a client-side solution for now; but if you happen to know how to solve the problem on the server side, it would also be appreciated.
The app in which I encounter the issue targets .NET 4.0, but I also reproduced it with 4.5.
I'm not timing out. The exception is thrown long before the timeout.
I tried an async request. It doesn't change anything.
I tried setting the request protocol version to HTTP 1.0, with the same result.
Someone else has already filed a bug on Connect for this issue: https://connect.microsoft.com/VisualStudio/feedback/details/779622/unable-to-get-servers-error-response-when-uploading-file-with-httpwebrequest
I am out of ideas as to what a client-side solution to your problem might be, but I still think the server-side solution of using a custom Tomcat valve can help here. I currently don't have a Tomcat setup where I can test this, but I think a server-side solution would be along the following lines.
RFC 2616, section 8.2.3, clearly states:
Requirements for HTTP/1.1 origin servers:
- Upon receiving a request which includes an Expect request-header
field with the "100-continue" expectation, an origin server MUST
either respond with 100 (Continue) status and continue to read
from the input stream, or respond with a final status code. The
origin server MUST NOT wait for the request body before sending
the 100 (Continue) response. If it responds with a final status
code, it MAY close the transport connection or it MAY continue
to read and discard the rest of the request. It MUST NOT
perform the requested method if it returns a final status code.
So, assuming Tomcat conforms to the RFC, by the time your custom valve runs you will have received the HTTP request headers, but the request body will not have been sent yet, because control has not yet reached the servlet that reads the body.
So you can probably implement a custom valve, something similar to:
import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.connector.Request;
import org.apache.catalina.connector.Response;
import org.apache.catalina.valves.ValveBase;

public class CustomUploadHandlerValve extends ValveBase {

    @Override
    public void invoke(Request request, Response response) throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        String fileName = httpRequest.getHeader("Filename"); // get the filename or whatever other parameters are required as per your code
        boolean validationSuccess = validate(fileName); // perform the filename check or any other validation here
        if (!validationSuccess) {
            // send your custom 400 response here, before the request body is read
            response.sendError(HttpServletResponse.SC_BAD_REQUEST, "Invalid filename");
        } else {
            getNext().invoke(request, response); // pass to the next valve / the servlet in the chain
        }
    }

    // ...
}
DISCLAIMER: Again, I haven't tried this to success; I'd need some time and a Tomcat setup to try it out ;).
I thought it might be a starting point for you.
I had the same problem: the server sends a response before the client has finished transmitting the request body when I make an async request. After a series of experiments, I found a workaround.
After the request stream has been obtained, I use reflection to inspect the private field _CoreResponse of the HttpWebRequest. If it is an object of the class CoreResponseData, I read its private fields (again using reflection): m_StatusCode, m_StatusDescription, m_ResponseHeaders, m_ContentLength. They contain information about the server's response!
In most cases, this hack works!
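For reference, a minimal sketch of that reflection hack; the private field names come from the description above, are internal to the (older) .NET Framework networking stack, and may differ between framework versions, so treat this as a diagnostic last resort rather than a supported API.
static object GetPrivateField(object obj, string name)
{
    var field = obj.GetType().GetField(name, BindingFlags.Instance | BindingFlags.NonPublic);
    return field == null ? null : field.GetValue(obj);
}

static void TryDumpCoreResponse(HttpWebRequest request)
{
    var coreField = typeof(HttpWebRequest).GetField("_CoreResponse", BindingFlags.Instance | BindingFlags.NonPublic);
    object core = coreField == null ? null : coreField.GetValue(request);
    if (core == null || core.GetType().Name != "CoreResponseData")
        return; // nothing usable was captured

    Console.WriteLine("Status: {0} {1}", GetPrivateField(core, "m_StatusCode"), GetPrivateField(core, "m_StatusDescription"));
    Console.WriteLine("Content-Length: {0}", GetPrivateField(core, "m_ContentLength"));
    Console.WriteLine("Headers: {0}", GetPrivateField(core, "m_ResponseHeaders"));
}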
What are you getting in the status code and response of the second (outer) exception, not the inner exception?
If a WebException is thrown, use the Response and Status properties of the exception to determine the response from the server.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.getresponse(v=vs.110).aspx
You are not saying exactly which version of Tomcat 7 you are using...
checked with WireShark
What do you actually see with WireShark?
Do you see the status line of response?
Do you see the complete status line, up to CR-LF characters at its end?
Is Tomcat asking for authentication credentials (401), or it is refusing file upload for some other reason (first acknowledging it with 100 but then aborting it mid-flight)?
The problem is that if the uploaded file is large, the request stream
is closed before I finish writing the request body, and I get an IOException:
If you do not want the connection to be closed, but rather want all the data to be transferred over the wire and swallowed at the server side, then on Tomcat 7.0.55 and later it is possible to configure the maxSwallowSize attribute on the HTTP connector, e.g. maxSwallowSize="-1".
http://tomcat.apache.org/tomcat-7.0-doc/config/http.html
If you want to discuss the Tomcat side of connection handling, you would be better off asking on the Tomcat users' mailing list:
http://tomcat.apache.org/lists.html#tomcat-users
At .Net side:
Is it possible to perform stream.Write() and request.GetResponse() simultaneously, from different threads?
Is it possible to performs some checks at the client side before actually uploading the file?
Hmmm... I don't get it; that is exactly why, in many real-life scenarios, large files are uploaded in chunks (and not as a single large file).
By the way, many internet servers have size limitations; for instance, in Tomcat that is represented by maxPostSize (as seen in this link: http://tomcat.apache.org/tomcat-5.5-doc/config/http.html).
So tweaking the server configuration seems like the easy way, but I do think that the right way is to split the file across several requests.
EDIT: replace Uri.EscapeDataString with HttpServerUtility.UrlEncode
Uri.EscapeDataString(filename) // a problematic .NET implementation
HttpServerUtility.UrlEncode(filename) // the proper way to do it
I am currently experiencing a pretty similar problem, also with Tomcat and a Java client. The Tomcat REST service sends an HTTP return code with a response body before reading the whole request body. The client, however, fails with an IOException. I inserted an HTTP proxy on the client to sniff the protocol, and the HTTP response actually is sent to the client eventually. Most likely, Tomcat closed the request input stream before sending the response.
One solution is to use a different HTTP server, like Jetty, which does not have this problem. The other solution is to add an Apache HTTP server with AJP in front of Tomcat. Apache HTTP server handles streams differently, and with that the problem goes away.
I am working on a client that uses a web service to get some events pushed its way - the web service is designed so that, upon the client POSTing a subscribe command, it will send back some events of interest and keep doing so as long as the client stays connected.
When POSTing the command, the service responds (immediately) with an initial answer with these headers
Keep-Alive: timeout=5, max=98
Connection: Keep-Alive
Transfer-Encoding: chunked
and then keeps the connection open until it times out (after 30s, if the client does not send some keep-alive data)
Since it is a mix of POSTing + having to read the response + keeping the connection open until end-of-stream, it appears I have to use HttpWebRequest with BeginGetRequestStream (to POST) and BeginGetResponse to read and act on the response.
My problem is that the BeginGetResponse callback is not called until the input stream is actually closed by the server/service (after 30s), despite AllowReadStreamBuffering being set to false.
The doc have this to say on AllowReadStreamBuffering:
The AllowReadStreamBuffering property affects when the callback from BeginGetResponse method is called. When the AllowReadStreamBuffering property is true, the callback is raised once the entire stream has been downloaded into memory. When the AllowReadStreamBuffering property is false, the callback is raised as soon as the stream is available for reading which may be before all data has arrived.
I've seen a few suggestions that no matter what AllowReadStreamBuffering is set to, HttpWebRequest will not invoke the BeginGetResponse callback until its internal buffer is filled up, but I have not been able to find anything about that in the docs.
Does anyone have an idea of how to control this buffering behaviour, or maybe a suggestion for another approach I should try when dealing with this kind of web service?
The relevant snippets of the code I currently use look like this:
public void open()
{
    string url = "http://funplaceontheinternet/webservice";

    HttpWebRequest request = WebRequest.CreateHttp(url);
    request.Method = "POST";
    request.Credentials = new NetworkCredential("username", "password");
    request.CookieContainer = new CookieContainer();
    request.AllowReadStreamBuffering = false;
    request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), request);
}

void GetRequestStreamCallback(IAsyncResult result)
{
    Debug.WriteLine("open.GetRequestStreamCallback");
    HttpWebRequest webRequest = (HttpWebRequest)result.AsyncState;

    // End the stream request operation
    Stream postStream = webRequest.EndGetRequestStream(result);

    // Create the post data
    byte[] byteArray = Encoding.UTF8.GetBytes(_xmlEncodedSubscribeCommand);

    // Add the post data to the web request
    postStream.Write(byteArray, 0, byteArray.Length);
    postStream.Close();

    // Start the web request
    webRequest.BeginGetResponse(new AsyncCallback(BeginGetResponseCallback), webRequest);
}

void BeginGetResponseCallback(IAsyncResult result)
{
    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
    HttpWebResponse response = null;

    if (request != null)
        response = (HttpWebResponse)request.EndGetResponse(result);
    else
        Debug.WriteLine("request==null :-(");

    if (response != null)
    {
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            while (!reader.EndOfStream)
            {
                string line = reader.ReadLine();
                Debug.WriteLine("BeginGetResponseCallback - received: " + line);
            }
            Debug.WriteLine("BeginGetResponseCallback - reader.EndOfStream");
        }
    }
    else
    {
        Debug.WriteLine("response==null :-(");
    }
}
You've mentioned that the service is a web service, but not which platform.
If this is a "normal" web service, then I assume that XML is the transport format.
If so, I suspect the problem may be that this style of communication does not really lend itself to streaming. The web service infrastructure at the server end might not be creating the SOAP envelope and payload until all the data is available. If you wanted to stream like this, you might be better using some custom service at the server end, rather than a web service.
Do you know for sure that the server is really streaming the response? (e.g. confirmed with something like Wireshark?)
If you really want to use a web service, then I would suggest you complete the request when the first event(s) are available, and don't wait for the timeout. This will still achieve the latency reduction that I assume you are trying to get.
I have a big problem: I need to send 200 objects at once and avoid timeouts.
while (true)
{
    NameValueCollection data = new NameValueCollection();
    data.Add("mode", nat);

    using (var client = new WebClient())
    {
        byte[] response = client.UploadValues(serverA, data);
        responseData = Encoding.ASCII.GetString(response);

        string[] split = responseData.Split(new[] { '!' }, StringSplitOptions.RemoveEmptyEntries);
        string command = split[0];
        string server = split[1];
        string requestCountStr = split[2];

        switch (command)
        {
            case "check":
                int requestCount = Convert.ToInt32(requestCountStr);
                for (int i = 0; i < requestCount; i++)
                {
                    Uri myUri = new Uri(server);
                    WebRequest request = WebRequest.Create(myUri);
                    request.Timeout = 200000;
                    WebResponse myWebResponse = request.GetResponse();
                }
                break;
        }
    }
}
This produces the error:
Unhandled Exception: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at vir_fu.Program.Main(String[] args)
The requestCount loop works fine outside my base code but when I add it to my project I get this error. I have tried setting request.Timeout = 200; but it didn't help.
It means what it says. The operation took too long to complete.
BTW, look at WebRequest.Timeout and you'll see that you've set your timeout for 1/5 second.
Close/dispose your WebResponse object.
I'm not sure about your first code sample, where you use WebClient.UploadValues; it's not really enough to go on. Could you paste more of your surrounding code? Regarding your WebRequest code, there are two things at play here:
1. You're only requesting the headers of the response**; you never read the body of the response by opening and reading (to its end) the ResponseStream. Because of this, the WebRequest client helpfully leaves the connection open, expecting you to request the body at any moment. Until you either read the response body to completion (which will automatically close the stream for you), clean up and close the stream (or the WebRequest instance), or wait for the GC to do its thing, your connection will remain open.
2. You have a default maximum of two active connections to the same host. This means you use up your first two connections and then never dispose of them, so your client isn't given the chance to complete the next request before it reaches its timeout (which is in milliseconds, by the way, so you had set it to 0.2 seconds; the default should be fine).
If you don't want the body of the response (or you've just uploaded or POSTed something and aren't expecting one), simply close the stream, or the client, which will close the stream for you.
The easiest way to fix this is to make sure you use using blocks on disposable objects:
for (int i = 0; i < requestCount; i++)
{
    Uri myUri = new Uri(server);
    WebRequest myWebRequest = WebRequest.Create(myUri);
    //myWebRequest.Timeout = 200;
    using (WebResponse myWebResponse = myWebRequest.GetResponse())
    {
        // Do what you want with myWebResponse.Headers.
    } // Your response will be disposed of here
}
Another solution is to allow 200 concurrent connections to the same host. However, unless you're planning to multi-thread this operation so you'd need multiple, concurrent connections, this won't really help you:
ServicePointManager.DefaultConnectionLimit = 200;
When you're getting timeouts within code, the best thing to do is try to recreate that timeout outside of your code. If you can't, the problem probably lies with your code. I usually use cURL for that, or just a web browser if it's a simple GET request.
** In reality, you're actually requesting the first chunk of data from the response, which contains the HTTP headers, and also the start of the body. This is why it's possible to read HTTP header info (such as Content-Encoding, Set-Cookie etc) before reading from the output stream. As you read the stream, further data is retrieved from the server. WebRequest's connection to the server is kept open until you reach the end of this stream (effectively closing it as it's not seekable), manually close it yourself or it is disposed of. There's more about this here.
A proxy issue can also cause this. In IIS, put this in your web.config, inside <configuration> / <system.net>:
<defaultProxy useDefaultCredentials="true" enabled="true">
    <proxy usesystemdefault="True" />
</defaultProxy>
I remember I had the same problem a while back using WCF, due to the quantity of data I was passing. I changed timeouts everywhere, but the problem persisted. What I finally did was open the connection as a stream request; I needed to change both the client and the server side, but it worked that way. Since it was a stream connection, the server kept reading until the stream ended.
I encountered the same error; adding
await Task.Delay(2000);
in each request solved the problem.