I have a console app that uses 20 or so threads to connect to a remote web server and send arbitrary HTTP requests, rather small in size, 100% over SSL. The remote web server is actually an entire load-balanced data center full of high-availability systems that can handle hundreds of thousands of requests per second. This is not a server or bandwidth issue. That said, I don't run it, nor do I have any influence over how it is configured, so I couldn't make server-side changes even if I wanted to.
When running the app through Fiddler, it performs amazingly fast. When not running through Fiddler, it's much slower, to the point of being useless for the task at hand. It also seems to lock up fairly early in the process, but that could simply be a deadlock issue; I'm not sure yet.
Anyhow, Fiddler, being a proxy, is undoubtedly modifying my requests/connections in some way that ensures wonderful throughput, but I have no idea what it's doing. I am trying to figure it out so that I can make my .NET application mimic Fiddler's connection handling behavior without actually running it through Fiddler.
I've pasted the connection code below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.IO;

namespace Redacted
{
    public class HiveCommunicator
    {
        public static IResponse SendRequest(IRequest request)
        {
            ServicePointManager.DefaultConnectionLimit = 60;
            ServicePointManager.Expect100Continue = false;

            string hostUrlString = string.Empty;
            if (request.SiteID <= 0)
                hostUrlString = string.Format("{0}://{1}{2}", request.UseSSL ? "https" : "http", DataCenters.GetCenter(request.DataCenter), request.Path);
            else
                hostUrlString = string.Format("{0}://{1}{2}", request.UseSSL ? "https" : "http", DataCenters.GetCenter(request.DataCenter), string.Format(request.Path, request.SiteID));

            HttpWebRequest webRequest = (HttpWebRequest)HttpWebRequest.Create(hostUrlString);

            switch (request.ContentType)
            {
                default:
                case ContentTypes.XML:
                    webRequest.ContentType = "application/xml";
                    break;
                case ContentTypes.JSON:
                    webRequest.ContentType = "application/json";
                    break;
                case ContentTypes.BINARY:
                    webRequest.ContentType = "application/octet-stream";
                    break;
            }

            if (request.RequiresAuthorizationToken)
            {
                AuthorizationToken tok = HiveAuthentication.GetToken(request.SiteID);
                if (tok == null)
                {
                    return null;
                }
                webRequest.Headers.Add(HttpRequestHeader.Authorization, tok.Token);
            }

            bool usesRequestBody = true;
            switch (request.HttpVerb)
            {
                case HttpVerbs.POST:
                    webRequest.Method = "POST";
                    break;
                case HttpVerbs.DELETE:
                    webRequest.Method = "DELETE";
                    usesRequestBody = false;
                    break;
                case HttpVerbs.PUT:
                    webRequest.Method = "PUT";
                    break;
                default:
                case HttpVerbs.GET:
                    webRequest.Method = "GET";
                    usesRequestBody = false;
                    break;
            }

            HttpWebResponse webResponse = null;
            Stream webRequestStream = null;
            byte[] webRequestBytes = null;

            if (usesRequestBody)
            {
                webRequestBytes = request.RequestBytes;
                webRequest.ContentLength = webRequestBytes.Length;
                webRequestStream = webRequest.GetRequestStream();
                for (int i = 0; i < webRequest.ContentLength; i++)
                {
                    webRequestStream.WriteByte(webRequestBytes[i]);
                }
            }

            try
            {
                webResponse = (HttpWebResponse)webRequest.GetResponse();
            }
            catch (WebException ex)
            {
                webResponse = (HttpWebResponse)ex.Response;
            }

            if (usesRequestBody)
            {
                webRequestStream.Close();
                webRequestStream.Dispose();
            }

            IResponse respReturn = request.ParseResponse(webResponse);
            webResponse.Close();
            return respReturn;
        }
    }
}
I thank the folks here who tried to help. Unfortunately, this needed a call to Microsoft Professional Support.
Even though I was using ServicePointManager.Expect100Continue = false;, it was happening too late in the app life cycle. Looking at the System.Net trace logs, we saw that the Expect: 100-continue header was still being sent (except when using Fiddler). The solution was to put this into the app startup (in Main()).
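A minimal sketch of what that startup configuration looks like (the connection limit value is ours; pick what suits your workload):
static void Main(string[] args)
{
    // Set these once, before the first HttpWebRequest is created,
    // so every ServicePoint picks them up.
    ServicePointManager.Expect100Continue = false;
    ServicePointManager.DefaultConnectionLimit = 60;

    // ... rest of the app
}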
I was also trying to read the response stream before closing the request stream.
After fixing that, everything sped up nicely. The app runs much faster without Fiddler than with it, which is what I would expect.
A couple of people said to call Dispose on the HttpWebResponse. That class does not have a public Dispose method. I'm assuming .Close() calls .Dispose() internally, though.
You can play around with Fiddler's "Connection Options" to see whether the reason for Fiddler's powerful throughput is its reuse of client connections. If that's the case, you may want to consider implementing a shared secure HTTP connection pool, or just go watch a movie or something. ^^
Taking a wild guess here, but it might have to do with a simple app.config setting:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="40"/>
  </connectionManagement>
</system.net>
I had the same problem in a multi-threaded HTTP requesting app once, and this solved it.
Given that your application sends "arbitrary http requests rather small in size", it may help to disable the Nagle algorithm:
ServicePointManager.UseNagleAlgorithm = false;
From MSDN: A number of elements can impact performance when using HttpWebRequest, including:
The ServicePointManager class.
The DefaultConnectionLimit property.
The UseNagleAlgorithm property.
The Nagle algorithm [...] accumulates sequences of small messages into larger TCP packets before the data is sent over the network. [...] Generally for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen. [...] if an application is using low-latency connections, it may help to set this property to false.
Related
I would like to know how reusing HttpWebRequests works, to avoid the SSL handshake process every time.
I use the keep-alive header in the request, and the first handshake is successful, but I would like to reuse the connection in order to avoid future handshakes against the same certificate.
Thing is, I don't know whether I have to reuse the HttpWebRequest object instance, or whether, even if I create a new request object, it will use the same connection, since the keep-alive is already in place and working.
Should I store the existing request object, let's say at class level, and reuse it? Or can I safely dispose the object, and the next time I create a request it will be under the effect of the keep-alive connection?
I am asking because I need to lower the timings in an application, and the worst part is always the SSL handshake, which can take over 3 seconds on a phone with a medium signal from the carrier.
I am using C# to develop.
I have tried to look for this kind of information, but everything I read on the internet is about how to set up the SSL server and enable certain settings, not how to make the client work with these features.
EDIT: FINDINGS RESULTS
I created a sample program in .NET C# with the following code:
Stopwatch sw = new Stopwatch();
sw.Start();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri("https://www.gmail.com"));
request.KeepAlive = true;
request.Method = "GET";
request.ContentType = "application/json";
request.ContentLength = 0;
request.ConnectionGroupName = "test";
//request.UnsafeAuthenticatedConnectionSharing = true;
//request.PreAuthenticate = true;
var response = request.GetResponse();
//response.Close();
request.Abort();
sw.Stop();
listBox1.Items.Add("Connection in : " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
HttpWebRequest request2 = (HttpWebRequest)WebRequest.Create(new Uri("https://www.gmail.com"));
request2.KeepAlive = true;
request2.Method = "GET";
//request2.UnsafeAuthenticatedConnectionSharing = true;
//request2.PreAuthenticate = true;
request2.ContentType = "application/json";
request2.ContentLength = 0;
request2.ConnectionGroupName = "test";
var response2 = request2.GetResponse();
//response2.Close();
request2.Abort();
sw.Stop();
listBox1.Items.Add("Connection 2 in : " + sw.Elapsed.ToString());
The results were that the first connection triggered the certificate validation callback 3 times (once for each certificate), and the second connection triggered it only once; but when I CLOSED THE RESPONSE before performing the next request, no callback was triggered.
I suppose that keeping a response open keeps the socket open, and that's why only the partial handshake takes place (not the full certificate chain).
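For reference, a callback along these lines (registered before the first request) is one way to watch when certificate validation runs; this is just a sketch, not the code from my app:
using System.Net.Security; // for SslPolicyErrors

ServicePointManager.ServerCertificateValidationCallback = (sender, certificate, chain, sslPolicyErrors) =>
{
    // Fires whenever a certificate is validated; on a reused
    // connection you should see it fire less (or not at all).
    Console.WriteLine("Validation callback: " + certificate.Subject);
    return sslPolicyErrors == SslPolicyErrors.None;
};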
Sorry if I sound kind of noob in this matter; SSL and timings were coded by a workmate and the code was not clear to follow. But I think I have the answer. Thanks Poupou for your tremendous help.
This is already built into the SSL/TLS stack shipped with Xamarin.iOS (i.e. at a lower level than HttpWebRequest). There's nothing to set up to enable this. In fact, you would need extra code if you wanted to disable it.
If the server supports it, then subsequent handshakes will already be faster, because a session ID cache will be used (see the TLS 1.0 RFC, page 30).
However, the server does not have to honor the session ID given to it. In such a case, a full handshake will need to be done again. IOW, you cannot force this from the client (only offer it).
You can verify this using a network analyzer, e.g. Wireshark, by looking at the exchanges (and comparing them to the RFC).
Good day.
I really need help on this issue. I have tried every possible option here.
I use a REST API in an Outlook add-in using C#. The code links Outlook items to CRM records, one way. The add-in works 100% fine, but after a couple of calls I keep getting the error "The operation has timed out".
When I use the Google Chrome app "Advanced REST Client" I can post the same request 50 times in a row with no timeout error.
From within the add-in I use POST, GET and PATCH HttpWebRequests, and I get the error for all of them. The error happens at the code line System.IO.Stream os = req.GetRequestStream();.
Below is the method:
public static string HttpPatch(string URI, string Parameters)
{
var req = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(URI);
if (GlobalSettings.useproxy.Equals("true"))
{
req.Proxy = WebRequest.DefaultWebProxy;
req.Credentials = new NetworkCredential(GlobalSettings.proxyusername, GlobalSettings.proxypassword, GlobalSettings.proxydomain);
req.Proxy.Credentials = new NetworkCredential(GlobalSettings.proxyusername, GlobalSettings.proxypassword, GlobalSettings.proxydomain);
}
req.Headers.Add("Authorization: OAuth " + GlobalSettings.token.access_token);
req.ContentType = "application/json";
req.Method = "PATCH";
byte[] data = System.Text.Encoding.Unicode.GetBytes(Parameters);
req.ContentLength = data.Length;
using (System.IO.Stream os = req.GetRequestStream())
{
os.Write(data, 0, data.Length);
os.Close();
}
WebResponse resp;
try
{
resp = req.GetResponse();
}
catch (WebException ex)
{
if (ex.Message.Contains("401"))
{
}
}
}
I suspect the problem is that you're not disposing of the WebResponse. That means the connection pool thinks that the connection is still in use, and will wait for the response to be disposed before reusing it for another request. The connection is needed in order to get a request stream, and it won't become available unless the finalizer happens to kick in at a useful time, hence the timeout.
Simply change your code using the response to use a using statement - or in your case, potentially something a little more complicated using a finally block as you're assigning it within a try block. (We can't really see how you're using the response, which makes it hard to give sample code around that. But fundamentally, you need to dispose it.)
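A rough sketch of that shape, assuming you only need the response body (names and the 401 handling are illustrative):
WebResponse resp = null;
try
{
    resp = req.GetResponse();
    using (var reader = new System.IO.StreamReader(resp.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
catch (WebException ex)
{
    if (ex.Message.Contains("401"))
    {
        // handle the auth failure
    }
    throw;
}
finally
{
    // Ensures the connection is returned to the pool even on failure.
    if (resp != null) resp.Close();
}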
I use C#.
The first time I use WebRequest.GetRequestStream() in my code, it takes up to 20 seconds. After that, it takes under 1 second.
Below is my code. The line "this.requestStream = httpRequest.GetRequestStream()" is causing the delay.
StringBuilder postData = new StringBuilder(100);
postData.Append("param=");
postData.Append("test");
byte[] dataArray = Encoding.UTF8.GetBytes(postData.ToString());
this.httpRequest = (HttpWebRequest)WebRequest.Create("http://myurl.com");
httpRequest.Method = "POST";
httpRequest.ContentType = "application/x-www-form-urlencoded";
httpRequest.ContentLength = dataArray.Length;
this.requestStream = httpRequest.GetRequestStream();
using (requestStream)
requestStream.Write(dataArray, 0, dataArray.Length);
this.webResponse = (HttpWebResponse)httpRequest.GetResponse();
Stream responseStream = webResponse.GetResponseStream();
StreamReader responseReader = new System.IO.StreamReader(responseStream, Encoding.UTF8);
String responseString = responseReader.ReadToEnd();
How can I see what causes this? (for instance: DNS lookup? Server not responding?)
Thanks and regards, Koen
You could also try to set .Proxy = null. Sometimes it tries to auto-detect a proxy, which takes up time.
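For example (this skips proxy auto-detection; adjust to your environment):
httpRequest.Proxy = null; // for this request only
// or globally, before any request is created:
WebRequest.DefaultWebProxy = null;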
That sounds like your application being compiled on first use. This is how .NET works.
It's actually the framework's HTTP stack doing startup network proxy checking to set up the HttpWebRequest.DefaultWebProxy property.
In my application, as part of the startup actions, I create a fully formed request as a background task to get this overhead out of the way.
HttpWebRequest web = (HttpWebRequest)WebRequest.Create(m_ServletURL);
web.UserAgent = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26";
Setting the UserAgent field, in my case, is what triggers the startup overhead.
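Roughly like this (a sketch; m_ServletURL is the URL the app will talk to later anyway):
// At startup: warm up proxy detection off the main thread
System.Threading.ThreadPool.QueueUserWorkItem(state =>
{
    HttpWebRequest warmup = (HttpWebRequest)WebRequest.Create(m_ServletURL);
    warmup.UserAgent = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26";
    // Creating the request and setting these properties is enough to
    // pay the proxy-detection cost here instead of on the first real call.
});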
I had the same issue, but .Proxy = null didn't solve it for me. Depending on the network structure, the problem might be connected to IPv6. The first request took almost exactly 21 seconds each time the application ran, so I argue it must be a timeout value: once it is reached, the IPv4 fallback is used (for subsequent calls as well). Forcing the use of IPv4 in the first place solved the issue for me!
HttpWebRequest request = WebRequest.Create("http://myurl.com") as HttpWebRequest;
request.ServicePoint.BindIPEndPointDelegate = (servicePoint, remoteEndPoint, retryCount) =>
{
if (remoteEndPoint.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
{
return new IPEndPoint(IPAddress.Any, 0);
}
throw new System.InvalidOperationException("No IPv4 address found.");
};
One problem may be the fact that .NET, by default, only allows 2 connections at a time.
You can increase the number of simultaneous connections with:
ServicePointManager.DefaultConnectionLimit = newConnectionLimit;
Determining the optimal value is left to you; it depends on your workload.
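If you'd rather not change the global default, you can raise the limit for a single endpoint instead (a sketch; the URL is illustrative):
// Raise the connection limit for one host only
ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("https://example.com"));
sp.ConnectionLimit = 20;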
I have a big problem: I need to send 200 objects at once and avoid timeouts.
while (true)
{
    NameValueCollection data = new NameValueCollection();
    data.Add("mode", nat);

    using (var client = new WebClient())
    {
        byte[] response = client.UploadValues(serverA, data);
        responseData = Encoding.ASCII.GetString(response);
        string[] split = responseData.Split(new[] { '!' }, StringSplitOptions.RemoveEmptyEntries);
        string command = split[0];
        string server = split[1];
        string requestCountStr = split[2];

        switch (command)
        {
            case "check":
                int requestCount = Convert.ToInt32(requestCountStr);
                for (int i = 0; i < requestCount; i++)
                {
                    Uri myUri = new Uri(server);
                    WebRequest request = WebRequest.Create(myUri);
                    request.Timeout = 200000;
                    WebResponse myWebResponse = request.GetResponse();
                }
                break;
        }
    }
}
This produces the error:
Unhandled Exception: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at vir_fu.Program.Main(String[] args)
The requestCount loop works fine outside my base code, but when I add it to my project I get this error. I have tried setting request.Timeout = 200; but it didn't help.
It means what it says. The operation took too long to complete.
BTW, look at WebRequest.Timeout and you'll see that you've set your timeout to 1/5 of a second.
Close/dispose your WebResponse object.
I'm not sure about your first code sample where you use WebClient.UploadValues; it's not really enough to go on. Could you paste more of your surrounding code? Regarding your WebRequest code, there are two things at play here:
You're only requesting the headers of the response**; you never read the body of the response by opening and reading (to its end) the ResponseStream. Because of this, the WebRequest client helpfully leaves the connection open, expecting you to request the body at any moment. Until you either read the response body to completion (which will automatically close the stream for you), clean up and close the stream (or the WebRequest instance), or wait for the GC to do its thing, your connection will remain open.
You have a default maximum of two active connections to the same host. This means you use up your first two connections and then never dispose of them, so your client isn't given the chance to complete the next request before it reaches its timeout (which is in milliseconds, btw, so you've set it to 0.2 seconds; the default should be fine).
If you don't want the body of the response (or you've just uploaded or POSTed something and aren't expecting a response), simply close the stream, or the client, which will close the stream for you.
The easiest way to fix this is to make sure you use using blocks on disposable objects:
for (int i = 0; i < requestCount; i++)
{
    Uri myUri = new Uri(server);
    WebRequest myWebRequest = WebRequest.Create(myUri);
    //myWebRequest.Timeout = 200;
    using (WebResponse myWebResponse = myWebRequest.GetResponse())
    {
        // Do what you want with myWebResponse.Headers.
    } // Your response will be disposed of here
}
Another solution is to allow 200 concurrent connections to the same host. However, unless you're planning to multi-thread this operation so that you actually need multiple concurrent connections, this won't really help you:
ServicePointManager.DefaultConnectionLimit = 200;
When you're getting timeouts within code, the best thing to do is try to recreate that timeout outside of your code. If you can't, the problem probably lies with your code. I usually use cURL for that, or just a web browser if it's a simple GET request.
** In reality, you're actually requesting the first chunk of data from the response, which contains the HTTP headers, and also the start of the body. This is why it's possible to read HTTP header info (such as Content-Encoding, Set-Cookie etc) before reading from the output stream. As you read the stream, further data is retrieved from the server. WebRequest's connection to the server is kept open until you reach the end of this stream (effectively closing it as it's not seekable), manually close it yourself or it is disposed of. There's more about this here.
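To let the connection be reused rather than torn down, you can drain the stream before disposing; a sketch (assuming the usual System.Net/System.IO usings):
using (WebResponse myWebResponse = myWebRequest.GetResponse())
using (Stream body = myWebResponse.GetResponseStream())
{
    // Read to end-of-stream so the connection goes back to the pool promptly.
    byte[] buffer = new byte[4096];
    while (body.Read(buffer, 0, buffer.Length) > 0) { }
}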
A proxy issue can also cause this. Try putting this in your web.config:
<defaultProxy useDefaultCredentials="true" enabled="true">
  <proxy usesystemdefault="True" />
</defaultProxy>
I remember having the same problem a while back using WCF, due to the quantity of data I was passing. I changed timeouts everywhere, but the problem persisted. What I finally did was open the connection as a streamed request; I needed to change both the client and the server side, but it worked that way. Since it was a streamed connection, the server kept reading until the stream ended.
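In WCF terms that means switching the binding's transfer mode on both sides; something along these lines (a sketch, with illustrative timeouts and size limit):
<basicHttpBinding>
  <binding transferMode="Streamed"
           maxReceivedMessageSize="2147483647"
           sendTimeout="00:10:00" receiveTimeout="00:10:00" />
</basicHttpBinding>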
I encountered the same error; adding a short delay before each request solved the problem for me:
Task.Delay(2000).Wait(); // note: a bare Task.Delay(2000); without Wait() or await doesn't actually pause
We've got a .NET 2.0 WinForms app that needs to upload files to an IIS6 server via WebDAV. From time to time we get complaints from a remote office that they get one of the following error messages:
The underlying connection was closed: an unexpected error occurred on send.
The underlying connection was closed: an unexpected error occurred on receive.
This only seems to occur with large files (~20 MB plus). I've tested it with a 40 MB file from my home computer and tried putting Sleeps in the loop to simulate a slow connection, so I suspect it's down to network issues at their end... but:
The IT at the remote office are no help.
I'd like to rule out the possibility that my code is at fault.
So - can anybody spot any mistakes, or suggest any workarounds that might 'bulletproof' the code against this problem? Thanks for any help. A chopped-down version of the code follows:
public bool UploadFile(string localFile, string uploadUrl)
{
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uploadUrl);
try
{
req.Method = "PUT";
req.AllowWriteStreamBuffering = true;
req.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
req.Credentials = Program.WebService.Credentials;
req.SendChunked = false;
req.KeepAlive = true;
Stream reqStream = req.GetRequestStream();
FileStream rdr = new FileStream(localFile, FileMode.Open, FileAccess.Read);
byte[] inData = new byte[4096];
int bytesRead = rdr.Read(inData, 0, inData.Length);
while (bytesRead > 0)
{
reqStream.Write(inData, 0, bytesRead);
bytesRead = rdr.Read(inData, 0, inData.Length);
}
reqStream.Close();
rdr.Close();
System.Net.HttpWebResponse response = (HttpWebResponse)req.GetResponse();
if (response.StatusCode != HttpStatusCode.OK && response.StatusCode!=HttpStatusCode.Created)
{
MessageBox.Show("Couldn't upload file");
return false;
}
}
catch (Exception ex)
{
MessageBox.Show(ex.ToString());
return false;
}
return true;
}
Try setting KeepAlive to false:
req.KeepAlive = false;
This will cause the connection to be closed and reopened for each request, instead of using a persistent connection. I found a lot of references on the Web that suggested this in order to solve an error similar to yours. This is a relevant link.
Anyway, it is not a good idea to use HTTP PUT (or HTTP POST) to upload large files. It would be better to use FTP or a download/upload manager, which handle retries, connection problems and timeouts automatically for you. The upload will be faster too, and you can also resume a stopped upload. If you decide to stay with HTTP, you should at least try to add a retry mechanism, as sketched below. If an upload is taking too long, there is a high probability that it will fail due to a proxy, server timeout, firewall, or some other reason that has nothing to do with your code.
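A retry wrapper around your existing method could be as simple as this (a sketch; tune the attempt count and backoff):
bool uploaded = false;
for (int attempt = 1; attempt <= 3 && !uploaded; attempt++)
{
    uploaded = UploadFile(localFile, uploadUrl);
    if (!uploaded)
        System.Threading.Thread.Sleep(5000 * attempt); // back off before retrying
}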
To remove the risk of a bug in your code, try using WebClient:
using (WebClient client = new WebClient())
{
client.UseDefaultCredentials = Program.WebService.UseDefaultCredentials;
client.Credentials = Program.WebService.Credentials;
client.UploadFile(uploadUrl, "PUT", localFile);
}
Maybe try using POST, but the real culprit is probably the content type.
Try setting
req.ContentType = "application/octet-stream";
req.ContentLength = inData.Length;
or look at the code in the accepted answer here: Upload files with HTTPWebrequest (multipart/form-data)
Both my example and the link I provided involve modifying the ContentType. My example is simpler, but might not work, as most applications receiving files expect multipart.
Please check whether [Enable HTTP Keep-Alives] is set to [on] on the [Web Site] tab in IIS Manager.
The size of the uploads might be limited.
See here for one discussion:
http://www.codeproject.com/KB/aspnet/uploadlargefilesaspnet.aspx
Start by checking some basic configuration. The default values of either of the following may cause problems in file uploads, including termination of the connection. I believe IIS 6 would never allow a file upload > 2 GB (even if it could complete, regardless of config). MSDN describes these nicely.
<httpRuntime executionTimeout="30" maxRequestLength="200"/>
EDIT: This is ASP.NET config, of course, which assumes you are running your own WebDAV server or a 3rd-party server within ASP.NET. If it's a different WebDAV server, you'll want to look for the equivalent.
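For example, to allow larger and longer uploads you would raise both values (numbers illustrative; executionTimeout is in seconds, maxRequestLength in KB):
<httpRuntime executionTimeout="3600" maxRequestLength="51200"/>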