How to reuse a connection/request to avoid the SSL handshake - C#

I would like to know how reusing HttpWebRequests works to avoid the SSL handshake process every time.
I use the keep-alive header in the request and the first handshake is successful, but I would like to reuse the request in order to avoid future handshakes against the same certificate.
The thing is, I don't know whether I have to reuse the HttpWebRequest object instance, or whether a newly created request object will use the same connection, since keep-alive is already in place and working.
Should I store the existing request object, say at class level, and reuse it? Or can I safely dispose of the object, so that the next time I create a request it will be under the effect of the keep-alive connection?
I am asking because I need to lower the timings in an application, and the worst part is always the SSL handshake, which can take over 3 seconds on a phone with a medium carrier signal.
I am using C# to develop.
I have tried to look for this kind of information, but all I can find online is how to set up the SSL server and enable certain settings, not how to make the client work with these features.
EDIT: FINDINGS
I created a sample program in .NET C# with the following code:
Stopwatch sw = new Stopwatch();
sw.Start();
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(new Uri("https://www.gmail.com"));
request.KeepAlive = true;
request.Method = "GET";
request.ContentType = "application/json";
request.ContentLength = 0;
request.ConnectionGroupName = "test";
//request.UnsafeAuthenticatedConnectionSharing = true;
//request.PreAuthenticate = true;
var response = request.GetResponse();
//response.Close();
request.Abort();
sw.Stop();
listBox1.Items.Add("Connection in : " + sw.Elapsed.ToString());
sw.Reset();
sw.Start();
HttpWebRequest request2 = (HttpWebRequest)WebRequest.Create(new Uri("https://www.gmail.com"));
request2.KeepAlive = true;
request2.Method = "GET";
//request2.UnsafeAuthenticatedConnectionSharing = true;
//request2.PreAuthenticate = true;
request2.ContentType = "application/json";
request2.ContentLength = 0;
request2.ConnectionGroupName = "test";
var response2 = request2.GetResponse();
//response2.Close();
request2.Abort();
sw.Stop();
listBox1.Items.Add("Connection 2 in : " + sw.Elapsed.ToString());
The result was that the first connection triggered the certificate validation callback 3 times (once for each certificate in the chain) and the second connection only once; but when I CLOSED THE RESPONSE before performing the next request, no callback was triggered at all.
I suppose that keeping a response open keeps the socket open, and that's why the partial handshake takes place (not the full certificate chain).
Sorry if I sound like a bit of a noob in this matter; the SSL and timing code was written by a coworker and was not easy to follow. But I think I have the answer. Thanks Poupou for your tremendous help.

This is already built into the SSL/TLS stack shipped with Xamarin.iOS (i.e. at a lower level than HttpWebRequest). There's nothing to set up to enable this. In fact, you would need extra code if you wanted to disable it.
If the server supports it, then subsequent handshakes will already be faster, because a session ID cache will be used (see the TLS 1.0 RFC, page 30).
However, the server does not have to honor the session ID (given to it). In that case a full handshake will need to be done again. In other words, you cannot force this from the client (you can only offer it).
You can verify this using a network analyzer, e.g. Wireshark, by looking at the exchanges (and comparing them to the RFC).
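For completeness, here is a minimal client-side sketch (not from the original answer) of the pattern that lets the stack reuse the connection and TLS session: create a fresh HttpWebRequest per call, keep KeepAlive enabled, and close each response so the pooled connection stays available. The URL and timing code are placeholders.

using System;
using System.Diagnostics;
using System.Net;

static class KeepAliveSample
{
    // Placeholder endpoint used for illustration only.
    const string Url = "https://www.example.com/";

    static void TimedGet(string label)
    {
        var sw = Stopwatch.StartNew();
        var request = (HttpWebRequest)WebRequest.Create(Url);
        request.Method = "GET";
        request.KeepAlive = true;                // ask the server to keep the connection open
        request.ConnectionGroupName = "sample";  // group requests onto the same pooled connection

        // Closing the response returns the connection to the pool instead of tearing it down,
        // so the next request can skip the full TLS handshake (if the server allows reuse).
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            Console.WriteLine("{0}: {1} in {2}", label, response.StatusCode, sw.Elapsed);
        }
    }

    static void Main()
    {
        TimedGet("First request");   // full handshake expected here
        TimedGet("Second request");  // should reuse the pooled connection / TLS session
    }
}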

Related

Why is this HttpWebRequest slow in C#

I am using this code to perform a simple REST request. (The code mostly comes from this question: How to post JSON to the server?)
Why is it so slow? I'm using VS 2013 and it takes about 15 seconds on the first try and then about 4 seconds on subsequent tries, yet in another language (Delphi) I can make an HTTP request and it takes about 1 second consistently.
var request = (HttpWebRequest)WebRequest.Create("http://jsonplaceholder.typicode.com/posts");
request.ContentType = "application/json";
request.Method = "POST";
request.ServicePoint.Expect100Continue = false;
using (var streamWriter = new StreamWriter(request.GetRequestStream()))
{
    string json = new JavaScriptSerializer().Serialize(new
    {
        title = "foo",
        body = "bar",
        userId = "1"
    });
    streamWriter.Write(json);
}
var response = (HttpWebResponse)request.GetResponse();
using (var streamReader = new StreamReader(response.GetResponseStream()))
{
    var result = streamReader.ReadToEnd();
    textBox1.Text = result;
}
P.S. You can test this code for yourself; it simply uses a public test REST server on the internet at the above URL.
What do you mean by "first try"? It means the first try after I leave the computer for a while.
Before reaching the server, there is a process of finding the IP address of the server. This process is called DNS resolution.
The first time, your application has to go through DNS resolution in order to find the IP address. Once the IP address is resolved, it is cached on the local machine.
So further calls don't go through DNS resolution; they can use the cached IP. After a while, the cache is dropped and you'll hit the DNS server again to resolve the IP address.
This is the only explanation I can come up with for the delay you're noticing. Whenever you notice a delay, it probably means that you're hitting the DNS server, either because it is the first time or because the cache has expired.
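If you want to confirm that theory, here is a small sketch (mine, not from the answer) that times the DNS lookup separately from the HTTP call, using the host name from the question:

using System;
using System.Diagnostics;
using System.Net;

class DnsTiming
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        IPAddress[] addresses = Dns.GetHostAddresses("jsonplaceholder.typicode.com");
        sw.Stop();
        Console.WriteLine("DNS resolved {0} address(es) in {1}", addresses.Length, sw.Elapsed);
        // If this is slow only on the first run (or after the cache expires), the delay in the
        // HTTP request is likely DNS resolution rather than HttpWebRequest itself.
    }
}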
Why is it faster in the other environment (Delphi)?
I'm sorry, I can't come up with a good reason for this.

How to make a C# application using HttpWebRequest behave like Fiddler

I have a console app that uses 20 or so threads to connect to a remote web server and send arbitrary HTTP requests, rather small in size, 100% over SSL. The remote web server is actually an entire load-balanced data center full of high-availability systems that can handle hundreds of thousands of requests per second. This is not a server or bandwidth issue. That said, I don't run it, nor do I have any influence over how it is configured, so I couldn't make server-side changes even if I wanted to.
When running the app with Fiddler, the app performs amazingly fast. When not running through Fiddler it is much slower, to the point of being useless for the task at hand. It also seems to lock up at some point rather early in the process, but this could simply be a deadlock issue; I'm not sure yet.
Anyhow, Fiddler, being a proxy, is undoubtedly modifying my requests/connections in some way that ensures wonderful throughput, but I have no idea what it's doing. I am trying to figure it out so that I can make my .NET application mimic Fiddler's connection-handling behavior without actually having to run it through Fiddler.
I've pasted the connection code below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Net;
using System.IO;

namespace Redacted
{
    public class HiveCommunicator
    {
        public static IResponse SendRequest(IRequest request)
        {
            ServicePointManager.DefaultConnectionLimit = 60;
            ServicePointManager.Expect100Continue = false;
            string hostUrlString = string.Empty;
            if (request.SiteID <= 0)
                hostUrlString = string.Format("{0}://{1}{2}", request.UseSSL ? "https" : "http", DataCenters.GetCenter(request.DataCenter), request.Path);
            else
                hostUrlString = string.Format("{0}://{1}{2}", request.UseSSL ? "https" : "http", DataCenters.GetCenter(request.DataCenter), string.Format(request.Path, request.SiteID));
            HttpWebRequest webRequest = (HttpWebRequest)HttpWebRequest.Create(hostUrlString);
            switch (request.ContentType)
            {
                default:
                case ContentTypes.XML:
                    webRequest.ContentType = "application/xml";
                    break;
                case ContentTypes.JSON:
                    webRequest.ContentType = "application/json";
                    break;
                case ContentTypes.BINARY:
                    webRequest.ContentType = "application/octet-stream";
                    break;
            }
            if (request.RequiresAuthorizationToken)
            {
                AuthorizationToken tok = HiveAuthentication.GetToken(request.SiteID);
                if (tok == null)
                {
                    return null;
                }
                webRequest.Headers.Add(HttpRequestHeader.Authorization, tok.Token);
            }
            bool UsesRequestBody = true;
            switch (request.HttpVerb)
            {
                case HttpVerbs.POST:
                    webRequest.Method = "POST";
                    break;
                case HttpVerbs.DELETE:
                    webRequest.Method = "DELETE";
                    UsesRequestBody = false;
                    break;
                case HttpVerbs.PUT:
                    webRequest.Method = "PUT";
                    break;
                default:
                case HttpVerbs.GET:
                    webRequest.Method = "GET";
                    UsesRequestBody = false;
                    break;
            }
            HttpWebResponse webResponse = null;
            Stream webRequestStream = null;
            byte[] webRequestBytes = null;
            if (UsesRequestBody)
            {
                webRequestBytes = request.RequestBytes;
                webRequest.ContentLength = webRequestBytes.Length;
                webRequestStream = webRequest.GetRequestStream();
                for (int i = 0; i < webRequest.ContentLength; i++)
                {
                    webRequestStream.WriteByte(webRequestBytes[i]);
                }
            }
            try
            {
                webResponse = (HttpWebResponse)webRequest.GetResponse();
            }
            catch (WebException ex)
            {
                webResponse = (HttpWebResponse)ex.Response;
            }
            if (UsesRequestBody)
            {
                webRequestStream.Close();
                webRequestStream.Dispose();
            }
            IResponse respReturn = request.ParseResponse(webResponse);
            webResponse.Close();
            return respReturn;
        }
    }
}
I thank the folks here who tried to help. Unfortunately this needed a call to Microsoft Professional Support.
Even though I was using ServicePointManager.Expect100Continue = false;, it was happening too late in the app life cycle. Looking at the System.Net trace logs, we saw that the Expect: 100-continue header was still being used (except when using Fiddler). The solution was to put this into the app startup (in Main()).
I was also trying to read the response stream before closing the request stream.
After fixing that, everything sped up nicely. The app runs much faster without Fiddler than with it, which is what I would expect.
A couple of people said to call Dispose on the HttpWebResponse. That class does not have a public Dispose method; I'm assuming .Close() calls .Dispose() internally, though.
You can play around with Fiddler's "Connection Options" to see if the reason for Fiddler's powerful throughput is reuse of client connections. If that's the case, you may want to consider implementing a shared secure HTTP connection pool, or just go watch a movie or something. ^^
Taking a wild guess here, but it might have to do with a simple app.config setting:
<system.net>
  <connectionManagement>
    <add address="*" maxconnection="40"/>
  </connectionManagement>
</system.net>
I had the same problem in a multi-threaded HTTP-requesting app once and this solved it.
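If you prefer to set the limit in code rather than in app.config, a rough equivalent (the value 40 simply mirrors the config above) is to set ServicePointManager.DefaultConnectionLimit before the first request goes out:

using System.Net;

static class Startup
{
    static void Main()
    {
        // Equivalent to <add address="*" maxconnection="40"/> in app.config;
        // must run before the first HttpWebRequest is created.
        ServicePointManager.DefaultConnectionLimit = 40;

        // ... start the worker threads that issue the requests here ...
    }
}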
Given that your application sends "arbitrary http requests rather small in size", it may help to disable the Nagle algorithm.
ServicePointManager.UseNagleAlgorithm = false;
From MSDN: A number of elements can impact performance when using HttpWebRequest, including:
The ServicePointManager class.
The DefaultConnectionLimit property.
The UseNagleAlgorithm property.
The Nagle algorithm [...] accumulates sequences of small messages into larger TCP packets before the data is sent over the network. [...] Generally for constant high-volume throughput, a performance improvement is realized using the Nagle algorithm. But for smaller throughput applications, degradation in performance may be seen. [...] if an application is using low-latency connections, it may help to set this property to false.
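As a rough sketch of how these knobs might be applied at startup (the URL is a placeholder, and whether disabling Nagle actually helps depends on your traffic pattern, so measure both ways):

using System;
using System.Net;

static class HttpTuning
{
    static void Main()
    {
        // Global defaults; set them before the first request is created.
        ServicePointManager.Expect100Continue = false;   // skip the Expect: 100-continue round trip
        ServicePointManager.UseNagleAlgorithm = false;   // favor latency over packet coalescing
        ServicePointManager.DefaultConnectionLimit = 60; // allow more parallel connections per host

        // The same settings can also be tuned per host via the ServicePoint.
        var request = (HttpWebRequest)WebRequest.Create("https://www.example.com/");
        request.ServicePoint.UseNagleAlgorithm = false;
        request.ServicePoint.Expect100Continue = false;

        Console.WriteLine("Connection limit: {0}", request.ServicePoint.ConnectionLimit);
    }
}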

Fast HTTP call ASP.Net

I'm sending an HttpWebRequest to a 3rd party with the code below. The response takes between 2 and 22 seconds to come back. The 3rd party claims that once they receive the request, they send back a response immediately, and that none of their other partners are reporting any delays (but I'm not sure I believe them; they've lied before).
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://www.example.com");
request.Timeout = 38000;
request.Method = "POST";
request.ContentType = "text/xml";
StreamWriter streamOut = new StreamWriter(request.GetRequestStream(), System.Text.Encoding.ASCII);
streamOut.Write(XMLToSend); // XMLToSend is just a string that is maybe 1kb in size
streamOut.Close();
HttpWebResponse resp = null;
resp = (HttpWebResponse)request.GetResponse(); // This line takes between 2 and 22 seconds to return.
StreamReader responseReader = new StreamReader(resp.GetResponseStream(), Encoding.UTF8);
Response = responseReader.ReadToEnd(); // Response is merely a string to hold the response.
Is there any reason that the code above would just... pause? The code is running at a very solid hosting provider (a Rackspace Intensive segment), and the machine it is on isn't being used for anything else. I'm merely testing some code that we are about to put into production. So it's not that the machine is taxed, and given that it is Rackspace and we are paying a boatload, I doubt it is their network either.
I'm just trying to make sure that my code is as fast as possible and that I'm not doing anything stupid, because in a few weeks this code will be ramped up to run 20,000 requests to this 3rd party every hour.
Try doing a flush before you close.
streamOut.Flush();
streamOut.Close();
Also, download Microsoft Network Monitor to see for certain whether the hold-up is you or them; you can download it here...
http://www.microsoft.com/downloads/en/details.aspx?FamilyID=983b941d-06cb-4658-b7f6-3088333d062f&displaylang=en
There are a few things that I would do:
I would profile the code above and get some definitive timings.
Implement using statements in order to dispose of resources correctly.
Write the code in an async style; there's going to be an awful lot of IO wait once it's ramped up (see the sketch below).
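For the async point, a minimal sketch (mine, not the original poster's code) of what the request from the question might look like with async/await on .NET 4.5+; error handling is omitted and the payload is a placeholder:

using System;
using System.IO;
using System.Net;
using System.Text;
using System.Threading.Tasks;

static class AsyncPostSketch
{
    static async Task<string> PostXmlAsync(string url, string xmlToSend)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml";
        request.ServicePoint.Expect100Continue = false;

        // Write the body without blocking a thread-pool thread on network I/O.
        using (var requestStream = await request.GetRequestStreamAsync())
        using (var writer = new StreamWriter(requestStream, Encoding.ASCII))
        {
            await writer.WriteAsync(xmlToSend);
        }

        // Await the response instead of blocking in GetResponse().
        using (var response = (HttpWebResponse)await request.GetResponseAsync())
        using (var reader = new StreamReader(response.GetResponseStream(), Encoding.UTF8))
        {
            return await reader.ReadToEndAsync();
        }
    }

    static async Task Main()
    {
        string reply = await PostXmlAsync("https://www.example.com", "<ping/>");
        Console.WriteLine(reply);
    }
}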
Can you hit the URL in a regular old browser? How fast is that?
Can you hit other URLs (not your partner's) in this code? How fast is that?
It is entirely possible you're getting bitten by the 'latency bug' where even an instant response from your partner results in unpredictable delays from your perspective.
Another thought: I noticed the https in your URL. Is it any faster with http?

C# first time I use GetRequestStream() it takes 20 seconds

I use C#.
The first time I use WebRequest's GetRequestStream() in my code, it takes up to 20 seconds. After that it takes under 1 second.
Below is my code. The line "this.requestStream = httpRequest.GetRequestStream()" is causing the delay.
StringBuilder postData = new StringBuilder(100);
postData.Append("param=");
postData.Append("test");
byte[] dataArray = Encoding.UTF8.GetBytes(postData.ToString());
this.httpRequest = (HttpWebRequest)WebRequest.Create("http://myurl.com");
httpRequest.Method = "POST";
httpRequest.ContentType = "application/x-www-form-urlencoded";
httpRequest.ContentLength = dataArray.Length;
this.requestStream = httpRequest.GetRequestStream();
using (requestStream)
requestStream.Write(dataArray, 0, dataArray.Length);
this.webResponse = (HttpWebResponse)httpRequest.GetResponse();
Stream responseStream = webResponse.GetResponseStream();
StreamReader responseReader = new System.IO.StreamReader(responseStream, Encoding.UTF8);
String responseString = responseReader.ReadToEnd();
How can I see what causes this? (for instance: DNS lookup? Server not responding?)
Thanks and regards, Koen
You could also try setting .Proxy = null. Sometimes it tries to auto-detect a proxy, which takes time (see the snippet below).
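A minimal sketch of that suggestion, reusing the URL from the question:

using System.Net;

class ProxyFix
{
    static void Main()
    {
        // Skipping proxy auto-detection avoids the WPAD lookup that can stall the first request.
        var httpRequest = (HttpWebRequest)WebRequest.Create("http://myurl.com");
        httpRequest.Proxy = null;

        // Alternatively, disable the default proxy for every request in the process:
        // WebRequest.DefaultWebProxy = null;
    }
}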
That sounds like your application pre-compiling when you first hit it. This is how .NET works.
Here is a way to speed up your web app: link text
It's actually the framework doing its startup network proxy check to set up the HttpWebRequest.DefaultWebProxy property.
In my application, as part of the startup actions, I create a fully formed request as a background task to get this overhead out of the way.
HttpWebRequest web = (HttpWebRequest)WebRequest.Create(m_ServletURL);
web.UserAgent = "Mozilla/4.0 (Windows 7 6.1) Java/1.6.0_26";
Setting the UserAgent field is, in my case, what triggers the startup overhead.
I had the same issue, but .Proxy = null didn't solve it for me. Depending on the network structure, the problem might be connected to IPv6. The first request took almost exactly 21 seconds each time the application ran, so I assume a timeout value is being reached, after which the IPv4 fallback is used (for subsequent calls as well). Forcing the use of IPv4 in the first place solved the issue for me!
HttpWebRequest request = WebRequest.Create("http://myurl.com") as HttpWebRequest;
request.ServicePoint.BindIPEndPointDelegate = (servicePoint, remoteEndPoint, retryCount) =>
{
    if (remoteEndPoint.AddressFamily == System.Net.Sockets.AddressFamily.InterNetwork)
    {
        return new IPEndPoint(IPAddress.Any, 0);
    }
    throw new System.InvalidOperationException("No IPv4 address found.");
};
One problem may be the fact that .NET, by default, only allows 2 concurrent connections to the same host.
You can increase the number of simultaneous connections with:
ServicePointManager.DefaultConnectionLimit = newConnectionLimit;
We leave the determination of the optimal value to the user.

System.Net.WebException: The operation has timed out

I have a big problem: I need to send 200 objects at once and avoid timeouts.
while (true)
{
    NameValueCollection data = new NameValueCollection();
    data.Add("mode", nat);
    using (var client = new WebClient())
    {
        byte[] response = client.UploadValues(serverA, data);
        responseData = Encoding.ASCII.GetString(response);
        string[] split = responseData.Split(new[] { '!' }, StringSplitOptions.RemoveEmptyEntries);
        string command = split[0];
        string server = split[1];
        string requestCountStr = split[2];
        switch (command)
        {
            case "check":
                int requestCount = Convert.ToInt32(requestCountStr);
                for (int i = 0; i < requestCount; i++)
                {
                    Uri myUri = new Uri(server);
                    WebRequest request = WebRequest.Create(myUri);
                    request.Timeout = 200000;
                    WebResponse myWebResponse = request.GetResponse();
                }
                break;
        }
    }
}
This produces the error:
Unhandled Exception: System.Net.WebException: The operation has timed out
at System.Net.HttpWebRequest.GetResponse()
at vir_fu.Program.Main(String[] args)
The requestCount loop works fine outside my base code but when I add it to my project I get this error. I have tried setting request.Timeout = 200; but it didn't help.
It means what it says. The operation took too long to complete.
BTW, look at WebRequest.Timeout and you'll see that you've set your timeout for 1/5 second.
Close/dispose your WebResponse object.
I'm not sure about your first code sample, where you use WebClient.UploadValues; it's not really enough to go on. Could you paste more of your surrounding code? Regarding your WebRequest code, there are two things at play here:
You're only requesting the headers of the response**; you never read the body of the response by opening and reading (to its end) the ResponseStream. Because of this, the WebRequest client helpfully leaves the connection open, expecting you to request the body at any moment. Until you either read the response body to completion (which will automatically close the stream for you), clean up and close the stream (or the WebRequest instance), or wait for the GC to do its thing, your connection will remain open.
You have a default maximum of 2 active connections to the same host. This means you use up your first two connections and then never dispose of them, so your client isn't given the chance to complete the next request before it reaches its timeout (which is in milliseconds, by the way, so you've set it to 0.2 seconds; the default should be fine).
If you don't want the body of the response (or you've just uploaded or POSTed something and aren't expecting a response), simply close the stream, or the client, which will close the stream for you.
The easiest way to fix this is to make sure you use using blocks on disposable objects:
for (int i = 0; i < ops1; i++)
{
    Uri myUri = new Uri(site);
    WebRequest myWebRequest = WebRequest.Create(myUri);
    //myWebRequest.Timeout = 200;
    using (WebResponse myWebResponse = myWebRequest.GetResponse())
    {
        // Do what you want with myWebResponse.Headers.
    } // Your response will be disposed of here
}
Another solution is to allow 200 concurrent connections to the same host. However, unless you're planning to multi-thread this operation so you'd need multiple, concurrent connections, this won't really help you:
ServicePointManager.DefaultConnectionLimit = 200;
When you're getting timeouts within code, the best thing to do is try to recreate that timeout outside of your code. If you can't, the problem probably lies with your code. I usually use cURL for that, or just a web browser if it's a simple GET request.
** In reality, you're actually requesting the first chunk of data from the response, which contains the HTTP headers, and also the start of the body. This is why it's possible to read HTTP header info (such as Content-Encoding, Set-Cookie etc) before reading from the output stream. As you read the stream, further data is retrieved from the server. WebRequest's connection to the server is kept open until you reach the end of this stream (effectively closing it as it's not seekable), manually close it yourself or it is disposed of. There's more about this here.
A proxy issue can cause this. In IIS, put this in the web.config:
<defaultProxy useDefaultCredentials="true" enabled="true">
  <proxy usesystemdefault="True" />
</defaultProxy>
I remember I had the same problem a while back using WCF, due to the quantity of data I was passing. I changed timeouts everywhere but the problem persisted. What I finally did was open the connection as a stream request; I needed to change both the client and the server side, but it worked that way. Since it was a stream connection, the server kept reading until the stream ended.
I encountered the same error; adding
Task.Delay(2000);
to each request solved the problem.
