How do I increase the performance of HttpWebResponse? - c#

I am trying to build an application that sends and receives responses from a website.
None of the solutions I've read on Stack Overflow have solved my problem, so I think that my code could use optimization.
I have the following thread:
void DomainThreadNamecheapStart()
{
    while (stop == false)
    {
        foreach (string FromDomainList in DomainList.Lines)
        {
            if (FromDomainList.Length > 1)
            {
                // I removed my api parameters from the string
                string namecheapapi = "https://api.namecheap.com/foo" + FromDomainList + "bar";
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(namecheapapi);
                request.Proxy = null;
                request.ServicePoint.Expect100Continue = false;
                HttpWebResponse response = (HttpWebResponse)request.GetResponse();
                StreamReader sr = new StreamReader(response.GetResponseStream());
                status.Text = FromDomainList + "\n" + sr.ReadToEnd();
                sr.Close();
            }
        }
    }
}
This thread is called when a button is clicked:
private void button2_Click(object sender, EventArgs e)
{
    stop = false;
    Thread DomainThread = new Thread(new ThreadStart(DomainThreadNamecheapStart));
    DomainThread.Start();
}
I only receive around 12 responses in 10 seconds using the above code. When I make the same request in JavaScript or via a simple iframe, it's more than twice as fast, even though the browser doesn't use multiple threads for the connections; it waits until one request is finished and then starts the next.
I tried setting request.Proxy = null, but it had negligible impact.
I have noticed that HTTPS is 2-3 times slower than HTTP. Unfortunately, I have to use HTTPS. Is there anything I can do to make it faster?

My bet would be on the aspect you pointed out - the HTTPS protocol.
The exchange between client (browser) and server for plain HTTP is quite straightforward: ask for the info, get the info. With HTTP 1.0 the connection is then closed; with HTTP 1.1 it may stay alive for reuse. (See image 1 for details.)
But when you make an HTTPS request, the initial protocol overhead is considerable (image 2). Once the initial negotiation is done, however, symmetric encryption takes over and no further certificate negotiation is necessary, which speeds up data transfer.
What I think the problem is: if you destroy the HttpWebRequest object and create a new one, the full HTTPS negotiation takes place all over again, slowing your iteration. (HTTPS + HTTP 1.1 keep-alive should do fine, though.)
So, suggestions: switch to plain HTTP, or reuse the connection objects (a sketch follows the captions below).
And I hope it works for you. =)
(1) HTTP protocol handshake and response
(2) Initial HTTPS protocol handshake
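A minimal sketch of that reuse suggestion, using the question's own loop and URL placeholders (KeepAlive is already the HTTP 1.1 default on HttpWebRequest, shown here for emphasis):

foreach (string domain in DomainList.Lines)
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("https://api.namecheap.com/foo" + domain + "bar");
    request.KeepAlive = true;  // reuse the pooled TCP/TLS connection instead of renegotiating per request
    request.Proxy = null;
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    using (StreamReader reader = new StreamReader(response.GetResponseStream()))
    {
        string body = reader.ReadToEnd();  // draining the stream returns the connection to the pool
    }
}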

Try to modify the System.Net.ServicePointManager.DefaultConnectionLimit value (the default value is 2).
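For example, somewhere before the first request is made (20 is an illustrative value):

System.Net.ServicePointManager.DefaultConnectionLimit = 20;  // default is 2 concurrent connections per host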

Try these; they helped me improve performance:
ServicePointManager.Expect100Continue = false;
ServicePointManager.DefaultConnectionLimit = 200;
ServicePointManager.MaxServicePointIdleTime = 2000;
ServicePointManager.SetTcpKeepAlive(false, 0, 0);

Related

How to check internet availability during single frame?

In Unity3D, is there any way to instantly check network availability? By "instantly", I mean during a single frame, because lines of code below this check need to work based on network availability.
It all happens during Start(). I know I can ping a web page and determine network availability from any errors that occur while downloading it. However, an operation like this takes several seconds, whereas I need to know the result immediately, before moving to the next line of code in the script.
Assuming your game runs at a reasonable frame rate of 30 fps or greater, any solution you come up with (even pinging the host of your server) will only be valid when the round-trip latency is under 1/30th of a second (roughly 33 ms).
As such, it is unrealistic to handle this between frames (except perhaps on local networks).
Instead, I would suggest looking into threading your network code to decouple it from frames, as in the sketch below.
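A minimal sketch of that idea, assuming a plain .NET background thread and google.com as an illustrative probe host; the game loop polls the flags instead of blocking:

using System;
using System.Net;
using System.Threading;

public class ConnectivityProbe
{
    public volatile bool IsOnline;
    public volatile bool Done;

    public void StartCheck()
    {
        new Thread(() =>
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create("http://google.com");
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    IsOnline = response.StatusCode == HttpStatusCode.OK;
                }
            }
            catch (Exception)
            {
                IsOnline = false;
            }
            Done = true;  // poll this from Update(), e.g. once per frame
        }) { IsBackground = true }.Start();
    }
}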
Don't do this.
As long as you do not provide more information about what exactly you are planning, one cannot give you a proper answer.
This is unsatisfying for both sides.
But what you actually could do:
Open a TCP connection to a web available device like the google.com server.
Once the network state changes (connected, disconnected, ...), trigger a simple C# event or set a variable like isOnline = true;.
This can be a way. But it is a bad one.
It all happens during Start()
Yes, this is possible and can be done in one frame if that is the case. I would have strongly discouraged it if this operation were performed every frame in the Update function, but that's not the case here. If it is done once at the beginning of the app, that's fine; if you do it while the game is running, you will hurt performance.
but an operation like this takes several seconds
It is designed that way to avoid blocking the main thread.
Network operations should be done on a separate thread or with async methods to avoid blocking the main thread. This is how most Unity network APIs, such as WWW and UnityWebRequest, work: they use a thread in the background and then give you a coroutine to manage that thread by yielding in a coroutine function across frames until the network request completes.
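A minimal coroutine sketch of that pattern, assuming UnityWebRequest and google.com as an illustrative probe URL (not the answer's exact code; newer Unity versions expose a request.result enum instead of isNetworkError):

using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class OnlineCheck : MonoBehaviour
{
    IEnumerator Start()
    {
        using (UnityWebRequest probe = UnityWebRequest.Get("http://google.com"))
        {
            yield return probe.SendWebRequest();  // waits across frames without blocking the main thread
            bool isOnline = !probe.isNetworkError && probe.responseCode == 200;
            Debug.Log("Online: " + isOnline);
        }
    }
}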
To accomplish this in one frame, just use HttpWebRequest and provide a server URL to check. Most examples use google.com since it's always online, but make sure to provide a "User-Agent" so the connection is not rejected on mobile devices. Finally, if the HttpStatusCode is not 200 or an exception is thrown, there is a problem; otherwise assume it is connected.
bool isOnline()
{
    bool success = true;
    try
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://google.com");
        request.Method = "GET";
        // Make sure Google doesn't reject the request when called on a mobile device (Android)
        request.changeSysTemHeader("User-Agent", "Mozilla / 5.0(Windows NT 10.0; WOW64) AppleWebKit / 537.36(KHTML, like Gecko) Chrome / 55.0.2883.87 Safari / 537.36");
        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        if (response == null || response.StatusCode != HttpStatusCode.OK)
        {
            success = false;
        }
    }
    catch (Exception)
    {
        success = false;
    }
    return success;
}
Class for the custom changeSysTemHeader function used to change the User-Agent:
public static class ExtensionMethods
{
    public static void changeSysTemHeader(this HttpWebRequest request, string key, string value)
    {
        WebHeaderCollection wHeader = new WebHeaderCollection();
        wHeader[key] = value;
        // Swap in the new header collection via reflection, since the property is restricted
        FieldInfo fieldInfo = request.GetType().GetField("webHeaders",
            System.Reflection.BindingFlags.NonPublic
            | System.Reflection.BindingFlags.Instance
            | System.Reflection.BindingFlags.GetField);
        fieldInfo.SetValue(request, wHeader);
    }
}
Simple usage from the Start function done in one frame:
void Start()
{
    Debug.Log(isOnline());
}

Improving simultaneous HttpWebRequest performance in c#

I have an application that batches web requests to a single endpoint using the HttpWebRequest mechanism; the goal of the application is to revise large collections of product listings (specifically their descriptions).
Here is an example of the code I use to make these requests:
static class SomeClass
{
    static RequestCachePolicy cachePolicy;

    public static string DoRequest(string requestXml)
    {
        string responseXml = string.Empty;
        Uri ep = new Uri(API_ENDPOINT);
        HttpWebRequest theRequest = (HttpWebRequest)WebRequest.Create(ep);
        theRequest.ContentType = "text/xml;charset=\"utf-8\"";
        theRequest.Accept = "text/xml";
        theRequest.Method = "POST";
        theRequest.Headers[HttpRequestHeader.AcceptEncoding] = "gzip";
        theRequest.Proxy = null;
        if (cachePolicy == null)
        {
            cachePolicy = new RequestCachePolicy(RequestCacheLevel.BypassCache);
        }
        theRequest.CachePolicy = cachePolicy;
        using (Stream requestStream = theRequest.GetRequestStream())
        {
            using (StreamWriter requestWriter = new StreamWriter(requestStream))
            {
                requestWriter.Write(requestXml);
            }
        }
        WebResponse theResponse = theRequest.GetResponse();
        using (Stream responseStream = theResponse.GetResponseStream())
        {
            using (MemoryStream ms = new MemoryStream())
            {
                responseStream.CopyTo(ms);
                byte[] resultBytes = GzCompressor.Decompress(ms.ToArray());
                responseXml = Encoding.UTF8.GetString(resultBytes);
            }
        }
        return responseXml;
    }
}
My question is this: if I thread the task, I can call and complete at most 3 requests per second (based on the average sent data length), and this is through a gigabit connection to a router on business-grade fibre internet. However, if I divide the task into 2 sets and run the second set in a second process, I can double the requests completed per second.
The same can be said if I divide the task into 3 or 4 sets (after that, performance seems to plateau unless I grab another machine to do the same). Why is this? And can I change something in the first process so that running multiple processes (or computers) is no longer needed?
Things I have tried so far include the following:
Implementing GZip compression (as seen in the example above).
Re-using the RequestCachePolicy (as seen in the example above).
Setting Expect100Continue to false.
Setting DefaultConnectionLimit before the ServicePoint is created to a larger number.
Reusing the HttpWebRequest (does not work as remote host does not support it).
Increasing the ReceiveBufferSize on the ServicePoint both before and after creation.
Disabling proxy detection in Internet Explorer's Lan Settings.
My suspicion does not fall on the remote host, as I can quite clearly wrench far more performance out by the methods I described; instead, it is that some mechanism is capping the amount of data that is allowed to be sent through HttpWebRequest (maybe something to do with the ServicePoint?). Thanks in advance, and please let me know if there is anything else you need clarified.
--
Just to expand on the topic: my colleague and I ran the same code on a system running Windows Server Standard 2016 64-bit, and requests using this method ran significantly faster and in greater numbers. This suggests there is some sort of software-imposed bottleneck. The slow behaviour is observed on Windows 10 Home/Pro 64-bit and lower, on faster hardware than the server is running on.
Scaling
I do not have a better solution for your problem, but I think I know why your performance seems to peak and why it is machine-dependent.
Usually a program performs best when the number of threads or processes exactly matches the number of cores, because the system can run them independently and the overhead of scheduling and context switching is minimized.
You arrived at your peak performance with 3 or 4 different tasks. From that I would conclude your machine has 2 or 4 cores, which would exactly match my explanation.
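If that explanation holds, a sketch of matching the in-process parallelism to the core count, using the question's DoRequest (requestXmls is a hypothetical collection of payloads):

using System;
using System.Collections.Generic;
using System.Threading.Tasks;

static void RunBatch(IEnumerable<string> requestXmls)
{
    // Environment.ProcessorCount reports the number of logical cores
    var options = new ParallelOptions { MaxDegreeOfParallelism = Environment.ProcessorCount };
    Parallel.ForEach(requestXmls, options, xml => SomeClass.DoRequest(xml));
}

That said, for I/O-bound requests a limit above the core count is often worth testing, since the threads spend most of their time waiting on the network.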

Parallel.foreach() with WebRequest. 429 error

I am currently running a script that hits an API roughly 3000 times over an extended period of time. I am using Parallel.ForEach() so that I can send a few requests at a time to speed up the process. I create two requests in each foreach iteration, so I should have no more than 10 requests in flight. My question is that I keep receiving 429 errors from the server, and the person who manages the server says they are seeing requests in bursts of 40+. With my current understanding of my code I don't believe this is even possible. Can someone let me know if I am missing something here?
public static List<Request> GetData(List<Request> requests)
{
    ParallelOptions para = new ParallelOptions();
    para.MaxDegreeOfParallelism = 5;
    Parallel.ForEach(requests, para, request =>
    {
        WeatherAPI.GetResponseForDay(request);
    });
    return requests;
}

public static Request GetResponseForDay(Request request)
{
    // Local renamed so it does not shadow the parameter;
    // request.Url stands for however the Request type exposes its URL
    var webRequest = WebRequest.Create(request.Url);
    webRequest.Timeout = 3600000;
    HttpWebResponse response = (HttpWebResponse)webRequest.GetResponse();
    StreamReader myStreamReader = new StreamReader(response.GetResponseStream());
    string responseData = myStreamReader.ReadToEnd();
    response.Close();
    var request2 = WebRequest.Create(requestthesecond);
    HttpWebResponse response2 = (HttpWebResponse)request2.GetResponse();
    StreamReader myStreamReader2 = new StreamReader(response2.GetResponseStream());
    string responseData2 = myStreamReader2.ReadToEnd();
    response2.Close();
    DoStuffWithData(responseData, responseData2);
    return request;
}
As smartobelix pointed out, your MaxDegreeOfParallelism of 5 doesn't prevent you from sending more requests than the server-side policy allows. All it does is prevent you from exhausting the number of threads needed for making those requests on your side. So what you need to do is talk to the server's owner, get familiar with their limits, and change your code so it never hits them.
The variables involved are:
number of concurrent requests you will send (parallelism)
average time one request takes
maximum requests per unit time allowed by server
So, for example, if your average request time is 200 ms and your max degree of parallelism is 5, then you can expect to send 25 requests per second on average. If a request takes 500 ms, you'll send only 10.
Mix the server's allowed numbers into that and you'll get an idea of how to fine-tune your own.
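A minimal sketch of that tuning, assuming the 100 requests/min limit mentioned below and remembering that each iteration of the question's loop issues two HTTP calls:

using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static void GetDataThrottled(List<Request> requests)
{
    var para = new ParallelOptions { MaxDegreeOfParallelism = 5 };
    Parallel.ForEach(requests, para, request =>
    {
        WeatherAPI.GetResponseForDay(request);  // two HTTP calls per iteration
        // 5 workers * (60s / 6s) iterations/min * 2 calls = ~100 requests/min ceiling;
        // actual request time only slows this further, keeping it under the limit
        Thread.Sleep(6000);
    });
}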
Both answers above are essentially correct. The problem I had was that the server manager had set a rate limit of 100 requests/min where there previously wasn't one, and didn't inform me. As soon as I hit that limit, requests started failing with 429 almost instantly, which made my loop fire off the following requests just as fast, and those received 429 errors too.

How do I get reasonable performance out of C# WebClient UploadString

I have a Windows Service that sends JSON data to an MVC5 WebAPI using WebClient. Both the Windows Service and the WebAPI are currently hosted on the same machine.
After it has run for about 15 minutes at about 10 requests per second, each post takes unreasonably long to complete. Requests start out taking about 3 ms and build up to about 5 seconds, which is way too much for my application.
This is the code I'm using:
private WebClient GetClient()
{
    var webClient = new WebClient();
    webClient.Headers.Add("Content-Type", "application/json");
    return webClient;
}

public string Post<T>(string url, T data)
{
    var sw = new Stopwatch();
    try
    {
        var json = JsonConvert.SerializeObject(data);
        sw.Start();
        var result = GetClient().UploadString(GetAddress(url), json);
        sw.Stop();
        if (Log.IsVerboseEnabled())
            Log.Verbose(String.Format("json: {0}, time(ms): {1}", json, sw.ElapsedMilliseconds));
        return result;
    }
    catch (Exception)
    {
        sw.Stop();
        Log.Debug(String.Format("Failed to send to webapi, time(ms): {0}", sw.ElapsedMilliseconds));
        return "Failed to send to webapi";
    }
}
The result of the request isn't really of importance to me.
The serialized data size varies from just a few bytes to about 1 kB but that does not seem to affect the time it takes to complete the request.
The API controllers that receive the requests complete their execution almost instantly (0-1 ms).
From various questions here on SO and some blog posts I've seen people suggesting the use of HttpWebRequest instead to be able to control options of the request.
Using HttpWebRequest I've tried these things that did not work:
Setting the proxy to an empty proxy.
Setting the proxy to null
Setting the ServicePointManager.DefaultConnectionLimit to an arbitrary large number.
Disabling KeepAlive (I don't want to, but it was suggested).
Not opening the response stream at all (had some impact but not enough).
Why are the requests taking so long? Any help is greatly appreciated.
It turned out that another part of the program was taking all the available connections, i.e. I was out of sockets and had to wait for a free one.
I found out by monitoring ASP.NET Applications\Requests/Sec in the Performance Monitor.
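Unrelated to that root cause, one small hygiene point in the posted code: the WebClient returned by GetClient() is never disposed, even though WebClient is IDisposable. A sketch of the disposing variant, using the question's own helpers:

public string Post<T>(string url, T data)
{
    var json = JsonConvert.SerializeObject(data);
    using (var client = GetClient())  // dispose the client once the upload completes
    {
        return client.UploadString(GetAddress(url), json);
    }
}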

Slow Http Get Requests With C#

I currently have a Python server running that handles many different kinds of requests, and I'm trying to make those requests from C#. My code is as follows:
try
{
    ServicePointManager.DefaultConnectionLimit = 10;
    System.Net.ServicePointManager.Expect100Continue = false;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Proxy = null;
    request.Method = "GET";
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        response.Close();
    }
}
catch (WebException e)
{
    Console.WriteLine(e);
}
My first GET request is almost instant, but after that each request takes 30 seconds to a minute to go through. I have researched everywhere online and tried changing settings to make it run faster, but nothing seems to work. Are there any more settings I could change to make it faster?
Using my psychic debugging skills, I guess your server only accepts one connection at a time. You do not close your request, so your connection is kept alive (see the KeepAlive attribute). The server will accept a new connection when your current one is closed, which by default happens after 100,000 ms or at the server's timeout; in your case I guess 30 to 60 seconds. You can start by setting the KeepAlive attribute to false.
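A minimal sketch of that suggestion, applied to the question's request:

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
request.Method = "GET";
request.KeepAlive = false;  // sends "Connection: close" so the server can accept the next client
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    // response disposed here, releasing the connection immediately
}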
