I'm calling a third party web API to update some of our data on their side. I've been submitting about five jobs in quick succession and, without fail, the first two requests are working properly. The last three however never update. The application seems to be indicating that the request is timing out, but I want to make sure that I'm not messing anything up on my side.
I'm calling the function below with an Action<string, Dictionary<string,object>> Delegate and I'm using BeginInvoke to call the API asynchronously. I don't really care about the response. Am I misunderstanding something about WebRequest.GetResponse() or is this a problem with the endpoint?
private void UpdateJobInfo(string jobId, Dictionary<string, object> updates)
{
    var postData = GetJsonEncodedValues(updates);
    var request = WebRequest.Create(string.Format(JobResultEndpoint, _username, jobId));
    request.ContentType = "application/json; charset=utf-8";
    request.Method = WebRequestMethods.Http.Put;
    request.Headers[HttpRequestHeader.Authorization] = GetAuthenticationCredentials();
    request.GetRequestStream().Write(Encoding.ASCII.GetBytes(postData), 0, Encoding.ASCII.GetBytes(postData).Length);
    request.GetResponse();
}
You're not disposing of the response (or indeed the request stream, although that's a slightly different matter). That means you're leaving the connection to the server open until the finalizer happens to notice that the response can be finalized. The connections are pooled with (by default) two connections per URL. So your later requests are waiting for the earlier responses to be finalized before they can obtain a connection.
Better code:
// Is this definitely what you want? What about non-ASCII data?
byte[] binaryPostData = Encoding.ASCII.GetBytes(postData);
using (var requestStream = request.GetRequestStream())
{
    requestStream.Write(binaryPostData, 0, binaryPostData.Length);
}
using (var response = request.GetResponse())
{
    // We don't care about the response, but we have to fetch it
    // and dispose it.
}
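The comment about ASCII in the code above matters in practice: if postData can contain non-ASCII characters, Encoding.ASCII silently replaces them with '?', while Encoding.UTF8 round-trips them. A minimal, self-contained demonstration (the JSON payload is a hypothetical example):

```csharp
using System;
using System.Text;

class EncodingDemo
{
    static void Main()
    {
        // Hypothetical JSON payload containing a non-ASCII character.
        string postData = "{\"name\":\"Zoë\"}";

        byte[] ascii = Encoding.ASCII.GetBytes(postData); // 'ë' is replaced with '?'
        byte[] utf8  = Encoding.UTF8.GetBytes(postData);  // 'ë' is encoded as two bytes

        Console.WriteLine(Encoding.ASCII.GetString(ascii)); // {"name":"Zo?"}
        Console.WriteLine(Encoding.UTF8.GetString(utf8));   // {"name":"Zoë"}
    }
}
```

Since the request declares charset=utf-8 in its ContentType, Encoding.UTF8.GetBytes(postData) is the encoding that actually matches what the server is told to expect.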
Related
I'm developing a class that can send FTP requests; it has a utility method that can execute different kinds of FTP methods:
private FtpWebResponse DoFttpRequest(Uri uri, NetworkCredential credentials, string method, string file = null)
{
    var request = (FtpWebRequest)WebRequest.Create(uri);
    request.Credentials = credentials;
    request.Method = method;
    if (!string.IsNullOrEmpty(file))
    {
        using (var stream = request.GetRequestStream())
        using (var writer = new StreamWriter(stream))
        {
            writer.Write(file);
        }
    }
    return (FtpWebResponse)request.GetResponse();
}
As you can see, this method executes an FTP method and returns the response to the caller. Here is the client method that uses it to write string contents to a file over FTP:
public void WriteToFile(string path, string contents)
{
    var uri = new Uri(path);
    using (var ftpResponse = DoFttpRequest(uri, _credentials, Ftp.UploadFile, contents)) { }
}
As you can see, I'm using an empty using statement, using (var ftpResponse = DoFttpRequest(uri, _credentials, Ftp.UploadFile, contents)) { }, to dispose of the received response.
Is this a good approach to disposing of an object? Is it even necessary to dispose of this response, since it will probably be cleaned up by the garbage collector anyway?
Is it even necessary to dispose this stream, since it will probably be
disposed by the garbage collector anyway
You can use this simple code to see how not disposing of the response might completely break an application. I use an HTTP request instead of FTP for simplicity of testing, but the same applies equally to FTP requests.
public class Program
{
    static void Main(string[] args)
    {
        // this value is *already* 2 by default; set here for visibility
        ServicePointManager.DefaultConnectionLimit = 2;
        // replace example.com with a real site
        DoFttpRequest("http://example.com");
        DoFttpRequest("http://example.com");
        DoFttpRequest("http://example.com");
        Console.ReadLine();
    }

    private static HttpWebResponse DoFttpRequest(string uri)
    {
        var request = (HttpWebRequest)WebRequest.Create(uri);
        var response = (HttpWebResponse)request.GetResponse();
        Console.WriteLine("got response");
        return response;
    }
}
Note that you are not disposing of the HttpWebResponse. What will happen is that you will see two "got response" messages in the console, and then the application will hang trying to get the response a third time. That's because the concurrent connection limit per endpoint (per host) is 2, so while two connections to the host (example.com here) are "in progress", the next connection to the same host has to wait for them to complete. Because you don't dispose of the responses, those connections will not be "completed" until the GC collects them. Until then your application hangs, and then fails with a timeout (if request.Timeout is set to some reasonable value). All subsequent requests also hang and then fail with a timeout. If you dispose of the responses, the application works as expected.
So always dispose of things that are disposable. A using block is not strictly necessary; you can just do DoFttpRequest(..).Dispose(). But if you prefer an empty using, at least don't declare an unnecessary variable; just do using (DoFttpRequest(..)) {}. One thing to consider when choosing between an empty using and Dispose is the possibility of DoFttpRequest returning null: if it returns null, an explicit Dispose will throw a NullReferenceException, while an empty using will just ignore it (you can do DoFttpRequest(...)?.Dispose(); if you expect nulls but don't want to use using).
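The null-handling difference is easy to demonstrate with a small self-contained sketch (the Resource type and MaybeGet helper here are hypothetical stand-ins for DoFttpRequest):

```csharp
using System;

class Resource : IDisposable
{
    public void Dispose()
    {
        Console.WriteLine("disposed");
    }
}

class NullDisposeDemo
{
    static Resource MaybeGet(bool returnNull)
    {
        return returnNull ? null : new Resource();
    }

    static void Main()
    {
        using (MaybeGet(true)) { }    // empty using: a null result is silently ignored
        MaybeGet(true)?.Dispose();    // null-conditional call: also safe on null
        MaybeGet(false).Dispose();    // prints "disposed"
        // MaybeGet(true).Dispose();  // would throw NullReferenceException
    }
}
```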
What the using statement actually does is execute the code in its block and then call the Dispose method in a finally clause.
That's why you can only use it with types that implement the IDisposable interface.
So you don't strictly have to use a using statement. You can simply call
DoFttpRequest(uri, _credentials, Ftp.UploadFile, contents).Dispose()
Note, though, that if you don't call Dispose yourself, the garbage collector does not call it for you; at best the object's finalizer will eventually release the underlying resources, and you have no control over when that happens.
You don't have to think much about memory in high-level, memory-managed languages like C# and Java, but unmanaged resources such as network connections and file handles are a different matter: they should be released deterministically by disposing of them.
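To make this concrete, here is a small self-contained sketch: the Demo type is hypothetical, and the try/finally block is roughly what the compiler generates for a using statement:

```csharp
using System;

class Demo : IDisposable
{
    public bool Disposed;
    public void Dispose()
    {
        Disposed = true;
    }
}

class UsingExpansionDemo
{
    static void Main()
    {
        // A using statement...
        var a = new Demo();
        using (a) { }

        // ...is expanded by the compiler into roughly this:
        var b = new Demo();
        try { }
        finally
        {
            if (b != null)
                ((IDisposable)b).Dispose();
        }

        Console.WriteLine(a.Disposed && b.Disposed); // True
    }
}
```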
I have a custom WebUploadTraceListener : TraceListener that I use to send HTTP (and eventually HTTPS) POST data to a web service that writes it to a database.
I have tested doing this with both WebClient and HttpWebRequest and empirically I'm seeing better performance with the latter.
Because of the one-way nature of the data, I don't care about the server response. But I found that if I don't handle the HttpWebResponse my code locks up on the third write. I think this is because of the DefaultConnectionLimit setting and the system not reusing the resource...
Per Jon Skeet
Note that you do need to dispose of the WebResponse returned by request.GetResponse - otherwise the underlying infrastructure won't know that you're actually done with it, and won't be able to reuse the connection.
HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(ServiceURI);
httpRequest.Method = "POST";
httpRequest.ContentType = "application/x-www-form-urlencoded";
try
{
    using (Stream stream = httpRequest.GetRequestStream())
    {
        stream.Write(postBytes, 0, postBytes.Length);
    }
    using (HttpWebResponse response = (HttpWebResponse)httpRequest.GetResponse())
    {
        // discard response
    }
}
catch (Exception)
{
    // ...
}
I want to maximize the speed of sending the POST data and get back to the main program flow as quickly as possible. Because I'm tracing program flow, a synchronous write is preferable, but not mandatory as I can always add a POST field including a Tick count.
Is an HttpWebRequest with stream.Write the quickest method for doing this in .NET 4.0?
Is there a cheaper way of discarding the unwanted response?
Actually, httpRequest.GetResponse only posts the data to the server and doesn't download anything to the client, apart from the status information that tells the client whether the server processed the request successfully.
You only receive the response body when you call GetResponseStream. And if you don't even want the success/error status of the request, you have no way to tell whether the server processed it successfully.
So the answers are:
1. Yes, that is about as low-level as managed code gets, unless you want to mess with sockets directly (which you shouldn't).
2. With HttpWebRequest, no. You are already receiving almost no data you don't want.
I currently have a Python server running that handles many different kinds of requests, and I'm trying to do this using C#. My code is as follows:
try
{
    ServicePointManager.DefaultConnectionLimit = 10;
    System.Net.ServicePointManager.Expect100Continue = false;
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Proxy = null;
    request.Method = "GET";
    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        response.Close();
    }
}
catch (WebException e)
{
    Console.WriteLine(e);
}
My first GET request is almost instant, but after that, the time it takes for one request to go through is almost 30 seconds to 1 minute. I have researched everywhere online and tried changing settings to make it run faster, but nothing seems to work. Are there any other settings that I could change to make it faster?
Using my psychic debugging skills, I guess your server only accepts one connection at a time. Your connection is being kept alive by the Keep-Alive header, and the server will only accept a new connection once the current one is closed, which by default happens after 100,000 ms or at the server's own timeout; in your case I'd guess 30 to 60 seconds. You can start by setting the KeepAlive property to false.
I have some logic issues with the HttpWebRequest class.
I'm using the HttpWebRequest class from the System.Net namespace, and when I do this:
while (true)
{
    HttpWebRequest request = WebRequest.Create("http://somesite.com/") as HttpWebRequest;
    HttpWebResponse response = request.GetResponse() as HttpWebResponse;
}
I get responses one by one, with an average interval of one second, but I think my internet connection can work faster than that because the received data is very small. Then I tried this:
while (true)
{
    HttpWebRequest request = WebRequest.Create("http://google.com/") as HttpWebRequest;
    request.BeginGetResponse(EndReceive, obj); // BeginGetResponse returns an IAsyncResult, not a response
}

internal void EndReceive(IAsyncResult ar)
{
    obj.Response = obj.Request.EndGetResponse(ar) as HttpWebResponse;
}
And I got only a very small speed increase, something like 10-30%, even though I'm using async requests and sending five requests to the server instead of one. Why didn't the speed increase by more than 100%?
It would be understandable if the server couldn't handle more than one request from one IP at the same time... but when I run 10 console apps with this code:
void SendRequest()
{
    HttpWebRequest request = WebRequest.Create("http://google.com/") as HttpWebRequest;
    request.BeginGetResponse(EndReceive, obj); // BeginGetResponse returns an IAsyncResult, not a response
}

void EndReceive(IAsyncResult ar)
{
    obj.Response = obj.Request.EndGetResponse(ar) as HttpWebResponse;
}
I get a speed increase of something like 4-8 times. Is the problem with the HttpWebRequest class? And why can't I get that kind of speed from one application making many async requests?
I strongly suspect you're basically being bounded by the built-in connection pool, which will limit the number of requests a single process (or possibly AppDomain) makes to a given host. If you want to change the number of concurrent requests you can make to a single host, use the <connectionManagement> element in your app.config.
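As a sketch, the app.config element mentioned above looks something like this (the maxconnection value of 20 is an arbitrary example, and address="*" applies the limit to all hosts):

```xml
<configuration>
  <system.net>
    <connectionManagement>
      <!-- Raise the per-host connection limit from the default of 2 -->
      <add address="*" maxconnection="20" />
    </connectionManagement>
  </system.net>
</configuration>
```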
As an aside, you should have a using statement when you use the response, otherwise you're not disposing of it properly... which can cause horrible problems with the connection pool in itself.
I have a web request that creates a persistent (keep-alive) connection to the server, e.g.:
webRequest.ContentType = "application/x-www-form-urlencoded";
// Set the ContentLength property of the WebRequest.
webRequest.ContentLength = byteArray.Length;
webRequest.Timeout = Timeout.Infinite;
webRequest.KeepAlive = true;
webRequest.ReadWriteTimeout = Timeout.Infinite;
//[System.Threading.Timeout]::Infinite
webRequest.UserAgent = "www.socialblazeapp.com";
Stream dataStream = webRequest.GetRequestStream();
// Write the data to the request stream.
dataStream.Write(byteArray, 0, byteArray.Length);
// Close the Stream object.
dataStream.Close();
// Get the response.
webResponse = (HttpWebResponse)webRequest.GetResponse();
Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
responseStream = new StreamReader(webResponse.GetResponseStream(), encode);
while (!responseStream.EndOfStream) { /* do something */ }
I'm wondering why responseStream.EndOfStream becomes true after a while. I would have assumed that because this is a persistent connection, the stream would never close?
Any ideas why this is happening?
I think you're confusing keeping the TCP connection open with keeping the response stream open. The TCP connection is the underlying transmission medium, whereas the request and response are individual entities communicated via that connection.
With a persistent connection you [in theory] could issue multiple request/response pairs across the same connection. Without a persistent connection you would essentially open the connection, issue the request, receive the response, then close the connection and then repeat that process for subsequent request/response pairs.
The response itself, however, is finite in size; once you've received the complete response, the stream closes because there is nothing more to tell you. Once you issue another request, another response will follow; I'm not clear on whether .NET will reuse the underlying persistent connection.
All that KeepAlive is supposed to do is keep the TCP connection open. If I'm reading this correctly, that means you can reuse the same physical TCP connection for multiple requests to a given server. What it won't do is keep your stream open so that the server can send additional information.
If you really want streaming data, you should use either a different protocol or raw TCP.