Fiddler makes HttpWebRequest/HttpClient behaviour unexpected - C#

Just wanted to ask if anybody has encountered a problem using the HttpWebRequest class (or even the new HttpClient) to upload a file to a server while Fiddler is running.
I have run into a bug: the AllowWriteStreamBuffering property of HttpWebRequest has no effect while Fiddler is running, so upload progress tracking doesn't work at all. Bytes are not sent immediately but buffered, no matter how I set AllowWriteStreamBuffering, therefore I can't track upload progress. It works fine when Fiddler is not running.
Moreover, if I close Fiddler while my application is uploading a file, the upload crashes with a WebException that says "The underlying connection was closed: An unexpected error occurred on a receive."
The same thing happens with the new .NET 4.5 HttpClient class.

Sorry for the confusion; Fiddler currently only supports streaming of responses and not requests.
Some proxies (like Fiddler) or other intermediaries will fully buffer a request before sending it to the server for performance or functional (e.g. virus scanning, breakpoint debugging) reasons.
http://www.fiddler2.com/fiddler/help/streaming.asp

OK, this caught my interest. It appears that for AllowWriteStreamBuffering to work, the server must support chunked transfer encoding, which led me to this forum post about proxies and the aforementioned chunked encoding: https://groups.google.com/forum/?fromgroups=#!topic/httpfiddler/UkOiK96kg_k.
From what I read there, when going through a proxy you may or may not get chunked encoding, hence your issue.
I also found what seems to be a good, detailed article on uploading with progress feedback that may be helpful:
http://blogs.msdn.com/b/delay/archive/2009/09/08/when-framework-designers-outsmart-themselves-how-to-perform-streaming-http-uploads-with-net.aspx
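For reference, a minimal sketch of the kind of streaming upload under discussion (the URL, file path and buffer size are placeholders; this is not code from the linked article). With buffering disabled the request must either declare a Content-Length or use chunked encoding, and an intermediary such as Fiddler may still buffer the body before it reaches the wire:

// Requires System.Net and System.IO. A sketch, not a definitive implementation.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/upload");
request.Method = "POST";
request.AllowWriteStreamBuffering = false; // stream the body instead of buffering it
request.SendChunked = true;                // chunked encoding, so no Content-Length needed up front

using (Stream requestStream = request.GetRequestStream())
using (FileStream file = File.OpenRead("upload.bin"))
{
    byte[] buffer = new byte[8192];
    long totalSent = 0;
    int read;
    while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
    {
        requestStream.Write(buffer, 0, read);
        totalSent += read;
        Console.WriteLine("Sent {0} of {1} bytes", totalSent, file.Length);
    }
}

using (WebResponse response = request.GetResponse())
{
    // handle the server's response here
}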

Related

What is different in the .NET (C#) implementation of WebClient versus Java or the Firefox RESTClient?

I need to post JSON to an HTTPS endpoint using C#.
I am using System.Net.WebClient (or HttpWebRequest).
When I post the JSON to the endpoint using Java or the Firefox RESTClient, everything works fine (from the same machine).
With Wireshark I can see that the receiving server RESETs the connection, resulting in this .NET exception:
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
I don't use any proxy servers.
I have set the request timeout to -1 (and other values).
What can the .NET runtime be adding to (or removing from) the requests that the Firefox RESTClient and Java are not?
There must be a difference.
Fiddler shows me two HTTP(S) requests with response status 200, but no data seems to come back (and Fiddler introduces a proxy...)
@Mason, thanks for making me look once more at the Fiddler data.
After setting the protocol to TLS 1.2
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
it works.
I have seen posts that actually get an error message hinting at the minimum supported TLS version, but here I had to go through Stack Overflow first.
Just the exercise of formulating the question, plus the first quick responders, helped me fix this quickly!
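For completeness, a minimal sketch of the fix in context (the endpoint URL and JSON payload are placeholders, not from the original post); the key is setting the protocol before the first request goes out:

// Requires System.Net. Set the protocol once, before any request is issued.
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

using (var client = new WebClient())
{
    client.Headers[HttpRequestHeader.ContentType] = "application/json";
    // Placeholder endpoint and body; substitute your own.
    string response = client.UploadString("https://example.com/api", "{\"key\":\"value\"}");
}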

Exception-proof HttpListener possible?

Over the last few days I've been building a web server application in C# that uses HttpListener. I've learned quite a lot along the way, and still am. Currently I have it all working, setting headers here and there depending on the situation.
In most cases things work fine, but at times an exception is thrown. This happens on a few occasions. Most if not all of them occur when a connection is closed before all the data has been sent; that's when the error appears. But some of them really seem to be caused by browsers, as far as I can tell.
Take Chrome: whenever I go to an MP3 file directly, it sends two GET requests. One of them causes the error; the other works and receives part of the content. After this, I can listen to the MP3 and there are no issues. Streaming works.
But back to the request that gives me the error: there is nothing in its headers that I could use in my code to suppress the output, as I already do with HEAD requests. So I'm quite puzzled here.
IE also has this problem, both when opening MP3 files directly and when streaming via the HTML5 audio tag. It also varies from time to time. Sometimes I open the page and only two requests are made: the HTML page and the MP3. No error there. Sometimes, though, there are three requests; it connects to the MP3 twice. Sometimes one of those connections is aborted straight after I open the page, and sometimes two requests to the MP3 file don't even accept data. In both cases the request headers ask for the end of the file, so bytes: 123-123/124.
I've also tested it on W3Schools' audio element page. IE makes two connections there as well, one aborted, the other loading the MP3 file.
So my question is: is it possible to make the web server exception-proof? Or, maybe a better question: is it bad that these exceptions are thrown at all? Or do you perhaps know how to fix these errors?
The error I'm getting is: "The I/O operation has been aborted because of either a thread exit or an application request."
The way I write to the client is:
using (Stream Output = _CResponse.OutputStream)
{
    Output.Write(_FileOutput, rangeBegin, rangeLength);
}
I am not sure if there's another (better) way; this is what I came across in the many topics, tutorials and pages I read while researching.
About headers: the defaults are Content-Length, Content-Type and the status code. In some cases, like MP3 files and videos, I add an Accept-Ranges: bytes header. If the request has a Range header, I add a Content-Range header and a PartialContent (206) status code.
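For illustration, that header setup might look roughly like this (a sketch; context, rangeRequested, rangeBegin, rangeEnd and fileLength are illustrative names, not the poster's actual code):

// rangeBegin/rangeEnd/fileLength: long offsets parsed from the Range header.
HttpListenerResponse response = context.Response;
response.ContentType = "audio/mpeg";
response.AddHeader("Accept-Ranges", "bytes");

if (rangeRequested)
{
    // Partial content: report which slice of the file is being returned.
    response.StatusCode = (int)HttpStatusCode.PartialContent; // 206
    response.AddHeader("Content-Range",
        string.Format("bytes {0}-{1}/{2}", rangeBegin, rangeEnd, fileLength));
    response.ContentLength64 = rangeEnd - rangeBegin + 1;
}
else
{
    response.ContentLength64 = fileLength;
}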
From the server's point of view any client can disconnect at any time. This is part of the normal operation of a server. Detect this specific case, log it and swallow the exception (because it has been handled). It's not a server bug.
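In practice that means wrapping the write in a handler along these lines (a sketch built on the question's own variables; Log is a placeholder for whatever logging you use, and the exact exception types you observe may vary):

try
{
    using (Stream output = _CResponse.OutputStream)
    {
        output.Write(_FileOutput, rangeBegin, rangeLength);
    }
}
catch (HttpListenerException ex)
{
    // The client disconnected mid-response; expected during normal operation.
    Log("Client aborted the connection: " + ex.Message);
}
catch (IOException ex)
{
    // The underlying socket was closed while writing.
    Log("I/O aborted: " + ex.Message);
}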

C# Cannot load XML document from server

I had an XmlDocument loading a document from a server with no problems until, almost at random, I started getting a connection-refused error.
It also doesn't matter what host I put in, whether it's a legitimate one or unresolvable; it always gives the same result.
Here's the code:
XmlDocument doc = new XmlDocument();
doc.Load("http://doesnotmatterifIresolveornot.com");
And here is the error:
{"No connection could be made because the target machine actively refused it 127.0.0.1:8888"}
I've turned off every applicable firewall I can find in Win7, but it's weird because it happened while I was testing it.
Find out why it's trying to go to 127.0.0.1:8888.
My guess is that for some reason, it thinks that's your HTTP proxy. Did you run something like Fiddler recently? Fiddler runs on 8888 and changes your default proxy settings - maybe they got stuck incorrectly?
Look in Control Panel, or in the Internet Explorer settings.
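Until the stale proxy setting is cleared, one way to confirm the diagnosis is to bypass the system proxy for that one request (a sketch; the URL is a placeholder):

// Load the document over a request that explicitly ignores any system proxy.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://example.com/data.xml");
request.Proxy = null; // skip the machine-wide proxy (e.g. a leftover 127.0.0.1:8888)

using (WebResponse response = request.GetResponse())
using (Stream stream = response.GetResponseStream())
{
    XmlDocument doc = new XmlDocument();
    doc.Load(stream);
}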
Are you serving your XML document using IIS? If so, you may need to add a MIME type definition to IIS to serve XML files. This article should help with that (if it is indeed the problem).
You may also try the HTTP loader to get a more detailed picture of what the server is responding with (the HTTP headers, in particular, could be useful for troubleshooting).
I suspect the primary issue is that you're trying to connect to a socket (server + port) that the server isn't configured to listen on -- that means you'll get this error regardless of whether or not the URL resolves, since the server isn't configured to deal with a socket connection of the sort you're sending it.

WCF client hangs on response

I have a WCF client (running on Win7) pointing to a WebSphere service.
All is good from a test harness (a little test fixture outside my web app), but when my calls to the service originate from my web project, one of the calls (and only that one) is extremely slow to deserialize (it takes minutes vs. seconds), and not just the first time.
I can see from Fiddler that the response comes back quickly, but then the WCF client hangs on the response itself for more than a minute before the next line of code is hit by the debugger, almost as if the client were having trouble deserializing. This happens only if the response contains a certain PDF string (the operation generates a PDF), base64-encoded and chunked. If, for example, the service raises a fault (so the PDF string is not there), the response is deserialized immediately.
Again, if I send the exact same envelope through SoapUI or from outside the web project, all is good.
I am at loss - What should I be looking for and is there some config setting that might do the trick?
Any help appreciated!
EDIT:
I coded a stub against the same service contract. Using the exact same basicHttpBinding and returning the exact same PDF string, there is no delay. I think this rules out the string and the binding as possible causes. What's left?
Changing transferMode="Buffered" to transferMode="Streamed" on the binding did the trick!
So the payload was apparently being chunked into pieces the size of the buffer.
I thought the same could be achieved by increasing the buffer size (maxBufferSize="1000000"), but I had that in place already and it did not help.
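For anyone doing this in code rather than config, the equivalent binding setup would look roughly like this (a sketch; MyServiceClient stands in for your generated proxy class, and the size and address are illustrative):

var binding = new BasicHttpBinding
{
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = 10 * 1024 * 1024 // allow large PDF payloads
};
var client = new MyServiceClient(binding, new EndpointAddress("http://example.com/service"));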
I have had this bite me many times. Check in your WCF client configuration that you are not trying to use the Windows web proxy; the step of checking for a proxy (even when none is configured) can eat up a lot of time during your connection.
If the tips from the other users don't help, you might want to enable WCF tracing and open the log in the Service Trace Viewer. The information is detailed, but it has enabled me to fix a number of hard-to-identify problems in the past.
More information on WCF Tracing can be found on MSDN.
Two things you can try:
Adjust the readerQuotas settings for your client (a sketch follows below); see http://msdn.microsoft.com/en-us/library/ms731325.aspx
Disable "Just My Code" in the debugging options (Tools -> Options -> Debugging -> General -> "Enable Just My Code (Managed only)") and see if you can catch internal WCF exceptions.
//huusom
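For the first point, the same quotas can also be raised programmatically (a sketch; int.MaxValue is deliberately generous and you may want tighter limits in production):

var binding = new BasicHttpBinding();
// Large base64 payloads are most often throttled by these two quotas.
binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;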
I had the very same issue... The problem with WCF, IMO, is in the deserialization of the base64 string returned by the service into a byte[] on the client side.
The easiest way to solve this, if you cannot change your service configuration (e.g. to use transferMode="Streamed"), is to adapt your DataContract/ServiceContract on the client side: replace the type byte[] with string in the response DataContract (a sketch follows at the end of this answer).
Then simply decode the returned string yourself with a piece of code such as:
byte[] file = Convert.FromBase64String(pdfBase64String);
Downloading a 70 KB PDF used to require ~6 seconds; with the change suggested above, it now takes < 1 second.
V.
PS: Regarding the transfer mode, I did try changing only the client side (transferMode="StreamedResponse"), but without improvement...
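To make the contract change concrete, here is a sketch of what the adapted client-side response type might look like (GetPdfResponse and PdfBase64 are hypothetical names, not from the original service):

[DataContract]
public class GetPdfResponse
{
    // Declared as string instead of byte[], so WCF skips the slow
    // base64-to-byte[] conversion during deserialization.
    [DataMember]
    public string PdfBase64 { get; set; }
}

// Decode manually after the call returns:
byte[] pdfBytes = Convert.FromBase64String(response.PdfBase64);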
First things to check:
Is the config the same in the web project and the test project?
When you test from SoapUI, are you doing it from the same server and in the same security context as when the code runs in the web project?
Is there any spike in memory when the PDF comes back?
Edit
From your comments, the one-minute wait appears to be a timeout. You also mention transactions.
I am wondering if the problem is somewhere else: the call to the WCF service goes OK, but the call is inside a transaction, and there is no complete or dispose on the transaction (I am guessing here), so the transaction/code hangs for a minute while it waits to time out.
Edit 2
Next things to check:
Is there any difference between the test and the web project in how the service is called?
Is there any difference in framework version; for example, is one 3.0 and the other 3.5?
Could it be that the client side is trying to analyse what type of content is coming from the server side? Try specifying the MIME type of the service response explicitly on the server side, e.g. Response.ContentType = "application/pdf". EDIT: By "client side" I mean any possible mediator, like a firewall or a security suite.

Using .NET's HttpWebRequest to download a multitude of files in a row

I have an application that needs to download several files in a row, in succession (sometimes a few thousand). However, when several files need to be downloaded, I end up getting an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging, and basically it's because the server has run out of sockets (they all sit waiting for 240 s or so before becoming available again); not coincidentally, it starts happening around the 1024-file mark. I would expect HttpWebRequest/ServicePointManager to be reusing my connection, but apparently it is not (and the files are served over HTTPS, so that may be part of it). I never saw this problem in the C++ code this was ported from (but that doesn't mean it never happened; I'd be surprised if it did, though).
I am properly closing the WebResponse object, and the HttpWebRequest object has KeepAlive set to true by default. Next, my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem.
Has anyone else run into this problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do.
Here's some basic code to verify what I'm doing (just in case I'm missing closing something):
WebRequest webRequest = WebRequest.Create(uri);
webRequest.Method = "GET";
webRequest.Credentials = new NetworkCredential(username, password);
WebResponse webResponse = webRequest.GetResponse();
try
{
    using (Stream stream = webResponse.GetResponseStream())
    {
        // read the stream
    }
}
finally
{
    webResponse.Close();
}
What kind of application is this? You mentioned that the server is running out of ports, but then you mentioned HttpWebRequest. Are you running this code in a web service or an ASP.NET page, which then tries to download multiple files for the same incoming request from the client?
What kind of authentication is the page using? If it is NTLM authentication, the connections cannot be shared if the credentials used differ between requests.
What I would suggest is to group your requests per credential. So, for example, all requests using username "John" would be grouped. You can specify the ConnectionGroupName property on the request, so the system will try to reuse connections for the same credential and server.
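A sketch of that per-credential grouping, reusing the variables from the question's own code (the group name itself is arbitrary):

HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
request.Credentials = new NetworkCredential(username, password);
request.ConnectionGroupName = username; // one connection group per credential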
If that also doesn't work, you will need to do one or more of the following:
1) Throttle your requests.
2) Increase the wildcard port range.
3) Use the BindIPEndPointDelegate on ServicePoint to make it bind to a non-wildcard port (i.e. a port in the range 1024-16384).
More digging seems to point to it possibly being due to authentication, and the UnsafeAuthenticatedConnectionSharing property might alleviate this. However, I'm not sure that's the best thing either.
