It is valid behavior that an HTTP (TCP) request can get lost without the listeners being informed. See here for a discussion on that:
C# httpClient (block for async call) deadlock
Problem
We are using HttpClient.PostAsJsonAsync to upload a JSON file to a server. However, in worst-case scenarios this upload can take several hours.
That's why just using HttpClient.Timeout does not work for us: it is a hard timeout, and we would need to set it very high.
So what do we do when the TCP connection is gone and the client does not detect it? With our huge timeout we are stuck for a long time. Is there any other timeout we can use in such cases? Any other ideas or best practices?
I was also looking into TCP keep-alive for the sockets, but that doesn't seem to be an option.
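For context, the usual per-request alternative to HttpClient.Timeout looks like this (a sketch only; the URL and the cap duration are placeholders). Note that it is still a hard cap per call and does not detect a silently dead connection:

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;

class UploadExample
{
    static async Task UploadAsync(HttpClient client, object payload)
    {
        // Disable the client-wide hard timeout (must be set before the first
        // request) and cap each call individually, so the limit can vary per call.
        client.Timeout = Timeout.InfiniteTimeSpan;

        using var cts = new CancellationTokenSource(TimeSpan.FromHours(6)); // per-call cap
        var response = await client.PostAsJsonAsync("https://server.example/upload",
                                                    payload, cts.Token);
        response.EnsureSuccessStatusCode();
    }
}
```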
After some research, I finally found an article which describes the issue and provides a workaround:
http://www.thomaslevesque.com/2014/01/14/tackling-timeout-issues-when-uploading-large-files-with-httpwebrequest/
According to this article, there is a design flaw in HttpWebRequest which I was able to reproduce. It seems ridiculous that the timeout also affects the upload.
However, I can live with the provided workaround (WebRequestExtensions) since our code is synchronous anyway.
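For reference, the core of that workaround looks roughly like this (a condensed sketch of the idea from the linked article, not its exact code): drive the Begin/End pair yourself and apply the timeout only to the phase you are actually waiting on, so the upload itself is never clocked.

```csharp
using System;
using System.IO;
using System.Net;

static class WebRequestExtensions
{
    // Applies the timeout to obtaining the request stream only, not to the
    // (potentially hours-long) write that follows.
    public static Stream GetRequestStreamWithTimeout(this WebRequest request, int timeoutMs)
    {
        IAsyncResult asyncResult = request.BeginGetRequestStream(null, null);
        if (!asyncResult.AsyncWaitHandle.WaitOne(timeoutMs))
        {
            request.Abort(); // unblocks the pending operation with an exception
            throw new WebException("Timeout", WebExceptionStatus.Timeout);
        }
        return request.EndGetRequestStream(asyncResult);
    }
}
```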
Related
I have a WPF (.NET 6.0) application with multiple web services in its model. Some services have their own HttpClients.
But sometimes I need to make multiple requests within a short period of time. The problem is that HttpClient is meant to be long-lived/reusable, and sockets are wasted if we create many HttpClients for short-lived work. Some of my web services are used only once a day or even more rarely.
Example 1:
I have 150 proxies. Once a day I need to check all of them. Recreating HttpClient 150 times is a code smell, and performance could suffer. Keeping these clients alive long-term is also a bad idea.
Example 2:
I need to make multiple requests with a preset of default headers/proxy/cookies, or some other combination of HttpClientHandler data that is unchangeable after the first request, once a day/week/month.
Question:
Is there a single solution that can solve these problems? Some kind of magical HttpClient or handler analog that doesn't have the socket problem and allows you to run a short queue of requests without losing performance or speed?
I've seen something similar to a solution for this problem somewhere, maybe even on MSDN, but I can't find the article anywhere.
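One pattern that covers both examples, sketched under the assumption that .NET 6's SocketsHttpHandler fits here: keep a single long-lived handler (it owns the connection pool) and create cheap, freely disposable HttpClient instances over it with disposeHandler: false. For the 150-proxies case you would still need one handler per proxy, since the proxy is fixed on the handler.

```csharp
using System;
using System.Net.Http;

static class SharedHttp
{
    // The handler owns the socket pool; keep exactly one for the app's lifetime.
    // PooledConnectionLifetime also lets the pool pick up DNS changes.
    private static readonly SocketsHttpHandler Handler = new SocketsHttpHandler
    {
        PooledConnectionLifetime = TimeSpan.FromMinutes(5),
        PooledConnectionIdleTimeout = TimeSpan.FromSeconds(30)
    };

    // Clients are cheap wrappers; disposing them does not touch the shared pool.
    public static HttpClient CreateClient() => new HttpClient(Handler, disposeHandler: false);
}
```

If you prefer a framework-managed version of the same idea, IHttpClientFactory pools handlers for you in much the same way.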
We have an ASP.NET Web API application that needs to issue a lot of calls to other web applications (it's basically a reverse proxy). To do this we use the async methods of HttpClient.
Yes, we have seen the hints about using only one HttpClient instance and not to dispose of it.
Yes, we have seen the hints about setting configuration values, especially the problem with the lease timeout. Currently we set ConnectionLimit = CPU*12, ConnectionLeaseTimeout = 5min and MaxIdleTime = 30s.
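For concreteness, this is how those settings are typically applied on .NET Framework (a sketch; the backend URI is a placeholder, and the values are the ones quoted above, not recommendations):

```csharp
using System;
using System.Net;

static class ProxyHttpConfig
{
    public static void Apply()
    {
        // Cap on concurrent connections per endpoint; further requests queue up.
        ServicePointManager.DefaultConnectionLimit = Environment.ProcessorCount * 12;

        // Lease and idle settings live on the per-endpoint ServicePoint.
        ServicePoint sp = ServicePointManager.FindServicePoint(new Uri("https://backend.example"));
        sp.ConnectionLeaseTimeout = (int)TimeSpan.FromMinutes(5).TotalMilliseconds;
        sp.MaxIdleTime = (int)TimeSpan.FromSeconds(30).TotalMilliseconds;
    }
}
```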
We can see that the connections behave as desired. The throughput in a load test was also very good. However, we are facing issues where occasionally the connections stop working. It seems to happen when a lot of requests are coming in (and, this being a reverse proxy, cause new requests to be issued), and it happens mostly (but not only) with the slowest of all backend applications. The behaviour is that requests to this endpoint take forever to finish or simply end in a timeout.
An IISReset of the server hosting our reverse proxy application resolves the problems (for a while).
We have investigated in several areas already:
Performance issues of the remote web application: although it behaves exactly as if this were the case, performance is good when the same requests are issued locally on the remote server. The values for CPU / network etc. are also low.
Network issues (bandwidth, router, firewall, load balancers): possible but rather unlikely, since everything else runs stably and our hosting provider is involved in the analysis too.
Thread-pool starvation: not impossible but rather theoretical; sure, we have a lot of async calls, but shouldn't async help with exactly this issue?
HttpCompletionOption.ResponseHeadersRead: not a problem by itself, but maybe one piece of the puzzle?
The best explanation so far focuses on the ConnectionLimit: we started setting the values mentioned above only recently, and this seems to have triggered the problems. But why would it? Shouldn't it be an improvement to reuse the connections instead of opening a new one for every request? And the values we set seem rather conservative.
We have started to experiment with these values lately to see their impact in production. Yet it is still unclear to us whether this is the only cause, and we'd appreciate a more straightforward approach to analysis. Unfortunately, a memory dump and netstat printouts did not help any further.
Some suggestions about how to analyze or hints about possible causes would be highly appreciated.
***** EDIT *****
Setting the connection limit to 1000 solves the issue! So the question remains: why is that the case? From what we know, the default connection limit is 2 in a non-web application and 1000 in a web application. MS suggests a default value of CPU*12 (but they didn't implement it like that?!), so our change basically took us from 1000 down to 48. Still, we can see that only a handful of connections are open. Can anyone shed some light on this? What is the exact behaviour regarding opening new connections, reusing existing ones, pipelining, etc.? Is there any source of information on this?
Does ConnectionLimit mean ServicePointManager.DefaultConnectionLimit? Yes, it matters. When the value is X and there are already X requests awaiting a response, a new request will not be sent until one of the previous requests has finished.
I posted a follow up question here: How to disable pipelining for the .NET HttpClient
Unfortunately, there were no real answers to any of my questions. We ended up leaving the ConnectionLimit at 1000 (which is only a workaround, but the only solution we were able to find).
We are currently developing a software solution which has a client and a number of WCF services that it consumes. The issue we are having is WCF services timing out after a period of inactivity. As far as I understand, there are several ways to resolve this:
Increase timeouts (as far as I understand, this is generally not recommended; e.g. setting the timeout to infinite/weeks is considered bad practice)
Periodically ping the WCF services from the client (I'm not sure I'm a huge fan of this, as it adds redundant, periodic calls)
Handle timeout issues and attempt to reconnect (this is slow and requires a lot of manual code)
Reliable Sessions - some sources mention that this is the built-in WCF pinging and message-reliability mechanism, but other sources mention that this will still time out.
What is the recommended/best way of resolving this issue? Is there any official reading material on this? I could not find much info myself.
Thanks!
As I see it, you have to use a combination of the points you stated.
You are right: increasing the timeouts is bad practice and can give you a lot of problems.
If you don't want to use Reliable Sessions, then pinging is the only applicable way to keep the connection alive.
You need to handle these things regardless, whether a timeout occurs, the connection is lost, or an exception is thrown. There are plenty of ways your connection can fault.
Reliable Sessions are a good way to avoid implementing a ping yourself, but technically they do nearly the same thing: WCF automatically sends an "I am still here" request.
The conclusion is that you need point 3, plus either point 2 or point 4. To reduce the manual code for point 3, you can use proxies or a wrapper around your service client that establishes a new connection if the old one has faulted during a request. Point 4 is easy to implement, because you only need some small additions to the binding in your config, and the traffic overhead is not that big. Point 2 is the most expensive way: you need a dedicated thread/task that does nothing but ping the server, and the service needs to be extended. But as you stated before, Reliable Sessions can fail, and pings should put you on the safe side.
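A minimal sketch of points 3 and 4 combined, assuming a generated proxy called MyServiceClient (a hypothetical name) and a net.tcp endpoint: enable the reliable session on the binding, and recreate the client whenever the previous one has faulted.

```csharp
using System;
using System.ServiceModel;

class ResilientServiceCaller
{
    private MyServiceClient _client; // hypothetical generated proxy type

    private static MyServiceClient CreateClient()
    {
        var binding = new NetTcpBinding();
        binding.ReliableSession.Enabled = true;                               // point 4
        binding.ReliableSession.InactivityTimeout = TimeSpan.FromMinutes(10);
        return new MyServiceClient(binding,
            new EndpointAddress("net.tcp://server.example/MyService"));
    }

    // Point 3: transparently replace a faulted channel before each call.
    public TResult Call<TResult>(Func<MyServiceClient, TResult> operation)
    {
        if (_client == null || _client.State == CommunicationState.Faulted)
        {
            _client = CreateClient();
        }
        return operation(_client);
    }
}
```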
You should ask yourself what your WCF endpoint is doing. Is the way you have your commands set up optimal?
Perhaps it would be better to base the long-running endpoint on a polling system that allows a quick query instead of waiting on the results of the endpoint's actions.
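That polling idea could be shaped roughly like this (a sketch with hypothetical contract and operation names):

```csharp
using System;
using System.ServiceModel;

// The long-running work is kicked off by a quick call that returns a ticket;
// the client then polls cheaply until the result is ready.
[ServiceContract]
interface ILongRunningService
{
    [OperationContract]
    Guid Start(string jobArguments);   // returns immediately with a ticket

    [OperationContract]
    bool IsComplete(Guid ticket);      // quick query, safe to call often

    [OperationContract]
    string GetResult(Guid ticket);     // fetch the result once complete
}
```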
You should also consider data transfer as a possible issue. Are you transferring back a large amount of data?
To get a more pointed answer, we'd need to know more about the specific endpoint as well as any other responsibilities there are for the service.
I am currently writing a system logging program which sends different logs via FTP.
The problem I am facing is that my program should constantly check whether the connection is being used, before and during its upload, in order to stop sending packets if a different program wants to use the connection.
I actually found this link, which helped me measure the speed of the connection, but I think I can only use the latter to discover whether something is already being streamed.
After reading the library entry on System.Net.NetworkInformation, checking various network statistics and states wasn't a problem either. As stated beforehand, my only problem is checking whether some other program wants to send something.
As you can probably tell from the question, I am very new to this topic and a fairly junior programmer. I have been reading up on the System.Net.NetworkInformation namespace and using its various classes, methods, and delegates. I have the feeling that I am on the right track, but just not getting there. Can anyone give me a push in the right direction?
Thank you.
I ended up using the System.Net.NetworkInformation library and its methods.
The GetIsNetworkAvailable() method, the NetworkChange.NetworkAvailabilityChanged event handler, and the TcpStatistics class helped me gather information on the connection. MSDN and the reference are a great guide to using the aforementioned members, and I basically used the examples with slight modifications to suit my needs.
msdn NetworkInformation:
http://msdn.microsoft.com/de-de/library/system.net.networkinformation.aspx
GetIsNetworkAvailable is pretty straightforward: it returns a boolean value indicating whether the connection is up or down.
NetworkChange.NetworkAvailabilityChanged triggers an event on connection loss or reconnection. See the MSDN link above for an excellent and very usable example of its usage.
TcpStatistics returns information on how many connections have been accepted and initiated, errors received, failed connection attempts, connection resets, and more. Those were the five values I used to evaluate the connection.
I realized that you do not really need more than that to monitor the connection efficiently.
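For reference, a condensed sketch of those pieces (modeled on the MSDN examples; the console output is illustrative):

```csharp
using System;
using System.Net.NetworkInformation;

class ConnectionMonitor
{
    static void Main()
    {
        // Simple up/down check.
        Console.WriteLine($"Network available: {NetworkInterface.GetIsNetworkAvailable()}");

        // Fires on connection loss or reconnection.
        NetworkChange.NetworkAvailabilityChanged +=
            (sender, e) => Console.WriteLine($"Network available: {e.IsAvailable}");

        // Aggregate TCP statistics for IPv4.
        TcpStatistics tcp = IPGlobalProperties.GetIPGlobalProperties().GetTcpIPv4Statistics();
        Console.WriteLine($"Accepted: {tcp.ConnectionsAccepted}, " +
                          $"Initiated: {tcp.ConnectionsInitiated}, " +
                          $"Errors received: {tcp.ErrorsReceived}, " +
                          $"Failed attempts: {tcp.FailedConnectionAttempts}, " +
                          $"Resets: {tcp.ResetConnections}");

        Console.ReadLine(); // keep the process alive to observe availability events
    }
}
```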
Maybe NetworkInterface.GetAllNetworkInterfaces() can help in finding out which network adapter is sending the data and should be monitored.
I now understand Peter Ritchie's comments on my question. FTP transmission runs extremely well, the protocol handles the transmission of the files flawlessly, and no problems have arisen so far in streaming the log files. In four weeks of testing I have received the logging data consistently.
Recently I needed to access a web service created using SOAP::Lite. It's really messy to use since there's no WSDL, it doesn't return reasonable data types, etc., so I started out using the provided sample code.
Right from the start I had problems: requests were timing out, sometimes often, sometimes more rarely, but never entirely without issue. After using Fiddler to sniff the traffic and searching around, it seems there is/was a bug in SOAP::Lite that messed up the Content-Length header when dealing with UTF-8 encoded data. This seems plausible, since my analysis points to the timeouts being caused by the client waiting for more data (per the Content-Length) while the server considered itself done (per the real data).
So now I need a way to counter this erroneous header field and either:
Provide the correct Content-Length or
Pad the payload to match the Content-Length
The problem is, I never get the chance to use a SoapExtension or any other modification, since Invoke() eventually throws an IOException or WebException before parsing commences. Also, the web service is not mine and pretty much unchangeable, I presume.
I also tried overriding SoapHttpClientProtocol.GetWebResponse() to do an async request, but that didn't help either, since I couldn't get hold of the response stream before calling HttpWebRequest.EndGetResponse(), and that call always threw an exception.
Does anyone have an idea how I could approach this?
UPDATE: by now I've also tried WCF and came across this post on MSDN; the answer is not very uplifting. Basically, this happens far too deep in the plumbing to be accessible from user code. My best bet now seems to be a Fiddler script that corrects the Content-Length header, which is perhaps not trivial since this web service is only available over HTTPS.
/Dan
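For what it's worth, the proxy-side fix could look roughly like this with FiddlerCore, Fiddler's embeddable engine (a sketch against FiddlerCore's classic API; verify the names against the version you use). Decrypting the HTTPS traffic requires trusting Fiddler's root certificate.

```csharp
using System;
using Fiddler;

class ContentLengthFixer
{
    static void Main()
    {
        FiddlerApplication.BeforeResponse += session =>
        {
            // Recompute Content-Length from the actual body, so the client
            // stops waiting for bytes that will never arrive.
            session.utilDecodeResponse();
            session.oResponse.headers["Content-Length"] =
                session.responseBodyBytes.Length.ToString();
        };

        FiddlerApplication.Startup(8888, FiddlerCoreStartupFlags.Default |
                                         FiddlerCoreStartupFlags.DecryptSSL);
        Console.WriteLine("Proxying on port 8888; press Enter to stop.");
        Console.ReadLine();
        FiddlerApplication.Shutdown();
    }
}
```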