I inherited this huge application consisting of client and server code, and I'm trying to change the transfer mode for parts of our communication to 'streamed' to get around the really wasteful memory consumption that can occur in buffered mode and that makes my client throw OOM exceptions (see also this answer).
The code goes something like this:
GZipMessageEncodingBindingElement gElement = new GZipMessageEncodingBindingElement();
HttpsTransportBindingElement hElement = new HttpsTransportBindingElement();
hElement.TransferMode = TransferMode.Streamed;
hElement.MaxBufferSize = int.MaxValue;
hElement.MaxBufferPoolSize = int.MaxValue;
hElement.MaxReceivedMessageSize = int.MaxValue;
CustomBinding binding = new CustomBinding();
binding.SendTimeout = new TimeSpan(0, 0, 60);
binding.Elements.Add(gElement);
binding.Elements.Add(hElement);
EndpointAddress address = new EndpointAddress(uri);
return new ChannelFactory<T>(binding, address).CreateChannel();
The exception is a TimeoutException that suggests I increase the SendTimeout (which is currently set to 1 minute).
Update: However, there are 2 services that still work after setting the TransferMode to Streamed. I actually confirmed that they use GZIP/HTTPS by setting breakpoints in GZipMessageEncoder.ReadMessage(Stream, ...). For 2 services the debugger breaks inside ReadMessage; for one service it doesn't. As far as I can tell, the services are configured identically when it comes to ConcurrencyMode and InstanceContextMode.
Update 2: After configuring only the one service that didn't seem to work as Streamed, and leaving the other 2 services buffered, it partly worked. So it is not the service as such that is bad; maybe some connection is getting in the way of other connections in Streamed mode, making them time out.
If I delete just the 'TransferMode' line, everything is fine, and the test server never needs more than a second or so to respond, so just increasing the SendTimeout won't lead anywhere.
For this test I only changed the client, by the way; to my understanding, this setting should not affect the way client and server communicate, just the way that the application with the "streamed" setting processes the data.
Please go easy on me, I'm a total WCF newb. Although I have seen some caveats on the MSDN pages about the streamed transfer mode, a pointer at what to grep for in my code/configuration would be really helpful; otherwise it's just huge, with many services, each with different transport settings, etc.
Thanks!
I could not exactly figure out the cause of the issue, but I was able to fix it and get rid of the excessive memory consumption by doing the following.
After getting rid of GzipMessageEncoder (see wcf conditional compression for how to replace it with IIS's built-in compression), which is a monstrosity (see my answer on WCF HttpTransport: streamed vs buffered TransferMode), it was easy to switch to the streamed TransferMode: How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?
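For reference, a minimal sketch of the kind of binding this leaves you with (illustrative only, not my exact production code; compression is now handled by IIS, so the custom encoder simply disappears):
// Streamed HTTPS binding without the GZip encoder; IIS does the compression.
var encoding = new TextMessageEncodingBindingElement();
var transport = new HttpsTransportBindingElement
{
    TransferMode = TransferMode.Streamed,
    MaxReceivedMessageSize = int.MaxValue
};
var binding = new CustomBinding(encoding, transport);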
I have ported an old application over to ASP.NET Core 6. It's an API that receives incoming requests and communicates over net.tcp with a WCF service that I have no control over. The bindings and XML reader quotas are all set high. Everything works fine when I transfer simple messages. However, one feature requires me to send base64-encoded images to the WCF service, and this is when I get this error:
The socket was aborted because an asynchronous receive from the socket did not complete within the allotted timeout of 10675199.02:48:05.4775807. The time allotted to this operation may have been a portion of a longer timeout.
In my testing, an image of 8 KB goes through normally, but one of 16 KB or above does not.
Furthermore, if I launch my ASP.NET application via the .exe file instead of IIS, everything goes through fine. This leads me to believe that something in IIS blocks my XML transfer. I have tried modifying "uploadReadAheadSize" after googling, but that had no effect. I'm at a loss here, as I would like to use IIS to host this API.
I've tried to enable tracing but cannot get it working in my solution.
The current binding looks like this. I've also tried increasing buffer sizes, but I removed that to make it more like how it looks on the service side:
NetTcpBinding binding = new NetTcpBinding();
binding.TransferMode = TransferMode.Streamed;
binding.Security.Mode = SecurityMode.None;
binding.ReceiveTimeout = TimeSpan.FromSeconds(20);
binding.SendTimeout = TimeSpan.FromSeconds(20);
binding.MaxReceivedMessageSize = 650000;
XmlDictionaryReaderQuotas myReaderQuotas = new XmlDictionaryReaderQuotas();
myReaderQuotas.MaxStringContentLength = 2147483647;
myReaderQuotas.MaxNameTableCharCount = 2147483647;
myReaderQuotas.MaxArrayLength = 2147483647;
myReaderQuotas.MaxBytesPerRead = 2147483647;
myReaderQuotas.MaxDepth = 64;
binding.ReaderQuotas = myReaderQuotas; // attach the quotas to the binding
This error may be caused by default limits in the service configuration that are too low (MaxItemsInObjectGraph, MaxReceivedMessageSize, MaxBufferPoolSize, MaxBufferSize, MaxArrayLength).
You can try to fix it by increasing those values in the config file:
<binding name="MyCoolBinding" maxReceivedMessageSize="10000000" maxBufferSize="10000000" maxBufferPoolSize="10000000">
For more information, you can refer to this link: http://nerdwords.blogspot.com/2008/01/wcf-error-socket-connection-was-aborted.html.
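If you build the binding in code (as in the question) rather than in config, a rough programmatic equivalent is sketched below. IMyService and the endpoint address are hypothetical placeholders, and note that MaxItemsInObjectGraph is set through the DataContractSerializer operation behavior rather than on the binding:
using System.ServiceModel;
using System.ServiceModel.Description;

// Sketch only: raise the limits programmatically.
var binding = new NetTcpBinding
{
    MaxReceivedMessageSize = 10000000,
    MaxBufferSize = 10000000,
    MaxBufferPoolSize = 10000000
};
var factory = new ChannelFactory<IMyService>(
    binding, new EndpointAddress("net.tcp://localhost:808/myService")); // placeholder address
foreach (OperationDescription op in factory.Endpoint.Contract.Operations)
{
    var serializerBehavior = op.Behaviors.Find<DataContractSerializerOperationBehavior>();
    if (serializerBehavior != null)
        serializerBehavior.MaxItemsInObjectGraph = int.MaxValue;
}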
I'm using C# to connect to a web service to grab data. However, I'm currently having problems getting the code to run on a remote server; when I say problems, I mean it's running, but the connection speed between client and server is ridiculously slow (through no fault of mine - the client is providing a slow result set via a web service, and they have all timeouts turned off on their side in order to do so).
if ((endpointConfiguration == EndpointConfiguration.SFFService))
{
    System.ServiceModel.BasicHttpBinding result = new System.ServiceModel.BasicHttpBinding();
    result.MaxBufferSize = int.MaxValue;
    result.ReaderQuotas = System.Xml.XmlDictionaryReaderQuotas.Max;
    result.MaxReceivedMessageSize = int.MaxValue;
    result.AllowCookies = true;
    result.OpenTimeout = TimeSpan.MaxValue;
    result.CloseTimeout = TimeSpan.MaxValue;
    result.SendTimeout = TimeSpan.MaxValue;
    return result;
}
So. Not a great start. Open, Close and Send timeouts all set to maximum.
Anyway, I've matched their long timeouts on my side, and a few of the smaller web service requests finish and succeed OK on the server. The biggest, slowest one, however, just hangs indefinitely, probably because I've told it to never time out.
However, I'm pretty sure there's some other problem happening, as I left it overnight and it just sat there. Locally, on my development machine, although slow, it works.
My question is: does anyone have any idea of additional things to check about the environment that could be in play here? I thought perhaps a firewall, but given that the small requests succeed (and connect), it is very difficult to debug the slow requests, as I've no idea how long to wait before accepting that the program isn't going to do anything.
FWIW I've tried connecting via a browser, and again, the browser just sits there waiting for the request to finish, which it never does (most likely due to the timeout being turned off on the server). If there were any way to see even how much of the request was left to finish (like a percentage download), that might help give me some guidance as to whether the code is doing anything other than waiting.
There's no way to get the progress of the remote call, even when you are attached to the remote process. Try using a local Visual Studio on the server machine (preferably on a non-production VM) and attach to the local process rather than using the Remote Debugger.
I am not sure exactly what the question is, but the first step I'd take while debugging a slow application would be to test a local connection (local client and local server) to eliminate the network from the equation. If that works well, try hosting the server somewhere else (a public cloud, maybe?) and try again; if it works well there, then there's definitely something en route or on that server.
If you're interested in tracking how long web service calls take, you could do so by placing the start time into HttpContext.Current.Items or OperationContext.Current.Items on BeginRequest/EndRequest in Global.asax, or in a MessageInspector if you use WCF (you can pass the DateTime between the two methods by returning it from the Before method and reading it from the correlationState parameter in the After method).
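For the WCF route, a minimal sketch of such a message inspector could look like this (assuming the standard IClientMessageInspector interface; where you log the elapsed time is up to you):
using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// Sketch: stamp the start time in BeforeSendRequest and read it back
// from correlationState in AfterReceiveReply.
public class TimingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return DateTime.UtcNow; // handed back to us below as correlationState
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        TimeSpan elapsed = DateTime.UtcNow - (DateTime)correlationState;
        Debug.WriteLine("Call took " + elapsed.TotalMilliseconds + " ms");
    }
}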
We're using ServiceStack 3.9.71.0 and we're currently experiencing unexplained latency issues with clients over a WAN connection.
A reply with a very small payload (<100 bytes) is received after 200ms+.
The round-trip-time (RTT) on the link is about 40ms due to the geographical distance. This has been verified by pinging the other host and using a simple echo service to test the latency of a TCP connection.
Both ping and echo test show latencies which are in line with expectations. Getting a reply from our ServiceStack host takes much longer than expected.
We've verified that:
- the WAN link is only running at 25% of capacity (no congestion)
- no QoS is employed on the WAN link
- the same host gives a fast reply to the same request from a different host on the local network
- the delay is not caused by our code processing the request
We've now stumbled across Nagle's algorithm and the fact that it can cause delays for small requests on WAN networks (http://blogs.msdn.com/b/windowsazurestorage/archive/2010/06/25/nagle-s-algorithm-is-not-friendly-towards-small-requests.aspx).
In .NET it can be disabled by setting TcpClient.NoDelay = true (https://msdn.microsoft.com/en-us/en-US/library/system.net.sockets.tcpclient.nodelay(v=vs.110).aspx).
How can this be disabled for ServiceStack's TCP handling?
EDIT: I don't think that this is a duplicate of HttpWebRequest is slow with chunked data. The mentioned question covers HttpWebRequest which isn't used by ServiceStack. ServiceStack uses HttpListener which also happens to be controlled / managed by the mentioned ServicePointManager. We're going to conduct a test to see whether setting ServicePointManager.UseNagleAlgorithm = false solves the issue.
I think you provided the answer in your update: UseNagleAlgorithm = false should solve this issue. But be careful, because ServicePointManager.UseNagleAlgorithm = false; is a global setting, which means it will turn off the algorithm for all of your endpoints and for all of your requests in the entire AppDomain. When you call more than one service endpoint (which is usually the case) with mixed sizes of requests, it will bite back. So you should consider setting this only for one specific ServicePoint; you can acquire it by:
ServicePoint sp = ServicePointManager.FindServicePoint(<uri>);
sp.UseNagleAlgorithm = false;
and not set it globally.
Here is an article about it: https://technet2.github.io/Wiki/blogs/windowsazurestorage/nagles-algorithm-is-not-friendly-towards-small-requests.html
In the project I'm working on, we have several services implemented using WCF. The situation I'm facing is that some of the services need to know when a session ends, so that they can appropriately update the status of that client. Notifying the service when a client gracefully terminates (e.g. the user closes the application) is easy; however, there are cases where the application might crash, or the client machine might restart, in which case the client won't be able to notify the service about its status.
Initially, I was thinking about having a timer on the server side, which is triggered once a client connects, and changes the status of that client to "terminated" after, let's say, 1 minute. Now the client sends its status every 30 seconds to the service, and the service basically restarts its timer on every request from the client, which means it (hopefully) never changes the status of the client as long as the client is alive.
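Roughly, the idea looks like the sketch below (the termination callback is a placeholder for my own status-tracking code):
using System;
using System.Collections.Generic;
using System.Threading;

// Sketch: every status report resets a one-minute timer per client;
// if a timer ever fires, that client is marked terminated.
public class ClientWatchdog
{
    private readonly Dictionary<string, Timer> _watchdogs = new Dictionary<string, Timer>();
    private readonly Action<string> _markTerminated; // placeholder callback

    public ClientWatchdog(Action<string> markTerminated)
    {
        _markTerminated = markTerminated;
    }

    // Called by the service on every status request (~every 30 seconds).
    public void ReportStatus(string clientId)
    {
        lock (_watchdogs)
        {
            Timer timer;
            if (_watchdogs.TryGetValue(clientId, out timer))
                timer.Change(TimeSpan.FromMinutes(1), Timeout.InfiniteTimeSpan); // restart
            else
                _watchdogs[clientId] = new Timer(_ => _markTerminated(clientId),
                    null, TimeSpan.FromMinutes(1), Timeout.InfiniteTimeSpan);
        }
    }
}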
Even though this method is pretty reliable (well, not fully reliable; what if it takes the client more than 1 minute to send its status?), it's still not the best approach to solving this problem. Note that due to the original design of the system, I cannot implement a duplex service, which would probably make things a lot simpler. So my question is: is there a way for the service to know when the session ends (i.e. the connection times out or the client closes the proxy)? I came across this question: WCF: How to find out when a session is ending, but the link in the answer seems to be broken.
Another thing that I'm worried about is the way I'm currently creating my channel proxies, which is implemented like this:
internal static TResult ExecuteAndReturn<TProxy, TResult>(Func<TProxy, TResult> delegateToExecute)
{
    string endpointUri = ServiceEndpoints.GetServiceEndpoint(typeof(TProxy));
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;
    TResult valueToReturn;
    using (ChannelFactory<TProxy> factory = new ChannelFactory<TProxy>(binding,
        new EndpointAddress(new Uri(endpointUri),
            EndpointIdentity.CreateDnsIdentity(ServiceEndpoints.CertificateName))))
    {
        TProxy proxy = factory.CreateChannel();
        valueToReturn = delegateToExecute(proxy);
    }
    return valueToReturn;
}
So the channel is closed immediately after the service call is made (since it's in a using block); is that, from a service standpoint, an indication that the session is terminated? If so, should I keep only one instance of each service during the application's runtime, by using a singleton maybe? I apologize if the questions seem a little vague; I figured there would be plenty of questions like these, but I wasn't able to find something similar.
Yes, closing the channel terminates the session, but if there is an error of some kind then you are subject to the timeout settings of the service, like this:
<binding name="tcpBinding" receiveTimeout="00:00:10" />
This introduces a ten second timeout if an error occurs.
Check out Managing WCF Session Lifetime with IsInitiating and IsTerminating
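The gist of that article, sketched with a hypothetical contract (the attribute names are standard WCF):
using System.ServiceModel;

// The service treats the session as over once an IsTerminating
// operation completes.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IOrderService
{
    [OperationContract(IsInitiating = true)]
    void BeginSession();

    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void EndSession();
}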
Got a WCF DLL with client and server classes wrapping it.
When my server uses a callback, it takes over 10 seconds for my client to get it.
What is going on?
I've only got the simplest NetNamedPipeBinding endpoint.
I've got lots of code, so I'm not sure what to paste here.
What can cause such a long time?
EDIT:
Only the first callback takes 10 seconds;
after this it works fast.
Anyone know why?
I had a similar problem. This helped in my case:
NetNamedPipeSecurity security = new NetNamedPipeSecurity() { Mode = NetNamedPipeSecurityMode.None };
Pass this security object when creating the binding:
new NetNamedPipeBinding() { Security = security }
The original idea is from here. The thread was about TCP binding, but the solution presented at the end appeared to be helpful for named pipes too in my case.
Even simpler is to do:
new NetNamedPipeBinding(NetNamedPipeSecurityMode.None)
Nothing helped. I ended up adding a fake-call decorator that sends a first call when the system is booting.
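Roughly what I mean by a fake-call decorator (IMyService and Ping are placeholders for my actual contract):
// Wrap the generated client and fire one throwaway call at startup so
// the slow first round-trip happens before real traffic arrives.
public interface IMyService
{
    string Ping();
}

public class WarmupDecorator : IMyService
{
    private readonly IMyService _inner;

    public WarmupDecorator(IMyService inner)
    {
        _inner = inner;
        try { _inner.Ping(); }                  // the "fake" first call
        catch { /* warm-up only, ignore failures */ }
    }

    public string Ping()
    {
        return _inner.Ping();
    }
}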
I accidentally found a setting that greatly improves the performance of the first WCF request. The time came down from >10 seconds to ~2 seconds.
Set the binding's TransferMode property to Streamed, both on the server and the client:
var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
binding.TransferMode = TransferMode.Streamed;
Then pass the binding into AddServiceEndpoint on the server side and into the ChannelFactory constructor on the client side.
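In case it helps, a sketch of that wiring (service type, contract and pipe address are placeholders):
var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None)
{
    TransferMode = TransferMode.Streamed
};

// Server side:
var host = new ServiceHost(typeof(MyService));
host.AddServiceEndpoint(typeof(IMyService), binding, "net.pipe://localhost/myService");
host.Open();

// Client side:
var factory = new ChannelFactory<IMyService>(
    binding, new EndpointAddress("net.pipe://localhost/myService"));
IMyService proxy = factory.CreateChannel();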
How are you hosting your service? The first call will need to create the service, which can be slow to start up.
When debugging I use Visual Studio's built-in service host, and this often takes several seconds to sort itself out. I don't think I've ever seen it take 10 seconds, mind.