Named pipe callback takes 10 seconds? - C#

I have a WCF DLL with client and server classes wrapping it.
When my server fires a callback, it takes over 10 seconds for my client to receive it.
What is going on?
I only have the simplest NetNamedPipeBinding endpoint.
There is a lot of code, so I'm not sure what to paste here.
What can cause such a long delay?
EDIT:
Only the first callback takes 10 seconds;
after that it works fast.
Does anyone know why?

I had a similar problem. This helped in my case:
NetNamedPipeSecurity security = new NetNamedPipeSecurity() { Mode = NetNamedPipeSecurityMode.None };
Pass this security object when creating the binding:
new NetNamedPipeBinding() { Security = security }
The original idea is from here. That thread was about a TCP binding, but the solution presented at the end turned out to help with named pipes in my case too.
Even simpler is to do:
new NetNamedPipeBinding(NetNamedPipeSecurityMode.None)
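For completeness, a minimal sketch of wiring this binding up on both ends; the IMyService contract, the MyCallbackHandler class, and the pipe address are all hypothetical, only the binding settings come from this answer:

var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);

// Server side: host the service on a named pipe endpoint.
var host = new ServiceHost(typeof(MyService));
host.AddServiceEndpoint(typeof(IMyService), binding, "net.pipe://localhost/MyPipe");
host.Open();

// Client side: a duplex factory, since the question involves callbacks.
var context = new InstanceContext(new MyCallbackHandler());
var factory = new DuplexChannelFactory<IMyService>(context, binding,
    new EndpointAddress("net.pipe://localhost/MyPipe"));
IMyService proxy = factory.CreateChannel();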

Nothing helped. I ended up adding a fake-call decorator that sends a dummy first call while the system is booting, so the 10-second hit is paid before any real call is made.
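Roughly what I mean, as a sketch (the IMyService contract and its no-op Ping operation are hypothetical; any cheap call works as a warm-up):

using System.Threading;

// Hypothetical contract with a cheap no-op used only for warming up.
public interface IMyService
{
    void Ping();
}

// Decorator that fires one throwaway call at startup, so the first real
// call doesn't pay the connection-setup cost described in the question.
public class WarmedUpClient : IMyService
{
    private readonly IMyService inner;

    public WarmedUpClient(IMyService inner)
    {
        this.inner = inner;
        ThreadPool.QueueUserWorkItem(delegate
        {
            try { this.inner.Ping(); } // the fake first call
            catch { }                  // warm-up only; ignore failures
        });
    }

    public void Ping() { inner.Ping(); }
}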

Accidentally I found a setting that greatly improves the performance of the first WCF request: the time came down from over 10 seconds to about 2 seconds.
Set the binding's TransferMode property to Streamed on both server and client:
var binding = new NetNamedPipeBinding(NetNamedPipeSecurityMode.None);
binding.TransferMode = TransferMode.Streamed;
Then pass the binding into AddServiceEndpoint on the server side and into the ChannelFactory constructor on the client side.
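Continuing the snippet above, the hand-off looks roughly like this (the contract, host and pipe address are hypothetical):

// Server side:
host.AddServiceEndpoint(typeof(IMyService), binding, "net.pipe://localhost/MyPipe");

// Client side, with an identically configured binding:
var factory = new ChannelFactory<IMyService>(binding,
    new EndpointAddress("net.pipe://localhost/MyPipe"));
IMyService proxy = factory.CreateChannel();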

How are you hosting your service? The first call needs to create the service instance, which can be slow to start up.
When debugging I use Visual Studio's built-in service host, and this often takes several seconds to sort itself out. I don't think I've ever seen it take 10 seconds, mind.

Related

Debugging a long running Webservice Call

I'm using C# to connect to a web service to grab data. However, I'm currently having problems getting the code to run on a remote server; when I say problems, I mean it's running, but the connection speed between client and server is ridiculously slow (through no fault of mine: the client is providing a slow result set via a web service, and they have all timeouts turned off on their side in order to do so).
if ((endpointConfiguration == EndpointConfiguration.SFFService))
{
    System.ServiceModel.BasicHttpBinding result = new System.ServiceModel.BasicHttpBinding();
    result.MaxBufferSize = int.MaxValue;
    result.ReaderQuotas = System.Xml.XmlDictionaryReaderQuotas.Max;
    result.MaxReceivedMessageSize = int.MaxValue;
    result.AllowCookies = true;
    result.OpenTimeout = TimeSpan.MaxValue;
    result.CloseTimeout = TimeSpan.MaxValue;
    result.SendTimeout = TimeSpan.MaxValue;
    return result;
}
So. Not a great start: Open, Close and Send timeouts all set to maximum.
Anyway, I've matched their long timeouts on my side, and a few of the smaller web service requests finish and succeed on the server. The biggest, slowest one, however, just hangs indefinitely, probably because I've told it to never time out.
However, I'm pretty sure there's some other problem happening, as I left it overnight and it just sat there. Locally, on my development machine, although slow, it works.
My question is: does anyone have any ideas for additional things to check about the environment that could be in play here? I thought perhaps a firewall, but given that the small requests succeed (and connect), it is very difficult to debug the slow requests, as I have no idea how long to wait before accepting that the program isn't going to do anything.
FWIW I've tried connecting via a browser, and again, the browser just sits there waiting for the request to finish, which it never does (most likely due to the timeout being turned off on the server). If there were any way to see how much of the request was left to finish (like a percentage download), that might help give me some guidance as to whether the code is doing anything other than waiting.
There's no way to get the progress of the remote call, even when you are attached to the remote process. Try using a local Visual Studio on the server machine (preferably on a non-production VM) and attach to the local process rather than using the Remote Debugger.
I am not sure exactly what the question is, but the first step I'd take while debugging a slow application would be to test a local connection (local client and local server) to eliminate the network from the equation. If that works well, try hosting the server in a different place (a public cloud, maybe?) and try again there; if it works well then, there's definitely something en route or on that server.
If you're interested in tracking how long web service calls take, you could record the start time in HttpContext.Current.Items or OperationContext.Current.Items on BeginRequest/EndRequest in Global.asax, or in a MessageInspector if you use WCF (you can pass the start time between the two methods by returning it from the Before method and reading it from the correlationState parameter in the After method).
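As a sketch of the WCF inspector variant on the client side (the logging target is just an example; hook the inspector up via an IEndpointBehavior):

using System;
using System.Diagnostics;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// The timestamp returned from BeforeSendRequest is handed back to
// AfterReceiveReply as correlationState, exactly as described above.
public class TimingInspector : IClientMessageInspector
{
    public object BeforeSendRequest(ref Message request, IClientChannel channel)
    {
        return DateTime.UtcNow; // becomes correlationState
    }

    public void AfterReceiveReply(ref Message reply, object correlationState)
    {
        TimeSpan elapsed = DateTime.UtcNow - (DateTime)correlationState;
        Debug.WriteLine("Call took " + elapsed.TotalMilliseconds + " ms");
    }
}

// Register it from IEndpointBehavior.ApplyClientBehavior:
//     clientRuntime.MessageInspectors.Add(new TimingInspector());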

WCF - Determining when session ends on the server side

In the project I'm working on, we have several services implemented using WCF. The situation I'm facing is that some of the services need to know when a session ends so that they can appropriately update the status of that client. Notifying the service when a client gracefully terminates (e.g. the user closes the application) is easy; however, there are cases where the application might crash, or the client machine might restart, in which case the client won't be able to notify the service about its status.
Initially, I was thinking about having a timer on the server side, which is triggered once a client connects, and changes the status of that client to "terminated" after, let's say, 1 minute. Now the client sends its status every 30 seconds to the service, and the service basically restarts its timer on every request from the client, which means it (hopefully) never changes the status of the client as long as the client is alive.
Even though this method is fairly reliable (not fully reliable: what if it takes the client more than 1 minute to send its status?), it's still not the best approach to solving this problem. Note that due to the original design of the system, I cannot implement a duplex service, which would probably make things a lot simpler. So my question is: is there a way for the service to know when the session ends (i.e. the connection times out or the client closes the proxy)? I came across this question: WCF: How to find out when a session is ending, but the link in the answer seems to be broken.
Another thing that I'm worried about is the way I'm currently creating my channel proxies:
internal static TResult ExecuteAndReturn<TProxy, TResult>(Func<TProxy, TResult> delegateToExecute)
{
    string endpointUri = ServiceEndpoints.GetServiceEndpoint(typeof(TProxy));
    var binding = new WSHttpBinding();
    binding.Security.Mode = SecurityMode.Message;
    binding.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

    TResult valueToReturn;
    using (ChannelFactory<TProxy> factory = new ChannelFactory<TProxy>(binding,
        new EndpointAddress(new Uri(endpointUri),
            EndpointIdentity.CreateDnsIdentity(ServiceEndpoints.CertificateName))))
    {
        TProxy proxy = factory.CreateChannel();
        valueToReturn = delegateToExecute(proxy);
    }
    return valueToReturn;
}
So the channel is closed immediately after the service call is made (since it's in a using block). Is that, from a service standpoint, an indication that the session is terminated? If so, should I keep only one instance of each service for the application's lifetime, perhaps using a singleton? I apologize if the questions seem a little vague; I figured there would be plenty of questions like these, but I wasn't able to find anything similar.
Yes, closing the channel terminates the session, but if there is an error of some kind then you are subject to the timeout settings of the service, like this:
<binding name="tcpBinding" receiveTimeout="00:00:10" />
This introduces a ten second timeout if an error occurs.
Check out Managing WCF Session Lifetime with IsInitiating and IsTerminating
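For reference, the IsInitiating/IsTerminating pattern from that article looks roughly like this (contract and operation names are hypothetical):

using System.ServiceModel;

// The session starts with an initiating operation, and the service can
// observe its orderly end when the terminating operation is called.
[ServiceContract(SessionMode = SessionMode.Required)]
public interface IClientSessionService
{
    [OperationContract(IsInitiating = true)]
    void Connect(string clientId);

    [OperationContract(IsInitiating = false)]
    void ReportStatus(string status);

    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void Disconnect();
}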

HttpWebResponse won't scale for concurrent outbound requests

I have an ASP.NET 3.5 server application written in C#. It makes outbound requests to a REST API using HttpWebRequest and HttpWebResponse.
I have setup a test application to send these requests on separate threads (to vaguely mimic concurrency against the server).
Please note this is more of a Mono/Environment question than a code question; so please keep in mind that the code below is not verbatim; just a cut/paste of the functional bits.
Here is some pseudo-code:
// threaded client piece (pseudo-code; numThreads, doneEvent and _request
// are fields shared with Test below)
static int numThreads = 1;
static ManualResetEvent doneEvent;
static HttpWebRequest _request;

using (doneEvent = new ManualResetEvent(false))
{
    for (int i = 0; i < numThreads; i++)
    {
        ThreadPool.QueueUserWorkItem(new WaitCallback(Test), random_url_to_same_host);
    }
    doneEvent.WaitOne();
}

void Test(object some_url)
{
    // set up the service point here just to show which config settings I'm using
    ServicePoint lgsp = ServicePointManager.FindServicePoint(new Uri(some_url.ToString()));

    // set these to optimal for Mono and .NET
    lgsp.Expect100Continue = false;
    lgsp.ConnectionLimit = 100;
    lgsp.UseNagleAlgorithm = true;
    lgsp.MaxIdleTime = 100000;

    _request = (HttpWebRequest)WebRequest.Create(some_url);
    using (HttpWebResponse _response = (HttpWebResponse)_request.GetResponse())
    {
        // do stuff
    } // releases the response object

    // close out threading stuff
    if (Interlocked.Decrement(ref numThreads) == 0)
    {
        doneEvent.Set();
    }
}
If I run the application on my local development machine (Windows 7) in the Visual Studio web server, I can up the numThreads and receive the same avg response time with minimal variation whether it's 1 "user" or 100.
Publishing and deploying the application to Apache2 on a Mono 2.10.2 environment, the response times scale almost linearly (i.e. 1 thread = 300 ms, 5 threads = 1500 ms, 10 threads = 3000 ms). This happens regardless of the server endpoint (different hostname, different network, etc.).
Using IPTRAF (and other network tools), it appears as though the application only opens 1 or 2 ports to route all connections through and the remaining responses have to wait.
We have built a similar PHP application and deployed in Mono with the same requests and the responses scale appropriately.
I have run through every configuration setting I can think of for Mono and Apache, and the ONLY setting that differs between the two environments (at least in code) is that the ServicePoint's SupportsPipelining is sometimes false in Mono, while it is true from my machine.
It seems as though the ConnectionLimit (default of 2) is not being changed in Mono for some reason, but I am setting it to a higher value both in code and in the web.config for the specified host(s).
Either my team and I are overlooking something significant, or this is some sort of bug in Mono.
I believe that you're hitting a bottleneck in the HttpWebRequest. The web requests each use a common service point infrastructure within the .NET framework. This appears to be intended to allow requests to the same host to be reused, but in my experience results in two bottlenecks.
First, the service points allow only two concurrent connections to a given host by default, in order to be compliant with the HTTP specification. This can be overridden by setting the static property ServicePointManager.DefaultConnectionLimit to a higher value. See this MSDN page for more details. It looks as if you're already addressing this for the individual service point itself, but due to the concurrency locking scheme at the service point level, doing so may be contributing to the bottleneck.
Second, there appears to be an issue with lock granularity in the ServicePoint class itself. If you decompile and look at the source for the lock keyword, you'll find that it uses the instance itself to synchronize and does so in many places. With the service point instance being shared among web requests for a given host, in my experience this tends to bottleneck as more HttpWebRequests are opened and causes it to scale poorly. This second point is mostly personal observation and poking around the source, so take it with a grain of salt; I wouldn't consider it an authoritative source.
Unfortunately, I did not find a reasonable substitute at the time that I was working with it. Now that the ASP.NET Web API has been released, you may wish to give the HttpClient a look. Hope that helps.
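To illustrate the first point, the process-wide default has to be raised at startup, before the first request to a host creates its ServicePoint; something like:

using System.Net;

// Must run before any HttpWebRequest is created, since ServicePoints
// that already exist keep the limit they were created with.
ServicePointManager.DefaultConnectionLimit = 100;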
I know this is pretty old but I'm putting this here in case it might help somebody else who runs into this issue. We ran into the same problem with parallel outbound HTTPS requests. There are a few issues at play.
The first issue is that ServicePointManager.DefaultConnectionLimit did not change the connection limit as far as I can tell. Setting this to 50, creating a new connection, and then checking the connection limit on the service point for the new connection says 2. Setting it on that service point to 50 once appears to work and persist for all connections that will end up going through that service point.
The second issue we ran into was with threading. The current implementation of the mono thread pool appears to create at most 2 new threads per second. This is an eternity if you are doing many parallel requests that start at exactly the same time. To counteract this, we tried setting ThreadPool.SetMinThreads to a higher number. It appears that Mono only creates up to 1 new thread when you make this call, regardless of the delta between the current number of threads and the desired number. We were able to work around this by calling SetMinThreads in a loop until the thread pool had the desired number of idle threads.
I opened a bug about the latter issue because that's the one I'm most confident is not working as intended: https://bugzilla.xamarin.com/show_bug.cgi?id=7055
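The SetMinThreads workaround mentioned above looked roughly like this (a sketch; the target count and the sleep are arbitrary):

using System.Threading;

// On the affected Mono versions each SetMinThreads call appeared to add
// at most one idle thread, so we simply call it in a loop.
static void WarmUpThreadPool(int desiredWorkers)
{
    int worker, io;
    ThreadPool.GetMinThreads(out worker, out io);
    for (int i = worker; i < desiredWorkers; i++)
    {
        ThreadPool.SetMinThreads(desiredWorkers, io);
        Thread.Sleep(10); // give the pool a moment to spin a thread up
    }
}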
If #jake-moshenko is right about ServicePointManager.DefaultConnectionLimit not having any effect if changed in Mono, please file this as a bug in http://bugzilla.xamarin.com/.
However I would try some things before discarding this completely as a Mono issue:
Try using the SGen garbage collector instead of the old Boehm one, by passing --gc=sgen as a flag to mono.
If the above doesn't help, upgrade to Mono 3.2 (which, BTW, defaults to the SGen GC too), because there have been a lot of fixes since you asked the question.
If the above doesn't help, build your own Mono (master branch), as an important pull request about threading was merged recently.
If the above doesn't help, build your own Mono with this pull request added. If it fixes your problem, please add a "+1" to the pull request. It might be a fix for bug 7055.

TimeoutException when TransferMode=Streamed

I inherited this huge application consisting of client and server code, and I'm trying to change the transfer mode for parts of our communication to 'streamed', to get around the really wasteful memory consumption that can occur in buffered mode and that makes my client throw OOM exceptions; see also this answer.
The code goes something like this:
GZipMessageEncodingBindingElement gElement = new GZipMessageEncodingBindingElement();
HttpsTransportBindingElement hElement = new HttpsTransportBindingElement();
hElement.TransferMode = TransferMode.Streamed;
hElement.MaxBufferSize = int.MaxValue;
hElement.MaxBufferPoolSize = int.MaxValue;
hElement.MaxReceivedMessageSize = int.MaxValue;

CustomBinding binding = new CustomBinding();
binding.SendTimeout = new TimeSpan(0, 0, 60);
binding.Elements.Add(gElement);
binding.Elements.Add(hElement);

EndpointAddress address = new EndpointAddress(uri);
return new ChannelFactory<T>(binding, address).CreateChannel();
And the exception is a TimeoutException that suggests I increase the SendTimeout (which currently is set to 1 minute).
Update: However, there are 2 services that still work after setting the transferMode to streamed; I actually confirmed that they use GZIP/HTTPS by setting breakpoints in GZipMessageEncoder.ReadMessage(Stream, ...). So for 2 services the breakpoint is hit inside ReadMessage, while for one service it isn't. As far as I can tell, the services are configured identically when it comes to ConcurrencyMode and InstanceContextMode.
Update 2: After configuring only the one service that didn't seem to work as streamed and leaving the other 2 services buffered, it partly worked. So it is not the service as such that is bad; maybe some connection is getting in the way of other connections in streamed mode, making them time out.
If I delete just the 'TransferMode' line, everything is fine, and the test server never needs more than a second or so to respond, so just increasing the SendTimeout won't lead anywhere.
For this test, I only changed the client, by the way; to my understanding, this setting should not affect the way client and server communicate, just the way the application with the 'streamed' setting processes the data.
Please be easy on me, I'm a total WCF newb, and although I have seen some caveats on the MSDN pages about the streamed transfer mode, a pointer to what to grep for in my code / configuration would be really helpful; otherwise it's just huge, with many services, each with different transport settings, etc.
Thanks!
I could not figure out the exact cause of the issue, but I was able to fix it and get rid of the excessive memory consumption by doing the following.
After getting rid of GZipMessageEncoder (see "wcf conditional compression" for how to replace it with IIS's built-in compression), which is a monstrosity (see my answer on "WCF HttpTransport: streamed vs buffered TransferMode"), it was easy to switch to the streamed TransferMode: see "How can I prevent BufferManager / PooledBufferManager in my WCF client app from wasting memory?".

WCF client hangs on response

I have a WCF client (running on Win7) pointing to a WebSphere service.
All is good from a test harness (a little test fixture outside my web app), but when my calls to the service originate from my web project, one of the calls (and only that one) is extremely slow to deserialize (it takes minutes vs. seconds), and not just the first time.
I can see from Fiddler that the response comes back quickly, but then the WCF client hangs on the response itself for more than a minute before the next line of code is hit by the debugger, almost as if the client were having trouble deserializing. This happens only if the response contains a given PDF string (the operation generates a PDF), base64 encoded and chunked. If, for example, the service raises a fault (so the PDF string is not there), then the response is deserialized immediately.
Again, if I send the exact same envelope through SoapUI or from outside the web project, all is good.
I am at a loss. What should I be looking for, and is there some config setting that might do the trick?
Any help appreciated!
EDIT:
I coded a stub against the same service contract. Using the exact same basicHttpBinding and returning the exact same PDF string, there is no delay registered. I think this rules out the string and the binding as possible causes. What's left?
Changing transferMode="Buffered" to transferMode="Streamed" on the binding did the trick!
So the payload was apparently being chunked into small bits the size of the buffer.
I thought the same could have been achieved by increasing the buffer size (maxBufferSize="1000000"), but I had that in place already and it did not help.
I have had this bite me many times. Check in your WCF client configuration that you are not trying to use the Windows web proxy; the step of checking for a proxy (even if none is configured) can eat up a lot of time during your connection.
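One way to rule the proxy out in code, assuming an HTTP-based binding like the basicHttpBinding mentioned in the question (the two properties are standard WCF; the rest is a sketch):

var binding = new BasicHttpBinding();
binding.UseDefaultWebProxy = false; // skip the system/IE proxy lookup entirely
binding.BypassProxyOnLocal = true;  // and never proxy local traffic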
If the tips from the other users don't help, you might want to enable WCF tracing and open the log in the Service Trace Viewer. The information is detailed, but it has enabled me to fix a number of hard-to-identify problems in the past.
More information on WCF Tracing can be found on MSDN.
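For reference, the usual app.config/web.config snippet to switch tracing on looks roughly like this (the log path is an example):

<system.diagnostics>
  <sources>
    <source name="System.ServiceModel"
            switchValue="Information, ActivityTracing"
            propagateActivity="true">
      <listeners>
        <add name="traceListener"
             type="System.Diagnostics.XmlWriterTraceListener"
             initializeData="c:\logs\Traces.svclog" />
      </listeners>
    </source>
  </sources>
</system.diagnostics>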
Two things you can try:
Adjust the readerQuotas settings for your client. See http://msdn.microsoft.com/en-us/library/ms731325.aspx
Disable "Just My Code" in the debugging options (Tools -> Options -> Debugging -> General, uncheck "Enable Just My Code (Managed only)") and see if you can catch internal WCF exceptions.
//huusom
I had the very same issue... The problem with WCF, IMO, is in the client-side deserialization of the base64 string returned by the service into a byte[].
The easiest way to solve this, if you cannot change your service configuration (e.g. to use transferMode="Streamed"), is to adapt your DataContract/ServiceContract on the client side: replace the type byte[] with string in the response DataContract.
Then simply decode the returned string yourself with a piece of code such as:
byte[] file = Convert.FromBase64String(pdfBase64String);
To download a PDF of 70 KB used to take ~6 seconds. With the change suggested above, it now takes < 1 second.
V.
PS: Regarding the transfer mode, I did try changing only the client side (transferMode="StreamedResponse"), but without improvement...
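For illustration, the client-side change described in this answer amounts to something like this (type and member names are hypothetical):

using System;
using System.Runtime.Serialization;

// The response member is declared as string instead of byte[], so WCF
// hands back the raw base64 text instead of deserializing it itself.
[DataContract]
public class GetPdfResponse
{
    [DataMember]
    public string PdfBase64; // was: public byte[] Pdf;
}

// Given a GetPdfResponse response from the call, decode manually:
//     byte[] file = Convert.FromBase64String(response.PdfBase64);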
First things to check:
Is the config the same in the web project and the test project?
When you test from SoapUI, are you doing it from the same server and in the same security context as when the code runs from the web project?
Is there any spike in memory when the PDF comes back?
Edit
From your comments, the 1-minute wait appears to be a timeout. You also mention transactions.
I am wondering if the problem is somewhere else: the call to the WCF service goes OK, but the call is inside a transaction, and there is no Complete or Dispose on the transaction (I am guessing here), so the transaction / code hangs for 1 minute while it waits to time out.
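If that guess is right, the fix is the standard pattern (a sketch; the proxy call is hypothetical):

using System.Transactions;

// Complete() marks the transaction as successful, and Dispose() (via the
// using block) ends it promptly instead of leaving it to time out.
using (TransactionScope scope = new TransactionScope())
{
    proxy.GeneratePdf(); // hypothetical service call
    scope.Complete();
}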
Edit 2
Next things to check:
Is there any difference between the test and the web project in how the service is called?
Is there any difference in framework version, for example is one 3.0 and the other 3.5?
Could it be that the client side is trying to analyse what type of content is coming from the server? Try specifying the MIME type of the service response explicitly on the server side, e.g. Response.ContentType = "application/pdf". EDIT: By "client side" I mean any possible mediator, such as a firewall or a security suite.
