Does WCF optimize the client's connection under the hood? - c#

We are currently working on an application that will use a WCF service. The host (the client) is using the excellent WCF Service Proxy Helper from Erwyn van der Meer.
What I would like to know is: if I open this object multiple times, will it lead to multiple (expensive) connections, or will WCF manage and pool them?
The reason I want to know this is that we will be calling methods of the service at different points in time within the same web request, and we currently wrap the instantiation of the service proxy class within the call.
Eg.:
MyService.MyMethod1() // wraps the connection usage as well as the call to the service
Any suggestions on how to minimize the number of connections while keeping the code conformant with the single responsibility principle (SRP) would be excellent.
So, any ideas?

You should try to minimize the number of proxy objects you create. Proxies in WCF are quite expensive to set up, so creating one and calling functions on it multiple times is definitely more efficient than creating a new one for each method invocation.
The relationship between proxy objects and connections depends on the transport used. For the HTTP transports, an HTTP connection is initiated for each operation invocation. For the net.tcp transport, the connection is established at Open() time and kept until Close(). Certain binding settings (e.g. those supporting WS-SecureConversation) incur extra "housekeeping" connections and message exchanges.
AFAIK, none of the out-of-the-box bindings perform connection pooling.
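To make the "one proxy, many calls" advice concrete, here is a minimal client-side sketch. The contract IMyService, its methods, and the endpoint address are hypothetical stand-ins for whatever the real service exposes.

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract standing in for the real service.
[ServiceContract]
public interface IMyService
{
    [OperationContract] void MyMethod1();
    [OperationContract] void MyMethod2();
}

public static class Caller
{
    public static void DoWork()
    {
        var factory = new ChannelFactory<IMyService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8080/MyService"));

        IMyService proxy = factory.CreateChannel();
        try
        {
            // Both calls travel over the same channel; with net.tcp this
            // means one TCP connection instead of one per call.
            proxy.MyMethod1();
            proxy.MyMethod2();
            ((IClientChannel)proxy).Close();
        }
        catch (CommunicationException)
        {
            // A faulted channel can never be Close()d; Abort() it instead.
            ((IClientChannel)proxy).Abort();
        }
        finally
        {
            factory.Close();
        }
    }
}
```

The key design point is that the proxy outlives a single operation: one Open/Close bracket around a batch of calls, rather than one per call.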

It doesn't do pooling like SqlConnection, if that is what you mean.
[caveat: I use "connection" here loosely to mean a logical connection, not necessarily a physical connection]
Between using a connection on-demand, and keeping one around, there are advantages and disadvantages to both approaches. There is some overhead in initializing a connection, so if you are doing 5 things you should try to do them on the same proxy - but I wouldn't keep a long term proxy around just for the sake of it.
In particular, in terms of life-cycle management, once a proxy has faulted, it stays faulted - so you need to be able to recover from the occasional failure (which should be expected). Equally, in some configurations (some combinations of session/instancing), a connection has a definite footprint on the server - so to improve scalability you should keep connections short-lived. Of course, for true scalability you'd usually want to disable those options anyway!
The cost of creating a connection also depends on things like the security mode. IIRC, it is more expensive to open a mutually validated connection using message security than to set up a TransportWithMessageCredential connection - so "YMMV" very much applies here.
Personally, I find that the biggest common issue with proxy performance isn't the time needed to set up a connection - it is the chattiness of the API. i.e.:
Scenario A:
open a connection
perform 1 operation with a large payload (i.e. a message that means "do these 19 things")
close the proxy
Scenario B:
open a connection
perform 19 operations with small payloads
close the connection
Then scenario A will usually be significantly faster due to latency etc. And IMO don't even think about distributed transactions over WCF ;-p
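The chunky-vs-chatty trade-off above is really a contract design choice. A sketch, with a hypothetical order service (IOrderService and OrderLine are illustrative names):

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class OrderLine
{
    [DataMember] public string Sku { get; set; }
    [DataMember] public int Quantity { get; set; }
}

[ServiceContract]
public interface IOrderService
{
    // Scenario B (chatty): 19 round-trips, each paying full network latency.
    [OperationContract]
    void AddOrderLine(OrderLine line);

    // Scenario A (chunky): one round-trip carrying all 19 items at once.
    [OperationContract]
    void AddOrderLines(List<OrderLine> lines);
}
```

With 19 items and, say, 20 ms of latency per round-trip, the chatty shape costs roughly 380 ms in latency alone before any server work happens; the chunky shape pays it once.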

Related

Redis Connection - Multiplex it or not?

In terms of performance, is it better to multiplex one connection object across multiple requests or to give each request its own connection?
Well, what redis client are you using? StackExchange.Redis is explicitly designed to be multiplexed and shared between multiple requests (or any other parallel load); other clients may not be, and may require you to lease from a pool per request (or for some portion of a request). There is quite a lot of overhead involved in establishing a connection with redis (optionally DNS, sockets, optionally TLS, and a bit of chit-chat backwards and forwards to determine the redis server configuration), so you don't want to completely establish a new underlying connection per request (even if it is fast).
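The standard pattern for sharing one StackExchange.Redis multiplexer across all requests looks like the sketch below; the connection string is an assumption for illustration.

```csharp
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    // Lazy<T> gives thread-safe, once-only initialization; every request
    // then reuses the same multiplexed connection. The connection string
    // here is a placeholder.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(
            () => ConnectionMultiplexer.Connect("localhost:6379"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;
}

// Usage, e.g. inside a request handler:
// IDatabase db = RedisConnection.Connection.GetDatabase();
// db.StringSet("greeting", "hello");
// string value = db.StringGet("greeting");
```

Note that ConnectionMultiplexer is designed to be long-lived and thread-safe; IDatabase instances obtained from it are cheap pass-through objects, so there is no need to cache those.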

Chatroom functionality with WCF, duplex callbacks vs polling?

I am using WCF and I am putting a chatroom facility in my C# program. So I need to be able to send information from the server to the clients for two events -
When a user connects/disconnects I update the list of connected users and send that back to all clients for display in a TextBlock
When a user posts a message, I need the server to send that message out to all clients
So I am looking for advice on the best way of implementing this. I was going to use netTcpBinding for duplex callbacks to clients, but then I ran into some issues regarding not being able to call back the client if the connection is closed. I need to use per-call instances for scalability. I was advised in this thread that I shouldn't leave connections open as it would 'significantly limit scalability' - WCF duplex callbacks, how do I send a message to all clients?
However I had a look through the book Programming WCF Services and the author seems to state that this is not an issue because 'In between calls, the client holds a reference on a proxy that doesn’t have an actual object at the end of the wire. This means that you can dispose of the expensive resources the service instance occupies long before the client closes the proxy'
So which is correct, is it fine to keep proxies open on clients?
But even if that is fine, it leads to another issue. If the service instances are destroyed between calls, how can they make duplex callbacks to update the clients? Regarding per-call instances, the author of Programming WCF Services says 'Because the object will be discarded once the method returns, you should not spin off background threads or dispatch asynchronous calls back into the instance'
Would I be better off having clients poll the service for updates? I would have imagined that this is much more inefficient than duplex callbacks, clients could end up polling the service 50+ times as often as using a duplex callback. But maybe there is no other way? Would this be scalable? I envisage several hundred concurrent users.
Since I am guilty of telling you that server callbacks won't scale, I should probably explain a bit more. Let me start by addressing your questions:
Without owning the book in question, I can only assume that the author is either referring to http-based transports or request-response only, with no callbacks. Callbacks require one of two things- either the server needs to maintain an open TCP connection to the client (meaning that there are resources in use on the server for each client), or the server needs to be able to open a connection to a listening port on the client. Since you are using netTcpBinding, your situation would be the former. wsDualHttpBinding is an example of the latter, but that introduces a lot of routing and firewall issues that make it unworkable over the internet (I am assuming that the public internet is your target environment here- if not, let us know).
You have intuitively figured out why server resources are required for callbacks. Again, wsDualHttpBinding is a bit different, because in that case the server is actually calling back to the client over a new connection in order to send the async reply. This basically requires ports to be opened on the client's side and punched through any firewalls, something that you can't expect of the average internet user. Lots more on that here: WSDualHttpBinding for duplex callbacks
You can architect this a few different ways, but it's understandable if you don't want the overhead (and potential for delay) of the clients constantly hammering the server for updates. Again, at several hundred concurrent users, you are likely still within the range that one good server could handle using callbacks, but I assume you'd like to have a system that can scale beyond that if needed (or at peak times). What I'd do is this:
Use callback proxies (I know, I told you not to)... Clients that connect create new proxies, which are stored in a thread-safe collection and occasionally checked for liveness (and purged if found to be dead).
Instead of having the server post messages directly from one client to another, have the server post the messages to some Message Queue Middleware. There are tons of these out there- MSMQ is popular with Windows, ActiveMQ and RabbitMQ are FOSS (Free Open Source Software), and Tibco EMS is popular in big enterprises (but can be very expensive). What you probably want to use is a topic, not a queue (more on queues vs topics here).
Have a thread (or several threads) on the server dedicated to reading messages off of the topic, and if that message is addressed to a live session on that server, deliver that message to the proxy on the server.
Here's a rough sketch of the architecture (the diagram from the original answer is not reproduced here): clients connect through a load balancer to the servers, which exchange chat messages with each other via the shared topic.
This architecture should allow you to automatically scale out by simply adding more servers, and load balancing new connections among them. The message queueing infrastructure would be the only limiting factor, and all of the ones I mentioned would scale beyond any likely use case you'd ever see. Because you'd be using topics and not queues, every message would be broadcast to each server- you might need to figure out a better way of distributing the messages, like using hash-based partitioning.
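The callback-proxy collection described in the steps above might be sketched as follows. IChatCallback and the session-id scheme are hypothetical; the delivery method is what the topic-reader threads would call for each message pulled off the queue.

```csharp
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

// Hypothetical callback contract held by each connected client.
public interface IChatCallback
{
    [OperationContract(IsOneWay = true)]
    void ReceiveMessage(string from, string text);
}

public static class CallbackRegistry
{
    // Thread-safe collection of live callback proxies, keyed by session id.
    private static readonly ConcurrentDictionary<string, IChatCallback> Clients =
        new ConcurrentDictionary<string, IChatCallback>();

    // Called from the service operation when a client connects.
    public static void Register(string sessionId) =>
        Clients[sessionId] =
            OperationContext.Current.GetCallbackChannel<IChatCallback>();

    // Called by the topic-reader thread for each message off the queue.
    public static void Deliver(string from, string text)
    {
        foreach (var pair in Clients)
        {
            try
            {
                pair.Value.ReceiveMessage(from, text);
            }
            catch (CommunicationException)
            {
                // Dead proxy: purge it from the collection.
                Clients.TryRemove(pair.Key, out _);
            }
        }
    }
}
```

Purging on delivery failure doubles as the liveness check; a periodic sweep with a ping callback could supplement it if messages are infrequent.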

WCF - is it bad practice to leave a channel open for a long time?

I'm just learning the ropes around WCF. What I was planning to do was have a duplex channel open between a client and server using NetTcpBinding, and keep that open indefinitely so that the server can initiate requests to the client.
Then I stumbled upon this blog by Jesse Ezell, which seems to indicate that it's a Bad Thing to keep a channel open indefinitely, because you can't catch faults, and that causes all manner of instabilities.
Is that correct? If I use NetTcpBinding and keep a reference to an open channel on either side of the relationship, what happens if there's a communication failure? How do I catch the failure event? What other gotchas are there? Is there any difference which .NET framework you're using? (I'm on 4.0.)
I don't agree with Jesse (as a side note: he also recommends using WCF service classes as singletons by default, which in my opinion is the worst idea ever)...
As long as you take good care of catching exceptions on the server (e.g. by implementing the IErrorHandler interface in your service class), there's no point here to keep shutting down your channel... especially not in a corporate LAN environment using netTcpBinding.
Contrary to e.g. a database connection which often incurs licensing costs, keeping a network connection to your service machine open shouldn't cause any issues. It's also usually not a limited resource, so constantly opening and closing it seems pointless.
If you do keep your service channel open for a longer period of time, you need to be able to handle faults on the client side - e.g. you need to be able to recover from a situation where the channel has become faulted because an exception has occurred after all (e.g. the network is down or something like that).
But if you do that, then I don't see any benefit in constantly closing your channel after every call, and reopening for the next...
Yes, it is good practice to close the channel as soon as it is no longer needed, but that is not usual in the case of duplex communication. When you are using duplex communication, you need an open channel to allow the server to send messages back to the client. WCF communication is always initiated by the client; a callback is possible only while the channel initiated by the client stays open.
Duplex communication involves some additional work to handle connection failures. Your service should contain some ping mechanism to allow the client to check the connection at a regular interval. If the connection fails, the client receives an exception and can re-establish the connection. The service should also handle the exception thrown when sending a callback message over a faulted channel.
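A client-side sketch of the ping-and-recover pattern just described. IChatService, its Ping() operation, the callback contract, and the address are all hypothetical, and the class is assumed to implement the callback contract itself.

```csharp
using System;
using System.ServiceModel;

public interface IChatCallback { /* hypothetical callback operations */ }

[ServiceContract(CallbackContract = typeof(IChatCallback))]
public interface IChatService
{
    [OperationContract] void Ping(); // cheap no-op round-trip
}

public class ResilientClient : IChatCallback
{
    private IChatService proxy;

    private IChatService CreateChannel()
    {
        var factory = new DuplexChannelFactory<IChatService>(
            new InstanceContext(this), // this object receives callbacks
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8080/Chat"));
        return factory.CreateChannel();
    }

    // Call this from a timer at a regular interval.
    public void EnsureConnected()
    {
        try
        {
            if (proxy == null ||
                ((IClientChannel)proxy).State == CommunicationState.Faulted)
            {
                proxy = CreateChannel();
            }
            proxy.Ping(); // detects a silently dead connection
        }
        catch (CommunicationException)
        {
            if (proxy != null) ((IClientChannel)proxy).Abort();
            proxy = null; // rebuilt on the next timer tick
        }
    }
}
```

The essential point is that a faulted channel is never reused: it is aborted and replaced, and the ping is what surfaces the fault promptly instead of at the next real call.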

WCF or Custom Socket Architecture

I'm writing a client/server architecture where there are going to be possibly hundreds of clients over multiple virtual machines, mostly on the intranet but some in other locations.
Each client will be gathering data constantly and sending a message to a server every second or so. Each message will probably be about 128 characters or so in length.
My question is, for this architecture where I am writing both client/server in .NET is should I go with WCF or some socket code I've written previously. I need scalability (which the socket code has in mind), reliability and just the ability to handle that many messages.
I would not make a final decision without performing some proof of concept. Create a very simple service, host it, and use a stress test to get real performance results. Then validate the results against your requirements. You have mentioned the number of messages, but you didn't mention the expected response time. There is a similar question currently being discussed on the MSDN forum, which complains about slow response times in WCF compared to sockets.
Other requirements are not directly mentioned in your post so I will make some assumption for best performance:
Use netTcpBinding - best performance, binary encoding, requires .NET server / clients. I guess you are going to use net.tcp anyway, because your other choice was direct socket programming.
Don't use security if you don't have to - it reduces performance. Probably not possible for clients outside your intranet.
Reuse the proxy on clients if possible. Opening a TCP connection is expensive; if you reuse the same proxy you will have a single connection per proxy. This will affect instancing of your services - by default a single service instance will handle all requests from a single proxy.
Set service throttling so that your service host is ready for many clients.
Also you should make some decisions about load balancing. Load balancing for WCF net.tcp connections requires sticky sessions (session affinity) so that after opening the channel the client always calls the service on the same server (because the instance of that service was created only on a single server).
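The throttling step above can be done in code on the host via ServiceThrottlingBehavior; the numbers below are illustrative placeholders, not recommendations, and should come from load testing.

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

public static class HostSetup
{
    public static ServiceHost ConfigureThrottling(ServiceHost host)
    {
        var throttle =
            host.Description.Behaviors.Find<ServiceThrottlingBehavior>();
        if (throttle == null)
        {
            throttle = new ServiceThrottlingBehavior();
            host.Description.Behaviors.Add(throttle);
        }
        throttle.MaxConcurrentCalls = 256;      // operations in flight at once
        throttle.MaxConcurrentSessions = 1000;  // open net.tcp sessions
        throttle.MaxConcurrentInstances = 1256; // service instances alive
        return host;
    }
}
```

The defaults for these limits are fairly low, which is why a host that looks fine with a handful of test clients can stall once hundreds of net.tcp sessions pile up.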
100 requests per second does not sound like much for a WCF service, especially with that little payload. But it should be quite quick to setup a simple setup with a WCF service with one echo method just returning the input and then hook up a client with a bunch of threads and a loop.
If you already have a working socket implementation you might keep it, but otherwise you can pick WCF and spend your precious development time elsewhere.
From my experience with WCF, I can tell you that its performance under high load is very nice. In particular, you can choose between several bindings to meet your requirements in different scenarios (e.g. httpBinding for outside communication, netPeerTcpBinding on a local network).

C#, WCF, When to reuse a client side proxy

I'm writing an application which transfers files using WCF. The transfers are done in segments so that they can be resumed after any unforeseen interruption.
My question concerns the use of the client side proxy, is it better to keep it open and reuse it to transfer each file segment or should I be reopening it each time I want to send something?
The reason to close a proxy as quickly as possible is the fact that you might be having a session in place which ties up system resources (netTcpBinding uses a transport-level session, wsHttpBinding can use security or reliability-based sessions).
But you're right - as long as a client proxy isn't in a faulted state, you can totally reuse it.
If you want to go one step further, and if you can share a common assembly with the service and data contracts between server and client, you could split up the client proxy creation into two steps:
create a ChannelFactory<IYourServiceContract> once and cache that - this is a very expensive and resource-intensive operation; since you need to make this a generic using your service contract (interface), you need to be able to share contracts between server and client
given that factory, you can create your channels using factory.CreateChannel() as needed - this operation is much less "heavy" and can be done quickly and over and over again
This is one possible optimization you could look into - given the scenario that you control both ends of the communication, and you can share the contract assembly between server and client.
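The two-step split described above might look like this; IFileTransfer and the endpoint address are hypothetical, and the sketch assumes the contract assembly is shared between client and server.

```csharp
using System;
using System.ServiceModel;

public static class ProxyFactory<TContract> where TContract : class
{
    // Step 1: the expensive part - built once per contract type and
    // cached for the lifetime of the process. The address is a placeholder.
    private static readonly ChannelFactory<TContract> Factory =
        new ChannelFactory<TContract>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8080/Service"));

    // Step 2: the cheap part - a fresh channel whenever one is needed.
    public static TContract CreateChannel() => Factory.CreateChannel();
}

// Usage (IFileTransfer is a hypothetical shared contract):
// var proxy = ProxyFactory<IFileTransfer>.CreateChannel();
// ... transfer a segment ...
// ((IClientChannel)proxy).Close();
```

Because the generic static field gives one cached factory per contract type, the reflection and metadata work of factory construction is paid only on first use.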
You can reuse your WCF client proxy, and that will make your client application faster, as the proxy is initialized only once.
Creating a new proxy takes about 50-100 ms; if your system needs to scale well, that is quite a significant amount of time.
When reusing the proxy, you have to be careful about its state and threading issues. Do not try to send (or receive) data using a proxy that is already busy doing so. You'll have terrible sleepless nights.
One way of reusing is, having a [ThreadStatic] private field for the proxy and testing its state & presence each time you need to send data. If a new thread was created, the thread static field will be null and you'll need to create a proxy. Assuming you have a simple threading model, this will keep the different threads from stepping on each other's toes and you'll have to worry only about the faulted state of the proxy.
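A sketch of that [ThreadStatic] pattern; IMyService and the address are hypothetical.

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract for illustration.
[ServiceContract]
public interface IMyService
{
    [OperationContract] void SendSegment(byte[] data);
}

public static class PerThreadProxy
{
    [ThreadStatic]
    private static IMyService proxy; // one proxy per thread, never shared

    public static IMyService Get()
    {
        // Null on a fresh thread; also rebuilt after a fault, since a
        // faulted proxy can never be used again.
        if (proxy == null ||
            ((IClientChannel)proxy).State == CommunicationState.Faulted)
        {
            proxy = new ChannelFactory<IMyService>(
                    new NetTcpBinding(),
                    new EndpointAddress("net.tcp://localhost:8080/MyService"))
                .CreateChannel();
        }
        return proxy;
    }
}
```

Since each thread owns its proxy outright, no locking is needed and the only state you must still watch for is the faulted channel, exactly as the answer says.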
