Singleton vs PerSession vs PerCall state management - C#

I have a WCF Service that is transferring large files.
Currently I'm using a Singleton service with a list of instances of my class to hold the state and respond to client requests for transfer progress and so on.
The instantiated class itself spawns a new thread for each transfer when needed.
Clients who add transfer requests and query progress can disconnect in the meantime and reconnect at a random time to repeat those requests.
Also, several different clients may want to request the progress of all transfers that are going on.
Everything is working great as it is, but I'm sure there is a better way of doing this?
Storing state somehow in SQL?
Storing state as I'm currently doing, but somehow reconnecting to the same instance? How would I get data from all instances then?
I hope you understood my rather long question :)

Your solution would work great if there is only one instance of your WCF service.
When you run more than one instance behind a load balancer, that approach won't work.
In that case you need to keep the state in some place common to all instances. It could be SQL Server, a state server, another WCF service that keeps the state, etc.
UPDATE:
You need to generate an id for each file transfer task. The Singleton can then associate that id with the instance that performs the transfer (let's call it the Executor).
When a client wants to get progress or cancel a transfer, it calls the Singleton and provides the task id.
The Singleton uses the task id to resolve the actual Executor and forwards the client's request to the right Executor.
As a result you can instantiate as many Executors as you need. The client doesn't need to know which Executor processes its file; all the client needs to know is the task id.
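The task-id routing described above can be sketched roughly like this. This is a minimal, hypothetical example; the TransferManager and Executor names are my own, and a real service would add error handling and persistence:

```csharp
using System;
using System.Collections.Concurrent;

// Illustrative Executor: owns one transfer and its worker thread.
public class Executor
{
    public int Progress;                     // percent complete, updated by the worker
    public void Cancel() { /* signal the worker thread to stop */ }
}

// Illustrative Singleton: maps task ids to Executors and forwards requests.
public class TransferManager
{
    public static readonly TransferManager Instance = new TransferManager();

    private readonly ConcurrentDictionary<Guid, Executor> _tasks =
        new ConcurrentDictionary<Guid, Executor>();

    // The id returned here is the only thing the client has to remember.
    public Guid StartTransfer()
    {
        var id = Guid.NewGuid();
        _tasks[id] = new Executor();         // the Executor would spawn its own thread
        return id;
    }

    // Progress and cancel requests are forwarded to the right Executor by id,
    // so the client never needs to know which Executor handles its file.
    public int GetProgress(Guid id) =>
        _tasks.TryGetValue(id, out var ex) ? ex.Progress : -1;

    public void Cancel(Guid id)
    {
        if (_tasks.TryRemove(id, out var ex)) ex.Cancel();
    }
}
```

Because the dictionary is the only shared state, "request progress of all transfers" is just an enumeration over it, and any number of Executors can run concurrently behind the one Singleton.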

Related

WCF best practice for multiple service calls

We're writing a WCF service which will have to handle external client requests; for each request we will:
Connect to another external service (not under our control) and validate a signature to ensure the identity of the caller.
Make several operations on our environment.
Finally, if everything is ok, call to another external service (again not under our control) and obtain a registry number to send to the client as a response.
From my point of view, the external service calls add too much uncertainty to allow the whole process in a single operation; there are lots of possible scenarios out of our reach that can end with a timeout on the final client.
IMHO we should write separate operations to handle each of the parts (generating a secure token to carry authentication through the 2nd and 3rd steps), so we can avoid a scenario where the whole process works fine but takes so long that the client stops waiting for the response, causing a timeout.
My opinion is not shared by other members of the team, who want to do it all in a single operation.
Is there a best practice for this type of scenario?
Would it be better to use different operations, or won't it have any impact at all?
You can leverage a callback contract. Create a one-way operation and let the client invoke the service to kick off all the work. Within the method, save the client's callback reference; once all the long-running work is done, check that the reference is not null to make sure the client has not been closed. If not, invoke the callback operation specified in the callback contract. The advantage of this approach is that the client does not keep waiting for the result but is informed when the result is obtained and ready to be delivered. I refer you to this post.
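A sketch of what such a callback contract pair might look like. All names here are illustrative, and a real service would also need binding and hosting configuration:

```csharp
using System.ServiceModel;

// Hypothetical contract pair for the pattern described above.
[ServiceContract(CallbackContract = typeof(IProcessingCallback))]
public interface IProcessingService
{
    // One-way: the client fires the request and does not wait for a reply.
    [OperationContract(IsOneWay = true)]
    void BeginProcessing(string requestData);
}

public interface IProcessingCallback
{
    // Invoked by the service once the long-running work has finished.
    [OperationContract(IsOneWay = true)]
    void ProcessingCompleted(string result);
}

public class ProcessingService : IProcessingService
{
    public void BeginProcessing(string requestData)
    {
        // Save the client's callback reference before starting the work.
        var callback = OperationContext.Current
            .GetCallbackChannel<IProcessingCallback>();

        // ... validate the signature, run the operations, get the registry number ...

        // When done, notify the client, checking the channel is still usable first.
        if (((ICommunicationObject)callback).State == CommunicationState.Opened)
            callback.ProcessingCompleted("registry number");
    }
}
```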

How to make a call to WCF webservice with more than one client at the same time (in parallel)

I have a C# WCF web service (the server) and two clients: one Java and one C++. I want both clients to run at the same time. The scenario I can't figure out is:
My Java client makes a call to the WCF web service, and processing might take around 10 minutes; meanwhile I want my C++ client to make a call to the web service and get a response back. But right now, when I make a call from the C++ client while the Java client's request is being processed, I don't get a response for the C++ request until the Java request completes.
Can anyone please suggest how to make this work in parallel? Thanks in advance.
Any "normal" WCF service can most definitely handle more than one client request at any given time.
It all depends on your settings for InstanceContextMode:
PerSession means each session gets its own instance of the service class, which handles all requests from that same client.
PerCall means each request gets a fresh instance of the service class, which is disposed again after handling the call.
Single means you have a singleton: just one instance of your service class.
If you have a singleton - you need to ask yourself: why? By default, PerCall is the recommended setting, and that should easily support quite a few requests at once.
See Understanding Instance Context Mode for a more thorough explanation.
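For reference, the mode is set with a ServiceBehavior attribute on the service class, e.g. (the IFileTransferService contract name here is a placeholder):

```csharp
using System.ServiceModel;

// PerCall: a fresh service instance per request (the recommended default).
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class FileTransferService : IFileTransferService
{
    // ...
}
```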
Use the
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
attribute on your service class. More on this, for example, here:
http://www.codeproject.com/Articles/89858/WCF-Concurrency-Single-Multiple-and-Reentrant-and
This is peripheral to your question, but have you considered asynchronous callbacks from the method that takes 10+ minutes to return, and having the process run in a separate thread? It's not really good practice to have a service call wait 10 minutes synchronously, and this might solve your problem, although the service should allow multiple callers at once anyway (our WCF service takes thousands of simultaneous requests).
When you call a WCF service you have a choice of calling it synchronously or asynchronously. A synchronous call waits for the response before returning to the caller; in the caller it would look like "myResult = svc.DoSomething()". With an asynchronous call, the caller gives the service proxy a function to call when the operation completes and does not wait for the response; the caller doesn't block and goes about its business.
Your callback will take DoSomethingCompletedEventArgs:
void myCallback(object sender, DoSomethingCompletedEventArgs e)
{
var myResult = e.Result;
//then use the result however you would have before.
}
You register the callback function like an event handler:
svc.DoSomethingCompleted+=myCallback;
then call
svc.DoSomethingAsync(). Note there is no return value in that statement; the service executes myCallback when it completes and passes along the result. (All WCF calls from Silverlight have to be asynchronous, but other clients don't have that restriction.)
Here's a codeproject article that demonstrates a slightly different way in detail.
http://www.codeproject.com/Articles/91528/How-to-Call-WCF-Services-Synchronously-and-Asynchr
This keeps the client from blocking during the 10+ minute process but doesn't really change the way the service itself functions.
Now the second part of what I was mentioning: firing off the 10+ minute process in a separate thread from inside the service. The service methods themselves should be very thin and just call functionality in other libraries. Functions that are going to take a long time should ideally run in their own threads (say a BackgroundWorker, for which you register a completion callback on the service side) and have some sort of persistent system to keep track of their progress and any results that need to go back to the client.
If it were me, I would register the request for the process in a database and then update that record on completion. The client would then periodically issue a simple poll to see if the process was completed and fetch any results. You might be able to set up duplex binding to get notified automatically when the process completes, but to be honest it's been a few years since I've done any duplex binding, so I don't remember exactly how it works.
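As a rough sketch of that request/poll pattern: here an in-memory dictionary stands in for the database, and the JobTracker name and its methods are invented for illustration:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical job tracker: the service registers long-running work here,
// and a cheap polling operation reads the status back for the client.
public static class JobTracker
{
    private static readonly ConcurrentDictionary<Guid, string> _status =
        new ConcurrentDictionary<Guid, string>();

    // The thin service method: record the job, start the work, return at once.
    public static Guid StartJob(Action longRunningWork)
    {
        var id = Guid.NewGuid();
        _status[id] = "Running";
        Task.Run(() =>
        {
            longRunningWork();               // the 10+ minute work happens here
            _status[id] = "Completed";       // a real system would update the db row
        });
        return id;
    }

    // The client polls this periodically instead of blocking for 10 minutes.
    public static string GetStatus(Guid id) =>
        _status.TryGetValue(id, out var s) ? s : "Unknown";
}
```

The starting call returns immediately with an id, so a slow Java request never blocks a C++ client; each client just polls with its own id until it sees "Completed".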
These topics are really too big for me to go into depth here. I would suggest researching multithreaded operations with the BackgroundWorker.

Best practice for WCF Duplex client

I can't deny the performance benefit of a duplex async call, but some things about it make me wary.
My concern is that, given an instantiated client object, will WCF be able to tell which particular client service instance will receive the callback argument?
Can anyone tell me if this is a good idea? If not why not?
new DuplexChannelFactory<IServerWithCallback>(
new ClientService(),
new NetTcpBinding(),
new EndpointAddress("net.tcp://localhost:1234/"+Guid.NewGuid()))
If the virtual path above is reserved, how can it be discarded? I want the client service lifetime to be fairly short, i.e. make a request, receive the response, and when done receiving, kill it. How bad is the performance penalty of keeping the client service lifetime short as opposed to pooling it and keeping it alive longer?
The idea is to avoid timeout issues. When done receiving and sending, dispose ASAP. By convention, you can't pass the client services around; if you need info, create a new one, simple, just like EF/L2S etc.
From inside the WCF service itself, how do I kill the session with the client? I.e. I don't want the client ending the session (I know I can decorate my operation accordingly), but I want the service to terminate the session programmatically when certain conditions are met.
I can fix the port and forward accordingly to resolve any firewall issue, but what I'm worried about is the client sitting behind a load balancer. How would the service know which particular server to call back?
I think in the end Duplex services is simply another failed architecture from Microsoft. This is one of those things that looked really good on paper but just falls apart upon closer examination.
There are too many weaknesses:
1) Reliance on a session for the server to establish the client listener. This session information is stored in memory, hence the server itself cannot be load balanced. Or, if it were load balanced, you would need to turn IP affinity on; but then, if one of the servers is bombarded, you can't simply add another one and expect all these sessions to automagically migrate over to the new server.
2) For each client sitting behind a router/firewall/load balancer, a new endpoint with a specific port needs to be created; otherwise the router will not be able to properly route the callback messages to the appropriate client. An alternative is a router that allows custom programming to redirect a specific path to a particular server; again, a tall order. Yet another way is for the client with the callback to host its own database and share data via the database. That might work in some situations where licensing fees are not an issue, but it introduces a lot of complexity, is onerous on the client, and mixes the application and service layers together (which might be acceptable in some exceptional situations, but not on top of the huge setup cost).
3) All this basically says that duplex is practically useless. If you need a callback, you would do well to set up a WCF host on the client end. It will be simpler and much more scalable, and there is less coupling between client and server.
The best duplex solution for scalable architecture is in the end not using one.
It will depend on how short-lived you need the clients to be and how long they will last. Pooling would not be an option if you specifically need a new client each time, but if the clients keep doing the same thing, why not have a pool of them waiting to be used? If one faults out, recreate that same client again.
In reality, in a callback scenario, when the service calls back to the client (really calling a function on the client) to pass information, the service is now the client and vice versa. The service making the callback can .Close() the connection, but it will stay open until the GC can dispose of it; in my experience that can take longer than expected. So in short, the client (the one making the call) should be responsible for shutting itself down or disconnecting; the service should only give back answers or take data from a client.
In duplex callbacks, the service calling back to the client gets the client's address abstracted behind the DuplexChannelFactory. If the service can't call back to the client, I don't think there's much that can be done; you'd have to ensure the port your clients use to call the service is open to receive callbacks, I would guess.

How does code run in a WebService?

I'm currently working on a web service that is meant to be used by numerous different thin clients. I have been testing it so far with one website as the client.
My question is: the web service has classes, right? When someone logs onto a website that uses the web service, is the main class instantiated for that user's session?
For example, during debugging I have one client.
The service starts off with a variable initialized to "Hello World". The client asks for the string from the service, then sends it back.
The service then sets an internal variable equal to the one sent by the client and appends "2" to the end, so the variable is now "Hello World 2".
The client asks for the updated string and gets "Hello World 2".
Another user logs on. They ask for the string, expecting "Hello World", but get "Hello World 2". Now when they send it back, they get "Hello World 2".
This is an undesirable result, and is what I'm trying to avoid.
How do I go about doing this?
If you want state to be reset for new clients, then you need to keep track of which clients have connected and serve them data appropriately. This might entail requiring a call to set up a session, so that you can assign and return a token to the client, which is then used for all subsequent calls.
An alternative is to write your service using WCF, and then use the PerSession InstanceContextMode, which will construct a new service object for every session. In this case, you still need to indicate which calls begin a session and which calls end a session. For more, see here.
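A sketch of how those session boundaries can be marked in a WCF contract. The contract and service names here are invented, and hosting configuration is omitted:

```csharp
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IGreetingService
{
    [OperationContract(IsInitiating = true)]     // this call begins the session
    void StartSession();

    [OperationContract]
    string GetGreeting();

    [OperationContract(IsTerminating = true)]    // this call ends the session
    void EndSession();
}

// Each session gets its own service instance, so _greeting is per-client
// state: a second user starts from "Hello World" again.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class GreetingService : IGreetingService
{
    private string _greeting = "Hello World";
    public void StartSession() { }
    public string GetGreeting() => _greeting;
    public void EndSession() { }
}
```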
A WebService in ASP.NET works much like a customer service call center. A "pool" of HttpApplications, each representing a connection to your webserver via a browser or other program, are maintained by the ASP.NET server, either actively handling a service call or waiting to receive one. When a service call comes in, it is routed to an idle instance from the pool, which runs the specified method and returns the result which is transmitted as a SOAP response (or using whatever protocol you've set up for your service). The service class then returns to its idle state. Your next call may be handled by a different instance of the service class ("your call is being transferred to the next available representative") than the one that handled your last call.
For almost any circumstance, this architecture is just fine. Service class instances can, as part of their running, read from and write to centralized data stores to which all other instances have access, so as long as either (a.) a service method doesn't need any specialized information to produce the correct answer, or (b.) the service can get any specialized information from this central data store, it doesn't matter which instance handles each call.
However, services also support session state. A client may be directed to a service, give it some information to remember without writing it anywhere centrally, and then have to call back to that same service instance to give it more information before a determinate result can be arrived at. To do this, the client requests a session identifier from the service; basically like asking a CSR in a call center for their direct extension. Some work is done while connected, then each side may go off and do other work without being connected, then the client will call back, provide the session identifier it was given, and its next service call will be handled by the instance that handled the last request. While a session identifier is outstanding, the service will remain idle in the pool, "remembering" any information it has been given, until either the client with that session identifier says it's done (closing the session), or the client hasn't called back in a given time (timing out).

C#, WCF, When to reuse a client side proxy

I'm writing an application which transfers files using WCF. The transfers are done in segments so that they can be resumed after any unforeseen interruption.
My question concerns the use of the client-side proxy: is it better to keep it open and reuse it to transfer each file segment, or should I reopen it each time I want to send something?
The reason to close a proxy as quickly as possible is the fact that you might be having a session in place which ties up system resources (netTcpBinding uses a transport-level session, wsHttpBinding can use security or reliability-based sessions).
But you're right - as long as a client proxy isn't in a faulted state, you can totally reuse it.
If you want to go one step further, and if you can share a common assembly with the service and data contracts between server and client, you could split up the client proxy creation into two steps:
create a ChannelFactory<IYourServiceContract> once and cache that - this is a very expensive and resource-intensive operation; since you need to make this a generic using your service contract (interface), you need to be able to share contracts between server and client
given that factory, you can create your channels using factory.CreateChannel() as needed - this operation is much less "heavy" and can be done quickly and over and over again
This is one possible optimization you could look into - given the scenario that you control both ends of the communication, and you can share the contract assembly between server and client.
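The two-step split might look roughly like this; IFileTransferService and the endpoint address are placeholders for your shared contract and actual address:

```csharp
using System.ServiceModel;

public static class ProxyFactory
{
    // Step 1: create the expensive ChannelFactory once and cache it.
    private static readonly ChannelFactory<IFileTransferService> _factory =
        new ChannelFactory<IFileTransferService>(
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://localhost:8000/transfer"));

    // Step 2: creating individual channels from the cached factory is cheap
    // and can be done over and over again, once per segment if need be.
    public static IFileTransferService CreateChannel() =>
        _factory.CreateChannel();
}
```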
You can reuse your WCF client proxy, and that will make your client application faster, as the proxy is initialized only once.
Creating a new proxy takes about 50-100 ms; if your system needs to scale well, that's quite a significant amount of time.
When reusing the proxy, you have to be careful of its state and of threading issues. Do not try to send data through a proxy that is already busy sending (or receiving) data. You'll have terrible sleepless nights.
One way of reusing it is having a [ThreadStatic] private field for the proxy, and testing its state and presence each time you need to send data. If a new thread was created, the thread-static field will be null and you'll need to create a proxy. Assuming you have a simple threading model, this keeps different threads from stepping on each other's toes, and you only have to worry about the faulted state of the proxy.
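A minimal sketch of that per-thread pattern. A stand-in FakeProxy class is used here so the idea is visible without WCF plumbing; a real proxy would expose a State property to check instead of the Faulted flag:

```csharp
using System;
using System.Threading;

// Stand-in for a WCF client proxy; all names here are invented.
public class FakeProxy
{
    public bool Faulted;                     // a real proxy exposes State
    public void Send(string data) { /* segment upload would go here */ }
}

public static class ProxyHolder
{
    // Each thread sees its own copy of this field.
    [ThreadStatic] private static FakeProxy _proxy;

    // Lazily create (or recreate, if faulted) the current thread's proxy.
    public static FakeProxy Get()
    {
        if (_proxy == null || _proxy.Faulted)
            _proxy = new FakeProxy();
        return _proxy;
    }
}
```

Within one thread, repeated Get() calls return the same instance; a different thread gets its own, so no proxy is ever shared across threads.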
