I'm currently working on a web service that is intended to be used by numerous different thin clients. So far I have been testing it with one website as the client.
My question concerns how the web service's classes behave. When someone logs onto a website that uses the web service, is the main class instantiated once for that user's session?
For example during debugging I have 1 client.
1) The service starts off with a variable initialized to "Hello World". The client asks for the string from the service, then sends it back.
2) The service sets an internal variable equal to the one sent by the client, then appends "2" to the end, so the variable is now "Hello World 2".
3) The client asks for the updated string and gets "Hello World 2".
4) Another user logs on. They ask for the string, expecting "Hello World", but get "Hello World 2". Now, when they send it back, they get "Hello World 2".
This is an undesirable result, and is what I'm trying to avoid.
How do I go about avoiding it?
If you want state to be reset for new clients, then you need to keep track of which clients have connected and serve them data appropriately. This might entail requiring a call to set up a session, so that you can assign and return a token to the client, which is then used for all subsequent calls.
An alternative is to write your service using WCF, and then use the PerSession InstanceContextMode, which will construct a new service object for every session. In this case, you still need to indicate which calls begin a session and which calls end a session. For more, see here.
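As a rough sketch (the contract and member names here are illustrative, not from the question), a PerSession WCF service could look like this. Note that a session-capable binding such as netTcpBinding or wsHttpBinding is required:

```csharp
using System.ServiceModel;

[ServiceContract(SessionMode = SessionMode.Required)]
public interface IGreetingService
{
    // IsInitiating = true marks this call as one that may begin a session.
    [OperationContract(IsInitiating = true)]
    string GetGreeting();

    [OperationContract]
    void SendGreeting(string value);

    // IsTerminating = true ends the session; the instance is then released.
    [OperationContract(IsInitiating = false, IsTerminating = true)]
    void EndSession();
}

// A new GreetingService object is constructed for each client session,
// so this field is per-client rather than shared across callers.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class GreetingService : IGreetingService
{
    private string _greeting = "Hello World";

    public string GetGreeting() => _greeting;

    public void SendGreeting(string value) => _greeting = value + " 2";

    public void EndSession() { }
}
```

With this instancing mode, the second user in the question's example would get a fresh instance, and therefore "Hello World" rather than "Hello World 2".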
A WebService in ASP.NET works much like a customer service call center. A "pool" of HttpApplications, each representing a connection to your webserver via a browser or other program, are maintained by the ASP.NET server, either actively handling a service call or waiting to receive one. When a service call comes in, it is routed to an idle instance from the pool, which runs the specified method and returns the result which is transmitted as a SOAP response (or using whatever protocol you've set up for your service). The service class then returns to its idle state. Your next call may be handled by a different instance of the service class ("your call is being transferred to the next available representative") than the one that handled your last call.
For almost any circumstance, this architecture is just fine. Service class instances can, as part of their running, read from and write to centralized data stores to which all other instances have access, so as long as either (a.) a service method doesn't need any specialized information to produce the correct answer, or (b.) the service can get any specialized information from this central data store, it doesn't matter which instance handles each call.
However, services also support session state. A client may be directed to a service, give it some information to remember without writing it anywhere centrally, and then have to call back to that same service instance to give it more information before a determinate result can be arrived at. To do this, the client requests a session identifier from the service; basically like asking a CSR in a call center for their direct extension. Some work is done while connected, then each side may go off and do other work without being connected, then the client will call back, provide the session identifier it was given, and its next service call will be handled by the instance that handled the last request. While a session identifier is outstanding, the service will remain idle in the pool, "remembering" any information it has been given, until either the client with that session identifier says it's done (closing the session), or the client hasn't called back in a given time (timing out).
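In classic ASP.NET (.asmx) services, that per-caller memory is opted into per method via session state; a minimal sketch (names are illustrative):

```csharp
using System.Web.Services;

public class CounterService : WebService
{
    // EnableSession = true gives this method access to the per-client
    // Session store, identified by the session cookie the client presents.
    [WebMethod(EnableSession = true)]
    public int HitCount()
    {
        int count = Session["hits"] is int n ? n + 1 : 1;
        Session["hits"] = count;
        return count; // each client sees its own independent counter
    }
}
```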
I have a public ASP.NET Core service with 5 instances, one on each Service Fabric cluster node. I also have a worker service with just one active instance (the primary replica, because this is a stateful service). I want to make a call from the worker service to a specific instance of the frontend.
I am using the following standard code to create a connection...
IFrontend c = ServiceProxy.Create<IFrontend>(new Uri("fabric:/MyApp/FrontendService"));
This works but will connect to one of the 5 ASP.NET Core services and it could be any of them. I want to connect to a specific one. Is there some specific format where you can provide the service instance identifier?
There is no straightforward solution for this. One thing I can think of is to have the ASP.NET Core service send a message to the backend registering that the WebSocket is present, and have that message include the node name the frontend service is running on (via ServiceContext.NodeContext.NodeName).
Then use a pub/sub mechanism to send a message (including the node name of the designated ASP.NET Core service instance you want to address) from the backend to all ASP.NET Core service instances, and let only the instance whose node name matches handle the message.
You could use this project for that
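A rough sketch of the node-name filter on the subscriber side (the message type and pub/sub wiring are assumptions, not from the original answer):

```csharp
// Hypothetical message published by the backend to all frontend instances.
public record TargetedMessage(string TargetNodeName, string Payload);

public class FrontendSubscriber
{
    private readonly string _myNodeName;

    // In a real Service Fabric service you would pass
    // ServiceContext.NodeContext.NodeName in here.
    public FrontendSubscriber(string myNodeName) => _myNodeName = myNodeName;

    // Every instance receives every message; only the matching node acts on it.
    public bool Handle(TargetedMessage message)
    {
        if (message.TargetNodeName != _myNodeName)
            return false; // not addressed to this instance, ignore

        // ... forward message.Payload to the locally held WebSocket ...
        return true;
    }
}
```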
References:
Call a specific instance of a service in Azure Service Fabric
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b7cc7df3-9872-4000-8cc6-c48cb47b0b3f/calling-all-stateless-service-instances-via-serviceproxy?forum=AzureServiceFabric
As PeterBons mentioned, there is no straightforward solution for this kind of problem. There are many catches to be aware of before you decide on any approach related to your initial plan. I can point out a few that may help you make a better decision:
There is no guarantee that the user is still connected to FE1 when the response comes back. Even though you keep a connection open, the connection might fail, and the user might reconnect to FE2; in that case your response will be directed to the wrong server.
The user might still be connected to the same node on reconnection, but the original service might have moved. It is common in Service Fabric for services to move between nodes in case of failures or load balancing. In this scenario the connection might drop and reconnect on the same node, but if you keep track of the partition, the partition might already be on another node, receiving a useless message.
The Worker's response might fail when sending the message to FE1, so you will need to handle retries in the Worker. The same can happen if the call from FE to the user fails; you will have to add retry logic there too, increasing the complexity.
Some approaches that might work:
Make the communication asynchronous by putting a message bus in the middle, so that neither service cares about the other's state. Every response sent from the Worker to FE will be asynchronous, and any failure can be handled by each side independently. You might want to use:
One message queue per partition, or
A single pub/sub topic for all partitions, with each instance handling only what is addressed to it.
Or, you can use a PaaS service that manages that for you, like Azure SignalR Service. In this case you only need a unique identifier for the client, which the worker keeps so it can send an answer back.
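With ASP.NET Core SignalR (which Azure SignalR Service plugs into), sending to a single client by its stored identifier looks roughly like this (hub and method names are illustrative):

```csharp
using Microsoft.AspNetCore.SignalR;
using System.Threading.Tasks;

public class NotificationHub : Hub { }

public class WorkerResponder
{
    private readonly IHubContext<NotificationHub> _hub;

    public WorkerResponder(IHubContext<NotificationHub> hub) => _hub = hub;

    // connectionId is the unique client identifier the worker stored
    // when the request originally came in.
    public Task SendAnswerAsync(string connectionId, string answer) =>
        _hub.Clients.Client(connectionId).SendAsync("ReceiveAnswer", answer);
}
```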
We're writing a WCF service which will have to handle external client requests. For each request we will:
Connect to another external service (not under our control) and validate a signature to ensure the identity of the caller.
Perform several operations in our environment.
Finally, if everything is OK, call another external service (again not under our control) and obtain a registry number to send to the client as a response.
From my point of view, the external service calls add too much unpredictability to allow the whole process to run in a single operation; there are lots of possible scenarios out of our reach that can end with a timeout on the final client.
IMHO we should write different operations to manage each part (generating a secure token to carry authentication during the 2nd and 3rd steps), so we can avoid a scenario where the whole process works fine but takes so long that the client stops waiting for the response, generating a timeout.
My opinion is not shared by other members of the team, who want to do everything in a single operation.
Is there a best practice for this type of scenario?
Would it be better to use different operations, or would it make no difference?
You can leverage a callback contract. Create a one-way operation and let the client invoke the service to kick off all the work to be done. Within the method, save the client's callback reference; once all the long-running work is done, check that the reference is not null, to make sure the client has not been closed. If it is still available, invoke the callback operation specified in the callback contract. The advantage of this approach is that the client does not keep waiting for the result; instead it is informed when the result is obtained and ready to be delivered. I refer you to this post.
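A minimal sketch of that shape (contract and member names are illustrative, and the registry number is a placeholder):

```csharp
using System.ServiceModel;

public interface IWorkCallback
{
    // One-way: the service fires the result back without blocking on the client.
    [OperationContract(IsOneWay = true)]
    void OnWorkCompleted(string registryNumber);
}

[ServiceContract(CallbackContract = typeof(IWorkCallback))]
public interface IWorkService
{
    // One-way: the client returns immediately after submitting the job.
    [OperationContract(IsOneWay = true)]
    void BeginWork(string request);
}

public class WorkService : IWorkService
{
    public void BeginWork(string request)
    {
        // Capture the client's callback channel for later use.
        var callback = OperationContext.Current.GetCallbackChannel<IWorkCallback>();

        // ... validate the signature, run our operations, obtain the number ...
        string registryNumber = "REG-12345"; // placeholder result

        callback?.OnWorkCompleted(registryNumber);
    }
}
```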
I have a Windows application (let's name it App), a web service project (name it WS) and a SQL Server database (DB); the technologies are all from Microsoft and .NET.
The roles are that whenever App needs to do an action, it calls WS and WS does the magic work with DB and then returns the result to App.
So far, so good, but I need something more than that. I need a third application; let's call it a Robot. This Robot monster should have the ability to find all alive clients (App instances) and, not kill, but call them at some specific times; the App(s) will then decide whether to perform an action on being called.
My knowledge falls short here, and that is why I want you guys to help me find the best solution for this server-calls-client-and-client-does-something thing.
I have some short and pragmatic solution ideas:
Each client application invokes a method, for instance YesIamAlive(), on the web service every x seconds/minutes. When the server gets this request, it is saved, so you are able to see which clients are alive. Any client which has not sent an alive request for the last x seconds/minutes is no longer considered alive. You could add another method which is also called routinely and which tells the client whether to perform an action.
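A sketch of the server-side bookkeeping for such a heartbeat (method and type names are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq;

public class AliveRegistry
{
    private readonly ConcurrentDictionary<string, DateTime> _lastSeen = new();
    private readonly TimeSpan _timeout;

    public AliveRegistry(TimeSpan timeout) => _timeout = timeout;

    // Called by the web service whenever a client invokes YesIamAlive().
    public void YesIamAlive(string clientId) =>
        _lastSeen[clientId] = DateTime.UtcNow;

    // A client counts as alive if it has pinged within the timeout window.
    public string[] AliveClients() =>
        _lastSeen.Where(kv => DateTime.UtcNow - kv.Value < _timeout)
                 .Select(kv => kv.Key)
                 .ToArray();
}
```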
You could use SignalR for WebSocket communication between your server and client. This example shows a chat server, which is not quite your scenario, but it shows the idea behind it:
http://braindrivendevelopment.com/2013/01/28/signalr-with-windows-azure-cloud-services/
I am quite sure that there are even more elegant solutions for your problem.
SignalR (GitHub) is an excellent framework for "pushing" to clients in near real-time. It works with both web and WinForms clients.
I can't deny the performance benefit of a duplex async call, but some things about it make me wary.
My concern is that given a client object instantiated, will WCF be able to tell which particular client service instance will receive the callback argument?
Can anyone tell me if this is a good idea? If not why not?
new DuplexChannelFactory<IServerWithCallback>(
    new ClientService(),
    new NetTcpBinding(),
    new EndpointAddress("net.tcp://localhost:1234/" + Guid.NewGuid()))
If the virtual path above is reserved, how can it be discarded? I want the client service lifetime to be fairly short, i.e. make a request, receive a response, and when done receiving, kill it. How bad is the performance penalty of making the client service lifetime short, as opposed to pooling it and keeping it alive longer?
The idea is to avoid timeout issues. When done receiving and sending, dispose ASAP. By convention, you can't pass the client services around; if you need info, create a new one, simple, just like EF/L2S etc.
From inside the WCF service itself, how do I kill the session with the client? I.e. I don't want the client ending the session; I know I can decorate my operation accordingly, but I want the service to terminate itself programmatically when certain conditions are met.
I can affix the port and forward accordingly to resolve any firewall issue, but what I'm worried about is if the client were to sit behind a load-balancer. How would the service know which particular server to call?
I think in the end Duplex services is simply another failed architecture from Microsoft. This is one of those things that looked really good on paper but just falls apart upon closer examination.
There are too many weaknesses:
1) Reliance on a session to establish the client listener on the server. This session information is stored in memory, hence the server itself cannot be load balanced. Or if it were load balanced, you would need to turn IP affinity on; but then if one of the servers is bombarded, you can't simply add another one and expect all these sessions to automagically migrate over to the new server.
2) For each client sitting behind a router/firewall/load balancer, a new endpoint with a specific port needs to be created; otherwise the router will not be able to properly route the callback messages to the appropriate client. An alternative is a router that allows custom programming to redirect a specific path to a particular server; again a tall order. Another way is for the client with the callback to host its own database and share data via the database. That might work in some situations where licensing fees are not an issue, but it introduces a lot of complexity, is onerous on the client, and mixes the application and service layers together (which might be acceptable in some exceptional situations, but not on top of the huge setup cost).
3) All this basically says that duplex is practically useless. If you need a callback, you will do well to set up a WCF host on the client end instead. It will be simpler and much more scalable, and there is less coupling between client and server.
The best duplex solution for scalable architecture is in the end not using one.
It will depend on how short-lived you need the clients to be and how long they will last. Pooling would not be an option if you specifically need a new client each time; but if the clients keep doing the same thing, why not have a pool of them waiting to be used, and if one faults, recreate that same client again?
In reality, in a callback scenario, when the service calls back to the client (really calling a function on the client) to pass information, the service is now the client and vice versa. You can have the service that's making the callback .Close() the connection, but it will stay open until the GC can dispose of it; in my experience that can take longer than expected. So in short, the client (the one making the call) should be responsible for shutting itself down or disconnecting; the service should only give back answers or take data from a client.
In duplex callbacks, the service calling back to the client gets the address of the client abstracted behind the DuplexChannelFactory. If the service can't call back to the client, I don't think there's much that can be done; you'd have to ensure that the port your clients use to call the service is open to receive callbacks, I would guess.
I have created a C# application that I want to split into server and client side. The client side should have only a UI, and the server side should manage logic and database.
But I'm not sure about something: my application should be usable by many users at the same time. If I move my application to a server, will WCF create another instance of the application for every user that logs in, or will there be only one instance of the application for all users?
If the second scenario is true, then how do I create separate application instances for every user that wants to use my service? I want to keep my application logic on the server and have users share the same database, but also make every single instance independent (so several users can use the WCF service with different data). Something like PHP does: same code, a new instance of the code for every user, shared database.
By default, WCF is stateless and instanceless. This basically means that any call may be issued by any client, without any clients or calls knowing about each other.
As long as you have kept that in mind while designing the service, you're good to go. What problems do you expect to occur, given how your service is built at this moment?
The server-side (handled by WCF) will usually not hold any state at all: all method calls would be self-contained and fairly atomic. For example, GetUsers, AddOrder etc. So there's no 'instance' of the app, and in fact the WCF service does not know that it's a particular app using it.
This is on purpose: if you wanted to write a web app, or simple query tool, it could use those same methods on the WCF service without having to be an 'app', or create an instance of anything on the server.
WCF can have objects with a long lifetime, that are stateful, a bit like remoting, but if you're following the pattern of 99.9% of other designs with WCF, you'll be using WCF as a web service.
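A sketch of the default, stateless shape this answer describes (contract and member names are illustrative): each call is self-contained, so it does not matter which instance handles it.

```csharp
using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Self-contained, atomic operations: everything needed to answer
    // the call is in the parameters or in the shared database.
    [OperationContract]
    void AddOrder(int userId, string product);

    [OperationContract]
    string[] GetOrders(int userId);
}

// PerCall is effectively stateless: a fresh service object per request,
// so no per-user data can accidentally leak between callers.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class OrderService : IOrderService
{
    public void AddOrder(int userId, string product)
    {
        // ... write to the shared database, keyed by userId ...
    }

    public string[] GetOrders(int userId)
    {
        // ... read this user's rows from the shared database ...
        return new string[0];
    }
}
```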
Edit: from the sound of your comments, you need to do some serious and potentially in-depth reading about client-server architectures and the use of WCF. Start from scratch with something small, then try to apply it to your current application.