WCF Service dependencies - c#

I have three WCF services, A, B and C. Since I wanted this to be SOA (Service-Oriented Architecture), here is how my setup works when I send a request from the client to the server.
All the services are self-hosted Windows services.
The client sends a request to Service A (the client has no clue about the other services B and C).
Service A then forwards that request to Service B and Service C.
Services B and C send their responses back to Service A, which sends the result back to the client.
The issue I'm facing: if I make any change in the code of Service B and rebuild and restart that service, I no longer get the response back, but when I restart all the remaining services as well it works fine.
In other words, my client doesn't get the response back unless I restart all the services (A, B and C), even though I only changed the code in one service and rebuilt it. I know it works if I restart all three services, but I want to know: is this a problem with my design, or is it something I have to deal with for self-hosted Windows services? All the services (A, B, C) are independent; none depends on another.
Has anyone seen this kind of thing happen in SOA? I would be glad if someone could guide me to an appropriate solution.

Replace WCF between the services with any sort of queue (one service publishes something, the other can read it when it is ready). It can be anything. It can be a simple table that you poll for new rows. It can be RabbitMQ, NServiceBus, etc., whatever works for you.
Define the messages you put into the queue: commands and events. Both are simple classes with properties, no logic there. Commands represent what the system is asked to do (RegisterUser, PlaceOrder, etc.), events represent what the system has done (UserRegistered, OrderApproved, PaymentReceived, etc.). Be explicit about actions. Don't do something like "I have changed all the properties of a user on the client, now I call SaveUser(user)". Your service is supposed to know how to change its objects; clients should only command what to do.
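For illustration, a command and an event could be nothing more than plain classes with properties; all the names below are made up:

    using System;

    // Command: what the system is asked to do.
    public class PlaceOrder
    {
        public Guid OrderId { get; set; }
        public string CustomerId { get; set; }
        public decimal Amount { get; set; }
    }

    // Event: what the system has done as a result.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public DateTime PlacedAtUtc { get; set; }
    }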
Never break your contract. It is easy, easier than it sounds: you can add things to your message contracts, but you cannot remove anything. In other words, you just keep your contracts backwards compatible.
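If the messages are exchanged as WCF data contracts, one hedged way to "add but never remove" is to mark newly added members as optional, so that older senders remain valid (PromoCode below is an invented example of a later addition):

    using System;
    using System.Runtime.Serialization;

    [DataContract]
    public class PlaceOrder
    {
        [DataMember]
        public Guid OrderId { get; set; }

        [DataMember]
        public string CustomerId { get; set; }

        // Added in a later version of the contract; optional, so messages
        // serialized by older clients still deserialize without it.
        [DataMember(IsRequired = false)]
        public string PromoCode { get; set; }
    }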
Now you have a much better design: services communicate only through messages in queues, messages are backward compatible. This means that you can stop any of the services at any time without impacting others: they will continue sending messages into queues, and when the stopped service comes back again it will catch up processing all the stuff from the queue.
Then, if you want, you can use the same approach for client interactions: if, instead of calling WCF, clients only put their commands into some sort of queue, then service upgrades or other downtime would not impact the user experience.
Example: if I use WCF to place an order or to put an item into a shopping cart, then if there is a problem or a service is down for maintenance I will not be able to do it. I would click a button and get a nasty error. More importantly, my order will not make it into the system.
In contrast, if there is a queue in the middle, I only put my command into the queue. Now even if my service is down at the moment, or is experiencing high load (and is therefore slow), my user experience is still the same and does not degrade. It is just that my command will be processed a bit later, but as a client I don't really care. And my order will not be lost in this scenario. The system becomes fault-tolerant and self-balancing.
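As a minimal sketch of the "simple table as a queue" option mentioned above, the client-facing code could just insert the serialized command and return immediately; the table and column names are assumptions, and a background worker in the service would poll the table and process new rows in order:

    using System;
    using System.Data.SqlClient;

    public static class CommandQueue
    {
        // Appends a serialized command to a queue table and returns immediately.
        public static void Enqueue(string connectionString, string commandType, string commandBody)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var insert = new SqlCommand(
                "INSERT INTO CommandQueue (CommandType, Body, EnqueuedAtUtc) VALUES (@type, @body, @now)",
                connection))
            {
                insert.Parameters.AddWithValue("@type", commandType);
                insert.Parameters.AddWithValue("@body", commandBody);
                insert.Parameters.AddWithValue("@now", DateTime.UtcNow);
                connection.Open();
                insert.ExecuteNonQuery();
            }
        }
    }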
There are all sorts of fantastic tricks you can do if you simply put a queue in the middle instead of experiencing problems with spatial and temporal coupling that comes with WCF :)
And what I described is just the beginning... :)

You may want to consider using a service bus such as NServiceBus to help you accomplish your functionality.
The first issue it will help you address is the decoupling of your services via publish/subscribe messaging pattern. Rather than invoking web services in one or the other service, publish events that notify the respective services when something has occurred. In your case this would look something like this:
Client invokes web service in Service A.
Service A publishes a message "Client Command Received" which Service B and C subscribe to.
Service B and C handle this event and then publish events of their own.
Service A subscribes to both events and replies to the client.
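A rough sketch of what that could look like with NServiceBus (version 6+ style API; the event names and properties are assumptions, and endpoint configuration is omitted):

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    // Event published by Service A when a client command arrives.
    public class ClientCommandReceived : IEvent
    {
        public Guid RequestId { get; set; }
    }

    // Event published by Service B when it has finished its part of the work.
    public class ServiceBCompleted : IEvent
    {
        public Guid RequestId { get; set; }
    }

    // In Service B (and similarly in Service C), subscribing is just implementing a handler.
    public class ClientCommandReceivedHandler : IHandleMessages<ClientCommandReceived>
    {
        public async Task Handle(ClientCommandReceived message, IMessageHandlerContext context)
        {
            // ... do Service B's part of the work here ...

            // Publish Service B's own event; Service A subscribes to it to assemble the reply.
            await context.Publish(new ServiceBCompleted { RequestId = message.RequestId });
        }
    }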
The first and immediate benefit of using something like NServiceBus is reliability. On top of that, you are able to easily version your messages without affecting your client or your respective services. NServiceBus has full WCF integration, so your client can continue to send messages to your service as before.
One of the things that makes your scenario interesting is that you can't guarantee when Services B and C send their responses back to you. Do you keep the connection to the client open until Service A has received their responses? Do you need both responses before you can send the client its response? What happens if one or both of the services crash? What if there is a time limit on how long you can wait before a response is received by Service A? All of these questions and more can be answered with a feature in NServiceBus called Sagas. Check it out.
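As a hedged sketch of how a saga could answer the "do I have both responses yet?" question, reusing the event names from the sketch above (ServiceCCompleted is assumed to be defined analogously to ServiceBCompleted):

    using System;
    using System.Threading.Tasks;
    using NServiceBus;

    public class RequestSagaData : ContainSagaData
    {
        public Guid RequestId { get; set; }
        public bool ServiceBDone { get; set; }
        public bool ServiceCDone { get; set; }
    }

    // Lives in Service A and correlates the two responses belonging to one client request.
    public class RequestSaga : Saga<RequestSagaData>,
        IAmStartedByMessages<ClientCommandReceived>,
        IHandleMessages<ServiceBCompleted>,
        IHandleMessages<ServiceCCompleted>
    {
        protected override void ConfigureHowToFindSaga(SagaPropertyMapper<RequestSagaData> mapper)
        {
            mapper.ConfigureMapping<ClientCommandReceived>(m => m.RequestId).ToSaga(s => s.RequestId);
            mapper.ConfigureMapping<ServiceBCompleted>(m => m.RequestId).ToSaga(s => s.RequestId);
            mapper.ConfigureMapping<ServiceCCompleted>(m => m.RequestId).ToSaga(s => s.RequestId);
        }

        public Task Handle(ClientCommandReceived message, IMessageHandlerContext context)
        {
            Data.RequestId = message.RequestId;
            return Task.CompletedTask;
        }

        public Task Handle(ServiceBCompleted message, IMessageHandlerContext context)
        {
            Data.ServiceBDone = true;
            return CompleteIfDone(context);
        }

        public Task Handle(ServiceCCompleted message, IMessageHandlerContext context)
        {
            Data.ServiceCDone = true;
            return CompleteIfDone(context);
        }

        private async Task CompleteIfDone(IMessageHandlerContext context)
        {
            if (Data.ServiceBDone && Data.ServiceCDone)
            {
                // Both responses are in: publish a final event (or reply to the client) and finish.
                await context.Publish(new RequestCompleted { RequestId = Data.RequestId });
                MarkAsComplete();
            }
        }
    }

    public class RequestCompleted : IEvent
    {
        public Guid RequestId { get; set; }
    }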
If using NServiceBus is not possible then things become more difficult. WCF doesn't support publish/subscribe out of the box, so you will have to bake your own. At a minimum I would recommend using publish/subscribe to decouple your services. How you manage state and temporal coupling in your services is another matter. Save yourself the trouble.
There are other frameworks out there, but if you want a developer-centric, cost-effective way to create a .NET-based solution then I recommend using NServiceBus.

Related

Connect to specific service instance using ServiceProxy.Create

I have a public ASP.NET Core service with 5 instances, one on each Service Fabric cluster node. I also have a worker service with just one active instance (the primary replica, since this is a stateful service). I want to make a call from the worker service to a specific instance of the frontend.
I am using the following standard code to create a connection...
IFrontend c = ServiceProxy.Create<IFrontend>(new Uri("fabric:/MyApp/FrontendService"));
This works but will connect to one of the 5 ASP.NET Core services and it could be any of them. I want to connect to a specific one. Is there some specific format where you can provide the service instance identifier?
There is no straight solution for this. One thing I can think of is to have the ASP.NET Core service send a message to the backend to register that the WebSocket is present, and have that message include the node name the frontend service instance is running on (via ServiceContext.NodeContext.NodeName).
Then have a pub/sub mechanism to send a message (including the node name of the designated ASP.NET Core service instance you want to address) from the backend to all ASP.NET Core service instances, and have each instance handle the message only if the node names match.
You could use this project for that
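A minimal sketch of the node-name filtering part, independent of which pub/sub mechanism you choose (all type and member names here are invented):

    using System.Threading.Tasks;

    // Message the backend sends to every front-end instance; only the instance
    // whose node name matches actually handles it.
    public class NotifyClientOnNode
    {
        public string TargetNodeName { get; set; }
        public string Payload { get; set; }
    }

    public class FrontendNotificationHandler
    {
        private readonly string _myNodeName;

        // In the ASP.NET Core service this would come from ServiceContext.NodeContext.NodeName.
        public FrontendNotificationHandler(string myNodeName)
        {
            _myNodeName = myNodeName;
        }

        public Task Handle(NotifyClientOnNode message)
        {
            // Ignore messages addressed to an instance on a different node.
            if (message.TargetNodeName != _myNodeName)
                return Task.CompletedTask;

            // ... push the payload to the locally connected WebSocket client here ...
            return Task.CompletedTask;
        }
    }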
References:
Call a specific instance of a service in Azure Service Fabric
https://social.msdn.microsoft.com/Forums/sqlserver/en-US/b7cc7df3-9872-4000-8cc6-c48cb47b0b3f/calling-all-stateless-service-instances-via-serviceproxy?forum=AzureServiceFabric
As PeterBons mentioned, there is no straight solution for this kind of problem. There are many catches to be aware of before you decide on an approach related to your initial plan; I can point out a few that may help you make a better decision:
There is no guarantee that the user is still connected to FE1 when the response comes back. Even though you keep a connection open, the connection might fail and the user might connect to FE2 when that happens, so your response will be delivered to the wrong server.
The user might still be connected to the same node on reconnection, but the original service might have moved; it is common in SF for services to move between nodes because of failures or load balancing. In this scenario the connection might drop and reconnect on the same node, but if you keep track of the partition, the partition might already be on another node, receiving a useless message.
The worker's response might fail when sending the message to FE1, so you will need to handle retries in the worker. The same can happen when the FE sends to the user, so you have to add retry logic there as well, increasing the complexity.
Some approaches that might work:
Make the communication asynchronous using a message bus in the middle, so that neither service cares about the other's state. Every response sent from the worker to the FE will be asynchronous, and any failure can be handled independently, in its own time. You might want to use:
One message queue per partition, or
A single pub/sub topic for all partitions, where each one handles only what is addressed to it.
Or, you can use a PaaS service that manages this for you, like Azure SignalR Service; in that case you only need a unique identifier for the client, which the worker keeps so it can send the answer back.

why is wcf duplex required?

WCF duplex performs a callback after a method has run on the server that then runs code on the client.
If I want to execute a method on the client from the server at the push of a button on the server, then I don't think WCF duplex is appropriate.
Why would I not just create a client and a server at each end of my 2 applications?
I was one of the people that commented on your previous question so I probably owe you an answer here :o)
You have posted rather a lot of code and I have not looked at it in detail. However, in general terms, there is a reason for using wsDualHttpBinding and duplex contracts in general instead of more of a peer-to-peer approach where you have services on both sides, as follows:
The duplex approach is appropriate where you have a clearly defined server that is running permanently. This provides the hub of the interaction. The idea is that clients are in some way more transient than the server. The clients can start up and shut down or move location and the server does not need to be aware of them in advance. When the client starts up, it is pre-configured to know where the server is, so it can "register" itself with the server.
In contrast, the server does not need to be preconfigured to know where the clients are. It starts up and can run independently of any clients. It just accepts "registrations" from all clients that have valid credentials whenever they come online, and can continue to run after the client goes offline. Also, if the client moves, it just re-registers itself with the server at its new location.
So the server is in some sense a more "important" part of the system. No client can participate in the communication without the server, but the server can operate independently of any client.
To do this with a WCF duplex service, you have to do some extra work yourself to implement the publish/subscribe behaviour. Fortunately, the Microsoft Patterns and Practices team have provided some guidance on how to do it:
http://msdn.microsoft.com/en-us/library/ms752254.aspx
This is fundamentally different from a genuine peer-to-peer approach where there is no well-defined hub (i.e. server) for the network and each node can come and go without affecting the overall functioning of the network.
WCF Duplex is used when you have a Publish/Subscribe setting (also known as the Observer Pattern). Let's say you have a service that subscribes for notifications of some sort (e.g. new email). Normally, you would need to check periodically for updates. Using WCF Duplex, the subscriber can be notified automatically by the publisher when there are updates.
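For reference, a duplex contract is just a pair of interfaces, and the server pushes notifications through the callback channel the client registered with; the names below are illustrative only:

    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IMailboxCallback))]
    public interface IMailboxService
    {
        [OperationContract]
        void Subscribe();   // the client calls this once to "register" with the server
    }

    public interface IMailboxCallback
    {
        [OperationContract(IsOneWay = true)]
        void OnNewEmail(string subject);   // the server pushes updates to the client
    }

    public class MailboxService : IMailboxService
    {
        public void Subscribe()
        {
            // Capture the caller's callback channel; the server can invoke it later
            // whenever new mail arrives, instead of the client polling for updates.
            IMailboxCallback callback =
                OperationContext.Current.GetCallbackChannel<IMailboxCallback>();
            // ... store the callback and call callback.OnNewEmail(...) when there is an update ...
        }
    }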

What is the simplest way to do simple distributed communication in .NET?

So basically I am thinking about attempting load testing on my ASP.NET application using various features all at once. There are a lot of dependencies and AJAX requests being performed in this application, so it seems like a simple replay of captured HTTP requests will not suffice; and due to other features, like picking out random operations and performing then verifying results across several machines, simple load testing software will not suffice either.
Also, there is no spending budget for this project, so commercial implementations cannot be used. I'm debating trying MSMQ (which I have never used before) to handle communication between clients, but if that is really complicated to set up then I would either use a database table as a queue or a simple TCP server with each test machine as a client.
Features I want: immediate failure (if one client crashes, all clients should stop), each test run starting with a brand-new scenario with no prior messages, and the ability to publish start and stop events. Also, it would be nice if I didn't have to worry about state management (leaning towards the TCP server over the database for this) or concurrency.
It doesn't sound like MSMQ is what you need. It is a message-passing asynchronous communication method, akin to email. You can send a message to another queue that no one is even listening to (i.e. the application isn't running). It seems to me you want a more "online" communication model.
How about creating agents (client applications that sit on many machines and create the load) that expose a WCF service, where a controller program can connect to all of them and instruct the agents what to do? It can be a duplex contract, so that the agents can send the controller notifications. When one of them sends an error notification, the controller can instruct all the other agents to shut down. Also, I'd go for a Net.TCP binding rather than an HTTP binding.
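A rough sketch of that controller/agent wiring over Net.TCP with a duplex contract (all names and addresses are made up):

    using System.ServiceModel;

    [ServiceContract(CallbackContract = typeof(IControllerCallback))]
    public interface IAgent
    {
        [OperationContract(IsOneWay = true)]
        void StartScenario(string scenarioName);

        [OperationContract(IsOneWay = true)]
        void StopAll();
    }

    public interface IControllerCallback
    {
        [OperationContract(IsOneWay = true)]
        void ReportFailure(string agentName, string error);
    }

    public class Controller : IControllerCallback
    {
        public void ReportFailure(string agentName, string error)
        {
            // On the first failure, instruct every agent to stop (the immediate-failure requirement).
        }

        public IAgent ConnectToAgent(string address)
        {
            var factory = new DuplexChannelFactory<IAgent>(
                new InstanceContext(this),          // the controller receives the callbacks
                new NetTcpBinding(),
                new EndpointAddress(address));      // e.g. "net.tcp://agent-machine:9000/agent"
            return factory.CreateChannel();
        }
    }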

Connecting with external systems | How to design the system

I have a question on how I should configure my system which interfaces with external systems.
Here are the various parts, which I was thinking of putting together:
Option 1
I have 3 external services which bring in 3 different types of messages to the system.
I have a WCF service which listens to all these external services and converts the messages into a common format.
I have a WPF UI which is connected to the WCF service using a duplex TCP binding, and messages get updated on the UI.
The UI also sends any outgoing messages to the service, which in turn sends them to the external services.
My question is, do you think this is a scalable, maintainable, cost effective way to architect the solution? Do you see any specific problems that could come in if I deploy this in an intranet scenario?
The other approach I was considering was the following:
Option 2
Have 3 Windows services which connect to the external services.
Each service churns the messages into a canonical format and puts them on a message bus under a specific topic.
Have a WCF service which listens to the topics on the message bus for new messages.
Use a duplex binding to update the client with any incoming messages.
The client sends outgoing messages to the WCF service, which in turn drops them onto a specific topic listened to by the Windows services, which send them to the external systems.
I'm a bit confused here; could you please help me out with which one would be the better approach, and if you could, please point me to any links which discuss these scenarios?
Volume of data exchange is 400 messages every 5 minutes, divided across the external systems.
I'm sure a lot of you have faced this situation, so if you have a better approach please let me know.
Thanks,
-Mike
I certainly think your option 2 is correct when dealing with 3 similar but different message formats. This implements two integration patterns, the adapter and the canonical message.
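As a sketch of the adapter and canonical message patterns in this setup (all type names are invented; each Windows service hosts one adapter and converts to the canonical shape before dropping the message onto the bus):

    using System;

    // The single internal format every inbound message is converted to.
    public class CanonicalMessage
    {
        public string Source { get; set; }        // which external system it came from
        public string MessageType { get; set; }
        public string Body { get; set; }
        public DateTime ReceivedAtUtc { get; set; }
    }

    // Stand-in for whatever external system A actually sends.
    public class SystemAMessage
    {
        public string Kind { get; set; }
        public string RawPayload { get; set; }
    }

    // One adapter per external system.
    public interface IExternalSystemAdapter<TExternal>
    {
        CanonicalMessage ToCanonical(TExternal externalMessage);
    }

    public class SystemAAdapter : IExternalSystemAdapter<SystemAMessage>
    {
        public CanonicalMessage ToCanonical(SystemAMessage externalMessage)
        {
            return new CanonicalMessage
            {
                Source = "SystemA",
                MessageType = externalMessage.Kind,
                Body = externalMessage.RawPayload,
                ReceivedAtUtc = DateTime.UtcNow
            };
        }
    }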
However, I think using the same "adapter" services as conduits for messages travelling in the opposite direction would be a mistake. It would be better to abstract the response channel out of the request processing pipeline.
You could implement a distributor (or router) pattern to handle the routing of response messages back to the request source.

Scaling out WCF, how to deal with callbacks?

Suppose I want to scale out (add more boxes) some WCF service. This looks pretty easy: set up a load balancer that calls the WCF services on multiple boxes using, for example, a round-robin algorithm.
However, how do you deal with the situation where a WCF service has a callback contract? When a client connects to some particular box, it only receives events raised by that computer's WCF service instance. I want the client to receive events that were raised by any WCF service instance in the group (cluster).
What is the best way to make a WCF service know about events raised by other WCF service instances?
Some ideas: multicast, broadcast, WCF NetPeerTcpBinding, a single server that subscribes to all WCF services in the cluster (acting as an event aggregator).
UPDATE: I have managed to create a test system using NetPeerTcpBinding as a mechanism to share events across servers. I haven't run a benchmark yet, but I feel that WCF P2P is too heavy for this task, so I'm going to implement a UDP-broadcast-based event sharing system.
I would implement this by setting up an MSMQ queue that each server can subscribe to, and when an event occurs that the other servers need to know about, the service can publish it.
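Under the hood, pub/sub over MSMQ usually comes down to the publisher writing a copy of the event to each subscriber's queue. A minimal hand-rolled sketch with System.Messaging (the queue paths and event type are assumptions):

    using System.Messaging;

    public class ServiceEvent
    {
        public string Name { get; set; }
        public string Payload { get; set; }
    }

    public static class EventDistribution
    {
        // Publisher: write the event to each subscribing server's queue.
        public static void Publish(ServiceEvent evt, string[] subscriberQueuePaths)
        {
            foreach (var path in subscriberQueuePaths)   // e.g. @"FormatName:DIRECT=OS:server2\private$\events"
            {
                using (var queue = new MessageQueue(path))
                {
                    queue.Formatter = new XmlMessageFormatter(new[] { typeof(ServiceEvent) });
                    queue.Send(evt);
                }
            }
        }

        // Subscriber: each server reads its own queue and raises the event locally.
        public static ServiceEvent ReceiveNext(string myQueuePath)
        {
            using (var queue = new MessageQueue(myQueuePath))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(ServiceEvent) });
                using (var message = queue.Receive())
                {
                    return (ServiceEvent)message.Body;
                }
            }
        }
    }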
I use a library called NServiceBus to make this entire process simple. NServiceBus is a full-featured library that uses MSMQ (among other transports) to create pub/sub messaging buses, which would exactly solve your problem. It is easy to use and has a fluent interface for configuration, subscription, and publishing.
I will come back and edit this post later with an example, but the NServiceBus website has plenty of documentation to get you started until then.
Have you considered messaging? Sounds ideal.
