Order of messages in SignalR - how to ensure it - C#

I'm using SignalR in an application that sends a lot of messages in a short period of time.
Let's say I have client A and client B.
Client A just sends messages and client B just listens for messages.
Client A sends the following messages in the following order: A->B->C->D
What I'm seeing is that client B sometimes receives the messages in a different order, for example: B->A->C->D
It is important to maintain the same order in which I sent the messages.
I've looked online and found people saying I should use async/await in the hub method that handles those messages.
public async Task hubMethod(msgObject msg)
{
    await Clients.All.message(msg);
}
I'm not sure how that helps, since each time I make a call from client A, SignalR creates a new instance of the hub.
The only thing the await does is let SignalR finish everything it can do on the server to send the message to the other client before it notifies client A.
So my question is this - is there a SignalR or ASP.NET mechanism that makes sure the messages arrive at the other client in the correct order, or do I need to write my own mechanism (server- or client-side) that reorders the messages if they are out of order? And if so, is there a library that already does this?

You need to write your own mechanism. SignalR on client B has no way to know in which order the messages were sent by client A, because many things (network delay, for example) can hold up a specific message; the only order SignalR can work with is the order in which the messages arrive.
If you really need to know the original order of the messages, you could put a counter inside each message and let client B sort them out. However, I suggest you try another approach if you can, because guaranteeing the order of delivery is not an easy task.
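For illustration, here is a minimal sketch of the counter idea. The SequencedMessage and MessageReorderer names are hypothetical; the only assumption is that client A stamps every message with an incrementing sequence number before calling the hub, and client B feeds every received message through the reorderer inside its SignalR callback.

using System.Collections.Generic;

public class SequencedMessage
{
    public long SequenceNumber { get; set; }  // stamped by client A, starting at 1
    public string Payload { get; set; }
}

public class MessageReorderer
{
    private readonly SortedDictionary<long, SequencedMessage> _buffer =
        new SortedDictionary<long, SequencedMessage>();
    private long _nextExpected = 1;

    // Call this from the SignalR "message" callback on client B.
    // Returns the messages that can now be handled, in their original order.
    public IReadOnlyList<SequencedMessage> Accept(SequencedMessage msg)
    {
        var released = new List<SequencedMessage>();
        _buffer[msg.SequenceNumber] = msg;

        // Release everything that is contiguous with what was already handled.
        while (_buffer.TryGetValue(_nextExpected, out var ready))
        {
            _buffer.Remove(_nextExpected);
            _nextExpected++;
            released.Add(ready);
        }
        return released;
    }
}

Anything that arrives early simply sits in the buffer until the gap is filled; you would also want a timeout or an upper bound on the buffer so a lost message can't stall client B forever.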

Related

NServiceBus sending message to one endpoint and waiting for reply in another

I'm using NServiceBus with RabbitMQ in my project. I have two services that don't know about each other and don't share anything. Service1 publishes request messages to endpoint1 (queue1), and service2 listens on endpoint1 and publishes responses to endpoint2 (queue2). I have two questions:
How can service1 handle responses from service2 if service1 doesn't know the response message type but only expects some particular fields in the response message?
I want to create an async API method that sends a request to endpoint1 and waits for the response on endpoint2. Is that possible at all? Also, how can I ensure that the reply corresponds to the request?
I expect something like:
public async Task<object> SendRequest(string str)
{
    var request = new MyRequest(str);
    await endPoint1.Publish(request);
    var reply = await endPoint2.WaitingReply();
    return reply;
}
I will appreciate any help.
Whenever two things communicate, there is always a contract. When functions call each other, the contract is the set of parameters required to call that function. With messaging, the message itself is the contract. The coupling is to the message, not to the sender or receiver.
I'm not really sure what you're trying to achieve. You mention an async API, plus endpoint1 and endpoint2.
First of all, there's asynchronous execution and asynchronous communication. The async part in your example code is asynchronous execution of two methods that have the word await in front of them. When we talk about sending messages, that's asynchronous communication. A message is put on the queue and then the code moves on and never looks back at the message. Even when you use the request/reply pattern, no code is actually waiting for a message.
You can wait for a message by blocking the thread, but I highly recommend you avoid that and not use the NServiceBus callback feature. If you think you have to, think again. If you still think so, read the red remarks on that page. If they can't convince you, contact Particular Software to have them explain another time why not. ;-)
It could be that you need a reply message for whatever reason. If you build a website using SignalR (for example) and you want to inform the user on the website when a message has come back and some work has completed, you can wait for a reply message. The result is that the website itself becomes an endpoint.
So if the website is EndpointA and it sends a message to EndpointB, it is possible to reply to that message. EndpointA would then also need a message handler for that reply message. If EndpointB first needs to send a message to EndpointC, which in turn responds to EndpointB, and only then does EndpointB reply back to EndpointA, NServiceBus can't easily help. Not because it's impossible, but because you probably need another solution; EndpointA should probably not be waiting for that many endpoints to reply, and too many things could go "wrong" and take too much time.
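For illustration, here's a rough sketch of that plain request/reply shape, assuming the NServiceBus 6+ handler API; MyRequest and MyReply are hypothetical message types, and a request like this would normally be Send()'d to EndpointB rather than Publish()'d.

using System.Threading.Tasks;
using NServiceBus;

public class MyRequest : ICommand { public string Text { get; set; } }
public class MyReply : IMessage { public string Result { get; set; } }

// EndpointB: handle the request and reply to whoever sent it.
public class MyRequestHandler : IHandleMessages<MyRequest>
{
    public Task Handle(MyRequest message, IMessageHandlerContext context)
    {
        // ... do the actual work here ...
        return context.Reply(new MyReply { Result = "done" });
    }
}

// EndpointA (e.g. the website): handle the reply when it arrives.
public class MyReplyHandler : IHandleMessages<MyReply>
{
    public Task Handle(MyReply message, IMessageHandlerContext context)
    {
        // e.g. push the result to the browser via SignalR from here
        return Task.CompletedTask;
    }
}

NServiceBus carries correlation information in message headers when you use Reply, so matching a reply to its request in your own code usually means including a business identifier in the messages and checking it in the reply handler.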
If you're interested in seeing how replies work in combination with SignalR and the like, you can check out a demo I built for a presentation that covers exactly that.

How to send updates from server to clients?

I am building a C#/WPF project.
Its architecture is this:
A console application that will run on a virtual machine (or my home computer) and will be the server side.
A WPF application that will be the client app.
Now my problem is this - I want the server to be able to send changes to the clients. If, for example, I have a change for client ABC, I want the server to know how to call a service on that client's computer.
The problem is, that I don't know how the server will call the clients.
A small example in case I didn't explain it well:
The server is on computer 1, and there are two clients, on computers 2 and 3.
Client 2 has a Toyota car and client 3 has a BMW car.
The server on computer 1 wants to tell client 2 that it has a new car, an Avenger.
How do I keep track and call services on the clients?
I thought of saving their IP addresses (from calling ipconfig at the command prompt) in the DB - but isn't that dependent on the Wi-Fi/network they are connected to?
Thanks for any help!
You could try implementing SignalR. It is a great library that uses WebSockets to push data to clients.
Edit:
SignalR can help you solve your problem by allowing you to set up Hubs on your console app (server) that the WPF applications (clients) can connect to. When the clients start up, you register them with a specified Hub. When something changes on the server, you can push the change from the server Hub to the client. The client receives the information from the server and you can handle it however you see fit.
Rough mockup of some code:
namespace Server {
    public class YourHub : Hub {
        public void SomeHubMethod(string userName) {
            // clientMethodToCall is a method in the WPF application that
            // will be called. The client needs to be registered with the hub first.
            Clients.User(userName).clientMethodToCall("This is a test.");
            // One issue you may face is mapping client connections.
            // There are a couple of different ways/methodologies to do this.
            // Just figure out what will work best for you.
        }
    }
}
namespace Client {
    public class HubService {
        private HubConnection _hubConnection;

        public IHubProxy CreateHubProxy() {
            _hubConnection = new HubConnection("http://serverAddress:serverPort/");
            IHubProxy yourHubProxy = _hubConnection.CreateHubProxy("YourHub");
            return yourHubProxy;
        }

        // Start() lives on the connection, not the proxy, so expose it here.
        public Task StartAsync() {
            return _hubConnection.Start();
        }
    }
}
Then in your WPF window:
var hubService = new HubService();
var yourHubProxy = hubService.CreateHubProxy();
// Register client-side handlers before starting the connection so no early calls are missed.
yourHubProxy.On("clientMethodToCall", () => DoSomethingWithServerData());
hubService.StartAsync().Wait();
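On the "mapping client connections" comment in the mockup above, one hypothetical approach (assuming SignalR 2 self-hosted in the console app) is to have each client register its user name after connecting, keep a connection-id lookup in the hub, and target Clients.Client(connectionId) instead of Clients.User:

using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class YourHub : Hub {
    // userName -> connection id. Static because SignalR creates a new hub instance per call.
    private static readonly ConcurrentDictionary<string, string> Connections =
        new ConcurrentDictionary<string, string>();

    // Clients call this once, right after the connection has started.
    public void Register(string userName) {
        Connections[userName] = Context.ConnectionId;
    }

    public void SomeHubMethod(string userName) {
        string connectionId;
        if (Connections.TryGetValue(userName, out connectionId)) {
            Clients.Client(connectionId).clientMethodToCall("This is a test.");
        }
    }

    public override Task OnDisconnected(bool stopCalled) {
        // Drop any mapping that points at the connection that just went away.
        foreach (var stale in Connections.Where(c => c.Value == Context.ConnectionId).ToList()) {
            string removed;
            Connections.TryRemove(stale.Key, out removed);
        }
        return base.OnDisconnected(stopCalled);
    }
}

From the WPF side the registration would just be a proxy call such as yourHubProxy.Invoke("Register", "someUserName") after the connection has started.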
You need to create some kind of subscription model in which the clients subscribe to the server, i.e. a Publish-Subscribe channel (see http://www.enterpriseintegrationpatterns.com/patterns/messaging/PublishSubscribeChannel.html). The basic architecture is this:
Client sends a request to the messaging channel to register itself as a subscriber to a certain kind of message/event/etc.
Server sends messages to the channel to be delivered to subscribers to that message.
There are many ways to handle this. You could use some of the Azure services (like Event Hubs or Topics) if you don't want to reinvent the wheel here. You could also have your server application track all of these things: updates to IP addresses, updates to subscription interest, making sure that messages don't get sent more than once, and taking care of message durability (making sure messages get delivered even if the client is offline when the message gets created).
In general, whatever solution you choose is plagued with a common problem - clients hide behind firewalls and have dynamic IP addresses. This makes it difficult (I've heard of technologies claiming to overcome this but haven't seen any in action) for a server to push to a client.
In reality, the client talks and the server listens and responds. However, you can use this approach to simulate a push by:
1. polling (the client periodically asks for information)
2. long polling (the client asks for information and the server holds onto the request until information arrives or a timeout occurs)
3. sockets (the client requests a server connection that is then used for bi-directional communication for a period of time)
Knowing those terms, your next choice is to write your own solution or use a third-party service (Azure, Amazon, other) to deliver messages for you. I personally like long polling because it is easy to implement. In my application, I have the following setup:
A Web API server on Azure with an endpoint that listens for message requests
A simple loop inside the server code that checks the database for new messages every 100ms.
A client that calls the API, handling the response.
As mentioned, there are many ways to do this. In your particular case, one way would be as follows (a rough sketch of the long-polling endpoint appears after these steps):
1. Client A calls the server API to listen for a message.
2. The server holds onto the call, waiting for a new message entry in the database.
3. Client B calls the server API to post a new message.
4. The server saves the message to the database.
5. The server instance from step 2 sees the new message.
6. The server returns the message to client A.
Also, the message doesn't have to be stored in a database - it just depends on your needs.
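Here is a hedged sketch of what that long-polling endpoint could look like, assuming ASP.NET Web API 2; IMessageStore and its GetNextAsync method are hypothetical stand-ins for whatever database access you use.

using System;
using System.Net;
using System.Threading.Tasks;
using System.Web.Http;

// Hypothetical abstraction over the table of pending messages.
public interface IMessageStore
{
    Task<object> GetNextAsync(string clientId, long lastSeenId);
}

public class MessagesController : ApiController
{
    private readonly IMessageStore _store;

    public MessagesController(IMessageStore store)
    {
        _store = store;
    }

    // GET /api/messages?clientId=...&lastSeenId=...
    // The call is held open until a new message appears or the timeout elapses.
    public async Task<IHttpActionResult> Get(string clientId, long lastSeenId)
    {
        var deadline = DateTime.UtcNow.AddSeconds(30);
        while (DateTime.UtcNow < deadline)
        {
            var message = await _store.GetNextAsync(clientId, lastSeenId);
            if (message != null)
                return Ok(message);

            await Task.Delay(100); // check for new messages roughly every 100 ms
        }
        return StatusCode(HttpStatusCode.NoContent); // nothing new; the client simply calls again
    }
}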
Sounds like you want to track users à la https://www.simple-talk.com/dotnet/asp.net/tracking-online-users-with-signalr/, but in a desktop app, in the sense of http://www.codeproject.com/Articles/804770/Implementing-SignalR-in-Desktop-Applications or damienbod.wordpress.com/2013/11/20/signalr-a-complete-wpf-client-using-mvvm/.

Quickfix C# initiator implementation questions

I am in the process of implementing a QuickFix initiator service in C# which needs to do the following:
Listen to incoming QuoteRequest messages and save them to a local database/queue.
Our users will have the ability to hit the Bids on these quote requests. These selections will be saved in a local queue.
The service will need to read that queue and send Quote messages back to the sender.
Listen to QuoteResponse, BusinessReject and QuoteStatus messages from the sender and store them on our end.
I'm planning to have two threads in my service.
Thread 1: This will be used to listen to incoming QuoteRequest, QuoteResponse, BusinessReject and QuoteStatus messages.
Outgoing ExecutionReport messages will be sent from the OnMessage event handler while cracking the QuoteResponse message.
Those messages will get stored in our system and published to our sites/queues etc.
Thread 2: This will listen to another local queue and send out Quote (bid) messages to the acceptor. Quotes will be sent out using Session.SendToTarget.
Is there a way to configure two initiator instances, one for each thread? Or do I create one initiator and add two sessions?
Would it work if both initiators used the same socket server and port? Also, if a message is not cracked by one thread, would it be available to the other thread?
I couldn't find any example of a multithreaded approach to handling both incoming and outgoing messages.
I'd appreciate any input/recommendations on the correct approach to implementation.
This is only one connection and only one session, so there should be only one initiator.
You can set up different worker threads, but your various OnMessage() callbacks should be the common entry point. They can dispatch their received messages to your threads (you could have them push received messages into a queue or something for your threads to consume). Your threads can do what they need to do and then call Session.SendToTarget as appropriate.
Above all else, try not to put any expensive logic in the QF callbacks; put it in the threads. Other than that, you can do what you want.
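As a rough sketch (assuming QuickFix/n and FIX 4.4 typed messages; the queue layout and message types here are illustrative, not prescriptive), the OnMessage callbacks would only enqueue, and your worker threads would consume the queues:

using System.Collections.Concurrent;

public class QuoteApp : QuickFix.MessageCracker, QuickFix.IApplication
{
    // Thread 1 consumes this: every cracked inbound message ends up here.
    public BlockingCollection<QuickFix.Message> Inbound { get; } =
        new BlockingCollection<QuickFix.Message>();

    public void OnMessage(QuickFix.FIX44.QuoteRequest message, QuickFix.SessionID sessionId)
    {
        Inbound.Add(message); // keep the callback cheap; store/publish on the worker thread
    }

    public void FromApp(QuickFix.Message message, QuickFix.SessionID sessionId)
    {
        Crack(message, sessionId); // routes to the typed OnMessage overloads above
    }

    // Thread 2 runs this: consume the local bid queue and send Quote messages out.
    public void SendLoop(BlockingCollection<QuickFix.FIX44.Quote> outbound, QuickFix.SessionID sessionId)
    {
        foreach (var quote in outbound.GetConsumingEnumerable())
        {
            QuickFix.Session.SendToTarget(quote, sessionId);
        }
    }

    // Remaining IApplication members, trimmed for brevity.
    public void ToApp(QuickFix.Message message, QuickFix.SessionID sessionId) { }
    public void FromAdmin(QuickFix.Message message, QuickFix.SessionID sessionId) { }
    public void ToAdmin(QuickFix.Message message, QuickFix.SessionID sessionId) { }
    public void OnCreate(QuickFix.SessionID sessionId) { }
    public void OnLogon(QuickFix.SessionID sessionId) { }
    public void OnLogout(QuickFix.SessionID sessionId) { }
}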

Suggestions for developing a TCP/IP based message client

I've got a server-side protocol that controls a telephony system. I've already implemented a client library that communicates with it, which is in production now; however, there are some problems with the system I have at the moment, so I am considering rewriting it.
My client library is currently written in Java, but I am thinking of rewriting it in both C# and Java to allow different clients to have access to the same back end.
The messages start with a keyword, have a number of bytes of metadata, and then some data. The messages are always terminated by an end-of-message character.
Communication between the client and the server is duplex, usually taking the form of a request from the client which provokes several responses from the server, but there can also be notifications.
The messages are marked as being one of:
C: Command
P: Pending (server is still handling the request)
D: Data (data sent in response to a command)
R: Response
B: Busy (server is too busy to respond at the moment)
N: Notification
My current architecture has each message being parsed and a thread spawned to handle it; however, I'm finding that some of the notifications are processed out of order, which is causing me trouble, as they have to be handled in the same order they arrive.
The duplex exchanges tend to take the following form:
Client -> Server: Command
Server -> Client: Pending (Optional)
Server -> Client: Data (optional)
Server -> Client: Response (2nd entry in message data denotes whether this is an error or not)
I've been using the protocol for over a year and I've never seen a Busy message, but that doesn't mean they don't happen.
The server can also send notifications to the client, and there are a few Response messages that are auto-triggered by events on the server, so they are sent without a corresponding Command being issued.
Some Notification messages will arrive as part of a sequence of related messages, for example:
NotificationName M00001
NotificationName M00001
NotificationName M00000
The string M0000X indicates either that there is more data to come or that this is the end of the sequence.
At present the TCP client is fairly dumb: it just spawns a thread that raises an event on a subscriber to signal that a message has been received. The event is specific to the message keyword and the type of message (so Data, Response and Notification messages are handled separately). This works fairly effectively for Data and Response messages, but falls over with Notification messages, as they seem to arrive in rapid sequence and a race condition sometimes causes the end-of-sequence message to be processed before the ones that carry the data, leading to lost message data.
Given this really badly written description of how the system works, how would you go about writing the client-side transport code?
The metadata does not include a message number, and I have no control over the underlying protocol, as it's provided by a vendor.
The requirement that messages must be processed in the order in which they're received almost forces a producer/consumer design, where the listener gets requests from the client, parses them, and then places the parsed request into a queue. A separate thread (the consumer) takes each message from the queue in order, processes it, and sends a response to the client.
Alternately, the consumer could put the result into a queue so that another thread (perhaps the listener thread?) can send the result to the client. In that case you'd have two producer/consumer relationships:
Listener -> event queue -> processing thread -> output queue -> output thread
In .NET, this kind of thing is pretty easy to implement using BlockingCollection to handle the queues. I don't know if there is something similar in Java.
The possibility of a multi-message request complicates things a little bit, as it seems like the listener will have to buffer messages until the last part of the request comes in before placing the entire thing into the queue.
To me, the beauty of the producer/consumer design is that it forces a hard separation between different parts of the program, making each much easier to debug and minimizing the possibility of shared state causing problems. The only slightly complicated part here is that you'll have to include the connection (socket or whatever) as part of the message that gets shared in the queues so that the output thread knows where to send the response.
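A minimal sketch of that listener -> queue -> processing-thread shape in C# (ParsedMessage is a hypothetical type produced by your protocol parser; in Java an ArrayBlockingQueue or LinkedBlockingQueue would play the same role as BlockingCollection):

using System;
using System.Collections.Concurrent;

// Hypothetical output of the protocol parser.
public class ParsedMessage
{
    public string Keyword { get; set; }
    public char Type { get; set; }      // C, P, D, R, B or N
    public byte[] Payload { get; set; }
}

public class MessagePipeline
{
    private readonly BlockingCollection<ParsedMessage> _queue = new BlockingCollection<ParsedMessage>();

    // Producer side: called by the listener thread as soon as a complete message is parsed.
    public void Enqueue(ParsedMessage message) => _queue.Add(message);

    // Consumer side: run once on a dedicated thread. Messages are handled strictly
    // in the order they were enqueued, which preserves arrival order.
    public void ConsumeLoop(Action<ParsedMessage> handle)
    {
        foreach (var message in _queue.GetConsumingEnumerable())
        {
            handle(message);
        }
    }

    // Call when the connection closes so ConsumeLoop can drain and exit.
    public void Complete() => _queue.CompleteAdding();
}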
It's not clear to me if you have to process all messages in the order they're received or if you just need to process messages for any particular client in the proper order. For example, if you have:
Client 1 message A
Client 1 message B
Client 2 message A
Is it okay to process the first message from Client 2 before you process the second message from Client 1? If so, then you can increase throughput by using what is logically multiple queues--one per client. Your "consumer" then becomes multiple threads. You just have to make sure that only one message per client is being processed at any time.
I would have one thread per client which does the parsing and processing. That way the processing would be in the order it is sent/arrives.
As you have stated, the tasks cannot be performed in parallel safely. Performing the parsing and processing in different threads is likely to add as much overhead as you might save.
If your processing is relatively simple and doesn't depend on external systems, a single thread should be able to handle 1K to 20K messages per second.
Are there any other issues you would want to fix?
I can only make a recommendation for a Java-based solution.
I would use some already mature transport framework. By "some" I mean the only one I have worked with until now -- Apache MINA. However, it works and it's very flexible.
Regarding processing messages out of order: for messages which must be processed in the order they were received, you could build queues and put such messages into them.
To limit the number of queues, you could instantiate, say, 4 queues and route each incoming message to a particular queue depending on the last 2 bits (indices 0-3) of the hash of the ordering part of the message (for example, the client_id contained in the message).
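The routing idea, sketched in C# for consistency with the rest of this page (the answer itself is Java/MINA-oriented); the key point is only that the same ordering key always lands on the same queue, and therefore the same worker:

using System.Collections.Concurrent;

public class HashRouter
{
    private readonly BlockingCollection<string>[] _queues;

    public HashRouter(int queueCount)
    {
        _queues = new BlockingCollection<string>[queueCount];
        for (int i = 0; i < queueCount; i++)
        {
            _queues[i] = new BlockingCollection<string>();
        }
    }

    // orderingKey is whatever the messages must be ordered by, e.g. the client_id.
    public void Route(string orderingKey, string message)
    {
        // Non-negative hash, reduced to 0..queueCount-1 (queueCount would be 4 in the example above).
        int index = (orderingKey.GetHashCode() & int.MaxValue) % _queues.Length;
        _queues[index].Add(message);
    }

    // Each worker thread consumes exactly one of these queues.
    public BlockingCollection<string> QueueAt(int index) => _queues[index];
}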
If you have more concrete questions, I can update my answer appropriately.

How can I throttle the amount of messages coming from ActiveMQ in my C# app?

I'm using ActiveMQ in a .NET program and I'm flooded with message events.
In short, when I get a queue event, 'onMessage(IMessage receivedMsg)', I put the message into an internal queue from which X threads do their thing.
At first I had 'AcknowledgementMode.AutoAcknowledge' when creating the session, so I'm guessing that all the messages in the queue got pulled down and put into the in-memory queue (which is risky, since with a crash everything is lost).
So then I used 'AcknowledgementMode.ClientAcknowledge' when creating the session, and when a worker is done with a message it calls the 'commit()' method on the message. However, all the messages still get pulled down from the queue.
How can I configure it so that ONLY X messages are being processed or sitting in an internal queue, and not everything is 'downloaded' right away?
Are you on .NET 4.0? You could use a BlockingCollection. Set the maximum number of elements it may contain; as soon as a thread tries to put in an excess element, the Add operation will block until the collection falls below the threshold again.
Maybe that would do it for throttling?
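A small sketch of that idea; the bound of 100 and the worker-thread layout are just stand-ins for your real numbers:

using System;
using System.Collections.Concurrent;

public class ThrottledBuffer
{
    // Bounded to 100 entries: Add() blocks once the buffer is full, which indirectly
    // slows down how fast new messages are accepted from the onMessage callback.
    private readonly BlockingCollection<object> _pending = new BlockingCollection<object>(boundedCapacity: 100);

    // Called from onMessage(IMessage receivedMsg).
    public void Put(object receivedMsg) => _pending.Add(receivedMsg);

    // Each of the X worker threads runs this loop.
    public void WorkerLoop(Action<object> process)
    {
        foreach (var msg in _pending.GetConsumingEnumerable())
        {
            process(msg); // process, then acknowledge/commit the message
        }
    }
}

Note that this only throttles your in-process buffer; the broker may still dispatch messages up to its prefetch limit, which is covered further below.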
There is also an API for throttling in the Rx framework, but I do not know how it is implemented. If you implement your queue source as an Observable, this API would become available to you, but I don't know if it meets your needs.
You can set the client prefetch to control how many messages the client will be sent. When the session is in auto-acknowledge mode, the client will only ack a message once it's been delivered to your app via the onMessage callback or through a synchronous receive. By default the client will prefetch 1000 messages from the broker; if the client goes down, these messages are redelivered to another client if this is a queue, whereas for a topic they are just discarded, since a topic is a broadcast-based channel. If you set the prefetch to one, then your client would only be sent one message from the server; each time your onMessage callback completes, the client acks that message and a new one is dispatched (that is, if the session is in auto-acknowledge mode).
Refer to the NMS configuration page for all the options:
http://activemq.apache.org/nms/configuring.html
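For illustration, a hedged sketch of setting a prefetch of one with Apache.NMS.ActiveMQ; the exact option names and casing should be checked against the configuration page linked above, and the broker URI here is just a placeholder:

using Apache.NMS;
using Apache.NMS.ActiveMQ;

public static class ThrottledConsumerFactory
{
    public static IMessageConsumer Create()
    {
        // Prefetch can be set via connection URI options (see the linked NMS page
        // for the full list of nms.* options and their exact spelling).
        var factory = new ConnectionFactory(
            "tcp://localhost:61616?nms.PrefetchPolicy.QueuePrefetch=1");

        IConnection connection = factory.CreateConnection();
        connection.Start();
        ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);

        // Alternatively, override per destination with a destination option.
        IDestination queue = session.GetQueue("MY.QUEUE?consumer.prefetchSize=1");
        return session.CreateConsumer(queue);
    }
}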
Regards
Tim.
FuseSource.com
