Multi-threaded RPC with timeout and resend using RabbitMQ - C#

What I'm looking to do is a Map/Reduce system using RabbitMQ as the transport to and from the worker agents. This allows a simple scaling roadmap while still permitting a single-server implementation for development and testing.
I've seen a few articles on a single message with a timeout using RPC with RabbitMQ, but these appear to be blocking. I need to be able to fire off a batch of requests, each of which could be served by a different agent, and then collate all of the responses so I can crunch them down to a single answer.
If I were to use RPC, I believe I'd end up with a very complicated serial processing of the tasks rather than in parallel.
I'd also like to be able to resend a request under certain circumstances (e.g. the agent reports a transient error), but this isn't essential.
I'm guessing I'm going to need to spawn threads in the master application, each making its own RPC call. But as it's a web app, and there could be in the region of 20 tasks to be served by the agents, I'm not keen on this approach and suspect it would quickly become a bottleneck.
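To make the scatter/gather idea concrete, here is a minimal, hedged sketch using the RabbitMQ .NET client: publish the whole batch up front with correlation IDs pointing at one reply queue, then gather replies until everything has arrived or a deadline passes. The "work" queue, the payloads, and the 30-second timeout are placeholder assumptions, not anything prescribed by RabbitMQ.
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (IConnection conn = factory.CreateConnection())
using (IModel channel = conn.CreateModel())
{
    // Server-named, exclusive queue that all replies come back to.
    string replyQueue = channel.QueueDeclare().QueueName;
    var pending = new Dictionary<string, byte[]>(); // correlation id -> request body (kept for resend)

    for (int i = 0; i < 20; i++) // scatter: one message per task
    {
        IBasicProperties props = channel.CreateBasicProperties();
        props.CorrelationId = Guid.NewGuid().ToString();
        props.ReplyTo = replyQueue;
        byte[] body = Encoding.UTF8.GetBytes("task " + i); // placeholder payload
        pending[props.CorrelationId] = body;
        channel.BasicPublish("", "work", props, body); // "work" is the queue the agents consume
    }

    // Gather: collect replies until the batch is complete or the deadline passes.
    var replies = new List<BasicGetResult>();
    DateTime deadline = DateTime.UtcNow.AddSeconds(30);
    while (pending.Count > 0 && DateTime.UtcNow < deadline)
    {
        BasicGetResult reply = channel.BasicGet(replyQueue, true); // true = auto-ack
        if (reply == null) { Thread.Sleep(50); continue; }
        if (pending.Remove(reply.BasicProperties.CorrelationId))
            replies.Add(reply);
        // An agent reporting a transient error could be handled here by
        // re-publishing pending[id] instead of removing it.
    }
    // Anything left in 'pending' has timed out; 'replies' feeds the reduce step.
}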

Separate threads in a web service after it's completed

If this has been asked before, my apologies. This is .NET 2.0 ASMX Web services, again my apologies =D
A .NET application that only exposes web services. Roughly 10 million messages per day, load balanced between multiple IIS servers. Each incoming message is XML, and each outgoing message is XML (XMLElement). (We have beefy servers that run on steroids.)
I have an SLA that all messages are processed in under X seconds.
One function in the process, Linking Methods, is now taking 10-20 seconds. It is required for every transaction, but it is not critical that it happens before the web service returns its results. Because of this I suggested throwing it onto another thread, but now I realize that my words, and the eager developers behind them, might not have fully thought this through.
(The original post included a diagram comparing the two: the current flow on the left, and what is being attempted on the right.)
Effectively what I'm looking for is to have a web service spawn a long running (10-20 second) thread that will execute even after the web service is completed.
This is what, effectively, is going on:
Thread linkThread = new Thread(delegate()
{
    Linkmembers(GetContext(), ID1, ID2, SomeOtherThing, XMLOrSomething);
});
linkThread.Start();
Using this we've reduced the time from 19 seconds to 2.1 seconds on our dev boxes, which is quite substantial.
I am worried that, with the amount of traffic we get, and if a vendor/outside party decides to throttle us, IIS might decide to recycle/kill those threads before they're done processing. I agree our solution might not be the "best"; however, we don't have the time to build a queue system or another Windows service to handle this.
Is there a better way to do this? Any caveats that should be considered?
Thanks.
Apart from the issues you've described, I cannot think of any. That being said, there are ways to fix the problem that do not involve building your own solution from scratch.
Use MSMQ with WCF: create a WCF service with an MSMQ endpoint that is IIS-hosted (no need for a Windows service as long as WAS is enabled) and make calls to the service from within your ASMX service. You reap all the benefits of reliable queueing without having to build your own.
Plus, if your MSMQ service fails or throws an exception, it will reprocess automatically. If you use DTC and are hitting a database, you can even have the MSMQ transaction flow to the DB.
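As a rough illustration of the shape of this (the contract name, queue path, and operation here are assumptions, not a prescribed API): queued WCF endpoints require one-way operations, so the ASMX code hands the work off and returns immediately.
using System.ServiceModel;

[ServiceContract]
public interface ILinkingService
{
    [OperationContract(IsOneWay = true)] // MSMQ endpoints must be one-way
    void LinkMembers(string id1, string id2);
}

static class LinkQueue
{
    // Called from the ASMX method: enqueue and return. The MSMQ queue itself
    // must already exist (typically created at deployment time).
    public static void QueueLinkRequest(string id1, string id2)
    {
        var factory = new ChannelFactory<ILinkingService>(
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            new EndpointAddress("net.msmq://localhost/private/LinkingService"));
        ILinkingService proxy = factory.CreateChannel();
        proxy.LinkMembers(id1, id2); // durable: survives recycles, retried on failure
        ((IClientChannel)proxy).Close();
        factory.Close();
    }
}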

What is the simplest way to do simple distributed communication in .NET?

So basically I am thinking about attempting load testing on my ASP.NET application, exercising various features all at once. There are a lot of dependencies and AJAX requests performed in this application, so a simple replay of captured HTTP requests will not suffice; and because of other requirements, like picking out random operations and performing then verifying results across several machines, simple load-testing software will not suffice either.
Also, there is no budget on this project for spending, so commercial implementations cannot be used. I'm debating trying MSMQ (which I've never used before) to handle communication between clients, but if that is really complicated to set up then I would either use a database table as a queue or a simple TCP server with each test machine as its client.
Features I want are: immediate failure (if one client crashes, all clients should stop), each test run starting with a brand-new scenario and no prior messages, and the ability to publish a start and a stop event. It would also be nice if I didn't have to worry about state management (leaning towards the TCP server over the database for this) or concurrency.
It doesn't sound like MSMQ is what you need. It is a message-passing, asynchronous communication method, akin to email: you can send a message to a queue that no one is even listening to (i.e., the application isn't running). It seems to me you want a more "online" communication model.
How about creating agents (client applications that sit on many machines and create the load) that expose a WCF service, so a controller program can connect to all of them and instruct the agents what to do? It can be a duplex contract, so that the agents can send the controller notifications; when one of them sends an error notification, the controller can instruct all the other agents to shut down. Also, I'd go for a Net.TCP binding rather than an HTTP binding.
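A minimal sketch of what that duplex contract could look like (IAgent, IAgentCallback, and the address are illustrative assumptions, not anything prescribed):
using System.ServiceModel;

[ServiceContract(CallbackContract = typeof(IAgentCallback))]
public interface IAgent
{
    [OperationContract(IsOneWay = true)]
    void StartRun(string scenario); // the published "start" event

    [OperationContract(IsOneWay = true)]
    void StopRun();                 // the published "stop" event
}

public interface IAgentCallback // implemented by the controller
{
    [OperationContract(IsOneWay = true)]
    void ReportError(string details);
}

class ControllerCallback : IAgentCallback
{
    public void ReportError(string details)
    {
        // Immediate failure: instruct every other agent to StopRun() here.
    }
}

static class Controller
{
    public static void Main()
    {
        // Net.TCP carries the callbacks over the same connection,
        // so no client-side ports need opening. Address is a placeholder.
        var factory = new DuplexChannelFactory<IAgent>(
            new InstanceContext(new ControllerCallback()),
            new NetTcpBinding(),
            new EndpointAddress("net.tcp://agent1:8000/agent"));
        IAgent agent = factory.CreateChannel();
        agent.StartRun("scenario-1");
    }
}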

Chatroom functionality with WCF, duplex callbacks vs polling?

I am using WCF and I am putting a chatroom facility in my C# program. So I need to be able to send information from the server to the clients for two events -
When a user connects/disconnects, I update the list of connected users and send it back to all clients for display in a TextBlock
When a user posts a message, I need the server to send that message out to all clients
So I am looking for advice on the best way of implementing this. I was going to use netTcpBinding for duplex callbacks to clients, but then I ran into some issues regarding not being able to call back the client if the connection is closed. I need to use per-call instances for scalability. I was advised in this thread that I shouldn't leave connections open, as it would 'significantly limit scalability' - WCF duplex callbacks, how do I send a message to all clients?
However I had a look through the book Programming WCF Services and the author seems to state that this is not an issue because 'In between calls, the client holds a reference on a proxy that doesn’t have an actual object at the end of the wire. This means that you can dispose of the expensive resources the service instance occupies long before the client closes the proxy'
So which is correct, is it fine to keep proxies open on clients?
But even if that is fine, it leads to another issue. If the service instances are destroyed between calls, how can they make duplex callbacks to update the clients? Regarding per-call instances, the author of Programming WCF Services says 'Because the object will be discarded once the method returns, you should not spin off background threads or dispatch asynchronous calls back into the instance'
Would I be better off having clients poll the service for updates? I would have imagined that this is much less efficient than duplex callbacks; clients could end up polling the service 50+ times as often as a duplex callback would fire. But maybe there is no other way? Would this be scalable? I envisage several hundred concurrent users.
Since I am guilty of telling you that server callbacks won't scale, I should probably explain a bit more. Let me start by addressing your questions:
Without owning the book in question, I can only assume that the author is either referring to http-based transports or request-response only, with no callbacks. Callbacks require one of two things- either the server needs to maintain an open TCP connection to the client (meaning that there are resources in use on the server for each client), or the server needs to be able to open a connection to a listening port on the client. Since you are using netTcpBinding, your situation would be the former. wsDualHttpBinding is an example of the latter, but that introduces a lot of routing and firewall issues that make it unworkable over the internet (I am assuming that the public internet is your target environment here- if not, let us know).
You have intuitively figured out why server resources are required for callbacks. Again, wsDualHttpBinding is a bit different, because in that case the server is actually calling back to the client over a new connection in order to send the async reply. This basically requires ports to be opened on the client's side and punched through any firewalls, something that you can't expect of the average internet user. Lots more on that here: WSDualHttpBinding for duplex callbacks
You can architect this a few different ways, but it's understandable if you don't want the overhead (and potential for delay) of the clients constantly hammering the server for updates. Again, at several hundred concurrent users, you are likely still within the range that one good server could handle using callbacks, but I assume you'd like to have a system that can scale beyond that if needed (or at peak times). What I'd do is this:
Use callback proxies (I know, I told you not to)... Clients connecting create new proxies, which are stored in a thread-safe collection and occasionally checked for liveness (and purged if found to be dead).
Instead of having the server post messages directly from one client to another, have the server post the messages to some Message Queue Middleware. There are tons of these out there- MSMQ is popular with Windows, ActiveMQ and RabbitMQ are FOSS (Free Open Source Software), and Tibco EMS is popular in big enterprises (but can be very expensive). What you probably want to use is a topic, not a queue (more on queues vs topics here).
Have a thread (or several threads) on the server dedicated to reading messages off of the topic; if a message is addressed to a live session on that server, deliver it to that session's proxy (sketched in code below).
(The original answer included a rough sketch of the architecture here: clients connect to load-balanced servers, and the servers exchange messages through a shared topic.)
This architecture should allow you to automatically scale out by simply adding more servers, and load balancing new connections among them. The message queueing infrastructure would be the only limiting factor, and all of the ones I mentioned would scale beyond any likely use case you'd ever see. Because you'd be using topics and not queues, every message would be broadcast to each server- you might need to figure out a better way of distributing the messages, like using hash-based partitioning.
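A minimal sketch of the proxy-collection side (steps 1 and 3 above), assuming a one-way chat callback contract; all names here are illustrative:
using System;
using System.Collections.Concurrent;
using System.ServiceModel;

// Callback contract, referenced via CallbackContract on the (not shown) service contract.
public interface IChatCallback
{
    [OperationContract(IsOneWay = true)]
    void Receive(string from, string text);
}

public static class SessionRegistry
{
    // Step 1: proxies for connected clients, kept in a thread-safe collection.
    static readonly ConcurrentDictionary<string, IChatCallback> Sessions =
        new ConcurrentDictionary<string, IChatCallback>();

    public static void Register(string user)
    {
        Sessions[user] = OperationContext.Current.GetCallbackChannel<IChatCallback>();
    }

    // Step 3: called by the topic-reader thread for each message pulled off the broker.
    public static void Deliver(string from, string text)
    {
        foreach (var pair in Sessions)
        {
            try
            {
                pair.Value.Receive(from, text);
            }
            catch (CommunicationException)
            {
                IChatCallback dead;
                Sessions.TryRemove(pair.Key, out dead); // purge dead proxies
            }
        }
    }
}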

Message broker consumer/producer with reassignment when a client goes down?

I am looking for a message broker API to use with C#.
Normally things are quite simple: I have a server that knows what jobs are to be done, and I have some clients that need to get these jobs.
And here are the special requirements I have:
If a client gets a job but fails to answer within a specific time, another client should do the work.
More than one queue, and priorities.
If possible it needs to work with big message queues (this way I could just load all the jobs for a month at once and forget about them).
Secured communication would be good.
An API for talking to the broker from C#. How much work is done for me? What is still to do?
The ability to delete some jobs...
If available, replication to another broker would be good.
The broker needs to run on Windows.
What is not an issue:
low latency (it is no problem if a message takes minutes)
Do you know such a message broker that is free to use?
RabbitMQ and several other AMQP implementations satisfy most (if not all) of these requirements.
RabbitMQ allows clients to acknowledge receipt and/or processing of messages (see the consumer sketch after this list). As per http://www.rabbitmq.com/tutorials/amqp-concepts.html#message-acknowledge:
"If a consumer dies without sending an acknowledgement the AMQP broker will redeliver it to another consumer or, if none are available at the time, the broker will wait until at least one consumer is registered for the same queue before attempting redelivery."
Many queues (and in fact many brokers) are supported, in a variety of different configurations
It scales particularly well, even for very large message queues: http://www.rabbitmq.com/faq.html#performance
Encryption is supported: http://www.rabbitmq.com/faq.html#channel-encryption
There is a .NET Client Users Guide and API docs: http://www.rabbitmq.com/documentation.html
There is live failover if a broker dies: http://www.rabbitmq.com/clustering.html
It runs on Windows, Linux, and probably anything else that has an Erlang implementation
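Here is the consumer sketch mentioned above, using the RabbitMQ .NET client (the "jobs" queue name and connection details are assumptions): with manual acknowledgements, a job is only acked after it has been processed, so if the client dies mid-job the broker redelivers it to another consumer.
using System;
using System.Threading;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (IConnection conn = factory.CreateConnection())
using (IModel channel = conn.CreateModel())
{
    // Durable queue, so queued jobs survive a broker restart.
    channel.QueueDeclare("jobs", true, false, false, null);
    channel.BasicQos(0, 1, false); // hand this worker one unacked job at a time

    while (true)
    {
        BasicGetResult job = channel.BasicGet("jobs", false); // false = manual ack
        if (job == null) { Thread.Sleep(500); continue; }     // queue empty; poll again
        Console.WriteLine("processing job " + job.DeliveryTag); // ...do the real work here...
        channel.BasicAck(job.DeliveryTag, false); // ack only after success; if the client
                                                  // dies before this, the broker reassigns
                                                  // the job to another consumer
    }
}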

Azure: Will it work for my App?

I'm creating an application that I want to put into the cloud. This application has one main function.
It hosts socket CLIENT sessions on behalf of other users (think of Beejive IM for the iPhone, where it hosts IM sessions for clients to maintain state on those IM networks, allowing the client to connect/disconnect at will, without breaking the IM network connection).
Now, the way I've planned it, one 'worker instance' can likely only handle a finite number of client sessions (let's say 50,000 for argument's sake). Those sessions will be very long-lived worker tasks.
The issue I'm trying to get my head around is that I will sometimes need to perform tasks on specific client sessions (e.g. if I need to disconnect a client session). With Azure, would I be able to queue up a smaller task that only the instance hosting that specific client session would be able to dequeue?
Right now I'm contemplating GoGrid as my provider, and I solve this issue by using Apache's ActiveMQ messaging software. My web app enqueues 'disconnect' tasks that are assigned to a specific instance ID; each client session is assigned to a specific instance ID, and each instance dequeues only the 'disconnect' tasks assigned to it.
I'm wondering if it's feasible to do something similar on Azure, and how I would generally do it. I like the idea of not having to set up many different VMs to scale, but instead just deploying a single package. Also, it would be nice to make use of Azure's queues instead of integrating a third-party product such as Apache ActiveMQ, or even MSMQ.
I'd be very concerned about building a production application on Azure until the feature set, pricing, and licensing terms are finalized. For starters, you can't even do a cost comparison between it and e.g. GoGrid or EC2 or Mosso, so I don't see how it could possibly end up a front-runner. Also, we know that all of these systems will have glitches as they mature. Amazon's services are in much wider use than any of the others, and have been publicly available for many years. IMHO, choosing Azure now is a recipe for pain while it stabilizes.
Have you considered Amazon's Simple Queue Service for queueing?
I think you can absolutely use Windows Azure for this. My recommendation would be to create a queue for each session you're tracking. Then enqueue the disconnect message (for example) on the queue for that session. The worker instance that's handling that connection should be the only one polling that queue, so it should handle performing the task on that connection.
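Purely as an illustration of that queue-per-session idea, here is a minimal sketch using the Azure.Storage.Queues client (a current library that post-dates the original question); the queue naming scheme, connection string, and session-teardown step are all assumptions:
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

string connectionString = "UseDevelopmentStorage=true"; // placeholder: dev storage emulator
string sessionId = "42";                                // placeholder session identifier

// Web front end: drop a control message on the session's own queue.
var queue = new QueueClient(connectionString, "session-" + sessionId);
queue.CreateIfNotExists();
queue.SendMessage("disconnect");

// Worker instance hosting that session (the only poller of this queue):
foreach (QueueMessage msg in queue.ReceiveMessages(maxMessages: 1).Value)
{
    if (msg.MessageText == "disconnect")
    {
        // ...tear down the client session here...
    }
    queue.DeleteMessage(msg.MessageId, msg.PopReceipt); // done; remove from queue
}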
Regarding the application hosting socket connections for clients to connect to, I'd double-check what's allowed, as I think only HTTP and HTTPS connections can be made with Azure.
