C# - ActiveMQ - Task in Consumer

Suppose we have a queue in ActiveMQ, a client that sends messages (producer) to that queue and a server that gets the messages (consumer) from the queue.
On the server side the consumer has a message listener, something like:
consumer.Listener += ConsumerOnListener;
and the implementation of ConsumerOnListener looks like the following:
private void ConsumerOnListener(IMessage message)
{
    var textMessage = message as ITextMessage;
    // validate textMessage
    // more code here... e.g. save to database, logging etc. (part-a)
    Task.Factory.StartNew(() =>
    {
        // do something else here (part-b)
    });
}
The main idea behind the above is not to wait for part-b to be executed before processing the next message. Imagine that part-b does something entirely on its own, which may succeed or fail (fire-and-forget).
So, the question here is whether it is OK to use Tasks inside ConsumerOnListener. Will this somehow "block" the queue?

Assuming that the Task is asynchronous, it shouldn't block either the execution of the listener or the queue itself. Typically, concurrency in use cases like this is increased simply by increasing the number of consumers/listeners, but it's also valid to have listeners kick off their own async threads, tasks, etc.
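A minimal sketch of the scaling-out approach, assuming the Apache.NMS.ActiveMQ client (the broker URI, queue name, and consumer count are illustrative). Each consumer gets its own session, since a session dispatches to its consumers serially:
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;
class MultiConsumerExample
{
    static void Main()
    {
        var factory = new ConnectionFactory("tcp://localhost:61616");
        using (IConnection connection = factory.CreateConnection())
        {
            for (int i = 0; i < 4; i++)
            {
                // One session per consumer so the broker can dispatch
                // messages to the consumers in parallel.
                ISession session = connection.CreateSession();
                IMessageConsumer consumer =
                    session.CreateConsumer(session.GetQueue("example.queue"));
                consumer.Listener += message =>
                    Console.WriteLine((message as ITextMessage)?.Text);
            }
            connection.Start();
            Console.ReadLine(); // keep the process alive while listening
        }
    }
}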

Related

TCP/IP Socket Random Disconnection and Calling a synchronous method inside asynchronous method

An application component is connected to a server and receives data over a TCP/IP socket connection. The client's delegate method fires whenever a message/data is received from the server, as below.
public async Task OnMessage(byte[] message)
{
    var Message = Encoding.UTF8.GetString(message);
    Console.WriteLine(Message);
    ProcessMessages(Message); // Synchronous method; can be long running depending upon the message content.
}
My method ProcessMessages is synchronous but is called inside an asynchronous method. Does it block other messages from being received while ProcessMessages is executing?
Please note: it can take a few seconds to process a message, depending on the message's nature. Could my socket connection with the server break in this case, even though a heartbeat is sent to the server every 3 seconds?
Can you please explain how OnMessage() could be the reason for a TCP/IP connection failure or disconnection when no network disconnection is observed?
async won't make a method run in the background. It's syntactic sugar that allows the use of await to await already asynchronous operations. It tells the compiler to generate the state machine needed to handle await.
To process the message in another thread use Task.Run, e.g.:
public async Task OnMessage(byte[] message)
{
    var Message = Encoding.UTF8.GetString(message);
    Console.WriteLine(Message);
    await Task.Run(() => ProcessMessages(Message));
}
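If messages must still be handled one at a time while the socket read loop stays responsive, another sketch is to enqueue into a channel drained by a single background task (the MessagePump class and its members are illustrative, not from the original answer):
using System;
using System.Text;
using System.Threading.Channels;
using System.Threading.Tasks;
class MessagePump
{
    private readonly Channel<string> _channel = Channel.CreateUnbounded<string>();
    public MessagePump()
    {
        // A single consumer task drains the channel, preserving message order.
        Task.Run(async () =>
        {
            await foreach (var text in _channel.Reader.ReadAllAsync())
                ProcessMessages(text); // the long-running work happens here
        });
    }
    // The receive callback returns as soon as the message is enqueued.
    public async Task OnMessage(byte[] message) =>
        await _channel.Writer.WriteAsync(Encoding.UTF8.GetString(message));
    private void ProcessMessages(string text) => Console.WriteLine(text);
}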

I have a long running process which I call in my Service Bus Queue. I want it to continue beyond 5 minutes

I have a long-running process which performs matches between millions of records. I call this code using a Service Bus queue; however, when my process passes the 5-minute lock limit, Azure starts processing the already-processed records from the start again.
How can I avoid this?
Here is my code:
private static async Task ProcessMessagesAsync(Message message, CancellationToken token)
{
    long receivedMessageTransactionId = 0;
    try
    {
        IQueueClient queueClient = new QueueClient(serviceBusConnectionString, serviceBusQueueName, ReceiveMode.PeekLock);
        // Process the message
        receivedMessageTransactionId = Convert.ToInt64(Encoding.UTF8.GetString(message.Body));
        // My very long-running method
        await DataCleanse.PerformDataCleanse(receivedMessageTransactionId);
        // Get transaction and metric details
        await queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }
    catch (Exception ex)
    {
        Log4NetErrorLogger(ex);
        throw; // rethrow without resetting the stack trace (was "throw ex;")
    }
}
Messages are intended for notifications, not long-running processing.
You've got a few options:
Receive the message and rely on the receiver's RenewLock() operation to extend the lock.
Use the user-callback API and specify the maximum processing time, if known, via the MessageHandlerOptions.MaxAutoRenewDuration setting to auto-renew the message's lock (a sketch of this option follows the list).
Record that processing has started but do not complete the incoming message. Instead, leverage the message deferral feature: send yourself a new delayed message with a reference to the deferred message's SequenceNumber. This will allow you to periodically receive a "reminder" message to see if the work is finished. If it is, complete the deferred message by its SequenceNumber. Otherwise, complete the "reminder" message along with sending a new one. This approach would require some level of architecture redesign on your side.
Similar to option 3, but offload processing to an external process that will report the status later. There are frameworks that can help you with that, such as MassTransit or NServiceBus. The latter has a sample you can download and play with.
Note that options 1 and 2 are not guaranteed, as those are client-side initiated operations.
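A minimal sketch of option 2, assuming the Microsoft.Azure.ServiceBus client; the connection string, queue name, and durations are illustrative:
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
class Receiver
{
    static IQueueClient queueClient;
    static void Main()
    {
        queueClient = new QueueClient("<connection-string>", "<queue-name>");
        var options = new MessageHandlerOptions(ExceptionReceivedHandlerAsync)
        {
            MaxConcurrentCalls = 1,
            AutoComplete = false,
            // Keep renewing the peek-lock for up to 30 minutes of processing.
            MaxAutoRenewDuration = TimeSpan.FromMinutes(30)
        };
        queueClient.RegisterMessageHandler(ProcessMessagesAsync, options);
        Console.ReadLine();
    }
    static async Task ProcessMessagesAsync(Message message, CancellationToken token)
    {
        // ... long-running work here ...
        await queueClient.CompleteAsync(message.SystemProperties.LockToken);
    }
    static Task ExceptionReceivedHandlerAsync(ExceptionReceivedEventArgs args)
    {
        Console.WriteLine(args.Exception);
        return Task.CompletedTask;
    }
}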

Receive concurrent asynchronous requests and process them one at a time

Background
We have a service operation that can receive concurrent asynchronous requests and must process those requests one at a time.
In the following example, the UploadAndImport(...) method receives concurrent requests on multiple threads, but its calls to the ImportFile(...) method must happen one at a time.
Layperson Description
Imagine a warehouse with many workers (multiple threads). People (clients) can send the warehouse many packages (requests) at the same time (concurrently). When a package comes in, a worker takes responsibility for it from start to finish, and the person who dropped off the package can leave (fire-and-forget). The workers' job is to put each package down a small chute, and only one worker can put a package down a chute at a time; otherwise chaos ensues. If the person who dropped off the package checks in later (polling endpoint), the warehouse should be able to report on whether the package went down the chute or not.
Question
The question then is how to write a service operation that...
can receive concurrent client requests,
receives and processes those requests on multiple threads,
processes requests on the same thread that received the request,
processes requests one at a time,
is a one way fire-and-forget operation, and
has a separate polling endpoint that reports on request completion.
We've tried the following and are wondering two things:
Are there any race conditions that we have not considered?
Is there a more canonical way to code this scenario in C#.NET with a service oriented architecture (we happen to be using WCF)?
Example: What We Have Tried
This is the service code that we have tried. It works, though it feels like somewhat of a hack or kludge.
static ImportFileInfo _inProgressRequest = null;
static readonly ConcurrentDictionary<Guid, ImportFileInfo> WaitingRequests =
    new ConcurrentDictionary<Guid, ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request)
{
    // Receive the incoming request
    WaitingRequests.TryAdd(request.OperationId, request);
    while (null != Interlocked.CompareExchange(ref _inProgressRequest, request, null))
    {
        // Wait for any previous processing to complete
        Thread.Sleep(500);
    }
    // Process the incoming request
    ImportFile(request);
    Interlocked.Exchange(ref _inProgressRequest, null);
    WaitingRequests.TryRemove(request.OperationId, out _);
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    !WaitingRequests.ContainsKey(operationId);
This is example client code.
private static async Task UploadFile(FileInfo fileInfo, ImportFileInfo importFileInfo)
{
    using (var proxy = new Proxy())
    using (var stream = new FileStream(fileInfo.FullName, FileMode.Open, FileAccess.Read))
    {
        importFileInfo.FileByteStream = stream;
        proxy.UploadAndImport(importFileInfo);
    }
    await Task.Run(() => Poller.Poll(timeoutSeconds: 90, intervalSeconds: 1, func: () =>
    {
        using (var proxy = new Proxy())
        {
            return proxy.UploadAndImportIsComplete(importFileInfo.OperationId);
        }
    }));
}
It's hard to write a minimum viable example of this in a Fiddle, but here is a start that gives a sense of it and that compiles.
As before, the above seems like a hack/kludge, and we are asking both about potential pitfalls in its approach and about alternative patterns that are more appropriate/canonical.
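As an aside, the Thread.Sleep spin-wait in the service code above can be expressed more directly with SemaphoreSlim(1, 1), which blocks without burning CPU. A minimal sketch, reusing the WaitingRequests dictionary and ImportFile method from above:
static readonly SemaphoreSlim ImportGate = new SemaphoreSlim(1, 1);
public void UploadAndImport(ImportFileInfo request)
{
    WaitingRequests.TryAdd(request.OperationId, request);
    ImportGate.Wait(); // blocks without the Thread.Sleep polling loop
    try
    {
        ImportFile(request);
    }
    finally
    {
        ImportGate.Release();
        WaitingRequests.TryRemove(request.OperationId, out _);
    }
}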
A simple solution is to use the producer-consumer pattern to pipe requests, with a restriction on the thread count.
You still have to implement a simple progress reporter or event. I suggest replacing the expensive polling approach with asynchronous communication, which is offered by Microsoft's SignalR library. It uses WebSockets to enable async behavior. The client and server can register their callbacks on a hub. Using RPC, the client can invoke server-side methods and vice versa; you would post progress to the client by using the hub. In my experience SignalR is very simple to use and very well documented. It has libraries for all the popular server-side languages (e.g. Java). A hub sketch follows at the end of this answer.
Polling, in my understanding, is the total opposite of fire-and-forget. You can't forget, because you have to check something on an interval. Event-based communication, like SignalR, is fire-and-forget: you fire and will get a reminder (because you forgot). The "event side" will invoke your callback instead of you waiting to do it yourself!
Requirement 3 (process requests on the same thread that received them) is ignored, since waiting for a thread to complete would eliminate the fire-and-forget character.
private BlockingCollection<ImportFileInfo> requestQueue = new BlockingCollection<ImportFileInfo>();
private bool isServiceEnabled;
private const int MaxNumberOfThreads = 8;
// Semaphore requires an initial and a maximum count (the original one-argument call doesn't compile).
private Semaphore semaphore = new Semaphore(MaxNumberOfThreads, MaxNumberOfThreads);
private readonly object syncLock = new object();

public void UploadAndImport(ImportFileInfo request)
{
    // Start the request handler background loop
    if (!this.isServiceEnabled)
    {
        this.requestQueue?.Dispose();
        this.requestQueue = new BlockingCollection<ImportFileInfo>();
        // Fire and forget (requirement 4)
        Task.Run(() => HandleRequests());
        this.isServiceEnabled = true;
    }
    // Cache multiple incoming client requests (requirement 1) (and enable throttling)
    this.requestQueue.Add(request);
}

private void HandleRequests()
{
    while (!this.requestQueue.IsCompleted)
    {
        // Wait while the thread limit is exceeded (some throttling)
        this.semaphore.WaitOne();
        // Process the incoming requests in a dedicated thread (requirement 2) until the BlockingCollection is marked completed.
        Task.Run(() => ProcessRequest());
    }
    // Reset the request handler after the BlockingCollection was marked completed
    this.isServiceEnabled = false;
    this.requestQueue.Dispose();
}

private void ProcessRequest()
{
    ImportFileInfo request = this.requestQueue.Take();
    UploadFile(request);
    // You updated your question saying the method "ImportFile()" requires synchronization.
    // This is a bottleneck and will significantly drop performance when this method is long running.
    lock (this.syncLock)
    {
        ImportFile(request);
    }
    this.semaphore.Release();
}
Remarks:
BlockingCollection is an IDisposable.
TODO: You have to "close" the BlockingCollection by marking it completed ("BlockingCollection.CompleteAdding()"), or it will loop indefinitely waiting for further requests. Maybe you introduce an additional request method for the client to cancel and/or update the process and to mark adding to the BlockingCollection as completed. Or a timer that waits for an idle time before marking it as completed. Or make your request handler thread block or spin.
Replace Take() and Add(...) with TryTake(...) and TryAdd(...) if you want cancellation support.
Code is not tested.
Your "ImportFile()" method is a bottleneck in your multithreaded environment. I suggest making it thread-safe. In the case of I/O that requires synchronization, I would cache the data in a BlockingCollection and then write it to I/O one by one.
The problem is that your total bandwidth is very small (only one job can run at a time) and you want to handle parallel requests. That means that queue time could vary wildly. It may not be the best choice to implement your job queue in-memory, as it would make your system much more brittle and more difficult to scale out when your business grows.
A traditional, scalable way to architect this would be:
An HTTP service to accept requests, load balanced/redundant, with no session state.
A SQL Server database to persist the requests in a queue, returning a persistent unique job ID.
A Windows service to process the queue, one job at a time, and mark jobs as complete (a sketch of the dequeue step follows this list). The worker process for the service would probably be single-threaded.
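A hypothetical sketch of that dequeue step against a SQL Server job table; the table and column names are invented for illustration:
using System;
using System.Data.SqlClient;
class JobQueueWorker
{
    // UPDATE ... OUTPUT atomically claims one queued job and returns it.
    const string ClaimSql = @"
        UPDATE TOP (1) dbo.ImportJobs
        SET Status = 'Processing'
        OUTPUT inserted.JobId, inserted.Payload
        WHERE Status = 'Queued';";
    public static void ProcessNextJob(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(ClaimSql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                if (!reader.Read()) return; // queue is empty
                long jobId = reader.GetInt64(0);
                string payload = reader.GetString(1);
                // ... run the import here, then mark the job complete ...
            }
        }
    }
}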
This solution requires you to choose a web server. A common choice is IIS running ASP.NET. On that platform, each request is guaranteed to be handled in a single-threaded manner (i.e. you don't need to worry about race conditions too much), but due to a feature called thread agility the request might end with a different thread, but in the original synchronization context, which means you will probably never notice unless you are debugging and inspecting thread IDs.
Given the constraints and context of our system, this is the implementation we ended up using:
static ImportFileInfo _importInProgressItem = null;
static readonly ConcurrentQueue<ImportFileInfo> ImportQueue =
    new ConcurrentQueue<ImportFileInfo>();

public void UploadAndImport(ImportFileInfo request)
{
    UploadFile(request);
    ImportFileSynchronized(request);
}

// Synchronize the file import,
// because the database allows a user to perform only one write at a time.
private void ImportFileSynchronized(ImportFileInfo request)
{
    ImportQueue.Enqueue(request);
    do
    {
        ImportQueue.TryPeek(out var next);
        if (null != Interlocked.CompareExchange(ref _importInProgressItem, next, null))
        {
            // Queue processing is already under way in another thread.
            return;
        }
        ImportFile(next);
        ImportQueue.TryDequeue(out _);
        Interlocked.Exchange(ref _importInProgressItem, null);
    } while (ImportQueue.Any());
}

public bool UploadAndImportIsComplete(Guid operationId) =>
    ImportQueue.All(waiting => waiting.OperationId != operationId);
This solution works well for the loads we are expecting. That load involves a maximum of about 15-20 concurrent PDF file uploads. The batch of up to 15-20 files tends to arrive all at once and then go quiet for several hours until the next batch arrives.
Criticism and feedback are most welcome.

Wait for RabbitMQ Threads to finish in Windows Service OnStop()

I am working on a Windows service written in C# (.NET 4.5, VS2012) which uses RabbitMQ (receiving messages by subscription). There is a class which derives from DefaultBasicConsumer, and in this class are two actual consumers (so two channels). Because there are two channels, two threads handle incoming messages (from two different queues/routing keys), and both call the same HandleBasicDeliver(...) function.
Now, when the Windows service's OnStop() is called (when someone stops the service), I want to let both of those threads finish handling their messages (if they are currently processing one), send the acks to the server, and then stop the service (abort the threads and so on).
I have thought of multiple solutions, but none of them seem to be really good. Here's what I tried:
Using one mutex: each thread tries to take it when entering HandleBasicDeliver, then releases it afterwards. When OnStop() is called, the main thread tries to grab the same mutex, effectively preventing the RabbitMQ threads from processing any more messages. The disadvantage is that only one consumer thread can process a message at a time.
Using two mutexes: each RabbitMQ thread uses a different mutex, so they won't block each other in HandleBasicDeliver(); I can differentiate which thread is actually handling the current message based on the routing key. Something like:
HandleBasicDeliver(...)
{
    if (routingKey == firstConsumerRoutingKey)
    {
        // Try to grab the mutex of the first consumer
    }
    else
    {
        // Try to grab the mutex of the second consumer
    }
}
When OnStop() is called, the main thread will try to grab both mutexes; once both mutexes are "in the hands" of the main thread, it can proceed with stopping the service. The problem: if another consumer were added to this class, I'd need to change a lot of code.
Using a counter, or a CountdownEvent: the counter starts off at 0, and each time HandleBasicDeliver() is entered, the counter is safely incremented using the Interlocked class. After the message is processed, the counter is decremented. When OnStop() is called, the main thread checks whether the counter is 0. Should this condition be fulfilled, it will continue. However, right after it checks that the counter is 0, some RabbitMQ thread might begin to process a message. (A sketch addressing this race follows this list.)
When OnStop() is called, closing the connection to RabbitMQ (to make sure no new messages will arrive), and then waiting a few seconds (in case any messages are being processed, to let them finish) before closing the application. The problem is that the exact number of seconds I should wait before shutting down the application is unknown, so this isn't an elegant or exact solution.
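A sketch of a race-free variant of the counter idea (an assumption, not from the post): stop intake first by closing the connection, then drain in-flight handlers with a CountdownEvent.
using System;
using System.Threading;
class DrainableConsumer
{
    // Initialized to 1 so the count cannot reach zero until shutdown begins.
    private readonly CountdownEvent _inFlight = new CountdownEvent(1);
    public void HandleBasicDeliver(byte[] body)
    {
        // TryAddCount fails only after shutdown has drained the count to zero.
        if (!_inFlight.TryAddCount()) return;
        try
        {
            Console.WriteLine("processing {0} bytes", body.Length);
            // ... process the message and send the ack here ...
        }
        finally
        {
            _inFlight.Signal();
        }
    }
    public void Shutdown()
    {
        // Close the RabbitMQ connection first so no new deliveries arrive,
        // then remove the initial count and wait for in-flight handlers.
        _inFlight.Signal();
        _inFlight.Wait();
    }
}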
I realize the design does not conform to the Single Responsibility Principle, and that may contribute to the lack of solutions. However, could there be a good solution to this problem without having to redesign the project?
We do this in our application. The main idea is to use a CancellationTokenSource.
On your windows service add this:
private static readonly CancellationTokenSource CancellationTokenSource = new CancellationTokenSource();
Then in your rabbit consumers do this:
1. Change from using Dequeue to DequeueNoWait.
2. Have your rabbit consumer check the cancellation token.
Here is our code:
public async Task StartConsuming(IMessageBusConsumer consumer, MessageBusConsumerName fullConsumerName, CancellationToken cancellationToken)
{
    var queueName = GetQueueName(consumer.MessageBusConsumerEnum);
    using (var model = _rabbitConnection.CreateModel())
    {
        // Configure the quality of service for the model. Below is what each setting means.
        // BasicQos(0 = "Don't send me a new message until I've finished", _fetchSize = "Send me N messages at a time", false = "Apply to this model only")
        model.BasicQos(0, consumer.FetchCount.Value, false);
        var queueingConsumer = new QueueingBasicConsumer(model);
        model.BasicConsume(queueName, false, fullConsumerName, queueingConsumer);
        var queueEmpty = new BasicDeliverEventArgs(); // This is what gets returned if nothing in the queue is found.
        while (!cancellationToken.IsCancellationRequested)
        {
            var deliverEventArgs = queueingConsumer.Queue.DequeueNoWait(queueEmpty);
            if (deliverEventArgs == queueEmpty)
            {
                // This 100ms wait allows the processor to go do other work.
                // No sense in going back to an empty queue immediately.
                // CancellationToken intentionally not used!
                // ReSharper disable once MethodSupportsCancellation
                await Task.Delay(100);
                continue;
            }
            // DO YOUR WORK HERE!
        }
    }
} // closing brace for the method (missing in the original snippet)
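A sketch of the service side (assumed, not shown in the original answer): cancel the token in OnStop() and give the consumer task time to drain. The _consumerTask field is hypothetical, holding the Task returned by StartConsuming:
protected override void OnStop()
{
    CancellationTokenSource.Cancel(); // signal the consumer loops to exit
    _consumerTask.Wait(TimeSpan.FromSeconds(30)); // let in-flight work finish
}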
Usually, the way we ensure a Windows service does not stop before processing completes is to use code like the following. Hope that helps.
protected override void OnStart(string[] args)
{
    // start the worker thread
    _workerThread = new Thread(WorkMethod)
    {
        // !!! set to foreground to block the Windows service from stopping
        // until the thread exits when all pending tasks complete
        IsBackground = false
    };
    _workerThread.Start();
}

protected override void OnStop()
{
    // notify the worker thread to stop accepting new migration requests
    // and exit when all tasks are completed
    // some code to notify the worker thread to stop accepting new tasks internally
    // wait for the worker thread to stop
    _workerThread.Join();
}

How does MessageQueue.BeginReceive work and how to use it correctly?

I currently have a background thread. In this thread is an infinite loop.
This loop once in a while updates some values in a database and then listens for 1 second on the MessageQueue (with queue.Receive(TimeSpan.FromSeconds(1))).
As long as no message comes in, this call internally throws a MessageQueueException (timeout), which is caught, and then the loop continues. If there is a message, the call returns normally and the message is processed, after which the loop continues.
This leads to a lot of first-chance exceptions (every second, except when there is a message to process), which spams the debug output and also breaks into the debugger when I forget to exclude MessageQueueExceptions.
So how is the async handling of the MessageQueue meant to be done correctly, while still ensuring that, as long as my application runs, the queue is monitored and the database is updated once in a while? Of course the thread here should not use up 100% CPU.
I just need the big picture or a hint toward some correctly done async processing.
Rather than looping in a thread, I would recommend registering a delegate for the ReceiveCompleted event of your MessageQueue, as described here:
using System;
using System.Messaging;

namespace MyProject
{
    /// <summary>
    /// Provides a container class for the example.
    /// </summary>
    public class MyNewQueue
    {
        //**************************************************
        // Provides an entry point into the application.
        //
        // This example performs asynchronous receive operation
        // processing.
        //**************************************************
        public static void Main()
        {
            // Create an instance of MessageQueue. Set its formatter.
            MessageQueue myQueue = new MessageQueue(".\\myQueue");
            myQueue.Formatter = new XmlMessageFormatter(new Type[]
                { typeof(String) });

            // Add an event handler for the ReceiveCompleted event.
            myQueue.ReceiveCompleted += new
                ReceiveCompletedEventHandler(MyReceiveCompleted);

            // Begin the asynchronous receive operation.
            myQueue.BeginReceive();

            // Do other work on the current thread.
            return;
        }

        //**************************************************
        // Provides an event handler for the ReceiveCompleted
        // event.
        //**************************************************
        private static void MyReceiveCompleted(Object source,
            ReceiveCompletedEventArgs asyncResult)
        {
            // Connect to the queue.
            MessageQueue mq = (MessageQueue)source;

            // End the asynchronous receive operation.
            Message m = mq.EndReceive(asyncResult.AsyncResult);

            // Display message information on the screen.
            Console.WriteLine("Message: " + (string)m.Body);

            // Restart the asynchronous receive operation.
            mq.BeginReceive();
            return;
        }
    }
}
Source: https://learn.microsoft.com/en-us/dotnet/api/system.messaging.messagequeue.receivecompleted?view=netframework-4.7.2
Have you considered a MessageEnumerator, which is returned from MessageQueue.GetMessageEnumerator2()?
You get dynamic content of the queue to examine, and you can remove messages from the queue during the iteration (a usage sketch follows this list):
If there are no messages, then MoveNext() will return false and you don't need to catch first-chance exceptions.
If there are new messages after you started iterating, then they will be iterated over (if they are put after the cursor).
If there are new messages before the cursor, then you can just reset the iterator or continue (if you don't need lower-priority messages at the moment).
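A minimal usage sketch, assuming System.Messaging; the queue path is illustrative:
using System;
using System.Messaging;
class EnumeratorExample
{
    static void Main()
    {
        using (var queue = new MessageQueue(".\\Private$\\myQueue"))
        using (MessageEnumerator cursor = queue.GetMessageEnumerator2())
        {
            queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            // MoveNext returns false on an empty queue; no exception is thrown.
            while (cursor.MoveNext(TimeSpan.Zero))
            {
                // RemoveCurrent dequeues the message under the cursor.
                Message m = cursor.RemoveCurrent();
                Console.WriteLine((string)m.Body);
            }
        }
    }
}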
Contrary to the comment by Jamie Dixon, the scenario IS exceptional. Note the naming of the method and its parameter: BeginReceive(TimeSpan timeout).
Had the method been named BeginTryReceive, it would've been perfectly normal if no message was received. Naming it BeginReceive (or Receive, for the sync version) implies that a message is expected to enter the queue. That the TimeSpan parameter is named timeout is also significant, because a timeout IS exceptional: a timeout means that a response was expected but none was given, and the caller chooses to stop waiting and assumes that an error has occurred. When you call BeginReceive/Receive with a 1-second timeout, you are stating that if no message has entered the queue by that time, something must have gone wrong and we need to handle it.
The way I would implement this, if I understand what you want to do correctly, is this:
Call BeginReceive either with a very large timeout, or even without a timeout if I don't see an empty queue as an error.
Attach an event handler to the ReceiveCompleted event, which 1) processes the message, and 2) calls BeginReceive again.
I would NOT use an infinite loop. This is both bad practice and completely redundant when using asynchronous methods like BeginReceive.
Edit: To abandon a queue which isn't being read by any client, have the queue writers peek into the queue to determine whether it is 'dead'.
Edit: I have another suggestion. Since I don't know the details of your application, I have no idea if it is feasible or appropriate. It seems to me that you're basically establishing a connection between client and server, with the message queue as the communication channel. Why is this a 'connection'? Because the queue won't be written to if no one is listening. That's pretty much what a connection is, I think. Wouldn't it be more appropriate to use sockets or named pipes to transfer the messages? That way, the clients simply close the Stream objects when they are done reading, and the servers are immediately notified. As I said, I don't know if it can work for what you're doing, but it feels like a more appropriate communication channel.
