Azure Queue throwing Exception on DeleteMessage - c#

I'm working on an Azure-based project for some research and have been running into some issues when deleting messages from a CloudQueue instance. The code is fairly straightforward, so I'm a bit baffled as to why an exception is being thrown when I try to delete a message from the queue.
Here is the code that produces data for the queue:
foreach (var cell in scheme(cells))
{
    string id = Guid.NewGuid().ToString();
    var blob = sweepItemContainer.GetBlobReference(id);
    using (BlobStream stream = blob.OpenWrite())
    {
        BinaryFormatter bf = new BinaryFormatter();
        bf.Serialize(stream, cell);
    }
    sweepItemQueue.AddMessage(new CloudQueueMessage(id), new TimeSpan(1, 0, 0));
}
Here is the code that consumes the data from the queue:
var msgs = sweepItemsQueue.GetMessages(MsgAmt);
foreach (var msg in msgs)
{
    _handleMessage(msg, sweepItemsContainer);
    sweepItemsQueue.DeleteMessage(msg);
    mergeItemsQueue.AddMessage(new CloudQueueMessage(msg.AsString), new TimeSpan(1, 0, 0));
}
I don't see how the message cannot exist in the queue. Nothing else is mutating the queue besides other consumers. But I am assured that they cannot get the same message (so long as the timespan doesn't run out), so how is this happening?

There are two timeouts you need to worry about: how long the message lives in the queue (which you've specified in your .AddMessage() call), and the visibility timeout that is set when you call .GetMessages() (by default this is 30 seconds; there is an overload that allows you to specify the timeout). When you call .GetMessages(), all of the messages returned are invisible to other consumers for the duration of the visibility timeout. Once this period finishes, all of the messages you haven't already deleted become visible to all other consumers.
To check whether this is the problem, I would try using the overload of .GetMessages() with its maximum visibility timeout of 2 hours. If this is the problem, you can fine-tune this value down to a more sensible number. Another option would be to retrieve just one message at a time.
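As a minimal sketch (assuming the same legacy Microsoft.WindowsAzure.StorageClient API as the code above), the consumer side would become:
// Keep up to MsgAmt messages invisible to other consumers for 2 hours
// while this worker processes them.
var msgs = sweepItemsQueue.GetMessages(MsgAmt, TimeSpan.FromHours(2));
foreach (var msg in msgs)
{
    _handleMessage(msg, sweepItemsContainer);
    sweepItemsQueue.DeleteMessage(msg); // must happen before the visibility timeout expires
    mergeItemsQueue.AddMessage(new CloudQueueMessage(msg.AsString), new TimeSpan(1, 0, 0));
}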

Another answer, from Steve Marx: basically, inspect the storage exception and move on. I have seen this pattern in other frameworks too:
Steve Marx blog post
try
{
    q.DeleteMessage(msg);
}
catch (StorageClientException ex)
{
    if (ex.ExtendedErrorInformation.ErrorCode == "MessageNotFound")
    {
        // pop receipt must be invalid
        // ignore or log (so we can tune the visibility timeout)
    }
    else
    {
        // not the error we were expecting
        throw;
    }
}

How to use partitions in order to consume one topic in parallel in Kafka with .NET Core C#?

We are using the .NET Kafka client to consume messages from one topic in C# code.
However, it seems to be a wee bit too slow.
Wondering if we could parallelize the process a bit, I checked this answer: Kafka how to consume one topic parallel
But I don't really see how to implement this partitioning with the .NET Kafka client in my example below:
var consumerBuilder = new ConsumerBuilder<Ignore, string>(GetConfig())
    .SetErrorHandler((_, e) => _logger.LogError("Kafka consumer error on Revenue response. {#KafkaConsumerError}", e));

using (var consumer = consumerBuilder.Build())
{
    consumer.Subscribe(RevenueResponseTopicName);
    try
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var consumeResult = consumer.Consume(stoppingToken);
            RevenueTopicResponseModel revenueResponse;
            try
            {
                revenueResponse = JsonConvert.DeserializeObject<RevenueTopicResponseModel>(consumeResult.Value);
            }
            catch
            {
                _logger.LogCritical("Impossible to deserialize the response. {#RevenueConsumeResult}", consumeResult);
                continue;
            }
            _logger.LogInformation("Revenue response received from Kafka. {RevenueTopicResponse}",
                consumeResult.Value);
            await _revenueService.RevenueResultReceivedAsync(revenueResponse);
        }
    }
    catch (OperationCanceledException)
    {
        _logger.LogInformation($"Operation canceled. Closing {nameof(RevenueResponseConsumer)}.");
        consumer.Close();
    }
    catch (Exception e)
    {
        _logger.LogCritical(e, $"Unhandled exception during {nameof(RevenueResponseConsumer)}.");
    }
}
You need to create the topic with multiple partitions, let's say 10.
In your code, create 10 consumers with the same consumer group; the brokers will distribute the topic's messages among your consumers.
Basically, just put your code inside a for loop, giving each consumer its own thread (otherwise the consumers would run one after another):
for (int i = 0; i < 10; i++)
{
    // Each consumer needs its own thread; without Task.Run the using block
    // would make the consumers run sequentially instead of in parallel.
    Task.Run(() =>
    {
        var consumerBuilder = new ConsumerBuilder<Ignore, string>(GetConfig())
            .SetErrorHandler((_, e) => _logger.LogError("Kafka consumer error on Revenue response. {#KafkaConsumerError}", e));
        using (var consumer = consumerBuilder.Build())
        {
            // your processing here
        }
    });
}
To answer this question correctly, we need to know the reason behind the requirement for partitioning.
If your topic doesn't have lots of messages to be processed, then partitioning isn't the right tool. If the issue is that processing a single message takes too much time and you want to parallelize the work, then you could add consumed messages to a Channel and have as many consumers of that channel as needed running in the background (see the sketch below).
Basically, you should still use a single consumer per process, since a consumer already utilizes background threads.
You may also find my considerations about the Kafka consumer in C# in the article.
If you have any questions, please feel free to ask! I'll be glad to help you.
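A minimal sketch of that Channel approach (my own illustration: the worker count, the ProcessAsync method, and the bounded capacity are assumptions; consumer and stoppingToken come from the question's code, and System.Threading.Channels is required):
// One Kafka consumer feeds a bounded channel; several background workers
// drain it in parallel, so slow message processing no longer stalls consumption.
var channel = Channel.CreateBounded<string>(100);
var workers = Enumerable.Range(0, 4)
    .Select(_ => Task.Run(async () =>
    {
        await foreach (var payload in channel.Reader.ReadAllAsync(stoppingToken))
        {
            await ProcessAsync(payload); // hypothetical per-message work
        }
    }))
    .ToArray();

// The consume loop itself stays single-threaded: it only reads and enqueues.
while (!stoppingToken.IsCancellationRequested)
{
    var consumeResult = consumer.Consume(stoppingToken);
    await channel.Writer.WriteAsync(consumeResult.Value, stoppingToken);
}
channel.Writer.Complete();
await Task.WhenAll(workers);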
You can commit after a set of offsets instead of committing on each offset, which could give you some performance benefit.
if (result.Offset % 5 == 0)
{
    consumer.Commit(result);
}
Assuming EnableAutoCommit = false
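For completeness, a sketch of the consumer configuration that assumption implies (Confluent.Kafka's ConsumerConfig; the server and group values are placeholders):
var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092", // placeholder
    GroupId = "revenue-consumers",       // placeholder
    EnableAutoCommit = false             // offsets are committed manually, every 5th message above
};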

System.Core Pipe is broken

I just inherited some complex code that, unfortunately, I do not fully understand. It handles a large number of inventory records, inputting/outputting to a database. The solution is extremely large/advanced, and I am still on the newer side of C#. The issue I am encountering is that periodically the program will throw an IOException. It doesn't actually exit with a failure code, but it messes up our output data.
The try/catch block is as follows:
private static void ReadRecords(OleDbRecordReader recordReader, long maxRows, int executionTimeout, BlockingCollection<List<ProcessRecord>> processingBuffer, CancellationTokenSource cts, Job theStack, string threadName) {
    ProcessRecord rec = null;
    try {
        Thread.CurrentThread.Name = threadName;
        if(null == cts)
            throw new InvalidOperationException("Passed CancellationToken was null.");
        if(cts.IsCancellationRequested)
            throw new InvalidOperationException("Passed CancellationToken has already been cancelled.");
        long reportingFrequency = (maxRows < 250000) ? 10000 : 100000;
        theStack.FireStatusEvent("Opening " + threadName);
        recordReader.Open(maxRows, executionTimeout);
        theStack.FireStatusEvent(threadName + " Opened");
        theStack.FireInitializationComplete();
        List<ProcessRecord> inRecs = new List<ProcessRecord>(500);
        ProcessRecord priorRec = rec = recordReader.Read();
        while(null != priorRec) { //-- note that this is priorRec, not rec. We process one row in arrears.
            if(cts.IsCancellationRequested)
                theStack.FireStatusEvent(threadName + " cancelling due to request or error.");
            cts.Token.ThrowIfCancellationRequested();
            if(rec != null) //-- We only want to count the loop when there actually is a record.
                theStack.RecordCountRead++;
            if(theStack.RecordCountRead % reportingFrequency == 0)
                theStack.FireProgressEvent();
            if((rec != null) && (priorRec.SKU == rec.SKU) && (priorRec.Store == rec.Store) && (priorRec.BatchId == rec.BatchId))
                inRecs.Add(rec); //-- just store it and keep going
            else { //-- otherwise, we need to process it
                processingBuffer.Add(inRecs.ToList(), cts.Token); //-- note that we don't enqueue the original LIST! That could be very bad.
                inRecs.Clear();
                if(rec != null) //-- Again, we need this check here to ensure that we don't try to enqueue the EOF null record.
                    inRecs.Add(rec); //-- Now, enqueue the record that fired this condition and start the loop again
            }
            priorRec = rec;
            rec = recordReader.Read();
        } //-- end while
    }
    catch(OperationCanceledException) {
        theStack.FireStatusEvent(threadName + " Canceled.");
    }
    catch(Exception ex) {
        theStack.FireExceptionEvent(ex);
        theStack.FireStatusEvent("Error in RecordReader. Requesting cancellation of other threads.");
        cts.Cancel(); // If an exception occurs, notify all other pipeline stages, then rethrow
        // throw; //-- This will also propagate cancellation, but that's OK
    }
}
In the log of our job we see the output loader stopping, and the exception is:
System.Core: Pipe is broken.
Does anyone have any ideas as to what may cause this? More importantly, the individual who made this large-scale application is no longer here. When I debug my applications, I am normally able to add breakpoints in the solution and step through everything in the standard VS way to find the issue. However, this application is huge, and a GUI pops up when I debug it. I believe the GUI was made for testing purposes, but it hinders me from actually being able to step through everything. When the .exe is run from our actual job stream, however, there is no GUI; it just executes the way it's supposed to.
I am asking for help with two things:
1. Suggestions as to what may cause this. Could the OleDb driver be the cause? I ask because I have this running on 2 different servers, one test and one not. The one with the newer OleDb driver version does not fail (7.0, I believe, whereas the one that fails has 6.0).
2. Is there any code that I could add that may give me a better indication as to what is causing the broken pipe? The error only happens periodically; if I run the job again right after, it may not happen. I'd say it throws the exception 30-40% of the time.
If you have any additional questions about the structure just let me know.
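On point 2, one possible diagnostic (a hedged sketch of my own, not code from this application): subscribing to AppDomain.FirstChanceException early in Main logs every exception at the moment it is raised, even ones that are later swallowed, which can reveal where the "Pipe is broken" IOException actually originates. Note the HResult getter is public only from .NET 4.5 on.
// Log every exception as it is thrown, before any catch block sees it.
// Keep the handler cheap: no throwing, no locking, no re-entering the pipeline.
AppDomain.CurrentDomain.FirstChanceException += (sender, e) =>
{
    if (e.Exception is System.IO.IOException)
    {
        System.Diagnostics.Trace.WriteLine(string.Format(
            "First-chance IOException (HResult=0x{0:X8}): {1}",
            e.Exception.HResult, e.Exception));
    }
};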

Multithreaded linq2sql applications TransactionScope difficulties

I've created a file-processing service which reads and imports XML files from a specific directory.
The service starts several workers which poll a file queue for new files, and it uses LINQ to SQL for data access. Each worker thread has its own DataContext.
The files being processed contain several orders, and each order contains several addresses (customer/contractor/subcontractor).
I've defined a TransactionScope around the handling of each file. This way I want to ensure that the whole file is handled correctly, or that the whole file is rolled back when an exception occurs:
try
{
    using (var tx = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        foreach (var order in orders)
        {
            HandleType1Order(order);
        }
        tx.Complete();
    }
}
catch (SqlException ex)
{
    if (ex.Number == SqlErrorNumbers.Deadlock)
    {
        throw new FileHandlerException("File Caused a Deadlock, retrying later", ex, true);
    }
    else
        throw;
}
One of the requirements for the service is that it creates or updates the addresses found in the XML files. So I've created an address service which is responsible for address management. The following piece of code gets executed for each order (within the method HandleType1Order()) in the XML import file (and thus is part of the TransactionScope for the entire file):
using (var tx = new TransactionScope())
{
    address = GetAddressByReference(number);
    if (address != null) //address is already known
    {
        Log.Debug("Found address {0} - {1}. Updating...", address.Code, address.Name);
        UpdateAddress(address, name, number, isContractor, isSubContractor, isCustomer);
    }
    else
    {
        //address not known, so create it
        Log.Debug("Address {0} not known, creating address", number);
        address = CreateAddress(name, number, sourceSystemId, isContractor, isSubContractor,
                                isCustomer);
        _addressRepository.Save(address);
    }
    _addressRepository.Flush();
    tx.Complete();
}
What I'm trying to do here is create or update an address, with the number being unique.
The method GetAddressByReference(string number) returns a known address or null when an address is not found.
public virtual Address GetAddressByReference(string reference)
{
    return _addressRepository.GetAll().SingleOrDefault(a => a.Code == reference);
}
When I run the service, however, it creates multiple addresses with the same number. The method GetAddressByReference() gets called concurrently and should return a known address when a second thread executes it with the same address number; however, it returns null. There is probably something wrong with my transaction boundaries or isolation level, but I can't seem to get it to work.
Can someone point me in the right direction? Help is much appreciated!
P.S. I've no problem with the transactions being deadlocked and causing a rollback; the file will just be retried when a deadlock occurs.
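One direction to investigate, as a hedged sketch of my own (using System.Transactions): make the isolation level explicit on the inner scope, so the read-then-insert runs under Serializable range locks and a concurrent thread doing the same lookup blocks (or deadlocks and retries, which is acceptable here) instead of also seeing null.
// Pin the isolation level and timeout explicitly rather than relying on defaults.
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.Serializable,
    Timeout = TransactionManager.DefaultTimeout
};
using (var tx = new TransactionScope(TransactionScopeOption.Required, options))
{
    var address = GetAddressByReference(number);
    // ...update or create the address as in the original code...
    tx.Complete();
}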
Edit 1: Threading code:
public void Work()
{
    _isRunning = true;
    while (true)
    {
        ImportFileTask task = _queue.Dequeue(); //dequeue blocks on empty queue
        if (task == null)
            break; //Shutdown worker when a null task is read from the queue
        IFileImporter importer = null;
        try
        {
            using (new LockFile(task.FilePath).Acquire()) //create a file lock to sync access across all processes to the file
            {
                importer = _kernel.Resolve<IFileImporter>();
                Log.DebugFormat("Processing file {0}", task.FilePath);
                importer.Import(task.FilePath);
                Log.DebugFormat("Done Processing file {0}", task.FilePath);
            }
        }
        catch (Exception ex)
        {
            Log.Fatal(
                "A Fatal exception occurred while handling {0} --> {1}".FormatWith(task.FilePath, ex.Message), ex);
        }
        finally
        {
            if (importer != null)
                _kernel.ReleaseComponent(importer);
        }
    }
    _isRunning = false;
}
The above method runs in all of our worker threads. It uses Castle Windsor to resolve the FileImporter, which has a transient lifestyle (thus not shared across threads).
You didn't post your threading code, so it's difficult to say what the issue is. I'm assuming you have started the DTC (Distributed Transaction Coordinator)?
Are you using a ThreadPool? Are you using the "lock" keyword?
http://msdn.microsoft.com/en-us/library/c5kehkcz.aspx

C# FileSystemWatcher, How to know file copied completely into the watch folder

I am developing a .NET application where I am using the FileSystemWatcher class with its Created event attached to a folder. I have to take action on this event (i.e. copy the file to some other location). When I put a large file into the watched folder, the event is raised immediately, even though the file copy process has not yet completed. I don't want to check this with the File.Open method.
Is there any way to get notified that the copy of a file into the watch folder has completed, so that my event fires only then?
It is indeed a bummer that FileSystemWatcher (and the underlying ReadDirectoryChangesW API) provides no way to get notified when a new file has been fully created.
The best and safest way around this that I've come across so far (and that doesn't rely on timers) goes like this:
Upon receiving the Created event, start a thread that, in a loop, checks whether the file is still locked (using an appropriate retry interval and maximum retry count). The only way to check if a file is locked is to try to open it with exclusive access: if that succeeds (does not throw an IOException), then the file is done copying, and your thread can raise an appropriate event (e.g. FileCopyCompleted).
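A minimal sketch of that check (my own naming; the retry count and delay are arbitrary):
// Returns true once the file can be opened exclusively, i.e. the copy is done.
private static bool WaitForFileUnlock(string path, int maxRetries = 30, int retryDelayMs = 1000)
{
    for (int i = 0; i < maxRetries; i++)
    {
        try
        {
            using (File.Open(path, FileMode.Open, FileAccess.Read, FileShare.None))
            {
                return true; // exclusive open succeeded
            }
        }
        catch (IOException)
        {
            Thread.Sleep(retryDelayMs); // still locked by the copying process
        }
    }
    return false; // gave up; raise an error or keep waiting as appropriate
}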
I have had the exact same problem, and solved it this way:
Set FileSystemWatcher to notify when files are created and when they are modified.
When a notification comes in:
a. If there is no timer set for this filename (see below), set a timer to expire in a suitable interval (I commonly use 1 second).
b. If there is a timer set for this filename, cancel the timer and set a new one to expire in the same interval.
When a timer expires, you know that the associated file has been created or modified and has been untouched for the time interval. This means that the copy/modify is probably done and you can now process it.
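A sketch of that per-file timer bookkeeping (my own member names, using System.Threading.Timer):
private readonly Dictionary<string, Timer> timers = new Dictionary<string, Timer>();
private readonly TimeSpan quietPeriod = TimeSpan.FromSeconds(1);

// Wire this to both the Created and Changed events of the watcher.
private void OnCreatedOrChanged(object sender, FileSystemEventArgs e)
{
    lock (this.timers)
    {
        Timer timer;
        if (this.timers.TryGetValue(e.FullPath, out timer))
        {
            // File touched again: push the expiry back by the quiet period.
            timer.Change(this.quietPeriod, Timeout.InfiniteTimeSpan);
        }
        else
        {
            this.timers[e.FullPath] = new Timer(
                _ => this.OnFileQuiet(e.FullPath), null,
                this.quietPeriod, Timeout.InfiniteTimeSpan);
        }
    }
}

private void OnFileQuiet(string path)
{
    lock (this.timers)
    {
        Timer timer;
        if (this.timers.TryGetValue(path, out timer))
        {
            timer.Dispose();
            this.timers.Remove(path);
        }
    }
    // The file has been untouched for the quiet period; process it here.
}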
You could listen for the Changed event and start a timer. If the Changed event is raised again, reset the timer. When the timer reaches a certain value without the Changed event being raised, you can try to perform the copy.
I subscribe to the Changed and Renamed events and try to rename the file on every Changed event, catching the IOExceptions. If the rename succeeds, the copy has finished, and the Renamed event is fired only once.
There are three issues with FileSystemWatcher. The first is that it can send out duplicate creation events, so you check for that with something like:
this.watcher.Created += (s, e) =>
{
    if (!this.seen.ContainsKey(e.FullPath)
        || (DateTime.Now - this.seen[e.FullPath]) > this.seenInterval)
    {
        this.seen[e.FullPath] = DateTime.Now;
        ThreadPool.QueueUserWorkItem(
            this.WaitForCreatingProcessToCloseFileThenDoStuff, e.FullPath);
    }
};
where this.seen is a Dictionary<string, DateTime> and this.seenInterval is a TimeSpan.
Next, you have to wait for the file's creator to finish writing it (the issue raised in the question). And third, you must be careful, because sometimes the file creation event is raised before the file can be opened, giving you a FileNotFoundException, but the file can also be removed before you can get hold of it, which likewise gives a FileNotFoundException.
private void WaitForCreatingProcessToCloseFileThenDoStuff(object threadContext)
{
    // Make sure the just-found file is done being
    // written by repeatedly attempting to open it
    // for exclusive access.
    var path = (string)threadContext;
    DateTime started = DateTime.Now;
    DateTime lastLengthChange = DateTime.Now;
    long lastLength = 0;
    var noGrowthLimit = new TimeSpan(0, 5, 0);
    var notFoundLimit = new TimeSpan(0, 0, 1);
    for (int tries = 0; ; ++tries)
    {
        try
        {
            using (var fileStream = new FileStream(
                path, FileMode.Open, FileAccess.ReadWrite, FileShare.None))
            {
                // Do Stuff
            }
            break;
        }
        catch (FileNotFoundException)
        {
            // Sometimes the file appears before it is there.
            if (DateTime.Now - started > notFoundLimit)
            {
                // Should be there by now
                break;
            }
        }
        catch (IOException ex)
        {
            // mask in severity, customer, and code
            // (the unchecked casts keep the constants signed so that the
            // comparison with hr, an int, can actually match)
            var hr = (int)(ex.HResult & 0xA000FFFF);
            if (hr != unchecked((int)0x80000020) && hr != unchecked((int)0x80000021))
            {
                // not a share violation or a lock violation
                throw;
            }
        }
        try
        {
            var fi = new FileInfo(path);
            if (fi.Length > lastLength)
            {
                lastLength = fi.Length;
                lastLengthChange = DateTime.Now;
            }
        }
        catch (Exception)
        {
            // Ignore transient errors while probing the file length.
        }
        // still locked
        if (DateTime.Now - lastLengthChange > noGrowthLimit)
        {
            // 5 minutes, still locked, no growth.
            break;
        }
        Thread.Sleep(111);
    }
}
You can, of course, set your own timeouts. This code leaves enough time for a 5 minute hang. Real code would also have a flag to exit the thread if requested.
This answer is a bit late, but if possible I'd get the source process to copy a small marker file after the large file or files, and use the FileSystemWatcher on that.
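A sketch of that convention (the ".done" suffix and the watchFolder variable are my own assumptions):
// The producer writes "payload.dat" first and an empty "payload.dat.done" after it,
// so the marker's Created event guarantees the data file is complete.
var watcher = new FileSystemWatcher(watchFolder, "*.done");
watcher.Created += (s, e) =>
{
    var dataFile = e.FullPath.Substring(0, e.FullPath.Length - ".done".Length);
    // Process dataFile here; it was fully written before the marker appeared.
};
watcher.EnableRaisingEvents = true;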
Try to set filters
myWatcher.NotifyFilter = NotifyFilters.LastAccess | NotifyFilters.LastWrite;

Using WCF service via async interface from worker thread, how do I ensure that events are sent from the client "in order"

I am writing a Silverlight class library to abstract the interface to a WCF service. The WCF service provides a centralized logging service. The Silverlight class library provides a simplified log4net-like interface (logger.Info, logger.Warn, etc.) for logging. From the class library I plan to provide options such that logged messages can be accumulated on the client and sent in "bursts" to the WCF logging service, rather than sending each message as it occurs. Generally, this is working well. The class library does accumulate messages, and it does send collections of messages to the WCF logging service, where they are logged by an underlying logging framework.
My current problem is that the messages (from a single client with a single thread; all logging code is in button click events) are becoming interleaved in the logging service. I realize that at least part of this is probably due to the instancing (PerCall) or synchronization of the WCF logging service. However, it also seems that my messages are occurring in such rapid succession that the "bursts" of messages leaving on the async calls are actually "leaving" the client in a different order than they were generated.
I have tried to set up a producer/consumer queue as described here, with a slight (or should that be "slight", with air quotes) change: the Work method blocks (WaitOne) until the async call returns (i.e. until the async callback executes). The idea is that when one burst of messages is sent to the WCF logging service, the queue should wait until that burst has been processed before sending the next burst.
Maybe what I am trying to do is not feasible, or maybe I am trying to solve the wrong problem, (or maybe I just don't know what I am doing!).
Anyway, here is my producer/consumer queue code:
internal class ProducerConsumerQueue : IDisposable
{
    EventWaitHandle wh = new AutoResetEvent(false);
    Thread worker;
    readonly object locker = new object();
    Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
    LoggingService.ILoggingService loggingService;

    internal ProducerConsumerQueue(LoggingService.ILoggingService loggingService)
    {
        this.loggingService = loggingService;
        worker = new Thread(Work);
        worker.Start();
    }

    internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
    {
        //Queue the next burst of messages
        lock (locker)
        {
            logEventQueue.Enqueue(logEvents);
            //Is this Set conflicting with the WaitOne on the async call in Work?
            wh.Set();
        }
    }

    private void Work()
    {
        while (true)
        {
            ObservableCollection<LoggingService.LogEvent> events = null;
            lock (locker)
            {
                if (logEventQueue.Count > 0)
                {
                    events = logEventQueue.Dequeue();
                    if (events == null || events.Count == 0) return;
                }
            }
            if (events != null && events.Count > 0)
            {
                System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);
                //
                // This seems to be the key...
                // Send one burst of messages via an async call and wait until the async call completes.
                //
                loggingService.BeginLogEvents(events, ar =>
                {
                    try
                    {
                        loggingService.EndLogEvents(ar);
                        System.Diagnostics.Debug.WriteLine("3. Work - Back");
                        wh.Set();
                    }
                    catch (Exception ex)
                    {
                    }
                }, null);
                System.Diagnostics.Debug.WriteLine("2. Work - Waiting");
                wh.WaitOne();
                System.Diagnostics.Debug.WriteLine("4. Work - Finished");
            }
            else
            {
                wh.WaitOne();
            }
        }
    }

    #region IDisposable Members
    public void Dispose()
    {
        EnqueueLogEvents(null);
        worker.Join();
        wh.Close();
    }
    #endregion
}
In my test it is essentially called like this:
//Inside of LogManager, get the LoggingService and set up the queue.
ILoggingService loggingService = GetTheLoggingService();
ProducerConsumerQueue loggingQueue = new ProducerConsumerQueue(loggingService);

//Inside of client code, get a logger and log with it
ILog logger = LogManager.GetLogger("test");
for (int i = 0; i < 100; i++)
{
    logger.InfoFormat("logging message [{0}]", i);
}
Internally, logger/LogManager accumulates some number of logging messages (say 25) before adding that group of messages to the queue. Something like this:
internal void AddNewMessage(string message)
{
    lock (logMessages)
    {
        logMessages.Add(message);
        if (logMessages.Count >= 25)
        {
            ObservableCollection<LogMessage> messages = new ObservableCollection<LogMessage>(logMessages);
            logMessages.Clear();
            loggingQueue.EnqueueLogEvents(messages);
        }
    }
}
So, in this case I would expect to have 4 bursts of 25 messages each. Based on the Debug statements in my ProducerConsumerQueue code (maybe not the best way to debug this?), I would expect to see something like this:
Work - Sending 25 events
Work - Waiting
Work - Back
Work - Finished
Repeated 4 times.
Instead I am seeing something like this:
*1. Work - Sending 25 events
*2. Work - Waiting
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*3. Work - Back
*4. Work - Finished
(Added leading * so that the lines would not be autonumbered by SO)
I would have expected the queue to allow multiple bursts of messages to be added, but to completely process one burst (waiting on the async call to complete) before processing the next. It doesn't seem to be doing this; it does not reliably wait for the completion of the async call. I do have a call to Set in EnqueueLogEvents; maybe that is cancelling the WaitOne in the Work method?
So, I have a few questions:
1. Does my explanation of what I am trying to accomplish make sense (is my explanation clear, not is it a good idea or not)?
2. Is what I am trying to do (transmit, from the client, the messages from a single thread, in the order that they occurred, completely processing one set of messages at a time) a good idea?
3. Am I close?
4. Can it be done?
5. Should it be done?
Thanks for any help!
[EDIT]
After more investigation and thanks to Brian's suggestion, we were able to get this working. I have copied the modified code. The key is that we are now using the "wh" wait handle strictly for ProducerConsumerQueue functions. Rather than using wh to wait for the async call to complete, we are now waiting on res.AsyncWaitHandle, which is returned by the BeginLogEvents call.
internal class LoggingQueue : IDisposable
{
    EventWaitHandle wh = new AutoResetEvent(false);
    Thread worker;
    readonly object locker = new object();
    bool working = false;
    Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
    LoggingService.ILoggingService loggingService;

    internal LoggingQueue(LoggingService.ILoggingService loggingService)
    {
        this.loggingService = loggingService;
        worker = new Thread(Work);
        worker.Start();
    }

    internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
    {
        lock (locker)
        {
            logEventQueue.Enqueue(logEvents);
            //System.Diagnostics.Debug.WriteLine("EnqueueLogEvents calling Set");
            wh.Set();
        }
    }

    private void Work()
    {
        while (true)
        {
            ObservableCollection<LoggingService.LogEvent> events = null;
            lock (locker)
            {
                if (logEventQueue.Count > 0)
                {
                    events = logEventQueue.Dequeue();
                    if (events == null || events.Count == 0) return;
                }
            }
            if (events != null && events.Count > 0)
            {
                //System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);
                IAsyncResult res = loggingService.BeginLogEvents(events, ar =>
                {
                    try
                    {
                        loggingService.EndLogEvents(ar);
                        //System.Diagnostics.Debug.WriteLine("3. Work - Back");
                    }
                    catch (Exception ex)
                    {
                    }
                }, null);
                //System.Diagnostics.Debug.WriteLine("2. Work - Waiting");
                // Block until the async call returns. We are doing this so that we can be sure that all logging messages
                // are sent FROM the client in the order they were generated. ALSO, we don't want to interleave blocks of logging
                // messages from the same client by sending a new block of messages before the previous block has been
                // completely processed.
                res.AsyncWaitHandle.WaitOne();
                //System.Diagnostics.Debug.WriteLine("4. Work - Finished");
            }
            else
            {
                wh.WaitOne();
            }
        }
    }

    #region IDisposable Members
    public void Dispose()
    {
        EnqueueLogEvents(null);
        worker.Join();
        wh.Close();
    }
    #endregion
}
As I mentioned in my initial question and in my comments to Jon and Brian, I still don't know if doing all of this work is a good idea, but at least the code does what I wanted it to do. That means that I at least have the choice of doing it this way or some other way (such as restoring order after the fact) rather than not having the choice.
Can I suggest that there's a simple alternative to all this coordination? Have a sequence using a cheap monotonically increasing ID (e.g. with Interlocked.Increment()) so that no matter what order things happen at the client or server, you can regenerate the original ordering later on.
That should let you be efficient and flexible, sending whatever you want asynchronously without waiting for acknowledgement, but without losing the ordering.
Obviously that means the ID (or possibly a guaranteed-unique timestamp field) would need to be part of your WCF service, but if you control both ends that should be reasonably simple.
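A sketch of that idea (the LogEvent fields here are my assumptions about the shape of the WCF data contract):
private static long sequence; // shared across all loggers in the client

internal LogEvent CreateLogEvent(string message)
{
    return new LogEvent
    {
        SequenceNumber = Interlocked.Increment(ref sequence), // cheap, monotonic, thread-safe
        Message = message
    };
}

// The server (or any later reader) restores the original ordering with:
// events.OrderBy(e => e.SequenceNumber)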
The reason you are getting that kind of sequencing is that you are trying to use the same wait handle that the producer/consumer queue is using for a different purpose. That is going to cause all kinds of chaos. At some point things will go from bad to worse, and the queue will eventually become live-locked. You really should create a separate WaitHandle to wait for completion of the logging service. Or, if BeginLogEvents fits the standard pattern, it will return an IAsyncResult that contains a WaitHandle you can use instead of creating your own.
As a side note, I really do not like the producer/consumer pattern presented on the Albahari website. The problem is that it is not safe for multiple consumers (obviously that is of no concern to you). And I say that with all due respect, because I think his website is one of the best resources for multithreaded programming. If BlockingCollection is available to you, then use that instead.
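For reference, a sketch of what the consumer loop might look like with BlockingCollection (assuming the same BeginLogEvents/EndLogEvents service contract as above; BlockingCollection handles all the blocking and wake-up logic itself):
BlockingCollection<ObservableCollection<LoggingService.LogEvent>> queue =
    new BlockingCollection<ObservableCollection<LoggingService.LogEvent>>();

// Producer side: queue.Add(messages); call queue.CompleteAdding() to shut down.

// Consumer thread: GetConsumingEnumerable blocks while empty and ends once adding completes.
foreach (var events in queue.GetConsumingEnumerable())
{
    IAsyncResult res = loggingService.BeginLogEvents(events, null, null);
    res.AsyncWaitHandle.WaitOne();    // preserve send order: one burst at a time
    loggingService.EndLogEvents(res);
}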
