I created a custom autocomplete control. When the user presses a key, it queries the database server (using Remoting) on another thread. When the user types very fast, the program must cancel the previously executing request/thread.
I first implemented it with AsyncCallback, but I found it cumbersome: too many house rules to follow (e.g. AsyncResult, AsyncState, EndInvoke), plus you have to detect the thread of the BeginInvoke'd object so you can terminate the previously executing thread. Besides, even if I had stayed with AsyncCallback, there's no method on those AsyncCallbacks that can properly terminate the previously executing thread.
EndInvoke cannot terminate the thread; it would still complete the operation of the thread that is to be terminated. I would still end up calling Abort() on the thread.
So I decided to implement it with a pure Thread approach, sans the AsyncCallback. Does this Thread.Abort() look normal and safe to you?
public delegate DataSet LookupValuesDelegate(LookupTextEventArgs e);

internal delegate void PassDataSet(DataSet ds);

public class AutoCompleteBox : UserControl
{
    Thread _yarn = null;

    [System.ComponentModel.Category("Data")]
    public LookupValuesDelegate LookupValuesDelegate { set; get; }

    void DataSetCallback(DataSet ds)
    {
        if (this.InvokeRequired)
            this.Invoke(new PassDataSet(DataSetCallback), ds);
        else
        {
            // implements the appending of text on textbox here
        }
    }

    private void txt_TextChanged(object sender, EventArgs e)
    {
        if (_yarn != null) _yarn.Abort();

        _yarn = new Thread(
            new Mate
            {
                LookupValuesDelegate = this.LookupValuesDelegate,
                LookupTextEventArgs =
                    new LookupTextEventArgs
                    {
                        RowOffset = offset,
                        Filter = txt.Text
                    },
                PassDataSet = this.DataSetCallback
            }.DoWork);

        _yarn.Start();
    }
}

internal class Mate
{
    internal LookupTextEventArgs LookupTextEventArgs = null;
    internal LookupValuesDelegate LookupValuesDelegate = null;
    internal PassDataSet PassDataSet = null;

    object o = new object();

    internal void DoWork()
    {
        lock (o)
        {
            // the actual code that queries the database
            var ds = LookupValuesDelegate(LookupTextEventArgs);
            PassDataSet(ds);
        }
    }
}
NOTES
The reason for cancelling the previous thread when the user types keys in succession is not only to prevent the text from being appended, but also to cancel the previous network round trip, so the program won't consume too much memory as a result of successive network operations.
I'm worried that if I avoid Thread.Abort() altogether, the program could consume too much memory.
Here's the code without the Thread.Abort(), using a counter:
internal delegate void PassDataSet(DataSet ds, int keyIndex);

public class AutoCompleteBox : UserControl
{
    [System.ComponentModel.Category("Data")]
    public LookupValuesDelegate LookupValuesDelegate { set; get; }

    static int _currentKeyIndex = 0;

    void DataSetCallback(DataSet ds, int keyIndex)
    {
        if (this.InvokeRequired)
            this.Invoke(new PassDataSet(DataSetCallback), ds, keyIndex);
        else
        {
            // ignore the returned DataSet
            if (keyIndex < _currentKeyIndex) return;

            // implements the appending of text on textbox here...
        }
    }

    private void txt_TextChanged(object sender, EventArgs e)
    {
        Interlocked.Increment(ref _currentKeyIndex);

        var yarn = new Thread(
            new Mate
            {
                KeyIndex = _currentKeyIndex,
                LookupValuesDelegate = this.LookupValuesDelegate,
                LookupTextEventArgs =
                    new LookupTextEventArgs
                    {
                        RowOffset = offset,
                        Filter = txt.Text
                    },
                PassDataSet = this.DataSetCallback
            }.DoWork);

        yarn.Start();
    }
}

internal class Mate
{
    internal int KeyIndex;
    internal LookupTextEventArgs LookupTextEventArgs = null;
    internal LookupValuesDelegate LookupValuesDelegate = null;
    internal PassDataSet PassDataSet = null;

    object o = new object();

    internal void DoWork()
    {
        lock (o)
        {
            // the actual code that queries the database
            var ds = LookupValuesDelegate(LookupTextEventArgs);
            PassDataSet(ds, KeyIndex);
        }
    }
}
No, it is not safe. Thread.Abort() is sketchy enough at the best of times, but in this case your control has no (heh) control over what's being done in the delegate callback. You don't know what state the rest of the app will be left in, and may well find yourself in a world of hurt when the time comes to call the delegate again.
Set up a timer. Wait a bit after the text change before calling the delegate. Then wait for it to return before calling it again. If it's that slow, or the user is typing that fast, then they probably don't expect autocomplete anyway.
Regarding your updated (Abort()-free) code:
You're now launching a new thread for (potentially) every keypress. This is not only going to kill performance, it's unnecessary - if the user isn't pausing, they probably aren't looking for the control to complete what they're typing.
I touched on this earlier, but P Daddy said it better:
You'd be better off just implementing
a one-shot timer, with maybe a
half-second timeout, and resetting it
on each keystroke.
Think about it: a fast typist might create a score of threads before the first autocomplete callback has had a chance to finish, even with a fast connection to a fast database. But if you delay making the request until a short period of time after the last keystroke has elapsed, then you have a better chance of hitting that sweet spot where the user has typed all they want to (or all they know!) and is just starting to wait for autocomplete to kick in. Play with the delay - a half-second might be appropriate for impatient touch-typists, but if your users are a bit more relaxed... or your database is a bit more slow... then you may get better results with a 2-3 second delay, or even longer. The most important part of this technique though, is that you reset the timer on every keystroke.
And unless you expect database requests to actually hang, don't bother trying to allow multiple concurrent requests. If a request is currently in-progress, wait for it to complete before making another one.
There are many warnings all over the net about using Thread.Abort. I would recommend avoiding it unless it's really needed, which in this case, I don't think it is. You'd be better off just implementing a one-shot timer, with maybe a half-second timeout, and resetting it on each keystroke. This way your expensive operation would only occur after a half-second or more (or whatever length you choose) of user inactivity.
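To make that concrete, here is a minimal sketch of the one-shot timer idea using a System.Windows.Forms.Timer and the txt control from the question; StartLookup is a placeholder for however the lookup actually gets kicked off (it is not part of the original code):

public class AutoCompleteBox : UserControl
{
    // half-second quiet period; tune this to your users and your database
    private readonly System.Windows.Forms.Timer _debounce =
        new System.Windows.Forms.Timer { Interval = 500 };

    public AutoCompleteBox()
    {
        _debounce.Tick += (s, e) =>
        {
            _debounce.Stop();          // one-shot: fire only once per quiet period
            StartLookup(txt.Text);     // placeholder for the actual query kick-off
        };
    }

    private void txt_TextChanged(object sender, EventArgs e)
    {
        // restarting the timer on every keystroke resets the countdown,
        // so the lookup only runs after the user pauses typing
        _debounce.Stop();
        _debounce.Start();
    }
}

Because the Windows Forms timer ticks on the UI thread, the handler can read txt.Text and hand the work off to a background thread without any extra marshalling.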
You might want to have a look at An Introduction to Programming with C# Threads - Andrew D. Birrell. He outlines some of the best practices surrounding threading in C#.
On page 4 he says:
When you look at the “System.Threading” namespace, you will (or should) feel daunted by the range of choices facing you: “Monitor” or “Mutex”; “Wait” or “AutoResetEvent”; “Interrupt” or “Abort”? Fortunately, there’s a simple answer: use the “lock” statement, the “Monitor” class, and the “Interrupt” method. Those are the features that I’ll use for most of the rest of the paper. For now, you should ignore the rest of “System.Threading”, though I’ll outline it for you in section 9.
No, I would avoid ever calling Thread.Abort on your own code. You want your own background thread to complete normally and unwind its stack naturally. The only time I might consider calling Thread.Abort is in a scenario where my code is hosting foreign code on another thread (such as a plugin scenario) and I really want to abort the foreign code.
Instead, in this case, you might consider simply versioning each background request. In the callback, ignore responses that are "out-of-date" since server responses may return in the wrong order. I wouldn't worry too much about aborting a request that's already been sent to the database. If you find your database isn't responsive or is being overwhelmed by too many requests, then consider also using a timer as others have suggested.
Use Thread.Abort only as a last-resort measure when you are exiting the application and KNOW that all IMPORTANT resources are released safely.
Otherwise, don't do it. It's even worse than
try
{
    // do stuff
}
catch { } // gulp the exception, don't do anything about it
safety net...
Related
I have a singleton component that manages some information blocks. An information block is a piece of calculated information identified by some characteristics (concretely, an Id and a time period). These calculations may take a few seconds. All information blocks are stored in a collection.
Several other consumers use these information blocks. The calculation should start when the first request for a given Id and time period comes in. I had the following flow in mind:
The first consumer requests the data identified by Id and time period.
The component checks whether the information block already exists.
If not: create the information block, put it into the collection, and start the calculation in a background task. If yes: take it from the collection.
After that the flow goes to the information block:
When the calculation is already finished (by a former call), a callback from the consumer is called with the result of the calculation.
When the calculation is still in process, the callback is called when the calculation is finished.
So far, so good.
The critical section comes when a second (or any subsequent) call arrives while the calculation is still running. The idea is that the calculation method holds each consumer's callback, and when the calculation is finished, all consumers' callbacks are called.
public class SingletonInformationService
{
    private readonly Collection<InformationBlock> blocks = new();
    private object syncObject = new();

    public void GetInformationBlock(Guid id, TimePeriod timePeriod,
                                    Action<InformationBlock> callOnFinish)
    {
        InformationBlock block = null;

        lock (syncObject)
        {
            // check whether the block already exists
            block = blocks.SingleOrDefault(b => b.Id ...);
            if (block == null)
            {
                block = new InformationBlock(...);
                blocks.Add(block);
            }
        }

        block?.BeginCalculation(callOnFinish);
    }
}
public class InformationBlock
{
    private Task calculationTask = null;
    private CalculationState isCalculating = CalculationState.Unknown;
    private List<Action<InformationBlock>> waitingRoom = new();

    internal void BeginCalculation(Action<InformationBlock> callOnFinish)
    {
        if (isCalculating == CalculationState.Finished)
        {
            callOnFinish(this);
            return;
        }
        else if (isCalculating == CalculationState.IsRunning)
        {
            waitingRoom.Add(callOnFinish);
            return;
        }

        // add the first call to the waitingRoom
        waitingRoom.Add(callOnFinish);

        isCalculating = CalculationState.IsRunning;

        calculationTask = Task.Run(() => { /* run the calculation */ })
            .ContinueWith(taskResult =>
            {
                // .. apply the calculation result to local properties
                this.Property1 = taskResult.Result.Property1;

                // set the state to mark this instance as complete
                isCalculating = CalculationState.Finished;

                // inform all calls about the result
                waitingRoom.ForEach(c => c(this));
                waitingRoom.Clear();
            }, TaskScheduler.FromCurrentSynchronizationContext());
    }
}
Is that approach a good idea? Do you see any failures or possible deadlocks? The method BeginCalculation might be called more than once while the calculation is running. Should I await the calculationTask?
To have deadlocks, you need a cycle: object A depends on object B, which in turn depends on object A again. As far as I can see, that's not your case, since the InformationBlock class doesn't access the service, but is only called by it.
The lock block is also very small, so it probably won't get you into trouble.
You could look at the thread-safe collections in the .NET standard libraries. They could simplify your code.
I suggest using a ConcurrentDictionary, because it's faster than iterating over the collection on every request.
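A rough sketch of that suggestion, assuming the blocks can be keyed by (Id, TimePeriod); the InformationBlock constructor arguments are placeholders, since the original lookup predicate is elided in the question:

public class SingletonInformationService
{
    // keyed lookups are O(1), instead of scanning the collection on every request
    private readonly ConcurrentDictionary<(Guid Id, TimePeriod Period), InformationBlock> blocks = new();

    public void GetInformationBlock(Guid id, TimePeriod timePeriod,
                                    Action<InformationBlock> callOnFinish)
    {
        // GetOrAdd is thread-safe; the value factory may run more than once under
        // contention, but only one block is ever stored and returned
        var block = blocks.GetOrAdd((id, timePeriod),
                                    key => new InformationBlock(key.Id, key.Period));
        block.BeginCalculation(callOnFinish);
    }
}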
I'm asking this primarily as a sanity check: In a C# (8.0) application I've got this bit of code, which spuriously fails with an "object is not synchronized" exception from Monitor.Pulse() (I've omitted irrelevant code for clarity):
// vanilla multiple-producer single-consumer queue stuff:
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messages) {
            // wait
            while (messages.Count == 0)
                Monitor.Wait(messages);
            // swap
            (messages, myMessages) = (myMessages, messages);
        }
        // process
        while (myMessages.Count > 0)
            DoStuff(myMessages.Dequeue());
    }
}

public void EnqueueMessage (...) {
    Message message = new Message(...);
    lock (messages) {
        messages.Enqueue(message);
        Monitor.Pulse(messages);
    }
}
I'm fairly new to C# and also I was stressed when I wrote that. Now I am reviewing that code to fix the exception and I'm immediately raising an eyebrow at the fact that I reassigned messages inside the consumer's lock.
I looked around and found Is it bad to overwrite a lock object if it is the last statement in the lock?, which validates my raised eyebrow.
However, I still don't have a lot of confidence (inexperience + stress), so, just to confirm: Is the following analysis of why this is broken correct?
If the following happens, in this order:
1. Stuff happens to be in the queue.
2. The consumer thread locks messages (and will skip the wait loop).
3. EnqueueMessage tries to lock messages and waits for the lock.
4. The consumer thread swaps messages and myMessages, then releases the lock.
5. EnqueueMessage takes the lock.
6. EnqueueMessage adds an item to messages and calls Monitor.Pulse(messages), except messages isn't the same object that it locked in step (3), since it was swapped out from under us in (4). Possible consequences include:
Calling Monitor.Pulse on a non-locked object (what used to be myMessages) -- hence the aforementioned exception.
Enqueueing to the wrong queue and the consequences of that.
Even weirder stuff if the consumer thread manages to complete another full loop cycle while EnqueueMessage is still somewhere in its lock{}.
Right? I'm pretty sure that's right, it feels very basic, but I just want to confirm because I'm completely burnt out right now.
Then, whether that's correct or not: Does the following proposed fix make sense?
It seems to me like the fix is super simple: Instead of using messages as the monitor object, just use some dedicated dummy object that won't be changed:
private readonly object messagesLock = new object();
private Queue<Message> messages = new Queue<Message>();

private void ConsumerThread () {
    Queue<Message> myMessages = new Queue<Message>();
    while (...) {
        lock (messagesLock) {
            while (messages.Count == 0)
                Monitor.Wait(messagesLock);
            (messages, myMessages) = (myMessages, messages);
        }
    }
    ...
}

public void EnqueueMessage (...) {
    ...;
    lock (messagesLock) {
        messages.Enqueue(...);
        Monitor.Pulse(messagesLock);
    }
}
Where the intent is to avoid any issues caused by swapping out the lock object in strange places.
And that should work... right?
Nobody has used Queue for multi-threading since .NET 2.0, probably 16 years ago (correct me if I am wrong on the dates).
It is trivial with concurrent collections:
BlockingCollection<Message> myMessages = new BlockingCollection<Message>();

private void ConsumerThread () {
    while (...)
    {
        var message = myMessages.Take();
    }
    ...
}

public void EnqueueMessage (Message msg) {
    ...;
    myMessages.Add(msg);
}
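If you also want the consumer loop to shut down cleanly, BlockingCollection supports CompleteAdding and GetConsumingEnumerable; a short sketch:

// consumer side: blocks while the collection is empty, ends once it is marked complete
foreach (var message in myMessages.GetConsumingEnumerable())
{
    DoStuff(message);            // DoStuff is the processing method from the question
}

// producer side, at shutdown:
myMessages.CompleteAdding();     // the consuming loop drains what's left and then exits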
I have 2 threads that are triggered at the same time and run in parallel. These 2 threads are going to be manipulating a string value, but I want to make sure that there are no data inconsistencies. For that I want to use a lock with Monitor.Pulse and Monitor.Wait. I used a method that I found in another question/answer, but whenever I run my program, the first thread gets stuck at the Monitor.Wait call. I think that's because the second thread has already "Pulsed" and "Waited". Here is some code to look at:
string currentInstruction;

public void nextInstruction()
{
    Action[] actions = {
        fetch,
        decode
    };
    Parallel.Invoke(actions);
    _pc++;
}

public void fetch()
{
    lock (irLock)
    {
        currentInstruction = "blah";
        GiveTurnTo(2, irLock);
        WaitTurn(1, irLock);
    }
    decodeEvent.WaitOne();
}

public void decode()
{
    decodeEvent.Set();
    lock (irLock)
    {
        WaitTurn(2, irLock);
        currentInstruction = "decoding...";
        GiveTurnTo(1, irLock);
    }
}

// Below are the methods I talked about before.

// Wait for turn to use lock object
public static void WaitTurn(int threadNum, object _lock)
{
    // While( not this thread's turn )
    while (threadInControl != threadNum)
    {
        // "Let go" of lock on SyncRoot and wait until
        // someone finishes their turn with it
        Monitor.Wait(_lock);
    }
}

// Pass turn over to other thread
public static void GiveTurnTo(int nextThreadNum, object _lock)
{
    threadInControl = nextThreadNum;
    // Notify waiting threads that it's someone else's turn
    Monitor.Pulse(_lock);
}
Any idea how to get 2 parallel threads to communicate (manipulate the same resources) within the same cycle using locks or anything else?
You want to run 2 pieces of code in parallel, but you lock both of them at the start using the same variable?
As nvoigt mentioned, that already sounds wrong. What you have to do is remove the lock from there. Use it only when you are about to access something exclusively.
By the way, "data inconsistencies" can be avoided by not having them in the first place. Do not use the currentInstruction field directly (is it a field?), but provide a thread-safe CurrentInstruction property.
private object _currentInstructionLock = new object();
private string _currentInstruction;

public string CurrentInstruction
{
    get { return _currentInstruction; }
    set
    {
        lock (_currentInstructionLock)
            _currentInstruction = value;
    }
}
Another thing is naming: local variable names starting with _ are bad style. Some people (including me) use the underscore prefix to distinguish private fields. Property names should start with a capital letter and local variable names with a lowercase one.
I have a Silverlight 5 app that depends on several asynchronous calls to web services to populate the attributes of newly created graphics. I am trying to find a way to handle those asynchronous calls synchronously. I have tried the suggestions listed in this article and this one. I have tried the many suggestions regarding the Dispatcher object. None have worked well, so I am clearly missing something...
Here is what I have:
public partial class MainPage : UserControl {

    AutoResetEvent waitHandle = new AutoResetEvent(false);

    private void AssignNewAttributeValuesToSplitPolygons(List<Graphic> splitGraphics)
    {
        for (int i = 0; i < splitGraphics.Count; i++)
        {
            Graphic g = splitGraphics[i];

            Thread lookupThread1 = new Thread(new ParameterizedThreadStart(SetStateCountyUtm));
            lookupThread1.Start(g);
            waitHandle.WaitOne();

            Thread lookupThread2 = new Thread(new ParameterizedThreadStart(SetCongressionalDistrict));
            lookupThread2.Start(g);
            waitHandle.WaitOne();
        }
    }

    private void SetStateCountyUtm(object graphic)
    {
        this.Dispatcher.BeginInvoke(delegate() {
            WrapperSetStateCountyUtm((Graphic)graphic);
        });
    }

    private void WrapperSetStateCountyUtm(Graphic graphic)
    {
        GISQueryEngine gisQEngine = new GISQueryEngine();
        gisQEngine.StateCountyUtmLookupCompletedEvent += new GISQueryEngine.StateCountyUtmLookupEventHandler(gisQEngine_StateCountyUtmLookupCompletedEvent);
        gisQEngine.PerformStateCountyUtmQuery(graphic.Geometry, graphic.Attributes["clu_number"].ToString());
    }

    void gisQEngine_StateCountyUtmLookupCompletedEvent(object sender, StateCountyUtmLookupCompleted stateCountyUtmLookupEventArgs)
    {
        string fred = stateCountyUtmLookupEventArgs.
        waitHandle.Set();
    }
}

public class GISQueryEngine
{
    public void PerformStateCountyUtmQuery(Geometry inSpatialQueryGeometry, string cluNumber)
    {
        QueryTask queryTask = new QueryTask(stateandCountyServiceURL);
        queryTask.ExecuteCompleted += new EventHandler<QueryEventArgs>(queryTask_StateCountyLookupExecuteCompleted);
        queryTask.Failed += new EventHandler<TaskFailedEventArgs>(queryTask_StateCountyLookupFailed);

        Query spatialQueryParam = new ESRI.ArcGIS.Client.Tasks.Query();
        spatialQueryParam.OutFields.AddRange(new string[] { "*" });
        spatialQueryParam.ReturnGeometry = false;
        spatialQueryParam.Geometry = inSpatialQueryGeometry;
        spatialQueryParam.SpatialRelationship = SpatialRelationship.esriSpatialRelIntersects;
        spatialQueryParam.OutSpatialReference = inSpatialQueryGeometry.SpatialReference;

        queryTask.ExecuteAsync(spatialQueryParam, cluNumber);
    }

    //and a whole bunch of other stuff i can add if needed
}
If I leave the 'waitHandle.WaitOne()' call uncommented, no code beyond that call is ever executed, at least as far as I can see with the step-through debugger. The application just hangs.
If I comment out the 'waitHandle.WaitOne()', everything runs just fine - except asynchronously. In other words, when the app reads the attribute values of the new graphics, those values may or may not be set depending on how quickly the async methods return.
Thanks for any help.
It's going to be rather difficult to work through a problem like this, as there are a few issues you'll need to address. Silverlight is asynchronous by nature, so forcing it to work synchronously is usually a very bad idea. You shouldn't do it unless it's absolutely necessary.
Is there a reason that you cannot wait for an async callback? From what I see, you appear to be making two calls for every state that is being rendered. I'm guessing the concern is that one call must complete before the second is made? In scenarios like this, I would kick off the first async call, and in its response kick off the second call, passing along the result you want to use from the first call. The second call's response then updates the provided references.
However, in cases where you've got a significant number of states to update, this results in a rather chatty and difficult-to-debug set of calls. I'd really be looking at creating a service call that can accept a set of state references and pass back a data structure with the values to be updated, all in one hit. (Or at least grouping them up to one call per state if the batch would be too time consuming and you want to render/interact with visual elements as they load up.)
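A rough sketch of that chaining approach, built from the members shown in the question; PerformCongressionalDistrictQuery and CongressionalDistrictLookupCompletedEvent are assumed counterparts of the state/county/UTM members and may not match the real API:

private void AssignNewAttributeValues(Graphic g)
{
    var engine = new GISQueryEngine();

    // first call: state/county/UTM lookup
    engine.StateCountyUtmLookupCompletedEvent += (s, stateArgs) =>
    {
        // apply the first result to the graphic here...

        // the second call is only kicked off once the first has completed,
        // so the two results can never race each other
        engine.CongressionalDistrictLookupCompletedEvent += (s2, districtArgs) =>
        {
            // apply the second result; the graphic is now fully populated
        };
        engine.PerformCongressionalDistrictQuery(g.Geometry);
    };

    engine.PerformStateCountyUtmQuery(g.Geometry, g.Attributes["clu_number"].ToString());
}

No wait handles or extra threads are needed: each step runs when its completion event fires, and the UI thread is never blocked.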
I am writing a Silverlight class library to abstract the interface to a WCF service. The WCF service provides a centralized logging service. The Silverlight class library provides a simplified log4net-like interface (logger.Info, logger.Warn, etc) for logging. From the class library I plan to provide options such that logged messages can be accumulated on the client and sent in "bursts" to the WCF logging service, rather than sending each message as it occurs. Generally, this is working well. The class library does accumulate messages and it does send collections of messages to the WCF logging service, where they are logged by an underlying logging framework.
My current problem is that the messages (from a single client with a single thread - all logging code is in button click events) are becoming interleaved in the logging service. I realize that at least part of this is probably due to the instancing (PerCall) or synchronization of the WCF logging service. However, it also seems that my messages are occurring in such rapid succession that the "bursts" of messages leaving on the async calls are actually "leaving" the client in a different order than they were generated.
I have tried to set up a producer consumer queue as described here with a slight (or should that be "slight" with air quotes) change that the Work method blocks (WaitOne) until the async call returns (i.e. until the async callback executes). The idea is that when one burst of messages is sent to the WCF logging service, the queue should wait until that burst has been processed before sending the next burst.
Maybe what I am trying to do is not feasible, or maybe I am trying to solve the wrong problem, (or maybe I just don't know what I am doing!).
Anyway, here is my producer/consumer queue code:
internal class ProducerConsumerQueue : IDisposable
{
    EventWaitHandle wh = new AutoResetEvent(false);
    Thread worker;
    readonly object locker = new object();
    Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
    LoggingService.ILoggingService loggingService;

    internal ProducerConsumerQueue(LoggingService.ILoggingService loggingService)
    {
        this.loggingService = loggingService;
        worker = new Thread(Work);
        worker.Start();
    }

    internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
    {
        //Queue the next burst of messages
        lock (locker)
        {
            logEventQueue.Enqueue(logEvents);
            //Is this Set conflicting with the WaitOne on the async call in Work?
            wh.Set();
        }
    }

    private void Work()
    {
        while (true)
        {
            ObservableCollection<LoggingService.LogEvent> events = null;

            lock (locker)
            {
                if (logEventQueue.Count > 0)
                {
                    events = logEventQueue.Dequeue();
                    if (events == null || events.Count == 0) return;
                }
            }

            if (events != null && events.Count > 0)
            {
                System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);

                //
                // This seems to be the key...
                // Send one burst of messages via an async call and wait until the async call completes.
                //
                loggingService.BeginLogEvents(events, ar =>
                {
                    try
                    {
                        loggingService.EndLogEvents(ar);
                        System.Diagnostics.Debug.WriteLine("3. Work - Back");
                        wh.Set();
                    }
                    catch (Exception ex)
                    {
                    }
                }, null);

                System.Diagnostics.Debug.WriteLine("2. Work - Waiting");
                wh.WaitOne();
                System.Diagnostics.Debug.WriteLine("4. Work - Finished");
            }
            else
            {
                wh.WaitOne();
            }
        }
    }

    #region IDisposable Members

    public void Dispose()
    {
        EnqueueLogEvents(null);
        worker.Join();
        wh.Close();
    }

    #endregion
}
In my test it is essentially called like this:
//Inside of LogManager, get the LoggingService and set up the queue.
ILoggingService loggingService = GetTheLoggingService();
ProducerConsumerQueue loggingQueue = new ProducerConsumerQueue(loggingService);

//Inside of client code, get a logger and log with it
ILog logger = LogManager.GetLogger("test");
for (int i = 0; i < 100; i++)
{
    logger.InfoFormat("logging message [{0}]", i);
}
Internally, logger/LogManager accumulates some number of logging messages (say 25) before adding that group of messages to the queue. Something like this:
internal void AddNewMessage(string message)
{
    lock (logMessages)
    {
        logMessages.Add(message);

        if (logMessages.Count >= 25)
        {
            ObservableCollection<LogMessage> messages = new ObservableCollection<LogMessage>(logMessages);
            logMessages.Clear();
            loggingQueue.EnqueueLogEvents(messages);
        }
    }
}
So, in this case I would expect to have 4 bursts of 25 messages each. Based on the Debug statements in my ProducerConsumerQueue code (maybe not the best way to debug this?), I would expect to see something like this:
Work - Sending 25 events
Work - Waiting
Work - Back
Work - Finished
Repeated 4 times.
Instead I am seeing something like this:
*1. Work - Sending 25 events
*2. Work - Waiting
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*4. Work - Finished
*1. Work - Sending 25 events
*2. Work - Waiting
*3. Work - Back
*3. Work - Back
*4. Work - Finished
(Added leading * so that the lines would not be autonumbered by SO)
I guess I would have expected that the queue would allow multiple bursts of messages to be added, but that it would completely process one burst (waiting on the async call to complete) before processing the next burst. It doesn't seem to be doing this. It does not seem to be reliably waiting on the completion of the async call. I do have a call to Set in EnqueueLogEvents; maybe that is cancelling the WaitOne from the Work method?
So, I have a few questions:
1. Does my explanation of what I am trying to accomplish make sense (is my explanation clear, not whether it is a good idea)?
2. Is what I am trying to do (transmit - from the client - the messages from a single thread, in the order that they occurred, completely processing one set of messages at a time) a good idea?
3. Am I close?
4. Can it be done?
5. Should it be done?
Thanks for any help!
[EDIT]
After more investigation and thanks to Brian's suggestion, we were able to get this working. I have copied the modified code. The key is that we are now using the "wh" wait handle strictly for ProducerConsumerQueue functions. Rather than using wh to wait for the async call to complete, we are now waiting on res.AsyncWaitHandle, which is returned by the BeginLogEvents call.
internal class LoggingQueue : IDisposable
{
    EventWaitHandle wh = new AutoResetEvent(false);
    Thread worker;
    readonly object locker = new object();
    bool working = false;
    Queue<ObservableCollection<LoggingService.LogEvent>> logEventQueue = new Queue<ObservableCollection<LoggingService.LogEvent>>();
    LoggingService.ILoggingService loggingService;

    internal LoggingQueue(LoggingService.ILoggingService loggingService)
    {
        this.loggingService = loggingService;
        worker = new Thread(Work);
        worker.Start();
    }

    internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
    {
        lock (locker)
        {
            logEventQueue.Enqueue(logEvents);
            //System.Diagnostics.Debug.WriteLine("EnqueueLogEvents calling Set");
            wh.Set();
        }
    }

    private void Work()
    {
        while (true)
        {
            ObservableCollection<LoggingService.LogEvent> events = null;

            lock (locker)
            {
                if (logEventQueue.Count > 0)
                {
                    events = logEventQueue.Dequeue();
                    if (events == null || events.Count == 0) return;
                }
            }

            if (events != null && events.Count > 0)
            {
                //System.Diagnostics.Debug.WriteLine("1. Work - Sending {0} events", events.Count);

                IAsyncResult res = loggingService.BeginLogEvents(events, ar =>
                {
                    try
                    {
                        loggingService.EndLogEvents(ar);
                        //System.Diagnostics.Debug.WriteLine("3. Work - Back");
                    }
                    catch (Exception ex)
                    {
                    }
                }, null);

                //System.Diagnostics.Debug.WriteLine("2. Work - Waiting");

                // Block until the async call returns. We are doing this so that we can be sure that all logging messages
                // are sent FROM the client in the order they were generated. ALSO, we don't want to interleave blocks of logging
                // messages from the same client by sending a new block of messages before the previous block has been
                // completely processed.
                res.AsyncWaitHandle.WaitOne();

                //System.Diagnostics.Debug.WriteLine("4. Work - Finished");
            }
            else
            {
                wh.WaitOne();
            }
        }
    }

    #region IDisposable Members

    public void Dispose()
    {
        EnqueueLogEvents(null);
        worker.Join();
        wh.Close();
    }

    #endregion
}
As I mentioned in my initial question and in my comments to Jon and Brian, I still don't know if doing all of this work is a good idea, but at least the code does what I wanted it to do. That means that I at least have the choice of doing it this way or some other way (such as restoring order after the fact) rather than not having the choice.
Can I suggest that there's a simple alternative to all this coordination? Have a sequence using a cheap monotonically increasing ID (e.g. with Interlocked.Increment()) so that no matter what order things happen at the client or server, you can regenerate the original ordering later on.
That should let you be efficient and flexible, sending whatever you want asynchronously without waiting for acknowledgement, but without losing the ordering.
Obviously that means the ID (or possibly a guaranteed-unique timestamp field) would need to be part of your WCF service, but if you control both ends that should be reasonably simple.
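For example, a minimal sketch of the stamping side; SequenceId and Message are assumed members that would have to be added to the LogEvent data contract:

// client side: stamp every event with a cheap, monotonically increasing ID
private static long _sequence = 0;

private LoggingService.LogEvent CreateLogEvent(string message)
{
    return new LoggingService.LogEvent
    {
        Message = message,                                  // assumed data-contract member
        SequenceId = Interlocked.Increment(ref _sequence)   // assumed data-contract member
    };
}

// server side (or any later query): restore the original ordering, e.g.
// var ordered = receivedEvents.OrderBy(e => e.SequenceId);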
The reason you are getting that kind of sequencing is that you are trying to use the same wait handle the producer-consumer queue is using, but for a different purpose. That is going to cause all kinds of chaos. At some point things will go from bad to worse and the queue will eventually get live-locked. You really should create a separate WaitHandle to wait for completion of the logging service. Or, if BeginLogEvents fits the standard pattern, it will return an IAsyncResult that contains a WaitHandle you can use instead of creating your own.
As a side note, I really do not like the producer-consumer pattern presented on the Albahari website. The problem is that it is not safe for multiple consumers (obviously that is of no concern to you). And I say that with all due respect, because I think his website is one of the best resources for multithreaded programming. If BlockingCollection is available to you then use that instead.
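For reference, here's a rough sketch of what the queue might look like on top of BlockingCollection, keeping the service calls from the question; this assumes BlockingCollection is actually available in your Silverlight version:

internal class LoggingQueue : IDisposable
{
    private readonly BlockingCollection<ObservableCollection<LoggingService.LogEvent>> queue =
        new BlockingCollection<ObservableCollection<LoggingService.LogEvent>>();
    private readonly LoggingService.ILoggingService loggingService;
    private readonly Thread worker;

    internal LoggingQueue(LoggingService.ILoggingService loggingService)
    {
        this.loggingService = loggingService;
        worker = new Thread(Work);
        worker.Start();
    }

    internal void EnqueueLogEvents(ObservableCollection<LoggingService.LogEvent> logEvents)
    {
        queue.Add(logEvents);   // thread-safe; no explicit lock or wait handle needed
    }

    private void Work()
    {
        // GetConsumingEnumerable blocks while the queue is empty and ends after CompleteAdding
        foreach (var events in queue.GetConsumingEnumerable())
        {
            IAsyncResult res = loggingService.BeginLogEvents(events, ar => loggingService.EndLogEvents(ar), null);
            res.AsyncWaitHandle.WaitOne();   // send one burst at a time, preserving order
        }
    }

    public void Dispose()
    {
        queue.CompleteAdding();
        worker.Join();
    }
}

The separate concerns are now obvious: BlockingCollection handles the producer/consumer hand-off, and the IAsyncResult wait handle handles waiting for the service call, so the two can no longer interfere with each other.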