NLog MappedDiagnosticsLogicalContext not working in async/await with ConfigureAwait(false) - c#

I am using NLog 4.3.5 and .Net framework 4.6.1
When I begin a server side operation I call:
NLog.MappedDiagnosticsLogicalContext.Set("OperationId", Guid.NewGuid());
This gets mapped through and appears in my log files. All is good.... or is it?
When reviewing my log files, I noticed that this operation id value doesn't seem to be working as I expected it to.
Example:
In thread 19 an operation begins and sets the context.
It uses .ConfigureAwait(false) on all await calls
It performs:
var tasks = items.Select(item => Task.Run(() => { /* do stuff */ }));
await Task.WhenAll(tasks).ConfigureAwait(false);
One of the threads used for these tasks is thread 31 (keep that in mind for later)
Meanwhile, in thread 36, a different server method is called and begins a new operation. Several log messages are written with its unique operation id.
This operation performs 2 different await calls with ConfigureAwait(false)
The next log statement occurs on thread 31. From then on, it logs the operation id that was created for the operation that began on thread 19!
I did not expect this to happen and am unsure of how it happened. But, as I look through my log history, I see that this type of thing has happened before.
I thought the logical call context was supposed to carry over. Is it my use of ConfigureAwait(false) that is causing this behavior? That is the only thing I can think of....

Found what I believe is the problem.
https://github.com/NLog/NLog/issues/934

You could work around this as follows:
using System;
using System.Runtime.Remoting.Messaging;
using System.Text;
using NLog;
using NLog.Config;
using NLog.LayoutRenderers;

public static class LogicalThreadContext
{
    private const string KeyPrefix = "NLog.LogicalThreadContext";

    private static string GetCallContextKey(string key)
    {
        return string.Format("{0}.{1}", KeyPrefix, key);
    }

    private static string GetCallContextValue(string key)
    {
        return CallContext.LogicalGetData(GetCallContextKey(key)) as string ?? string.Empty;
    }

    private static void SetCallContextValue(string key, string value)
    {
        CallContext.LogicalSetData(GetCallContextKey(key), value);
    }

    public static string Get(string item)
    {
        return GetCallContextValue(item);
    }

    public static string Get(string item, IFormatProvider formatProvider)
    {
        if ((formatProvider == null) && (LogManager.Configuration != null))
        {
            formatProvider = LogManager.Configuration.DefaultCultureInfo;
        }
        return string.Format(formatProvider, "{0}", GetCallContextValue(item));
    }

    public static void Set(string item, string value)
    {
        SetCallContextValue(item, value);
    }
}

[LayoutRenderer("mdlc2")]
public class LogicalThreadContextLayoutRenderer : LayoutRenderer
{
    // The item name to look up (a string, so it can hold the key name).
    [DefaultParameter]
    public string Name { get; set; }

    protected override void Append(StringBuilder builder, LogEventInfo logEvent)
    {
        builder.Append(LogicalThreadContext.Get(Name, null));
    }
}

// Register the layout renderer at startup
// (or in Application_Start for ASP.NET 4):
static void Main(string[] args)
{
    ConfigurationItemFactory.Default.LayoutRenderers
        .RegisterDefinition("mdlc2", typeof(LogicalThreadContextLayoutRenderer));
}
Usage in the config file:
${mdlc2:OperationId}
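And in the application code, roughly like this (a minimal sketch; the key name and the place where it is set are just the ones from the question, and the workaround class stores strings, so the Guid is converted):
// At the start of each server-side operation, set the id through the workaround
// class instead of MappedDiagnosticsLogicalContext:
LogicalThreadContext.Set("OperationId", Guid.NewGuid().ToString());
logger.Info("Operation started"); // ${mdlc2:OperationId} now renders this value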

Related

Persist headers when redelivering a RabbitMq message using MassTransit

Purpose: I need to keep track of headers when I redeliver a message.
Configuration:
RabbitMQ 3.7.9
Erlang 21.2
MassTransit 5.1.5
MySql 8.0 for the Quartz database
What I've tried without success:
first attempt:
await context.Redeliver(TimeSpan.FromSeconds(5), (consumeCtx, sendCtx) =>
{
    if (consumeCtx.Headers.TryGetHeader("SenderApp", out object sender))
    {
        sendCtx.Headers.Set("SenderApp", sender);
    }
}).ConfigureAwait(false);
second attempt:
protected Task ScheduleSend(Uri rabbitUri, double delay)
{
    return GetBus().ScheduleSend<IProcessOrganisationUpdate>(
        rabbitUri,
        TimeSpan.FromSeconds(delay),
        _Data,
        new HeaderPipe(_SenderApp, 0));
}

public class HeaderPipe : IPipe<SendContext>
{
    private readonly byte _Priority;
    private readonly string _SenderApp;

    public HeaderPipe(byte priority)
    {
        _Priority = priority;
        _SenderApp = Assembly.GetEntryAssembly()?.GetName()?.Name ?? "Default";
    }

    public HeaderPipe(string senderApp, byte priority)
    {
        _Priority = priority;
        _SenderApp = senderApp;
    }

    public void Probe(ProbeContext context)
    { }

    public Task Send(SendContext context)
    {
        context.Headers.Set("SenderApp", _SenderApp);
        context.SetPriority(_Priority);
        return Task.CompletedTask;
    }
}
Expected: FinQuest.Robot.DBProcess
Result: null
I log my SenderApp in the Consume method. The first time it looks like this:
Initial trigger checking returns true for FinQuest.Robots.OrganisationLinkedinFeed (id: 001ae487-ad3d-4619-8d34-367881ec91ba, sender: FinQuest.Robot.DBProcess, modif: LinkedIn)
and looks like this after the redelivery
Initial trigger checking returns true for FinQuest.Robots.OrganisationLinkedinFeed (id: 001ae487-ad3d-4619-8d34-367881ec91ba, sender: , modif: LinkedIn)
What am I doing wrong? I don't want to use the Retry feature because of its maximum number of retries (I don't want to be limited).
Thanks in advance.
There is a method, used by the redelivery filter, that you might want to use:
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit/SendContextExtensions.cs#L90
public static void TransferConsumeContextHeaders(this SendContext sendContext, ConsumeContext consumeContext)
In your code, you would use it:
await context.Redeliver(TimeSpan.FromSeconds(5), (consumeCtx, sendCtx) =>
{
    sendCtx.TransferConsumeContextHeaders(consumeCtx);
});
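To confirm the header survives the redelivery, you could read it back in the consumer, something like this (a rough sketch; the consumer class name is a placeholder based on the question's message type):
public class ProcessOrganisationUpdateConsumer : IConsumer<IProcessOrganisationUpdate>
{
    public Task Consume(ConsumeContext<IProcessOrganisationUpdate> context)
    {
        // After TransferConsumeContextHeaders, "SenderApp" should come back
        // on the redelivered message as well.
        context.Headers.TryGetHeader("SenderApp", out object sender);
        Console.WriteLine($"sender: {sender}");
        return Task.CompletedTask;
    }
}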

Logger on multithreaded application causes exception because file is used by another process c# [duplicate]

What is the best approach to creating a simple multithread safe logging class? Is something like this sufficient? How would I purge the log when it's initially created?
public class Logging
{
    public Logging()
    {
    }

    public void WriteToLog(string message)
    {
        object locker = new object();
        lock (locker)
        {
            StreamWriter SW;
            SW = File.AppendText("Data\\Log.txt");
            SW.WriteLine(message);
            SW.Close();
        }
    }
}

public partial class MainWindow : Window
{
    public static MainWindow Instance { get; private set; }
    public Logging Log { get; set; }

    public MainWindow()
    {
        Instance = this;
        Log = new Logging();
    }
}
Here is a sample for a Log implemented with the Producer/Consumer pattern (with .Net 4) using a BlockingCollection. The interface is:
namespace Log
{
    public interface ILogger
    {
        void WriteLine(string msg);
        void WriteError(string errorMsg);
        void WriteError(string errorObject, string errorAction, string errorMsg);
        void WriteWarning(string errorObject, string errorAction, string errorMsg);
    }
}
and the full class code is here:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Log
{
    // Reentrant Logger written with the Producer/Consumer pattern.
    // It creates a thread that receives write commands through a queue (a BlockingCollection).
    // The user of this log just calls Logger.WriteLine() and the log is transparently written asynchronously.
    public class Logger : ILogger
    {
        BlockingCollection<Param> bc = new BlockingCollection<Param>();

        // The constructor creates the thread that waits for work on .GetConsumingEnumerable()
        public Logger()
        {
            Task.Factory.StartNew(() =>
            {
                foreach (Param p in bc.GetConsumingEnumerable())
                {
                    switch (p.Ltype)
                    {
                        case Log.Param.LogType.Info:
                            const string LINE_MSG = "[{0}] {1}";
                            Console.WriteLine(String.Format(LINE_MSG, LogTimeStamp(), p.Msg));
                            break;
                        case Log.Param.LogType.Warning:
                            const string WARNING_MSG = "[{3}] * Warning {0} (Action {1} on {2})";
                            Console.WriteLine(String.Format(WARNING_MSG, p.Msg, p.Action, p.Obj, LogTimeStamp()));
                            break;
                        case Log.Param.LogType.Error:
                            const string ERROR_MSG = "[{3}] *** Error {0} (Action {1} on {2})";
                            Console.WriteLine(String.Format(ERROR_MSG, p.Msg, p.Action, p.Obj, LogTimeStamp()));
                            break;
                        case Log.Param.LogType.SimpleError:
                            const string ERROR_MSG_SIMPLE = "[{0}] *** Error {1}";
                            Console.WriteLine(String.Format(ERROR_MSG_SIMPLE, LogTimeStamp(), p.Msg));
                            break;
                        default:
                            Console.WriteLine(String.Format(LINE_MSG, LogTimeStamp(), p.Msg));
                            break;
                    }
                }
            });
        }

        ~Logger()
        {
            // Free the writing thread
            bc.CompleteAdding();
        }

        // Just call this method to log something (it returns quickly because it only queues the work with bc.Add(p))
        public void WriteLine(string msg)
        {
            Param p = new Param(Log.Param.LogType.Info, msg);
            bc.Add(p);
        }

        public void WriteError(string errorMsg)
        {
            Param p = new Param(Log.Param.LogType.SimpleError, errorMsg);
            bc.Add(p);
        }

        public void WriteError(string errorObject, string errorAction, string errorMsg)
        {
            Param p = new Param(Log.Param.LogType.Error, errorMsg, errorAction, errorObject);
            bc.Add(p);
        }

        public void WriteWarning(string errorObject, string errorAction, string errorMsg)
        {
            Param p = new Param(Log.Param.LogType.Warning, errorMsg, errorAction, errorObject);
            bc.Add(p);
        }

        string LogTimeStamp()
        {
            DateTime now = DateTime.Now;
            return now.ToShortTimeString();
        }
    }
}
In this sample, the internal Param class used to pass information to the writing thread through the BlockingCollection is:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Log
{
    internal class Param
    {
        internal enum LogType { Info, Warning, Error, SimpleError };

        internal LogType Ltype { get; set; } // Type of log
        internal string Msg { get; set; }    // Message
        internal string Action { get; set; } // Action when error or warning occurs (optional)
        internal string Obj { get; set; }    // Object that was processed when error or warning occurs (optional)

        internal Param()
        {
            Ltype = LogType.Info;
            Msg = "";
        }

        internal Param(LogType logType, string logMsg)
        {
            Ltype = logType;
            Msg = logMsg;
        }

        internal Param(LogType logType, string logMsg, string logAction, string logObj)
        {
            Ltype = logType;
            Msg = logMsg;
            Action = logAction;
            Obj = logObj;
        }
    }
}
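For example, the logger above could be used like this (a minimal sketch; the messages are made up):
ILogger log = new Logger();
log.WriteLine("Application started");                            // queued and written by the background thread
log.WriteWarning("OrderQueue", "Enqueue", "Queue is 90% full");
log.WriteError("Order 42", "Save", "Connection timed out");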
No, you're creating a new lock object every time the method is called. If you want to ensure that only one thread at a time can execute the code in that function, then move locker out of the function, either to an instance or a static member. If this class is instantiated every time an entry is to be written, then locker should probably be static.
public class Logging
{
    public Logging()
    {
    }

    private static readonly object locker = new object();

    public void WriteToLog(string message)
    {
        lock (locker)
        {
            StreamWriter SW;
            SW = File.AppendText("Data\\Log.txt");
            SW.WriteLine(message);
            SW.Close();
        }
    }
}
Creating a thread-safe logging implementation using a single monitor (lock) is unlikely to yield positive results. While you could do this correctly, and several answers have been posted showing how, it would have a dramatic negative effect on performance since each object doing logging would have to synchronize with every other object doing logging. Get more than one or two threads doing this at the same time and suddenly you may spend more time waiting than processing.
The other problem you run into with the single monitor approach is that you have no guarantee that threads will acquire the lock in the order they initially requested it. So, the log entries may essentially appear out of order. That can be frustrating if you're using this for trace logging.
Multi-threading is hard. Approaching it lightly will always lead to bugs.
One approach to this problem would be to implement the Producer/Consumer pattern, wherein callers to the logger only need to write to a memory buffer and return immediately rather than wait for the logger to write to disk, thus drastically reducing the performance penalty. The logging framework would, on a separate thread, consume the log data and persist it.
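A minimal sketch of that idea, assuming a BlockingCollection as the memory buffer (a fuller example appears earlier on this page; class and file path are illustrative only):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

public class AsyncFileLogger : IDisposable
{
    private readonly BlockingCollection<string> _queue = new BlockingCollection<string>();
    private readonly Task _consumer;

    public AsyncFileLogger(string path)
    {
        // Single consumer thread: the only place that touches the file, so no lock is needed.
        _consumer = Task.Factory.StartNew(() =>
        {
            foreach (string line in _queue.GetConsumingEnumerable())
                File.AppendAllText(path, line + Environment.NewLine);
        }, TaskCreationOptions.LongRunning);
    }

    // Producers just enqueue and return immediately.
    public void WriteToLog(string message) => _queue.Add(message);

    public void Dispose()
    {
        _queue.CompleteAdding(); // stop the consumer once the queue drains
        _consumer.Wait();
    }
}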
you need to declare the sync object at the class level:
public class Logging
{
    private static readonly object locker = new object();

    public Logging()
    {
    }

    public void WriteToLog(string message)
    {
        lock (locker)
        {
            StreamWriter sw;
            sw = File.AppendText("Data\\Log.txt");
            sw.WriteLine(message);
            sw.Close();
            sw.Dispose();
        }
    }
}
Might be better to declare your logging class as static, and the locking object as @Adam Robinson suggested.
The question uses File.AppendText which is not an asynchronous method, and other answers correctly show that using a lock is the way to do it.
However, in many real-world cases, using an asynchronous method is preferred so the caller doesn't have to wait for the write to complete. A lock isn't useful in that case, as it blocks the thread, and await is not allowed inside a lock block.
In such situation, you could use Semaphores (SemaphoreSlim class in C#) to achieve the same thing, but with the bonus of being asynchronous and allowing asynchronous functions to be called inside the lock zone.
Here's a quick sample of using a SemaphoreSlim as an asynchronous lock:
// a semaphore as a private field in the Logging class:
private static SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

// Inside the WriteToLog method:
try
{
    await semaphore.WaitAsync();
    // Code to write the log to file asynchronously
}
finally
{
    semaphore.Release();
}
Please note that it's good practice to always use semaphores in try..finally blocks, so even if the code throws an exception, the semaphore gets released correctly.
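Putting the pieces together, an asynchronous WriteToLog could look roughly like this (a sketch based on the snippet above, not a drop-in replacement; the file path and class shape are taken from the question):
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public class Logging
{
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(1, 1);

    public async Task WriteToLogAsync(string message)
    {
        try
        {
            await semaphore.WaitAsync();
            // Only one writer at a time gets past the semaphore,
            // and waiting callers do not block their threads.
            using (StreamWriter sw = File.AppendText("Data\\Log.txt"))
            {
                await sw.WriteLineAsync(message);
            }
        }
        finally
        {
            semaphore.Release();
        }
    }
}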

Close task before run again

I am working on real-time search. At the moment, in a property setter that is bound to an edit text, I call a method which calls the API and then fills the list with the result. It looks like this:
private string searchPhrase;

public string SearchPhrase
{
    get => searchPhrase;
    set
    {
        SetProperty(ref searchPhrase, value);
        RunOnMainThread(SearchResult.Clear);
        isAllFriends = false;
        currentPage = 0;
        RunInAsync(LoadData);
    }
}

private async Task LoadData()
{
    var response = await connectionRepository.GetConnections(currentPage,
        pageSize, searchPhrase);
    foreach (UserConnection uc in response)
    {
        if (uc.Type != UserConnection.TypeEnum.Awaiting)
        {
            RunOnMainThread(() =>
                SearchResult.Add(new ConnectionUser(uc)));
        }
    }
}
But this approach is useless, because it completely mashes up the result list if text is entered quickly. To prevent this, I want to run this method asynchronously from the property, but if the property changes again I want to cancel the previous task and start it again. How can I achieve this?
Some information from this thread:
Create a CancellationTokenSource:
var ctc = new CancellationTokenSource();
Create a method doing the async work:
private static Task ExecuteLongCancellableMethod(CancellationToken token)
{
    return Task.Run(() =>
    {
        token.ThrowIfCancellationRequested();
        // more code here
        // check again if this task is canceled
        token.ThrowIfCancellationRequested();
        // more code
    });
}
It is important to have these cancellation checks in the code.
Execute the function:
var cancellable = ExecuteLongCancellableMethod(ctc.Token);
To stop the long-running execution, use
ctc.Cancel();
For further details please consult the linked thread.
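Applied to the property from the question, that could look roughly like this (a sketch with assumed field names; it also assumes RunInAsync accepts a task-returning lambda, and LoadData is changed to take and check the token):
private CancellationTokenSource searchCts;

public string SearchPhrase
{
    get => searchPhrase;
    set
    {
        SetProperty(ref searchPhrase, value);

        // Cancel the previous search, if any, and start a new one with a fresh token.
        searchCts?.Cancel();
        searchCts = new CancellationTokenSource();
        CancellationToken token = searchCts.Token;

        RunInAsync(() => LoadData(token));
    }
}

private async Task LoadData(CancellationToken token)
{
    var response = await connectionRepository.GetConnections(currentPage, pageSize, searchPhrase);
    token.ThrowIfCancellationRequested(); // drop results from an outdated search
    foreach (UserConnection uc in response)
    {
        if (uc.Type != UserConnection.TypeEnum.Awaiting)
        {
            RunOnMainThread(() => SearchResult.Add(new ConnectionUser(uc)));
        }
    }
}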
This question can be answered in many different ways. However IMO I would look at creating a class that
Delays itself automatically for X milliseconds before performing the search
Has the ability to be cancelled at any time as the search request changes.
Realistically this will change your code design, and should encapsulate the logic for both 1 & 2 in a separate class.
My initial thoughts are below (none of this is tested, and it is mostly pseudocode).
class ConnectionSearch
{
    public ConnectionSearch(string phrase, Action<object> addAction)
    {
        _searchPhrase = phrase;
        _addAction = addAction;
        _cancelSource = new CancellationTokenSource();
    }

    readonly string _searchPhrase = null;
    readonly Action<object> _addAction;
    readonly CancellationTokenSource _cancelSource;

    public void Cancel()
    {
        _cancelSource?.Cancel();
    }

    public async void PerformSearch()
    {
        await Task.Delay(300); // wait 300ms between keystrokes

        if (_cancelSource.IsCancellationRequested)
            return;

        // continue your code here, keep checking for cancellation
        // loop over your dataset
        // call _addAction?.Invoke(uc);
    }
}
This is basic, really just encapsulates the logic for both points 1 & 2, you will need to adapt the code to do the search.
Next, you could change your property to cancel a previously running instance and then start another instance immediately after, something like below.
ConnectionSearch connectionSearch;
string searchPhrase;

public string SearchPhrase
{
    get => searchPhrase;
    set
    {
        // do your setter work
        if (connectionSearch != null)
        {
            connectionSearch.Cancel();
        }
        connectionSearch = new ConnectionSearch(value, addConnectionUser);
        connectionSearch.PerformSearch();
    }
}

void addConnectionUser(object uc)
{
    // perform your add logic here
}
The code is pretty straightforward; however, you can see that the setter simply cancels an existing request and then creates a new one. You could put some disposal/cleanup logic in place, but this should get you started.
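The search part you would still have to fill in; one way it could look, reusing the repository call from the question (purely illustrative, and it assumes the repository and page size are made available to the class):
public async void PerformSearch()
{
    await Task.Delay(300); // debounce between keystrokes
    if (_cancelSource.IsCancellationRequested)
        return;

    var response = await connectionRepository.GetConnections(0, pageSize, _searchPhrase);
    foreach (UserConnection uc in response)
    {
        if (_cancelSource.IsCancellationRequested)
            return; // a newer search has started; drop these results
        if (uc.Type != UserConnection.TypeEnum.Awaiting)
            _addAction?.Invoke(uc);
    }
}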
You can implement some sort of debouncer which encapsulates the logic of task result debouncing, i.e. it ensures that if you run many tasks, only the latest task's result will be used:
public class TaskDebouncer<TResult>
{
    public delegate void TaskDebouncerHandler(TResult result, object sender);

    public event TaskDebouncerHandler OnCompleted;
    public event TaskDebouncerHandler OnDebounced;

    private Task _lastTask;
    private object _lock = new object();

    public void Run(Task<TResult> task)
    {
        lock (_lock)
        {
            _lastTask = task;
        }
        task.ContinueWith(t =>
        {
            if (t.IsFaulted)
                throw t.Exception;
            lock (_lock)
            {
                if (_lastTask == task)
                {
                    OnCompleted?.Invoke(t.Result, this);
                }
                else
                {
                    OnDebounced?.Invoke(t.Result, this);
                }
            }
        });
    }

    public async Task WaitLast()
    {
        await _lastTask;
    }
}
Then, you can just do:
private readonly TaskDebouncer<UserConnection[]> _connectionsDebouncer = new TaskDebouncer<UserConnection[]>();

public ClassName()
{
    _connectionsDebouncer.OnCompleted += OnConnectionUpdate;
}

public void OnConnectionUpdate(UserConnection[] connections, object sender)
{
    RunOnMainThread(SearchResult.Clear);
    isAllFriends = false;
    currentPage = 0;
    foreach (var conn in connections)
        RunOnMainThread(() => SearchResult.Add(new ConnectionUser(conn)));
}

private string searchPhrase;

public string SearchPhrase
{
    get => searchPhrase;
    set
    {
        SetProperty(ref searchPhrase, value);
        _connectionsDebouncer.Run(LoadData());
    }
}

private async Task<UserConnection[]> LoadData()
{
    var response = await connectionRepository
        .GetConnections(currentPage, pageSize, searchPhrase);
    return response
        .Where(conn => conn.Type != UserConnection.TypeEnum.Awaiting)
        .ToArray();
}
It is not entirely clear what the RunInAsync and RunOnMainThread methods are.
I guess you don't actually need them.

Locking a Resource and generating Time Stamps according to lock time

Suppose that I would like to implement a synchronization primitive which generates a time stamp that is going to be used in a synchronization protocol. The time stamp would be such, that for a given key used to lock a resource, no two threads would be able to obtain the same time stamp value.
A possible implementation of the latter specification would be:
namespace InfinityLabs.PowersInfinity.BCL.Synchronization
{
    public static class TimeStampMonitor
    {
        private static readonly IDictionary<object, long> TimeStamps;

        static TimeStampMonitor()
        {
            TimeStamps = new Dictionary<object, long>();
        }

        #region API

        public static long Enter(object key)
        {
            var lockTaken = false;
            Monitor.Enter(key, ref lockTaken);
            ThrowIfLockNotAcquired(key, lockTaken);

            var timeStamp = GetCurrentTimeStamp();
            Thread.Sleep(1);
            TimeStamps.Add(key, timeStamp);
            return timeStamp;
        }

        public static void Exit(object key)
        {
            var lockTaken = false;
            Monitor.Enter(key, ref lockTaken);
            try
            {
                ThrowIfInvalidKey(key);
                TimeStamps.Remove(key);
            }
            finally
            {
                if (lockTaken)
                    Monitor.Exit(key);
            }
        }

        public static long GetTimeStampOrThrow(object key)
        {
            TryEnterOrThrow(key);
            var timeStamp = GetTimeStamp(key);
            return timeStamp;
        }

        public static void TryEnterOrThrow(object key)
        {
            var lockTaken = false;
            try
            {
                Monitor.Enter(key, ref lockTaken);
                ThrowIfLockNotAcquired(key, lockTaken);
                ThrowIfInvalidKey(key);
            }
            catch (SynchronizationException)
            {
                throw;
            }
            catch (Exception)
            {
                if (lockTaken)
                    Monitor.Exit(key);
                throw;
            }
        }

        #endregion

        #region Time Stamping

        private static long GetCurrentTimeStamp()
        {
            var timeStamp = DateTime.Now.ToUnixTime();
            return timeStamp;
        }

        private static long GetTimeStamp(object key)
        {
            var timeStamp = TimeStamps[key];
            return timeStamp;
        }

        #endregion

        #region Validation

        private static void ThrowIfInvalidKey(object key, [CallerMemberName] string methodName = null)
        {
            if (!TimeStamps.ContainsKey(key))
                throw new InvalidOperationException($"Must invoke '{nameof(Enter)}' prior to invoking '{methodName}'. Key: '{key}'");
        }

        private static void ThrowIfLockNotAcquired(object key, bool lockTaken)
        {
            if (!lockTaken)
                throw new SynchronizationException($"Unable to acquire lock for key '{key}'");
        }

        #endregion
    }
}
Mind that the two API methods TryEnterOrThrow and GetTimeStampOrThrow are intended to be used by consuming classes as guard methods which prevent poorly written code from breaking the critical section's atomicity. The latter method also returns a previously acquired time stamp value for a given key. The time stamp is maintained as long as its owner has not exited the critical section.
I have been running all possible scenarios through my mind, and I can't seem to break it, not only atomically but also by trying to misuse it.
I guess my question is, since this is one of my very few attempts at writing synchronization primitives: is this code foolproof, and does it provide atomicity?
Help would be much appreciated!
Take a look at https://stackoverflow.com/a/14369695/224370
You can create a unique timestamp without locking. Code below.
Your question differs slightly in that you want a unique timestamp per key, but if you have a globally unique timestamp then you automatically have a unique timestamp per key without any extra effort. So I don't think you really need a dictionary for this:
public class HiResDateTime
{
    private static long lastTimeStamp = DateTime.UtcNow.Ticks;

    public static long UtcNowTicks
    {
        get
        {
            long original, newValue;
            do
            {
                original = lastTimeStamp;
                long now = DateTime.UtcNow.Ticks;
                newValue = Math.Max(now, original + 1);
            } while (Interlocked.CompareExchange(
                ref lastTimeStamp, newValue, original) != original);
            return newValue;
        }
    }
}
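A quick illustration of the guarantee (a minimal sketch; each successful read returns a value at least one tick greater than any previous one, even across threads):
// Two back-to-back reads, possibly from different threads, can never be equal:
long first = HiResDateTime.UtcNowTicks;
long second = HiResDateTime.UtcNowTicks;
System.Diagnostics.Debug.Assert(second > first); // the getter always bumps by at least one tick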

Async result handle to return to callers

I have a method that queues some work to be executed asynchronously. I'd like to return some sort of handle to the caller that can be polled, waited on, or used to fetch the return value from the operation, but I can't find a class or interface that's suitable for the task.
BackgroundWorker comes close, but it's geared to the case where the worker has its own dedicated thread, which isn't true in my case. IAsyncResult looks promising, but the provided AsyncResult implementation is also unusable for me. Should I implement IAsyncResult myself?
Clarification:
I have a class that conceptually looks like this:
class AsyncScheduler
{
    private List<object> _workList = new List<object>();
    private bool _finished = false;

    public SomeHandle QueueAsyncWork(object workObject)
    {
        // simplified for the sake of example
        _workList.Add(workObject);
        return SomeHandle;
    }

    private void WorkThread()
    {
        // simplified for the sake of example
        while (!_finished)
        {
            foreach (object workObject in _workList)
            {
                if (!workObject.IsFinished)
                {
                    workObject.DoSomeWork();
                }
            }
            Thread.Sleep(1000);
        }
    }
}
The QueueAsyncWork function pushes a work item onto the polling list for a dedicated work thread, of which there will only ever be one. My problem is not with writing the QueueAsyncWork function--that's fine. My question is, what do I return to the caller? What should SomeHandle be?
The existing .Net classes for this are geared towards the situation where the asynchronous operation can be encapsulated in a single method call that returns. That's not the case here--all of the work objects do their work on the same thread, and a complete work operation might span multiple calls to workObject.DoSomeWork(). In this case, what's a reasonable approach for offering the caller some handle for progress notification, completion, and getting the final outcome of the operation?
Yes, implement IAsyncResult (or rather, an extended version of it, to provide for progress reporting).
public class WorkObjectHandle : IAsyncResult, IDisposable
{
    private int _percentComplete;
    private ManualResetEvent _waitHandle;

    public int PercentComplete
    {
        get { return _percentComplete; }
        set
        {
            if (value < 0 || value > 100) throw new ArgumentOutOfRangeException("value", "Percent complete should be between 0 and 100");
            if (_percentComplete == 100) throw new InvalidOperationException("Already complete");
            if (value == 100 && Complete != null) Complete(this, new CompleteArgs(WorkObject));
            _percentComplete = value;
        }
    }

    public IWorkObject WorkObject { get; private set; }

    public object AsyncState { get { return WorkObject; } }

    public bool IsCompleted { get { return _percentComplete == 100; } }

    public event EventHandler<CompleteArgs> Complete; // CompleteArgs in a usual pattern
    // you may also want to have a Progress event

    public bool CompletedSynchronously { get { return false; } }

    public WaitHandle AsyncWaitHandle
    {
        get
        {
            // initialize it lazily
            if (_waitHandle == null)
            {
                ManualResetEvent newWaitHandle = new ManualResetEvent(false);
                if (Interlocked.CompareExchange(ref _waitHandle, newWaitHandle, null) != null)
                    newWaitHandle.Dispose();
            }
            return _waitHandle;
        }
    }

    public void Dispose()
    {
        if (_waitHandle != null)
            _waitHandle.Dispose();
        // dispose _workObject too, if needed
    }

    public WorkObjectHandle(IWorkObject workObject)
    {
        WorkObject = workObject;
        _percentComplete = 0;
    }
}
public class AsyncScheduler
{
    private Queue<WorkObjectHandle> _workQueue = new Queue<WorkObjectHandle>();
    private bool _finished = false;

    public WorkObjectHandle QueueAsyncWork(IWorkObject workObject)
    {
        var handle = new WorkObjectHandle(workObject);
        lock (_workQueue)
        {
            _workQueue.Enqueue(handle);
        }
        return handle;
    }

    private void WorkThread()
    {
        // simplified for the sake of example
        while (!_finished)
        {
            WorkObjectHandle handle;
            lock (_workQueue)
            {
                if (_workQueue.Count == 0) break;
                handle = _workQueue.Dequeue();
            }
            try
            {
                var workObject = handle.WorkObject;
                // do whatever you want with workObject, set handle.PercentComplete, etc.
            }
            finally
            {
                handle.Dispose();
            }
        }
    }
}
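From the caller's side, the handle can then be used for completion notification or for polling progress; a small sketch using the members defined above (myWorkObject is just a placeholder, and if you also want AsyncWaitHandle.WaitOne() to unblock, the ManualResetEvent would still need to be set when PercentComplete reaches 100):
var scheduler = new AsyncScheduler();
WorkObjectHandle handle = scheduler.QueueAsyncWork(myWorkObject);

// Be notified when the work object reaches 100%...
handle.Complete += (sender, args) => Console.WriteLine("Work finished");

// ...or poll the handle from the caller's side.
while (!handle.IsCompleted)
{
    Console.WriteLine($"{handle.PercentComplete}% done");
    Thread.Sleep(500);
}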
If I understand correctly you have a collection of work objects (IWorkObject) that each complete a task via multiple calls to a DoSomeWork method. When an IWorkObject object has finished its work you'd like to respond to that somehow and during the process you'd like to respond to any reported progress?
In that case I'd suggest you take a slightly different approach. You could take a look at the Parallel Extension framework (blog). Using the framework, you could write something like this:
public void QueueWork(IWorkObject workObject)
{
    Task.Factory.StartNew(() =>
    {
        while (!workObject.Finished)
        {
            int progress = workObject.DoSomeWork();
            DoSomethingWithReportedProgress(workObject, progress);
        }
        WorkObjectIsFinished(workObject);
    });
}
Some things to note:
QueueWork now returns void. The reason for this is that the actions that occur when progress is reported or when the task completes have become part of the thread that executes the work. You could of course return the Task that the factory creates and return that from the method (to enable polling for example).
The progress-reporting and finish-handling are now part of the thread because you should always avoid polling when possible. Polling is more expensive because usually you either poll too frequently (too early) or not often enough (too late). There is no reason you can't report on the progress and finishing of the task from within the thread that is running the task.
The above could also be implemented using the (lower level) ThreadPool.QueueUserWorkItem method.
Using QueueUserWorkItem:
public void QueueWork(IWorkObject workObject)
{
    ThreadPool.QueueUserWorkItem(state =>
    {
        while (!workObject.Finished)
        {
            int progress = workObject.DoSomeWork();
            DoSomethingWithReportedProgress(workObject, progress);
        }
        WorkObjectIsFinished(workObject);
    });
}
The WorkObject class can contain the properties that need to be tracked.
public class WorkObject
{
    public int PercentComplete { get; private set; }
    public bool IsFinished { get; private set; }

    public void DoSomeWork()
    {
        // work done here
        this.PercentComplete = 50;
        // some more work done here
        this.PercentComplete = 100;
        this.IsFinished = true;
    }
}
Then in your example:
Change the collection from a List to a Dictionary that can hold Guid values (or any other means of uniquely identifying the value).
Expose the correct WorkObject's properties by having the caller pass the Guid that it received from QueueAsyncWork.
I'm assuming that you'll start WorkThread asynchronously (albeit, the only asynchronous thread); plus, you'll have to make retrieving the dictionary values and WorkObject properties thread-safe.
private Dictionary<Guid, WorkObject> _workList =
    new Dictionary<Guid, WorkObject>();
private bool _finished = false;

public Guid QueueAsyncWork(WorkObject workObject)
{
    Guid guid = Guid.NewGuid();
    // simplified for the sake of example
    _workList.Add(guid, workObject);
    return guid;
}

private void WorkThread()
{
    // simplified for the sake of example
    while (!_finished)
    {
        foreach (WorkObject workObject in _workList.Values)
        {
            if (!workObject.IsFinished)
            {
                workObject.DoSomeWork();
            }
        }
        Thread.Sleep(1000);
    }
}

// an example of getting the WorkObject's property
public int GetPercentComplete(Guid guid)
{
    WorkObject workObject = null;
    if (!_workList.TryGetValue(guid, out workObject))
        throw new Exception("Unable to find Guid");
    return workObject.PercentComplete;
}
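As noted above, access to the dictionary still needs to be made thread-safe; one simple way (a sketch, with a ConcurrentDictionary as an alternative) is to lock around every access:
private readonly object _workListLock = new object();

public Guid QueueAsyncWork(WorkObject workObject)
{
    Guid guid = Guid.NewGuid();
    lock (_workListLock)
    {
        _workList.Add(guid, workObject);
    }
    return guid;
}

public int GetPercentComplete(Guid guid)
{
    lock (_workListLock)
    {
        WorkObject workObject;
        if (!_workList.TryGetValue(guid, out workObject))
            throw new Exception("Unable to find Guid");
        return workObject.PercentComplete;
    }
}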
The simplest way to do this is described here. Suppose you have a method string DoSomeWork(int). You then create a delegate of the correct type, for example:
Func<int, string> myDelegate = DoSomeWork;
Then you call the BeginInvoke method on the delegate:
int parameter = 10;
myDelegate.BeginInvoke(parameter, Callback, null);
The Callback delegate will be called once your asynchronous call has completed. You can define this method as follows:
void Callback(IAsyncResult result)
{
    var asyncResult = (AsyncResult)result;
    var @delegate = (Func<int, string>)asyncResult.AsyncDelegate;
    string methodReturnValue = @delegate.EndInvoke(result);
}
Using the described scenario, you can also poll for results or wait on them. Take a look at the url I provided for more info.
Regards,
Ronald
If you don't want to use async callbacks, you can use an explicit WaitHandle, such as a ManualResetEvent:
public abstract class WorkObject : IDisposable
{
    ManualResetEvent _waitHandle = new ManualResetEvent(false);

    public void DoSomeWork()
    {
        try
        {
            this.DoSomeWorkOverride();
        }
        finally
        {
            _waitHandle.Set();
        }
    }

    protected abstract void DoSomeWorkOverride();

    public void WaitForCompletion()
    {
        _waitHandle.WaitOne();
    }

    public void Dispose()
    {
        _waitHandle.Dispose();
    }
}
And in your code you could say
using (var workObject = new SomeConcreteWorkObject())
{
    asyncScheduler.QueueAsyncWork(workObject);
    workObject.WaitForCompletion();
}
Don't forget to call Dispose on your workObject though.
You can always use alternate implementations which create a wrapper like this for every work object, and which call _waitHandle.Dispose() in WaitForCompletion(); you can lazily instantiate the wait handle (careful: race conditions ahead), etc. (That's pretty much what BeginInvoke does for delegates.)
