// Something that might need to be invoked
private void MightnInvoke()
{
// Invoke on the UI thread if we need to, and return so the work below doesn't also run on the calling thread.
if (this.InvokeRequired) { this.Invoke(new Action(this.MightnInvoke)); return; }
// Do stuff here.
}
Is this the best way to invoke something on the fly in C#?
Basically I'm trying to avoid having extra code blocks where I don't need them.
Or is it better to use a synchronization context?
public void SyncContext(object state)
{
try
{
int id = Thread.CurrentThread.ManagedThreadId;
Console.WriteLine("Run thread: " + id);
SynchronizationContext CommandContext = state as SynchronizationContext;
// Do stuff here and then use the CommandContext.
var Somestate = "Connected";
CommandContext.Send(Sometask, Somestate.ToString());
Thread.Sleep(250);
}
catch (System.ComponentModel.InvalidAsynchronousStateException)
{
}
}
public void Sometask(object state)
{
// We can work in here and be on the same thread we came from.
string Target = state as string;
if (Target == "Connected")
{ }
}
UPDATE:
Coming back to this: after profiling thread concurrency, it turns out the sync context approach I gave as an example is indeed wrong. Don't use it unless you intend on changing it slightly to be thread safe.
In the official MSDN docs for SynchronizationContext it says:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Which would lead me to believe that your implementation may not be thread safe (though if it's working, then maybe I've misinterpreted something).
Personally, it looks a little verbose for my liking.
I use the following in production and have never had an issue with threading.
public delegate void ActionCallback();
public static void AsyncUpdate(this Control ctrl, ActionCallback action)
{
if (ctrl != null && !ctrl.IsDisposed && !ctrl.Disposing)
{
// Make sure the handle exists before invoking on the control.
if (!ctrl.IsHandleCreated)
ctrl.CreateControl();
AsyncInvoke(ctrl, action);
}
}
private static void AsyncInvoke(Control ctrl, ActionCallback action)
{
if (ctrl.InvokeRequired)
ctrl.BeginInvoke(action);
else action();
}
Used as follows:
myTextBox.AsyncUpdate(() => myTextBox.Text = "Test");
Let's say I have a MyThread class in my Windows C# app like this:
public class MyThread
{
Thread TheThread;
public MyThread()
{
TheThread = new Thread(MyFunc);
}
public void StartIfNecessary()
{
if (!TheThread.IsAlive)
TheThread.Start();
}
private void MyFunc()
{
for (;;)
{
if (ThereIsStuffToDo)
DoSomeStuff();
}
}
}
That works fine. But now I realize I can make my thread more efficient by using async/await:
public class MyThread
{
Thread TheThread;
public MyThread()
{
TheThread = new Thread(MyFunc);
}
public void StartIfNecessary()
{
if (!TheThread.IsAlive)
TheThread.Start();
}
private async void MyFunc()
{
for (;;)
{
DoSomeStuff();
await MoreStuffIsReady();
}
}
}
What I see now is that the second time I call StartIfNecessary(), TheThread.IsAlive is false (and ThreadState is Stopped, BTW) so it calls TheThread.Start(), which then throws the ThreadStateException "Thread is running or terminated; it cannot restart". But I can see that DoSomeStuff() is still getting called, so the function is in fact still executing.
I suspect what is happening is that when my thread hits the "await", the thread I created is stopped, and when the await on MoreStuffIsReady() completes, a thread from the thread pool is assigned to execute DoSomeStuff(). So it is technically true that the thread I created has been stopped, but the function I created that thread to process is still running.
So how can I tell if "MyFunc" is still active?
I can think of 3 ways to solve this:
1) Add a "bool IsRunning" which is set to true right before calling TheThread.Start(), and MyFunc() sets to false when it completes. This is simple, but requires me to wrap everything in a try/catch/finally which isn't awful but I was hoping there was a way to have the operating system or framework help me out here just in case "MyFunc" dies in some way I wasn't expecting.
2) Find some new function somewhere in System.Threading that will give me the information I need.
3) Rethink the whole thing - since my thread only sticks around for a few milliseconds, is there a way to accomplish this same functionality without creating a thread at all (outside of the thread pool)? Start "MyFunc" as a Task somehow?
Best practices in this case?
Sticking with a Plain Old Thread and using BlockingCollection to avoid a tight loop:
class MyThread
{
private Thread worker;
private BlockingCollection<Action> stuff = new BlockingCollection<Action>();
public MyThread()
{
// Created here because a field initializer can't reference the instance method MyFunc.
worker = new Thread(MyFunc);
worker.Start();
}
void MyFunc()
{
foreach (var todo in stuff.GetConsumingEnumerable())
{
try
{
todo();
}
catch(Exception ex)
{
// Something went wrong in todo()
}
}
stuff.Dispose(); // should be disposed!
}
public void Shutdown()
{
stuff.CompleteAdding(); // No more adding, but will continue to serve until empty.
}
public void Add( Action stuffTodo )
{
stuff.Add(stuffTodo); // Will throw after Shutdown is called
}
}
The BlockingCollection documentation also shows examples with Task if you prefer to go down that road.
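For completeness, a quick usage sketch of the class above (the queued actions are placeholders):
var worker = new MyThread();
worker.Add(() => Console.WriteLine("stuff to do"));
worker.Add(() => Console.WriteLine("more stuff to do"));
// Later, when no more work will ever be queued:
worker.Shutdown();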
Rethink the whole thing
This is definitely the best option. Get rid of the thread completely.
It seems like you have a "consumer" kind of scenario, and you need a consumer with a buffer of data items to work on.
One option is to use ActionBlock<T> from TPL Dataflow:
public class NeedsADifferentName
{
ActionBlock<MyDataType> _block;
public NeedsADifferentName() => _block = new ActionBlock<MyDataType>(MyFunc);
public void QueueData(MyDataType data) => _block.Post(data);
private void MyFunc(MyDataType data)
{
DoSomeStuff(data);
}
}
Alternatively, you can build your own pipeline using something like Channels.
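For comparison, here is a rough sketch of the same consumer built on System.Threading.Channels (requires the System.Threading.Channels package and C# 8 for await foreach; MyDataType and DoSomeStuff are carried over from the example above):
public class ChannelConsumer
{
    private readonly Channel<MyDataType> _channel = Channel.CreateUnbounded<MyDataType>();

    public ChannelConsumer()
    {
        // Run the consuming loop on the thread pool.
        _ = Task.Run(ConsumeAsync);
    }

    public void QueueData(MyDataType data) => _channel.Writer.TryWrite(data);

    private async Task ConsumeAsync()
    {
        // Ends only if the writer is ever marked complete.
        await foreach (var data in _channel.Reader.ReadAllAsync())
        {
            DoSomeStuff(data);
        }
    }
}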
I'm trying to optimize an async version of something similar (in basic functionality) to the Monitor.Wait and Monitor.Pulse methods. The idea is to use this over an async method.
Requirements:
1) I have one Task running that is in charge of waiting until someone pulses my monitor.
2) That task may compute a complex (i.e. time-consuming) operation. In the meanwhile, the Pulse method could be called several times without doing anything (as the main task is already doing some processing).
3) Once the main task finishes, it starts to Wait again until another Pulse comes in.
Worst case scenario is Wait>Pulse>Wait>Pulse>Wait..., but usually I have tens/hundreds of pulses for every wait.
So, I have the following class (working, but I think it can be optimized a bit based on my requirements)
internal sealed class Awaiter
{
private readonly ConcurrentQueue<TaskCompletionSource<byte>> _waiting = new ConcurrentQueue<TaskCompletionSource<byte>>();
public void Pulse()
{
TaskCompletionSource<byte> tcs;
if (_waiting.TryDequeue(out tcs))
{
tcs.TrySetResult(1);
}
}
public Task Wait()
{
TaskCompletionSource<byte> tcs;
if (_waiting.TryPeek(out tcs))
{
return tcs.Task;
}
tcs = new TaskCompletionSource<byte>();
_waiting.Enqueue(tcs);
return tcs.Task;
}
}
The problem with the above class is the baggage I'm using just for synchronization. Since I will be waiting from one and only one thread, there is really no need to have a ConcurrentQueue, as I always have only one item in it.
So, I simplified it a bit and wrote the following:
internal sealed class Awaiter2
{
private readonly object _mutex = new object();
private TaskCompletionSource<byte> _waiting;
public void Pulse()
{
var w = _waiting;
if (w == null)
{
return;
}
lock (_mutex)
{
w = _waiting;
if (w == null)
{
return;
}
_waiting = null;
w.TrySetResult(1);
}
}
public Task Wait()
{
var w = _waiting;
if (w != null)
{
return w.Task;
}
lock (_mutex)
{
w = _waiting;
if (w != null)
{
return w.Task;
}
w = _waiting = new TaskCompletionSource<byte>();
return w.Task;
}
}
}
That new version is also working ok, but I'm still thinking it can be optimized a bit more, by removing the locks.
I'm looking for suggestions on how I can optimize the second version. Any ideas?
If you don't need the Wait() call to return a Task but are content with being able to await Wait() then you can implement a custom awaiter/awaitable.
See this link for an overview of the await pattern used by the compiler.
When implementing custom awaitables you will just be dealing with delegates and the actual "waiting" is left up to you. When you want to "await" for a condition it is often possible to keep a list of pending continuations and whenever the condition comes true you can invoke those continuations. You just need to deal with the synchronization coming from the fact that await can be called from arbitrary threads. If you know that you'll only ever await from one thread (say the UI thread) then you don't need any synchronization at all!
I'll try to give you a lock-free implementation but no guarantees that it is correct. If you don't understand why all race conditions are safe you should not use it and implement the async/await protocol using lock-statements or other techniques which you know how to debug.
public sealed class AsyncMonitor
{
private PulseAwaitable _currentWaiter;
public AsyncMonitor()
{
_currentWaiter = new PulseAwaitable();
}
public void Pulse()
{
// Optimize for the case when calling Pulse() when nobody is waiting.
//
// This has an inherent race condition when calling Pulse() and Wait()
// at the same time. The question this was written for did not specify
// how to resolve this, so it is a valid answer to tolerate either
// result and just allow the race condition.
//
if (_currentWaiter.HasWaitingContinuations)
Interlocked.Exchange(ref _currentWaiter, new PulseAwaitable()).Complete();
}
public PulseAwaitable Wait()
{
return _currentWaiter;
}
}
// This class maintains a list of waiting continuations to be executed when
// the owning AsyncMonitor is pulsed.
public sealed class PulseAwaitable : INotifyCompletion
{
// List of pending 'await' delegates.
private Action _pendingContinuations;
// Flag whether we have been pulsed. This is the primary variable
// around which we build the lock free synchronization.
private int _pulsed;
// AsyncMonitor creates instances as required.
internal PulseAwaitable()
{
}
// This check has a race condition which is tolerated.
// It is used to optimize for cases when the PulseAwaitable has no waiters.
internal bool HasWaitingContinuations
{
get { return Volatile.Read(ref _pendingContinuations) != null; }
}
// Called by the AsyncMonitor when it is pulsed.
internal void Complete()
{
// Set pulsed flag first because that is the variable around which
// we build the lock free protocol. Everything else this method does
// is free to have race conditions.
Interlocked.Exchange(ref _pulsed, 1);
// Execute pending continuations. This is free to race with calls
// of OnCompleted seeing the pulsed flag first.
Interlocked.Exchange(ref _pendingContinuations, null)?.Invoke();
}
#region Awaitable
// There is no need to separate the awaiter from the awaitable
// so we use one class to implement both parts of the protocol.
public PulseAwaitable GetAwaiter()
{
return this;
}
#endregion
#region Awaiter
public bool IsCompleted
{
// The return value of this property does not need to be up to date so we could omit the 'Volatile.Read' if we wanted to.
// What is not allowed is returning "true" even if we are not completed, but this cannot happen since we never transition back to incomplete.
get { return Volatile.Read(ref _pulsed) == 1; }
}
public void OnCompleted(Action continuation)
{
// Protects against manual invocations. The compiler-generated code never passes null so you can remove this check in release builds if you want to.
if (continuation == null)
throw new ArgumentNullException(nameof(continuation));
// Standard pattern of maintaining a lock free immutable variable: read-modify-write cycle.
// See for example here: https://blogs.msdn.microsoft.com/oldnewthing/20140516-00/?p=973
// Again the 'Volatile.Read' is not really needed since outdated values will be detected at the first iteration.
var oldContinuations = Volatile.Read(ref _pendingContinuations);
for (;;)
{
var newContinuations = (oldContinuations + continuation);
var actualContinuations = Interlocked.CompareExchange(ref _pendingContinuations, newContinuations, oldContinuations);
if (actualContinuations == oldContinuations)
break;
oldContinuations = actualContinuations;
}
// Now comes the interesting part where the actual lock free synchronization happens.
// If we are completed then somebody needs to clean up remaining continuations.
// This happens last so the first part of the method can race with pulsing us.
if (IsCompleted)
Interlocked.Exchange(ref _pendingContinuations, null)?.Invoke();
}
public void GetResult()
{
// This is just to check against manual calls. The compiler will never call this when IsCompleted is false.
// (Assuming your OnCompleted implementation is bug-free and you don't execute continuations before IsCompleted becomes true.)
if (!IsCompleted)
throw new NotSupportedException("Synchronous waits are not supported. Use 'await' or OnCompleted to wait asynchronously");
}
#endregion
}
You usually don't bother on which thread the continuations run because if they are async methods the compiler has already inserted code (in the continuation) to switch back to the right thread, no need to do it manually in every awaitable implementation.
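To illustrate how the class is meant to be consumed, a minimal (hypothetical) wait/pulse loop could look like this; DoExpensiveWork stands in for the time-consuming operation from the question:
var monitor = new AsyncMonitor();

// Consumer: a single task that waits, works, then waits again.
async Task ConsumerLoopAsync()
{
    for (;;)
    {
        await monitor.Wait();
        DoExpensiveWork(); // hypothetical time-consuming operation
    }
}

// Producers: call Pulse() as often as they like. Pulses that arrive while
// nobody is waiting are simply dropped, which matches the question's requirements.
monitor.Pulse();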
[edit]
As a starting point for how a locking implementation can look I'll provide one using a lock-statement. It should be easy to replace it by a spinlock or some other locking technique. By using a struct as the awaitable it even has the advantage that it does no additional allocation except for the initial object. (There are of course allocations in the async/await framework in the compiler magic on the calling side, but you can't get rid of these.)
Note that the iteration counter will increment only for every Wait+Pulse pair and will eventually overflow into negative, but that is ok. We just need to bridge the time from the continuation being invoked until it can call GetResult. 4 billion Wait+Pulse pairs should be plenty of time for any pending continuation to call its GetResult method. If you don't want that risk you could use a long or Guid for a more unique iteration counter, but IMHO an int is good for almost all scenarios.
public sealed class AsyncMonitor
{
public struct Awaitable : INotifyCompletion
{
// We use a struct to avoid allocations. Note that this means the compiler will copy
// the struct around in the calling code when doing 'await', so for your own debugging
// sanity make all variables readonly.
private readonly AsyncMonitor _monitor;
private readonly int _iteration;
public Awaitable(AsyncMonitor monitor)
{
lock (monitor)
{
_monitor = monitor;
_iteration = monitor._iteration;
}
}
public Awaitable GetAwaiter()
{
return this;
}
public bool IsCompleted
{
get
{
// We use the iteration counter as an indicator when we should be complete.
lock (_monitor)
{
return _monitor._iteration != _iteration;
}
}
}
public void OnCompleted(Action continuation)
{
// The compiler never passes null, but someone may call it manually.
if (continuation == null)
throw new ArgumentNullException(nameof(continuation));
lock (_monitor)
{
// Not calling IsCompleted since we already have a lock.
if (_monitor._iteration == _iteration)
{
_monitor._waiting += continuation;
// null the continuation to indicate the following code
// that we completed and don't want it executed.
continuation = null;
}
}
// If we were already completed then we didn't null the continuation.
// (We should invoke the continuation outside of the lock because it
// may want to Wait/Pulse again and we want to avoid reentrancy issues.)
continuation?.Invoke();
}
public void GetResult()
{
lock (_monitor)
{
// Not calling IsCompleted since we already have a lock.
if (_monitor._iteration == _iteration)
throw new NotSupportedException("Synchronous wait is not supported. Use await or OnCompleted.");
}
}
}
private Action _waiting;
private int _iteration;
public AsyncMonitor()
{
}
public void Pulse(bool executeAsync)
{
Action execute = null;
lock (this)
{
// If nobody is waiting we don't need to increment the iteration counter.
if (_waiting != null)
{
_iteration++;
execute = _waiting;
_waiting = null;
}
}
// Important: execute the callbacks outside the lock because they might Pulse or Wait again.
if (execute != null)
{
// If the caller doesn't want inlined execution (maybe he holds a lock)
// then execute it on the thread pool.
if (executeAsync)
Task.Run(execute);
else
execute();
}
}
public Awaitable Wait()
{
return new Awaitable(this);
}
}
Here is my simple async implementation that I use in my projects:
internal sealed class Pulsar
{
private static TaskCompletionSource<bool> Init() => new TaskCompletionSource<bool>();
private TaskCompletionSource<bool> _tcs = Init();
public void Pulse()
{
Interlocked.Exchange(ref _tcs, Init()).SetResult(true);
}
public Task AwaitPulse(CancellationToken token)
{
return token.CanBeCanceled ? _tcs.Task.WithCancellation(token) : _tcs.Task;
}
}
Add TaskCreationOptions.RunContinuationsAsynchronously to the TCS for async continuations.
The WithCancellation can be omitted of course, if you do not need cancellations.
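Note that WithCancellation is not part of the framework; it is a commonly used extension method. One typical sketch of it (this only abandons the wait, it does not cancel the underlying task):
public static class TaskExtensions
{
    public static async Task WithCancellation(this Task task, CancellationToken token)
    {
        var tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        // Complete tcs when the token fires so the WhenAny below can win the race.
        using (token.Register(s => ((TaskCompletionSource<bool>)s).TrySetResult(true), tcs))
        {
            if (task != await Task.WhenAny(task, tcs.Task).ConfigureAwait(false))
                throw new OperationCanceledException(token);
        }
        await task.ConfigureAwait(false); // propagate the result or exception
    }
}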
Because you only have one task ever waiting, your class can be simplified to:
internal sealed class Awaiter3
{
private volatile TaskCompletionSource<byte> _waiting;
public void Pulse()
{
var w = _waiting;
if (w == null)
{
return;
}
_waiting = null;
#if NET_46_OR_GREATER
w.TrySetResult(1);
#else
Task.Run(() => w.TrySetResult(1));
#endif
}
//This method is not thread safe and can only be called by one thread at a time.
// To make it thread safe put a lock around the null check and the assignment,
// you do not need to have a lock on Pulse, "volatile" takes care of that side.
public Task Wait()
{
if(_waiting != null)
throw new InvalidOperationException("Only one waiter is allowed to exist at a time!");
#if NET_46_OR_GREATER
_waiting = new TaskCompletionSource<byte>(TaskCreationOptions.RunContinuationsAsynchronously);
#else
_waiting = new TaskCompletionSource<byte>();
#endif
return _waiting.Task;
}
}
One behavior I did change. If you are using .NET 4.6 or newer, use the code in the #if NET_46_OR_GREATER blocks; if you're on an older version, use the #else blocks. When you call TrySetResult the continuation can run synchronously, which can cause Pulse() to take a long time to complete. Using TaskCreationOptions.RunContinuationsAsynchronously on .NET 4.6, or wrapping the TrySetResult in a Task.Run for pre-4.6, makes sure that Pulse() is not blocked by the continuation of the task.
See the SO question Detect target framework version at compile time on how to make a NET_46_OR_GREATER definition that works in your code.
A simple way to do this is to use SemaphoreSlim which uses Monitor.
public class AsyncMonitor
{
private readonly SemaphoreSlim signal = new SemaphoreSlim(0, 1);
public void Pulse()
{
try
{
signal.Release();
}
catch (SemaphoreFullException) { }
}
public async Task WaitAsync(CancellationToken cancellationToken)
{
await signal.WaitAsync(cancellationToken).ConfigureAwait(false);
}
}
I have a simple method in my C# app; it picks a file from an FTP server, parses it, and stores the data in a DB. I want it to be asynchronous, so that the user can perform other operations in the app; once parsing is done, they should get a message stating "Parsing is done".
I know it can be achieved through an asynchronous method call, but I don't know how to do that. Can anybody help me, please?
You need to use delegates and the BeginInvoke method that they contain to run another method asynchronously. At the end of the method being run by the delegate, you can notify the user. For example:
class MyClass
{
private delegate void SomeFunctionDelegate(int param1, bool param2);
private SomeFunctionDelegate sfd;
public MyClass()
{
sfd = new SomeFunctionDelegate(this.SomeFunction);
}
private void SomeFunction(int param1, bool param2)
{
// Do stuff
// Notify user
}
public void GetData()
{
// Do stuff
sfd.BeginInvoke(34, true, null, null);
}
}
Read up at http://msdn.microsoft.com/en-us/library/2e08f6yc.aspx
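Note that delegate BeginInvoke is only supported on the classic .NET Framework. On newer runtimes a rough equivalent of the same idea, using hypothetical ParseFtpFile and NotifyUser methods, would be:
public async Task GetDataAsync()
{
    // Run the long FTP/parse work on the thread pool...
    await Task.Run(() => ParseFtpFile(34, true));
    // ...then resume on the calling context, where it's safe to tell the user.
    NotifyUser("Parsing is done");
}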
Try this method:
public static void RunAsynchronously(Action method, Action callback) {
ThreadPool.QueueUserWorkItem(_ =>
{
try {
method();
}
catch (ThreadAbortException) { /* dont report on this */ }
catch (Exception ex) {
}
// note: this will not be called if the thread is aborted
if (callback!= null) callback();
});
}
Usage:
RunAsynchronously( () => { /* pick the file from the FTP server and parse it */ },
() => { Console.WriteLine("Parsing is done"); } );
Any time you're doing something asynchronous, you're using a separate thread, either a new thread, or one taken from the thread pool. This means that anything you do asynchronously has to be very careful about interactions with other threads.
One way to do that is to place the code for the async thread (call it thread "A") along with all of its data into another class (call it class "A"). Make sure that thread "A" only accesses data in class "A". If thread "A" only touches class "A", and no other thread touches class "A"'s data, then there's one less problem:
public class MainClass
{
private sealed class AsyncClass
{
private int _counter;
private readonly int _maxCount;
public AsyncClass(int maxCount) { _maxCount = maxCount; }
public void Run()
{
while (_counter++ < _maxCount) { Thread.Sleep(1); }
CompletionTime = DateTime.Now;
}
public DateTime CompletionTime { get; private set; }
}
private AsyncClass _asyncInstance;
public void StartAsync()
{
var asyncDoneTime = DateTime.MinValue;
_asyncInstance = new AsyncClass(10);
Action asyncAction = _asyncInstance.Run;
asyncAction.BeginInvoke(
ar =>
{
asyncAction.EndInvoke(ar);
asyncDoneTime = _asyncInstance.CompletionTime;
// Report from the callback: only at this point has the async task really finished.
Console.WriteLine("Async task ended at {0}", asyncDoneTime);
}, null);
}
}
Notice that the only part of AsyncClass that's touched from the outside is its public interface, and the only part of that which is data is CompletionTime. Note that this is only touched after the asynchronous task is complete. This means that nothing else can interfere with the task's inner workings, and it can't interfere with anything else.
Here are two links about threading in C#
Threading in C#
Multi-threading in .NET: Introduction and suggestions
I'd start to read about the BackgroundWorker class
In ASP.NET I use a lot of static methods for jobs to be done. If it's simply a job where I need no response or status, I do something simple like below. As you can see, I can choose to call either ResizeImages or ResizeImagesAsync depending on whether I want to wait for it to finish or not.
Code explanation: I use http://imageresizing.net/ to resize/crop images, and the method SaveBlobPng is to store the images to Azure (cloud), but since that is irrelevant for this demo I didn't include that code. It's a good example of a time-consuming task though.
private delegate void ResizeImagesDelegate(string tempuri, Dictionary<string, string> versions);
private static void ResizeImagesAsync(string tempuri, Dictionary<string, string> versions)
{
ResizeImagesDelegate worker = new ResizeImagesDelegate(ResizeImages);
worker.BeginInvoke(tempuri, versions, null, null);
}
private static void ResizeImages(string tempuri, Dictionary<string, string> versions)
{
//the job, whatever it might be
foreach (var item in versions)
{
var image = ImageBuilder.Current.Build(tempuri, new ResizeSettings(item.Value));
SaveBlobPng(image, item.Key);
image.Dispose();
}
}
Or go with a thread so you don't have to bother with delegates:
private static void ResizeImagesAsync(string tempuri, Dictionary<string, string> versions)
{
Thread t = new Thread(() => ResizeImages(tempuri, versions));
t.Start();
}
ThreadPool.QueueUserWorkItem is the quickest way to get a process running on a different thread.
Be aware that UI objects have "thread affinity" and cannot be accessed from any thread other than the one that created them.
So, in addition to checking out the ThreadPool (or using the asynchronous programming model via delegates), you need to check out Dispatchers (WPF) or InvokeRequired (WinForms).
In the end you will have to use some sort of threading. The way it basically works is that you start a function on a new thread and it will run until the end of the function.
If you are using Windows Forms, then a nice wrapper they have for this is called the BackgroundWorker. It allows you to work in the background without locking up the UI form, and it even provides a way to communicate with the forms and raise progress update events.
Background Worker
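A minimal sketch of that pattern (assuming a WinForms form; the loop and statusLabel are placeholders for your real work and UI):
var worker = new BackgroundWorker { WorkerReportsProgress = true };

worker.DoWork += (s, e) =>
{
    // Runs on a thread-pool thread; do not touch UI controls here.
    for (int i = 0; i <= 100; i += 10)
    {
        Thread.Sleep(100); // stand-in for real work
        worker.ReportProgress(i);
    }
};

// These two events are raised on the UI thread, so touching controls is safe.
worker.ProgressChanged += (s, e) => statusLabel.Text = e.ProgressPercentage + "%";
worker.RunWorkerCompleted += (s, e) => statusLabel.Text = "Done";

worker.RunWorkerAsync();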
.NET has the async keyword for asynchronous functions. You can start digging at learn.microsoft.com (async). The shortest general way to make a function asynchronous is to change a function F:
Object F(Object args)
{
...
return RESULT;
}
to something like this:
async Task<Object> FAsync(Object args)
{
...
await RESULT_FROM_PROMISE;
...
return RESULT;
}
The most important thing in the above code is that when execution reaches the await keyword, control returns to the function that called FAsync, which can do other computation until the awaited value has been produced; execution then proceeds with the rest of FAsync.
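As a concrete (hypothetical) illustration of the same transformation, reading a file could be wrapped like this:
// Synchronous: blocks the caller until the whole file has been read.
string ReadConfig(string path)
{
    return File.ReadAllText(path);
}

// Asynchronous: the caller gets control back at the await and resumes later.
async Task<string> ReadConfigAsync(string path)
{
    using (var reader = new StreamReader(path))
        return await reader.ReadToEndAsync();
}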
Problem: I am working on an ASP.NET 2.0/C# application and I need to do the following:
I have a function I am using from a third-party library, let's say
MyFunctions.CalculateTotal(int a, int b);
A known issue is that the thread locks resources. So there is another function that needs to be called afterwards to clean everything up.
MyFunctions.ThreadExit();
The issue is that this will exit the current thread and I will not be able to use any other function afterwards. Also, it does not seem appropriate for me to kill an ASP.NET thread like this.
I have considered spinning a separate thread, but that would be a hack.
Global.asax has those application wide events like Application_Start/End
I know there is no event such as Application_ThreadStart/End, but maybe something like that?
Any other suggestion for a possible solution?
(Updated)
It sounds like that library wants to make a mess in the current thread and force you to exit the thread if you want it to clean up after itself. In that case, I would always run that method in a separate thread. Untested code:
int result = 0;
var thread = new Thread(() =>
{
result = MyFunctions.CalculateTotal(a, b);
MyFunctions.ThreadExit();
});
thread.Start();
thread.Join(); // wait for the worker thread so result can be used safely
I'm not sure that using a separate thread would be such a hack. It sounds like that is what required.
BTW - that third party library sounds absolutely horrible! ;-)
This article might help you; it involves using an IHttpAsyncHandler - you would then use this as "asynchandler.ashx". It does require you to know a bit about HttpHandlers if you don't already, though, so it's not an instant solution.
A slightly modified version of their code with your MyFunctions is:
public class AsyncHandler : IHttpAsyncHandler
{
public void ProcessRequest(HttpContext ctx)
{
// not used
}
public bool IsReusable
{
get { return false;}
}
public IAsyncResult BeginProcessRequest(HttpContext ctx,
AsyncCallback cb,
object obj)
{
AsyncRequestState reqState =
new AsyncRequestState(ctx, cb, obj);
AsyncRequest ar = new AsyncRequest(reqState);
ThreadStart ts = new ThreadStart(ar.ProcessRequest);
Thread t = new Thread(ts);
t.Start();
return reqState;
}
public void EndProcessRequest(IAsyncResult ar)
{
AsyncRequestState ars = ar as AsyncRequestState;
if (ars != null)
{
// here you could perform some cleanup, write something else to the
// Response, or whatever else you need to do
}
}
}
public class AsyncRequest
{
private AsyncRequestState _asyncRequestState;
public AsyncRequest(AsyncRequestState ars)
{
_asyncRequestState = ars;
}
public void ProcessRequest()
{
MyFunctions.CalculateTotal(a, b); // a and b obtained from wherever your inputs come from
// tell asp.net I am finished processing this request
_asyncRequestState.CompleteRequest();
}
}
Change their variable names as they're nasty.
I've found the "ThreadStatic" attribute to be extremely useful recently, but makes me now want a "ThreadLocal" type attribute that lets me have non-static data members on a per-thread basis.
Now I'm aware that this would have some non-trivial implications, but:
Does such a thing already exist built into C#/.NET? Or, since it appears so far that the answer to this is no (for .NET < 4.0), is there a commonly used implementation out there?
I can think of a reasonable way to implement it myself, but would just use something that already existed if it were available.
Straw Man example that would implement what I'm looking for if it doesn't already exist:
class Foo
{
[ThreadStatic]
static Dictionary<Object,int> threadLocalValues = new Dictionary<Object,int>();
int defaultValue = 0;
int ThreadLocalMember
{
get
{
int value;
if (!threadLocalValues.TryGetValue(this, out value))
{
value = defaultValue;
threadLocalValues[this] = value;
}
return value;
}
set { threadLocalValues[this] = value; }
}
}
Please forgive any C# ignorance. I'm a C++ developer that has only recently been getting into the more interesting features of C# and .net
I'm limited to .NET 3.0 and maybe 3.5 (the project has/will soon move to 3.5).
The specific use-case is callback lists that are thread specific (using an imaginary [ThreadLocal] attribute), a la:
class NonSingletonSharedThing
{
[ThreadLocal] List<Callback> callbacks;
public void ThreadLocalRegisterCallback( Callback somecallback )
{
callbacks.Add(somecallback);
}
public void ThreadLocalDoCallbacks();
{
foreach( var callback in callbacks )
callback.invoke();
}
}
Enter .NET 4.0!
If you're stuck in 3.5 (or earlier), there are some functions you should look at, like Thread.AllocateDataSlot, which should do what you want.
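.NET 4.0 ships System.Threading.ThreadLocal<T>, so the straw man from the question collapses to something like this (the default of 0 is carried over):
class Foo
{
    // One value per thread per Foo instance; the factory supplies the default.
    private readonly ThreadLocal<int> threadLocalMember = new ThreadLocal<int>(() => 0);

    int ThreadLocalMember
    {
        get { return threadLocalMember.Value; }
        set { threadLocalMember.Value = value; }
    }
}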
You should think about this twice. You are essentially creating a memory leak: every object created by the thread stays referenced and can't be garbage collected until the thread ends.
If you're looking to store unique data on a per-thread basis, you could use Thread.SetData. Be sure to read up on the pros and cons (http://msdn.microsoft.com/en-us/library/6sby1byh.aspx) as this has performance implications.
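A small sketch of that data-slot API (the slot name is arbitrary):
// The slot is shared by name across the AppDomain, but the value stored in it
// is kept separately for each thread.
LocalDataStoreSlot slot = Thread.GetNamedDataSlot("MyPerThreadValue");

Thread.SetData(slot, 42);               // visible only to the current thread
int value = (int)Thread.GetData(slot);  // reads this thread's copy (42 here)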
Consider:
Rather than try to give each member variable in an object a thread-specific value, give each thread its own object instance: pass the object to the thread start as state, or make the thread-start method a member of the object that the thread will "own", and create a new instance for each thread that you spawn.
Edit
(in response to Catskul's remark)
Here's an example of encapsulating the struct:
public class TheStructWorkerClass
{
private StructData TheStruct;
public TheStructWorkerClass(StructData yourStruct)
{
this.TheStruct = yourStruct;
}
public void ExecuteAsync()
{
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorkerMethod);
}
private void TheWorkerMethod(object state)
{
// your processing logic here
// you can access your structure as this.TheStruct;
// only this thread has access to the struct (as long as you don't pass the struct
// to another worker class)
}
}
// now the code that launches the async process does this:
var worker = new TheStructWorkerClass(yourStruct);
worker.ExecuteAsync();
Now here's option 2 (pass the struct as state)
{
// (from somewhere in your existing code
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorker, myStruct);
}
private void TheWorker(object state)
{
StructData yourStruct = (StructData)state;
// now do stuff with your struct
// works fine as long as you never pass the same instance of your struct to 2 different threads.
}
I ended up implementing and testing a version of what I had originally suggested:
public class ThreadLocal<T>
{
[ThreadStatic] private static Dictionary<object, T> _lookupTable;
private Dictionary<object, T> LookupTable
{
get
{
if ( _lookupTable == null)
_lookupTable = new Dictionary<object, T>();
return _lookupTable;
}
}
private object key = new object(); //lazy hash key creation handles replacement
private T originalValue;
public ThreadLocal( T value )
{
originalValue = value;
}
~ThreadLocal()
{
LookupTable.Remove(key);
}
public void Set( T value)
{
LookupTable[key] = value;
}
public T Get()
{
T returnValue = default(T);
if (!LookupTable.TryGetValue(key, out returnValue))
Set(originalValue);
return returnValue;
}
}
Although I am still not sure about when your use case would make sense (see my comment on the question itself), I would like to contribute a working example that is in my opinion more readable than thread-local storage (whether static or instance). The example is using .NET 3.5:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Linq;
namespace SimulatedThreadLocal
{
public sealed class Notifier
{
public void Register(Func<string> callback)
{
var id = Thread.CurrentThread.ManagedThreadId;
lock (this._callbacks)
{
List<Func<string>> list;
if (!this._callbacks.TryGetValue(id, out list))
{
this._callbacks[id] = list = new List<Func<string>>();
}
list.Add(callback);
}
}
public void Execute()
{
var id = Thread.CurrentThread.ManagedThreadId;
IEnumerable<Func<string>> threadCallbacks;
string status;
lock (this._callbacks)
{
status = string.Format("Notifier has callbacks from {0} threads, total {1} callbacks{2}Executing on thread {3}",
this._callbacks.Count,
this._callbacks.SelectMany(d => d.Value).Count(),
Environment.NewLine,
Thread.CurrentThread.ManagedThreadId);
threadCallbacks = this._callbacks[id]; // we can use the original collection, as only this thread can add to it and we're not going to be adding right now
}
var b = new StringBuilder();
foreach (var callback in threadCallbacks)
{
b.AppendLine(callback());
}
Console.ForegroundColor = ConsoleColor.DarkYellow;
Console.WriteLine(status);
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine(b.ToString());
}
private readonly Dictionary<int, List<Func<string>>> _callbacks = new Dictionary<int, List<Func<string>>>();
}
public static class Program
{
public static void Main(string[] args)
{
try
{
var notifier = new Notifier();
var syncMainThread = new ManualResetEvent(false);
var syncWorkerThread = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate // will create closure to see notifier and sync* events
{
notifier.Register(() => string.Format("Worker thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
notifier.Register(() => string.Format("Worker thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
syncMainThread.Set();
});
notifier.Register(() => string.Format("Main thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.WaitOne(); // wait for worker thread to add its notification
syncMainThread.Reset();
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
notifier.Register(() => string.Format("Main thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
}
finally
{
Console.ResetColor();
}
}
}
}
When you compile and run the above program, you should get output like this:
(Screenshot of the console output: http://img695.imageshack.us/img695/991/threadlocal.png)
Based on your use-case I assume this is what you're trying to achieve. The example first adds two callbacks from two different contexts, main and worker threads. Then the example runs notification first from main and then from worker threads. The callbacks that are executed are effectively filtered by current thread ID. Just to show things are working as expected, the example adds two more callbacks (for a total of 4) and again runs the notification from the context of main and worker threads.
Note that Notifier class is a regular instance that can have state, multiple instances, etc (again, as per your question's use-case). No static or thread-static or thread-local is used by the example.
I would appreciate if you could look at the code and let me know if I misunderstood what you're trying to achieve or if a technique like this would meet your needs.
I'm not sure how you're spawning your threads in the first place, but there are ways to give each thread its own thread-local storage, without using hackish workarounds like the code you posted in your question.
public void SpawnSomeThreads(int threads)
{
for (int i = 0; i < threads; i++)
{
Thread t = new Thread(WorkerThread);
WorkerThreadContext context = new WorkerThreadContext
{
// whatever data the thread needs passed into it
};
t.Start(context);
}
}
private class WorkerThreadContext
{
public string Data { get; set; }
public int OtherData { get; set; }
}
private void WorkerThread(object parameter)
{
WorkerThreadContext context = (WorkerThreadContext) parameter;
// do work here
}
This obviously ignores waiting on the threads to finish their work, making sure accesses to any shared state is thread-safe across all the worker threads, but you get the idea.
Whilst the posted solution looks elegant, it leaks objects. The finalizer - LookupTable.Remove(key) - runs only in the context of the GC thread, so it is likely only creating more garbage by creating another lookup table.
You need to remove the object from the lookup table of every thread that has accessed the ThreadLocal. The only elegant way I can think of solving this is via a weak-keyed dictionary - a data structure which is strangely lacking from C#.
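For what it's worth, .NET 4.0 later added ConditionalWeakTable<TKey, TValue>, which behaves like such a weak-keyed dictionary. A rough sketch of the idea (not a drop-in replacement for the class above; values are boxed to satisfy the class constraint):
// One table per thread; keys are held weakly, so an entry disappears when the
// owning object is collected - no finalizer bookkeeping required.
[ThreadStatic]
static ConditionalWeakTable<object, object> perThreadValues;

static void SetValue(object owner, int value)
{
    if (perThreadValues == null)
        perThreadValues = new ConditionalWeakTable<object, object>();
    perThreadValues.Remove(owner);      // Add throws if the key already exists
    perThreadValues.Add(owner, value);
}

static int GetValue(object owner, int defaultValue)
{
    object boxed;
    if (perThreadValues != null && perThreadValues.TryGetValue(owner, out boxed))
        return (int)boxed;
    return defaultValue;
}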