Can I use C# IDisposable to automate a job when exiting a scope? - c#

Can I use IDisposable to automate a job that triggers when exiting a scope?
In this case, I am using IDisposable only to do some work at the end of a method, not to 'dispose' resources.
This is one of my code samples:
public class ScopeTimer : IDisposable
{
private Stopwatch sw = new Stopwatch();
private string _logMessage;
public ScopeTimer(string logMessage = "")
{
sw.Start();
_logMessage = logMessage;
}
public void SetMessage(string logMessage)
{
_logMessage = logMessage;
}
void IDisposable.Dispose()
{
sw.Stop();
Logging.Logger.Log($"[{_logMessage}] takes {sw.ElapsedMilliseconds} ms.");
}
}
and I am using it like this:
public void SomeMethod()
{
using var timer = new ScopeTimer();
// do some job
// when method finished, timer.Dispose() is called
}
At least for now, this code seems to work fine, but is this a safe approach in many other circumstances?

Yes, you can do that. Even Stephen Cleary does it in Nito.AsyncEx, like:
SemaphoreSlim _syncRoot = new();
...
using var lockHandle = this._syncRoot.Lock();
// or async
using var lockHandle = await this._syncRoot.LockAsync();
But this is opinion based. In my opinion it helps to make code cleaner, because you can set up and restore a state and both parts are right next to each other. It's easy to see that you haven't missed something.
The drawback is that you might create some instances that need to be freed. That might create some pressure for the GC.
We created a struct (not a class) to wrap such cleanup code:
public readonly struct Finally : IDisposable
{
private readonly Action? _onDispose;
public Finally(Action onDispose)
{
_ = onDispose ?? throw new ArgumentNullException(nameof(onDispose));
this._onDispose = onDispose;
}
public static Finally Create(Action onDispose)
{
return new Finally(onDispose);
}
public void Dispose()
{
// Keep in mind that a struct can always be created using new() or default, and
// in those cases _onDispose is null!
this._onDispose?.Invoke();
}
}
You can use this like:
this._childControl.BeginUpdate();
using var @finally = Finally.Create(this._childControl.EndUpdate);
// or
this._member.SomeEvent -= this.OnMemberSomeEvent;
using var @finally = Finally.Create(() => this._member.SomeEvent += this.OnMemberSomeEvent);
Yes, the extra Finally struct is overhead that isn't strictly necessary, and the second example also creates a delegate. All of that costs time and memory. In applications where performance is a real issue, it might be better to use the normal try...finally blocks.
But we also use LINQ, which likewise needs context objects and delegates, and nobody cares.
In my opinion, obvious correctness is more important than performance, because a quick but wrong result doesn't help anyone.
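For comparison, here is a minimal sketch of the plain try...finally version mentioned above, using the same hypothetical _childControl member as in the earlier example; the finally part runs on normal exit, early return, or exception, with no extra allocation:
this._childControl.BeginUpdate();
try
{
    // do the work that requires updates to be suspended
}
finally
{
    // always restore the state, mirroring what Finally.Create(this._childControl.EndUpdate) does
    this._childControl.EndUpdate();
}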

Related

How do I mark a method as not-threadsafe?

Every so often I hit upon this problem and ignore it, but it started gnawing at me today.
private readonly object _syncRoot = new object();
private List<int> NonconcurrentObject { get; } = new List<int>();
public void Fiddle()
{
lock (_syncRoot)
{
// ...some code...
NonconcurrentObject.Add(1);
Iddle();
}
}
public void Twiddle()
{
lock (_syncRoot)
{
// ...some different code...
NonconcurrentObject.Add(2);
Iddle();
}
}
private void Iddle()
{
// NOT THREADSAFE! DO NOT CALL THIS WITHOUT LOCKING ON _syncRoot
// ......lots of code......
NonconcurrentObject.Add(3);
}
I have multiple public methods of a class with some code that is not inherently threadsafe (the List above is a trivial example). I want to use helper methods for the code shared between them (as anyone would), but in splitting off the shared code I'm faced with a dilemma: do I use recursive locking in the helper methods or not? If I do, my code is wasteful and possibly less performant. If I don't (as above), the helper method is no longer threadsafe and open to a nasty race condition if called by some other method in the future.
How can I (elegantly and robustly) signal that a method isn't threadsafe?
You use doc comments.
///<remarks>not thread safe</remarks>
You could use custom attributes to mark methods that are not thread safe.
The advantage over comments is that it gives you options for further processing (via reflection) if you wish to do so at a later date.
public class NotThreadSafe : Attribute
{
//...
}
public class MyClass
{
[NotThreadSafe]
public void MyMethod()
{
//...
}
}
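As a sketch of that "further processing" idea, here is one way you could scan a type for methods marked with the NotThreadSafe attribute via reflection, for example from a unit test or an audit tool (ThreadSafetyAudit is a hypothetical helper name, not part of the original answer):
using System;
using System.Linq;
using System.Reflection;

public static class ThreadSafetyAudit
{
    // Lists every method on the given type that carries [NotThreadSafe].
    public static void ReportUnsafeMethods(Type type)
    {
        var flags = BindingFlags.Public | BindingFlags.NonPublic |
                    BindingFlags.Instance | BindingFlags.Static;
        var unsafeMethods = type.GetMethods(flags)
            .Where(m => m.GetCustomAttributes(typeof(NotThreadSafe), inherit: false).Any());
        foreach (var method in unsafeMethods)
            Console.WriteLine($"{type.Name}.{method.Name} is marked as not thread safe.");
    }
}

// Usage: ThreadSafetyAudit.ReportUnsafeMethods(typeof(MyClass));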
You could add the _Unsafe suffix to your utility methods that are not protected with locks.
Advantages: It reminds you that you are doing dangerous things and must be extra careful. A small mistake could cost you days of debugging in the future.
Disadvantages: Not very pretty, and can be confused with the unsafe keyword.
private void Iddle_Unsafe()
{
NonconcurrentObject.Add(3);
}
public void Twiddle()
{
lock (_syncRoot)
{
NonconcurrentObject.Add(2);
Iddle_Unsafe();
}
}

Mysterious deadlock corruption with ReaderWriterLockSlim

I wrote a fairly trivial wrapper around ReaderWriterLockSlim:
class SimpleReaderWriterLock
{
private class Guard : IDisposable
{
public Guard(Action action)
{
_Action = action;
}
public void Dispose()
{
_Action?.Invoke();
_Action = null;
}
private Action _Action;
}
private readonly ReaderWriterLockSlim _Lock
= new ReaderWriterLockSlim(LockRecursionPolicy.NoRecursion);
public IDisposable ReadLocked()
{
_Lock.EnterReadLock();
return new Guard(_Lock.ExitReadLock);
}
public IDisposable WriteLocked()
{
_Lock.EnterWriteLock();
return new Guard(_Lock.ExitWriteLock);
}
public IDisposable UpgradableReadLocked()
{
_Lock.EnterUpgradeableReadLock();
return new Guard(_Lock.ExitUpgradeableReadLock);
}
}
(This is probably not the most efficient thing in the world, so I am interested in suggested improvements to this class as well.)
It is used like so:
using (_Lock.ReadLocked())
{
// protected code
}
(There are a significant number of reads happening very frequently, and almost never any writes.)
This always seems to work as expected in Release mode and in production. However, in Debug mode under the debugger, very occasionally the process deadlocks in a peculiar state: it has called EnterReadLock, the lock itself is not held by anything (the owner is 0, and the properties that report whether it has any readers/writers/waiters say it does not), but the spin lock inside is locked, and it's endlessly spinning there.
I don't know what triggers this, except that it seems to happen more often if I'm stopping at breakpoints and single-stepping (in completely unrelated code).
If I manually toggle the spinlock _isLocked field back to 0, then the process resumes and everything seems to work as expected afterwards.
Is there something wrong with the code or with the lock itself? Is the debugger doing something to accidentally provoke deadlocking the spinlock? (I'm using .NET 4.6.2.)
I've read an article that indicates that ThreadAbortException can be a problem for these locks -- and my code does have calls to Abort() in some places -- but I don't think those involve code that calls into this locked code (though I could be mistaken), and if the problem were that the lock had been acquired and never released, it should appear differently from what I'm seeing. (As an aside, the framework docs specifically ban acquiring a lock in a constrained region, as encouraged in that article.)
I can change the code to avoid the lock indirection, but aren't using guards the recommended practice in general?
Since the using statement is not abort-safe, you could try replacing it with the abort-safe workaround suggested in the linked article. Something like this:
public void WithReadLock(Action action)
{
var lockAcquired = false;
try
{
try { }
finally
{
_Lock.EnterReadLock();
lockAcquired = true;
}
action();
}
finally
{
if (lockAcquired) _Lock.ExitReadLock();
}
}
Usage:
var locker = new SimpleReaderWriterLock();
locker.WithReadLock(() =>
{
// protected code
});
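The same abort-safe shape carries over to the write lock. A minimal sketch of a matching WithWriteLock, assuming the same _Lock field as in the original wrapper class (the method name is mine, mirroring WithReadLock):
public void WithWriteLock(Action action)
{
    var lockAcquired = false;
    try
    {
        // the empty try with the acquire in the finally keeps a thread abort
        // from separating "lock taken" from "flag set"
        try { }
        finally
        {
            _Lock.EnterWriteLock();
            lockAcquired = true;
        }
        action();
    }
    finally
    {
        if (lockAcquired) _Lock.ExitWriteLock();
    }
}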

Guard object in C#

In C++, it's fairly easy to write a Guard class that takes a reference to a variable (usually a bool); when the guard object exits scope and is destructed, the destructor resets the variable to its original value.
void someFunction() {
if(!reentryGuard) {
BoolGuard guard(&reentryGuard, true);
// do some stuff that might cause reentry of this function
// this section is both early-exit and exception proof, with regards to restoring
// the guard variable to its original state
}
}
I'm looking for a graceful way to do this in C# using the disposal pattern (or maybe some other mechanism?) I'm thinking that passing a delegate to call might work, but seems a bit more error-prone than the guard above. Suggestions welcome!
Something like:
void someFunction() {
if(!reentryGuard) {
using(var guard = new BoolGuard(ref reentryGuard, true)) {
// do some stuff that might cause reentry of this function
// this section is both early-exit and exception proof, with regards to restoring
// the guard variable to its original state
}
}
}
With the understanding that the above code won't work.
You are correct…without unsafe code, you can't save the address of a by-ref parameter. But, depending on how much you can change the overall design, you can create a "guardable" type, such that it's a reference type containing the value to actually guard.
For example:
class Program
{
class Guardable<T>
{
public T Value { get; private set; }
private sealed class GuardHolder<TGuardable> : IDisposable where TGuardable : Guardable<T>
{
private readonly TGuardable _guardable;
private readonly T _originalValue;
public GuardHolder(TGuardable guardable)
{
_guardable = guardable;
_originalValue = guardable.Value;
}
public void Dispose()
{
_guardable.Value = _originalValue;
}
}
public Guardable(T value)
{
Value = value;
}
public IDisposable Guard(T newValue)
{
GuardHolder<Guardable<T>> guard = new GuardHolder<Guardable<T>>(this);
Value = newValue;
return guard;
}
}
static void Main(string[] args)
{
Guardable<int> guardable = new Guardable<int>(5);
using (var guard = guardable.Guard(10))
{
Console.WriteLine(guardable.Value);
}
Console.WriteLine(guardable.Value);
}
}
Here's a functional (as in lambda-based) way to do it. A plus is that there's no need to use a using:
(note: This is not thread-safe. If you are looking to keep different threads from running the same code simultaneously, look at the lock statement, the monitor, and the mutex)
// usage
GuardedOperation TheGuard = new GuardedOperation(); // instance variable
public void SomeOperationToGuard()
{
this.TheGuard.Execute(() => TheCodeToExecuteGuarded());
}
// implementation
public class GuardedOperation
{
public bool Signalled { get; private set; }
public bool Execute(Action guardedAction)
{
if (this.Signalled)
return false;
this.Signalled = true;
try
{
guardedAction();
}
finally
{
this.Signalled = false;
}
return true;
}
}
EDIT
Here is how you could use the guard with parameters:
public void SomeOperationToGuard(int aParam, SomeType anotherParam)
{
// you can pass the params to the work method using closure
this.TheGuard.Execute(() => TheMethodThatDoesTheWork(aParam, anotherParam));
}
private void TheMethodThatDoesTheWork(int aParam, SomeType anotherParam) {}
You could also introduce overloads of the Execute method that accept a few different variants of the Action delegate, like Action<T> and Action<T1, T2>
If you need return values, you could introduce overloads of Execute that accept Func<T>, as sketched below.
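For example, a rough sketch of what such overloads could look like on the GuardedOperation class above (these members are suggestions, not part of the original code):
// Parameterized action: the argument is forwarded via a closure to the existing Execute.
public bool Execute<T>(Action<T> guardedAction, T arg)
{
    return this.Execute(() => guardedAction(arg));
}

// Func<T> overload: returns false (and the default value) if the guard was already signalled.
public bool Execute<T>(Func<T> guardedFunc, out T result)
{
    T captured = default(T);
    bool ran = this.Execute(() => { captured = guardedFunc(); });
    result = captured;
    return ran;
}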
Sounds like the sort of thing you'd have to implement yourself - there are no such mechanisms built into C# or the .NET Framework, though I did locate a deprecated Guard class on MSDN.
This sort of functionality would likely need to use a using statement to operate without passing around an Action block, which as you said could get messy. Note that you can only use using against an IDisposable object, which will then be disposed - the perfect trigger for resetting the value of the object in question.
You can derive your object from the IDisposable interface and implement it.
In the specific case you are presenting here, Dispose will be called as soon as you leave the using scope.
Example:
public class BoolGuard : IDisposable
{
....
...
public void Dispose()
{
// Dispose implementation
}
}

Dispose and asynchrony [duplicate]

Let's say I have a class that implements the IDisposable interface. Something like this:
MyClass uses some unmanaged resources, hence the Dispose() method from IDisposable releases those resources. MyClass should be used like this:
using ( MyClass myClass = new MyClass() ) {
myClass.DoSomething();
}
Now, I want to implement a method that calls DoSomething() asynchronously. I add a new method to MyClass:
Now, from the client side, MyClass should be used like this:
using ( MyClass myClass = new MyClass() ) {
myClass.AsyncDoSomething();
}
However, if I don't do anything else, this could fail as the object myClass might be disposed before DoSomething() is called (and throw an unexpected ObjectDisposedException). So, the call to the Dispose() method (either implicit or explicit) should be delayed until the asynchronous call to DoSomething() is done.
I think the code in the Dispose() method should be executed in an asynchronous way, and only once all asynchronous calls are resolved. I'd like to know what the best way to accomplish this would be.
Thanks.
NOTE: For the sake of simplicity, I haven't entered in the details of how Dispose() method is implemented. In real life I usually follow the Dispose pattern.
UPDATE: Thank you so much for your responses. I appreciate your effort. As chakrit has commented, I need to be able to make multiple calls to the async DoSomething. Ideally, something like this should work fine:
using ( MyClass myClass = new MyClass() ) {
myClass.AsyncDoSomething();
myClass.AsyncDoSomething();
}
I'll study the counting semaphore; it seems to be what I'm looking for. It could also be a design problem. If I find it convenient, I will share with you some bits of the real case and what MyClass really does.
It looks like you're using the event-based async pattern (see here for more info about .NET async patterns) so what you'd typically have is an event on the class that fires when the async operation is completed named DoSomethingCompleted (note that AsyncDoSomething should really be called DoSomethingAsync to follow the pattern correctly). With this event exposed you could write:
var myClass = new MyClass();
myClass.DoSomethingCompleted += (sender, e) => myClass.Dispose();
myClass.DoSomethingAsync();
The other alternative is to use the IAsyncResult pattern, where you can pass a delegate that calls the dispose method to the AsyncCallback parameter (more info on this pattern is in the page above too). In this case you'd have BeginDoSomething and EndDoSomething methods instead of DoSomethingAsync, and would call it something like...
var myClass = new MyClass();
myClass.BeginDoSomething(
asyncResult => {
using (myClass)
{
myClass.EndDoSomething(asyncResult);
}
},
null);
But whichever way you do it, you need a way for the caller to be notified that the async operation has completed so it can dispose of the object at the correct time.
Since C# 8.0 you can use IAsyncDisposable.
using System.Threading.Tasks;
public class ExampleAsyncDisposable : IAsyncDisposable
{
public async ValueTask DisposeAsync()
{
// await DisposeAllTheThingsAsync();
}
}
Here is the reference to the official Microsoft documentation.
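A brief usage sketch (the ConsumeAsync method is just an illustration): IAsyncDisposable pairs with the await using statement, which awaits DisposeAsync when the scope exits.
using System.Threading.Tasks;

public static class AsyncDisposableUsage
{
    public static async Task ConsumeAsync()
    {
        await using var example = new ExampleAsyncDisposable();
        // do async work here; DisposeAsync is awaited when the scope exits,
        // even if an exception is thrown
    }
}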
Async methods usually have a callback allowing you to do some action upon completion. If this is your case, it would be something like this:
// The async method taks an on-completed callback delegate
myClass.AsyncDoSomething(delegate { myClass.Dispose(); });
Another way around this is an async wrapper:
ThreadPool.QueueUserWorkItem(delegate
{
using(myClass)
{
// The class doesn't know about async operations, a helper method does that
myClass.DoSomething();
}
});
I consider it unfortunate that Microsoft didn't require, as part of the IDisposable contract, that implementations allow Dispose to be called from any threading context, since there's no sane way the creation of an object can force the continued existence of the threading context in which it was created. It's possible to design code so that the thread which creates an object will somehow watch for the object becoming obsolete and Dispose it at its convenience, and so that when the thread is no longer needed for anything else it will stick around until all appropriate objects have been disposed, but I don't think there's a standard mechanism that doesn't require special behavior on the part of the thread that created the objects.
Your best bet is probably to have all the objects of interest created within a common thread (perhaps the UI thread), try to guarantee that the thread will stay around for the lifetime of the objects of interest, and use something like Control.BeginInvoke to request the objects' disposal. Provided that neither object creation nor cleanup will block for any length of time, that may be a good approach, but if either operation could block a different approach may be needed [perhaps open up a hidden dummy form with its own thread, so one can use Control.BeginInvoke there].
Alternatively, if you have control over the IDisposable implementations, design them so that they can safely be fired asynchronously. In many cases, that will "just work" provided nobody is trying to use the item when it is disposed, but that's hardly a given. In particular, with many types of IDisposable, there's a real danger that multiple object instances might both manipulate a common outside resource [e.g. an object may hold a List<> of created instances, add instances to that list when they are constructed, and remove instances on Dispose; if the list operations are not synchronized, an asynchronous Dispose could corrupt the list even if the object being disposed is not otherwise in use].
BTW, a useful pattern is for objects to allow asynchronous dispose while they are in use, with the expectation that such disposal will cause any operations in progress to throw an exception at the first convenient opportunity. Things like sockets work this way. It may not be possible for a read operation to exit early without leaving its socket in a useless state, but if the socket's never going to be used anyway, there's no point in the read continuing to wait for data once another thread has determined that it should give up. IMHO, that's how all IDisposable objects should endeavor to behave, but I know of no document calling for such a general pattern.
I wouldn't alter the code to allow for async disposes. Instead I would make sure that when the call to AsyncDoSomething is made, it has a copy of all the data it needs to execute. That method should be responsible for cleaning up all of its resources.
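A minimal sketch of that idea (the class and member names here are hypothetical, not from the question): the async call copies the state it needs up front and owns its own resource cleanup, so the caller never has to coordinate Dispose with pending work.
using System.IO;
using System.Text;
using System.Threading;

public class SelfContainedWorker
{
    private string _pathToWrite = "output.txt"; // assumed per-instance state

    public void AsyncDoSomething(string message)
    {
        var path = _pathToWrite; // copy the state the background job needs *now*
        ThreadPool.QueueUserWorkItem(_ =>
        {
            // the job acquires and disposes its own resources
            using (var writer = new StreamWriter(path, true, Encoding.UTF8))
            {
                writer.WriteLine(message);
            }
        });
    }
}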
You could add a callback mechanism and pass a cleanup function as a callback.
var x = new MyClass();
Action cleanup = () => x.Dispose();
x.DoSomethingAsync(/*and then*/cleanup);
but this would pose a problem if you want to run multiple async calls off the same object instance.
One way would be to implement a simple counting semaphore with the Semaphore class to count the number of running async jobs.
Add the counter to MyClass, increment it on every AsyncWhatever call, and decrement it when the call exits. When the count is 0, the class is ready to be disposed.
var x = new MyClass();
x.DoSomethingAsync();
x.DoSomethingAsync2();
while (x.RunningJobsCount > 0)
Thread.Sleep(500);
x.Dispose();
But I doubt that would be the ideal way. I smell a design problem. Maybe a rethink of the MyClass design could avoid this?
Could you share some bits of the MyClass implementation? What is it supposed to do?
Here's a more modern spin on this old question.
The real objective is to track the async Tasks and wait until they finish...
public class MyExample : IDisposable
{
private List<Task> tasks = new List<Task>();
public async Task DoSomething()
{
// Track your async Tasks
tasks.Add(DoSomethingElseAsync());
tasks.Add(DoSomethingElseAsync());
tasks.Add(DoSomethingElseAsync());
}
public async Task DoSomethingElseAsync()
{
// TODO: something else
}
public void Dispose()
{
// Block until Tasks finish
Task.WhenAll(tasks).GetAwaiter().GetResult();
// NOTE: C# allows DisposeAsync()
// Use non-blocking "await Task.WhenAll(tasks)"
}
}
Consider turning it into a base class for re-usability.
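A hedged sketch of what such a base class might look like (the name and members are assumptions; the blocking wait mirrors the Dispose shown above):
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Reusable base: derived classes register their pending tasks, and Dispose waits for them.
public abstract class TaskTrackingDisposable : IDisposable
{
    private readonly List<Task> _pending = new List<Task>();
    private readonly object _gate = new object();

    protected void Track(Task task)
    {
        lock (_gate) { _pending.Add(task); }
    }

    public void Dispose()
    {
        Task[] snapshot;
        lock (_gate) { snapshot = _pending.ToArray(); }
        // block until all tracked work completes (an IAsyncDisposable variant could await instead)
        Task.WhenAll(snapshot).GetAwaiter().GetResult();
    }
}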
And sometimes I use a similar pattern for static methods...
public static async Task MyMethod()
{
List<Task> tasks = new List<Task>();
// Track your async Tasks
tasks.Add(DoSomethingElseAsync());
tasks.Add(DoSomethingElseAsync());
tasks.Add(DoSomethingElseAsync());
// Wait for Tasks to complete
await Task.WhenAll(tasks);
}
So, my idea is to keep track of how many AsyncDoSomething() calls are pending completion, and only dispose when this count reaches zero. My initial approach is:
public class MyClass : IDisposable {
private delegate void AsyncDoSomethingCaller();
private delegate void AsyncDoDisposeCaller();
private int pendingTasks = 0;
public void DoSomething() {
// Do whatever.
}
public void AsyncDoSomething() {
pendingTasks++;
AsyncDoSomethingCaller caller = new AsyncDoSomethingCaller( DoSomething );
caller.BeginInvoke( new AsyncCallback( EndDoSomethingCallback ), caller);
}
public void Dispose() {
AsyncDoDisposeCaller caller = new AsyncDoDisposeCaller( DoDispose );
caller.BeginInvoke( new AsyncCallback( EndDoDisposeCallback ), caller);
}
private void DoDispose() {
WaitForPendingTasks();
// Finally, dispose whatever managed and unmanaged resources.
}
private void WaitForPendingTasks() {
while ( true ) {
// Check if there is a pending task.
if ( pendingTasks == 0 ) {
return;
}
// Allow other threads to execute.
Thread.Sleep( 0 );
}
}
private void EndDoSomethingCallback( IAsyncResult ar ) {
AsyncDoSomethingCaller caller = (AsyncDoSomethingCaller) ar.AsyncState;
caller.EndInvoke( ar );
pendingTasks--;
}
private void EndDoDisposeCallback( IAsyncResult ar ) {
AsyncDoDisposeCaller caller = (AsyncDoDisposeCaller) ar.AsyncState;
caller.EndInvoke( ar );
}
}
Some issues may occur if two or more threads try to read / write the pendingTasks variable concurrently, so the lock keyword should be used to prevent race conditions:
public class MyClass : IDisposable {
private delegate void AsyncDoSomethingCaller();
private delegate void AsyncDoDisposeCaller();
private int pendingTasks = 0;
private readonly object lockObj = new object();
public void DoSomething() {
// Do whatever.
}
public void AsyncDoSomething() {
lock ( lockObj ) {
pendingTasks++;
AsyncDoSomethingCaller caller = new AsyncDoSomethingCaller( DoSomething );
caller.BeginInvoke( new AsyncCallback( EndDoSomethingCallback ), caller);
}
}
public void Dispose() {
AsyncDoDisposeCaller caller = new AsyncDoDisposeCaller( DoDispose );
caller.BeginInvoke( new AsyncCallback( EndDoDisposeCallback ), caller);
}
private void DoDispose() {
WaitForPendingTasks();
// Finally, dispose whatever managed and unmanaged resources.
}
private void WaitForPendingTasks() {
while ( true ) {
// Check if there is a pending task.
lock ( lockObj ) {
if ( pendingTasks == 0 ) {
return;
}
}
// Allow other threads to execute.
Thread.Sleep( 0 );
}
}
private void EndDoSomethingCallback( IAsyncResult ar ) {
lock ( lockObj ) {
AsyncDoSomethingCaller caller = (AsyncDoSomethingCaller) ar.AsyncState;
caller.EndInvoke( ar );
pendingTasks--;
}
}
private void EndDoDisposeCallback( IAsyncResult ar ) {
AsyncDoDisposeCaller caller = (AsyncDoDisposeCaller) ar.AsyncState;
caller.EndInvoke( ar );
}
}
I see a problem with this approach. As the release of resources is done asynchronously, something like this might work:
MyClass myClass;
using ( myClass = new MyClass() ) {
myClass.AsyncDoSomething();
}
myClass.DoSomething();
whereas the expected behavior should be to throw an ObjectDisposedException when DoSomething() is called outside the using clause. But I don't find this bad enough to rethink this solution.
I've had to just go old-school. No, you can't use the simplified "using" block. But a using block is simply syntactic sugar for a semi-complex try/catch/finally block. Build your dispose call as you would any other method call, then put it in a finally block.
public async Task<string> DoSomeStuffAsync()
{
// used to be a simple:
// using(var client = new SomeClientObject())
// {
// string response = await client.OtherAsyncMethod();
// return response;
// }
//
// Since I can't use a USING block here, we have to go old-school
// to catch the async disposable.
var client = new SomeClientObject();
try
{
string response = await client.OtherAsyncMethod();
return response;
}
finally
{
await client.DisposeAsync();
}
}
It's ugly, but it is very effective, and much simpler than many of the other suggestions I've seen.

Does C# have a "ThreadLocal" analog (for data members) to the "ThreadStatic" attribute?

I've found the "ThreadStatic" attribute to be extremely useful recently, but it now makes me want a "ThreadLocal" type attribute that lets me have non-static data members on a per-thread basis.
Now I'm aware that this would have some non-trivial implications, but:
Does such a thing already exist built into C#/.NET? Or, since it appears so far that the answer is no (for .NET < 4.0), is there a commonly used implementation out there?
I can think of a reasonable way to implement it myself, but I would rather use something that already exists if it's available.
Straw Man example that would implement what I'm looking for if it doesn't already exist:
class Foo
{
[ThreadStatic]
static Dictionary<Object,int> threadLocalValues = new Dictionary<Object,int>();
int defaultValue = 0;
int ThreadLocalMember
{
get
{
int value = defaultValue;
if( ! threadLocalValues.TryGetValue(this, out value) )
{
threadLocalValues[this] = value;
}
return value;
}
set { threadLocalValues[this] = value; }
}
}
Please forgive any C# ignorance. I'm a C++ developer who has only recently been getting into the more interesting features of C# and .NET.
I'm limited to .NET 3.0 and maybe 3.5 (the project has/will soon move to 3.5).
The specific use case is callback lists that are thread-specific (using the imaginary [ThreadLocal] attribute), a la:
class NonSingletonSharedThing
{
[ThreadLocal] List<Callback> callbacks;
public void ThreadLocalRegisterCallback( Callback somecallback )
{
callbacks.Add(somecallback);
}
public void ThreadLocalDoCallbacks()
{
foreach( var callback in callbacks )
callback.Invoke();
}
}
Enter .NET 4.0!
If you're stuck in 3.5 (or earlier), there are some functions you should look at, like AllocateDataSlot which should do what you want.
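For the .NET 4.0 route, System.Threading.ThreadLocal<T> does exactly this; a minimal sketch of the straw-man Foo class from the question rewritten with it:
using System.Threading;

class Foo
{
    // each thread that touches ThreadLocalMember gets its own value, starting at 0
    private readonly ThreadLocal<int> _threadLocalMember = new ThreadLocal<int>(() => 0);

    public int ThreadLocalMember
    {
        get { return _threadLocalMember.Value; }
        set { _threadLocalMember.Value = value; }
    }
}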
You should think about this twice. You are essentially creating a memory leak: every object created by the thread stays referenced and can't be garbage collected until the thread ends.
If you're looking to store unique data on a per-thread basis, you could use Thread.SetData. Be sure to read up on the pros and cons (http://msdn.microsoft.com/en-us/library/6sby1byh.aspx), as this has performance implications.
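A minimal sketch of that approach (the PerThreadCounter class is just an illustration): a data slot is allocated once per instance, the value stored through it is per-thread, and GetData returns null on a thread that has never called SetData.
using System;
using System.Threading;

class PerThreadCounter
{
    // one slot per instance; what each thread stores in it is private to that thread
    private readonly LocalDataStoreSlot _slot = Thread.AllocateDataSlot();

    public int Value
    {
        get
        {
            object stored = Thread.GetData(_slot);
            return stored == null ? 0 : (int)stored;
        }
        set { Thread.SetData(_slot, value); }
    }
}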
Consider:
Rather than trying to give each member variable in an object a thread-specific value, give each thread its own object instance: pass the object to the ThreadStart as state, or make the thread's entry method a member of the object that the thread will "own", and create a new instance for each thread that you spawn.
Edit
(in response to Catskul's remark)
Here's an example of encapsulating the struct:
public class TheStructWorkerClass
{
private StructData TheStruct;
public TheStructWorkerClass(StructData yourStruct)
{
this.TheStruct = yourStruct;
}
public void ExecuteAsync()
{
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorkerMethod);
}
private void TheWorkerMethod(object state)
{
// your processing logic here
// you can access your structure as this.TheStruct;
// only this thread has access to the struct (as long as you don't pass the struct
// to another worker class)
}
}
// now the code that launches the async process does this:
var worker = new TheStructWorkerClass(yourStruct);
worker.ExecuteAsync();
Now here's option 2 (pass the struct as state)
{
// (from somewhere in your existing code
System.Threading.ThreadPool.QueueUserWorkItem(this.TheWorker, myStruct);
}
private void TheWorker(object state)
{
StructData yourStruct = (StructData)state;
// now do stuff with your struct
// works fine as long as you never pass the same instance of your struct to 2 different threads.
}
I ended up implementing and testing a version of what I had originally suggested:
public class ThreadLocal<T>
{
[ThreadStatic] private static Dictionary<object, T> _lookupTable;
private Dictionary<object, T> LookupTable
{
get
{
if ( _lookupTable == null)
_lookupTable = new Dictionary<object, T>();
return _lookupTable;
}
}
private object key = new object(); //lazy hash key creation handles replacement
private T originalValue;
public ThreadLocal( T value )
{
originalValue = value;
}
~ThreadLocal()
{
LookupTable.Remove(key);
}
public void Set( T value)
{
LookupTable[key] = value;
}
public T Get()
{
T returnValue = default(T);
if (!LookupTable.TryGetValue(key, out returnValue))
Set(originalValue);
return returnValue;
}
}
Although I am still not sure about when your use case would make sense (see my comment on the question itself), I would like to contribute a working example that is in my opinion more readable than thread-local storage (whether static or instance). The example is using .NET 3.5:
using System;
using System.Collections.Generic;
using System.Text;
using System.Threading;
using System.Linq;
namespace SimulatedThreadLocal
{
public sealed class Notifier
{
public void Register(Func<string> callback)
{
var id = Thread.CurrentThread.ManagedThreadId;
lock (this._callbacks)
{
List<Func<string>> list;
if (!this._callbacks.TryGetValue(id, out list))
{
this._callbacks[id] = list = new List<Func<string>>();
}
list.Add(callback);
}
}
public void Execute()
{
var id = Thread.CurrentThread.ManagedThreadId;
IEnumerable<Func<string>> threadCallbacks;
string status;
lock (this._callbacks)
{
status = string.Format("Notifier has callbacks from {0} threads, total {1} callbacks{2}Executing on thread {3}",
this._callbacks.Count,
this._callbacks.SelectMany(d => d.Value).Count(),
Environment.NewLine,
Thread.CurrentThread.ManagedThreadId);
threadCallbacks = this._callbacks[id]; // we can use the original collection, as only this thread can add to it and we're not going to be adding right now
}
var b = new StringBuilder();
foreach (var callback in threadCallbacks)
{
b.AppendLine(callback());
}
Console.ForegroundColor = ConsoleColor.DarkYellow;
Console.WriteLine(status);
Console.ForegroundColor = ConsoleColor.Green;
Console.WriteLine(b.ToString());
}
private readonly Dictionary<int, List<Func<string>>> _callbacks = new Dictionary<int, List<Func<string>>>();
}
public static class Program
{
public static void Main(string[] args)
{
try
{
var notifier = new Notifier();
var syncMainThread = new ManualResetEvent(false);
var syncWorkerThread = new ManualResetEvent(false);
ThreadPool.QueueUserWorkItem(delegate // will create closure to see notifier and sync* events
{
notifier.Register(() => string.Format("Worker thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
notifier.Register(() => string.Format("Worker thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.Set();
syncWorkerThread.WaitOne(); // wait for main thread to execute notifications in its context
syncWorkerThread.Reset();
notifier.Execute();
syncMainThread.Set();
});
notifier.Register(() => string.Format("Main thread callback A (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
syncMainThread.WaitOne(); // wait for worker thread to add its notification
syncMainThread.Reset();
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
notifier.Register(() => string.Format("Main thread callback B (thread ID = {0})", Thread.CurrentThread.ManagedThreadId));
notifier.Execute();
syncWorkerThread.Set();
syncMainThread.WaitOne(); // wait for worker thread to execute notifications in its context
syncMainThread.Reset();
}
finally
{
Console.ResetColor();
}
}
}
}
When you compile and run the above program, you should get output like this:
[Screenshot of the console output: http://img695.imageshack.us/img695/991/threadlocal.png]
Based on your use-case I assume this is what you're trying to achieve. The example first adds two callbacks from two different contexts, main and worker threads. Then the example runs notification first from main and then from worker threads. The callbacks that are executed are effectively filtered by current thread ID. Just to show things are working as expected, the example adds two more callbacks (for a total of 4) and again runs the notification from the context of main and worker threads.
Note that Notifier class is a regular instance that can have state, multiple instances, etc (again, as per your question's use-case). No static or thread-static or thread-local is used by the example.
I would appreciate if you could look at the code and let me know if I misunderstood what you're trying to achieve or if a technique like this would meet your needs.
I'm not sure how you're spawning your threads in the first place, but there are ways to give each thread its own thread-local storage, without using hackish workarounds like the code you posted in your question.
public void SpawnSomeThreads(int threads)
{
for (int i = 0; i < threads; i++)
{
Thread t = new Thread(WorkerThread);
WorkerThreadContext context = new WorkerThreadContext
{
// whatever data the thread needs passed into it
};
t.Start(context);
}
}
private class WorkerThreadContext
{
public string Data { get; set; }
public int OtherData { get; set; }
}
private void WorkerThread(object parameter)
{
WorkerThreadContext context = (WorkerThreadContext) parameter;
// do work here
}
This obviously ignores waiting on the threads to finish their work and making sure accesses to any shared state are thread-safe across all the worker threads, but you get the idea.
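If you do need to wait for the workers, a small extension of the sketch above is to keep the Thread objects and Join them; this is a hypothetical addition reusing the WorkerThread method and WorkerThreadContext class shown above:
public void SpawnThreadsAndWait(int threads)
{
    var workers = new List<Thread>();
    for (int i = 0; i < threads; i++)
    {
        Thread t = new Thread(WorkerThread);
        workers.Add(t);
        t.Start(new WorkerThreadContext { Data = "work item " + i, OtherData = i });
    }

    // block until every worker has finished its work
    foreach (Thread t in workers)
        t.Join();
}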
Whilst the posted solution looks elegant, it leaks objects. The finalizer - LookupTable.Remove(key) - runs only in the context of the GC finalizer thread, so it is likely only creating more garbage by creating another lookup table.
You need to remove the object from the lookup table of every thread that has accessed the ThreadLocal. The only elegant way I can think of to solve this is via a weakly keyed dictionary - a data structure which is strangely lacking from C#.
